Developing a Science Process Skills Test for Secondary Students: Validity and Reliability Study
ERIC Educational Resources Information Center
Feyzioglu, Burak; Demirdag, Baris; Akyildiz, Murat; Altun, Eralp
2012-01-01
Science process skills are claimed to enable individuals to improve their own life visions and provide scientific literacy as a standard of understanding about the nature of science. The main purpose of this study was to develop a valid, reliable, and practical test for measuring Science Process Skills (SPS) in secondary…
The Reliability and Validity of a Performance Task for Evaluating Science Process Skills.
ERIC Educational Resources Information Center
Adams, Cheryll M.; Callahan, Carolyn M.
1995-01-01
The Diet Cola Test was designed as a process assessment of science aptitude in intermediate grade students. Investigations of the instrument's reliability and validity indicated that data did not support use of the instrument for identifying individual students' aptitude. However, results suggested the test's appropriateness for evaluating…
The development of a science process assessment for fourth-grade students
NASA Astrophysics Data System (ADS)
Smith, Kathleen A.; Welliver, Paul W.
In this study, a multiple-choice test entitled the Science Process Assessment was developed to measure the science process skills of students in grade four. Based on the Recommended Science Competency Continuum for Grades K to 6 for Pennsylvania Schools, this instrument measured the skills of (1) observing, (2) classifying, (3) inferring, (4) predicting, (5) measuring, (6) communicating, (7) using space/time relations, (8) defining operationally, (9) formulating hypotheses, (10) experimenting, (11) recognizing variables, (12) interpreting data, and (13) formulating models. To prepare the instrument, classroom teachers and science educators were invited to participate in two science education workshops designed to develop an item bank of test questions applicable to measuring process skill learning. Participants formed writing teams and generated 65 test items representing the 13 process skills. After a comprehensive group critique of each item, 61 items were identified for inclusion into the Science Process Assessment item bank. To establish content validity, the item bank was submitted to a select panel of science educators for the purpose of judging item acceptability. This analysis yielded 55 acceptable test items and produced the Science Process Assessment, Pilot 1. Pilot 1 was administered to 184 fourth-grade students. Students were given a copy of the test booklet; teachers read each test aloud to the students. Upon completion of this first administration, data from the item analysis yielded a reliability coefficient of 0.73. Subsequently, 40 test items were identified for the Science Process Assessment, Pilot 2. Using the test-retest method, the Science Process Assessment, Pilot 2 (Test 1 and Test 2) was administered to 113 fourth-grade students. Reliability coefficients of 0.80 and 0.82, respectively, were ascertained. The correlation between Test 1 and Test 2 was 0.77. The results of this study indicate that (1) the Science Process Assessment, Pilot 2, is a valid and reliable instrument applicable to measuring the science process skills of students in grade four, (2) using educational workshops as a means of developing item banks of test questions is viable and productive in the test development process, and (3) involving classroom teachers and science educators in the test development process is educationally efficient and effective.
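The two reliability figures reported for this instrument follow standard computations: an internal-consistency coefficient for Pilot 1 and a test-retest correlation for Pilot 2. A minimal sketch of the test-retest step, using hypothetical score data in place of the actual student records:

```python
# Test-retest reliability as the Pearson correlation between two
# administrations of the same 40-item test. Scores are simulated stand-ins
# for the 113 fourth-graders; the study reports r = 0.77.
import numpy as np

rng = np.random.default_rng(0)
true_ability = rng.normal(0.6, 0.15, size=113)   # hypothetical latent skill
test1 = np.clip(true_ability + rng.normal(0, 0.05, 113), 0, 1) * 40
test2 = np.clip(true_ability + rng.normal(0, 0.05, 113), 0, 1) * 40

r = np.corrcoef(test1, test2)[0, 1]
print(f"test-retest r = {r:.2f}")
```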
Boston-Fleischhauer, Carol
2008-01-01
The design and implementation of efficient, effective, and safe processes are never-ending challenges in healthcare. Less than optimal performance levels and rising concerns about patient safety suggest that traditional process design methods are insufficient to meet design requirements. In this 2-part series, the author presents human factors engineering and reliability science as important knowledge to enhance existing operational and clinical process design methods in healthcare. An examination of these theories, application approaches, and examples is presented.
Valid and Reliable Science Content Assessments for Science Teachers
NASA Astrophysics Data System (ADS)
Tretter, Thomas R.; Brown, Sherri L.; Bush, William S.; Saderholm, Jon C.; Holmes, Vicki-Lynn
2013-03-01
Science teachers' content knowledge is an important influence on student learning, highlighting an ongoing need for programs, and assessments of those programs, designed to support teacher learning of science. Valid and reliable assessments of teacher science knowledge are needed for direct measurement of this crucial variable. This paper describes multiple sources of validity and reliability (Cronbach's alpha greater than 0.8) evidence for physical, life, and earth/space science assessments—part of the Diagnostic Teacher Assessments of Mathematics and Science (DTAMS) project. Validity was strengthened by systematic synthesis of relevant documents, extensive use of external reviewers, and field tests with 900 teachers during the assessment development process. Subsequent results from 4,400 teachers, analyzed with Rasch IRT modeling techniques, offer construct and concurrent validity evidence.
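Cronbach's alpha, the internal-consistency statistic cited above, is computed directly from an item-score matrix. A minimal sketch with simulated binary responses (the real analysis used the DTAMS teacher data):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# Responses are simulated 0/1 item scores for 500 hypothetical test-takers.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=(500, 1))
# Correlated items: higher ability raises the chance of a correct answer.
logits = ability + rng.normal(0, 0.5, size=(500, 20))
responses = (rng.random((500, 20)) < 1 / (1 + np.exp(-logits))).astype(float)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```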
Boston-Fleischhauer, Carol
2008-02-01
The demand to redesign healthcare processes that achieve efficient, effective, and safe results is never-ending. Part 1 of this 2-part series introduced human factors engineering and reliability science as important knowledge to enhance existing operational and clinical process design methods in healthcare organizations. In part 2, the author applies this knowledge to one of the most common operational processes in healthcare: clinical documentation. Specific implementation strategies and anticipated results are discussed, along with organizational challenges and recommended executive responses.
Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.
2015-08-19
Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs and second organizing data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. Fifth, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.
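Steps three and four lend themselves to a compact illustration: fit a model for every combination of candidate inputs and rank the fits with a comparison metric. The sketch below uses synthetic data, hypothetical input names, and a Gaussian AIC as the metric; it is not the authors' code:

```python
# Enumerate all input combinations, fit ordinary least squares to each,
# and rank by AIC (lower is better). Data and input names are synthetic.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = {"age": rng.uniform(0, 30, n),
     "usage_hours": rng.uniform(0, 5000, n),
     "humidity_exposure": rng.uniform(0, 1, n)}
# Hypothetical reliability response driven by age and humidity only.
y = 1.0 - 0.01 * X["age"] - 0.2 * X["humidity_exposure"] + rng.normal(0, 0.02, n)

def fit_aic(inputs):
    A = np.column_stack([np.ones(n)] + [X[name] for name in inputs])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = ((y - A @ beta) ** 2).sum()
    return n * np.log(rss / n) + 2 * A.shape[1]   # Gaussian AIC up to a constant

models = [c for r in range(1, 4) for c in combinations(X, r)]
for inputs in sorted(models, key=fit_aic):
    print(f"AIC = {fit_aic(inputs):8.1f}   inputs = {inputs}")
```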
A Scale to Assess Science Activity Videos (SASAV): The Study of Validity and Reliability
ERIC Educational Resources Information Center
Kara, Yilmaz; Bakirci, Hasan
2018-01-01
The purpose of the study was to develop an assessment scale for science activity videos that can be used to identify qualified science activity videos that fulfill the objectives of activity-based science education, help teachers evaluate any science activity video, and decide whether to include it in the science learning process. The subjects…
The Importance of Biology Education
ERIC Educational Resources Information Center
Nurse, Paul
2016-01-01
Understanding how science is done increases trust in science as it can be seen to be built on reliable data, rational argument and repeated testing. If science is taught as just an assemblage of facts without dealing with the process which gave rise to those facts, then why should pupils trust science more than fables or pseudoscience? Everyone…
Using Twitter for Demographic and Social Science Research: Tools for Data Collection and Processing
McCormick, Tyler H.; Lee, Hedwig; Cesare, Nina; Shojaie, Ali; Spiro, Emma S.
2015-01-01
Despite recent and growing interest in using Twitter to examine human behavior and attitudes, there is still significant room for growth regarding the ability to leverage Twitter data for social science research. In particular, gleaning demographic information about Twitter users—a key component of much social science research—remains a challenge. This article develops an accurate and reliable data processing approach for social science researchers interested in using Twitter data to examine behaviors and attitudes, as well as the demographic characteristics of the populations expressing or engaging in them. Using information gathered from Twitter users who state an intention to not vote in the 2012 presidential election, we describe and evaluate a method for processing data to retrieve demographic information reported by users that is not encoded as text (e.g., details of images) and evaluate the reliability of these techniques. We end by assessing the challenges of this data collection strategy and discussing how large-scale social media data may benefit demographic researchers. PMID:29033471
Argonne News Brief: Cutting-Edge Science Makes 3D Printing More Efficient and Reliable
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Argonne National Laboratory researchers are gaining a deeper understanding of the 3D printing process, and as a result, they are helping industries quickly and economically manufacture 3D-printed products that are truly reliable.
The Validation of the Citizen Science Self-Efficacy Scale (CSSES)
ERIC Educational Resources Information Center
Hiller, Suzanne E.
2016-01-01
Citizen science programs provide opportunities for students to help professional scientists while fostering science achievement and motivation. Instruments which measure the effects of this type of program on student motivational beliefs are limited. The purpose of this study was to describe the process of examining the reliability and validity…
Oh, Deborah M; Kim, Joshua M; Garcia, Raymond E; Krilowicz, Beverly L
2005-06-01
There is increasing pressure, both from institutions central to the national scientific mission and from regional and national accrediting agencies, on natural sciences faculty to move beyond course examinations as measures of student performance and to instead develop and use reliable and valid authentic assessment measures for both individual courses and for degree-granting programs. We report here on a capstone course developed by two natural sciences departments, Biological Sciences and Chemistry/Biochemistry, which engages students in an important culminating experience, requiring synthesis of skills and knowledge developed throughout the program while providing the departments with important assessment information for use in program improvement. The student work products produced in the course, a written grant proposal, and an oral summary of the proposal, provide a rich source of data regarding student performance on an authentic assessment task. The validity and reliability of the instruments and the resulting student performance data were demonstrated by collaborative review by content experts and a variety of statistical measures of interrater reliability, including percentage agreement, intraclass correlations, and generalizability coefficients. The high interrater reliability reported when the assessment instruments were used for the first time by a group of external evaluators suggests that the assessment process and instruments reported here will be easily adopted by other natural science faculty.
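The interrater statistics named above reduce to short computations once the ratings are tabulated. A sketch with hypothetical rubric scores; the intraclass correlation shown is a two-way random-effects ICC(2,1), one common variant, not necessarily the exact form the authors used:

```python
# Percentage agreement and ICC(2,1) for hypothetical 1-5 rubric scores
# from 3 raters on 12 student proposals.
import numpy as np

rng = np.random.default_rng(3)
true_quality = rng.integers(1, 6, size=12)
ratings = np.clip(true_quality[:, None] + rng.integers(-1, 2, size=(12, 3)), 1, 5)

# Exact agreement: fraction of subjects on which all raters give one score.
agreement = np.mean([len(set(row)) == 1 for row in ratings])

def icc_2_1(x: np.ndarray) -> float:
    """Two-way random-effects, single-rater ICC (Shrout & Fleiss ICC(2,1))."""
    n, k = x.shape
    m = x.mean()
    msr = k * ((x.mean(axis=1) - m) ** 2).sum() / (n - 1)   # rows (subjects)
    msc = n * ((x.mean(axis=0) - m) ** 2).sum() / (k - 1)   # columns (raters)
    sse = ((x - m) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

print(f"exact agreement = {agreement:.0%}")
print(f"ICC(2,1) = {icc_2_1(ratings.astype(float)):.2f}")
```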
The NASA computer science research program plan
NASA Technical Reports Server (NTRS)
1983-01-01
A taxonomy of computer science is included, and the state of the art of each of the major computer science categories is summarized. A functional breakdown of NASA programs under Aeronautics R and D, space R and T, and institutional support is also included. These areas were assessed against the computer science categories. Concurrent processing, highly reliable computing, and information management are identified.
NASA Astrophysics Data System (ADS)
Haskins, Sandra Sue
The purpose of this study was to quantitatively determine whether the material found in ABC promotes scientific inquiry through the inclusion of science process skills, and to quantitatively determine the type (experimental, comparative, or descriptive) and character (wet-lab, paper and pencil, model, or computer) of laboratory activities. The research design allowed for an examination of the frequency and type of science process skills required of students in 79 laboratory activities sampled from all 12 units, utilizing a modified 33-item laboratory analysis inventory (LAI) (Germann et al., 1996). Interrater reliability for the science process skills was completed on 19 of the laboratory activities with a mean score of 86.1%. Interrater reliability for the type and character of the laboratory, on the same 19 laboratory activities, was completed with mean scores of 79.0% and 96.5%, respectively. It was found that all laboratory activities provide a prelaboratory activity. In addition, the science process skill category of student performance is required most often of students, with the skill of learning techniques or manipulating apparatus occurring 99% of the time. The science process skill category observed the least was student planning and design, occurring only 3% of the time. Students were rarely given the opportunity to practice science process skills such as developing and testing hypotheses through experiments they have designed. Chi-square tests, applied at the .05 level of significance, revealed that there was a significant difference in the type of laboratory activities; comparative laboratory activities appeared more often (59%). In addition, regarding the character of laboratory activities, "wet-lab" activities appeared more often (90%) than any of the others.
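The chi-square test described here asks whether the three laboratory types occur equally often among the 79 activities. A sketch of that computation; only the 59% comparative share comes from the study, so the split of the remaining activities is assumed:

```python
# Goodness-of-fit test against equal expected frequencies for the three
# laboratory types. 47/79 matches the reported 59% comparative share;
# the 20/12 split between experimental and descriptive is hypothetical.
from scipy.stats import chisquare

observed = [47, 20, 12]        # comparative, experimental, descriptive
stat, p = chisquare(observed)  # default null: all categories equally likely
print(f"chi2 = {stat:.2f}, p = {p:.4g}, significant at .05: {p < 0.05}")
```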
NASA Astrophysics Data System (ADS)
Dira-Smolleck, Lori
The purpose of this study was to develop, validate and establish the reliability of an instrument that measures preservice teachers' self-efficacy in regard to the teaching of science as inquiry. The instrument (TSI) is based upon the work of Bandura, Riggs, and Enochs & Riggs (1990). The study used Bandura's theoretical framework in that the instrument uses the self-efficacy construct to explore the beliefs of prospective elementary science teachers with regard to the teaching of science through inquiry, specifically the two dimensions of self-efficacy beliefs defined by Bandura: personal self-efficacy and outcome expectancy. Self-efficacy in regard to the teaching of science as inquiry was measured through the use of a 69-item Likert scale instrument designed by the author of the study. A 13-step plan was designed and followed in the process of developing the instrument. Using the results from Cronbach's alpha and Analysis of Variance, a 69-item instrument was found to achieve the greatest balance across construct validity, reliability, and item balance with the Essential Elements of Classroom Inquiry content matrix. Based on the standardized development processes used and the associated evidence, the TSI appears to be a content and construct valid instrument, with high internal reliability, for use with prospective elementary teachers to assess self-efficacy beliefs in regard to the teaching of science as inquiry. Implications for research, policy and practice are also discussed.
Unreliability as a Threat to Understanding Psychopathology: The Cautionary Tale of Attentional Bias
Rodebaugh, Thomas L.; Scullin, Rachel B.; Langer, Julia K.; Dixon, David J.; Huppert, Jonathan D.; Bernstein, Amit; Zvielli, Ariel; Lenze, Eric J.
2016-01-01
The use of unreliable measures constitutes a threat to our understanding of psychopathology, because advancement of science using both behavioral and biologically-oriented measures can only be certain if such measurements are reliable. Two pillars of NIMH’s portfolio – the Research Domain Criteria (RDoC) initiative for psychopathology and the target engagement initiative in clinical trials – cannot succeed without measures that possess the high reliability necessary for tests involving mediation and selection based on individual differences. We focus on the historical lack of reliability of attentional bias measures as an illustration of how reliability can pose a threat to our understanding. Our own data replicate previous findings of poor reliability for traditionally-used scores, which suggests a serious problem with the ability to test theories regarding attentional bias. This lack of reliability may also suggest problems with the assumption (in both theory and the formula for the scores) that attentional bias is consistent and stable across time. In contrast, measures accounting for attention as a dynamic process in time show good reliability in our data. The field is sorely in need of research reporting findings and reliability for attentional bias scores using multiple methods, including those focusing on dynamic processes over time. We urge researchers to test and report reliability of all measures, considering findings of low reliability not just as a nuisance but as an opportunity to modify and improve upon the underlying theory. Full assessment of reliability of measures will maximize the possibility that RDoC (and psychological science more generally) will succeed. PMID:27322741
Proceedings of the 25th Project Integration Meeting
NASA Technical Reports Server (NTRS)
Phillips, M.
1985-01-01
Topics addressed include: silicon sheet growth and characterization, silicon material, process development, high-efficiency cells, environmental isolation, engineering sciences, and reliability physics.
Can we save large carnivores without losing large carnivore science?
Allen, Benjamin L.; Allen, Lee R.; Andrén, Henrik; Ballard, Guy; Boitani, Luigi; Engeman, Richard M.; Fleming, Peter J.S.; Haswell, Peter M.; Ford, Adam T.; Kowalczyk, Rafał; Mech, L. David; Linnell, John D.C.; Parker, Daniel M.
2017-01-01
Large carnivores are depicted to shape entire ecosystems through top-down processes. Studies describing these processes are often used to support interventionist wildlife management practices, including carnivore reintroduction or lethal control programs. Unfortunately, there is an increasing tendency to ignore, disregard or devalue fundamental principles of the scientific method when communicating the reliability of current evidence for the ecological roles that large carnivores may play, eroding public confidence in large carnivore science and scientists. Here, we discuss six interrelated issues that currently undermine the reliability of the available literature on the ecological roles of large carnivores: (1) the overall paucity of available data, (2) reliability of carnivore population sampling techniques, (3) general disregard for alternative hypotheses to top-down forcing, (4) lack of applied science studies, (5) frequent use of logical fallacies, and (6) generalisation of results from relatively pristine systems to those substantially altered by humans. We first describe how widespread these issues are, and given this, show, for example, that evidence for the roles of wolves (Canis lupus) and dingoes (Canis lupus dingo) in initiating trophic cascades is not as strong as is often claimed. Managers and policy makers should exercise caution when relying on this literature to inform wildlife management decisions. We emphasise the value of manipulative experiments and discuss the role of scientific knowledge in the decision-making process. We hope that the issues we raise here prompt deeper consideration of actual evidence, leading towards an improvement in both the rigour and communication of large carnivore science.
Standardizing an approach to the evaluation of implementation science proposals.
Crable, Erika L; Biancarelli, Dea; Walkey, Allan J; Allen, Caitlin G; Proctor, Enola K; Drainoni, Mari-Lynn
2018-05-29
The fields of implementation and improvement sciences have experienced rapid growth in recent years. However, research that seeks to inform health care change may have difficulty translating core components of implementation and improvement sciences within the traditional paradigms used to evaluate efficacy and effectiveness research. A review of implementation and improvement sciences grant proposals within an academic medical center using a traditional National Institutes of Health framework highlighted the need for tools that could assist investigators and reviewers in describing and evaluating proposed implementation and improvement sciences research. We operationalized existing recommendations for writing implementation science proposals as the ImplemeNtation and Improvement Science Proposals Evaluation CriTeria (INSPECT) scoring system. The resulting system was applied to pilot grants submitted to a call for implementation and improvement science proposals at an academic medical center. We evaluated the reliability of the INSPECT system using Krippendorff's alpha coefficients and explored the utility of the INSPECT system to characterize common deficiencies in implementation research proposals. We scored 30 research proposals using the INSPECT system. Proposals received a median cumulative score of 7 out of a possible score of 30. Across individual elements of INSPECT, proposals scored highest for criteria rating evidence of a care or quality gap. Proposals generally performed poorly on all other criteria. Most proposals received scores of 0 for criteria identifying an evidence-based practice or treatment (50%), conceptual model and theoretical justification (70%), setting's readiness to adopt new services/treatment/programs (54%), implementation strategy/process (67%), and measurement and analysis (70%). Inter-coder reliability testing showed excellent reliability (Krippendorff's alpha coefficient 0.88) for the application of the scoring system overall and demonstrated reliability scores ranging from 0.77 to 0.99 for individual elements. The INSPECT scoring system presents new scoring criteria with a high degree of inter-rater reliability and utility for evaluating the quality of implementation and improvement sciences grant proposals.
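Krippendorff's alpha generalizes pairwise agreement to any number of coders and data types. A minimal sketch for nominal data; the two coders' scores below are invented for illustration, not actual INSPECT ratings:

```python
# Krippendorff's alpha for nominal data: alpha = 1 - D_observed / D_expected,
# computed from a coincidence matrix. Assumes complete data (no missing codes).
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: list of per-unit rating collections, each with >= 2 ratings."""
    o = Counter()                                # coincidence matrix
    for ratings in units:
        m = len(ratings)
        for a, b in permutations(ratings, 2):
            o[(a, b)] += 1 / (m - 1)
    n_c = Counter()                              # marginal value counts
    for (a, _), w in o.items():
        n_c[a] += w
    n = sum(n_c.values())
    d_obs = sum(w for (a, b), w in o.items() if a != b)
    d_exp = sum(n_c[a] * n_c[b] for a, b in permutations(n_c, 2)) / (n - 1)
    return 1 - d_obs / d_exp

coder1 = [3, 0, 1, 2, 0, 3, 1, 0, 2, 3]          # hypothetical element scores
coder2 = [3, 0, 1, 2, 1, 3, 1, 0, 2, 3]
print(f"alpha = {krippendorff_alpha_nominal(list(zip(coder1, coder2))):.2f}")
```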
NASA Astrophysics Data System (ADS)
Dira Smolleck, Lori; Zembal-Saul, Carla; Yoder, Edgar P.
2006-06-01
The purpose of this study was to develop, validate, and establish the reliability of an instrument that measures preservice teachers' self-efficacy in regard to the teaching of science as inquiry. The instrument, Teaching Science as Inquiry (TSI), is based upon the work of Bandura (1977, 1981, 1982, 1986, 1989, 1995, 1997), Riggs (1988), and Enochs and Riggs (1990). Self-efficacy in regard to the teaching of science as inquiry was measured through the use of a 69-item Likert-type scale instrument designed by the author of the study. Based on the standardized development processes used and the associated evidence, the TSI appears to be a content and construct valid instrument with high internal reliability for use with preservice elementary teachers to assess self-efficacy beliefs in regard to the teaching of science as inquiry.
NASA's computer science research program
NASA Technical Reports Server (NTRS)
Larsen, R. L.
1983-01-01
Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.
NASA Astrophysics Data System (ADS)
Shahali, Edy H. M.; Halim, Lilia; Treagust, David F.; Won, Mihye; Chandrasegaran, A. L.
2017-04-01
This study investigated the understanding of science process skills (SPS) of 329 science teachers from 52 primary schools selected by random sampling. The understanding of SPS was measured in terms of conceptual and operational aspects of SPS using an instrument called the Science Process Skills Questionnaire (SPSQ) with a Cronbach's alpha reliability of 0.88. The findings showed that the teachers' conceptual understanding of SPS was much weaker than their practical application of SPS. The teachers' understanding of SPS differed by their teaching qualifications but not so much by their teaching experience. Emphasis needs to be given to both conceptual and operational understanding of SPS during pre-service and in-service teacher education to enable science teachers to use the skills and implement inquiry-based lessons in schools.
ERIC Educational Resources Information Center
Teixeira, Elder Sales; Greca, Ileana Maria; Freire, Olival, Jr.
2012-01-01
This work is a systematic review of studies that investigate teaching experiences applying History and Philosophy of Science (HPS) in physics classrooms, with the aim of obtaining critical and reliable information on this subject. After a careful process of selection and exclusion of studies compiled from a variety of databases, an in-depth review…
Incorporating Primary Scientific Literature in Middle and High School Education.
Fankhauser, Sarah C; Lijek, Rebeccah S
2016-03-01
Primary literature is the most reliable and direct source of scientific information, but most middle school and high school science is taught using secondary and tertiary sources. One reason for this is that primary science articles can be difficult to access and interpret for young students and for their teachers, who may lack exposure to this type of writing. The Journal of Emerging Investigators (JEI) was created to fill this gap and provide primary research articles that can be accessed and read by students and their teachers. JEI is a non-profit, online, open-access, peer-reviewed science journal dedicated to mentoring and publishing the scientific research of middle and high school students. JEI articles provide reliable scientific information that is written by students and therefore at a level that their peers can understand. For student-authors who publish in JEI, the review process and the interaction with scientists provide invaluable insight into the scientific process. Moreover, the resulting repository of free, student-written articles allows teachers to incorporate age-appropriate primary literature into the middle and high school science classroom. JEI articles can be used for teaching specific scientific content or for teaching the process of the scientific method itself. The critical thinking skills that students learn by engaging with the primary literature will be invaluable for the development of a scientifically-literate public.
NASA Astrophysics Data System (ADS)
Irwanto, Rohaeti, Eli; LFX, Endang Widjajanti; Suyanta
2017-05-01
This research aims to develop an integrated assessment instrument and determine its characteristics. The research uses the 4-D model, which includes define, design, develop, and disseminate. The primary product was validated by expert judgment, tested for readability by students, and assessed for feasibility by chemistry teachers. This research involved 246 students of grade XI from four senior high schools in Yogyakarta, Indonesia. Data collection techniques included interview, questionnaire, and test. Data collection instruments included an interview guideline, item validation sheet, users' response questionnaire, instrument readability questionnaire, and essay test. The results show that the integrated assessment instrument has an Aiken validity value of 0.95. Item reliability was 0.99 and person reliability was 0.69. Teachers' response to the integrated assessment instrument was very good. Therefore, the integrated assessment instrument is feasible to apply for measuring students' analytical thinking and science process skills.
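Aiken's V, the content-validity coefficient reported above, aggregates expert item ratings onto a 0-1 scale. A worked sketch with hypothetical ratings from eight experts on a 1-5 scale (the study's actual panel size and rating scale may differ):

```python
# Aiken's V = sum(s_i) / (n * (c - 1)), where s_i = rating_i - lowest category,
# n = number of raters, and c = number of rating categories.
def aiken_v(ratings, lo=1, hi=5):
    n, c = len(ratings), hi - lo + 1
    return sum(r - lo for r in ratings) / (n * (c - 1))

print(f"V = {aiken_v([5, 5, 4, 5, 5, 4, 5, 5]):.2f}")  # -> 0.94 for these ratings
```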
X-Ray Computed Tomography: The First Step in Mars Sample Return Processing
NASA Technical Reports Server (NTRS)
Welzenbach, L. C.; Fries, M. D.; Grady, M. M.; Greenwood, R. C.; McCubbin, F. M.; Zeigler, R. A.; Smith, C. L.; Steele, A.
2017-01-01
The Mars 2020 rover mission will collect and cache samples from the martian surface for possible retrieval and subsequent return to Earth. If the samples are returned, that mission would likely present an opportunity to analyze returned Mars samples within a geologic context on Mars. In addition, it may provide definitive information about the existence of past or present life on Mars. Mars sample return presents unique challenges for the collection, containment, transport, curation and processing of samples [1]. Foremost in the processing of returned samples are the closely paired considerations of life detection and Planetary Protection. In order to achieve Mars Sample Return (MSR) science goals, reliable analyses will depend on overcoming some challenging signal/noise-related issues where sparse martian organic compounds must be reliably analyzed against the contamination background. While reliable analyses will depend on initial clean acquisition and robust documentation of all aspects of developing and managing the cache [2], there needs to be a reliable sample handling and analysis procedure that accounts for a variety of materials which may or may not contain evidence of past or present martian life. A recent report [3] suggests that a defined set of measurements should be made to effectively inform both science and Planetary Protection, when applied in the context of the two competing null hypotheses: 1) that there is no detectable life in the samples; or 2) that there is martian life in the samples. The defined measurements would include a phased approach that would be accepted by the community to preserve the bulk of the material, but provide unambiguous science data that can be used and interpreted by various disciplines. Foremost is the concern that the initial steps would ensure the pristine nature of the samples. Preliminary, non-invasive techniques such as computed X-ray tomography (XCT) have been suggested as the first method to interrogate and characterize the cached samples without altering the materials [1,2]. A recent report [4] indicates that XCT may minimally alter samples for some techniques, and work is needed to quantify these effects, maximizing science return from XCT initial analysis while minimizing effects.
NASA Astrophysics Data System (ADS)
Buxbaum, T. M.; Trainor, S.; Warner, N.; Timm, K.
2015-12-01
Climate change is impacting ecological systems, coastal processes, and environmental disturbance regimes in Alaska, leading to a pressing need to communicate reliable scientific information about climate change, its impacts, and future projections for land and resource management and decision-making. However, little research has been done to dissect and analyze the process of making the results of scientific inquiry directly relevant and usable in resource management. Based within the Science Application division of the US Fish and Wildlife Service, Landscape Conservation Cooperatives (LCCs) are regional conservation science partnerships that provide scientific and technical expertise needed to support conservation planning at landscape scales and promote collaboration in defining shared conservation goals. The five LCCs with jurisdiction in Alaska recently held a training workshop with the goals of advancing staff understanding and skills related to science communication and translation. We report here preliminary results from analysis of workshop discussions and pre- and post-workshop interviews and surveys revealing expectations, assumptions, and mental models regarding science communication and the process of conducting use-inspired science. Generalizable conclusions can assist scientists and boundary organizations bridge knowledge gaps between science and resource management.
Page, Mark; Taylor, Jane; Blenkin, Matt
2011-07-01
Many studies regarding the legal status of forensic science have relied on the U.S. Supreme Court's mandate in Daubert v. Merrell Dow Pharmaceuticals Inc., and its progeny in order to make subsequent recommendations or rebuttals. This paper focuses on a more pragmatic approach to analyzing forensic science's immediate deficiencies by considering a qualitative analysis of actual judicial reasoning where forensic identification evidence has been excluded on reliability grounds since the Daubert precedent. Reliance on general acceptance is becoming insufficient as proof of the admissibility of forensic evidence. The citation of unfounded statistics, error rates and certainties, a failure to document the analytical process or follow standardized procedures, and the existence of observer bias represent some of the concerns that have led to the exclusion or limitation of forensic identification evidence. Analysis of these reasons may serve to refocus forensic practitioners' testimony, resources, and research toward rectifying shortfalls in these areas. © 2011 American Academy of Forensic Sciences.
Beyond Objectivity and Subjectivity: The Intersubjective Foundations of Psychological Science.
Mascolo, Michael F
2016-12-01
The question of whether psychology can properly be regarded as a science has long been debated (Smedslund in Integrative Psychological & Behavioral Science, 50, 185-195, 2016). Science is typically understood as a method for producing reliable knowledge by testing falsifiable claims against objective evidence. Psychological phenomena, however, are traditionally taken to be "subjective" and hidden from view. To the extent that science relies upon objective observation, is a scientific psychology possible? In this paper, I argue that scientific psychology does not so much fail to meet the requirements of objectivity as the concept of objectivity fails as a methodological principle for psychological science. The traditional notion of objectivity relies upon the distinction between a public, observable exterior and a private, subjective interior. There are good reasons, however, to reject this dichotomy. Scholarship suggests that psychological knowledge arises neither from the "inside out" (subjectively) nor from the outside-in (objectively), but instead from intersubjective processes that occur between people. If this is so, then objectivist methodology may do more to obscure than illuminate our understanding of psychological functioning. From this view, we face a dilemma: Do we, in the name of science, cling to an objective epistemology that cuts us off from the richness of psychological activity? Or do we seek to develop a rigorous intersubjective psychology that exploits the processes through which we gain psychological knowledge in the first place? If such a psychology can produce systematic, reliable and useful knowledge, then the question of whether its practices are "scientific" in the traditional sense would become irrelevant.
NASA Astrophysics Data System (ADS)
Ritter, Jennifer M.
1999-12-01
The purpose of this study was to develop, validate and establish the reliability of an instrument to assess the self-efficacy beliefs of prospective elementary teachers with regard to science teaching and learning for diverse learners. The study used Bandura's theoretical framework, in that the instrument would use the self-efficacy construct to explore the beliefs of prospective elementary science teachers with regard to science teaching and learning for diverse learners, specifically the two dimensions of self-efficacy beliefs defined by Bandura (1977): personal self-efficacy and outcome expectancy. A seven-step plan was designed and followed in the process of developing the instrument, which was titled the Self-Efficacy Beliefs about Equitable Science Teaching, or SEBEST. Diverse learners as recognized by Science for All Americans (1989) are "those who in the past have largely been bypassed in science and mathematics education: ethnic and language minorities and girls" (p. xviii). That definition was extended by this researcher to include children from low socioeconomic backgrounds, based on the research by Gomez and Tabachnick (1992). The SEBEST was administered to 226 prospective elementary teachers at The Pennsylvania State University. Using the results from factor analyses, Coefficient Alpha, and Chi-Square, a 34-item instrument was found to achieve the greatest balance across construct validity, reliability, and item balance with the content matrix. The 34-item SEBEST was found to load purely on four factors across the content matrix, thus providing evidence of construct validity. The Coefficient Alpha reliability for the 34-item SEBEST was .90 overall, .82 for the PSE sub-scale, and .78 for the OE sub-scale. A Chi-Square test (χ2 = 2.71, df = 7, p > .05) was used to confirm that the 34 items were balanced across the Personal Self-Efficacy/Outcome Expectancy and Ethnicity/Language Minority/Gender/Socioeconomic Status dimensions of the content matrix. Based on the standardized development procedures used and the associated evidence, the SEBEST appears to be a content and construct valid instrument, with high internal reliability and moderate test-retest reliability qualities, for use with prospective elementary teachers to assess self-efficacy beliefs for teaching and learning science for diverse learners.
Peer Review of EPA's Draft BMDS Document: Exponential ...
BMDS is one of the Agency's premier tools for risk assessment; therefore, the validity and reliability of its statistical models are of paramount importance. This page provides links to peer reviews of the BMDS applications and its models as they were developed and eventually released, documenting the rigorous review process taken to provide the best science tools available for statistical modeling.
Translations on USSR Science and Technology, Physical Sciences and Technology, Number 16
1977-08-05
INVESTIGATION OF SPLITTING OF LIGHT NUCLEI WITH HIGH-ENERGY γ-RAYS WITH THE METHOD OF WILSON'S CHAMBER OPERATING IN POWERFUL BEAMS OF ELECTRONIC… boast high reliability, high speed, and extremely modest power requirements. Information on the Screen: Visual display devices greatly facilitate… The area of application of these units includes navigation, control of power systems, machine tools, and manufacturing processes. The capabilities of…
Citizen science: A new perspective to advance spatial pattern evaluation in hydrology.
Koch, Julian; Stisen, Simon
2017-01-01
Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, a simple visual comparison of spatial patterns is a task where humans are often considered to be more reliable than computer algorithms. In practice, however, science still largely depends on computer-based solutions, which bring benefits such as speed and the possibility of automating processes. Human vision can nevertheless be harnessed to evaluate the reliability of algorithms which are tailored to quantify similarity in spatial patterns. We established a citizen science project that employs human perception to rate similarity and dissimilarity between simulated spatial patterns of several scenarios of a hydrological catchment model. In total, more than 2,500 volunteers provided over 43,000 classifications of 1,095 individual subjects. We investigate the capability of a set of advanced statistical performance metrics to mimic the human perception to distinguish between similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating human perception, but they clearly provide auxiliary information that is valuable for model diagnostics. The metrics clearly differ in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric. The obtained dataset can provide an insightful benchmark for the community to test novel spatial metrics.
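The gap between naive cell-wise comparison and neighborhood-aware spatial metrics can be shown in a few lines. The sketch below contrasts RMSE with a fractions skill score (FSS) on a synthetic, slightly displaced pattern; the FSS is one illustrative neighborhood metric, not necessarily among those the study evaluated:

```python
# A displaced but otherwise identical pattern: cell-wise RMSE penalizes the
# shift heavily, while a neighborhood-based fractions skill score (FSS)
# still credits the patterns as similar. Fields are synthetic.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(4)
base = uniform_filter(rng.random((100, 100)), size=15)  # smooth synthetic map
shifted = np.roll(base, 5, axis=1)                      # same map, shifted 5 cells

def fss(a, b, threshold, window):
    fa = uniform_filter((a > threshold).astype(float), size=window)
    fb = uniform_filter((b > threshold).astype(float), size=window)
    mse = np.mean((fa - fb) ** 2)
    mse_ref = np.mean(fa ** 2) + np.mean(fb ** 2)       # worst-case reference
    return 1 - mse / mse_ref                            # 1 = perfect match

t = np.median(base)
print(f"RMSE = {np.sqrt(np.mean((base - shifted) ** 2)):.3f}")
print(f"FSS(window=11) = {fss(base, shifted, t, 11):.2f}")
```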
The TESS Transiting Planet Search Predicted Recovery and Reliability Rates
NASA Astrophysics Data System (ADS)
Smith, Jeffrey C.; Caldwell, Douglas A.; Davies, Misty; Jenkins, Jon Michael; Li, Jie; Morris, Robert L.; Rose, Mark; Tenenbaum, Peter; Ting, Eric; Twicken, Joseph D.; Wohler, Bill
2018-06-01
The Transiting Exoplanet Survey Satellite (TESS) will search for transiting planet signatures via the Science Processing Operations Center (SPOC) Science Pipeline at NASA Ames Research Center. We report on predicted transit recovery and reliability rates for planetary signatures. These estimates are based on simulated runs of the pipeline using realistic stellar models and transiting planet populations along with best estimates for instrumental noise, thermal induced focus changes, instrumental drift and stochastic artifacts in the light curve data. Key sources of false positives are identified and summarized. TESS will launch in 2018 and survey the full sky for transiting exoplanets over a period of two years. The SPOC pipeline was ported from the Kepler Science Operations Center (SOC) codebase and extended for TESS after the mission was selected for flight in the NASA Astrophysics Explorer program. Candidate planet detections and data products will be delivered to the Mikulski Archive for Space Telescopes (MAST); the MAST URL is archive.stsci.edu/tess. Funding for the TESS Mission has been provided by the NASA Science Mission Directorate.
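Once the pipeline's detections are cross-matched against the injected transit signatures, recovery and reliability reduce to simple ratios. A sketch with invented counts, not actual SPOC results:

```python
# Recovery: fraction of injected transit signals the pipeline finds.
# Reliability: fraction of pipeline detections that are real (not artifacts).
injected, recovered = 1000, 870        # hypothetical injection/recovery tally
detections, true_positives = 950, 870  # hypothetical detection cross-match
print(f"recovery rate = {recovered / injected:.1%}")
print(f"reliability   = {true_positives / detections:.1%}")
```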
NASA Astrophysics Data System (ADS)
Brown, M.
2013-12-01
Republican strategist Frank Luntz famously advised party leaders to emphasize the lack of 'scientific certainty' about global warming because, once the public became convinced of a scientific consensus on global warming, they would accept it and policies responding to it. But philosophical work on scientific methodology puts absolute certainty out of reach. Today almost all philosophers are fallibilists, holding that every belief is subject to correction. Does it follow that an honest report of what science has to say about any topic cannot claim that the matter is settled? I defend a more confident view: much of what science has to say about the world is settled. On this view, settled science doesn't require philosophical certainty. Instead, it is based on stable agreement and practical reliability, values central to C.S. Peirce's account of the scientific method. Defending this position requires care over changes in theory that can undermine the language in which these settled points were initially expressed. It also requires epistemic modesty, since our confidence in any claim, including observational claims, depends on a kind of induction, and could, in principle, be undermined. Interestingly, this line of thought also suggests many of the central claims the historical sciences make about our world are more stable and reliable than the more theory-dependent claims we think of as central to the foundations of physics. David Hume undermined philosophical certainty about causal laws when he argued that, no matter how many times we make observations confirming a law, it could still fail the next test. This point is sometimes confused with healthy scientific skepticism, but Hume's worry has a much broader scope than scientific skepticism. For Hume each instance of a causal relation was doubtful, no matter how similar to previous instances. But scientists are diffident about applying a well-tested law only when the circumstances differ. For instance, particle physicists regard the standard model as extremely reliable for a wide range of energies, though it is expected to fail at higher energies. Peirce's emphasis on stable agreement and practical reliability has two important implications for understanding settled science: first, it exposes our confidence in observations to Hume's worries about induction, since our confidence in observation is, on his account, grounded in regular, reliable agreement between observers. But well-established observational results are paradigmatic examples of settled science! In turn, this points toward a more general account of settled science. The successes of general relativity, quantum mechanics and the standard model in particle physics are inconclusive for the foundations of physics, which aim (ambitiously) at a unified theory of all the forces of nature. But they remain settled successes for these theories. Similarly, biology and geology have achieved stable consensus on many points, including the importance of processes such as natural selection and plate tectonics, and climate science has achieved stable consensus on many conclusions as well (Cook et al., 2013), from the processes responsible for the 'greenhouse' effect and the role of CO2 emissions in amplifying that effect, to a range of estimates for climate sensitivity, based on multiple lines of evidence, that makes the danger of continued emissions plain.
Problem Solving in Biology: A Methodology
ERIC Educational Resources Information Center
Wisehart, Gary; Mandell, Mark
2008-01-01
A methodology is described that teaches science process by combining informal logic and a heuristic for rating factual reliability. This system facilitates student hypothesis formation, testing, and evaluation of results. After problem solving with this scheme, students are asked to examine and evaluate arguments for the underlying principles of…
Virtual Sensor Web Architecture
NASA Astrophysics Data System (ADS)
Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.
2006-12-01
NASA envisions the development of smart sensor webs: intelligent and integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models, with event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iii) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aydin, Süleyman, E-mail: yupul@hotmail.com; Haşiloğlu, M. Akif, E-mail: mehmet.hasiloglu@hotmail.com; Kunduraci, Ayşe, E-mail: ayse-kndrc@hotmail.com
This study aimed to develop an academic achievement test to establish students' knowledge about earthquakes and the ways of protection from earthquakes. The method followed the steps that Webb (1994) laid out for developing an academic achievement test for a unit. In the development process, a multiple-choice test with 25 questions was prepared to measure the pre-service teachers' knowledge levels about earthquakes and the ways of protection from earthquakes. The multiple-choice test was presented for review to six academics (one from the geography field and five science educators) and two expert science teachers. The prepared test was applied to 93 pre-service teachers studying in the elementary education department in the 2014-2015 academic year. As a result of the validity and reliability analyses, the test was reduced to 20 items. From these applications, the Pearson product-moment split-half reliability coefficient was found to be 0.94; adjusted with the Spearman-Brown formula, the reliability coefficient was 0.97.
Data collection and evaluation for experimental computer science research
NASA Technical Reports Server (NTRS)
Zelkowitz, Marvin V.
1983-01-01
The Software Engineering Laboratory has been monitoring software development at NASA Goddard Space Flight Center since 1976. The data collection activities of the Laboratory and some of the difficulties of obtaining reliable data are described. In addition, the application of this data collection process to a current prototyping experiment is reviewed.
Proceedings of the 24th Project Integration Meeting
NASA Technical Reports Server (NTRS)
Tustin, D.
1984-01-01
Progress made by the Flat-Plate Solar Array Project is described. Reports on silicon sheet growth and characterization, silicon material, process development, high-efficiency cells, environmental isolation, engineering sciences, and reliability physics are presented along with copies of visual presentations made at the 24th Project Integration Meeting.
Computer-aided design of polymers and composites
NASA Technical Reports Server (NTRS)
Kaelble, D. H.
1985-01-01
This book on computer-aided design of polymers and composites introduces and discusses the subject from the viewpoint of atomic and molecular models. Thus, the origins of stiffness, strength, extensibility, and fracture toughness in composite materials can be analyzed directly in terms of chemical composition and molecular structure. Aspects of polymer composite reliability are considered along with characterization techniques for composite reliability, relations between atomic and molecular properties, computer aided design and manufacture, polymer CAD/CAM models, and composite CAD/CAM models. Attention is given to multiphase structural adhesives, fibrous composite reliability, metal joint reliability, polymer physical states and transitions, chemical quality assurance, processability testing, cure monitoring and management, nondestructive evaluation (NDE), surface NDE, elementary properties, ionic-covalent bonding, molecular analysis, acid-base interactions, the manufacturing science, and peel mechanics.
Managing unexpected events in the manufacturing of biologic medicines.
Grampp, Gustavo; Ramanan, Sundar
2013-08-01
The manufacturing of biologic medicines (biologics) requires robust process and facility design, rigorous regulatory compliance, and a well-trained workforce. Because of the complex attributes of biologics and their sensitivity to production and handling conditions, manufacturing of these medicines also requires a high-reliability manufacturing organization. As required by regulators, such an organization must monitor the state-of-control for the manufacturing process. A high-reliability organization also invests in an experienced and fully engaged technical support staff and fosters a management culture that rewards in-depth analysis of unexpected results, robust risk assessments, and timely and effective implementation of mitigation measures. Such a combination of infrastructure, technology, human capital, management, and a science-based operations culture does not occur without a strong organizational and financial commitment. These attributes of a high-reliability biologics manufacturer are difficult to achieve and may be differentiating factors as the supply of biologics diversifies in future years.
2004 research briefs :Materials and Process Sciences Center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cieslak, Michael J.
2004-01-01
This report is the latest in a continuing series that highlights the recent technical accomplishments associated with the work being performed within the Materials and Process Sciences Center. Our research and development activities primarily address the materials-engineering needs of Sandia's Nuclear-Weapons (NW) program. In addition, we have significant efforts that support programs managed by the other laboratory business units. Our wide range of activities occurs within six thematic areas: Materials Aging and Reliability, Scientifically Engineered Materials, Materials Processing, Materials Characterization, Materials for Microsystems, and Materials Modeling and Simulation. We believe these highlights collectively demonstrate the importance that a strong materials-science base has on the ultimate success of the NW program and the overall DOE technology portfolio.
2016-10-01
Reports an error in "Unreliability as a threat to understanding psychopathology: The cautionary tale of attentional bias" by Thomas L. Rodebaugh, Rachel B. Scullin, Julia K. Langer, David J. Dixon, Jonathan D. Huppert, Amit Bernstein, Ariel Zvielli and Eric J. Lenze (Journal of Abnormal Psychology, 2016[Aug], Vol 125[6], 840-851). There was an error in the Author Note concerning the support of the MacBrain Face Stimulus Set. The correct statement is provided. (The following abstract of the original article appeared in record 2016-30117-001.) The use of unreliable measures constitutes a threat to our understanding of psychopathology, because advancement of science using both behavioral and biologically oriented measures can only be certain if such measurements are reliable. Two pillars of the National Institute of Mental Health's portfolio, the Research Domain Criteria (RDoC) initiative for psychopathology and the target engagement initiative in clinical trials, cannot succeed without measures that possess the high reliability necessary for tests involving mediation and selection based on individual differences. We focus on the historical lack of reliability of attentional bias measures as an illustration of how reliability can pose a threat to our understanding. Our own data replicate previous findings of poor reliability for traditionally used scores, which suggests a serious problem with the ability to test theories regarding attentional bias. This lack of reliability may also suggest problems with the assumption (in both theory and the formula for the scores) that attentional bias is consistent and stable across time. In contrast, measures accounting for attention as a dynamic process in time show good reliability in our data. The field is sorely in need of research reporting findings and reliability for attentional bias scores using multiple methods, including those focusing on dynamic processes over time. We urge researchers to test and report reliability of all measures, considering findings of low reliability not just as a nuisance but as an opportunity to modify and improve upon the underlying theory. Full assessment of reliability of measures will maximize the possibility that RDoC (and psychological science more generally) will succeed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
A Robust, Low-Cost Virtual Archive for Science Data
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Vollmer, Bruce
2005-01-01
Despite their expense, tape silos are still often the only affordable option for petabyte-scale science data archives, particularly when other factors such as data reliability, floor space, power and cooling load are accounted for. However, the complexity, management software, hardware reliability and access latency of tape silos make online data storage ever more attractive. Drastic reductions in the cost of mass-market PC disk drives (approx. $1/GB) help to make this more affordable, but such drives are challenging to scale to the petabyte range and of questionable reliability for archival use. On the other hand, if much of the science archive could be "virtualized", i.e., produced on demand when requested by users, we would need to store only a fraction of the data online, perhaps bringing an online-only system into affordable range. Radiance data from the satellite-borne Moderate Resolution Imaging Spectroradiometer (MODIS) instrument provide a good opportunity for such a virtual archive: the raw data amount to 140 GB/day, which is small relative to the 550 GB/day making up the radiance products. These data are routinely processed as inputs for geophysical parameter products and then archived on tape at the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) for distribution to users. Virtualizing them would immediately and significantly reduce the amount of data being stored in the tape archives and would provide more customizable products. A prototype of such a virtual archive is being developed to prove the concept and develop ways of incorporating the robustness that a science data archive requires.
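The storage saving that motivates the virtual archive follows directly from the daily volumes quoted; a back-of-the-envelope sketch (the annual extrapolation and cost figure are our illustration, not numbers from the abstract):

```python
raw_gb_per_day = 140        # MODIS raw data volume, from the abstract
products_gb_per_day = 550   # radiance product volume, from the abstract

# If radiance products are generated on demand instead of archived,
# only the raw stream needs permanent storage.
saved_tb_per_year = products_gb_per_day * 365 / 1024
media_cost_per_year = products_gb_per_day * 365 * 1.0   # at ~$1/GB commodity disk

print(f"~{saved_tb_per_year:.0f} TB/year kept out of the archive")
print(f"~${media_cost_per_year:,.0f}/year of disk media avoided at $1/GB")
print(f"online store holds only {raw_gb_per_day / (raw_gb_per_day + products_gb_per_day):.0%} of the daily volume")
```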
Noninteractive macroscopic reliability model for whisker-reinforced ceramic composites
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Arnold, Steven M.
1990-01-01
Considerable research is underway in the field of material science focusing on incorporating silicon carbide whiskers into silicon nitride and alumina matrices. These composites show the requisite thermal stability and thermal shock resistance necessary for use as components in advanced gas turbines and heat exchangers. This paper presents a macroscopic noninteractive reliability model for whisker-reinforced ceramic composites. The theory is multiaxial and is applicable to composites that can be characterized as transversely isotropic. Enough processing data exists to suggest this idealization encompasses a significantly large class of fabricated components. A qualitative assessment of the model is made by presenting reliability surfaces in several different stress spaces and for different values of model parameters.
Citizen science: A new perspective to advance spatial pattern evaluation in hydrology
Stisen, Simon
2017-01-01
Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, simple visual comparison of spatial patterns is a task where humans are often considered more reliable than computer algorithms. In practice, however, science still largely depends on computer-based solutions, which bring benefits such as speed and the possibility of automating processes. Nevertheless, human vision can be harnessed to evaluate the reliability of algorithms tailored to quantify similarity in spatial patterns. We established a citizen science project that employs human perception to rate similarity and dissimilarity between simulated spatial patterns from several scenarios of a hydrological catchment model. In total, more than 2,500 volunteers provided over 43,000 classifications of 1,095 individual subjects. We investigate the capability of a set of advanced statistical performance metrics to mimic the human ability to distinguish between similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating human perception, but they clearly provide auxiliary information that is valuable for model diagnostics. The metrics differ markedly in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric. The resulting dataset can provide an insightful benchmark for the community to test novel spatial metrics. PMID:28558050
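One simple member of the family of spatial performance metrics benchmarked in studies like this is a cell-wise pattern correlation between two fields; a minimal sketch on surrogate data (the random fields below are illustrative, not catchment model output):

```python
import numpy as np

rng = np.random.default_rng(4)
base = rng.normal(size=(50, 50))                 # reference spatial pattern
sim_a = base + 0.3 * rng.normal(size=(50, 50))   # similar simulated pattern
sim_b = rng.normal(size=(50, 50))                # dissimilar pattern

def pattern_corr(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation of the flattened fields."""
    return np.corrcoef(x.ravel(), y.ravel())[0, 1]

print(f"similar: {pattern_corr(base, sim_a):.2f}, "
      f"dissimilar: {pattern_corr(base, sim_b):.2f}")
```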
ERIC Educational Resources Information Center
Martinez, Jose Felipe; Borko, Hilda; Stecher, Brian M.
2012-01-01
With growing interest in the role of teachers as the key mediators between educational policies and outcomes, the importance of developing good measures of classroom processes has become increasingly apparent. Yet, collecting reliable and valid information about a construct as complex as instruction poses important conceptual and technical…
Toward a Sociology of Criminological Theory
ERIC Educational Resources Information Center
Hauhart, Robert C.
2012-01-01
It is a truism to remind ourselves that scientific theory is a human product subject to many of the same social processes that govern other social acts. Science, however, whether social or natural, pretends to claim a higher mission, a more sophisticated methodology, and more consequential and reliable outcomes than human efforts arising from…
Project Longshot: A mission to Alpha Centauri
NASA Technical Reports Server (NTRS)
West, Curtis; Chamberlain, Sally; Pagan, Neftali; Stevens, Robert
1989-01-01
Project Longshot, an exercise in the Advanced Design Program for Space, had as its destination Alpha Centauri, the closest star system to our own solar system. Alpha Centauri, a trinary star system, is 4.34 light-years from Earth. Although Project Longshot is impossible with existing technologies, the areas that require further investigation to make the feat possible are identified. Three areas where advances in technology are needed are propulsion, data processing for autonomous command and control functions, and reliability. Major challenges include propulsion, possibly by antimatter annihilation; navigation and navigation aids; reliable hardware and instruments; artificial intelligence to eliminate the need for command telemetry; laser communication; and a reliable, compact, and lightweight power system that converts energy efficiently. Project Longshot promises exciting advances in science and technology and new information concerning the universe.
Design and validation of a standards-based science teacher efficacy instrument
NASA Astrophysics Data System (ADS)
Kerr, Patricia Reda
National standards for K--12 science education address all aspects of science education, with their main emphasis on curriculum---both science subject matter and the process involved in doing science. Standards for science teacher education programs have been developing along a parallel plane, as is self-efficacy research involving classroom teachers. Generally, studies about efficacy have been dichotomous---basing the theoretical underpinnings on the work of either Rotter's Locus of Control theory or on Bandura's explanations of efficacy beliefs and outcome expectancy. This study brings all three threads together---K--12 science standards, teacher education standards, and efficacy beliefs---in an instrument designed to measure science teacher efficacy with items based on identified critical attributes of standards-based science teaching and learning. Based on Bandura's explanation of efficacy being task-specific and having outcome expectancy, a developmental, systematic progression from standards-based strategies and activities to tasks to critical attributes was used to craft items for a standards-based science teacher efficacy instrument. Demographic questions related to school characteristics, teacher characteristics, preservice background, science teaching experience, and post-certification professional development were included in the instrument. The instrument was completed by 102 middle level science teachers, with complete data for 87 teachers. A principal components analysis of the science teachers' responses to the instrument resulted in two components: Standards-Based Science Teacher Efficacy: Beliefs About Teaching (BAT, reliability = .92) and Standards-Based Science Teacher Efficacy: Beliefs About Student Achievement (BASA, reliability = .82). Variables that were characteristic of professional development activities, science content preparation, and school environment were identified as members of the sets of variables predicting the BAT and BASA subscales. Correlations were computed for BAT, BASA, and demographic variables to identify relationships between teacher efficacy, teacher characteristics, and school characteristics. Further research is recommended to refine the instrument and apply its use to a larger sample of science teachers. Its further development also has significance for the enhancement of science teacher education programs.
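The two-component result above comes from a standard principal components analysis of item responses; a bare-bones numpy sketch of the eigendecomposition step, run on simulated two-factor data rather than the study's survey responses:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 102, 10
f1 = rng.normal(size=(n, 1))
f2 = rng.normal(size=(n, 1))
# Ten hypothetical items: five loading on each of two latent factors
items = np.hstack([f1 + 0.5 * rng.normal(size=(n, 5)),
                   f2 + 0.5 * rng.normal(size=(n, 5))])

corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)      # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]
print("leading eigenvalues:", np.round(eigvals[order][:3], 2))
# Eigenvalues > 1 (the common Kaiser heuristic) point to two components here.
```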
Validity issues in the evaluation of a measure of science and mathematics teacher knowledge
NASA Astrophysics Data System (ADS)
Talbot, Robert M., III
2011-12-01
This study investigates the reliability and validity of an instrument designed to measure science and mathematics teachers' strategic knowledge. Strategic knowledge is conceptualized as a construct that is related to pedagogical knowledge and comprises two dimensions: Flexible Application (FA) and Student Centered Instruction (SCI). The FA dimension describes how a science teacher invokes, applies and modifies her instructional repertoire in a given teaching context. The SCI dimension describes how a science teacher conceives of a given situation as an opportunity for active engagement with the students. The Flexible Application of Student-Centered Instruction (FASCI) survey instrument was designed to measure science teachers' strategic knowledge by eliciting open-ended responses to scenario-based items. This study addresses the following overarching question: What are some potential issues pertaining to the validity of measures of science and mathematics teacher knowledge? Using a validity argument framework, different sources of evidence are identified, collected, and evaluated to examine support for a set of propositions related to the intended score interpretation and instrument use: that FASCI scores can be used to compare and distinguish the strategic knowledge of novice science and mathematics teachers in the evaluation of teacher education programs. Three separate but related studies are presented and discussed. These studies focus on the reliability of FASCI scores, the effect of adding specific science content to the scenario-based items, and the observation of strategic knowledge in teaching practice. Serious issues were found with the reliability of scores from the FASCI instrument. It was also found that adding science content to the scenario-based items has an effect on FASCI scores, but not for the reason hypothesized. Finally, it was found that more evidence is needed to make stronger claims about the relationship between FASCI scores and novice teachers' practice. In concluding this work, a set of four recommendations is presented for others engaged in similar measure-development efforts. These recommendations focus on construct definition, item design and development, rater recruitment and training, and the validation process.
High-Reliability Health Care: Getting There from Here
Chassin, Mark R; Loeb, Jerod M
2013-01-01
Context Despite serious and widespread efforts to improve the quality of health care, many patients still suffer preventable harm every day. Hospitals find improvement difficult to sustain, and they suffer “project fatigue” because so many problems need attention. No hospitals or health systems have achieved consistent excellence throughout their institutions. High-reliability science is the study of organizations in industries like commercial aviation and nuclear power that operate under hazardous conditions while maintaining safety levels that are far better than those of health care. Adapting and applying the lessons of this science to health care offer the promise of enabling hospitals to reach levels of quality and safety that are comparable to those of the best high-reliability organizations. Methods We combined the Joint Commission's knowledge of health care organizations with knowledge from the published literature and from experts in high-reliability industries and leading safety scholars outside health care. We developed a conceptual and practical framework for assessing hospitals’ readiness for and progress toward high reliability. By iterative testing with hospital leaders, we refined the framework and, for each of its fourteen components, defined stages of maturity through which we believe hospitals must pass to reach high reliability. Findings We discovered that the ways that high-reliability organizations generate and maintain high levels of safety cannot be directly applied to today's hospitals. We defined a series of incremental changes that hospitals should undertake to progress toward high reliability. These changes involve the leadership's commitment to achieving zero patient harm, a fully functional culture of safety throughout the organization, and the widespread deployment of highly effective process improvement tools. Conclusions Hospitals can make substantial progress toward high reliability by undertaking several specific organizational change initiatives. Further research and practical experience will be necessary to determine the validity and effectiveness of this framework for high-reliability health care. PMID:24028696
Mikulak, Anna
2011-06-01
As differentiation within scientific disciplines increases, so does differentiation between the sciences and other ways of knowing. This distancing between 'scientific' and 'non-scientific' cultures reflects differences in what are considered valid and reliable approaches to acquiring knowledge and has played a major role in recent science-oriented controversies. Scientists' reluctance to actively engage in science communication, coupled with journalists' reliance on the norms of balance, conflict, and human interest in covering scientific issues, have combined to exacerbate public mistrust of science on issues like the measles-mumps-rubella (MMR) vaccine. The failure of effective communications between scientists and non-scientists has hindered the progress of both effective science and effective policy. In order to better bridge the gap between the 'scientific' and 'non-scientific' cultures, renewed efforts must be made to encourage substantive public engagement, with the ultimate goal of facilitating an open, democratic policy-making process.
Reliability and Validation of a Short Scale to Measure Situational Emotions in Science Education
ERIC Educational Resources Information Center
Randler, Christoph; Hummel, Eberhard; Glaser-Zikuda, Michaela; Vollmer, Christian; Bogner, Franz X.; Mayring, Philipp
2011-01-01
Research has shown that emotions play a significant role in the learning process and academic achievement. However, the fact that measurement of emotions during or after instruction usually requires written responses on lengthy research instruments has been given as a reason why researchers have tended to avoid research on this topic in…
Increasing our understanding of how science really works
NASA Astrophysics Data System (ADS)
Scotchmoor, Judith
2010-03-01
"Most Americans do not understand the scientific process," nor can they distinguish between science and non-science (National Science Board, 2006). Given the impact of science on society, the lack of public understanding of science should be a concern to us all. In large part, the current confusions about evolution, global warming, and other aspects of science are symptomatic of a general misunderstanding of what science is and what it is not. Too few of our citizens view science as a dynamic process through which we gain a reliable understanding of the natural world. As a result, the public becomes vulnerable to misinformation and the very real benefits of science become obscured. In response, an NSF-funded initiative has emerged to improve public understanding of how science really works, why it matters, and who scientists are. Understanding Science, a collaborative project developed by the UC Museum of Paleontology, serves to both inspire and engage students in the dynamic nature of science. The "scientific method" of our textbooks is an impoverished depiction that does little to promote scientific literacy. If we are aiming for a public capable of assessing conflicting representations of scientific evidence in the media, they must understand the strengths, limitations, and basic methods of the enterprise that has produced those claims. While many teachers recognize the weakness of the standard pedagogical approach to these fundamentals of science literacy, until now they lacked any comprehensive resource to help them strengthen their own knowledge and teaching.
Methodological challenges when doing research that includes ethnic minorities: a scoping review.
Morville, Anne-Le; Erlandsson, Lena-Karin
2016-11-01
There are challenging methodological issues in obtaining valid and reliable results on which to base occupational therapy interventions for ethnic minorities. The aim of this scoping review is to describe the methodological problems within occupational therapy research when ethnic minorities are included. A thorough literature search yielded 21 articles obtained from the scientific databases PubMed, CINAHL, Web of Science and PsycINFO. Analysis followed Arksey and O'Malley's framework for scoping reviews, applying content analysis. The results showed methodological issues concerning the entire research process, from defining and recruiting samples, conceptual understanding, the lack of appropriate instruments, and data collection using interpreters, to analyzing the data. In order to avoid excluding ethnic minorities from adequate occupational therapy research and interventions, development of methods for the entire research process is needed. It is a costly and time-consuming process, but the results will be valid and reliable, and therefore more applicable in clinical practice.
Power processing for electric propulsion
NASA Technical Reports Server (NTRS)
Finke, R. C.; Herron, B. G.; Gant, G. D.
1975-01-01
The potential of achieving up to 30 per cent more spacecraft payload or 50 per cent more useful operating life by the use of electric propulsion in place of conventional cold gas or hydrazine systems in science, communications, and earth applications spacecraft is a compelling reason to consider the inclusion of electric thruster systems in new spacecraft design. The propulsion requirements of such spacecraft dictate a wide range of thruster power levels and operational lifetimes, which must be matched by lightweight, efficient, and reliable thruster power processing systems. This paper will present electron bombardment ion thruster requirements; review the performance characteristics of present power processing systems; discuss design philosophies and alternatives in areas such as inverter type, arc protection, and control methods; and project future performance potentials for meeting goals in the areas of power processor weight (10 kg/kW), efficiency (approaching 92 per cent), reliability (0.96 for 15,000 hr), and thermal control capability (0.3 to 5 AU).
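For context, the reliability goal quoted above (0.96 for 15,000 hr) implies a maximum tolerable failure rate under the usual constant-failure-rate (exponential) assumption; the arithmetic below is our illustration, not a calculation from the paper:

```python
import math

R, t = 0.96, 15_000  # reliability goal over mission hours, from the abstract

# Exponential model: R(t) = exp(-lambda * t)  =>  lambda = -ln(R) / t
lam = -math.log(R) / t
mtbf = 1 / lam
print(f"failure rate ~{lam:.2e} per hour, MTBF ~{mtbf:,.0f} hours")
# ~2.7e-6 failures/hour, i.e. an MTBF of roughly 370,000 hours
```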
Lucky Belief in Science Education - Gettier Cases and the Value of Reliable Belief-Forming Processes
NASA Astrophysics Data System (ADS)
Brock, Richard
2018-05-01
The conceptualisation of knowledge as justified true belief has been shown to be, at the very least, an incomplete account. One challenge to the justified true belief model arises from the proposition of situations in which a person possesses a belief that is both justified and true which some philosophers intuit should not be classified as knowledge. Though situations of this type have been imagined by a number of writers, they have come to be labelled Gettier cases. Gettier cases arise when a fallible justification happens to lead to a true belief in one context, a case of `lucky belief'. In this article, it is argued that students studying science may make claims that resemble Gettier cases. In some contexts, a student may make a claim that is both justified and true but which arises from an alternative conception of a scientific concept. A number of instances of lucky belief in topics in science education are considered leading to an examination of the criteria teachers use to assess students' claims in different contexts. The possibility of lucky belief leads to the proposal that, in addition to the acquisition of justified true beliefs, the development of reliable belief-forming processes is a significant goal of science education. The pedagogic value of various kinds of claims is considered and, it is argued, the criteria used to judge claims may be adjusted to suit the context of assessment. It is suggested that teachers should be alert to instances of lucky belief that mask alternative conceptions.
The value of the Semantic Web in the laboratory.
Frey, Jeremy G
2009-06-01
The Semantic Web is beginning to impact on the wider chemical and physical sciences, beyond the earlier adopted bio-informatics. While useful in large-scale data driven science with automated processing, these technologies can also help integrate the work of smaller scale laboratories producing diverse data. The semantics aid the discovery, reliable re-use of data, provide improved provenance and facilitate automated processing by increased resilience to changes in presentation and reduced ambiguity. The Semantic Web, its tools and collections are not yet competitive with well-established solutions to current problems. It is in the reduced cost of instituting solutions to new problems that the versatility of Semantic Web-enabled data and resources will make their mark once the more general-purpose tools are more available.
Reliability Assessment Approach for Stirling Convertors and Generators
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Schreiber, Jeffrey G.; Zampino, Edward; Best, Timothy
2004-01-01
Stirling power conversion is being considered for use in a Radioisotope Power System for deep-space science missions because it offers a multifold increase in the conversion efficiency of heat to electric power. Quantifying the reliability of a Radioisotope Power System that utilizes Stirling power conversion technology is important in developing and demonstrating the capability for long-term success. A description of the Stirling power convertor is provided, along with a discussion of some of the key components. Ongoing efforts to understand component life, design variables at the component and system levels, related uncertainty sources, and the nature of those uncertainties are discussed. The requirement for reliability is also discussed, and some of the critical areas of concern are identified. A section on the objectives of the performance-model development and a computation of reliability is included to highlight the goals of this effort. A viable physics-based reliability plan to model the design-level variable uncertainties at the component and system levels is outlined, and its potential benefits are elucidated. The plan involves the interaction of different disciplines, maintaining the physical and probabilistic correlations at all levels, and a verification process based on rational short-term tests. In addition, both top-down and bottom-up coherency were maintained to follow the physics-based design process and mission requirements. The outlined reliability assessment approach provides guidelines to improve the design and identifies governing variables to achieve high reliability in the Stirling Radioisotope Generator design.
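A physics-based reliability assessment of the kind outlined typically propagates design-variable uncertainty through a limit-state model via Monte Carlo sampling; a generic stress-strength sketch with illustrative placeholder distributions, not Stirling convertor data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical uncertain design variables: component strength and operating
# stress. Failure occurs on any sample where stress exceeds strength.
strength = rng.normal(loc=100.0, scale=8.0, size=n)   # e.g., MPa
stress = rng.normal(loc=70.0, scale=10.0, size=n)

reliability = np.mean(strength > stress)
print(f"estimated reliability: {reliability:.4f}")
```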
Fernández-Domínguez, Juan Carlos; de Pedro-Gómez, Joan Ernest; Morales-Asencio, José Miguel; Bennasar-Veny, Miquel; Sastre-Fullana, Pedro; Sesé-Abad, Albert
2017-01-01
Most of the EBP measuring instruments available to date present limitations both in the operationalisation of the construct and in the rigour of their psychometric development, as revealed in the literature review performed. The aim of this paper is to provide rigorous and adequate reliability and validity evidence for the scores of a new transdisciplinary psychometric tool, the Health Sciences Evidence-Based Practice (HS-EBP), for measuring the construct EBP in Health Sciences professionals. A pilot study and a subsequent two-stage validation test sample were conducted to progressively refine the instrument until reaching a reduced 60-item version with a five-factor latent structure. Reliability was analysed through both Cronbach's alpha coefficient and intraclass correlations (ICC). Latent structure was contrasted using confirmatory factor analysis (CFA) following a model comparison approach. Evidence of criterion validity of the scores obtained was achieved by considering attitudinal resistance to change, burnout, and quality of professional life as criterion variables, while convergent validity was assessed using the Spanish version of the Evidence-Based Practice Questionnaire (EBPQ-19). Adequate evidence of both reliability and ICC was obtained for the five dimensions of the questionnaire. According to the CFA model comparison, the best fit corresponded to the five-factor model (RMSEA = 0.049; CI 90% RMSEA = [0.047; 0.050]; CFI = 0.99). Adequate criterion and convergent validity evidence was also provided. Finally, the HS-EBP showed the capability to find differences between EBP training levels, important evidence of decision validity. The reliability and validity evidence obtained regarding the HS-EBP confirms the adequate operationalisation of the EBP construct as a process put into practice to respond to every clinical situation arising in the daily practice of health sciences professionals (transprofessional). The tool could be useful for individual EBP assessment and for evaluating the impact of specific interventions to improve EBP.
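For reference, the Cronbach's alpha coefficient used above is computed from item variances and total-score variance; a minimal numpy sketch on simulated item data (the instrument's real items and scores are not reproduced here):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))                    # shared trait
data = latent + rng.normal(scale=0.8, size=(200, 5))  # 5 correlated items
print(round(cronbach_alpha(data), 2))
```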
Reliability and Probabilistic Risk Assessment - How They Play Together
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.; Stutts, Richard G.; Zhaofeng, Huang
2015-01-01
PRA methodology is one of the probabilistic analysis methods that NASA brought from the nuclear industry to assess the risk of loss of mission (LOM), loss of vehicle (LOV), and loss of crew (LOC) for launch vehicles. PRA is a system scenario based risk assessment that uses a combination of fault trees, event trees, event sequence diagrams, and probability and statistical data to analyze the risk of a system, a process, or an activity. It is a process designed to answer three basic questions: What can go wrong? How likely is it? What is the severity of the degradation? Since 1986, NASA, along with industry partners, has conducted a number of PRA studies to predict overall launch vehicle risks. Planning Research Corporation conducted the first of these studies in 1988. In 1995, Science Applications International Corporation (SAIC) conducted a comprehensive PRA study. In July 1996, NASA began a two-year study (October 1996 - September 1998) to develop a model that provided the overall Space Shuttle risk and estimates of risk changes due to proposed Space Shuttle upgrades. After the Columbia accident, NASA conducted a PRA on the Shuttle External Tank (ET) foam. This study was the most focused and extensive risk assessment that NASA has conducted in recent years. It used a dynamic, physics-based, integrated system analysis approach to understand the system risk due to ET foam loss in flight. Most recently, a PRA for the Ares I launch vehicle has been performed in support of the Constellation program. Reliability, on the other hand, addresses the loss of functions. In a broader sense, reliability engineering is a discipline that involves the application of engineering principles to the design and processing of products, both hardware and software, to meet product reliability requirements or goals. It is a very broad design-support discipline with important interfaces with many other engineering disciplines. Reliability as a figure of merit (i.e., the metric) is the probability that an item will perform its intended function(s) for a specified mission profile. In general, the reliability metric can be calculated through analyses using reliability demonstration and reliability prediction methodologies. Reliability analysis is critical for understanding component failure mechanisms and for identifying reliability-critical design and process drivers. The following sections discuss the PRA process and reliability engineering in detail and provide an application where reliability analysis and PRA were jointly used in a complementary manner to support a Space Shuttle flight risk assessment.
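Where reliability analysis supplies component-level probabilities and PRA combines them across scenarios, the simplest point of contact is the series/parallel reliability algebra; a sketch assuming independent components with invented values, not figures from any NASA study:

```python
from math import prod

def series(rels):
    """All components must work: R = product of R_i (independence assumed)."""
    return prod(rels)

def parallel(rels):
    """Redundant components: the system fails only if all of them fail."""
    return 1 - prod(1 - r for r in rels)

# Hypothetical three-element string with a redundant pair in the middle
r_system = series([0.999, parallel([0.95, 0.95]), 0.998])
print(f"{r_system:.5f}")  # ~0.99451 for these illustrative values
```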
Social scientist's viewpoint on conflict management
Ertel, Madge O.
1990-01-01
Social scientists can bring to the conflict-management process objective, reliable information needed to resolve increasingly complex issues. Engineers need basic training in the principles of the social sciences and in strategies for public involvement. All scientists need to be sure that the information they provide is unbiased by their own value judgments and that fair standards and open procedures govern its use.
ERIC Educational Resources Information Center
Yamamoto, Kentaro; He, Qiwei; Shin, Hyo Jeong; von Davier, Mattias
2017-01-01
Approximately a third of the Programme for International Student Assessment (PISA) items in the core domains (math, reading, and science) are constructed-response items and require human coding (scoring). This process is time-consuming, expensive, and prone to error as often (a) humans code inconsistently, and (b) coding reliability in…
ERIC Educational Resources Information Center
Talbot, Robert M., III
2017-01-01
There is a clear need for valid and reliable instrumentation that measures teacher knowledge. However, the process of investigating and making a case for instrument validity is not a simple undertaking; rather, it is a complex endeavor. This paper presents the empirical case of one aspect of such an instrument validation effort. The particular…
Assessment in Science Education
NASA Astrophysics Data System (ADS)
Rustaman, N. Y.
2017-09-01
An analytic study focusing on scientific reasoning literacy was conducted to strengthen the emphasis on assessment in science, taking the nature of science and assessment as reference points and addressing higher-order thinking and scientific skills in assessing science learning. Against a background of developing science process skills test items, inquiry in its many forms, and scientific and STEM literacy, it is argued that inquiry-based learning should first be implemented among science educators and learners before STEM education can successfully be developed among science teachers, prospective teachers, and students at all levels. After a thorough study of the work of a number of science education researchers, a model of scientific reasoning is proposed, and simple rubrics and some example test items are introduced in this article. As this is only a beginning, further studies will be needed, involving prospective science teachers with an interest in assessment, whether in authentic assessment or in test item development. A balanced use of alternative assessment rubrics, together with valid and reliable (standardized) test items, will be needed to accelerate STEM education in Indonesia.
ERIC Educational Resources Information Center
Köksal, Mustafa Serdar; Ertekin, Pelin; Çolakoglu, Özgür Murat
2014-01-01
The purpose of this study is to investigate association of data collectors' differences with the differences in reliability and validity of scores regarding affective variables (motivation toward science learning and science attitude) that are measured by Likert-type scales. Four researchers trained in data collection and seven science teachers…
Ball, Lianne C.
2016-07-14
Mangrove ecosystems protect vulnerable coastlines from storm effects, recycle nutrients, stabilize shorelines, improve water quality, and provide habitat for commercial and recreational fish species as well as for threatened and endangered wildlife. U.S. Geological Survey scientists conduct research on mangrove ecosystems to provide reliable scientific information about their ecology, productivity, hydrological processes, carbon storage, stress response, and restoration success. The Mangrove Science Network is a collaboration of USGS scientists focused on working with natural resource managers to develop and conduct research to inform decisions on mangrove management and restoration. Information about the Mangrove Science Network can be found at: http://www.usgs.gov/ecosystems/environments/mangroves.html.
An Overview of the Jupiter Icy Moons Orbiter (JIMO) Mission, Environments, and Materials Challenges
NASA Technical Reports Server (NTRS)
Edwards, Dave
2012-01-01
Congress authorized NASA's Prometheus Project in February 2003, with the first Prometheus mission slated to explore the icy moons of Jupiter with the following main objectives: (1) Develop a nuclear reactor that would provide unprecedented levels of power and show that it could be processed safely and operated reliably in space for long-duration missions. (2) Explore the three icy moons of Jupiter -- Callisto, Ganymede, and Europa -- and return science data that would meet the scientific goals as set forth in the Decadal Survey Report of the National Academy of Sciences.
Validity and reliability of food security measures.
Cafiero, Carlo; Melgar-Quiñonez, Hugo R; Ballard, Terri J; Kepple, Anne W
2014-12-01
This paper reviews some of the existing food security indicators, discussing the validity of the underlying concept and the expected reliability of measures under reasonably feasible conditions. The main objective of the paper is to raise awareness on existing trade-offs between different qualities of possible food security measurement tools that must be taken into account when such tools are proposed for practical application, especially for use within an international monitoring framework. The hope is to provide a timely, useful contribution to the process leading to the definition of a food security goal and the associated monitoring framework within the post-2015 Development Agenda. © 2014 New York Academy of Sciences.
Process Skill Assessment Instrument: Innovation to measure student’s learning result holistically
NASA Astrophysics Data System (ADS)
Azizah, K. N.; Ibrahim, M.; Widodo, W.
2018-01-01
Science process skills (SPS) are very important skills for students. However, it is undeniable that SPS are not a main concern in primary school learning. This research aimed to develop a valid, practical, and effective assessment instrument to measure students' SPS. The assessment instruments comprise a worksheet and a test. This development research used a one-group pre-test post-test design. Data were obtained through validation, observation, and testing to investigate the validity, practicality, and effectiveness of the instruments. Results showed that the assessment instruments are rated as very valid, their reliability is categorized as reliable, student SPS activities occurred at a high percentage, and there is a significant improvement in students' SPS scores. It can be concluded that the SPS assessment instruments are valid, practical, and effective for measuring students' SPS.
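In a one-group pre-test/post-test design such as this, the significance of the improvement is typically checked with a paired test; a sketch with fabricated scores standing in for the SPS data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pre = rng.normal(60, 10, size=30)         # simulated pre-test scores
post = pre + rng.normal(8, 5, size=30)    # simulated post-test gain

t, p = stats.ttest_rel(post, pre)         # paired t-test on the same students
gain = (post - pre).mean()
print(f"mean gain {gain:.1f}, t = {t:.2f}, p = {p:.3g}")
```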
Michaud, Jean-Philippe; Moreau, Gaétan
2011-01-01
Using pig carcasses exposed over 3 years in rural fields during spring, summer, and fall, we studied the relationship between decomposition stages and degree-day accumulation (i) to verify the predictability of the decomposition stages used in forensic entomology to document carcass decomposition and (ii) to build a degree-day accumulation model applicable to various decomposition-related processes. Results indicate that the decomposition stages can be predicted with accuracy from temperature records and that a reliable degree-day index can be developed to study decomposition-related processes. The development of degree-day indices opens new doors for researchers and allows for the application of inferential tools unaffected by climatic variability, as well as for the inclusion of statistics in a science that is primarily descriptive and in need of validation methods in courtroom proceedings. © 2010 American Academy of Forensic Sciences.
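A degree-day index of the kind described is a running sum of daily mean temperature in excess of a base threshold; a generic sketch (the base temperature and readings below are illustrative, not values from the study):

```python
def accumulated_degree_days(daily_mean_temps, base=10.0):
    """Sum of (T - base) over days where T exceeds the base temperature."""
    return sum(max(t - base, 0.0) for t in daily_mean_temps)

temps = [12.5, 15.0, 9.0, 18.5, 21.0, 16.5]   # daily means, deg C
print(accumulated_degree_days(temps))          # 33.5 ADD over six days
```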
The development and testing of a qualitative instrument designed to assess critical thinking
NASA Astrophysics Data System (ADS)
Clauson, Cynthia Louisa
This study examined a qualitative approach to assess critical thinking. An instrument was developed that incorporates an assessment process based on Dewey's (1933) concepts of self-reflection and critical thinking as problem solving. The study was designed to pilot test the critical thinking assessment process with writing samples collected from a heterogeneous group of students. The pilot test included two phases. Phase 1 was designed to determine the validity and inter-rater reliability of the instrument using two experts in critical thinking, problem solving, and literacy development. Validity of the instrument was addressed by requesting both experts to respond to ten questions in an interview. The inter-rater reliability was assessed by analyzing the consistency of the two experts' scorings of the 20 writing samples to each other, as well as to my scoring of the same 20 writing samples. Statistical analyses included the Spearman Rho and the Kuder-Richardson (Formula 20). Phase 2 was designed to determine the validity and reliability of the critical thinking assessment process with seven science teachers. Validity was addressed by requesting the teachers to respond to ten questions in a survey and interview. Inter-rater reliability was addressed by comparing the seven teachers' scoring of five writing samples with my scoring of the same five writing samples. Again, the Spearman Rho and the Kuder-Richardson (Formula 20) were used to determine the inter-rater reliability. The validity results suggest that the instrument is helpful as a guide for instruction and provides a systematic method to teach and assess critical thinking while problem solving with students in the classroom. The reliability results show the critical thinking assessment instrument to possess fairly high reliability when used by the experts, but weak reliability when used by classroom teachers. A major conclusion was drawn that teachers, as well as students, would need to receive instruction in critical thinking and in how to use the assessment process in order to gain more consistent interpretations of the six problem-solving steps. Specific changes needing to be made in the instrument to improve the quality are included.
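Of the two statistics named above, the Kuder-Richardson Formula 20 is the dichotomous-item special case of coefficient alpha; a minimal sketch on invented 0/1 scoring data, not the study's writing-sample ratings:

```python
import numpy as np

def kr20(items: np.ndarray) -> float:
    """items: respondents x items matrix of 0/1 scores."""
    k = items.shape[1]
    p = items.mean(axis=0)                     # proportion correct per item
    q = 1 - p
    total_var = items.sum(axis=1).var(ddof=1)  # sample variance of total scores
    return k / (k - 1) * (1 - (p * q).sum() / total_var)

rng = np.random.default_rng(3)
ability = rng.normal(size=(50, 1))
scores = (ability + rng.normal(size=(50, 12)) > 0).astype(int)  # 12 items
print(round(kr20(scores), 2))
```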
Aghamolaei, Teamur; Zare, Shahram
2008-06-18
Higher education is growing fast, and every day it becomes more exposed to globalization processes. The aim of this study was to determine the quality gap in educational services, using a modified SERVQUAL instrument, among students at Hormozgan University of Medical Sciences. A cross-sectional study was carried out at Hormozgan University of Medical Sciences in 2007. A total of 300 students were selected randomly and asked to complete a questionnaire designed according to SERVQUAL methods. The questionnaire measured students' perceptions and expectations in five dimensions of service: assurance, responsiveness, empathy, reliability and tangibles. The quality gap of educational services was determined from the differences between students' perceptions and expectations. The results demonstrated a negative quality gap in each of the five SERVQUAL dimensions. The least and most negative quality gap means were in the reliability (-0.71) and responsiveness (-1.14) dimensions, respectively. There were also significant differences between students' perceptions and expectations in all five SERVQUAL dimensions (p < 0.001). Negative quality gaps mean students' expectations exceed their perceptions; thus, improvements are needed across all five dimensions.
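The SERVQUAL gap score used here is simply the mean of perception minus expectation ratings within each dimension; a sketch with invented ratings (the study's actual data are not reproduced):

```python
import numpy as np

dimensions = ["assurance", "responsiveness", "empathy", "reliability", "tangibles"]

rng = np.random.default_rng(5)
perceptions = rng.uniform(2.5, 4.0, size=(300, 5))    # hypothetical 1-5 ratings
expectations = rng.uniform(3.5, 5.0, size=(300, 5))

gaps = (perceptions - expectations).mean(axis=0)      # gap = perception - expectation
for name, gap in zip(dimensions, gaps):
    print(f"{name:>14}: {gap:+.2f}")   # negative = expectations exceed perceptions
```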
In-situ Frequency Dependent Dielectric Sensing of Cure
NASA Technical Reports Server (NTRS)
Kranbuehl, David E.
1996-01-01
With the expanding use of polymeric materials as composite matrices, adhesives, coatings and films, the need to develop low-cost, automated fabrication processes that produce consistently high-quality parts is critical. Essential to the development of reliable, automated, intelligent processing is the ability to continuously monitor the changing state of the polymeric resin in-situ in the fabrication tool. This final report discusses work on developing dielectric sensing to monitor polymeric material cure, which provides a fundamental understanding of the underlying science for the use of frequency-dependent dielectric sensors to monitor the cure process.
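One common way frequency-dependent dielectric data are reduced in cure monitoring is to extract the ionic conductivity from the measured loss factor, since conduction-dominated loss scales as sigma/(eps0*omega); a hedged numpy sketch with made-up readings, not data from this report:

```python
import numpy as np

eps0 = 8.854e-12  # vacuum permittivity, F/m

# Hypothetical imaginary-permittivity (loss factor) readings at several
# frequencies. Where ionic conduction dominates, eps'' ~ sigma / (eps0 * omega),
# so sigma = eps0 * omega * eps'' should come out frequency independent.
freqs_hz = np.array([1e2, 1e3, 1e4])
loss_factor = np.array([1.8e3, 1.8e2, 1.9e1])   # illustrative values

sigma = eps0 * 2 * np.pi * freqs_hz * loss_factor
print(sigma)  # near-constant values indicate conduction-dominated loss
```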
Where Non-Science Majors Get Information about Science and How They Rate that Information
NASA Astrophysics Data System (ADS)
Buxner, Sanlyn; Impey, Chris; Nieberding, Megan; Romine, James
2014-11-01
College non-science-major courses represent one of the last science courses many students will ever take. We report on a study of 400 undergraduate non-science majors enrolled in introductory astronomy courses at the University of Arizona, conducted to gain insight into where they get their information about science and how they perceive that information. Students completed an online survey during the 2013-2014 school year. In addition to demographic information, students reported where they obtained information about science, both for their own knowledge and for course assignments. They reported their interest in different science topics, rated the reliability of different sources of information, and reported how important science was to their lives, including their future career choices. Overall, students reported getting information from a variety of online sources when looking up a topic for their own knowledge, including internet searches (71%), Wikipedia (46%), and online science sites (e.g., NASA) (45%). When asked where they got information for course assignments, most reported assigned readings (82%), but a large percentage still reported online sources such as internet searches (60%), Wikipedia (30%), and online science sites (20%). Overall, students rated professors/teachers and textbooks as the most reliable sources of scientific information and rated social media sites, blogs, and Wikipedia as the least reliable. Additionally, friends and family members were rated as less reliable sources of scientific information than similar information found on multiple websites. Students' interest in science and self-reported knowledge of science were positively correlated, and there was a significant positive correlation between liking science and feeling that science is important to one's future career. Overall, these results give us insight into how our non-science majors obtain and evaluate scientific information.
Flitter, Marc A; Riesenmy, Kelly Rouse; van Stralen, Daved
2012-01-01
To offer a theoretical explanation for observed physician resistance and rejection of high reliability patient safety initiatives. A grounded theoretical qualitative approach, utilizing the organizational theory of sensemaking, provided the foundation for inductive and deductive reasoning employed to analyze medical staff rejection of two successfully performing high reliability programs at separate hospitals. Physician behaviors resistant to patient-centric high reliability processes were traced to provider-centric physician sensemaking. Research, conducted with the advantage that prospective studies have over the limitations of this retrospective investigation, is needed to evaluate the potential for overcoming physician resistance to innovation implementation, employing strategies based upon these findings and sensemaking theory in general. If hospitals are to emulate high reliability industries that do successfully manage environments of extreme hazard, physicians must be fully integrated into the complex teams required to accomplish this goal. Reforming health care, through high reliability organizing, with its attendant continuous focus on patient-centric processes, offers a distinct alternative to efforts directed primarily at reforming health care insurance. It is by changing how health care is provided that true cost efficiencies can be achieved. Technology and the insights of organizational science present the opportunity of replacing the current emphasis on privileged information with collective tools capable of providing quality and safety in health care. The fictions that have sustained a provider-centric health care system have been challenged. The benefits of patient-centric care should be obtainable.
Fault tolerant, radiation hard, high performance digital signal processor
NASA Technical Reports Server (NTRS)
Holmann, Edgar; Linscott, Ivan R.; Maurer, Michael J.; Tyler, G. L.; Libby, Vibeke
1990-01-01
An architecture has been developed for a high-performance VLSI digital signal processor that is highly reliable, fault-tolerant, and radiation-hard. The signal processor, part of a spacecraft receiver designed to support uplink radio science experiments at the outer planets, organizes the connections between redundant arithmetic resources, register files, and memory through a shuffle exchange communication network. The configuration of the network and the state of the processor resources are all under microprogram control, which both maps the resources according to algorithmic needs and reconfigures the processing should a failure occur. In addition, the microprogram is reloadable through the uplink to accommodate changes in the science objectives throughout the course of the mission. The processor will be implemented with silicon compiler tools, and its design will be verified through silicon compilation simulation at all levels from the resources to full functionality. By blending reconfiguration with redundancy the processor implementation is fault-tolerant and reliable, and possesses the long expected lifetime needed for a spacecraft mission to the outer planets.
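The shuffle exchange network named above connects 2^n endpoints with a "perfect shuffle" permutation (a cyclic left rotation of the index bits) plus an exchange on the lowest bit; a toy sketch of the addressing only, with no claim about the processor's actual microcode:

```python
def perfect_shuffle(i: int, n_bits: int) -> int:
    """Cyclic left rotation of an n-bit index: the 'shuffle' wire pattern."""
    msb = (i >> (n_bits - 1)) & 1
    return ((i << 1) & ((1 << n_bits) - 1)) | msb

def exchange(i: int) -> int:
    """Flip the least significant bit: the 'exchange' stage."""
    return i ^ 1

n_bits = 3  # 8 endpoints
print([perfect_shuffle(i, n_bits) for i in range(1 << n_bits)])
# [0, 2, 4, 6, 1, 3, 5, 7] -- each index's bits rotated left by one
```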
The Conceptualization and Development of the Practical Epistemology in Science Survey (PESS)
NASA Astrophysics Data System (ADS)
Villanueva, Mary Grace; Hand, Brian; Shelley, Mack; Therrien, William
2017-08-01
Various inquiry approaches have been promoted in science classrooms as a way for students to engage in, and have a deeper understanding of scientific discourse. However, there is a paucity of empirical evidence to suggest how children's actions and engagement in these approaches, or practical epistemologies (Sandoval, Science Education 89(4): 634-656, 2005), may contribute to the development of their personal epistemologies, or their views about the nature of knowledge and knowing and the nature of learning. This paper puts forth the conceptualization and development of the Practical Epistemology in Science Survey (PESS) instrument, a 26-item Likert-scale self-assessment which measures how students view their individual and social participation in the classroom scientific community. Data were collected from 4th-6th-grade students (n = 1019) in the USA and a psychometric evaluation of the reliability, validity, and dimensionality of the instrument was conducted. The Cronbach's alpha value indices for all subsets of items of the PESS suggest a strong reliability of the instrument (α ≥ .80). The development of the PESS may be useful in science education research to (a) detect changes to students' beliefs about knowledge and knowledge development; (b) identify dispositions and beliefs which may or may not be in line with the aims and values of various pedagogical approaches; (c) monitor the process of change, e.g., time it takes for students to change their approaches and beliefs with respect to teacher practice; and, (d) overall, to provide an understanding of how students' formal epistemologies are developed and informed by the affordances in science classrooms.
Yaffe, Michael B
2015-04-07
The issue of reproducibility and reliability in science has come to the forefront in light of several high-profile studies that could not be reproduced. Whereas some errors in reliability can be attributed to the application of new techniques that have unappreciated caveats, some problems with reproducibility lie in the climate of intense pressure for funding and to publish faced by many researchers. Copyright © 2015, American Association for the Advancement of Science.
ERIC Educational Resources Information Center
Kop, Yasar; Demir, Özden
2017-01-01
This research mainly aims to test the reliability and validity of the Epistemological Beliefs Scale developed by Kop and Demir (2014) at the level of social sciences teaching in a Faculty of Education. A total of 176 students participated in the study, which was carried out with 1st-, 2nd-, 3rd- and 4th-year students of the Social Sciences Teaching Department…
The Power of Engaging Citizen Scientists for Scientific Progress
Garbarino, Jeanne; Mason, Christopher E.
2016-01-01
Citizen science has become a powerful force for scientific inquiry, providing researchers with access to a vast array of data points while connecting nonscientists to the authentic process of science. This citizen-researcher relationship creates an incredible synergy, allowing for the creation, execution, and analysis of research projects that would otherwise prove impossible in traditional research settings, namely due to the scope of needed human or financial resources (or both). However, citizen-science projects are not without their challenges. For instance, as projects are scaled up, there is concern regarding the rigor and usability of data collected by citizens who are not formally trained in research science. While these concerns are legitimate, we have seen examples of highly successful citizen-science projects from multiple scientific disciplines that have enhanced our collective understanding of science, such as how RNA molecules fold or determining the microbial metagenomic snapshot of an entire public transportation system. These and other emerging citizen-science projects show how improved protocols for reliable, large-scale science can realize both an improvement of scientific understanding for the general public and novel views of the world around us. PMID:27047581
On a methodology for robust segmentation of nonideal iris images.
Schmid, Natalia A; Zuo, Jinyu
2010-06-01
Iris biometric is one of the most reliable biometrics with respect to performance. However, this reliability is a function of the ideality of the data. One of the most important steps in processing nonideal data is reliable and precise segmentation of the iris pattern from the remaining background. In this paper, a segmentation methodology that aims at compensating for various nonidealities contained in iris images during segmentation is proposed. The virtue of this methodology lies in its capability to reliably segment nonideal imagery that is simultaneously affected by such factors as specular reflection, blur, lighting variation, occlusion, and off-angle acquisition. We demonstrate the robustness of our segmentation methodology by evaluating it on ideal and nonideal data sets, namely, the Chinese Academy of Sciences iris data version 3 interval subdirectory, the iris challenge evaluation data, the West Virginia University (WVU) data, and the WVU off-angle data. Furthermore, we compare our performance to that of our implementations of Camus and Wildes's algorithm and Masek's algorithm. We demonstrate considerable improvement in segmentation performance over the aforementioned algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cramer, Christopher J.
Charge transfer and charge transport in photoactivated systems are fundamental processes that underlie solar energy capture, solar energy conversion, and photoactivated catalysis, both organometallic and enzymatic. We developed methods, algorithms, and software tools needed for reliable treatment of the underlying physics for charge transfer and charge transport, an undertaking with broad applicability to the goals of the fundamental-interaction component of the Department of Energy Office of Basic Energy Sciences and the exascale initiative of the Office of Advanced Scientific Computing Research.
Materials Science Research Rack Onboard the International Space Station
NASA Technical Reports Server (NTRS)
Frazier, Natalie C.; Johnson, Jimmie; Aicher, Winfried
2011-01-01
The Materials Science Research Rack (MSRR) allows for the study of a variety of materials, including metals, ceramics, semiconductor crystals, and glasses, onboard the International Space Station (ISS). MSRR was launched on STS-128 in August 2009, and is currently installed in the U.S. Destiny Laboratory Module. Since that time, MSRR has performed virtually flawlessly, logging more than 550 hours of operating time. Materials science is an integral part of the development of new materials for everyday life here on Earth. The goal of studying materials processing in space is to develop a better understanding of the chemical and physical mechanisms involved. Materials science research benefits from the microgravity environment of space, where the researcher can better isolate chemical and thermal properties of materials from the effects of gravity. With this knowledge, reliable predictions can be made about the conditions required on Earth to achieve improved materials. MSRR is a highly automated facility containing two furnace inserts in which Sample Cartridge Assemblies (SCAs), each containing one material sample, can be processed at temperatures up to 1400 °C. Once an SCA is installed by a crew member, the experiment can be run by automatic command, or science can be conducted via telemetry commands from the ground. Initially, 12 SCAs were processed in the first furnace insert for a team of European and US investigators. The processed samples have been returned to Earth for evaluation and comparison of their properties to samples similarly processed on the ground. A preliminary examination of the samples indicates that the majority of the desired science objectives have been successfully met, leading to significant improvements in the understanding of alloy solidification processes. The second furnace insert will be installed in the facility in January 2011 for processing the remaining SCA currently on orbit. Six SCAs are planned for launch in summer 2011, and additional batches are planned for future processing. This facility is available to support additional materials science investigations through programs such as the US National Laboratory, Technology Development, NASA Research Announcements, ESA application-oriented research programs, and others. The development of the research rack was a cooperative effort between NASA's Marshall Space Flight Center and the European Space Agency (ESA).
Brazhnik, Olga; Jones, John F.
2007-01-01
Producing reliable information is the ultimate goal of data processing. The ocean of data created with the advances of science and technologies calls for integration of data coming from heterogeneous sources that are diverse in their purposes, business rules, underlying models and enabling technologies. Reference models, the Semantic Web, standards, ontology, and other technologies enable fast and efficient merging of heterogeneous data, while the reliability of the produced information is largely defined by how well the data represent reality. In this paper we introduce a framework for assessing the informational value of data that covers data dimensions; alignment of data quality with business practices; identification of authoritative sources and integration keys; merging of models; and reconciliation of updates of varying frequency and of overlapping or gapped data sets. PMID:17071142
High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations
NASA Technical Reports Server (NTRS)
Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.
2003-01-01
Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission-critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost-effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high-end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.
Materials Science Research Rack Onboard the International Space Station
NASA Technical Reports Server (NTRS)
Reagan, Shawn E.; Lehman, John R.; Frazier, Natalie C.
2014-01-01
The Materials Science Research Rack (MSRR) is a highly automated facility developed in a joint venture/partnership between NASA and ESA:
- Allows for the study of a variety of materials, including metals, ceramics, semiconductor crystals, and glasses, onboard the International Space Station (ISS)
- Multi-user facility for high temperature materials science research
- Launched on STS-128 in August 2009, and is currently installed in the U.S. Destiny Laboratory Module
Research goals:
- Provide a means of studying materials processing in space to develop a better understanding of the chemical and physical mechanisms involved
- Benefit materials science research via the microgravity environment of space, where the researcher can better isolate the effects of gravity during solidification on the properties of materials
- Use the knowledge gained from experiments to make reliable predictions about the conditions required on Earth to achieve improved materials
Criminalistics and the forensic nursing process.
Burgess, Ann Wolbert; Piatelli, Michael J; Pasqualone, Georgia
2011-06-01
Students learn science by actually performing science activities. The 12 laboratories described in this article assist students in applying the fundamental techniques germane to the field of forensic science to "solve" contrived cases and present "evidence" in a mock trial. Moreover, students are also confronted with some of the legal and ethical issues concerning the validity, reliability, and application of some forensic techniques. The pedagogical design of the laboratory course provides a rich, challenging, and interdisciplinary academic experience intended to augment and complement the didactic forensic lecture portion of the course. This laboratory course was designed to engender, embody, and articulate one of the University's directive goals to support interdisciplinary teaching, research, and programming. Because we developed the laboratories on minimal funds, we also demonstrated that such a course can be cost-effective. We therefore recommend that a laboratory science course be included in the curriculum of all forensic nursing students and practitioners. © 2011 International Association of Forensic Nurses.
NASA Astrophysics Data System (ADS)
Genc, Evrim
The primary purpose of this study was to develop a valid and reliable instrument to examine science teachers' assessment beliefs and practices in science classrooms. The present study also investigated the relationship between teachers' beliefs and practices in terms of assessment issues in science, their perceptions of the factors that influenced their assessment practices, and their feelings towards high-stakes testing. The participants of the study were 408 science teachers teaching at middle and high school levels in the State of Florida. Data were collected through two modes of administration of the instrument, as a paper-and-pencil and a web-based form. The response rate for the paper-and-pencil administration was estimated as 68%, whereas the response rate for the web administration was found to be 27%. Results from the various dimensions of validity and reliability analyses revealed that the 24-item, four-factor belief and practice measures were psychometrically sound and conceptually anchored measures of science teachers' assessment beliefs and self-reported practices. Reliability estimates for the belief measure ranged from .83 to .91, whereas alpha values for the practice measure ranged from .56 to .90. Results from the multigroup analysis supported that the instrument has the same theoretical structure across both administration groups. Therefore, future researchers may use either a paper-and-pencil or web-based format of the instrument. This study underscored a discrepancy between what teachers believe and how they act in classroom settings. It was emphasized that certain factors were mediating the dynamics between belief and practice. The majority of teachers reported that instruction time, class size, professional development activities, availability of school funding, and state testing mandates impact their assessment routines. Teachers reported that both the preparation process and the results of the test created considerable tension for both students and teachers. Implications of the study indicated that it would be valuable to conduct alignment studies to examine whether state tests are fully aligned with the state standards and classroom assessment. Perhaps such analyses would assist state-level decision makers in reconsidering the current policies and "unintended" influences of mandated tests on classroom practices.
ERIC Educational Resources Information Center
Fussler, Herman H.; Payne, Charles T.
The project's second year (1967/68) was devoted to upgrading the computer operating software and programs to increase versatility and reliability. General conclusions about the program after 24 months of operation are that the project's objectives are sound and that effective utilization of computer-aided bibliographic data processing is essential…
Proceedings of the 26th Project Integration Meeting
NASA Technical Reports Server (NTRS)
1986-01-01
Progress made by the Flat-plate Solar Array (FSA) Project is described for the period July 1985 to April 1986. Included are reports on silicon sheet growth and characterization, silicon material, process development, high-efficiency cells, environmental isolation, engineering sciences, and reliability physics. Also included are technical and plenary presentations made at the 26th Project Integration Meeting (PIM) held on April 29-30 and May 1, 1986.
Introspections on the Semantic Gap
2015-04-14
…pauses the VM, and the VMI tool introspects the process descriptor list. In contrast, an asynchronous mechanism would introspect memory…
Simple, Script-Based Science Processing Archive
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Hegde, Mahabaleshwara; Barth, C. Wrandle
2007-01-01
The Simple, Scalable, Script-based Science Processing (S4P) Archive (S4PA) is a disk-based archival system for remote sensing data. It is based on the data-driven framework of S4P and is used for data transfer, data preprocessing, metadata generation, data archive, and data distribution. New data are automatically detected by the system. S4P provides services such as data access control, data subscription, metadata publication, data replication, and data recovery. It comprises scripts that control the data flow. The system detects the availability of data on an FTP (file transfer protocol) server, initiates data transfer, preprocesses data if necessary, and archives it on readily available disk drives with FTP and HTTP (Hypertext Transfer Protocol) access, allowing instantaneous data access. There are options for plug-ins for data preprocessing before storage. Publication of metadata to external applications such as the Earth Observing System Clearinghouse (ECHO) is also supported. S4PA includes a graphical user interface for monitoring the system operation and a tool for deploying the system. To ensure reliability, S4P continuously checks stored data for integrity. Further reliability is provided by tape backups of disks, made once a disk partition is full and closed. The system is designed for low maintenance, requiring minimal operator oversight.
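The continuous integrity checking mentioned above can be sketched generically. The actual S4PA comprises its own control scripts; the Python analogue below (the manifest name and layout are invented for illustration) records a checksum when a file is archived and later re-verifies the stored files:

```python
# Illustrative analogue of archive integrity checking, not the actual S4PA
# scripts: checksum at ingest, periodic re-verification against a manifest.
import hashlib
import json
import pathlib

MANIFEST = pathlib.Path("archive_manifest.json")     # hypothetical location

def sha256(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def archive(path: pathlib.Path) -> None:
    """Record the file's digest at archive time."""
    manifest = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    manifest[str(path)] = sha256(path)
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify() -> list[str]:
    """Return paths that are missing or whose contents no longer match."""
    manifest = json.loads(MANIFEST.read_text())
    return [p for p, digest in manifest.items()
            if not pathlib.Path(p).exists() or sha256(pathlib.Path(p)) != digest]
```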
Validity and Reliability in Social Science Research
ERIC Educational Resources Information Center
Drost, Ellen A.
2011-01-01
In this paper, the author aims to provide novice researchers with an understanding of the general problem of validity in social science research and to acquaint them with approaches to developing strong support for the validity of their research. She provides insight into these two important concepts, namely (1) validity; and (2) reliability, and…
Teaching the Nature of Science in a Course in Sustainable Agriculture
ERIC Educational Resources Information Center
Cessna, Stephen; Neufeld, Douglas Graber; Horst, S. Jeanne
2013-01-01
Claims of the (non-)sustainability of a given agricultural practice generally hinge on scientific evidence and the reliability of that evidence, or at least the perception of its reliability. Advocates of sustainable agriculture may dismiss science as purely subjective, or at the other extreme, may inappropriately elevate scientific findings to…
James Webb Space Telescope - L2 Communications for Science Data Processing
NASA Technical Reports Server (NTRS)
Johns, Alan; Seaton, Bonita; Gal-Edd, Jonathan; Jones, Ronald; Fatig, Curtis; Wasiak, Francis
2008-01-01
JWST is the first NASA mission at the second Lagrange point (L2) to identify the need for data rates higher than 10 megabits per second (Mbps). JWST will produce approximately 235 Gigabits of science data every day that will be downlinked to the Deep Space Network (DSN). Achieving the desired data rates required moving away from X-band frequencies to Ka-band frequencies. To accomplish this transition, the DSN is upgrading its infrastructure. This new range of frequencies is becoming the new standard for high data rate science missions at L2. With the new frequency range, the issues of alternative antenna deployments, off-nominal scenarios, the NASA implementation of the Ka-band 26 GHz allocation, and navigation requirements will be discussed in this paper. JWST is also using the Consultative Committee for Space Data Systems (CCSDS) standard process for reliable file transfer, the CCSDS File Delivery Protocol (CFDP). For JWST, the use of the CFDP protocol provides level zero processing at the DSN site. This paper will address NASA implementations of ground stations in support of Ka-band 26 GHz and lessons learned from implementing a file-based (CFDP) operational system.
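CFDP obtains reliable delivery over a lossy space link by having the receiver report gaps so the sender retransmits only the missing file segments. The toy simulation below illustrates that negative-acknowledgment idea only; the segment size, loss model, and loop structure are invented and do not follow the CCSDS specification:

```python
# Toy NAK-driven retransmission in the spirit of CFDP (not the CCSDS protocol).
import random

SEG = 4                                              # bytes per segment (tiny, for demo)

def transfer(data: bytes, loss: float, rng: random.Random) -> bytes:
    segments = {off: data[off:off + SEG] for off in range(0, len(data), SEG)}
    received: dict[int, bytes] = {}
    missing = set(segments)
    while missing:                                   # each pass = one NAK cycle
        for off in sorted(missing):
            if rng.random() > loss:                  # segment survived the link
                received[off] = segments[off]
        missing = set(segments) - set(received)      # receiver's gap report
    return b"".join(received[off] for off in sorted(received))

rng = random.Random(42)
payload = b"235 gigabits of science data per day, in miniature"
assert transfer(payload, loss=0.3, rng=rng) == payload
```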
Citizen science: A new perspective to evaluate spatial patterns in hydrology.
NASA Astrophysics Data System (ADS)
Koch, J.; Stisen, S.
2016-12-01
Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, a simple visual comparison of spatial patterns is a task where humans are often considered more reliable than computer algorithms. However, in practice, science still largely depends on computer-based solutions, which is inevitable given benefits such as speed and the possibility to automate processes. This study highlights the integration of this generally underused human resource into hydrology. We established a citizen science project on the Zooniverse platform entitled Pattern Perception. The aim is to employ human perception to rate similarity and dissimilarity between simulated spatial patterns of a hydrological catchment model. In total, more than 2,800 users provided over 46,000 classifications of 1,095 individual subjects within 64 days of the launch. Each subject displays simulated spatial patterns of land-surface variables for a baseline model and six modelling scenarios. The citizen science data disclose a numeric pattern similarity score for each of the scenarios with respect to the reference. We investigate the capability of a set of innovative statistical performance metrics to mimic the human perception in distinguishing between similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating human perception, but clearly provide flexibility and auxiliary information that is valuable for model diagnostics. The metrics clearly differ in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric.
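As one concrete instance of the kind of metric comparison described above, the sketch below contrasts a plain cell-wise RMSE with an RMSE computed after local smoothing, which penalizes small spatial shifts less severely. The metric pair and window size are illustrative assumptions, not the metrics evaluated in the study:

```python
# Two toy pattern-similarity scores for gridded model output.
import numpy as np
from scipy.signal import convolve2d

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.mean((a - b) ** 2)))

def smoothed_rmse(a: np.ndarray, b: np.ndarray, win: int = 3) -> float:
    """Compare box-car smoothed fields so a one-cell shift costs less."""
    kernel = np.ones((win, win)) / win ** 2
    return rmse(convolve2d(a, kernel, mode="valid"),
                convolve2d(b, kernel, mode="valid"))

rng = np.random.default_rng(1)
base = rng.random((50, 50))
shifted = np.roll(base, 1, axis=0)                   # same pattern, shifted one cell
print(rmse(base, shifted), smoothed_rmse(base, shifted))
```

On the shifted field, the smoothed score should come out markedly lower than the raw score, mimicking a human judgment that the two patterns are essentially the same.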
Mousavi, Seyed Mohammad Hadi; Dargahi, Hossein; Mohammadi, Sara
2016-10-01
Creating a safe health care system requires the establishment of High Reliability Organizations (HROs), which reduce errors and increase the level of safety in hospitals. This model focuses on improving reliability through better process design, building a culture of accreditation, and leveraging human factors. The present study intends to determine the readiness of the hospitals of Tehran University of Medical Sciences for the establishment of the HROs model, from the viewpoint of the managers of these hospitals. This is a descriptive-analytical study carried out in 2013-2014. The research population consists of 105 senior and middle managers of 15 hospitals of Tehran University of Medical Sciences. The data collection tool was a 55-question researcher-made questionnaire covering six elements of HROs, used to assess the level of readiness for establishing the HROs model from the managers' point of view. The validity of the questionnaire was established through the content validity method using 10 experts in the area of hospital accreditation, and its reliability was established through the test-retest method with a correlation coefficient of 0.90. The response rate was 90 percent. The Likert scale was used for the questions, and data analysis was conducted with SPSS version 21. Descriptive statistics were presented via tables and normal distributions of data and means. Analytical methods, including the t-test, Mann-Whitney, Spearman, and Kruskal-Wallis tests, were used for inferential statistics. The study showed that, from the viewpoint of the senior and middle managers of the hospitals considered in this study, these hospitals are indeed ready for the acceptance and establishment of the HROs model. A significant relationship was shown between the HROs model and its elements and demographic details of managers such as their age, work experience, management experience, and level of management. Although the studied hospitals, as viewed by their managers, are capable of attaining the goals of HROs, many challenges remain. Therefore, it is suggested that a detailed audit of the hospitals' current status regarding the different characteristics of HROs be conducted, that workshops be held for medical and non-medical employees and managers of hospitals as an influencing factor, and that a subsequent re-assessment process be carried out; together these can help move the hospitals from their current position towards an HROs culture.
1990-12-01
The data rate to the electronics would be much lower on average and the data much "richer" in information. Intelligent use of… system bottleneck; a high data rate should be provided by I/O systems. 2. Machines with intelligent storage management specially designed for logic… management information processing, surveillance sensors, intelligence data collection and handling, solid state sciences, electromagnetics and propagation, and electronic reliability/maintainability and compatibility.
Advancing implementation science through measure development and evaluation: a study protocol.
Lewis, Cara C; Weiner, Bryan J; Stanick, Cameo; Fischer, Sarah M
2015-07-22
Significant gaps related to measurement issues are among the most critical barriers to advancing implementation science. Three issues motivated the study aims: (a) the lack of stakeholder involvement in defining pragmatic measure qualities; (b) the dearth of measures, particularly for implementation outcomes; and (c) unknown psychometric and pragmatic strength of existing measures. Aim 1: Establish a stakeholder-driven operationalization of pragmatic measures and develop reliable, valid rating criteria for assessing the construct. Aim 2: Develop reliable, valid, and pragmatic measures of three critical implementation outcomes, acceptability, appropriateness, and feasibility. Aim 3: Identify Consolidated Framework for Implementation Research and Implementation Outcome Framework-linked measures that demonstrate both psychometric and pragmatic strength. For Aim 1, we will conduct (a) interviews with stakeholder panelists (N = 7) and complete a literature review to populate pragmatic measure construct criteria, (b) Q-sort activities (N = 20) to clarify the internal structure of the definition, (c) Delphi activities (N = 20) to achieve consensus on the dimension priorities, (d) test-retest and inter-rater reliability assessments of the emergent rating system, and (e) known-groups validity testing of the top three prioritized pragmatic criteria. For Aim 2, our systematic development process involves domain delineation, item generation, substantive validity assessment, structural validity assessment, reliability assessment, and predictive validity assessment. We will also assess discriminant validity, known-groups validity, structural invariance, sensitivity to change, and other pragmatic features. For Aim 3, we will refine our established evidence-based assessment (EBA) criteria, extract the relevant data from the literature, rate each measure using the EBA criteria, and summarize the data. The study outputs of each aim are expected to have a positive impact as they will establish and guide a comprehensive measurement-focused research agenda for implementation science and provide empirically supported measures, tools, and methods for accomplishing this work.
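Of the assessments named in Aim 1, the two reliability checks are standard computations; a minimal sketch with invented toy ratings follows (the study's actual rating system and analysis plan are more involved):

```python
# Test-retest stability (Pearson r) and inter-rater agreement (Cohen's kappa)
# on invented toy data.
import numpy as np

def test_retest_r(t1: np.ndarray, t2: np.ndarray) -> float:
    return float(np.corrcoef(t1, t2)[0, 1])

def cohens_kappa(r1: np.ndarray, r2: np.ndarray) -> float:
    cats = np.union1d(r1, r2)
    p_obs = float(np.mean(r1 == r2))                 # observed agreement
    p_exp = sum(float(np.mean(r1 == c)) * float(np.mean(r2 == c)) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)             # chance-corrected agreement

t1 = np.array([3.1, 2.8, 4.0, 3.5, 2.2])             # rater's scores, time 1
t2 = np.array([3.0, 2.9, 3.8, 3.6, 2.5])             # same rater, time 2
r1 = np.array([1, 2, 2, 3, 1, 2])                    # rater A categories
r2 = np.array([1, 2, 3, 3, 1, 2])                    # rater B categories
print(test_retest_r(t1, t2), cohens_kappa(r1, r2))
```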
Genomics in the land of regulatory science.
Tong, Weida; Ostroff, Stephen; Blais, Burton; Silva, Primal; Dubuc, Martine; Healy, Marion; Slikker, William
2015-06-01
Genomics science has played a major role in the generation of new knowledge in the basic research arena, and questions currently arise as to its potential to support regulatory processes. However, the integration of genomics in the regulatory decision-making process requires rigorous assessment and would benefit from consensus amongst international partners and research communities. To that end, the Global Coalition for Regulatory Science Research (GCRSR) hosted the fourth Global Summit on Regulatory Science (GSRS2014) to discuss the role of genomics in regulatory decision making, with a specific emphasis on applications in food safety and medical product development. Challenges and issues were discussed in the context of developing an international consensus for objective criteria in the analysis, interpretation and reporting of genomics data, with an emphasis on transparency, traceability and "fitness for purpose" for the intended application. It was recognized that there is a need for a global path in the establishment of a regulatory bioinformatics framework for the development of transparent, reliable, reproducible and auditable processes in the management of food and medical product safety risks. It was also recognized that training is an important mechanism in achieving internationally consistent outcomes. GSRS2014 provided an effective venue for regulators and researchers to meet, discuss common issues, and develop collaborations to address the challenges posed by the application of genomics to regulatory science, with the ultimate goal of wisely integrating novel technical innovations into regulatory decision-making. Published by Elsevier Inc.
ERIC Educational Resources Information Center
Koksal, Mustafa Serdar; Ertekin, Pelin
2016-01-01
The study focuses on the development of an instrument to determine science-specific epistemological beliefs of prospective science teachers. The study involved 364 (male = 82, female = 282) prospective science teachers enrolled in a science teacher education program. The confirmatory factor analysis, reliability analysis and correlation analysis…
Applying learning theories and instructional design models for effective instruction.
Khalil, Mohammed K; Elkhider, Ihsan A
2016-06-01
Faculty members in higher education are involved in many instructional design activities without formal training in learning theories and the science of instruction. Learning theories provide the foundation for the selection of instructional strategies and allow for reliable prediction of their effectiveness. To achieve effective learning outcomes, the science of instruction and instructional design models are used to guide the development of instructional design strategies that elicit appropriate cognitive processes. Here, the major learning theories are discussed and selected examples of instructional design models are explained. The main objective of this article is to present the science of learning and instruction as theoretical evidence for the design and delivery of instructional materials. In addition, this article provides a practical framework for implementing those theories in the classroom and laboratory. Copyright © 2016 The American Physiological Society.
NASA Technical Reports Server (NTRS)
Stone, M. S.; Mcadam, P. L.; Saunders, O. W.
1977-01-01
The results of a 4-month study to design a hybrid analog/digital receiver for outer planet mission probe communication links are presented. The scope of this study includes functional design of the receiver; comparisons between analog and digital processing; hardware tradeoffs for key components, including frequency generators, A/D converters, and digital processors; development and simulation of the processing algorithms for acquisition, tracking, and demodulation; and detailed design of the receiver in order to determine its size, weight, power, reliability, and radiation hardness. In addition, an evaluation was made of the receiver's capabilities to perform accurate measurement of signal strength and frequency for radio science missions.
Yen, Wendy; Hovey, Richard; Hodwitz, Kathryn; Zhang, Su
2011-03-01
The present study explored the relationship between the Multiple Mini-Interview (MMI) admissions process and the Bar-On EQ-i emotional intelligence (EI) instrument in order to investigate the potential for the EQ-i to serve as a proxy measure to the MMI. Participants were 196 health science candidates who completed both the MMI and the EQ-i as part of their admissions procedure at the Michener Institute for Applied Health Sciences. Three types of analyses were conducted to examine the relationship between the two tools: reliability analyses, correlational analyses, and a t-test. The tools were found to be moderately reliable. No significant relationships were found between the MMI and the EQ-i at the total or subscale level. The ability of the EQ-i to discriminate between accepted and not-accepted students was also not supported. These findings do not support the use of the EQ-i as a potential pre-screening tool for the MMI, but rather highlight the need to exercise caution when using emotional intelligence instruments for high-stakes admissions purposes.
Centralized Alert-Processing and Asset Planning for Sensorwebs
NASA Technical Reports Server (NTRS)
Castano, Rebecca; Chien, Steve A.; Rabideau, Gregg R.; Tang, Benyang
2010-01-01
A software program provides a Sensorweb architecture for alert-processing, event detection, asset allocation and planning, and visualization. It automatically tasks and re-tasks various types of assets, such as satellites and robotic vehicles, in response to alerts (fire, weather) extracted from various data sources, including low-level Webcam data. JPL has adapted considerable Sensorweb infrastructure that had been previously applied to NASA Earth Science applications. This NASA Earth Science Sensorweb has been in operational use since 2003, and has proven the reliability of the Sensorweb technologies for robust event detection and autonomous response using space and ground assets. Unique features of the software include flexibility to support a range of detection and tasking methods, including those that require aggregation of data over spatial and temporal ranges; generality of the response structure to represent and implement a range of response campaigns; and the ability to respond rapidly.
Astronomy Teaching Self-Efficacy Belief Scale: The Validity and Reliability Study
ERIC Educational Resources Information Center
Demirci, Filiz; Ozyurek, Cengiz
2018-01-01
The purpose of this study is to develop a valid and reliable scale for determining the self-efficacy levels of science teachers in the teaching of astronomy subjects. The study used a survey approach, a quantitative research method. The study was conducted with a total of 106 science teachers working in the secondary schools of Ordu city…
ERIC Educational Resources Information Center
Casteel, J. Doyle; Stahl, Robert J.
Systematic and reliable feedback are critical elements of microteaching. One system whereby pre-service and in-service teachers may obtain systematic and reliable feedback during microteaching is called the Social Science Observation Record (SSOR). This monograph is intended to meet three purposes: (1) To explain the SSOR as a verbal system for…
Taylor, Kimberly A.; Short, A.
2009-01-01
Integrating science into resource management activities is a goal of the CALFED Bay-Delta Program, a multi-agency effort to address water supply reliability, ecological condition, drinking water quality, and levees in the Sacramento-San Joaquin Delta of northern California. Under CALFED, many different strategies were used to integrate science, including interaction between the research and management communities, public dialogues about scientific work, and peer review. This paper explores ways science was (and was not) integrated into CALFED's management actions and decision systems through three narratives describing different patterns of scientific integration and application in CALFED. Though a collaborative process and certain organizational conditions may be necessary for developing new understandings of the system of interest, we find that those factors are not sufficient for translating that knowledge into management actions and decision systems. We suggest that the application of knowledge may be facilitated or hindered by (1) differences in the objectives, approaches, and cultures of scientists operating in the research community and those operating in the management community and (2) other factors external to the collaborative process and organization.
Translational medicine: science or wishful thinking?
Wehling, Martin
2008-01-01
"Translational medicine" as a fashionable term is being increasingly used to describe the wish of biomedical researchers to ultimately help patients. Despite increased efforts and investments into R&D, the output of novel medicines has been declining dramatically over the past years. Improvement of translation is thought to become a remedy as one of the reasons for this widening gap between input and output is the difficult transition between preclinical ("basic") and clinical stages in the R&D process. Animal experiments, test tube analyses and early human trials do simply not reflect the patient situation well enough to reliably predict efficacy and safety of a novel compound or device. This goal, however, can only be achieved if the translational processes are scientifically backed up by robust methods some of which still need to be developed. This mainly relates to biomarker development and predictivity assessment, biostatistical methods, smart and accelerated early human study designs and decision algorithms among other features. It is therefore claimed that a new science needs to be developed called 'translational science in medicine'. PMID:18559092
Testing the Difference between Reliability Coefficients Alpha and Omega
ERIC Educational Resources Information Center
Deng, Lifang; Chan, Wai
2017-01-01
Reliable measurements are key to social science research. Multiple measures of reliability of the total score have been developed, including coefficient alpha, coefficient omega, the greatest lower bound reliability, and others. Among these, the coefficient alpha has been most widely used, and it is reported in nearly every study involving the…
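For readers comparing the two coefficients, the sketch below computes alpha from the item covariance structure and omega from single-factor loadings, on simulated one-factor data. It illustrates the definitions only, not the authors' procedure for testing the alpha-omega difference:

```python
# Coefficient alpha vs. coefficient omega on simulated one-factor data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
factor = rng.normal(size=(500, 1))                   # common factor scores
loadings_true = rng.uniform(0.5, 0.9, size=(1, 8))
items = factor @ loadings_true + 0.5 * rng.normal(size=(500, 8))

k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                         / items.sum(axis=1).var(ddof=1))

fa = FactorAnalysis(n_components=1).fit(items)
lam = fa.components_.ravel()                         # estimated factor loadings
omega = lam.sum() ** 2 / (lam.sum() ** 2 + fa.noise_variance_.sum())
print(round(alpha, 3), round(omega, 3))              # close, but not identical
```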
NASA Technical Reports Server (NTRS)
Lu, George C.
2003-01-01
The purpose of the EXPRESS (Expedite the PRocessing of Experiments to Space Station) rack project is to provide a set of predefined interfaces for scientific payloads which allow rapid integration into a payload rack on the International Space Station (ISS). VxWorks was selected as the operating system for the rack and payload resource controller, primarily based on the proliferation of VME (Versa Module Eurocard) products. These products provide needed flexibility for future hardware upgrades to meet ever-changing science research rack configuration requirements. On the International Space Station, there are multiple science research rack configurations, including: 1) Human Research Facility (HRF); 2) EXPRESS ARIS (Active Rack Isolation System); 3) WORF (Window Observational Research Facility); and 4) HHR (Habitat Holding Rack). The RIC (Rack Interface Controller) connects payloads to the ISS bus architecture for data transfer between the payload and ground control. The RIC is a general-purpose embedded computer which supports multiple communication protocols, including fiber optic communication buses, Ethernet buses, EIA-422, Mil-Std-1553 buses, SMPTE (Society of Motion Picture and Television Engineers) 170M video, and audio interfaces to payloads and the ISS. As a cost saving and software reliability strategy, the Boeing Payload Software Organization developed reusable common software where appropriate. These reusable modules included a set of low-level driver software interfaces to 1553B, RS232, RS422, Ethernet buses, HRDL (High Rate Data Link), video switch functionality, telemetry processing, and executive software hosted on the RIC computer. These drivers formed the basis for software development of the HRF, EXPRESS, EXPRESS ARIS, WORF, and HHR RIC executable modules. The reusable RIC common software has provided extensive benefits, including: 1) Significant reduction in development flow time; 2) Minimal rework and maintenance; 3) Improved reliability; and 4) Overall reduction in software life cycle cost. Due to the limited number of crew hours available on ISS for science research, operational efficiency is a critical customer concern. The current method of upgrading RIC software is a time-consuming process; thus, an improved methodology for uploading RIC software is currently under evaluation.
Materials Science Research Rack Onboard the International Space Station Hardware and Operations
NASA Technical Reports Server (NTRS)
Lehman, John R.; Frazier, Natalie C.; Johnson, Jimmie
2012-01-01
The Materials Science Research Rack (MSRR) is a research facility developed under a cooperative research agreement between NASA and ESA for materials science investigations on the International Space Station (ISS). MSRR was launched on STS-128 in August 2009, and is currently installed in the U.S. Destiny Laboratory Module. Since that time, MSRR has performed virtually flawlessly, logging more than 620 hours of operating time. The MSRR accommodates advanced investigations in the microgravity environment on the ISS for basic materials science research in areas such as solidification of metals and alloys. The purpose is to advance the scientific understanding of materials processing as affected by microgravity and to gain insight into the physical behavior of materials processing. MSRR allows for the study of a variety of materials, including metals, ceramics, semiconductor crystals, and glasses. Materials science research benefits from the microgravity environment of space, where the researcher can better isolate chemical and thermal properties of materials from the effects of gravity. With this knowledge, reliable predictions can be made about the conditions required on Earth to achieve improved materials. MSRR is a highly automated facility with a modular design capable of supporting multiple types of investigations. Currently the NASA-provided Rack Support Subsystem provides services (power, thermal control, vacuum access, and command and data handling) to the ESA-developed Materials Science Laboratory (MSL), which accommodates interchangeable Furnace Inserts (FIs). Two ESA-developed FIs are presently available on the ISS: the Low Gradient Furnace (LGF) and the Solidification and Quenching Furnace (SQF). Sample Cartridge Assemblies (SCAs), each containing one or more material samples, are installed in the FI by the crew and can be processed at temperatures up to 1400 °C. Once an SCA is installed, the experiment can be run by automatic command, or science can be conducted via telemetry commands from the ground. Initially, 12 SCAs were processed in the first furnace insert for a team of European and US investigators. After these samples were processed, the Furnace Inserts were exchanged and an additional single sample was processed. The processed samples have been returned to Earth for evaluation and comparison of their properties to samples similarly processed on the ground. A preliminary examination of the samples indicates that the majority of the desired science objectives have been successfully met, leading to significant improvements in the understanding of alloy solidification processes. Six SCAs were launched on Space Shuttle Mission STS-135 in July 2011 for processing during the fall of 2011. Additional batches are planned for future processing. This facility is available to support additional materials science investigations through programs such as the US National Laboratory, Technology Development, NASA Research Announcements, and others.
NASA Astrophysics Data System (ADS)
Wang, Tzu-Ling; Berlin, Donna
2010-12-01
The main purpose of this study is to develop a valid and reliable instrument for measuring the attitudes toward science class of fourth- and fifth-grade students in an Asian school culture. Specifically, the development focused on three science attitude constructs: science enjoyment, science confidence, and importance of science as related to science class experiences. A total of 265 elementary school students in Taiwan responded to the instrument developed. Data analysis indicated that the instrument exhibited satisfactory validity and reliability with the Taiwan population used. The Cronbach's alpha coefficient was 0.93 for the entire instrument, indicating a satisfactory level of internal consistency. However, both principal component analysis and parallel analysis showed that the three attitude scales were not unique and should be combined and used as a general "attitudes toward science class" scale. The analysis also showed that there were no gender or grade-level differences in students' overall attitudes toward science class.
NASA Astrophysics Data System (ADS)
Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole
2017-04-01
With shrinking transistor feature size, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support the VLSI digital system flow based on logic synthesis, we have designed an optimized high-performance low-power FinFET standard cell library that employs the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating-environment variations based on Monte Carlo analysis. The variations are modelled with Gaussian distributions of the device parameters, and 10,000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without employing the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard deviation reductions of up to 39.1% and 30.7% in worst-case delay and input-dependent leakage, respectively, while the shrinkage in normalized deviation of worst-case delay and input-dependent leakage can be up to 98.37% and 24.13%, respectively, which demonstrates that our optimized cells are less sensitive to variability and exhibit greater reliability. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and the Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).
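The Monte Carlo procedure described above (Gaussian parameter variation, 10,000 sweeps, statistics of worst-case delay) can be sketched generically. The delay model and the parameter means and sigmas below are invented for illustration; the paper's transistor-level cell simulations are far more detailed:

```python
# Generic Monte Carlo variation analysis with an invented cell-delay model.
import numpy as np

rng = np.random.default_rng(2017)
N = 10_000                                           # sweeps, as in the paper

vth = rng.normal(0.30, 0.30 * 0.05, N)               # threshold voltage [V], assumed
leff = rng.normal(14e-9, 14e-9 * 0.03, N)            # effective length [m], assumed
vdd = rng.normal(0.80, 0.80 * 0.02, N)               # supply voltage [V], assumed

# Toy alpha-power-law delay: grows with Leff, shrinks with overdrive (Vdd - Vth).
delay = leff * vdd / (vdd - vth) ** 1.3

mu, sigma = delay.mean(), delay.std(ddof=1)
print(f"p99 delay (a worst-case proxy): {np.percentile(delay, 99):.3e}")
print(f"normalized deviation sigma/mu: {sigma / mu:.2%}")
```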
Beyond Engagement to Reflection and Understanding: Focusing on the process of science
NASA Astrophysics Data System (ADS)
Scotchmoor, J. G.; Mitchell, B. J.
2011-12-01
We must engage the public and make science more accessible to all...It is important that the scientific community, in its outreach, help people not only to see the fun of science but also to understand what science is, what a scientific theory is, how science is done, that accepted scientific models or theories are based on evidence, that hypotheses are tested by experiment, and that theories change as new evidence emerges. Shirley Ann Jackson, AAAS Presidential Address, 2005 The nature of science is noted as a critical topic for science literacy; however, by all accounts, Americans' understanding of the nature of science is inadequate, and students and teachers at all grade levels have inaccurate understandings of what science is and how it works. Such findings do not bode well for the future of scientific literacy in the United States. In large part, the current confusions about evolution, global warming, stem cell research, and other aspects of science deemed by some as "controversial" are symptomatic of a general misunderstanding of what science is and what it is not. Too few of our citizens view science as a dynamic process through which we gain a reliable understanding of the natural world. As a result, the public becomes vulnerable to misinformation and the very real benefits of science are obscured. New opportunities are emerging for members of the scientific community to share their science with segments of the public - both informally through science cafés and science festivals, and more formally through science competitions and classroom visits. Each of these helps to make science more accessible and provides a critical first step toward connecting the public to the "fun and excitement" of science. Less often these activities focus on how science works - what science is, what it is not, and what is not science - as well as the creativity, curiosity, exploration, dead-ends, and a-ha moments that inspire scientists. This talk will share a teacher professional development project in which graduate students play a critical role not only in engaging elementary teachers in science, but also in making the process explicit, offering the opportunity to reflect, and increasing teacher understanding of how science really works. Teachers had the chance to do science themselves which went a long way toward reducing their fears of science and increasing their confidence to teach science more effectively. At the same time, the graduate students found that working with the teachers made them better realize their own strengths and revitalized their personal excitement about science.
High-end clinical domain information systems for effective healthcare delivery.
Mangalampalli, Ashish; Rama, Chakravarthy; Muthiyalian, Raja; Jain, Ajeet K
2007-01-01
The Electronic Health Record (EHR) provides doctors with a quick, reliable, secure, real-time and user-friendly source of all relevant patient data. The latest information system technologies, such as Clinical Data Warehouses (CDW), Clinical Decision-Support (CDS) systems and data-mining techniques (Online Analytical Processing (OLAP) and Online Transactional Processing (OLTP)), are used to maintain and utilise patient data intelligently, based on the users' requirements. Moreover, clinical trial reports for new drug approvals are now being submitted electronically for faster and easier processing. Also, information systems are used in educating patients about the latest developments in medical science through the internet and specially configured kiosks in hospitals and clinics.
Borotikar, Bhushan; Lempereur, Mathieu; Lelievre, Mathieu; Burdin, Valérie; Ben Salem, Douraied; Brochard, Sylvain
2017-01-01
To report evidence for the concurrent validity and reliability of dynamic MRI techniques to evaluate in vivo joint and muscle mechanics, and to propose recommendations for their use in the assessment of normal and impaired musculoskeletal function. The search was conducted on articles published in Web of Science, PubMed, Scopus, Academic Search Premier, and the Cochrane Library between 1990 and August 2017. Studies that reported the concurrent validity and/or reliability of dynamic MRI techniques for in vivo evaluation of joint or muscle mechanics were included after assessment by two independent reviewers. Selected articles were assessed using an adapted quality assessment tool and a data extraction process. Results for concurrent validity and reliability were categorized as poor, moderate, or excellent. Twenty articles fulfilled the inclusion criteria, with a mean quality assessment score of 66% (±10.4%). Concurrent validity and/or reliability of eight dynamic MRI techniques were reported, with the knee being the most evaluated joint (seven studies). Moderate to excellent concurrent validity and reliability were reported for seven of the eight dynamic MRI techniques. Cine phase contrast and real-time MRI appeared to be the most valid and reliable techniques for evaluating joint motion, and spin tag for muscle motion. Dynamic MRI techniques are promising for the in vivo evaluation of musculoskeletal mechanics; however, results should be evaluated with caution since validity and reliability have not been determined for all joints and muscles, nor for many pathological conditions.
NASA Astrophysics Data System (ADS)
Hill, Sharon A.
21st century television and the Internet are awash in content regarding amateur paranormal investigators and research groups. These groups proliferated after reality investigation programs appeared on television. Exactly how many groups are active in the U.S. at any time is not known. The Internet provides an ideal means for people with niche interests to find each other and organize activities. This study collected information from 1000 websites of amateur research and investigation groups (ARIGs) to determine their location, area of inquiry, methodology and, particularly, to determine if they state that they use science as part of their mission, methods or goals. 57.3% of the ARIGs examined specifically noted or suggested use of science as part of the groups' approach to investigation and research. Even when not explicit, ARIGs often used science-like language, symbols and methods to describe their groups' views or activities. Yet, non-scientific and subjective methods were described as employed in conjunction with objective methods. Furthermore, what were considered scientific processes by ARIGs did not match with established methods and the ethos of the scientific research community or scientific processes of investigation. ARIGs failed to display fundamental understanding regarding objectivity, methodological naturalism, peer review, critical thought and theoretical plausibility. The processes of science appear to be mimicked to present a serious and credible reputation to the non-scientific public. These processes are also actively promoted in the media and directly to the local public as "scientific". These results highlight the gap between the scientific community and the lay public regarding the understanding of what it means to do science and what criteria are necessary to establish reliable knowledge about the world.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1973-01-01
Congressman James Symington was chairman of the Subcommittee on Space Science and Applications of the House Committee on Science and Astronautics. Congressman Mike McCormack was present at the proceedings, and witnesses presented data on the development of long-term fuel supplies, a continued search for more reliable and more efficient designs of plants, and the optimization of the impact on society in the use of energy for such things as food, shelter, clothing, heat, light, health, recreation, travel, and education. Then, general problems of the byproducts of the energy-producing processes are examined, including fly ash, sulfur oxides, nitrogen oxides, warm water, esthetics, strip mining, and radiation. Representatives from the utilities, national laboratories, gas companies, universities, environmental councils, space agencies, and communication companies presented the data covering all aspects of energy research. (MCW)
The international development of forensic science standards - A review.
Wilson-Wilde, Linzi
2018-04-16
Standards establish specifications and procedures designed to ensure products, services and systems are safe, reliable and consistently perform as intended. Standards can be used in the accreditation of forensic laboratories or facilities and in the certification of products and services. In recent years there have been various international activities aiming at developing forensic science standards and guidelines. The most significant initiative currently underway within the global forensic community is the development of International Organization for Standardization (ISO) standards. This paper reviews the main bodies working on standards for forensic science, the processes used and the implications for accreditation. This paper specifically discusses the work of ISO Technical Committee TC272, the future TC272 work program for the development of forensic science standards and associated timelines. Also discussed, are the lessons learnt to date in navigating the complex environment of multi-country stakeholder deliberations in standards development. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mulkey, Lynn M.
The intention of this research was to measure attitudes of young children toward women scientists. A 27-item instrument, the Early Childhood Women in Science Scale (ECWiSS) was validated in a test case of the proposition that differential socialization predicts entry into the scientific talent pool. Estimates of internal consistency indicated that the scale is highly reliable. Known groups and correlates procedures, employed to determine the validity of the instrument, revealed that the scale is able to discriminate significant differences between groups and distinguishes three dimensions of attitude (role-specific self-concept, home-related sex-role conflict, and work-related sex-role conflict). Results of the analyses also confirmed the anticipated pattern of correlations with measures of another construct. The findings suggest the utility of the ECWiSS for measurement of early childhood attitudes in models of the ascriptive and/or meritocratic processes affecting recruitment to science and more generally in program and curriculum evaluation where attitude toward women in science is the construct of interest.
NASA Technical Reports Server (NTRS)
White, Mark
2012-01-01
The recently launched Mars Science Laboratory (MSL) flagship mission, named Curiosity, is the most complex rover ever built by NASA and is scheduled to touch down on the red planet in August 2012 in Gale Crater. The rover and its instruments will have to endure the harsh environments of the surface of Mars to fulfill its main science objectives. Such complex systems require reliable microelectronic components coupled with adequate component- and system-level design margins. Reliability aspects of these elements of the spacecraft system are presented from bottom-up and top-down perspectives.
ERIC Educational Resources Information Center
Wang, Tzu-Ling; Berlin, Donna
2010-01-01
The main purpose of this study is to develop a valid and reliable instrument for measuring the "attitudes toward science class" of fourth- and fifth-grade students in an Asian school culture. Specifically, the development focused on three science attitude constructs--science enjoyment, science confidence, and importance of science as…
Medicine is not science: guessing the future, predicting the past.
Miller, Clifford
2014-12-01
Irregularity limits human ability to know, understand and predict. A better understanding of irregularity may improve the reliability of knowledge. Irregularity and its consequences for knowledge are considered. Reliable predictive empirical knowledge of the physical world has always been obtained by observation of regularities, without needing science or theory. Prediction from observational knowledge can remain reliable despite some theories based on it proving false. A naïve theory of irregularity is outlined. Reducing irregularity and/or increasing regularity can increase the reliability of knowledge. Beyond long experience and specialization, improvements include implementing supporting knowledge systems of libraries of appropriately classified prior cases and clinical histories, and education about expertise, intuition and professional judgement. A consequence of irregularity and complexity is that classical reductionist science cannot provide reliable predictions of the behaviour of complex systems found in nature, including the human body. Expertise, expert judgement and their exercise appear overarching. Diagnosis involves predicting that the past will recur in the current patient, applying expertise and intuition drawn from knowledge and experience of previous cases and probabilistic medical theory. Treatment decisions are an educated guess about the future (prognosis). Benefits of the improvements suggested here are likely in fields where paucity of feedback for practitioners limits development of reliable expert diagnostic intuition. Further analysis, definition and classification of irregularity is appropriate. Observing and recording irregularities are initial steps in developing irregularity theory to improve the reliability and extent of knowledge, albeit some forms of irregularity present inherent difficulties. © 2014 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Eshach, Haim
2014-06-01
This article describes the development and field test of the Sound Concept Inventory Instrument (SCII), designed to measure middle school students' concepts of sound. The instrument was designed based on known student difficulties in understanding sound and on the history of science related to sound, and focuses on two main aspects of sound: sound has material properties, and sound has process properties. The final SCII consists of 71 statements that respondents rate as either true or false, also indicating their confidence on a five-point scale. Administration to 355 middle school students resulted in a Cronbach alpha of 0.906, suggesting high reliability. In addition, the average percentage of students' answers to statements that associate sound with material properties is significantly higher than the average percentage for statements associating sound with process properties (p < 0.001). The SCII is a valid and reliable tool that can be used to determine students' conceptions of sound.
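As a pointer for readers reproducing such reliability figures: Cronbach's alpha is computed from the item variances and the variance of total scores over an (examinees x items) matrix. Below is a minimal sketch in Python; the data are randomly generated stand-ins shaped like the SCII administration (355 respondents, 71 true/false items), not the study's actual responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative stand-in: 355 respondents x 71 true/false items (0/1),
# mirroring the SCII administration described above.
rng = np.random.default_rng(0)
ability = rng.normal(size=(355, 1))                      # latent trait
responses = (rng.normal(size=(355, 71)) + ability > 0).astype(float)
print(f"alpha = {cronbach_alpha(responses):.3f}")
```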
The spaces in between: science, ocean, empire.
Reidy, Michael S; Rozwadowski, Helen M
2014-06-01
Historians of science have richly documented the interconnections between science and empire in the nineteenth century. These studies primarily begin with Britain, Europe, or the United States at the center and have focused almost entirely on lands far off in the periphery--India or Australia, for instance. The spaces in between have received scant attention. Because use of the ocean in this period was infused with the doctrine of the freedom of the seas, the ocean was constructed as a space amenable to control by any nation that could master its surface and use its resources effectively. Oceans transformed in the mid-nineteenth century from highway to destination, becoming--among other things--the focus of sustained scientific interest for the first time in history. Use of the sea rested on reliable knowledge of the ocean. Particularly significant were the graphical representations of knowledge that could be passed from scientists to publishers to captains or other agents of empire. This process also motivated early government patronage of science and crystallized scientists' rising authority in society. The advance of science, the creation of empire, and the construction of the ocean were mutually sustaining.
Bitzer, Sonja; Albertini, Nicola; Lock, Eric; Ribaux, Olivier; Delémont, Olivier
2015-12-01
In an attempt to grasp the effectiveness of forensic science in the criminal justice process, a number of studies have introduced some form of performance indicator. However, most of these indicators suffer from different weaknesses, from the definition of forensic science itself to problems of reliability and validity. We suggest the introduction of the concept of utility of the clue as an internal evaluation indicator of forensic science in the investigation. Utility of the clue is defined as the added value of information gained by the use of traces. This concept could be used to assess the contribution of the trace in the context of the case. By extension, a second application of this concept is suggested. By formalising and considering, a priori, the perceived utility of using traces, we introduce the notion of expected utility, which could be used as a decision factor when choosing which traces to use, once they have been collected at the crime scene or from an object in the laboratory. In a case-based approach, utility can be assessed in the light of the available information to evaluate the investigative contribution of forensic science. In the decision-making process, the projection or estimation of the utility of the clue is proposed as a factor to take into account when triaging the set of traces. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Navarro, Marianela; Förster, Carla; González, Caterina; González-Pose, Paulina
2016-06-01
Understanding attitudes toward science and measuring them remain two major challenges for science teaching. This article reviews the concept of attitudes toward science and their measurement. It then analyzes the psychometric properties of the Test of Science-Related Attitudes (TOSRA), such as its construct validity, its discriminant and concurrent validity, and its reliability. The evidence presented suggests that TOSRA, in its Spanish-adapted version, has adequate construct validity with regard to its theoretical referents, as well as good indexes of reliability. In addition, it determines the attitudes toward science of secondary school students in Santiago de Chile (n = 664) and analyzes the sex variable as a differentiating factor in such attitudes. The analysis by sex revealed gender differences of low relevance. The results are contrasted with those obtained in English-speaking countries. This TOSRA sample showed good psychometric parameters for measuring and evaluating attitudes toward science, which can be used in classrooms of Spanish-speaking countries or with immigrant populations with limited English proficiency.
Role of Network Science in the Study of Anesthetic State Transitions.
Lee, UnCheol; Mashour, George A
2018-04-23
The heterogeneity of molecular mechanisms, target neural circuits, and neurophysiologic effects of general anesthetics makes it difficult to develop a reliable and drug-invariant index of general anesthesia. No single brain region or mechanism has been identified as the neural correlate of consciousness, suggesting that consciousness might emerge through complex interactions of spatially and temporally distributed brain functions. The goal of this review article is to introduce the basic concepts of networks and explain why the application of network science to general anesthesia could be a pathway to discover a fundamental mechanism of anesthetic-induced unconsciousness. This article reviews data suggesting that reduced network efficiency, constrained network repertoires, and changes in cortical dynamics create inhospitable conditions for information processing and transfer, which lead to unconsciousness. This review proposes that network science is not just a useful tool but a necessary theoretical framework and method to uncover common principles of anesthetic-induced unconsciousness.
McAllister, Sue; Lincoln, Michelle; Ferguson, Allison; McAllister, Lindy
2013-01-01
Valid assessment of health science students' ability to perform in the real world of workplace practice is critical for promoting quality learning and ultimately certifying students as fit to enter the world of professional practice. Current practice in performance assessment in the health sciences field has been hampered by multiple issues regarding assessment content and process. Evidence for the validity of scores derived from assessment tools is usually evaluated against traditional validity categories, with reliability evidence privileged over validity, resulting in the paradoxical effect of compromising the assessment validity and the learning processes the assessments seek to promote. Furthermore, the dominant statistical approaches used to validate scores from these assessments fall under the umbrella of classical test theory. This paper reports on the successful national development and validation of measures derived from an assessment of Australian speech pathology students' performance in the workplace. Validation of these measures considered each of Messick's interrelated validity evidence categories and included using evidence generated through Rasch analyses to support score interpretation and related action. This research demonstrated that it is possible to develop an assessment of real, complex, work-based performance of speech pathology students that generates valid measures without compromising the learning processes the assessment seeks to promote. The process described provides a model for other health professional education programs to trial.
Laboratory Directed Research and Development FY2010 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, K J
2011-03-22
A premier applied-science laboratory, Lawrence Livermore National Laboratory (LLNL) has at its core a primary national security mission - to ensure the safety, security, and reliability of the nation's nuclear weapons stockpile without nuclear testing, and to prevent and counter the spread and use of weapons of mass destruction: nuclear, chemical, and biological. The Laboratory uses the scientific and engineering expertise and facilities developed for its primary mission to pursue advanced technologies to meet other important national security needs - homeland defense, military operations, and missile defense, for example - that evolve in response to emerging threats. For broader national needs, LLNL executes programs in energy security, climate change and long-term energy needs, environmental assessment and management, bioscience and technology to improve human health, and breakthroughs in fundamental science and technology. With this multidisciplinary expertise, the Laboratory serves as a science and technology resource to the U.S. government and as a partner with industry and academia. This annual report discusses the following topics: (1) Advanced Sensors and Instrumentation; (2) Biological Sciences; (3) Chemistry; (4) Earth and Space Sciences; (5) Energy Supply and Use; (6) Engineering and Manufacturing Processes; (7) Materials Science and Technology; (8) Mathematics and Computing Science; (9) Nuclear Science and Engineering; and (10) Physics.
Green Liquid Monopropellant Thruster
NASA Technical Reports Server (NTRS)
Joshi, Prakash B.
2015-01-01
Physical Sciences, Inc. (PSI), and Orbital Technologies Corporation (ORBITEC) are developing a unique chemical propulsion system for next-generation NASA science spacecraft and missions. The system is compact, lightweight, and can operate with high reliability over extended periods of time and under a wide range of thermal environments. The system uses a new storable, low-toxicity liquid monopropellant as its working fluid. In Phase I, the team demonstrated experimentally the critical ignition and combustion processes for the propellant and used the data to develop thruster design concepts. In Phase II, the team developed and demonstrated in the laboratory a proof-of-concept prototype thruster. A Phase III project is envisioned to develop a full-scale protoflight propulsion system applicable to a class of NASA missions.
Neutron Tomography at the Los Alamos Neutron Science Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, William Riley
Neutron imaging is an incredibly powerful tool for non-destructive sample characterization and materials science. Neutron tomography is one technique that results in a three-dimensional model of the sample, representing the interaction of the neutrons with the sample. This relies both on reliable data acquisition and on image processing after acquisition. Over the course of the project, the focus has changed from the former to the latter, culminating in a large-scale reconstruction of a meter-long fossilized skull. The full reconstruction is not yet complete, though tools have been developed to improve the speed and accuracy of the reconstruction. This project helps to improve the capabilities of LANSCE and LANL with regards to imaging large or unwieldy objects.
Publication ethics from the perspective of PhD students of health sciences: a limited experience.
Arda, Berna
2012-06-01
Publication ethics, an important subtopic of science ethics, deals with the determination of misconduct in performing research or in the dissemination of ideas, data and products. Science, the main features of which are secure, reliable and ethically obtained data, plays a major role in shaping society. As long as science maintains its quality by being based on reliable and ethically obtained data, it will be possible for it to maintain this role. This article presents the opinions of PhD candidate students in the health sciences in Ankara concerning publication ethics. The data obtained from 143 PhD students from the fields of medicine, dentistry, pharmacy and veterinary medicine reveal limited but unique experiences. They also show that plagiarism is one of the worst issues in publication ethics from the perspective of these young academics.
Reliable Characterization for Pyrolysis Bio-Oils Leads to Enhanced Upgrading Methods
NREL Science and Technology Highlights: Highlights in Research & Development
Fundamentals and applications of electrochemistry
NASA Astrophysics Data System (ADS)
McEvoy, A. J.
2013-06-01
The Voltaic pile, invented here on Lake Como 200 years ago, was a crucial step in the development of electrical engineering. For the first time a controlled and reliable source of electric current was available. The science of electrochemistry developed rapidly and is now a key contributor, not just to energy technology but also, for example, to metallurgy and industrial processes. The basic concepts of electrochemistry are presented, with the practical examples of its application in fuel cells, and with the perspective of the history of the subject.
European Workshop on Industrial Computer Systems approach to design for safety
NASA Technical Reports Server (NTRS)
Zalewski, Janusz
1992-01-01
This paper presents guidelines on designing systems for safety, developed by the Technical Committee 7 on Reliability and Safety of the European Workshop on Industrial Computer Systems. The focus is on complementing the traditional development process by adding the following four steps: (1) overall safety analysis; (2) analysis of the functional specifications; (3) designing for safety; (4) validation of design. Quantitative assessment of safety is possible by means of a modular questionnaire covering various aspects of the major stages of system development.
Personal identification by eyes.
Marinović, Dunja; Njirić, Sanja; Coklo, Miran; Muzić, Vedrana
2011-09-01
Identification of persons through the eyes falls within the field of biometric science. Many security systems are based on biometric methods of personal identification, to determine whether a person is who they claim to be. The human eye contains an extremely large number of individual characteristics that make it particularly suitable for the process of identifying a person. Today, the eye is considered to be one of the most reliable body parts for human identification. Systems using iris recognition are among the most secure biometric systems.
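For context on how iris-based systems reach that reliability in practice: a common scheme (for example, Daugman-style iris codes) reduces the iris texture to a binary code and declares a match when the fractional Hamming distance between two codes falls below a threshold. The sketch below is illustrative only; the code length, noise level, and threshold are assumptions, not parameters from any deployed system.

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray,
                     mask: np.ndarray) -> float:
    """Fraction of usable bits that differ between two binary iris codes.

    `mask` marks bits unobscured by eyelids/reflections in both images.
    """
    usable = mask.astype(bool)
    return np.count_nonzero(code_a[usable] != code_b[usable]) / usable.sum()

rng = np.random.default_rng(1)
enrolled = rng.integers(0, 2, size=2048)          # stored binary iris code
probe = enrolled.copy()
flips = rng.random(2048) < 0.05                   # 5% bit noise on re-capture
probe[flips] ^= 1
mask = rng.random(2048) < 0.9                     # 90% of bits usable

THRESHOLD = 0.32                                  # illustrative decision point
hd = hamming_distance(enrolled, probe, mask)
print(f"HD = {hd:.3f} -> {'match' if hd < THRESHOLD else 'non-match'}")
```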
Report: Studies Addressing EPA’s Organizational Structure
Report #2006-P-00029, August 16, 2006. The 13 studies, articles, publications, and reports we reviewed identified issues with cross-media management, regional offices, reliable information, and reliable science.
Isoelectric points and points of zero charge of metal (hydr)oxides: 50 years after Parks' review.
Kosmulski, Marek
2016-12-01
The pH-dependent surface charging of metal (hydr)oxides is reviewed on the occasion of the 50th anniversary of the publication by G.A. Parks: "Isoelectric points of solid oxides, solid hydroxides, and aqueous hydroxo complex systems" in Chemical Reviews. The point of zero charge (PZC) and isoelectric point (IEP) became standard parameters for characterizing metal oxides in aqueous dispersions, and they define the adsorption (surface excess) of ions, stability against coagulation, rheological properties of dispersions, etc. They are commonly used in many branches of science including mineral processing, soil science, materials science, geochemistry, environmental engineering, and corrosion science. Parks established the standard procedures and experimental conditions required to obtain reliable and reproducible values of PZC and IEP. The field is very active, with the number of related papers exceeding 300 a year, and the standards established by Parks remain valid. Relevant experimental techniques have improved over the years; in particular, measurements of electrophoretic mobility have become easier and more reliable, and the numerical values of PZC and IEP compiled by Parks have been confirmed by contemporary publications, with a few exceptions. The present paper is an up-to-date compilation of the values of PZC and IEP of metal oxides. Unlike former reviews by the same author, which were more comprehensive, only a limited number of selected results are presented and discussed here. In addition to results obtained by means of classical methods (titration and electrokinetics), new methods and correlations found over the recent 50 years are presented. Copyright © 2016 Elsevier B.V. All rights reserved.
Can I Get a Second Opinion? - Translating Hazard Understanding to Disaster Response
NASA Astrophysics Data System (ADS)
Green, D. S.; Stough, T.; Murray, J. J.
2015-12-01
Policy makers, operational response agencies and scientists are aware that when addressing hazard events, decisions must be made in a timely manner with limited environmental information or less than 100% certainty. This presentation will examine how lessons captured from disaster events are mainstreaming the use of global earth observation data and derived products of sufficient reliability and timeliness to provide situational awareness. Determining what is good enough for disaster response is a challenge, especially where the requirements for earth system research and experimentation are not the same as those for application science and operations. In areas of timeliness, access to data, and processing of information into knowledge, the economic and policy objectives are not always aligned between research and application. Even when both are addressing substantive science questions and critical data are available, creating scientifically informed guidance, forecasts and assessments may take considerable effort to be made accessible and understandable, and even longer to reflect consensus or consistency. Conveying the degree of scientific certainty and accountability that triggers a threshold for action is always a challenge at the interface of hazard characterization and disaster response. Often decisions and interpretation must be reached when staring down a hazard or potential disaster situation, which makes automation a potential solution. Yet human opinions remain important: social, cultural, and behavioral context suggests that observational information, maps, models and other derived information are only acted upon when provided by multiple trusted and reliable sources. This presentation will discuss examples drawn from NASA's research and partnership portfolio in disaster application science and explore strategic approaches to strengthen disaster risk reduction and resilience.
Defining Success in Open Science
Ali-Khan, Sarah E.; Jean, Antoine; MacDonald, Emily; Gold, E. Richard
2018-01-01
Mounting evidence indicates that, worldwide, innovation systems are increasingly unsustainable. Equally, concerns about inequities in the science and innovation process, and in access to its benefits, continue. Against a backdrop of growing health, economic and scientific challenges, global stakeholders are urgently seeking to spur innovation and maximize the just distribution of benefits for all. Open Science collaboration (OS) – comprising a variety of approaches to increase open, public, and rapid mobilization of scientific knowledge – is seen to be one of the most promising ways forward. Yet, many decision-makers hesitate to construct policy to support the adoption and implementation of OS without access to substantive, clear and reliable evidence. In October 2017, international thought-leaders gathered at an Open Science Leadership Forum in the Washington DC offices of the Bill and Melinda Gates Foundation to share their views on what successful Open Science looks like. Delegates from developed and developing nations, national governments, science agencies and funding bodies, philanthropy, researchers, patient organizations and the biotechnology, pharma and artificial intelligence (AI) industries discussed the outcomes that would rally them to invest in OS, as well as wider issues of policy and implementation. This first of two reports summarizes delegates' views on what they believe OS will deliver in terms of research, innovation and social impact in the life sciences. Through an open and collaborative process over the coming months, we will translate these success outcomes into a toolkit of quantitative and qualitative indicators to assess when, where and how open science collaborations best advance research, innovation and social benefit. Ultimately, this work aims to develop and openly share tools to allow stakeholders to evaluate and re-invent their innovation ecosystems, to maximize value for the global public and patients, and to address long-standing questions about the mechanics of innovation. PMID:29553146
ERIC Educational Resources Information Center
van Aalderen-Smeets, Sandra; Walma van der Molen, Juliette
2013-01-01
In this article, we present a valid and reliable instrument which measures the attitude of in-service and pre-service primary teachers toward teaching science, called the Dimensions of Attitude Toward Science (DAS) Instrument. Attention to the attitudes of primary teachers toward teaching science is of fundamental importance to the…
Translated Versions of Voice Handicap Index (VHI)-30 across Languages: A Systematic Review
SEIFPANAHI, Sadegh; JALAIE, Shohreh; NIKOO, Mohammad Reza; SOBHANI-RAD, Davood
2015-01-01
Background: The aim of this systematic review is to investigate different VHI-30 versions across languages with regard to their validity, reliability and translation process. Methods: Articles were extracted systematically from several prime databases, including Cochrane, Google Scholar, MEDLINE (via the PubMed gateway), ScienceDirect, and Web of Science, and from their reference lists, using the keyword "Voice Handicap Index" with only a title restriction and a publication window of 1997 to 2014. Other exclusions (e.g. non-English articles, other VHI versions, and so on) were applied manually after studying the papers. Three authors appraised the methodology of the papers using the 12-item diagnostic test checklist from the Critical Appraisal Skills Programme (CASP) site. After all screenings were applied, the papers that met the study eligibility criteria, namely reporting translation, validity, and reliability processes, were included in this review. Results: Twelve non-duplicate articles, covering different languages, remained. All of them reported validity, reliability and translation method, which are presented in detail in this review. Conclusion: The preferred translation method in the gathered papers was Brislin's classic back-translation model (1970); although the procedure was rarely performed in full, it was more prominent than other translation procedures. High test-retest reliability, high internal consistency and moderate construct validity between different languages with regard to all three VHI-30 domains confirm the applicability of translated VHI-30 versions across languages. PMID:26056664
Evolving Scale-Free Networks by Poisson Process: Modeling and Degree Distribution.
Feng, Minyu; Qu, Hong; Yi, Zhang; Xie, Xiurui; Kurths, Jurgen
2016-05-01
Since the great mathematician Leonhard Euler initiated the study of graph theory, the network has been one of the most significant research subjects across disciplines. In recent years, the proposition of the small-world and scale-free properties of complex networks in statistical physics made network science intriguing again for many researchers. One of the challenges of network science is to propose rational models for complex networks. In this paper, in order to reveal the influence of the vertex-generating mechanism of complex networks, we propose three novel models based on the homogeneous Poisson, nonhomogeneous Poisson and birth-death processes, respectively, which can be regarded as typical scale-free networks and utilized to simulate practical networks. The degree distribution and exponent are analyzed and explained mathematically by different approaches. In the simulation, we display the modeling process, the degree distribution of empirical data obtained by statistical methods, and the reliability of the proposed networks; the results show that our models follow the features of typical complex networks.
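To make the vertex-generating mechanism concrete, here is a minimal simulation in the spirit of the paper's homogeneous Poisson model: vertices arrive as Poisson events and attach preferentially to existing vertices, after which the empirical degree distribution can be inspected for a heavy tail. The rates, sizes, and attachment rule below are illustrative assumptions, not the authors' exact models.

```python
import random
from collections import Counter

def grow_network(n_vertices: int, rate: float = 1.0, m: int = 2,
                 seed: int = 0) -> dict[int, int]:
    """Grow a network whose vertices arrive via a homogeneous Poisson
    process and attach to up to m existing vertices with probability
    proportional to degree (preferential attachment). Returns degrees."""
    rng = random.Random(seed)
    t = 0.0
    degrees = {0: m, 1: m}          # small seed pair to start from
    attach_pool = [0, 1] * m        # vertex i appears degree(i) times
    for v in range(2, n_vertices):
        t += rng.expovariate(rate)  # Poisson inter-arrival time (only
                                    # orders arrivals in this sketch)
        targets = {rng.choice(attach_pool) for _ in range(m)}
        degrees[v] = len(targets)
        for u in targets:
            degrees[u] += 1
            attach_pool.append(u)
        attach_pool.extend([v] * len(targets))
    return degrees

degs = grow_network(20_000)
hist = Counter(degs.values())
for k in sorted(hist)[:10]:
    print(k, hist[k])              # heavy tail: P(k) ~ k^-gamma
```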
Lempereur, Mathieu; Lelievre, Mathieu; Burdin, Valérie; Ben Salem, Douraied; Brochard, Sylvain
2017-01-01
Purpose: To report evidence for the concurrent validity and reliability of dynamic MRI techniques to evaluate in vivo joint and muscle mechanics, and to propose recommendations for their use in the assessment of normal and impaired musculoskeletal function. Materials and methods: The search was conducted on articles published in Web of Science, PubMed, Scopus, Academic Search Premier, and Cochrane Library between 1990 and August 2017. Studies that reported the concurrent validity and/or reliability of dynamic MRI techniques for in vivo evaluation of joint or muscle mechanics were included after assessment by two independent reviewers. Selected articles were assessed using an adapted quality assessment tool and a data extraction process. Results for concurrent validity and reliability were categorized as poor, moderate, or excellent. Results: Twenty articles fulfilled the inclusion criteria, with a mean quality assessment score of 66% (±10.4%). Concurrent validity and/or reliability of eight dynamic MRI techniques were reported, with the knee being the most evaluated joint (seven studies). Moderate to excellent concurrent validity and reliability were reported for seven out of eight dynamic MRI techniques. Cine phase contrast and real-time MRI appeared to be the most valid and reliable techniques to evaluate joint motion, and spin tag for muscle motion. Conclusion: Dynamic MRI techniques are promising for the in vivo evaluation of musculoskeletal mechanics; however, results should be evaluated with caution since validity and reliability have not been determined for all joints and muscles, nor for many pathological conditions. PMID:29232401
NASA Astrophysics Data System (ADS)
Mintz, Jessica A.
The goal of this study was to investigate New York State's Annual Professional Performance Review (APPR) from the perspectives of secondary science teachers and their administrators. Examining their perceptions through interviews was insightful due to the subjects' proximity to the teaching and learning processes. Five science teacher/administrator pairs from selected school districts were interviewed; all had varied ranges of experience and content certifications. The study also investigated the unintended consequences the teachers and administrators experienced under the APPR system. This phenomenological research study lays the groundwork for making policy recommendations for science teacher evaluations. The goal was to examine teacher and administrator perceptions, the clarity and practicality of teacher evaluation reforms, and how motivational theory might incentivize teacher change through future reform efforts. Provisional coding was used in this study based upon prior research. The list of codes was generated using motivational theories applied to the design of teacher evaluation policy and reform implementation constructs. Although the science teachers agreed with the importance of being evaluated, they generally viewed aspects of the process of quantifying their effectiveness as unclear, unfair, and flawed. The science teachers indicated that student variations in ability and performance were not considered when APPR was established. The science teachers recommended that the focus of teacher evaluations should be on content-specific professional development. They proposed the establishment of peer review systems, teacher collaboration networks, and self-reflection documentation as means to improve their science teaching practices. The administrators agreed that accountability was important; however, holding individual teachers accountable for student outcomes was not reliably measured through the APPR process. They recommended other forms of evaluative measures that would focus on professional development instead of an evaluative effectiveness score. Their recommendations involved creating more time for science administrators to be teacher leaders rather than evaluators. The administrators proposed three main recommendations: 1) decreasing the number of formal observations and replacing them with frequent informal classroom visits; 2) peer-peer observations utilizing instructional rounds; and 3) educator involvement in the creation of improved science teacher evaluation, with implicit trust in the administrators to exert local control.
Managing Reliability in the 21st Century
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dellin, T.A.
1998-11-23
The rapid pace of change at the end of the 20th Century should continue unabated well into the 21st Century. The driver will be the marketplace imperative of "faster, better, cheaper." This imperative has already stimulated a revolution-in-engineering in design and manufacturing. In contrast, to date, reliability engineering has not undergone a similar level of change. It is critical that we implement a corresponding revolution-in-reliability-engineering as we enter the new millennium. If we are still using 20th Century reliability approaches in the 21st Century, then reliability issues will be the limiting factor in faster, better, and cheaper. At the heart of this reliability revolution will be a science-based approach to reliability engineering. Science-based reliability will enable building-in reliability, application-specific products, virtual qualification, and predictive maintenance. The purpose of this paper is to stimulate a dialogue on the future of reliability engineering. We will try to gaze into the crystal ball and predict some key issues that will drive reliability programs in the new millennium. In the 21st Century, we will demand more of our reliability programs. We will need the ability to make accurate reliability predictions that will enable optimizing cost, performance and time-to-market to meet the needs of every market segment. We will require that all of these new capabilities be in place prior to the start of a product development cycle. The management of reliability programs will be driven by quantifiable metrics of value added to the organization's business objectives.
NASA Astrophysics Data System (ADS)
Smartt, S. J.; Valenti, S.; Fraser, M.; Inserra, C.; Young, D. R.; Sullivan, M.; Pastorello, A.; Benetti, S.; Gal-Yam, A.; Knapic, C.; Molinaro, M.; Smareglia, R.; Smith, K. W.; Taubenberger, S.; Yaron, O.; Anderson, J. P.; Ashall, C.; Balland, C.; Baltay, C.; Barbarino, C.; Bauer, F. E.; Baumont, S.; Bersier, D.; Blagorodnova, N.; Bongard, S.; Botticella, M. T.; Bufano, F.; Bulla, M.; Cappellaro, E.; Campbell, H.; Cellier-Holzem, F.; Chen, T.-W.; Childress, M. J.; Clocchiatti, A.; Contreras, C.; Dall'Ora, M.; Danziger, J.; de Jaeger, T.; De Cia, A.; Della Valle, M.; Dennefeld, M.; Elias-Rosa, N.; Elman, N.; Feindt, U.; Fleury, M.; Gall, E.; Gonzalez-Gaitan, S.; Galbany, L.; Morales Garoffolo, A.; Greggio, L.; Guillou, L. L.; Hachinger, S.; Hadjiyska, E.; Hage, P. E.; Hillebrandt, W.; Hodgkin, S.; Hsiao, E. Y.; James, P. A.; Jerkstrand, A.; Kangas, T.; Kankare, E.; Kotak, R.; Kromer, M.; Kuncarayakti, H.; Leloudas, G.; Lundqvist, P.; Lyman, J. D.; Hook, I. M.; Maguire, K.; Manulis, I.; Margheim, S. J.; Mattila, S.; Maund, J. R.; Mazzali, P. A.; McCrum, M.; McKinnon, R.; Moreno-Raya, M. E.; Nicholl, M.; Nugent, P.; Pain, R.; Pignata, G.; Phillips, M. M.; Polshaw, J.; Pumo, M. L.; Rabinowitz, D.; Reilly, E.; Romero-Cañizales, C.; Scalzo, R.; Schmidt, B.; Schulze, S.; Sim, S.; Sollerman, J.; Taddia, F.; Tartaglia, L.; Terreran, G.; Tomasella, L.; Turatto, M.; Walker, E.; Walton, N. A.; Wyrzykowski, L.; Yuan, F.; Zampieri, L.
2015-07-01
Context. The Public European Southern Observatory Spectroscopic Survey of Transient Objects (PESSTO) began as a public spectroscopic survey in April 2012. PESSTO classifies transients from publicly available sources and wide-field surveys, and selects science targets for detailed spectroscopic and photometric follow-up. PESSTO runs for nine months of the year, January - April and August - December inclusive, and typically has allocations of 10 nights per month. Aims: We describe the data reduction strategy and data products that are publicly available through the ESO archive as the Spectroscopic Survey data release 1 (SSDR1). Methods: PESSTO uses the New Technology Telescope with the instruments EFOSC2 and SOFI to provide optical and NIR spectroscopy and imaging. We target supernovae and optical transients brighter than 20.5m for classification. Science targets are selected for follow-up based on the PESSTO science goal of extending knowledge of the extremes of the supernova population. We use standard EFOSC2 set-ups providing spectra with resolutions of 13-18 Å between 3345-9995 Å. A subset of the brighter science targets are selected for SOFI spectroscopy with the blue and red grisms (0.935-2.53 μm and resolutions 23-33 Å) and imaging with broadband JHKs filters. Results: This first data release (SSDR1) contains flux calibrated spectra from the first year (April 2012-2013). A total of 221 confirmed supernovae were classified, and we released calibrated optical spectra and classifications publicly within 24 h of the data being taken (via WISeREP). The data in SSDR1 replace those released spectra. They have more reliable and quantifiable flux calibrations, correction for telluric absorption, and are made available in standard ESO Phase 3 formats. We estimate the absolute accuracy of the flux calibrations for EFOSC2 across the whole survey in SSDR1 to be typically ~15%, although a number of spectra will have less reliable absolute flux calibration because of weather and slit losses. Acquisition images for each spectrum are available which, in principle, can allow the user to refine the absolute flux calibration. The standard NIR reduction process does not produce high accuracy absolute spectrophotometry but synthetic photometry with accompanying JHKs imaging can improve this. Whenever possible, reduced SOFI images are provided to allow this. Conclusions: Future data releases will focus on improving the automated flux calibration of the data products. The rapid turnaround between discovery and classification and access to reliable pipeline processed data products has allowed early science papers in the first few months of the survey. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, as part of programme 188.D-3003 (PESSTO). http://www.pessto.org
Proteomics in food: Quality, safety, microbes, and allergens.
Piras, Cristian; Roncada, Paola; Rodrigues, Pedro M; Bonizzi, Luigi; Soggiu, Alessio
2016-03-01
Food safety and quality and their associated risks pose a major concern worldwide, regarding not only the relative economic losses but also the potential danger to consumers' health. Customers' confidence in the integrity of the food supply could be hampered by inappropriate food safety measures. A lack of measures and reliable assays to evaluate and maintain good control of food characteristics may affect the food industry economy and shatter consumer confidence. It is imperative to create and establish fast and reliable analytical methods that allow a good and rapid analysis of food products throughout the whole food chain. Proteomics can represent a powerful tool to address this issue, due to its proven excellent quantitative and qualitative capabilities in protein analysis. This review illustrates the applications of proteomics in the past few years in food science, focusing on food of animal origin with some brief hints on other types. The aim of this review is to highlight the importance of this science as a valuable tool to assess food quality and safety. Emphasis is also placed on food processing, allergies, and possible contaminants like bacteria, fungi, and other pathogens. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Public Reception of Climate Science: Coherence, Reliability, and Independence.
Hahn, Ulrike; Harris, Adam J L; Corner, Adam
2016-01-01
Possible measures to mitigate climate change require global collective actions whose impacts will be felt by many, if not all. Implementing such actions requires successful communication of the reasons for them, and hence the underlying climate science, to a degree that far exceeds typical scientific issues which do not require large-scale societal response. Empirical studies have identified factors, such as the perceived level of consensus in scientific opinion and the perceived reliability of scientists, that can limit people's trust in science communicators and their subsequent acceptance of climate change claims. Little consideration has been given, however, to recent formal results within philosophy concerning the relationship between truth, the reliability of evidence sources, the coherence of multiple pieces of evidence/testimonies, and the impact of (non-)independence between sources of evidence. This study draws on these results to evaluate exactly what has (and, more important, has not yet) been established in the empirical literature about the factors that bias the public's reception of scientific communications about climate change. Copyright © 2015 Cognitive Science Society, Inc.
A synthetic design environment for ship design
NASA Technical Reports Server (NTRS)
Chipman, Richard R.
1995-01-01
Rapid advances in computer science and information system technology have made possible the creation of synthetic design environments (SDE) which use virtual prototypes to increase the efficiency and agility of the design process. This next generation of computer-based design tools will rely heavily on simulation and advanced visualization techniques to enable integrated product and process teams to concurrently conceptualize, design, and test a product and its fabrication processes. This paper summarizes a successful demonstration of the feasibility of using a simulation-based design environment in the shipbuilding industry. As computer science and information science technologies have evolved, there have been many attempts to apply and integrate the new capabilities into systems for the improvement of the design process. We see the benefits of those efforts in the abundance of highly reliable, technologically complex products and services in the modern marketplace. Furthermore, the computer-based technologies have been so cost effective that the improvements embodied in modern products have been accompanied by lowered costs. Today the state-of-the-art in computerized design has advanced so dramatically that the focus is no longer on merely improving design methodology; rather, the goal is to revolutionize the entire process by which complex products are conceived, designed, fabricated, tested, deployed, operated, maintained, refurbished and eventually decommissioned. By concurrently addressing all life-cycle issues, the basic decision-making process within an enterprise will be improved dramatically, leading to new levels of quality, innovation, efficiency, and customer responsiveness. By integrating functions and people within an enterprise, such systems will change the fundamental way American industries are organized, creating companies that are more competitive, creative, and productive.
Fisher, Jason C; Godfried, David H; Lighter-Fisher, Jennifer; Pratko, Joseph; Sheldon, Mary Ellen; Diago, Thelma; Kuenzler, Keith A; Tomita, Sandra S; Ginsburg, Howard B
2016-06-01
Quality improvement (QI) bundles have been widely adopted to reduce surgical site infections (SSI). Improvement science suggests when organizations achieve high-reliability to QI processes, outcomes dramatically improve. However, measuring QI process compliance is poorly supported by electronic health record (EHR) systems. We developed a custom EHR tool to facilitate capture of process data for SSI prevention with the aim of increasing bundle compliance and reducing adverse events. Ten SSI prevention bundle processes were linked to EHR data elements that were then aggregated into a snapshot display superimposed on weekly case-log reports. The data aggregation and user interface facilitated efficient review of all SSI bundle elements, providing an exact bundle compliance rate without random sampling or chart review. Nine months after implementation of our custom EHR tool, we observed centerline shifts in median SSI bundle compliance (46% to 72%). Additionally, as predicted by high reliability principles, we began to see a trend toward improvement in SSI rates (1.68 to 0.87 per 100 operations), but a discrete centerline shift was not detected. Simple informatics solutions can facilitate extraction of QI process data from the EHR without relying on adjunctive systems. Analyses of these data may drive reductions in adverse events. Pediatric surgical departments should consider leveraging the EHR to enhance bundle compliance as they implement QI strategies. Copyright © 2016 Elsevier Inc. All rights reserved.
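A sketch of the aggregation idea behind such a tool: each case carries one boolean flag per bundle element extracted from EHR data, and the compliance rate is the share of cases in which every element was met (all-or-none). The element names, data structures, and all-or-none rule below are assumptions for illustration, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

BUNDLE_ELEMENTS = [  # illustrative subset of the ten SSI-prevention processes
    "antibiotic_timing", "skin_prep", "normothermia", "glucose_control",
]

@dataclass
class Case:
    case_id: str
    elements: dict[str, bool] = field(default_factory=dict)

    @property
    def fully_compliant(self) -> bool:
        # all-or-none: every bundle element must be documented as met
        return all(self.elements.get(e, False) for e in BUNDLE_ELEMENTS)

def weekly_compliance(cases: list[Case]) -> float:
    """Exact bundle compliance rate over a case log (no sampling)."""
    return sum(c.fully_compliant for c in cases) / len(cases)

cases = [
    Case("A1", {e: True for e in BUNDLE_ELEMENTS}),
    Case("A2", {"antibiotic_timing": True, "skin_prep": False,
                "normothermia": True, "glucose_control": True}),
]
print(f"compliance = {weekly_compliance(cases):.0%}")
```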
NASA Astrophysics Data System (ADS)
Campbell, Todd; Abd-Hamid, Nor Hashidah
2013-08-01
This study describes the development of an instrument to investigate the extent to which technology is integrated in science instruction in ways aligned to science reform outlined in standards documents. The instrument was developed by: (a) creating items consistent with the five dimensions identified in the science education literature, (b) establishing content validity with both national and international content experts, (c) refining the item pool based on content expert feedback, (d) pilot testing of the instrument, (e) checking statistical reliability and item analysis, and (f) subsequent refinement and finalization of the instrument. The TUSI was administered in a field test across eleven classrooms by three observers, with a total of 33 TUSI ratings completed. The finalized instrument was found to have acceptable inter-rater intraclass correlation reliability estimates. After the final stage of development, the TUSI instrument consisted of 26 items separated into the original five categories, which aligned with the exploratory factor analysis clustering of the items. Additionally, concurrent validity of the TUSI was established with the Reformed Teaching Observation Protocol. Finally, a subsequent set of 17 different classrooms was observed during the spring of 2011, and for the 9 classrooms where technology integration was observed, an overall Cronbach alpha reliability coefficient of 0.913 was found. Based on the analyses completed, the TUSI appears to be a useful instrument for measuring how technology is integrated into science classrooms and is seen as one mechanism for measuring the intersection of technological, pedagogical, and content knowledge in science classrooms.
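For readers unfamiliar with the inter-rater statistic reported above, a minimal one-way intraclass correlation, ICC(1,1), over a (targets x raters) ratings matrix looks as follows. The 11-classroom, 3-observer shape mirrors the field test, but the scores are synthetic placeholders and the study's exact ICC variant is not specified here.

```python
import numpy as np

def icc_1_1(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for an (n_targets x k_raters) matrix."""
    n, k = ratings.shape
    target_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    msb = k * ((target_means - grand_mean) ** 2).sum() / (n - 1)  # between targets
    msw = ((ratings - target_means[:, None]) ** 2).sum() / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

# Illustrative stand-in: 11 classrooms rated by 3 observers, as in the
# TUSI field test (scores here are synthetic, not the study's data).
rng = np.random.default_rng(2)
true_quality = rng.uniform(1, 5, size=(11, 1))
ratings = true_quality + rng.normal(scale=0.3, size=(11, 3))
print(f"ICC(1,1) = {icc_1_1(ratings):.2f}")
```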
Safari Science: Assessing the reliability of citizen science data for wildlife surveys
Steger, Cara; Butt, Bilal; Hooten, Mevin B.
2017-01-01
Protected areas are the cornerstone of global conservation, yet financial support for basic monitoring infrastructure is lacking in 60% of them. Citizen science holds potential to address these shortcomings in wildlife monitoring, particularly for resource-limited conservation initiatives in developing countries – if we can account for the reliability of data produced by volunteer citizen scientists (VCS). This study tests the reliability of VCS data vs. data produced by trained ecologists, presenting a hierarchical framework for integrating diverse datasets to assess extra variability from VCS data. Our results show that while VCS data are likely to be overdispersed for our system, the overdispersion varies widely by species. We contend that citizen science methods, within the context of East African drylands, may be more appropriate for species with large body sizes, which are relatively rare, or those that form small herds. VCS perceptions of the charisma of a species may also influence their enthusiasm for recording it. Tailored programme design (such as incentives for VCS) may mitigate the biases in citizen science data and improve overall participation. However, the cost of designing and implementing high-quality citizen science programmes may be prohibitive for the small protected areas that would most benefit from these approaches. Synthesis and applications. As citizen science methods continue to gain momentum, it is critical that managers remain cautious in their implementation of these programmes while working to ensure methods match data purpose. Context-specific tests of citizen science data quality can improve programme implementation, and separate data models should be used when volunteer citizen scientists' variability differs from trained ecologists' data. Partnerships across protected areas and between protected areas and other conservation institutions could help to cover the costs of citizen science programme design and implementation.
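One simple diagnostic underlying the overdispersion finding: for repeated survey counts, a variance-to-mean ratio well above 1 signals extra variability relative to a Poisson model. The toy comparison below illustrates only that check; it is not the paper's hierarchical model, and the counts are synthetic.

```python
import numpy as np

def dispersion_index(counts: np.ndarray) -> float:
    """Variance-to-mean ratio; ~1 for Poisson, >1 indicates overdispersion."""
    return counts.var(ddof=1) / counts.mean()

# Synthetic repeated transect counts for one species (illustrative only).
rng = np.random.default_rng(3)
ecologist = rng.poisson(lam=12, size=40)                    # near-Poisson
vcs = rng.negative_binomial(n=3, p=3 / (3 + 12), size=40)   # same mean, fatter tail

print(f"ecologist D = {dispersion_index(ecologist):.2f}")   # close to 1
print(f"VCS       D = {dispersion_index(vcs):.2f}")         # well above 1
```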
Accuracy of Press Reports in Astronomy
NASA Astrophysics Data System (ADS)
Schaefer, B. E.; Hurley, K.; Nemiroff, R. J.; Branch, D.; Perlmutter, S.; Schaefer, M. W.; Consolmagno, G. J.; McSween, H.; Strom, R.
1999-12-01
Most Americans learn about modern science from press reports, while such articles have a bad reputation among scientists. We have performed a study of 403 news articles on three topics (gamma-ray astronomy, supernovae, and Mars) to quantitatively answer the questions 'How accurate are press reports of astronomy?' and 'What fraction of the basic science claims in the press are correct?' We have taken all articles on the topics from five news sources (UPI, NYT, S&T, SN, and 5 newspapers) for one decade (1987-1996). All articles were evaluated for a variety of errors, ranging from the fundamental to the trivial. For 'trivial' errors, S&T and SN were virtually perfect while the various newspapers averaged roughly one trivial error every two articles. For meaningful errors, we found that none of our 403 articles significantly mislead the reader or misrepresented the science. So a major result of our study is that reporters should be rehabilitated into the good graces of astronomers, since they are actually doing a good job. For our second question, we rated each story with the probability that its basic new science claim is correct. We found that the average probability over all stories is 70%, regardless of source, topic, importance, or quoted pundit. How do we reconcile our findings that the press does not make significant errors yet the basic science presented is 30% wrong? The reason is that the nature of news reporting is to present front-line science and the nature of front-line science is that reliable conclusions have not yet been reached. So a second major result of our study is to make the distinction between textbook science (with reliability near 100%) and front-line science which you read in the press (with reliability near 70%).
Developing and Validating a Science Notebook Rubric for Fifth-Grade Non-Mainstream Students
NASA Astrophysics Data System (ADS)
Huerta, Margarita; Lara-Alecio, Rafael; Tong, Fuhui; Irby, Beverly J.
2014-07-01
We present the development and validation of a science notebook rubric intended to measure the academic language and conceptual understanding of non-mainstream students, specifically fifth-grade male and female economically disadvantaged Hispanic English language learner (ELL) and African-American or Hispanic native English-speaking students. The science notebook rubric is based on two main constructs: academic language and conceptual understanding. The constructs are grounded in second-language acquisition theory and theories of writing and conceptual understanding. We established content validity and calculated reliability measures using G theory and percent agreement (for comparison) with a sample of approximately 144 unique science notebook entries and 432 data points. Results reveal sufficient reliability estimates, indicating that the instrument is promising for use in future research studies including science notebooks in classrooms with populations of economically disadvantaged Hispanic ELL and African-American or Hispanic native English-speaking students.
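Of the reliability estimates mentioned, percent agreement is the simplest to compute: the share of entries on which two raters assign the same rubric score. A minimal sketch with synthetic scores follows; the 0-4 scale and the exact-match rule are assumptions for illustration.

```python
def percent_agreement(rater_a: list[int], rater_b: list[int]) -> float:
    """Share of notebook entries on which two raters give the same score."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Synthetic rubric scores (e.g., a 0-4 scale) for a handful of entries.
rater_a = [3, 2, 4, 1, 3, 2, 0, 4]
rater_b = [3, 2, 3, 1, 3, 2, 1, 4]
print(f"agreement = {percent_agreement(rater_a, rater_b):.0%}")  # 75%
```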
NASA'S Earth Science Data Stewardship Activities
NASA Technical Reports Server (NTRS)
Lowe, Dawn R.; Murphy, Kevin J.; Ramapriyan, Hampapuram
2015-01-01
NASA has been collecting Earth observation data for over 50 years using instruments on board satellites, aircraft and ground-based systems. With the inception of the Earth Observing System (EOS) Program in 1990, NASA established the Earth Science Data and Information System (ESDIS) Project and initiated development of the Earth Observing System Data and Information System (EOSDIS). A set of Distributed Active Archive Centers (DAACs) was established at locations based on science discipline expertise. Today, EOSDIS consists of 12 DAACs and 12 Science Investigator-led Processing Systems (SIPS), processing data from the EOS missions, as well as the Suomi National Polar-orbiting Partnership mission and other satellite and airborne missions. The DAACs archive and distribute the vast majority of data from NASA's Earth science missions, with data holdings exceeding 12 petabytes. The data held by EOSDIS are available to all users consistent with NASA's free and open data policy, which has been in effect since 1990. The EOSDIS archives consist of raw instrument data counts (level 0 data), as well as higher-level standard products (e.g., geophysical parameters, products mapped to standard spatio-temporal grids, results of Earth system models using multi-instrument observations, and long time series of Earth System Data Records resulting from multiple satellite observations of a given type of phenomenon). EOSDIS data stewardship responsibilities include ensuring that the data and information content are reliable, of high quality, easily accessible, and usable for as long as they are considered to be of value.
Representation and re-presentation in litigation science.
Jasanoff, Sheila
2008-01-01
Federal appellate courts have devised several criteria to help judges distinguish between reliable and unreliable scientific evidence. The best known are the U.S. Supreme Court's criteria offered in 1993 in Daubert v. Merrell Dow Pharmaceuticals, Inc. This article focuses on another criterion, offered by the Ninth Circuit Court of Appeals, that instructs judges to assign lower credibility to "litigation science" than to science generated before litigation. In this article I argue that the criterion-based approach to judicial screening of scientific evidence is deeply flawed. That approach buys into the faulty premise that there are external criteria, lying outside the legal process, by which judges can distinguish between good and bad science. It erroneously assumes that judges can ascertain the appropriate criteria and objectively apply them to challenged evidence before litigation unfolds, and before methodological disputes are sorted out during that process. Judicial screening does not take into account the dynamics of litigation itself, including gaming by the parties and framing by judges, as constitutive factors in the production and representation of knowledge. What is admitted through judicial screening, in other words, is not precisely what a jury would see anyway. Courts are sites of repeated re-representations of scientific knowledge. In sum, the screening approach fails to take account of the wealth of existing scholarship on the production and validation of scientific facts. An unreflective application of that approach thus puts courts at risk of relying upon a "junk science" of the nature of scientific knowledge.
Robotic Assembly of Truss Structures for Space Systems and Future Research Plans
NASA Technical Reports Server (NTRS)
Doggett, William
2002-01-01
Many initiatives under study by both the space science and earth science communities require large space systems, i.e. with apertures greater than 15 m or dimensions greater than 20 m. This paper reviews the effort in NASA Langley Research Center's Automated Structural Assembly Laboratory, which laid the foundations for robotic construction of these systems. In the Automated Structural Assembly Laboratory, reliable autonomous assembly and disassembly of an 8-meter planar structure composed of 102 truss elements covered by 12 panels was demonstrated. The paper reviews the hardware and software design philosophy which led to reliable operation during weeks of near-continuous testing. Special attention is given to highlighting the features enhancing assembly reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birkmire, R.W.; Phillips, J.E.; Shafarman, W.N.
2000-08-25
This report describes results achieved during phase 1 of a three-phase subcontract to develop and understand thin-film solar cell technology associated with CuInSe2 and related alloys, a-Si and its alloys, and CdTe. Modules based on all these thin films are promising candidates to meet DOE long-range efficiency, reliability, and manufacturing cost goals. The critical issues being addressed under this program are intended to provide the science and engineering basis for the development of viable commercial processes and to improve module performance. The generic research issues addressed are: (1) quantitative analysis of processing steps to provide information for efficient commercial-scale equipment design and operation; (2) device characterization relating the device performance to materials properties and process conditions; (3) development of alloy materials with different bandgaps to allow improved device structures for stability and compatibility with module design; (4) development of improved window/heterojunction layers and contacts to improve device performance and reliability; and (5) evaluation of cell stability with respect to illumination, temperature, and ambient and with respect to device structure and module encapsulation.
FPGA Sequencer for Radar Altimeter Applications
NASA Technical Reports Server (NTRS)
Berkun, Andrew C.; Pollard, Brian D.; Chen, Curtis W.
2011-01-01
A sequencer for a radar altimeter provides accurate altitude information for a reliable soft landing of the Mars Science Laboratory (MSL). This is a field-programmable-gate-array (FPGA)-only implementation. A table loaded externally into the FPGA controls timing, processing, and decision structures. The radar is memory-less and does not use previous acquisitions to assist in the current acquisition. All cycles complete in exactly 50 milliseconds, regardless of range or whether a target was found. A RAM (random access memory) within the FPGA holds instructions for up to 15 sets. For each set, timing is run, echoes are processed, and a comparison is made. If a target is seen, more detailed processing is run on that set. If no target is seen, the next set is tried. When all sets have been run, the FPGA terminates and waits for the next 50-millisecond event. This setup simplifies testing and improves reliability. A single Virtex chip does the work of an entire assembly. Output products require minor processing to become range and velocity. This technology is the heart of the Terminal Descent Sensor, which is an integral part of the Entry, Descent, and Landing system for MSL. In addition, it is a strong candidate for manned landings on Mars or the Moon.
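The table-driven control flow described in this record can be modeled in ordinary software, even though the real implementation is an FPGA state machine rather than code. A hedged Python sketch of the documented set-iteration logic (function and variable names are illustrative, not from the design):

    MAX_SETS = 15  # the FPGA's internal RAM holds instructions for up to 15 sets

    def run_cycle(sets, acquire, detect, detailed_processing):
        # One 50-millisecond cycle: memory-less, nothing is carried over
        # from previous acquisitions.
        for params in sets[:MAX_SETS]:
            echoes = acquire(params)        # run timing, process echoes
            if detect(echoes, params):      # comparison: was a target seen?
                return detailed_processing(echoes, params)
        return None  # no target in any set; wait for the next 50-ms event

The fixed cycle time regardless of outcome is what the abstract credits for simpler testing and higher reliability.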
Boekeloo, Bradley; Randolph, Suzanne; Timmons-Brown, Stephanie; Wang, Min Qi
2014-08-01
Measures are needed to assess youth perceptions about health science careers to facilitate research aimed at increasing youth pursuit of health science. Although the Indiana Instrument provides an established measure of perceptions regarding nursing and ideal careers, we were interested in learning how high-achieving 10th graders from relatively low socioeconomic areas who identify as black/African American (black) perceive health science and ideal careers. The Indiana Instrument was modified, administered to 90 youth of interest, and psychometrically analyzed. Reliable subscales were identified that may facilitate parsimonious, theoretical, and reliable study of youth decision-making regarding health science careers. Such research may help to develop and evaluate strategies for increasing the number of minority health scientists.
Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R
2016-11-13
The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.
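The failure mode the authors describe - models that "fail in circumstances beyond the range of the data used to train them" - is easy to reproduce in miniature. An illustrative Python example (not from the article): a high-degree polynomial fitted to samples of a bounded process matches the training interval closely but diverges outside it.

    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 2 * np.pi, 30)
    y_train = np.sin(x_train) + 0.05 * rng.standard_normal(30)

    coeffs = np.polyfit(x_train, y_train, deg=9)  # fits the interval well
    print(np.polyval(coeffs, np.array([7.0, 8.0, 9.0])))
    # values far outside [-1, 1], although sin() never leaves that range

A structural model (here, knowing the process is sinusoidal) would extrapolate correctly from far less data, which is the article's point about theory-guided modelling.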
Final Report for DOE Award ER25756
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kesselman, Carl
2014-11-17
The SciDAC-funded Center for Enabling Distributed Petascale Science (CEDPS) was established to address technical challenges that arise due to the frequent geographic distribution of data producers (in particular, supercomputers and scientific instruments) and data consumers (people and computers) within the DOE laboratory system. Its goal is to produce technical innovations that meet DOE end-user needs for (a) rapid and dependable placement of large quantities of data within a distributed high-performance environment, and (b) the convenient construction of scalable science services that provide for the reliable and high-performance processing of computation and data analysis requests from many remote clients. The Center is also addressing (c) the important problem of troubleshooting these and other related ultra-high-performance distributed activities from the perspective of both performance and functionality.
Shared issues of wavefield inversion and illustrations in 3-D diffusive electromagnetics
NASA Astrophysics Data System (ADS)
Lesselier, Dominique; Lambert, Marc; Perrusson, Gaële
2005-07-01
Electromagnetic non-destructive evaluation of complex objects means that one has to decipher data which result from their interaction with imposed sources. This task is crucial in civil, environmental and medical engineering, to quote obvious fields, as well as for the safety and reliability of industrial processes of various kinds in key energy and transportation sectors, for example. This short contribution does not attempt to review the huge variety of themes and the many applications of the science of inversion, but aims at emphasizing a number of points that seem common enough to this science to be worth reviewing. Two illustrations and a few main references thought to be of particular interest among the ever-increasing literature are given. To cite this article: D. Lesselier et al., C. R. Physique 6 (2005).
A Note on "Accuracy" and "Precision"
ERIC Educational Resources Information Center
Stallings, William M.; Gillmore, Gerald M.
1971-01-01
Advocates the use of "precision" rather than "accuracy" in defining reliability. These terms are consistently differentiated in certain sciences. Review of psychological and measurement literature reveals, however, interchangeable usage of the terms in defining reliability. (Author/GS)
Chiari, Brasília M; Goulart, Bárbara N G
2009-09-01
Studies providing stronger scientific evidence related to speech, language and hearing pathology (SLP) have an impact on the prevention and rehabilitation of human communication disorders and have gained ground in the SLP research agenda. In this paper we discuss some aspects and directions that should be considered for in-depth knowledge about speech, language and hearing needs in different population groups (age group, gender and other variables according to specific related disorders) for improved comprehensive care, successful efforts and effective use of financial and human resources. We also discuss the decision-making process for requesting complementary evaluations and tests, from routine to highly complex ones, which should be based on each test's and/or procedure's contribution to the diagnosis and therapeutic planning. In fact, it is crucial to have reliable parameters for planning, preventing and treating human communication and its related disorders. Epidemiology, biostatistics and social sciences can contribute more specific information to human communication sciences and guide more specific studies on the international science and technology agenda, improving the involvement of communication sciences in the international health-related scientific scenario.
Characterizing College Science Assessments: The Three-Dimensional Learning Assessment Protocol.
Laverty, James T; Underwood, Sonia M; Matz, Rebecca L; Posey, Lynmarie A; Carmel, Justin H; Caballero, Marcos D; Fata-Hartley, Cori L; Ebert-May, Diane; Jardeleza, Sarah E; Cooper, Melanie M
2016-01-01
Many calls to improve science education in college and university settings have focused on improving instructor pedagogy. Meanwhile, science education at the K-12 level is undergoing significant changes as a result of the emphasis on scientific and engineering practices, crosscutting concepts, and disciplinary core ideas. This framework of "three-dimensional learning" is based on the literature about how people learn science and how we can help students put their knowledge to use. Recently, similar changes are underway in higher education by incorporating three-dimensional learning into college science courses. As these transformations move forward, it will become important to assess three-dimensional learning both to align assessments with the learning environment, and to assess the extent of the transformations. In this paper we introduce the Three-Dimensional Learning Assessment Protocol (3D-LAP), which is designed to characterize and support the development of assessment tasks in biology, chemistry, and physics that align with transformation efforts. We describe the development process used by our interdisciplinary team, discuss the validity and reliability of the protocol, and provide evidence that the protocol can distinguish between assessments that have the potential to elicit evidence of three-dimensional learning and those that do not.
Performance Assessments in Science: Hands-On Tasks and Scoring Guides.
ERIC Educational Resources Information Center
Stecher, Brian M.; Klein, Stephen P.
In 1992, RAND received a grant from the National Science Foundation to study the technical quality of performance assessments in science and to evaluate their feasibility for use in large-scale testing programs. The specific goals of the project were to assess the reliability and validity of hands-on science testing and to investigate the cost and…
A Multidisciplinary Assessment of Faculty Accuracy and Reliability with Bloom's Taxonomy
ERIC Educational Resources Information Center
Welch, Adam C.; Karpen, Samuel C.; Cross, L. Brian; LeBlanc, Brandie N.
2017-01-01
The aims of this study were to determine faculty's ability to accurately and reliably categorize exam questions using Bloom's Taxonomy, and if modified versions would improve the accuracy and reliability. Faculty experience and affiliation with a health sciences discipline were also considered. Faculty at one university were asked to categorize 30…
Reliability Generalization: An Examination of the Positive Affect and Negative Affect Schedule
ERIC Educational Resources Information Center
Leue, Anja; Lange, Sebastian
2011-01-01
The assessment of positive affect (PA) and negative affect (NA) by means of the Positive Affect and Negative Affect Schedule has received remarkable popularity in the social sciences. Using a meta-analytic tool--namely, reliability generalization (RG)--population reliability scores of both scales have been investigated on the basis of a random…
U.S. Geological Survey Virginia and West Virginia Water Science Center
Jastram, John D.
2017-08-22
The U.S. Geological Survey (USGS) serves the Nation by providing reliable scientific information to describe and understand the Earth; minimize loss of life and property from natural disasters; manage water, biological, energy, and mineral resources; and enhance and protect our quality of life. In support of this mission, the USGS Virginia and West Virginia Water Science Center works in cooperation with many entities to provide reliable, impartial scientific information to resource managers, planners, and the public.
Measuring Science Instructional Practice: A Survey Tool for the Age of NGSS
NASA Astrophysics Data System (ADS)
Hayes, Kathryn N.; Lee, Christine S.; DiStefano, Rachelle; O'Connor, Dawn; Seitz, Jeffery C.
2016-03-01
Ambitious efforts are taking place to implement a new vision for science education in the United States, in both Next Generation Science Standards (NGSS)-adopted states and those states creating their own, often related, standards. In-service and pre-service teacher educators are involved in supporting teacher shifts in practice toward the new standards. With these efforts, it will be important to document shifts in science instruction toward the goals of NGSS and broader science education reform. Survey instruments are often used to capture instructional practices; however, existing surveys primarily measure inquiry based on previous definitions and standards and, with a few exceptions, disregard key instructional practices considered outside the scope of inquiry. A comprehensive survey and a clearly defined set of items do not exist. Moreover, items specific to the NGSS Science and Engineering practices have not yet been tested. To address this need, we developed and validated a Science Instructional Practices survey instrument that is appropriate for NGSS and other related science standards. Survey construction was based on a literature review establishing key areas of science instruction, followed by a systematic process for identifying and creating items. Instrument validity and reliability were then tested through a procedure that included cognitive interviews, expert review, exploratory and confirmatory factor analysis (using independent samples), and analysis of criterion validity. Based on these analyses, final subscales include: Instigating an Investigation, Data Collection and Analysis, Critique, Explanation and Argumentation, Modeling, Traditional Instruction, Prior Knowledge, Science Communication, and Discourse.
Arabiat, Diana; Elliott, Barbara; Draper, Peter; Al Jabery, Mohammad
2011-12-01
A range of scales is available to measure health-related quality of life. Recently, established quality of life scales have been translated for use in a wide range of Western and non-Western cultures. One of the most widely used health-related quality of life scales for use with children is the PedsQL™ 4.0. In this paper, we describe the process of translating this scale into Arabic and establishing its reliability and validity. This paper has three aims: first, to explain the process of translating the PedsQL™ (4.0) self- and proxy-reports for ages 8-12 and 13-18 from English into Arabic; second, to assess the reliability of the new Arabic version of the scale; and third, to assess its validity. The scale was translated from English to Arabic and back-translated to ensure accuracy. The Arabic version was administered to healthy children and those with cancer and a range of chronic illnesses in Jordan. Statistical methods were used to test the psychometric properties (reliability and validity) of the Arabic version of the PedsQL™ (4.0) and its ability to discriminate between children in the above groups. Cronbach's alpha coefficients for child self- and parent proxy-reports exceeded 0.7 for the total scores, health summary scores and psychological health summary scores. Testing for discriminant validity showed that the healthy (control) group had a higher health-related quality of life than children and young people with cancer and chronic illness. The children with chronic illnesses had the lowest scores for physical, emotional and school functioning. Initial testing of the Arabic version of the PedsQL™ (4.0) suggests that the scale has satisfactory psychometric properties. © 2011 The Authors. Scandinavian Journal of Caring Sciences © 2011 Nordic College of Caring Science.
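Cronbach's alpha, the internal-consistency statistic reported above (and in several other records in this collection), has the standard form alpha = k/(k-1) * (1 - sum of item variances / variance of the total score) for k items. A minimal Python sketch with made-up data:

    import numpy as np

    def cronbach_alpha(scores):
        # scores: 2-D array, rows = respondents, columns = items
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    data = [[3, 4, 3, 4], [2, 2, 3, 2], [5, 5, 4, 5],
            [1, 2, 1, 2], [4, 4, 4, 3]]  # 5 respondents, 4 Likert items
    print(round(cronbach_alpha(data), 3))

Values above the conventional 0.7 threshold, as reported for the Arabic PedsQL™ summary scores, indicate that the items covary strongly enough to be summed into a scale.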
Haruna, Hussein; Tshuma, Ndumiso; Hu, Xiao
Understanding health information needs and health-seeking behavior is a prerequisite for developing an electronic health information literacy (EHIL) or eHealth literacy program for nondegree health sciences students. At present, interest in researching health information needs and reliable-sources paradigms has gained momentum in many countries. However, most studies focus on health professionals and students in higher education institutions. The present study was aimed at providing new insight and filling the existing gap by examining health information needs and the reliability of sources among nondegree health sciences students in Tanzania. A cross-sectional study was conducted in 15 conveniently selected health training institutions, in which 403 health sciences students participated. Thirty health sciences students were both purposely and conveniently chosen from each health training institution. The selected students were pursuing nursing and midwifery, clinical medicine, dentistry, environmental health sciences, pharmacy, and medical laboratory sciences courses. The students involved were in their first, second, or third year of study. Health sciences students' health information needs focus on their educational requirements, clinical practice, and personal information. They use print, human, and electronic sources of health information. They lack eHealth research skills for navigating health information resources and face insufficient facilities for accessing eHealth information, a lack of specialists in health information, high costs for subscription-based electronic information, and unawareness of the availability of free Internet access and other online health-related databases. This study found that nondegree health sciences students have limited EHIL skills. Thus, designing and incorporating EHIL skills programs into the curriculum of nondegree health sciences students is vital. EHIL is a requirement common to all health settings, learning environments, and levels of study. Our future intention is to design EHIL to support nondegree health sciences students in retrieving and using available health information resources on the Internet. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.
Accuracy of press reports on gamma-ray astronomy
NASA Astrophysics Data System (ADS)
Schaefer, Bradley E.; Nemiroff, Robert J.; Hurley, Kevin
2000-09-01
Most Americans learn about modern science from press reports, while such articles have a bad reputation among scientists. We have performed a study of 148 news articles on gamma-ray astronomy to quantitatively answer the questions "How accurate are press reports of gamma-ray astronomy?" and "What fraction of the basic claims in the press are correct?" We have taken all articles on the topic from five news sources (UPI, New York Times, Sky & Telescope, Science News, and five middle-sized city newspapers) for one decade (1987-1996). We found an average rate of roughly one trivial error every two articles, while none of our 148 articles significantly misled the reader or misrepresented the science. This quantitative result is in stark contrast to the nearly universal opinion among scientists that the press frequently butchers science stories. So a major result from our study is that reporters should be rehabilitated into the good graces of astrophysicists, since they actually are doing a good job. For our second question, we rated each story with the probability that its basic new science claim is correct. We found that the average probability over all stories is 70%. Since the reporters and the scientists are both doing good jobs, why is 30% of the science you read in the press wrong? The reason is that the nature of news reporting is to present front-line science, and the nature of front-line science is that reliable conclusions have not yet been reached. The combination of these two natures forces fast-breaking science news to have frequent incorrect ideas that are subsequently identified and corrected. So a second major result from our study is to make the distinction between textbook science (with reliabilities near 100%) and front-line science, which you read about in the press (with reliabilities near 70%).
Kobayashi, Hideyuki; Takemura, Yukie; Kanda, Katsuya
2011-09-01
Nursing is a labour-intensive field, and an extensive amount of latent information exists to aid in evaluating the quality of nursing service, with patients' experiences the primary focus of such evaluations. To effect further improvement in nursing as well as medical care, Donabedian's structure-process-outcome approach has been applied. The aim of this study was to classify and confirm patients' specific experiences with regard to nursing service based on Donabedian's structure-process-outcome model for improving the quality of nursing care. Items were compiled from existing scales and assigned to structure, process or outcome in Donabedian's model through discussion among expert nurses and pilot data collection. With regard to comfort, surroundings were classified as structure (e.g. accessibility to nurses, disturbance); with regard to patient-practitioner interaction, patient participation was classified as process (e.g. expertise and skill, patient decision-making); and with regard to changes in patients, satisfaction was classified as outcome (e.g. information support, overall satisfaction). The patient inquiry was carried out with the finalized questionnaire in general wards of Japanese hospitals in 2005-2006. Reliability and validity were tested using psychometric methods. Data from 1,810 patients (mean age: 59.7 years; mean length of stay: 23.7 days) were analysed. Internal consistency reliability was supported (α = 0.69-0.96); in the factor analysis, the structure items aggregated to one factor and overall satisfaction under outcome to another, while the remaining outcome and process items were distributed across two further factors. Inter-scale correlations (r = 0.442-0.807) supported the construct validity of the structure-process-outcome approach. All structure items were worded negatively, as they dealt with basic conditions under the Japanese universal health care system, and were regarded as representing the concepts of dissatisfaction and absence of dissatisfaction. Patients' experiences with nursing service were confirmed using Donabedian's approach, which can therefore be applied to improve the quality of nursing practice by practitioners, managers and policy makers. © 2010 The Authors. Scandinavian Journal of Caring Sciences © 2010 Nordic College of Caring Science.
NASA Astrophysics Data System (ADS)
Foster, S. Q.; Johnson, R. M.; Randall, D. A.; Denning, A.; Burt, M. A.; Gardiner, L.; Genyuk, J.; Hatheway, B.; Jones, B.; La Grave, M. L.; Russell, R. M.
2009-12-01
The need for improving the representation of cloud processes in climate models has been one of the most important limitations of the reliability of climate-change simulations. Now in its fourth year, the National Science Foundation-funded Center for Multi-scale Modeling of Atmospheric Processes (CMMAP) at Colorado State University (CSU) is addressing this problem through a revolutionary new approach to representing cloud processes on their native scales, including the cloud-scale interaction processes that are active in cloud systems. CMMAP has set ambitious education and human-resource goals to share basic information about the atmosphere, clouds, weather, climate, and modeling with diverse K-12 and public audiences. This is accomplished through collaborations in resource development and dissemination between CMMAP scientists, CSU’s Little Shop of Physics (LSOP) program, and the Windows to the Universe (W2U) program at the University Corporation for Atmospheric Research (UCAR). Little Shop of Physics develops new hands-on science activities demonstrating basic science concepts fundamental to understanding atmospheric characteristics, weather, and climate. Videos capture demonstrations of children completing these activities and are broadcast to school districts and public television programs. CMMAP and LSOP educators and scientists partner in teaching summer professional development workshops for teachers at CSU, with a semester's worth of college-level content on the basic physics of the atmosphere, weather, climate, climate modeling, and climate change, as well as dozens of LSOP inquiry-based activities suitable for use in classrooms. The W2U project complements these efforts by developing and broadly disseminating new CMMAP-related online content pages, animations, interactives, image galleries, scientists’ biographies, and LSOP videos to K-12 and public audiences. Reaching nearly 20 million users annually, W2U is highly valued as a curriculum enhancement resource, because its content is written at three levels in English and Spanish. Links between science topics and literature, art, and mythology enable teachers of English Language Learners, literacy, and the arts to integrate science into their classrooms. In summary, the CMMAP NSF-funded Science and Technology Center has established a highly effective and productive partnership of scientists and educators focused on enhancing public science literacy about weather, climate, and global change. All CMMAP, LSOP, and W2U resources can be accessed online at no cost by the entire atmospheric science K-12 and informal science education community.
NASA Astrophysics Data System (ADS)
Foster, S. Q.; Johnson, R. M.; Randall, D. A.; Denning, A.; Russell, R. M.; Gardiner, L. S.; Hatheway, B.; Jones, B.; Burt, M. A.; Genyuk, J.
2010-12-01
The need for improving the representation of cloud processes in climate models has been one of the most important limitations of the reliability of climate-change simulations. Now in its fifth year, the National Science Foundation-funded Center for Multi-scale Modeling of Atmospheric Processes (CMMAP) at Colorado State University (CSU) is addressing this problem through a revolutionary new approach to representing cloud processes on their native scales, including the cloud-scale interaction processes that are active in cloud systems. CMMAP has set ambitious education and human-resource goals to share basic information about the atmosphere, clouds, weather, climate, and modeling with diverse K-12 and public audiences. This is accomplished through collaborations in resource development and dissemination between CMMAP scientists, CSU’s Little Shop of Physics (LSOP) program, and the Windows to the Universe (W2U) program at the University Corporation for Atmospheric Research (UCAR). Little Shop of Physics develops new hands-on science activities demonstrating basic science concepts fundamental to understanding atmospheric characteristics, weather, and climate. Videos capture demonstrations of children completing these activities and are broadcast to school districts and public television programs. CMMAP and LSOP educators and scientists partner in teaching summer professional development workshops for teachers at CSU, with a semester's worth of college-level content on the basic physics of the atmosphere, weather, climate, climate modeling, and climate change, as well as dozens of LSOP inquiry-based activities suitable for use in classrooms. The W2U project complements these efforts by developing and broadly disseminating new CMMAP-related online content pages, animations, interactives, image galleries, scientists’ biographies, and LSOP videos to K-12 and public audiences. Reaching nearly 20 million users annually, W2U is highly valued as a curriculum enhancement resource, because its content is written at three levels in English and Spanish. Links between science topics and literature, art, and mythology enable teachers of English Language Learners, literacy, and the arts to integrate science into their classrooms. In summary, the CMMAP NSF-funded Science and Technology Center has established a highly effective and productive partnership of scientists and educators focused on enhancing public science literacy about weather, climate, and global change. All CMMAP, LSOP, and W2U resources can be accessed online at no cost by the entire atmospheric science K-12 and informal science education community.
ERIC Educational Resources Information Center
Pallant, Amy; Pryputniewicz, Sarah; Lee, Hee-Sun
2012-01-01
Scientists, and science in general, move from the unknown to increasing levels of certainty. Teaching students about science means encouraging them to embrace and investigate the unknown, make reliable scientific claims, justify those claims with evidence, and evaluate the quality of the evidence. In all areas of science--and especially in…
NASA Astrophysics Data System (ADS)
Foster, S. Q.; Johnson, R. M.; Randall, D.; Denning, S.; Russell, R.; Gardiner, L.; Hatheway, B.; Genyuk, J.; Bergman, J.
2008-12-01
The need for improving the representation of cloud processes in climate models has been one of the most important limitations of the reliability of climate-change simulations. Now in its third year, the National Science Foundation-funded Center for Multi-scale Modeling of Atmospheric Processes (CMMAP) at Colorado State University is addressing this problem through a revolutionary new approach to representing cloud processes on their native scales, including the cloud-scale interaction processes that are active in cloud systems. CMMAP has set ambitious education and human-resource goals to share basic information about the atmosphere, clouds, weather, climate, and modeling with diverse K-12 and public audiences through its affiliation with the Windows to the Universe (W2U) program at the University Corporation for Atmospheric Research (UCAR). W2U web pages are written at three levels in English and Spanish. This information targets learners at all levels, educators, and families who seek to understand and share resources and information about the nature of weather and the climate system, and career role models from related research fields. This resource can also be helpful to educators who are building bridges in the classroom between the sciences, the arts, and literacy. Visitors to the W2U's CMMAP web portal can access a beautiful new clouds image gallery; information about each cloud type and the atmospheric processes that produce them; a Clouds in Art interactive; collections of weather-themed poetry, art, and myths; links to games and puzzles for children; and extensive classroom-ready resources and activities for K-12 teachers. Biographies of CMMAP scientists and graduate students are featured. Basic science concepts important to understanding the atmosphere, such as condensation, atmospheric pressure, lapse rate, and more, have been developed, as well as 'microworlds' that enable students to interact with experimental tools while building fundamental knowledge. These resources can be accessed online at no cost by the entire atmospheric science K-12 and informal science education community.
Peer Review Documents Related to the Evaluation of ...
BMDS is one of the Agency's premier tools for risk assessment; therefore, the validity and reliability of its statistical models are of paramount importance. This page provides links to peer reviews and expert summaries of the BMDS application and its models as they were developed and eventually released, documenting the rigorous review process undertaken to provide the best science tools available for statistical modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallarno, George; Rogers, James H; Maxwell, Don E
The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second-fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage, since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.
NASA Astrophysics Data System (ADS)
Talbot, Robert M.
2017-12-01
There is a clear need for valid and reliable instrumentation that measures teacher knowledge. However, the process of investigating and making a case for instrument validity is not a simple undertaking; rather, it is a complex endeavor. This paper presents the empirical case of one aspect of such an instrument validation effort. The particular instrument under scrutiny was developed in order to determine the effect of a teacher education program on novice science and mathematics teachers' strategic knowledge (SK). The relationship between novice science and mathematics teachers' SK as measured by a survey and their SK as inferred from observations of practice using a widely used observation protocol is the subject of this paper. Moderate correlations between parts of the observation-based construct and the SK construct were observed. However, the main finding of this work is that the context in which the measurement is made (in situ observations vs. ex situ survey) is an essential factor in establishing the validity of the measurement itself.
The logical foundations of forensic science: towards reliable knowledge
Evett, Ian
2015-01-01
The generation of observations is a technical process and the advances that have been made in forensic science techniques over the last 50 years have been staggering. But science is about reasoning—about making sense from observations. For the forensic scientist, this is the challenge of interpreting a pattern of observations within the context of a legal trial. Here too, there have been major advances over recent years and there is a broad consensus among serious thinkers, both scientific and legal, that the logical framework is furnished by Bayesian inference (Aitken et al. Fundamentals of Probability and Statistical Evidence in Criminal Proceedings). This paper shows how the paradigm has matured, centred on the notion of the balanced scientist. Progress through the courts has not been always smooth and difficulties arising from recent judgments are discussed. Nevertheless, the future holds exciting prospects, in particular the opportunities for managing and calibrating the knowledge of the forensic scientists who assign the probabilities that are at the foundation of logical inference in the courtroom. PMID:26101288
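The Bayesian framework the paper refers to is conventionally stated in its odds form, in which the forensic scientist's contribution is confined to the likelihood ratio. A generic statement of that framework in LaTeX (standard notation, not a formula quoted from this paper):

    \frac{\Pr(H_p \mid E)}{\Pr(H_d \mid E)}
      = \underbrace{\frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}}_{\text{likelihood ratio}}
        \times \frac{\Pr(H_p)}{\Pr(H_d)}

where E is the evidence and H_p, H_d are the competing (e.g. prosecution and defence) propositions. The "balanced scientist" of the paper assigns only the central term, the likelihood ratio; the prior and posterior odds remain the province of the court.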
A New Generation of Telecommunications for Mars: The Reconfigurable Software Radio
NASA Technical Reports Server (NTRS)
Adams, J.; Horne, W.
2000-01-01
Telecommunications is a critical component of any mission at Mars: it is an enabling function that provides connectivity back to Earth and a means for conducting science. New developments in telecommunications, specifically in software-configurable radios, expand the possible approaches for science missions at Mars. These radios provide a flexible, reconfigurable platform that can evolve with the mission and provides an integrated approach to communications and science data processing. Deep space telecommunication faces challenges not normally faced by terrestrial and near-Earth communications. Radiation, thermal constraints, highly constrained mass, volume, packaging, and reliability are all significant issues. Additionally, once the spacecraft leaves Earth, there is no way to go out and upgrade or replace radio components. The reconfigurable software radio is an effort to provide not only a product that is immediately usable in the harsh space environment but also a radio that will stay current as the years pass and technologies evolve.
Science In The Courtroom: The Impact Of Recent US Supreme Court Decisions
NASA Astrophysics Data System (ADS)
Poulter, Susan
2000-03-01
Most physicists' work is far removed from the courtroom, but the principles of physics are important to a number of legal controversies. Several recent lawsuits have claimed that cellular phones cause brain cancer. And litigation over claims that electromagnetic fields cause other cancers has even more important implications for society. The problem of how to distinguish good science from bad in the courtroom has vexed lawyers and scientists alike for many years, and finally drew the attention of the United States Supreme Court in 1993. The Court has now issued three opinions on the standards for screening expert testimony, which require trial judges to evaluate scientific expert witnesses to determine if their testimony is reliable. How well are the new standards working? Is the judicial system doing any better at screening out junk science? This session will discuss how the Supreme Court's opinions are being applied and suggest several strategies, including the use of court appointed experts, that are being implemented to improve the process further.
Information Literacy in the Sciences: Faculty Perception of Undergraduate Student Skill
ERIC Educational Resources Information Center
Perry, Heather Brodie
2017-01-01
Academic librarians need reliable information on the needs of faculty teaching undergraduates about seeking and using information. This study describes information gathered from semistructured interviews of teaching faculty in the sciences from several Boston-area colleges. The interview results provided insight into science faculty attitudes…
Science, Evolution, and Creationism
ERIC Educational Resources Information Center
National Academies Press, 2008
2008-01-01
How did life evolve on Earth? The answer to this question can help us understand our past and prepare for our future. Although evolution provides credible and reliable answers, polls show that many people turn away from science, seeking other explanations with which they are more comfortable. In the book "Science, Evolution, and…
The Development of Laboratory Safety Questionnaire for Middle School Science Teachers
ERIC Educational Resources Information Center
Akpullukcu, Simge; Cavas, Bulent
2017-01-01
The purpose of this paper is to develop a "valid and reliable laboratory safety questionnaire" which could be used to identify science teachers' understanding about laboratory safety issues during their science laboratory activities. The questionnaire was developed from a literature review and prior instruments developed on laboratory…
Development and Exemplification of a Model for Teacher Assessment in Primary Science
ERIC Educational Resources Information Center
Davies, D. J.; Earle, S.; McMahon, K.; Howe, A.; Collier, C.
2017-01-01
The Teacher Assessment in Primary Science project is funded by the Primary Science Teaching Trust and based at Bath Spa University. The study aims to develop a whole-school model of valid, reliable and manageable teacher assessment to inform practice and make a positive impact on primary-aged children's learning in science. The model is based on a…
NASA Astrophysics Data System (ADS)
van Aalderen-Smeets, Sandra; Walma van der Molen, Juliette
2013-03-01
In this article, we present a valid and reliable instrument which measures the attitude of in-service and pre-service primary teachers toward teaching science, called the Dimensions of Attitude Toward Science (DAS) Instrument. Attention to the attitudes of primary teachers toward teaching science is of fundamental importance to the professionalization of these teachers in the field of primary science education. With the development of this instrument, we sought to fulfill the need for a statistically and theoretically valid and reliable instrument to measure pre-service and in-service teachers' attitudes. The DAS Instrument is based on a comprehensive theoretical framework for attitude toward (teaching) science. After pilot testing, the DAS was revised and subsequently validated using a large group of respondents (pre-service and in-service primary teachers) (N = 556). The theoretical underpinning of the DAS combined with the statistical data indicate that the DAS possesses good construct validity and that it proves to be a promising instrument that can be utilized for research purposes, and also as a teacher training and coaching tool. This instrument can therefore make a valuable contribution to progress within the field of science education.
ERIC Educational Resources Information Center
Scanlan, Aaron T.; Richter-Stretton, Gina L.; Madueno, Maria C.; Borges, Nattai R.; Fenning, Andrew S.
2017-01-01
Measurement of plasma osmolality (P[subscript osm]) remains popular for assessing hydration status in exercise science. However, a controlled reliability assessment of micro-osmometry using small sample volumes to measure P[subscript osm] remains to be performed. This study aimed to examine the reliability of a cryoscopic micro-osmometer requiring 15-µL…
Science Diplomacy in Large International Collaborations
NASA Astrophysics Data System (ADS)
Barish, Barry C.
2011-04-01
What opportunities and challenges does the rapidly growing internationalization of science, especially large-scale science and technology projects, present for US science policy? On one hand, the interchange of scientists, the sharing of technology and facilities, and working together on common scientific goals promote better understanding and better science. On the other hand, challenges are presented because the science cannot be divorced from government policies, and solutions must be found for issues varying from visas to making reliable international commitments.
De Champlain, André F; Scoles, Peter; Holtzman, Kathy; Angelucci, Kathy; Flores, Maria C; Mendoza, Enrique; Martin, Marion; De Calvo, Oriz Lam
2005-01-01
The Ministry of Health of the Republic of Panama is currently developing a national examination system that will be used to license graduates to practice medicine in that country, as well as to undertake postgraduate medical training. As part of these efforts, a preliminary project was undertaken between the National Board of Medical Examiners (NBME) and the Faculty of Medicine of the University of Panama to develop a Residency Selection Process Examination (RSPE). The purpose of this study was to assess the reliability and validity of RSPE scores for a sample of candidates who wished to obtain a residency slot in Panama. The RSPE, composed of 200 basic and clinical sciences multiple-choice items, was administered to 261 residency applicants at the University of Panama. The reliability estimate computed was comparable with that reported for other high-stakes examinations (Cronbach's alpha = 0.89). Also, a Rasch plot of examinee proficiency against item difficulty showed that the RSPE was well targeted to the proficiency levels of candidates. Finally, a moderate correlation was noted between local grade point averages and RSPE scores for University of Panama students (r = 0.38). Findings suggest that it is possible to translate and adapt test materials for use in other contexts.
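The Rasch plot mentioned above places examinee proficiency and item difficulty on a common logit scale. In the standard dichotomous Rasch model (a textbook formulation, not specific to this study), the probability of a correct response by an examinee of proficiency \theta to an item of difficulty b is

    \Pr(\text{correct} \mid \theta, b) = \frac{e^{\theta - b}}{1 + e^{\theta - b}}

so a candidate whose proficiency equals an item's difficulty has a 50% chance of success; a well-targeted test such as the RSPE is one whose item difficulties lie near the proficiency range of the candidates.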
Challenges in Managing Trustworthy Large-scale Digital Science
NASA Astrophysics Data System (ADS)
Evans, B. J. K.
2017-12-01
The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories - model outputs, including coupled models and ensembles; data products that have been processed to a level of usability; and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by the broad communities, far in excess of the raw instrument data outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on infrastructure to support how the information is managed reliably across distributed resources. Users necessarily rely on these underlying "black boxes" in order to remain productive in producing new scientific outcomes. The software for these systems depends on computational infrastructure, interconnected software systems, and information capture systems. This ranges from the fundamental reliability of the compute hardware, through system software stacks and libraries, to the model software itself. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of approach and robustness of methods over full reproducibility. Furthermore, with large-volume data management it is increasingly difficult to store the historical versions of all model and derived data. Instead, the emphasis is on the ability to access updated products and on confidence that previous outcomes remain relevant and can be updated with new information. We will discuss these challenges and some of the approaches underway that are being used to address these issues.
Development and validation of an instrument for evaluating inquiry-based tasks in science textbooks
NASA Astrophysics Data System (ADS)
Yang, Wenyuan; Liu, Enshan
2016-12-01
This article describes the development and validation of an instrument that can be used for content analysis of inquiry-based tasks. According to the theories of educational evaluation and qualities of inquiry, four essential functions that inquiry-based tasks should serve are defined: (1) assisting in the construction of understandings about scientific concepts, (2) providing students opportunities to use inquiry process skills, (3) being conducive to establishing understandings about scientific inquiry, and (4) giving students opportunities to develop higher order thinking skills. An instrument - the Inquiry-Based Tasks Analysis Inventory (ITAI) - was developed to judge whether inquiry-based tasks perform these functions well. To test the reliability and validity of the ITAI, 4 faculty members were invited to use the ITAI to collect data from 53 inquiry-based tasks in the 3 most widely adopted senior secondary biology textbooks in Mainland China. The results indicate that (1) the inter-rater reliability reached 87.7%, (2) the grading criteria have high discriminant validity, (3) the items possess high convergent validity, and (4) the Cronbach's alpha reliability coefficient reached 0.792. The study concludes that the ITAI is valid and reliable. Because of its solid foundations in theoretical and empirical argumentation, the ITAI is trustworthy.
NASA Astrophysics Data System (ADS)
Lorditch, E.; O'Riordan, C.
2010-12-01
According to the National Science Foundation’s Science and Engineering Indicators for 2010, the general public in the USA relies on local television news more than any other medium for their science and technology news and information -- with the internet coming in as a fast-rising second. Ten years ago, the American Institute of Physics (AIP) created Discoveries and Breakthroughs Inside Science (DBIS) as a way to reach this large audience and provide them with accurate and reliable science information. DBIS is a syndicated science news service that distributes twelve 90-second news segments to local television stations throughout the USA and internationally each month. DBIS topics cover a range of science, technology, engineering, and mathematics (STEM) topics including everything from astronomy to zoology. DBIS has created a unique pathway for science communication. Story ideas go through a rigorous process of background research and peer review to make sure that they meet not only our science criteria, but also our television criteria standards to make sure that television stations will air the segments. The program is supported by a STEM coalition of over 20 organizations - including AGU - that work together to identify research breakthroughs in diverse fields of science. We will describe the creation of this service and the fine-tuning of the editorial process. We will also highlight results from a 2003-2007 NSF grant to study the impact DBIS has on viewing audiences. The study showed us that 78% of television viewers would like to see more STEM news segments during their local news broadcast. Another important finding from the study is that there is a statistically significant difference in television viewers' support for STEM in cities where DBIS segments are broadcast compared to cities where they are not, showing that DBIS is having an impact in communicating science to the general public. Finally, we will summarize what we have learned about making STEM news entertaining and informative, as well as the balance between reporting the details of STEM news and making it relevant to the public.
Advanced Communication Processing Techniques
NASA Astrophysics Data System (ADS)
Scholtz, Robert A.
This document contains the proceedings of the workshop Advanced Communication Processing Techniques, held May 14 to 17, 1989, near Ruidoso, New Mexico. Sponsored by the Army Research Office (under Contract DAAL03-89-G-0016) and organized by the Communication Sciences Institute of the University of Southern California, the workshop had as its objective to determine those applications of intelligent/adaptive communication signal processing that have been realized and to define areas of future research. We at the Communication Sciences Institute believe that there are two emerging areas which deserve considerably more study in the near future: (1) Modulation characterization, i.e., the automation of modulation format recognition so that a receiver can reliably demodulate a signal without using a priori information concerning the signal's structure, and (2) the incorporation of adaptive coding into communication links and networks. (Encoders and decoders which can operate with a wide variety of codes exist, but the way to utilize and control them in links and networks is an issue). To support these two new interest areas, one must have both a knowledge of (3) the kinds of channels and environments in which the systems must operate, and of (4) the latest adaptive equalization techniques which might be employed in these efforts.
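As a concrete, deliberately simplified illustration of the modulation-characterization problem named in point (1), one classical blind feature uses higher-order moments of the baseband samples: BPSK retains a strong second-power component (|E[x^2]| near 1), while for QPSK that moment averages to zero. A Python sketch under those assumptions (illustrative only; a practical recognizer must handle many formats, carrier offsets, and low SNR):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    bpsk = rng.choice([-1.0, 1.0], size=n) + 0j
    qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n) / np.sqrt(2)
    noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

    for name, sig in [("BPSK", bpsk + noise), ("QPSK", qpsk + noise)]:
        feature = abs(np.mean(sig ** 2))  # ~1 for BPSK, ~0 for QPSK
        print(name, round(feature, 3), "-> classified as",
              "BPSK" if feature > 0.5 else "QPSK")

The receiver thus infers the modulation format from the signal statistics alone, without a priori knowledge of the signal's structure, which is the defining requirement of the research area described above.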
Dinov, Ivo D
2016-01-01
Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and their hallmark will be 'team science'.
NASA Astrophysics Data System (ADS)
Ray, A. J.; Walker, S. H.; Trainor, S. F.; Cherry, J. E.
2014-12-01
This presentation focuses on linking climate knowledge to the complicated decision process for hydropower dam licensing, and to the affected parties involved in that process. The U.S. Federal Energy Regulatory Commission issues licenses for nonfederal hydroelectric operations, typically for 30-50 years, with infrastructure lifespans even longer -- a time frame similar to that of the anticipated risks of changing climate and hydrology. Resources managed by other federal and state agencies, such as the NOAA National Marine Fisheries Service, may be affected by new or re-licensed projects. The federal Integrated Licensing Process gives affected parties the opportunity to recommend issues for consultative investigation and possible mitigation, such as impacts to downstream fisheries. New or re-licensed projects have the potential to "pre-adapt" by considering and incorporating risks of climate change into their planned operations as license terms and conditions. Hundreds of hydropower facilities will be up for relicensing in the coming years (over 100 in the western Sierra Nevada alone, as well as large-scale water projects such as the proposed Lake Powell Pipeline), along with proposed new dams such as the Susitna project in Alaska. Therefore, there is a need for comprehensive guidance on delivering climate analysis to support understanding of the risks hydropower projects pose to other affected resources, and to support licensing decisions. While each project will have a specific context, many of the questions will be similar. We also will discuss best practices for the use of climate science in water project planning and management, and how creating the best and most appropriate science is still a developing art. We will discuss the potential reliability of that science for consideration in long-term planning, licensing, and mitigation planning for those projects. For science to be "actionable," it must be understood and accepted by the potential users. This process is a negotiation, with climate scientists needing to understand and respond to the concerns of users, and users developing a better understanding of the state of climate science in order to make an informed choice. We will also discuss what is needed to streamline providing that analysis for the many re-licensing decisions expected in the upcoming years.
Materials Science Research Rack Onboard the International Space Station
NASA Technical Reports Server (NTRS)
Reagan, Shawn; Frazier, Natalie; Lehman, John; Aicher, Winfried
2013-01-01
The Materials Science Research Rack (MSRR) is a research facility developed under a cooperative research agreement between NASA and ESA for materials science investigations on the International Space Station (ISS). MSRR was launched on STS-128 in August 2009 and currently resides in the U.S. Destiny Laboratory Module. Since that time, MSRR has logged more than 1000 hours of operating time. The MSRR accommodates advanced investigations in the microgravity environment on the ISS for basic materials science research in areas such as solidification of metals and alloys. The purpose is to advance the scientific understanding of materials processing as affected by microgravity and to gain insight into the physical behavior of materials processing. MSRR allows for the study of a variety of materials, including metals, ceramics, semiconductor crystals, and glasses. Materials science research benefits from the microgravity environment of space, where the researcher can better isolate chemical and thermal properties of materials from the effects of gravity. With this knowledge, reliable predictions can be made about the conditions required on Earth to achieve improved materials. MSRR is a highly automated facility with a modular design capable of supporting multiple types of investigations. The NASA-provided Rack Support Subsystem provides services (power, thermal control, vacuum access, and command and data handling) to the ESA-developed Materials Science Laboratory (MSL) that accommodates interchangeable Furnace Inserts (FI). Two ESA-developed FIs are presently available on the ISS: the Low Gradient Furnace (LGF) and the Solidification and Quenching Furnace (SQF). Sample Cartridge Assemblies (SCAs), each containing one or more material samples, are installed in the FI by the crew and can be processed at temperatures up to 1400 °C. ESA continues to develop samples with 14 planned for launch and processing in the near future. Additionally, NASA has begun developing SCAs to support US PIs and their partners. The first of these Flight SCAs is being developed for investigations to support research in the areas of crystal growth and liquid phase sintering. Subsequent investigations are in various stages of development. US investigations will include a ground test program in order to distinguish the particular effects of the absence of gravity.
Materials Science Research Rack Onboard the International Space Station
NASA Technical Reports Server (NTRS)
Reagan, S. E.; Lehman, J. R.; Frazier, N. C.
2016-01-01
The Materials Science Research Rack (MSRR) is a research facility developed under a cooperative research agreement between NASA and ESA for materials science investigations on the International Space Station (ISS). MSRR was launched on STS-128 in August 2009 and currently resides in the U.S. Destiny Laboratory Module. Since that time, MSRR has logged more than 1400 hours of operating time. The MSRR accommodates advanced investigations in the microgravity environment on the ISS for basic materials science research in areas such as solidification of metals and alloys. The purpose is to advance the scientific understanding of materials processing as affected by microgravity and to gain insight into the physical behavior of materials processing. MSRR allows for the study of a variety of materials, including metals, ceramics, semiconductor crystals, and glasses. Materials science research benefits from the microgravity environment of space, where the researcher can better isolate chemical and thermal properties of materials from the effects of gravity. With this knowledge, reliable predictions can be made about the conditions required on Earth to achieve improved materials. MSRR is a highly automated facility with a modular design capable of supporting multiple types of investigations. The NASA-provided Rack Support Subsystem provides services (power, thermal control, vacuum access, and command and data handling) to the ESA-developed Materials Science Laboratory (MSL) that accommodates interchangeable Furnace Inserts (FI). Two ESA-developed FIs are presently available on the ISS: the Low Gradient Furnace (LGF) and the Solidification and Quenching Furnace (SQF). Sample Cartridge Assemblies (SCAs), each containing one or more material samples, are installed in the FI by the crew and can be processed at temperatures up to 1400 °C. ESA continues to develop samples with 14 planned for launch and processing in the near future. Additionally, NASA has begun developing SCAs to support US PIs and their partners. The first of these Flight SCAs is being developed for investigations to support research in the areas of crystal growth and liquid phase sintering. Subsequent investigations are in various stages of development. US investigations will include a ground test program in order to distinguish the particular effects of the absence of gravity.
NASA Astrophysics Data System (ADS)
Pavlis, Nikolaos K.
Geomatics is a trendy term that has been used in recent years to describe academic departments that teach and research the theories, methods, algorithms, and practices used in processing and analyzing data related to the Earth and other planets. Naming trends aside, geomatics could be considered the mathematical and statistical "toolbox" that allows Earth scientists to extract information about physically relevant parameters from the available data and to accompany that information with some measure of its reliability. This book is an attempt to present the mathematical-statistical methods used in data analysis within various disciplines -- geodesy, geophysics, photogrammetry and remote sensing -- from the unifying perspective that the inverse problem formalism permits. At the same time, this perspective stretches the relevance of statistical methods for achieving an optimal solution.
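As a toy illustration of the inverse-problem formalism the book champions, the following sketch (hypothetical observations and noise level, not an example from the book) estimates the parameters of a linear observation model by weighted least squares and reports the formal uncertainties that accompany them:

```python
import numpy as np

# Linear observation model y = A x + noise, a staple of geodetic adjustment.
# Hypothetical example: fit a linear trend to noisy "station height" data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 20)               # epochs (years)
A = np.column_stack([np.ones_like(t), t])    # design matrix: offset + rate
x_true = np.array([100.0, -0.3])             # true offset (m) and rate (m/yr)
sigma = 0.05                                 # observation noise (m), assumed
y = A @ x_true + rng.normal(0.0, sigma, t.size)

# Weighted least-squares solution and its covariance (uniform weights here).
W = np.eye(t.size) / sigma**2
N = A.T @ W @ A                              # normal matrix
x_hat = np.linalg.solve(N, A.T @ W @ y)      # estimated parameters
cov = np.linalg.inv(N)                       # formal parameter covariance

print("estimate:", x_hat)
print("1-sigma uncertainties:", np.sqrt(np.diag(cov)))
```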
NASA Astrophysics Data System (ADS)
Wang, Ya-Ling; Tsai, Chin-Chung; Wei, Shih-Hsuan
2015-09-01
This study aimed to investigate the factors accounting for science teaching self-efficacy and to examine the relationships among Taiwanese teachers' science teaching self-efficacy, teaching and learning conceptions, technological-pedagogical content knowledge for the Internet (TPACK-I), and attitudes toward Internet-based instruction (Attitudes) using a mediational model approach. A total of 233 science teachers from 41 elementary schools in Taiwan were invited to take part in the study. Each questionnaire was evaluated and found to have satisfactory validity and reliability. Furthermore, through mediational models, the results revealed that TPACK-I and Attitudes mediated the relationship between teaching and learning conceptions and science teaching self-efficacy, suggesting that (1) knowledge of and attitudes toward Internet-based instruction (KATII) mediated the positive relationship between constructivist conceptions of teaching and learning and outcome expectancy, and that (2) KATII mediated the negative correlations between traditional conceptions of teaching and learning and teaching efficacy.
Key Provenance of Earth Science Observational Data Products
NASA Astrophysics Data System (ADS)
Conover, H.; Plale, B.; Aktas, M.; Ramachandran, R.; Purohit, P.; Jensen, S.; Graves, S. J.
2011-12-01
As the sheer volume of data increases, particularly evidenced in the earth and environmental sciences, local arrangements for sharing data need to be replaced with reliable records about the what, who, how, and where of a data set or collection. This is frequently called the provenance of a data set. While observational data processing systems in the earth sciences have a long history of capturing metadata about the processing pipeline, current processes are limited in both what is captured and how it is disseminated to the science community. Provenance capture plays a role in scientific data preservation and stewardship precisely because it can automatically capture and represent a coherent picture of the what, how and who of a particular scientific collection. It reflects the transformations that a data collection underwent prior to its current form and the sequence of tasks that were executed and data products applied to generate a new product. In the NASA-funded Instant Karma project, we examine provenance capture in earth science applications, specifically the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) Science Investigator-led Processing System (SIPS). The project is integrating the Karma provenance collection and representation tool into the AMSR-E SIPS production environment, with an initial focus on Sea Ice. This presentation will describe capture and representation of provenance guided by the Open Provenance Model (OPM). Several things have become clear during the course of the project to date. One is that core OPM entities and relationships are not adequate for expressing the kinds of provenance that are of interest in the science domain. OPM supports name-value pair annotations that can be used to augment what is known about the provenance entities and relationships, but in Karma annotations can be added only after the fact, not during capture. This limits the capture system's ability to record something it learned about an entity after the event of its creation in the provenance record. We will discuss extensions to OPM and modifications to the Karma tool suite to address this issue, more efficient representations of earth science kinds of provenance, and definition of metadata structures for capturing related knowledge about the data products and science algorithms used to generate them. Use scenarios for provenance information are an active topic of investigation. It has additionally become clear through the project that not all provenance is created equal. In processing pipelines, some provenance is repetitive and uninteresting; given the sheer volume of provenance, this obscures the interesting pieces. Methodologies to reveal science-relevant provenance will be presented, along with a preview of the AMSR-E Provenance Browser.
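For readers unfamiliar with OPM, the sketch below illustrates the flavor of an OPM-style provenance record in plain Python. The node names are hypothetical and this is not the Karma API -- only a minimal illustration of artifacts, processes, causal relationships, and the post-hoc annotations discussed above:

```python
from dataclasses import dataclass, field

# Minimal OPM-style provenance record: artifacts (data products), processes
# (pipeline steps), and causal relationships between them. All names are
# hypothetical, for illustration only.

@dataclass
class Node:
    node_id: str
    kind: str                                   # "artifact", "process", "agent"
    annotations: dict = field(default_factory=dict)

@dataclass
class Edge:
    relation: str                               # e.g. "used", "wasGeneratedBy"
    source: str
    target: str

nodes = {
    "L1B_swath": Node("L1B_swath", "artifact"),
    "sea_ice_algo_v2": Node("sea_ice_algo_v2", "process"),
    "L3_sea_ice": Node("L3_sea_ice", "artifact"),
}
edges = [
    Edge("used", "sea_ice_algo_v2", "L1B_swath"),            # process used input
    Edge("wasGeneratedBy", "L3_sea_ice", "sea_ice_algo_v2"), # output from process
]

# Post-hoc annotation: attaching information learned after the creation event,
# the capability the abstract notes core OPM/Karma handled only after the fact.
nodes["L3_sea_ice"].annotations["qa_flag"] = "provisional"

for e in edges:
    print(f"{e.source} --{e.relation}--> {e.target}")
```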
NASA Astrophysics Data System (ADS)
Foster, S. Q.; Randall, D.; Denning, S.; Jones, B.; Russell, R.; Gardiner, L.; Hatheway, B.; Johnson, R. M.; Drossman, H.; Pandya, R.; Swartz, D.; Lanting, J.; Pitot, L.
2007-12-01
The need for improving the representation of cloud processes in climate models has been one of the most important limitations of the reliability of climate-change simulations. The new National Science Foundation- funded Center for Multi-scale Modeling of Atmospheric Processes (CMMAP) at Colorado State University (CSU) is a major research program addressing this problem over the next five years through a revolutionary new approach to representing cloud processes on their native scales, including the cloud-scale interactions among the many physical and chemical processes that are active in cloud systems. At the end of its first year, CMMAP has established effective partnerships between scientists, students, and teachers to meet its goals to: (1) provide first-rate graduate education in atmospheric science; (2) recruit diverse undergraduates into graduate education and careers in climate science; and (3) develop, evaluate, and disseminate educational resources designed to inform K-12 students, teachers, and the general public about the nature of the climate system, global climate change, and career opportunities in climate science. This presentation will describe the partners, our challenges and successes, and measures of achievement involved in the integrated suite of programs launched in the first year. They include: (1) a new high school Colorado Climate Conference drawing prestigious climate scientists to speak to students, (2) a summer Weather and Climate Workshop at CSU and the National Center for Atmospheric Research introducing K-12 teachers to Earth system science and a rich toolkit of teaching materials, (3) a program from CSU's Little Shop of Physics reaching 50 schools and 20,000 K-12 students through the new "It's Up In the Air" program, (4) expanded content, imagery, and interactives on clouds, weather, climate, and modeling for students, teachers, and the public on The Windows to the Universe web site at University Corporation for Atmospheric Research (UCAR), (5) mentoring programs engaging diverse undergraduate and graduate level students in CMMAP research through UCAR's Significant Opportunities in Atmospheric Research and Science (SOARS) Program, and (6) after school activities about clouds, climate and weather for underrepresented middle school students at the Catamount Institute. CMMAP is also enabling Windows to the Universe to continue its commitment to translate all new web pages into Spanish. This presentation will explain how resources emerging from CMMAP can be accessed and used by the entire Earth and Ocean Science educational outreach community.
NASA Astrophysics Data System (ADS)
Nguyen, L.; Chee, T.; Palikonda, R.; Smith, W. L., Jr.; Bedka, K. M.; Spangenberg, D.; Vakhnin, A.; Lutz, N. E.; Walter, J.; Kusterer, J.
2017-12-01
Cloud Computing offers new opportunities for large-scale scientific data producers to utilize Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) IT resources to process and deliver data products in an operational environment where timely delivery, reliability, and availability are critical. The NASA Langley Research Center Atmospheric Science Data Center (ASDC) is building and testing a private and public facing cloud for users in the Science Directorate to utilize as an everyday production environment. The NASA SatCORPS (Satellite ClOud and Radiation Property Retrieval System) team processes and derives near real-time (NRT) global cloud products from operational geostationary (GEO) satellite imager datasets. To deliver these products, we will utilize the public facing cloud and OpenShift to deploy a load-balanced webserver for data storage, access, and dissemination. The OpenStack private cloud will host data ingest and computational capabilities for SatCORPS processing. This paper will discuss the SatCORPS migration towards, and usage of, the ASDC Cloud Services in an operational environment. Detailed lessons learned from use of prior cloud providers, specifically the Amazon Web Services (AWS) GovCloud and the Government Cloud administered by the Langley Managed Cloud Environment (LMCE) will also be discussed.
Introducing the TAPS Pyramid Model
ERIC Educational Resources Information Center
Earle, Sarah
2015-01-01
The Teacher Assessment in Primary Science (TAPS) project is a three-year project based at Bath Spa University and funded by the Primary Science Teaching Trust (PSTT). It aims to develop support for a valid, reliable and manageable system of science assessment that will have a positive impact on children's learning. In this article, the author…
Measuring Graph Comprehension, Critique, and Construction in Science
ERIC Educational Resources Information Center
Lai, Kevin; Cabrera, Julio; Vitale, Jonathan M.; Madhok, Jacquie; Tinker, Robert; Linn, Marcia C.
2016-01-01
Interpreting and creating graphs plays a critical role in scientific practice. The K-12 Next Generation Science Standards call for students to use graphs for scientific modeling, reasoning, and communication. To measure progress on this dimension, we need valid and reliable measures of graph understanding in science. In this research, we designed…
Teaching Critical Thinking? New Directions in Science Education
ERIC Educational Resources Information Center
Osborne, Jonathan
2014-01-01
Critique and questioning are central to the practice of science; without argument and evaluation, the construction of reliable knowledge would be impossible. The challenge is to incorporate an understanding of the role of critique and, more importantly, the ability to engage in critique, within the teaching of science. The emphasis in both the US…
ERIC Educational Resources Information Center
Yang, Yang; He, Peng; Liu, Xiufeng
2018-01-01
So far, not enough effort has been invested in developing reliable, valid, and engaging assessments in school science, especially assessment of interdisciplinary science based on the new Next Generation Science Standards (NGSS). Furthermore, previous tools rely mostly on multiple-choice items and evaluation of student outcome is linked only to…
An image-processing methodology for extracting bloodstain pattern features.
Arthur, Ravishka M; Humburg, Philomena J; Hoogenboom, Jerry; Baiker, Martin; Taylor, Michael C; de Bruin, Karla G
2017-08-01
There is a growing trend in forensic science to develop methods to make forensic pattern comparison tasks more objective. This has generally involved the application of suitable image-processing methods to provide numerical data for identification or comparison. This paper outlines a unique image-processing methodology that can be utilised by analysts to generate reliable pattern data that will assist them in forming objective conclusions about a pattern. A range of features were defined and extracted from a laboratory-generated impact spatter pattern. These features were based in part on bloodstain properties commonly used in the analysis of spatter bloodstain patterns. The values of these features were consistent with properties reported qualitatively for such patterns. The image-processing method developed shows considerable promise as a way to establish measurable discriminating pattern criteria that are lacking in current bloodstain pattern taxonomies.
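As a rough illustration of this kind of pipeline (a minimal sketch using OpenCV; the thresholding scheme, size cutoff, and feature set are assumptions, not the paper's published method), per-stain shape features can be extracted as follows:

```python
import cv2
import numpy as np

# Hypothetical sketch of per-stain feature extraction from a grayscale
# bloodstain-pattern image; the paper's actual feature set is richer.
img = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
features = []
for c in contours:
    area = cv2.contourArea(c)
    if area < 5:                        # ignore specks below a size threshold
        continue
    perimeter = cv2.arcLength(c, True)
    circularity = 4.0 * np.pi * area / perimeter**2   # 1.0 for a circle
    (x, y), (w, h), angle = cv2.minAreaRect(c)        # orientation, elongation
    elongation = max(w, h) / max(min(w, h), 1e-6)
    features.append({"area": area, "circularity": circularity,
                     "elongation": elongation, "angle": angle})

print(f"{len(features)} stains measured")
```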
Entry Grades and Academic Performance in Nigerian Universities.
ERIC Educational Resources Information Center
Ojo, Folayan
1976-01-01
The reliability of Nigeria's entry qualification examinations as a predictor of success at the university level is examined. Results indicate a positive correlation in the science-based fields and very low predictability in the social sciences. (JMF)
NASA Astrophysics Data System (ADS)
Krystyniak, Rebecca A.
2001-12-01
This study explored the effect of participation by second-semester general chemistry students in an extended open-inquiry laboratory investigation on their use of science process skills and confidence in performing specific aspects of laboratory investigations. In addition, verbal interactions of a student lab team among team members and with their instructor over three open-inquiry laboratory sessions and two non-inquiry sessions were investigated. Instruments included the Test of Integrated Process Skills (TIPS), a 36-item multiple-choice instrument, and the Chemistry Laboratory Survey (CLS), a researcher co-designed 20-item 8-point instrument. Instruments were administered at the beginning and close of the semester to 157 second-semester general chemistry students at the two universities; students at only one university participated in the open-inquiry activity. A MANCOVA was performed to investigate relationships among control and experimental students, TIPS, and CLS post-test scores. Covariates were TIPS and CLS pre-test scores and prior high school and college science experience. No significant relationships were found. Wilcoxon analyses indicated both groups showed an increase in confidence; experimental-group students with below-average TIPS pre-test scores showed a significant increase in science process skills. Transcribed audio tapes of all laboratory-based verbal interactions were analyzed. Coding categories, developed using the constant comparison method, led to an inter-rater reliability of .96. During open-inquiry activities, the lab team interacted less often, sought less guidance from their instructor, and talked less about chemistry concepts than during non-inquiry activities. Evidence confirmed that students used science process skills and engaged in higher-order thinking during both types of activities. A four-student focus group shared their experiences with open-inquiry activities, indicating that they enjoyed the experience, viewed it as worthwhile, and believed it helped them gain understanding of the nature of chemistry research. Research results indicate that participation in open-inquiry laboratory work increases student confidence and, for some students, the ability to use science process skills. Evidence documents differences in student laboratory interactions and behavior that are attributable to the type of laboratory experience. Further research into aspects of open-inquiry laboratory experiences is recommended.
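For readers unfamiliar with the nonparametric pre/post comparison reported above, here is a minimal sketch with entirely hypothetical confidence scores (not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post confidence scores (8-point scale) for one group;
# a sketch of the paired Wilcoxon signed-rank comparison the study reports.
pre  = np.array([4, 5, 3, 6, 4, 5, 2, 6, 5, 4, 3, 5])
post = np.array([5, 6, 4, 6, 5, 6, 4, 7, 5, 5, 4, 6])

stat, p = stats.wilcoxon(pre, post)   # zero differences dropped by default
print(f"W = {stat:.1f}, p = {p:.4f}")
```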
Geospatial decision support systems for societal decision making
Bernknopf, R.L.
2005-01-01
While science provides reliable information to describe and understand the earth and its natural processes, it can contribute more. There are many important societal issues in which scientific information can play a critical role. Science can add greatly to policy and management decisions to minimize loss of life and property from natural and man-made disasters, to manage water, biological, energy, and mineral resources, and in general, to enhance and protect our quality of life. However, the link between science and decision-making is often complicated and imperfect. Technical language and methods surround scientific research and the dissemination of its results. Scientific investigations often are conducted under different conditions, with different spatial boundaries, and in different timeframes than those needed to support specific policy and societal decisions. Uncertainty is not uniformly reported in scientific investigations. If society does not know that data exist, what the data mean, where to use the data, or how to include uncertainty when a decision has to be made, then science gets left out -- or misused -- in the decision-making process. This paper is about using Geospatial Decision Support Systems (GDSS) for quantitative policy analysis. Integrated natural and social science methods and tools in a Geographic Information System that respond to decision-making needs can be used to close the gap between science and society. The GDSS has been developed so that nonscientists can pose "what if" scenarios to evaluate hypothetical outcomes of policy and management choices. In this approach decision makers can evaluate the financial and geographic distribution of potential policy options and their societal implications. Actions, based on scientific information, can be taken to mitigate hazards, protect our air and water quality, preserve the planet's biodiversity, promote balanced land use planning, and judiciously exploit natural resources. Applications using the GDSS have demonstrated the benefits of utilizing science for policy decisions. Investment in science reduces decision-making uncertainty, and reducing that uncertainty has economic value.
NASA Astrophysics Data System (ADS)
Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Fielding, E. J.; Agram, P.; Manipon, G.; Stough, T. M.; Simons, M.; Rosen, P. A.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.
2013-12-01
Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR) and Continuous Global Positioning System (CGPS) are now important elements in our toolset for monitoring earthquake-generating faults, volcanic eruptions, hurricane damage, landslides, reservoir subsidence, and other natural and man-made hazards. Geodetic imaging's unique ability to capture surface deformation with high spatial and temporal resolution has revolutionized both earthquake science and volcanology. Continuous monitoring of surface deformation and surface change before, during, and after natural hazards improves decision-making through better forecasts, increased situational awareness, and more informed recovery. However, analyses of InSAR and GPS data sets are currently handcrafted following events and are not generated rapidly and reliably enough for use in operational response to natural disasters. Additionally, the sheer data volumes needed to handle a continuous stream of InSAR data sets present a bottleneck. It has been estimated that continuous processing of InSAR coverage of California alone over three years would reach PB-scale data volumes. Our Advanced Rapid Imaging and Analysis for Monitoring Hazards (ARIA-MH) science data system enables both science and decision-making communities to monitor areas of interest with derived geodetic data products via seamless data preparation, processing, discovery, and access. We will present our findings on the use of hybrid-cloud computing to improve the timely processing and delivery of geodetic data products, integrating event notifications from USGS to improve the timeliness of processing for response, as well as providing browse results for quick looks with other tools for integrative analysis.
A Multi-Disciplinary Approach to Remote Sensing through Low-Cost UAVs.
Calvario, Gabriela; Sierra, Basilio; Alarcón, Teresa E; Hernandez, Carmen; Dalmau, Oscar
2017-06-16
The use of Unmanned Aerial Vehicles (UAVs) based on remote sensing has generated low cost monitoring, since the data can be acquired quickly and easily. This paper reports the experience related to agave crop analysis with a low cost UAV. The data were processed by traditional photogrammetric flow and data extraction techniques were applied to extract new layers and separate the agave plants from weeds and other elements of the environment. Our proposal combines elements of photogrammetry, computer vision, data mining, geomatics and computer science. This fusion leads to very interesting results in agave control. This paper aims to demonstrate the potential of UAV monitoring in agave crops and the importance of information processing with reliable data flow.
Method Development in Forensic Toxicology.
Peters, Frank T; Wissenbach, Dirk K; Busardo, Francesco Paolo; Marchei, Emilia; Pichini, Simona
2017-01-01
In the field of forensic toxicology, the quality of analytical methods is of great importance to ensure the reliability of results and to avoid unjustified legal consequences. A key to high quality analytical methods is a thorough method development. The presented article will provide an overview on the process of developing methods for forensic applications. This includes the definition of the method's purpose (e.g. qualitative vs quantitative) and the analytes to be included, choosing an appropriate sample matrix, setting up separation and detection systems as well as establishing a versatile sample preparation. Method development is concluded by an optimization process after which the new method is subject to method validation.
Technology developments toward 30-year-life of photovoltaic modules
NASA Technical Reports Server (NTRS)
Ross, R. G., Jr.
1984-01-01
As part of the United States National Photovoltaics Program, the Jet Propulsion Laboratory's Flat-Plate Solar Array Project (FSA) has maintained a comprehensive reliability and engineering sciences activity aimed at understanding the reliability attributes of terrestrial flat-plate photovoltaic arrays and at deriving the analysis and design tools necessary to achieve module designs with a 30-year useful life. The considerable progress to date stemming from the ongoing reliability research is discussed, and the major areas requiring continued research are highlighted. The result is an overview of the total array reliability problem and of available means of achieving high reliability at minimum cost.
Lövestam, Elin; Orrevall, Ylva; Koochek, Afsaneh; Karlström, Brita; Andersson, Agneta
2014-06-01
Adequate documentation in medical records is important for high-quality health care. Documentation quality is widely studied within nursing, but studies are lacking within dietetic care. The aim of this study was to translate, elaborate and evaluate an audit instrument, based on the four-step Nutrition Care Process model, for documentation of dietetic care in medical records. The audit instrument includes 14 items focused on essential parts of dietetic care and the documentation's clarity and structure. Each item is to be rated 0-1 or 0-2 points, with a maximum total instrument score of 26. A detailed manual was added to facilitate the interpretation and increase the reliability of the instrument. The instrument is based on a similar tool initiated 9 years ago in the United States, which in this study was translated to Swedish and further elaborated. The translated and further elaborated instrument was named Diet-NCP-Audit. Firstly, the content validity of the Diet-NCP-Audit instrument was tested by five experienced dietitians, who rated the relevance and clarity of the included items. After a first rating, minor improvements were made. After the second rating, the Content Validity Indexes were 1.0 and the Clarity Index was 0.98. Secondly, to test the reliability, four dietitians independently reviewed 20 systematically collected dietetic notes using the audit instrument. Before the review, a calibration process was performed. A comparison of the reviews resulted in moderate inter-rater agreement, with Krippendorff's α = 0.65-0.67. When the audit results were grouped into three levels (lower, medium, or higher range), Krippendorff's α was 0.74, which was considered high reliability. An intra-rater test-retest with a 9-week interval, performed by one dietitian, also showed strong agreement. To conclude, the evaluated audit instrument had high content validity and moderate to high reliability, and can be used in auditing documentation of dietetic care.
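For context, Krippendorff's α contrasts observed with expected disagreement (α = 1 - D_o/D_e). The sketch below is a minimal nominal-level implementation applied to hypothetical ratings, not the study's data:

```python
import numpy as np
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(data):
    """Nominal-level Krippendorff's alpha; rows = raters, columns = units,
    np.nan marks a missing rating. Minimal implementation for illustration."""
    data = np.asarray(data, dtype=float)
    coincidences = Counter()
    n_total = 0
    for u in range(data.shape[1]):
        vals = data[~np.isnan(data[:, u]), u]
        m = len(vals)
        if m < 2:
            continue                          # unit not pairable
        n_total += m
        for vi, vj in permutations(vals, 2):  # ordered pairs within the unit
            coincidences[(vi, vj)] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _), w in coincidences.items():
        n_c[c] += w                           # marginal frequency per category
    d_o = sum(w for (c, k), w in coincidences.items() if c != k) / n_total
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) \
          / (n_total * (n_total - 1))
    return 1.0 - d_o / d_e

# Four raters scoring ten notes into three quality levels (hypothetical).
ratings = [[1, 2, 2, 0, 1, 2, 1, 0, 2, 1],
           [1, 2, 1, 0, 1, 2, 1, 0, 2, 1],
           [1, 2, 2, 0, 2, 2, 1, 0, 2, 0],
           [1, 1, 2, 0, 1, 2, 1, 0, 2, 1]]
print(f"alpha = {krippendorff_alpha_nominal(ratings):.3f}")  # ~0.70
```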
New methods for analyzing semantic graph based assessments in science education
NASA Astrophysics Data System (ADS)
Vikaros, Lance Steven
This research investigated how the scoring of semantic graphs (known by many as concept maps) could be improved and automated in order to address issues of inter-rater reliability and scalability. As part of the NSF funded SENSE-IT project to introduce secondary school science students to sensor networks (NSF Grant No. 0833440), semantic graphs illustrating how temperature change affects water ecology were collected from 221 students across 16 schools. The graphing task did not constrain students' use of terms, as is often done with semantic graph based assessment due to coding and scoring concerns. The graphing software used provided real-time feedback to help students learn how to construct graphs, stay on topic and effectively communicate ideas. The collected graphs were scored by human raters using assessment methods expected to boost reliability, which included adaptations of traditional holistic and propositional scoring methods, use of expert raters, topical rubrics, and criterion graphs. High levels of inter-rater reliability were achieved, demonstrating that vocabulary constraints may not be necessary after all. To investigate a new approach to automating the scoring of graphs, thirty-two different graph features characterizing graphs' structure, semantics, configuration and process of construction were then used to predict human raters' scoring of graphs in order to identify feature patterns correlated to raters' evaluations of graphs' topical accuracy and complexity. Results led to the development of a regression model able to predict raters' scoring with 77% accuracy, with 46% accuracy expected when used to score new sets of graphs, as estimated via cross-validation tests. Although such performance is comparable to other graph and essay based scoring systems, cross-context testing of the model and methods used to develop it would be needed before it could be recommended for widespread use. Still, the findings suggest techniques for improving the reliability and scalability of semantic graph based assessments without requiring constraint of how ideas are expressed.
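The gap between the 77% and 46% figures reflects in-sample fit versus cross-validated prediction. Here is a minimal sketch of that distinction, using synthetic stand-ins for the 32 graph features rather than the study's data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for the study's 32 graph features: each row is one
# student graph (node count, link count, density, ...); y is a human rater's
# holistic score. Cross-validation estimates out-of-sample accuracy, which
# is typically lower than the in-sample fit.
rng = np.random.default_rng(1)
n_graphs, n_features = 221, 32
X = rng.normal(size=(n_graphs, n_features))
y = X[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=n_graphs)  # toy signal

model = LinearRegression()
r2_insample = model.fit(X, y).score(X, y)
r2_cv = cross_val_score(model, X, y, cv=10, scoring="r2").mean()
print(f"in-sample R^2 = {r2_insample:.2f}, cross-validated R^2 = {r2_cv:.2f}")
```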
Applicability and Limitations of Reliability Allocation Methods
NASA Technical Reports Server (NTRS)
Cruz, Jose A.
2016-01-01
The reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, allocation is often performed at different stages of system design, typically beginning at the conceptual stage. As the system design develops and more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers; applying them without understanding those limitations and assumptions can produce unrealistic results. This report addresses weighting factors and optimal reliability allocation techniques, and identifies the applicability and limitations of each.
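A short worked example of the weighting-factor category (an ARINC-style allocation, assuming a series system; the component names and failure rates below are hypothetical):

```python
# ARINC-style weighting-factor allocation for a series system (hypothetical
# numbers): each component's reliability target is R_sys ** w_i, where w_i is
# its share of the predicted system failure rate, so the individual targets
# multiply back to the system requirement.
failure_rates = {"power": 2e-5, "avionics": 5e-5, "structure": 1e-5}  # per hr
r_system_target = 0.99                       # required system reliability

total = sum(failure_rates.values())
allocations = {name: r_system_target ** (lam / total)
               for name, lam in failure_rates.items()}

for name, r in allocations.items():
    print(f"{name}: R >= {r:.5f}")

product = 1.0
for r in allocations.values():
    product *= r
assert abs(product - r_system_target) < 1e-12  # series product equals target
```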
As Ethics is a Core Attribute of Science, So Geoethics Must Be at the Core of Geoscience
NASA Astrophysics Data System (ADS)
Cronin, V. S.; Bank, C.; Bobrowsky, P. T.; Geissman, J. W.; Kieffer, S. W.; Mogk, D. W.; Palinkas, C. M.; Pappas Maenz, C.; Peppoloni, S.; Ryan, A. M.
2015-12-01
The daily quest of a geoscientist is to seek reliable information about Earth: its history, nature, materials, processes, resources and hazards. In science, reliable information is based on reproducible observations (scientific facts), and includes an estimate of uncertainty. All geoscientists share that basic quest, regardless of whether they wear a lab coat, business suit or field boots at work. All geoscientists also share a responsibility to serve society - the same society that invested in science and education, and thereby enabled the development of geoscience as well as the commercial ventures that utilize geoscience. What does society expect in return for that investment? It just wants the truth, along with a clear indication of the uncertainty. Society needs reliable geoscience information and expertise so that it can make good, informed decisions about resources, risk and our shared environment. Unreliable geoscience information, if represented as valid, might do irreparable harm. The authors represent the International Association for Promoting Geoethics (IAPG, www.geoethics.org), which seeks to develop and advance geoethics worldwide. Geoethics is based on the moral imperative for geoscientists to use our knowledge and expertise about Earth for the benefit of humankind. Informed by the geologic record of the intertwined history of life and our planet, that moral imperative extends beyond our time, our culture, and even our species. Ultimately, Earth is a small lifeboat in space. Geoscientists form the essential interface between our human society and Earth, and we must act for the health and benefit of both. Einstein wrote, "Truth is what stands the test of experience." If geoscientists are unwilling to engage the public and to speak the truth about Earth, who else will assume that role? The challenges we face together - resources, energy, potable water, soil conservation, sea-level rise - are too serious for geoscientists to be mute. Voices motivated by narrow self-interest might fill the void left by our indifference. Our children's children's children will expect us to have done our job in our time: to be honest, to be good scientists, to provide reliable expertise about Earth, to help reorient society toward sustainability, and to pass on a healthy ecosystem to those who follow.
Vaucher, Paul; Cardoso, Isabel; Veldstra, Janet L.; Herzig, Daniela; Herzog, Michael; Mangin, Patrice; Favrat, Bernard
2014-01-01
When facing age-related cerebral decline, older adults are unequally affected by cognitive impairment, without us knowing why. To explore underlying mechanisms and find possible solutions to maintain life-space mobility, there is a need for a standardized behavioral test that relates to behaviors in natural environments. The aim of the project described in this paper was therefore to provide a free, reliable, transparent, computer-based instrument capable of detecting age-related changes in visual processing and cortical functions for the purposes of research into human behavior in computational transportation science. After obtaining content validity and exploring the psychometric properties of the developed tasks, we derived the scoring method for measuring cerebral decline from 106 older drivers aged ≥70 years attending a driving refresher course organized by the Swiss Automobile Association, and tested the instrument's validity against on-road driving performance (Study 1). We then validated the derived method on a new sample of 182 drivers (Study 2). We measured the instrument's reliability by having 17 healthy, young volunteers repeat all tests included in the instrument five times (Study 3), and explored the instrument's underlying psychophysical functions on 47 older drivers (Study 4). Finally, we tested the instrument's responsiveness to alcohol and its effects on performance on a driving simulator in a randomized, double-blinded, placebo, crossover, dose-response validation trial including 20 healthy, young volunteers (Study 5). The developed instrument revealed good psychometric properties related to processing speed. It was reliable (ICC = 0.853), showed a reasonable association with driving performance (R2 = 0.053), and responded to blood alcohol concentrations of 0.5 g/L (p = 0.008). Our results suggest that MedDrive is capable of detecting age-related changes that affect processing speed. These changes nevertheless do not necessarily affect driving behavior.
NASA Astrophysics Data System (ADS)
Farrar, Cathy
As part of the National Science Foundation Science Literacy through Science Journalism (SciJourn) research and development initiative (http://www.scijourn.org; Polman, Saul, Newman, and Farrar, 2008), a quasi-experimental design was used to investigate what impact incorporating science journalism activities had on students' scientific literacy. Over the course of a school year students participated in a variety of activities culminating in the production of science news articles for Scijourner, a regional print and online high school science news magazine. Participating teachers and SciJourn team members collaboratively developed activities focused on five aspects of scientific literacy: placing information into context, recognizing relevance, evaluating factual accuracy, use of multiple credible sources and information seeking processes. This study details the development process for the Scientific Literacy Assessment (SLA) including validity and reliability studies, evaluates student scientific literacy using the SLA, examines student SLA responses to provide a description of high school students' scientific literacy, and outlines implications of the findings in relation to the National Research Council's A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas (2012) and classroom science teaching practices. Scientifically literate adults acting as experts in the assessment development phase informed the creation of a scoring guide that was used to analyze student responses. Experts tended to draw on both their understanding of science concepts and life experiences to formulate answers, paying close attention to scientific factual inaccuracies, sources of information, how new information fit into their view of science and society, as well as targeted strategies for information seeking. Novices (i.e., students), in contrast, tended to ignore factual inaccuracies, showed little understanding about source credibility and suggested unproductive information seeking strategies. However, similar to the experts, novices made references to both scientific and societal contexts. The expert/novice comparison provides a rough description of a developmental continuum of scientific literacy. The findings of this study, including student results and Generalized Linear Mixed Modeling, suggest that the incorporation of science journalism activities focused on STEM issues can improve student scientific literacy. Incorporation of a wide variety of strategies raised scores on the SLA. Teachers who included a writing and revision process that prioritized content had significantly larger gains in student scores. Future studies could broaden the description of high school student scientific literacy as measured by the SLA and provide alternative pathways for developing scientific literacy as envisioned by SciJourn and the NRC Frameworks.
Modified Bayesian Kriging for Noisy Response Problems for Reliability Analysis
2015-01-01
NSI customer service representatives and user support office: NASA Science Internet
NASA Technical Reports Server (NTRS)
1991-01-01
The NASA Science Internet (NSI) was established in 1987 to provide NASA's Offices of Space Science and Applications (OSSA) missions with transparent wide-area data connectivity to NASA's researchers, computational resources, and databases. The NSI Office at NASA/Ames Research Center has the lead responsibility for implementing a total, open networking program to serve the OSSA community. NSI is a full-service communications provider whose services include science network planning, network engineering, applications development, network operations, and network information center/user support services. NSI's mission is to provide reliable high-speed communications to the NASA science community. To this end, the NSI Office manages and operates the NASA Science Internet, a multiprotocol network currently supporting both DECnet and TCP/IP protocols. NSI utilizes state-of-the-art network technology to meet its customers' requirements. The NASA Science Internet interconnects with other national networks including the National Science Foundation's NSFNET, the Department of Energy's ESnet, and the Department of Defense's MILNET. NSI also has international connections to Japan, Australia, New Zealand, Chile, and several European countries. NSI cooperates with other government agencies as well as academic and commercial organizations to implement networking technologies which foster interoperability, improve reliability and performance, increase security and control, and expedite migration to the OSI protocols.
Developing an Instrument of Scientific Literacy Assessment on the Cycle Theme
ERIC Educational Resources Information Center
Rusilowati, Ani; Kurniawati, Lina; Nugroho, Sunyoto E.; Widiyatmoko, Arif
2016-01-01
The purpose of this study is to develop a scientific literacy evaluation instrument and to test its validity, reliability, and characteristics for measuring students' scientific literacy skills using four scientific literacy categories, as follows: science as a body of knowledge (category A), science as a way of thinking (category B), science as a…
ERIC Educational Resources Information Center
Said, Ziad; Summers, Ryan; Abd-El-Khalick, Fouad; Wang, Shuai
2016-01-01
This study assessed students' attitudes toward science in Qatar. A cross-sectional, nationwide probability sample representing all students enrolled in grades 3 through 12 in the various types of schools in Qatar completed the "Arabic Speaking Students' Attitudes toward Science Survey" (ASSASS). The validity and reliability of the…
ERIC Educational Resources Information Center
Mangione, Katherine Anna
2010-01-01
This study was to determine reliability and validity for a two-tiered, multiple- choice instrument designed to identify alternative conceptions in earth science. Additionally, this study sought to identify alternative conceptions in earth science held by preservice teachers, to investigate relationships between self-reported confidence scores and…
Development of Teachers' Attitude Scale towards Science Fair
ERIC Educational Resources Information Center
Tortop, Hasan Said
2013-01-01
This study was conducted to develop a new scale for measuring teachers' attitude towards science fair. Teacher Attitude Scale towards Science Fair (TASSF) is an inventory made up of 19 items and five dimensions. The study included such stages as literature review, the preparation of the item pool and the reliability and validity analysis. First of…
Science Anxiety: Relation with Gender, Year in Chemistry Class, Achievement, and Test Anxiety.
ERIC Educational Resources Information Center
Wynstra, Sharon; Cummings, Corenna
The relationships of science anxiety to measures of achievement, test anxiety, year of chemistry taken, and gender were investigated for high school students; the study also attempted to establish reliability data on the Czerniak Assessment of Science Anxiety (CASA) of L. Chiarelott and C. Czerniak (1987). Subjects were 101 students (45 males and…
Visual Signaling in a High-Search Virtual World-Based Assessment: A SAVE Science Design Study
ERIC Educational Resources Information Center
Nelson, Brian C.; Kim, Younsu; Slack, Kent
2016-01-01
Education policy in the United States centers K-12 assessment efforts primarily on standardized tests. However, such tests may not provide an accurate and reliable representation of what students understand about the complexity of science. Research indicates that students tend to pass science tests, even if they do not understand the concepts…
NASA Astrophysics Data System (ADS)
Kirby, Nicola Frances; Dempster, Edith Roslyn
2014-11-01
The Foundation Programme of the Centre for Science Access at the University of KwaZulu-Natal, South Africa provides access to tertiary science studies to educationally disadvantaged students who do not meet formal faculty entrance requirements. The low number of students proceeding from the programme into mainstream is of concern, particularly given the national imperative to increase participation and levels of performance in tertiary-level science. An attempt was made to understand foundation student performance on one campus of this university, with a view to identifying challenges and opportunities for remediation in the curriculum and in the processes of selection into the programme. A classification and regression tree analysis was used to identify which variables best described student performance. The explanatory variables included biographical and school-history data, performance in selection tests, and socio-economic data pertaining to their year in the programme. The results illustrate the prognostic reliability of the model used to select students, raise concerns about the inefficiency of school performance indicators as a measure of students' academic potential in the Foundation Programme, and highlight the importance of accommodation arrangements and financial support for student success in their access year.
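A minimal sketch of a classification-tree analysis of this kind, with entirely hypothetical student variables and outcome rule (the study's actual variables and data are not reproduced here):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical sketch: predict whether a foundation student proceeds to
# mainstream (1) or not (0) from selection-test and background variables.
# Feature names and the outcome rule are illustrative, not the study's.
rng = np.random.default_rng(2)
n = 200
X = np.column_stack([
    rng.normal(50, 15, n),    # selection-test score
    rng.integers(0, 2, n),    # accommodation (0 = commuting, 1 = residence)
    rng.integers(0, 2, n),    # financial support (0 = none, 1 = funded)
])
y = ((X[:, 0] > 45) & (X[:, 2] == 1)).astype(int)  # toy outcome rule

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
acc = cross_val_score(tree, X, y, cv=5).mean()
print(f"cross-validated accuracy = {acc:.2f}")
```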
NASA Astrophysics Data System (ADS)
Shi, Yu-Fang; Ma, Yi-Yi; Song, Ping-Ping
2018-03-01
System Reliability Theory has been a research hotspot in management science and systems engineering in recent years, and construction reliability is useful for the quantitative evaluation of project management level. Drawing on reliability theory and the target system of engineering project management, a definition of construction reliability is given. Based on fuzzy mathematics theory and language operators, the value space of construction reliability is divided into seven fuzzy subsets; correspondingly, seven membership functions and fuzzy evaluation intervals are obtained through the operation of language operators, which provides the method and parameters for the evaluation of construction reliability. The method is shown to be scientific and reasonable for construction conditions and is a useful attempt at theory and method research on engineering project system reliability.
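A minimal sketch of such a fuzzy partition follows. The abstract fixes only the number of subsets (seven), so the evenly spaced triangular membership functions and labels below are assumptions for illustration:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b over support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

# Seven evenly spaced fuzzy subsets over a [0, 1] reliability scale, from
# "very low" to "very high"; the shapes and labels are assumed, not taken
# from the paper.
labels = ["very low", "low", "fairly low", "medium",
          "fairly high", "high", "very high"]
peaks = np.linspace(0.0, 1.0, 7)

r = 0.78                                  # an assessed construction reliability
grades = {lab: float(triangular(r, p - 1/6, p, p + 1/6))
          for lab, p in zip(labels, peaks)}
best = max(grades, key=grades.get)        # linguistic label with max membership
print(grades, "->", best)
```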
Leonelli, Sabina
2016-12-28
The distributed and global nature of data science creates challenges for evaluating the quality, import and potential impact of the data and knowledge claims being produced. This has significant consequences for the management and oversight of responsibilities and accountabilities in data science. In particular, it makes it difficult to determine who is responsible for what output, and how such responsibilities relate to each other; what 'participation' means and which accountabilities it involves, with regard to data ownership, donation and sharing as well as data analysis, re-use and authorship; and whether the trust placed in automated tools for data mining and interpretation is warranted (especially as data processing strategies and tools are often developed separately from the situations of data use where ethical concerns typically emerge). To address these challenges, this paper advocates a participative, reflexive management of data practices. Regulatory structures should encourage data scientists to examine the historical lineages and ethical implications of their work at regular intervals. They should also foster awareness of the multitude of skills and perspectives involved in data science, highlighting how each perspective is partial and in need of confrontation with others. This approach has the potential to improve not only the ethical oversight for data science initiatives, but also the quality and reliability of research outputs. This article is part of the themed issue 'The ethical impact of data science'.
Instrument for Measuring Thermal Conductivity of Materials at Low Temperatures
NASA Technical Reports Server (NTRS)
Fesmire, James; Sass, Jared; Johnson, Wesley
2010-01-01
With the advance of polymer and other non-metallic material sciences, whole new series of polymeric materials and composites are being created. These materials are being optimized for many different applications, including cryogenic and low-temperature industrial processes. Engineers need these data to perform detailed system designs and enable new design possibilities for improved control, reliability, and efficiency in specific applications. One main area of interest is cryogenic structural elements, fluid-handling components, and other parts, films, and coatings for low-temperature applications. An important thermal property of these new materials is the apparent thermal conductivity (k-value).
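For orientation, the apparent thermal conductivity in a steady-state, one-dimensional measurement follows from Fourier's law, k = QL/(A ΔT). A minimal sketch with hypothetical numbers:

```python
# Apparent thermal conductivity (k-value) from a steady-state, one-dimensional
# heat-flow measurement via Fourier's law: Q = k * A * dT / L. All numbers
# below are hypothetical, for illustration only.
Q = 1.2       # measured heat flow through the specimen (W)
A = 0.05      # specimen cross-sectional area (m^2)
L = 0.025     # specimen thickness (m)
T_hot, T_cold = 293.0, 77.0   # boundary temperatures (K), e.g. ambient to LN2

k = Q * L / (A * (T_hot - T_cold))
print(f"apparent k = {k * 1000:.2f} mW/(m*K)")
```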
HiveScience: A Citizen Science Project for Beekeepers
Over the past decade, beekeepers have been experiencing unacceptably high colony losses while the demand for insect-pollinated crops has tripled during the same time period. Underscoring the need to develop reliable methods for predicting colony health, the United States Departme...
Influence diagrams as oil spill decision science tools
Making inferences on risks to ecosystem services (ES) from ecological crises can be more reliably handled using decision science tools. Influence diagrams (IDs) are probabilistic networks that explicitly represent the decisions related to a problem and evidence of their influence...
The end of the (forensic science) world as we know it? The example of trace evidence
Roux, Claude; Talbot-Wright, Benjamin; Robertson, James; Crispino, Frank; Ribaux, Olivier
2015-01-01
The dominant conception of forensic science as a patchwork of disciplines primarily assisting the criminal justice system (i.e. forensics) is in crisis, or at least shows a series of anomalies and serious limitations. In recent years, symptoms of the crisis have been discussed in a number of reports by various commentators, epitomized without a doubt by the 2009 report by the US National Academies of Sciences (NAS 2009 Strengthening forensic science in the United States: a path forward). The almost generalized adoption of stricter business models in forensic science casework, compounded with ever-increasing normative and compliance processes, is often viewed as the solution to these drawbacks; yet it not only places additional pressures on a discipline that already appears in difficulty, but also induces more fragmentation of the different forensic science tasks, a trend denounced many times by the same NAS report and other similar reviews. One may ask whether these issues are not simply the result of an unfit paradigm. If this is the case, the current problems faced by forensic science may indicate significant future changes for the discipline. To facilitate broader discussion, this presentation focuses on trace evidence, an area that is seminal to forensic science for both epistemological and historical reasons. There is, however, little doubt that this area is currently under siege worldwide. Current and future challenges faced by trace evidence are discussed along with some possible answers. The current situation ultimately presents some significant opportunities to re-invent not only trace evidence but also forensic science. Ultimately, a distinctive, more robust and more reliable science may emerge through rethinking the forensics paradigm built on specialisms, revisiting fundamental forensic science principles and adapting them to the twenty-first century. PMID:26101285
Laibhen-Parkes, Natasha; Kimble, Laura P; Melnyk, Bernadette Mazurek; Sudia, Tanya; Codone, Susan
2018-06-01
Instruments used to assess evidence-based practice (EBP) competence in nurses have been subjective, unreliable, or invalid. The Fresno test was identified as the only instrument to measure all the steps of EBP with supportive reliability and validity data. However, the items and psychometric properties of the original Fresno test are only relevant to measure EBP with medical residents. Therefore, the purpose of this paper is to describe the development of the adapted Fresno test for pediatric nurses, and provide preliminary validity and reliability data for its use with Bachelor of Science in Nursing-prepared pediatric bedside nurses. General adaptations were made to the original instrument's case studies, item content, wording, and format to meet the needs of a pediatric nursing sample. The scoring rubric was also modified to complement changes made to the instrument. Content and face validity, and intrarater reliability of the adapted Fresno test were assessed during a mixed-methods pilot study conducted from October to December 2013 with 29 Bachelor of Science in Nursing-prepared pediatric nurses. Validity data provided evidence for good content and face validity. Intrarater reliability estimates were high. The adapted Fresno test presented here appears to be a valid and reliable assessment of EBP competence in Bachelor of Science in Nursing-prepared pediatric nurses. However, further testing of this instrument is warranted using a larger sample of pediatric nurses in diverse settings. This instrument can be a starting point for evaluating the impact of EBP competence on patient outcomes. © 2018 Sigma Theta Tau International.
Bartels, Meike; Cath, Danielle C.; Boomsma, Dorret I.
2008-01-01
The factor structure of the Dutch translation of the Autism-Spectrum Quotient (AQ; a continuous, quantitative measure of autistic traits) was evaluated with confirmatory factor analyses in a large general population and student sample. The criterion validity of the AQ was examined in three matched patient groups (autism spectrum conditions (ASC), social anxiety disorder, and obsessive–compulsive disorder). A two-factor model, consisting of a “Social interaction” factor and an “Attention to detail” factor, could be identified. The internal consistency and test–retest reliability of the AQ were satisfactory. High total AQ and factor scores were specific to ASC patients. Men scored higher than women, and science students higher than non-science students. The Dutch translation of the AQ is a reliable instrument for assessing autism spectrum conditions. PMID:18302013
The computational challenges of Earth-system science.
O'Neill, Alan; Steenman-Clark, Lois
2002-06-15
The Earth system, comprising atmosphere, ocean, land, cryosphere and biosphere, is an immensely complex system, involving processes and interactions on a wide range of space- and time-scales. To understand and predict the evolution of the Earth system is one of the greatest challenges of modern science, with success likely to bring enormous societal benefits. High-performance computing, along with the wealth of new observational data, is revolutionizing our ability to simulate the Earth system with computer models that link the different components of the system together. There are, however, considerable scientific and technical challenges to be overcome. This paper will consider four of them: complexity, spatial resolution, inherent uncertainty and time-scales. Meeting these challenges requires a significant increase in the power of high-performance computers. The benefits of being able to make reliable predictions about the evolution of the Earth system should, on their own, amply repay this investment.
Ferderer, David A.
2001-01-01
Documented, reliable, and accessible data and information are essential building blocks supporting scientific research and applications that enhance society's knowledge base (fig. 1). The U.S. Geological Survey (USGS), a leading provider of science data, information, and knowledge, is uniquely positioned to integrate science and natural resource information to address societal needs. The USGS Central Energy Resources Team (USGS-CERT) provides critical information and knowledge on the quantity, quality, and distribution of the Nation's and the world's oil, gas, and coal resources. By using a life-cycle model, the USGS-CERT Data Management Project is developing an integrated data management system to (1) promote access to energy data and information, (2) increase data documentation, and (3) streamline product delivery to the public, scientists, and decision makers. The project incorporates web-based technology, data cataloging systems, data processing routines, and metadata documentation tools to improve data access, enhance data consistency, and increase office efficiency.
Development and Validation of an Instrument to Measure University Students' Biotechnology Attitude
NASA Astrophysics Data System (ADS)
Erdogan, Mehmet; Özel, Murat; Uşak, Muhammet; Prokop, Pavol
2009-06-01
The impact of biotechnologies on people's everyday lives continuously increases. Measuring young people's attitudes toward biotechnologies is therefore very important, and the results are useful not only for science curriculum developers and policy makers but also for producers and distributors of genetically modified products. Despite the substantial number of instruments that focus on measuring student attitudes toward biotechnology, the majority have not been rigorously validated. This study deals with the development and validation of an attitude questionnaire toward biotechnology. Detailed information on the development and validation process of the instrument is provided. Data gathered from 326 university students provided evidence for the validity and reliability of the new instrument, which consists of 28 attitude items on a five-point Likert-type scale. It is believed that the instrument will serve as a valuable tool for both instructors and researchers in science education to assess students' biotechnology attitudes.
Approaching Terahertz Range with 3-color Broadband Coherent Raman Micro Spectroscopy
NASA Astrophysics Data System (ADS)
Ujj, Laszlo; Olson, Trevor; Amos, James
The presentation reports recent progress on reliable signal recording and processing using 3-color broadband coherent Raman scattering (3C-BCRS). Signals are generated either from nanoparticle structures on surfaces or from bulk samples, in transmission and in epi-detected mode. Spectra are recorded with narrowband (at 532 nm) and broadband radiation produced by a newly optimized optical parametric oscillator using the signal or idler beams. Vibrational and librational bands are measured over the 0.15-15 THz spectral range from solution and crystalline samples. A volumetric Bragg-filter approach is introduced for recording 3C-BCRS spectra for the first time. The technical limitations and advantages of narrowband filtering relative to the notch-filter technique are clarified. The signal is proportional to the spectral autocorrelation of the broadband radiation; therefore, the present scheme gives a better signal-to-noise ratio than traditional multiplex CRS methods. This makes the automation of model-independent signal processing more reliable for extracting vibrational information, which is crucial in coherent Raman microscopy. Financial support from the Hal Marcus College of Science and Engineering is greatly appreciated.
A novel toolbox for E. coli lysis monitoring.
Rajamanickam, Vignesh; Wurm, David; Slouka, Christoph; Herwig, Christoph; Spadiut, Oliver
2017-01-01
The bacterium Escherichia coli is a well-studied recombinant host organism with a plethora of applications in biotechnology. Highly valuable biopharmaceuticals, such as antibody fragments and growth factors, are currently being produced in E. coli. However, the high metabolic burden during recombinant protein production can lead to cell death, consequent lysis, and undesired product loss. Thus, fast and precise analyzers to monitor E. coli bioprocesses and to retrieve key process information, such as the optimal time point of harvest, are needed. However, such reliable monitoring tools are still scarce to date. In this study, we cultivated an E. coli strain producing a recombinant single-chain antibody fragment in the cytoplasm. In bioreactor cultivations, we purposely triggered cell lysis by pH ramps. We developed a novel toolbox using UV chromatograms as fingerprints and chemometric techniques to monitor these lysis events and used flow cytometry (FCM) as reference method to quantify viability offline. Summarizing, we were able to show that a novel toolbox comprising HPLC chromatogram fingerprinting and data science tools allowed the identification of E. coli lysis in a fast and reliable manner. We are convinced that this toolbox will not only facilitate E. coli bioprocess monitoring but will also allow enhanced process control in the future.
ERIC Educational Resources Information Center
Gill, Clara Joanne Schneberger
2010-01-01
This study attempted to verify points of intersection (POIs) between mathematics and science in the eighth grade Sunshine State Standards (SSS), and to develop a valid and reliable instrument to evaluate these POIs as they were presented in the respective mathematics and science textbooks approved for use in Florida public schools. Shannon and…
Factor Structure and Reliability of Test Items for Saudi Teacher Licence Assessment
ERIC Educational Resources Information Center
Alsadaawi, Abdullah Saleh
2017-01-01
The Saudi National Assessment Centre administers the Computer Science Teacher Test for teacher certification. The aim of this study is to explore gender differences in candidates' scores, and investigate dimensionality, reliability, and differential item functioning using confirmatory factor analysis and item response theory. The confirmatory…
Psychometric survey of nursing competences illustrated with nursing students and apprentices
Reichardt, Christoph; Wernecke, Frances; Giesler, Marianne; Petersen-Ewert, Corinna
2016-09-01
Background: The term competences is discussed differently across scientific disciplines, and there is no internationally or cross-disciplinarily accepted definition of the term. Problem: So far, there are few practical, reliable and valid instruments for measuring general nursing competences. This article describes the adaptation of an instrument for measuring medical competences into one for nursing competences. Method: The measurement quality of the questionnaire was examined using a sample drawn from two different courses of study and from regular nursing apprentices. A further research question was whether the adapted questionnaire can detect change in nursing competences. Reliability and validity were assessed with data from the first measurement point (n = 240); data from the second measurement point, conducted two years later (n = 163), were used to test whether the questionnaire can detect change in nursing competences. Results/Conclusions: The results indicate that the adapted version of the questionnaire is reliable and valid. The questionnaire also detected significant, in part even strong, effects of change in nursing competences (d = 0.17-1.04). It was thus possible to adapt the questionnaire for the measurement of nursing competences.
[Estimators of internal consistency in health research: the use of the alpha coefficient].
da Silva, Franciele Cascaes; Gonçalves, Elizandra; Arancibia, Beatriz Angélica Valdivia; Bento, Gisele Graziele; Castro, Thiago Luis da Silva; Hernandez, Salma Stephany Soleman; da Silva, Rudney
2015-01-01
Academic production in the health field has increased, with growing demands for high quality in high-impact publications. One way to ensure quality is through methods that increase the consistency of data analysis, such as reliability, which, depending on the type of data, can be evaluated with different coefficients, notably the alpha coefficient. Against this background, the present review systematically gathers scientific articles from the last five years that made methodological, psychometric use of the α coefficient as an estimator of internal consistency and reliability in the construction, adaptation and validation of instruments. Studies were identified systematically in the databases BioMed Central Journals, Web of Science, Wiley Online Library, Medline, SciELO, Scopus, Journals@Ovid, BMJ and Springer, using inclusion and exclusion criteria. Data were analyzed by means of triangulation, content analysis and descriptive analysis. Most studies were conducted in Iran (f=3), Spain (f=2) and Brazil (f=2). These studies aimed to test the psychometric properties of instruments, with eight studies using the α coefficient to assess reliability and nine using it to assess internal consistency. All studies were classified as methodological research on the basis of their objectives; four were also classified as correlational and one as descriptive-correlational. It can be concluded that although the α coefficient is widely used as one of the main parameters for assessing the internal consistency of questionnaires in the health sciences, its use as an estimator of the trustworthiness of the methodology and of internal consistency has attracted critiques that should be considered.
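For reference, the coefficient at the centre of this review has a standard closed form (the usual definition, not a formula quoted from the article): for a scale of k items with item-score variances \sigma^2_{Y_i} and total-score variance \sigma^2_X,

\alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X} \right),

which approaches 1 as the items covary strongly relative to their individual variances.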
Mid-level perceptual features distinguish objects of different real-world sizes.
Long, Bria; Konkle, Talia; Cohen, Michael A; Alvarez, George A
2016-01-01
Understanding how perceptual and conceptual representations are connected is a fundamental goal of cognitive science. Here, we focus on a broad conceptual distinction that constrains how we interact with objects: real-world size. Although there appear to be clear perceptual correlates for basic-level categories (apples look like other apples, oranges look like other oranges), the perceptual correlates of broader categorical distinctions are largely unexplored, i.e., do small objects look like other small objects? Because there are many kinds of small objects (e.g., cups, keys), there may be no reliable perceptual features that distinguish them from big objects (e.g., cars, tables). Contrary to this intuition, we demonstrated that big and small objects have reliable perceptual differences that can be extracted by early stages of visual processing. In a series of visual search studies, participants found target objects faster when the distractor objects differed in real-world size. These results held when we broadly sampled big and small objects, when we controlled for low-level features and image statistics, and when we reduced objects to texforms (unrecognizable textures that loosely preserve an object's form). However, this effect was absent when we used more basic textures. These results demonstrate that big and small objects have reliably different mid-level perceptual features, and suggest that early perceptual information about broad-category membership may influence downstream object perception, recognition, and categorization processes. (c) 2015 APA, all rights reserved.
Karimi, Fatemeh Zahra; Alesheikh, Aytay; Pakravan, Soheila; Abdollahi, Mahbubeh; Damough, Mozhdeh; Anbaran, Zahra Khosravi; Farahani, Leila Amiri
2017-10-01
In the medical sciences, commitment to lifelong learning is regarded as an essential element. Today, owing to the rapid development of medical information and technology, lifelong learning is critical for safe medical care and for progress in medical research. The JeffSPLL is one of the scales for measuring lifelong learning among medical sciences staff and had never been used in Iran. The aim of the present study was to determine the factor structure and reliability of the Persian version of the JeffSPLL among Persian-speaking staff of universities of medical sciences in Iran. This was a methodological, cross-sectional study conducted in 2012-2013, in which 210 staff members of Birjand University of Medical Sciences were selected. The data collection tool was the Persian version of the JeffSPLL. Confirmatory factor analysis was used to investigate the factor structure of the instrument, and model fit was evaluated with goodness-of-fit indices: the root mean square error of approximation (RMSEA), the ratio of chi-square to its degrees of freedom, the comparative fit index (CFI), and the root mean square residual (RMR). Cronbach's alpha was employed to investigate reliability. Data analysis was conducted using LISREL 8.8 and SPSS 20 software. Confirmatory factor analysis showed that the RMSEA was close to 0.1 and the CFI and GFI were close to one; therefore, the four-factor model was considered appropriate. Cronbach's alpha was 0.92 for the whole instrument and between 0.82 and 0.89 for the subscales. The present study verified the four-factor structure of the 19-item Persian version of the JeffSPLL, comprising professional learning beliefs and motivation, scholarly activities, attention to learning opportunities, and technical skills in information seeking. In addition, the instrument has acceptable reliability and is therefore appropriate for assessing lifelong learning in the Persian-speaking staff population.
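For readers unfamiliar with the fit indices named above, the conventional definitions (general psychometric formulas, not equations reported in the study) are, for a model chi-square \chi^2_M on df_M degrees of freedom, a baseline-model chi-square \chi^2_B on df_B, and sample size N,

\mathrm{RMSEA} = \sqrt{ \frac{\max(\chi^2_M - df_M, \, 0)}{df_M (N - 1)} }, \qquad \mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M, \, 0)}{\max(\chi^2_B - df_B, \, 0)},

with RMSEA near zero and CFI near one indicating good fit, which is how the study reads its results.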
Measuring Graph Comprehension, Critique, and Construction in Science
NASA Astrophysics Data System (ADS)
Lai, Kevin; Cabrera, Julio; Vitale, Jonathan M.; Madhok, Jacquie; Tinker, Robert; Linn, Marcia C.
2016-08-01
Interpreting and creating graphs plays a critical role in scientific practice. The K-12 Next Generation Science Standards call for students to use graphs for scientific modeling, reasoning, and communication. To measure progress on this dimension, we need valid and reliable measures of graph understanding in science. In this research, we designed items to measure graph comprehension, critique, and construction and developed scoring rubrics based on the knowledge integration (KI) framework. We administered the items to over 460 middle school students. We found that the items formed a coherent scale and had good reliability using both item response theory and classical test theory. The KI scoring rubric showed that most students had difficulty linking graphs features to science concepts, especially when asked to critique or construct graphs. In addition, students with limited access to computers as well as those who speak a language other than English at home have less integrated understanding than others. These findings point to the need to increase the integration of graphing into science instruction. The results suggest directions for further research leading to comprehensive assessments of graph understanding.
The logical foundations of forensic science: towards reliable knowledge.
Evett, Ian
2015-08-05
The generation of observations is a technical process, and the advances that have been made in forensic science techniques over the last 50 years have been staggering. But science is about reasoning: about making sense of observations. For the forensic scientist, this is the challenge of interpreting a pattern of observations within the context of a legal trial. Here too, there have been major advances over recent years, and there is a broad consensus among serious thinkers, both scientific and legal, that the logical framework is furnished by Bayesian inference (Aitken et al. Fundamentals of Probability and Statistical Evidence in Criminal Proceedings). This paper shows how the paradigm has matured, centred on the notion of the balanced scientist. Progress through the courts has not always been smooth, and difficulties arising from recent judgments are discussed. Nevertheless, the future holds exciting prospects, in particular the opportunities for managing and calibrating the knowledge of the forensic scientists who assign the probabilities that are at the foundation of logical inference in the courtroom. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Halliday, Drew W R; MacDonald, Stuart W S; Scherf, K Suzanne; Tanaka, James W
2014-01-01
Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals.
Psychometrics Matter in Health Behavior: A Long-term Reliability Generalization Study.
Pickett, Andrew C; Valdez, Danny; Barry, Adam E
2017-09-01
Despite numerous calls for increased understanding and reporting of reliability estimates, social science research, including the field of health behavior, has been slow to respond and adopt such practices. Therefore, we offer a brief overview of reliability and common reporting errors; we then perform analyses to examine and demonstrate the variability of reliability estimates by sample and over time. Using meta-analytic reliability generalization, we examined the variability of coefficient alpha scores for a well-designed, consistent, nationwide health study, covering a span of nearly 40 years. For each year and sample, reliability varied. Furthermore, reliability was predicted by a sample characteristic that differed among age groups within each administration. We demonstrated that reliability is influenced by the methods and individuals from which a given sample is drawn. Our work echoes previous calls that psychometric properties, particularly reliability of scores, are important and must be considered and reported before drawing statistical conclusions.
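To make the sample dependence concrete, here is a minimal sketch (synthetic data and hypothetical years; not the authors' meta-analytic procedure) that re-estimates coefficient alpha for each administration instead of reusing one published value:

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
for year in (1980, 2000, 2017):  # hypothetical administrations
    sample = rng.integers(1, 6, size=(200, 10))  # 200 respondents, 10 Likert items
    # Uncorrelated random responses give alpha near zero; the point is that
    # alpha is a property of each sample's scores, not of the instrument itself.
    print(year, round(cronbach_alpha(sample), 3))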
JOWOG 22/2 - Actinide Chemical Technology (July 9-13, 2012)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, Jay M.; Lopez, Jacquelyn C.; Wayne, David M.
2012-07-05
The Plutonium Science and Manufacturing Directorate provides world-class, safe, secure, and reliable special nuclear material research, process development, technology demonstration, and manufacturing capabilities that support the nation's defense, energy, and environmental needs. We safely and efficiently process plutonium, uranium, and other actinide materials to meet national program requirements, while expanding the scientific and engineering basis of nuclear weapons-based manufacturing, and while producing the next generation of nuclear engineers and scientists. Actinide Process Chemistry (NCO-2) safely and efficiently processes plutonium and other actinide compounds to meet the nation's nuclear defense program needs. All of our processing activities are done in a world-class and highly regulated nuclear facility. NCO-2's plutonium processing activities consist of direct oxide reduction, metal chlorination, americium extraction, and electrorefining. In addition, NCO-2 uses hydrochloric and nitric acid dissolutions for both plutonium processing and reduction of hazardous components in the waste streams. Finally, NCO-2 is a key team member in the processing of plutonium oxide from disassembled pits and the subsequent stabilization of plutonium oxide for safe and stable long-term storage.
NASA Astrophysics Data System (ADS)
Gordov, Evgeny; Lykosov, Vasily; Krupchatnikov, Vladimir; Okladnikov, Igor; Titov, Alexander; Shulgina, Tamara
2013-04-01
Analysis of the growing volume of climate-change-related data from sensors and model outputs requires collaborative multidisciplinary efforts of researchers. To do this in a timely and reliable way, a modern information-computational infrastructure supporting integrated studies in the environmental sciences is needed. The recently developed experimental software and hardware platform Climate (http://climate.scert.ru/) provides the required environment for regional climate change investigations. The platform combines a modern web 2.0 approach, GIS functionality, and capabilities to run climate and meteorological models, process large geophysical datasets and support relevant analysis. It also supports joint software development by distributed research groups and the organization of thematic education for students and post-graduate students. In particular, the platform software includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Runs of the integrated WRF and «Planet Simulator» models, preprocessing of modeling results, and visualization are also provided. All functions of the platform are accessible to a user through a web portal using a common graphical web browser, in the form of an interactive graphical user interface that provides, in particular, capabilities for selecting a geographical region of interest (pan and zoom), manipulating data layers (order, enable/disable, feature extraction) and visualizing results. The platform provides users with capabilities for heterogeneous geophysical data analysis, including high-resolution data, and for discovering tendencies in climatic and ecosystem changes in the framework of different multidisciplinary researches. Even an unskilled user without specific knowledge can use it to perform reliable computational processing and visualization of large meteorological, climatic and satellite monitoring datasets through the unified graphical web interface. Partial support of RF Ministry of Education and Science grant 8345, SB RAS Program VIII.80.2, Projects 69, 131, 140, and APN project CBA2012-16NSY is acknowledged.
Evaluating Measurement Tools in Science Education Research
ERIC Educational Resources Information Center
Hayward, Elizabeth O.
2012-01-01
In this paper I explore how Margaret Beier, Lesley Miller, and Shu Wang make claims for the validity and reliability of the instrument they developed to explore the construct of "possible selves" as described in their manuscript, "Science Games and the Development of Scientific Possible Selves."
NASA Astrophysics Data System (ADS)
Price, Aaron
2010-01-01
Citizen Sky is a new three-year astronomical citizen science project launched in June 2009 with funding from the National Science Foundation. This paper reports on early results of an assessment delivered to 1000 participants when they first joined the project. The goal of the assessment, based on the Nature of Scientific Knowledge Scale (NSKS), is to characterize their attitudes towards the nature of scientific knowledge. The NSKS components of the assessment achieved high levels of reliability; both reliability and overall scores fall within the range reported from other NSKS studies in the literature. Correlation analysis with other components of the assessment reveals that some factors, such as age and understanding of scientific evidence, may be reflected in scores on subscales of NSKS items. Further work will be done using online discourse analysis and interviews. Overall, we find that the NSKS can be used as an entrance assessment for an online citizen science project.
NASA Technical Reports Server (NTRS)
Curreri, Peter A.
2005-01-01
This tutorial is a primer on the motivational and materials science basis for utilizing space resources to lower the cost and increase the safety and reliability of human systems beyond Earth's orbit. Past research in materials processing in orbit will be briefly reviewed to emphasize the challenges and advantages inherent in processing materials in space. Data on resource availability from human Lunar and robotic/sensor missions beyond the Moon will be overviewed for resource relevance to human exploration and development of space. Specific scenarios such as propellant production on the Moon and Mars, and lunar photovoltaic power production from in-situ materials will be discussed in relation to exploration and commercialization of space. A conclusion will cover some of the visionary proposals for the use of space resources to extend human society and prosperity beyond Earth.
An Interdisciplinary Network Making Progress on Climate Change Communication
NASA Astrophysics Data System (ADS)
Spitzer, W.; Anderson, J. C.; Bales, S.; Fraser, J.; Yoder, J. A.
2012-12-01
Public understanding of climate change continues to lag far behind the scientific consensus not merely because the public lacks information, but because there is in fact too much complex and contradictory information available. Fortunately, we can now (1) build on careful empirical cognitive and social science research to understand what people already value, believe, and understand; and then (2) design and test strategies for translating complex science so that people can examine evidence, make well-informed inferences, and embrace science-based solutions. Informal science education institutions can help bridge the gap between climate scientists and the public. In the US, more than 1,500 informal science venues (science centers, museums, aquariums, zoos, nature centers, national parks, etc.) are visited annually by 61% of the population. Extensive research shows that these visitors are receptive to learning about climate change and trust these institutions as reliable sources. Ultimately, we need to take a strategic approach to the way climate change is communicated. An interdisciplinary approach is needed to bring together three key areas of expertise (as recommended by Pidgeon and Fischhoff, 2011): 1. Climate and decision science experts - who can summarize and explain what is known, characterize risks, and describe appropriate mitigation and adaptation strategies; 2. Social scientists - who can bring to bear research, theory, and best practices from cognitive, communication, knowledge acquisition, and social learning theory; and 3. Informal educators and program designers - who bring a practitioner perspective and can exponentially facilitate a learning process for additional interpreters. With support from an NSF CCEP Phase I grant, we have tested this approach, bringing together interdisciplinary teams of colleagues in five-month "study circles" to develop skills to communicate climate change based on research in the social and cognitive sciences. In 2011, social scientists, Ph.D. students studying oceanography, and staff from more than 20 institutions that teach science to the public came together in these learning groups. Most participants were motivated to create new or revised training or public programs based on lessons learned together. The success of this program rests on a twofold approach that combines collaborative learning with a communications approach grounded in cognitive and social science research. The learning process built trust and encouraged experimentation among co-learners as they practiced communication applications, and this has continued beyond the study circle experience through the networks established during the process. Examples drawn from the study circle outputs suggest that this approach could have a transformative impact on informal science education on a broad scale. Ultimately, we envision informal science interpreters as "vectors" for effective science communication, ocean and climate scientists with enhanced communication skills, and increased public demand for explanation and dialogue about global issues.
U.S. Geological Survey energy and minerals science strategy
Ferrero, Richard C.; Kolak, Jonathan J.; Bills, Donald J.; Bowen, Zachary H.; Cordier, Daniel J.; Gallegos, Tanya J.; Hein, James R.; Kelley, Karen D.; Nelson, Philip H.; Nuccio, Vito F.; Schmidt, Jeanine M.; Seal, Robert R.
2012-01-01
The economy, national security, and standard of living of the United States depend heavily on adequate and reliable supplies of energy and mineral resources. Based on current population and consumption trends, the Nation's use of energy and minerals can be expected to grow, driving the demand for ever broader scientific understanding of resource formation, location, and availability. In addition, the increasing importance of environmental stewardship, human health, and sustainable growth place further emphasis on energy and mineral resources research and understanding. Collectively, these trends in resource demand and the interconnectedness among resources will lead to new challenges and, in turn, require cutting-edge science for the next generation of societal decisions. The contributions of the U.S. Geological Survey to energy and minerals research are well established. Based on five interrelated goals, this plan establishes a comprehensive science strategy. It provides a structure that identifies the most critical aspects of energy and mineral resources for the coming decade. * Goal 1. - Understand fundamental Earth processes that form energy and mineral resources. * Goal 2. - Understand the environmental behavior of energy and mineral resources and their waste products. * Goal 3. - Provide inventories and assessments of energy and mineral resources. * Goal 4. - Understand the effects of energy and mineral development on natural resources. * Goal 5. - Understand the availability and reliability of energy and mineral resource supplies. Within each goal, multiple, scalable actions are identified. The level of specificity and complexity of these actions varies, consistent with the reality that even a modest refocus can yield large payoffs in the near term whereas more ambitious plans may take years to reach fruition. As such, prioritization of actions is largely dependent on policy direction, available resources, and the sequencing of prerequisite steps that will lead up to the most visionary directions. The science strategy stresses early planning and places an emphasis on interdisciplinary collaboration and leveraging of expertise across the U.S. Geological Survey.
Reliability and Validity of Rubrics for Assessment through Writing
ERIC Educational Resources Information Center
Rezaei, Ali Reza; Lovorn, Michael
2010-01-01
This experimental project investigated the reliability and validity of rubrics in assessment of students' written responses to a social science "writing prompt". The participants were asked to grade one of the two samples of writing assuming it was written by a graduate student. In fact both samples were prepared by the authors. The…
Turkish Adaptation of the Mentorship Effectiveness Scale: A Validity and Reliability Study
ERIC Educational Resources Information Center
Yirci, Ramazan; Karakose, Turgut; Uygun, Harun; Ozdemir, Tuncay Yavuz
2016-01-01
The purpose of this study is to adapt the Mentoring Relationship Effectiveness Scale to Turkish, and to conduct validity and reliability tests regarding the scale. The study group consisted of 156 university science students receiving graduate education. Construct validity and factor structure of the scale was analyzed first through exploratory…
Composing, Analyzing and Validating Software Models
NASA Astrophysics Data System (ADS)
Sheldon, Frederick T.
1998-10-01
This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Group). The principal work this summer has been to review and refine the agenda that was carried forward from last summer. Formal specifications provide good support for designing a functionally correct system; however, they are weak at incorporating non-functional performance requirements (like reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.
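To ground the SPN formalism mentioned above, here is a minimal sketch (an illustration of the general technique, not the tool developed in this work) of a stochastic Petri net executed under the usual race policy: every enabled transition samples an exponential delay from its rate, and the fastest one fires.

import random

# A minimal stochastic Petri net: places hold tokens; each transition has
# input/output places and an exponential firing rate.
places = {"idle": 1, "busy": 0, "failed": 0}
transitions = [
    # (name, input places, output places, rate)
    ("start",  {"idle": 1}, {"busy": 1},   2.0),
    ("finish", {"busy": 1}, {"idle": 1},   1.0),
    ("fail",   {"busy": 1}, {"failed": 1}, 0.1),
]

def enabled(inputs):
    return all(places[p] >= n for p, n in inputs.items())

t, horizon = 0.0, 100.0
while t < horizon:
    candidates = [(random.expovariate(rate), ins, outs)
                  for name, ins, outs, rate in transitions if enabled(ins)]
    if not candidates:
        break  # dead marking: the token was absorbed in "failed"
    delay, ins, outs = min(candidates, key=lambda c: c[0])
    t += delay
    for p, n in ins.items():
        places[p] -= n
    for p, n in outs.items():
        places[p] += n

print(places, round(t, 1))  # final marking and simulated time

The "fail" transition makes the reliability connection explicit: a run ends either at the time horizon or in a dead marking once the token is absorbed in the failed place.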
Assessing clinical competency in the health sciences
NASA Astrophysics Data System (ADS)
Panzarella, Karen Joanne
To test the success of integrated curricula in schools of health sciences, meaningful measurements of student performance are required to assess clinical competency. This research project analyzed a new performance assessment tool, the Integrated Standardized Patient Examination (ISPE), for assessing clinical competency: specifically, Doctor of Physical Therapy (DPT) students' clinical competence, defined as the ability to integrate basic science knowledge with clinical communication skills. Thirty-four DPT students performed two ISPE cases, one of a patient who had sustained a stroke and the other of a patient with a herniated lumbar disc. Cases were portrayed by standardized patients (SPs) in a simulated clinical setting. Each case was scored by an expert evaluator in the exam room and then, via videotape, by one investigator and by the students themselves. The SPs scored each student on an overall encounter rubric. Written feedback was obtained from all participants in the study. Acceptable reliability was demonstrated via inter-rater agreement as well as inter-rater correlations on items that used a dichotomous scale, whereas the items requiring the 4-point rubric were somewhat less reliable. For the entire scale, both cases showed a significant correlation between the expert-investigator pair of raters: for the CVA case, r = .547, p < .05, and for the HD case, r = .700, p < .01. The SPs scored students higher than the other raters did. Students' self-assessments were most closely aligned with the investigator's ratings. Effects due to case were apparent. Content validity was established in the process of developing the cases and patient scenarios used in this study; construct validity evidence was obtained from the survey results from the experts and students. Future studies should examine the effect of rater training on reliability. Criterion or predictive validity could be studied further by comparing students' performances on the ISPE with other independent estimates of students' competence. The unique integration questions of the ISPE were judged by experts and students to have good content validity, suggesting that integration, a most crucial element of clinical competence, while done in the mind of the student, can be practiced, learned and assessed.
Conflicts of interest in medical science: peer usage, peer review and 'CoI consultancy'.
Charlton, Bruce G
2004-01-01
In recent years, the perception has grown that conflicts of interest are having a detrimental effect on medical science as it influences health policy and clinical practice, leading medical journals to enforce self-declaration of potential biases in the attempt to counteract or compensate for the problem. Conflict of interest (CoI) declarations have traditionally been considered inappropriate in pure science since its evaluation systems themselves constitute a mechanism for eliminating the effect of individual biases. Pure science is primarily evaluated by 'peer usage', in which scientific information is 'replicated' by being incorporated in the work of other scientists, and tested by further observation of the natural world. Over the long-term, the process works because significant biases impair the quality of science, and bad science tends to be neglected or refuted. However, scientific evaluation operates slowly over years and decades, and only a small proportion of published work is ever actually evaluated. But most of modern medical science no longer conforms to the model of pure science, and may instead be conceptualized as a system of 'applied' science having different aims and evaluation processes. The aim of applied medical science is to solve pre-specified problems, and to provide scientific information ready for implementation immediately following publication. The primary evaluation process of applied science is peer review, not peer usage. Peer review is much more rapid (with a timescale of weeks or months) and cheaper than peer usage and (consequently) has a much wider application: peer review is a prospective validation while peer usage is retrospective. Since applied science consists of incremental advances on existing knowledge achieved using established techniques, its results can usually be reliably evaluated by peer review. However, despite its considerable convenience, peer review has significant limitations related to its reliance on opinion. One major limitation of peer review has proved to be its inability to deal with conflicts of interest, especially in a 'big science' context when prestigious scientists may have similar biases, and conflicts of interest are widely shared among peer reviewers. When applied medical science has been later checked against the slower but more valid processes of peer usage, it seems that reliance on peer review may allow damaging distortions to become 'locked-in' to clinical practice and health policy for considerable periods. Scientific progress is generally underpinned by increasing specialization. Medical journals should specialize in the communication of scientific information, and they have neither the resources nor the motivation to investigate and measure conflicts of interest. Effectively dealing with the problem of conflicts of interest in applied medical science firstly requires a more explicit demarcation between the communications media of pure medical science and applied medical science. Greater specialization of these activities would then allow distinctive aims and evaluation systems to evolve with the expectation of improved performance in both pure and applied systems. In future, applied medical science should operate with an assumption of bias, with the onus of proof on applied medical scientists to facilitate the 'data transparency' necessary to validate their research. 
Journals of applied medical science will probably require more rigorous processes of peer review than at present, since their publications are intended to be ready for implementation. But since peer review does not adequately filter out conflicts of interest in applied medical science, there is a need for the evolution of specialist post-publication institutional mechanisms. The suggested solution is to encourage the establishment of independent 'CoI consultancy' services, whose role would be to evaluate conflicts of interest and other biases in published applied medical science prior to implementation. Such services would be paid for by the groups who intend to implement applied medical research.
NASA Technical Reports Server (NTRS)
Zanley, Nancy L.
1991-01-01
The NASA Science Internet (NSI) Network Operations Staff is responsible for providing reliable communication connectivity for the NASA science community. As the NSI user community expands, so does the demand for greater interoperability with users and resources on other networks (e.g., NSFnet, ESnet), both nationally and internationally. Coupled with the science community's demand for greater access to other resources is the demand for more reliable communication connectivity. Recognizing this, the NASA Science Internet Project Office (NSIPO) expands its Operations activities. By January 1990, Network Operations was equipped with a telephone hotline, and its staff was expanded to six Network Operations Analysts. These six analysts provide 24-hour-a-day, 7-day-a-week coverage to assist site managers with problem determination and resolution. The NSI Operations staff monitors network circuits and their associated routers. In most instances, NSI Operations diagnoses and reports problems before users realize a problem exists. Monitoring of the NSI TCP/IP Network is currently being done with Proteon's Overview monitoring system. The Overview monitoring system displays a map of the NSI network utilizing various colors to indicate the conditions of the components being monitored. Each node or site is polled via the Simple Network Monitoring Protocol (SNMP). If a circuit goes down, Overview alerts the Network Operations staff with an audible alarm and changes the color of the component. When an alert is received, Network Operations personnel immediately verify and diagnose the problem, coordinate repair with other networking service groups, track problems, and document problem and resolution into a trouble ticket data base. NSI Operations offers the NSI science community reliable connectivity by exercising prompt assessment and resolution of network problems.
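As a sketch of the polling-and-alerting loop described above (illustrative only: the site names are hypothetical, and a simple ICMP reachability check stands in for the SNMP status poll that the Overview system actually performed):

import subprocess
import time

SITES = ["site-a.example.gov", "site-b.example.gov"]  # hypothetical hosts
state = {s: True for s in SITES}  # assume each site starts up

def reachable(host):
    # One ICMP echo with a 2-second timeout (Unix ping flags; adjust per platform).
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

while True:
    for site in SITES:
        up = reachable(site)
        if up != state[site]:  # state change: alert the operations staff
            print("ALERT:", site, "is now", "UP" if up else "DOWN")
            state[site] = up
    time.sleep(60)  # poll interval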
The Preschool Rating Instrument for Science and Mathematics (PRISM)
ERIC Educational Resources Information Center
Brenneman, Kimberly; Stevenson-Garcia, Judi; Jung, Kwanghee; Frede, Ellen
2011-01-01
Until recently, few valid and reliable assessments were available to measure young children's mathematics and science learning in a "comprehensive" way. Now, a number of mathematics assessments have been developed and subjected to testing (Klein, Starkey, & Wakeley, 2000; Ginsburg, 2008; Clements & Sarama, 2008), and progress has…
NASA Astrophysics Data System (ADS)
Launch vehicle propulsion system reliability considerations during the design and verification processes are discussed. The tools available for predicting and minimizing anomalies or failure modes are described and objectives for validating advanced launch system propulsion reliability are listed. Methods for ensuring vehicle/propulsion system interface reliability are examined and improvements in the propulsion system development process are suggested to improve reliability in launch operations. Also, possible approaches to streamline the specification and procurement process are given. It is suggested that government and industry should define reliability program requirements and manage production and operations activities in a manner that provides control over reliability drivers. Also, it is recommended that sufficient funds should be invested in design, development, test, and evaluation processes to ensure that reliability is not inappropriately subordinated to other management considerations.
Data processing pipeline for Herschel HIFI
NASA Astrophysics Data System (ADS)
Shipman, R. F.; Beaulieu, S. F.; Teyssier, D.; Morris, P.; Rengel, M.; McCoey, C.; Edwards, K.; Kester, D.; Lorenzani, A.; Coeur-Joly, O.; Melchior, M.; Xie, J.; Sanchez, E.; Zaal, P.; Avruch, I.; Borys, C.; Braine, J.; Comito, C.; Delforge, B.; Herpin, F.; Hoac, A.; Kwon, W.; Lord, S. D.; Marston, A.; Mueller, M.; Olberg, M.; Ossenkopf, V.; Puga, E.; Akyilmaz-Yabaci, M.
2017-12-01
Context. The HIFI instrument on the Herschel Space Observatory performed over 9100 astronomical observations, almost 900 of which were calibration observations in the course of the nearly four-year Herschel mission. The data from each observation had to be converted from raw telemetry into calibrated products and were included in the Herschel Science Archive. Aims: The HIFI pipeline was designed to provide robust conversion from raw telemetry into calibrated data throughout all phases of the HIFI missions. Pre-launch laboratory testing was supported as were routine mission operations. Methods: A modular software design allowed components to be easily added, removed, amended and/or extended as the understanding of the HIFI data developed during and after mission operations. Results: The HIFI pipeline processed data from all HIFI observing modes within the Herschel automated processing environment as well as within an interactive environment. The same software can be used by the general astronomical community to reprocess any standard HIFI observation. The pipeline also recorded the consistency of processing results and provided automated quality reports. Many pipeline modules were in use since the HIFI pre-launch instrument level testing. Conclusions: Processing in steps facilitated data analysis to discover and address instrument artefacts and uncertainties. The availability of the same pipeline components from pre-launch throughout the mission made for well-understood, tested, and stable processing. A smooth transition from one phase to the next significantly enhanced processing reliability and robustness. Herschel was an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
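The modular design described above can be pictured with a small sketch (illustrative Python with assumed, simplified step names, not the actual Herschel/HIFI software): when every processing level is a self-contained step sharing one interface, steps are easy to add, remove, amend or extend, and the same chain serves automated and interactive processing alike.

from typing import Callable, List

Step = Callable[[dict], dict]  # each pipeline level maps an observation onward

def subtract_baseline(obs: dict) -> dict:
    obs["spectrum"] = [v - obs.get("baseline", 0.0) for v in obs["spectrum"]]
    return obs

def calibrate_intensity(obs: dict) -> dict:
    gain = obs.get("gain", 1.0)
    obs["spectrum"] = [v / gain for v in obs["spectrum"]]
    return obs

def quality_report(obs: dict) -> dict:
    obs["quality"] = {"n_channels": len(obs["spectrum"])}  # automated QC record
    return obs

PIPELINE: List[Step] = [subtract_baseline, calibrate_intensity, quality_report]

def run(obs: dict, steps: List[Step] = PIPELINE) -> dict:
    for step in steps:  # identical path for automated and interactive reprocessing
        obs = step(obs)
    return obs

print(run({"spectrum": [1.0, 2.0, 3.0], "baseline": 0.5, "gain": 2.0}))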
NASA Technical Reports Server (NTRS)
Atwell, William; Koontz, Steve; Normand, Eugene
2012-01-01
In this paper we review the discovery of cosmic ray effects on the performance and reliability of microelectronic systems and on human health and safety, as well as the development of the engineering and health science tools used to evaluate and mitigate cosmic ray effects in earth surface, atmospheric flight, and space flight environments. Three twentieth century technological developments, 1) high altitude commercial and military aircraft; 2) manned and unmanned spacecraft; and 3) increasingly complex and sensitive solid state micro-electronics systems, have driven an ongoing evolution of basic cosmic ray science into a set of practical engineering tools (e.g. ground based test methods as well as high energy particle transport and reaction codes) needed to design, test, and verify the safety and reliability of modern complex electronic systems as well as effects on human health and safety. The effects of primary cosmic ray particles, and secondary particle showers produced by nuclear reactions with spacecraft materials, can determine the design and verification processes (as well as the total dollar cost) for manned and unmanned spacecraft avionics systems. Similar considerations apply to commercial and military aircraft operating at high latitudes and altitudes near the atmospheric Pfotzer maximum. Even ground based computational and controls systems can be negatively affected by secondary particle showers at the Earth's surface, especially if the net target area of the sensitive electronic system components is large. Accumulation of both primary cosmic ray and secondary cosmic ray induced particle shower radiation dose is an important health and safety consideration for commercial or military air crews operating at high altitude/latitude and is also one of the most important factors presently limiting manned space flight operations beyond low-Earth orbit (LEO).
A 100-Year Review: Cheese production and quality.
Johnson, M E
2017-12-01
In the beginning, cheese making in the United States was all art, but embracing science and technology was necessary to make progress in producing a higher quality cheese. Traditional cheese making could not keep up with the demand for cheese, and the development of the factory system was necessary. Cheese quality suffered because of poor-quality milk, but 3 major innovations changed that: refrigeration, commercial starters, and the use of pasteurized milk for cheese making. Although by all accounts cold storage improved cheese quality, it was the improvement of milk quality, pasteurization of milk, and the use of reliable cultures for fermentation that had the biggest effect. Together with use of purified commercial cultures, pasteurization enabled cheese production to be conducted on a fixed time schedule. Fundamental research on the genetics of starter bacteria greatly increased the reliability of fermentation, which in turn made automation feasible. Demand for functionality, machinability, application in baking, and more emphasis on nutritional aspects (low fat and low sodium) of cheese took us back to the fundamental principles of cheese making and resulted in renewed vigor for scientific investigations into the chemical, microbiological, and enzymatic changes that occur during cheese making and ripening. As milk production increased, cheese factories needed to become more efficient. Membrane concentration and separation of milk offered a solution and greatly enhanced plant capacity. Full implementation of membrane processing and use of its full potential have yet to be achieved. Implementation of new technologies, the science of cheese making, and the development of further advances will require highly trained personnel at both the academic and industrial levels. This will be a great challenge to address and overcome.
The development and validation of a test of science critical thinking for fifth graders.
Mapeala, Ruslan; Siew, Nyet Moi
2015-01-01
The paper described the development and validation of the Test of Science Critical Thinking (TSCT) to measure three critical thinking skill constructs: comparing and contrasting, sequencing, and identifying cause and effect. The initial TSCT consisted of 55 multiple choice test items, each of which required participants to select a correct response and a correct choice of the critical thinking used for their response. Data were obtained from a purposive sample of 30 fifth graders in a pilot study carried out in a primary school in Sabah, Malaysia. Students underwent 9 weeks of teaching and learning activities using the Thinking Maps-aided Problem-Based Learning Module before they answered the TSCT. Analyses were conducted to check the difficulty index (p) and discrimination index (d), internal consistency reliability, content validity, and face validity. Analysis of the test-retest reliability data was conducted separately for a group of fifth graders with similar ability. Findings of the pilot study showed that, of the 55 items initially administered, only 30 items were retained, with relatively good difficulty indices (p) ranging from 0.40 to 0.60 and good discrimination indices (d) ranging from 0.20 to 1.00. The Kuder-Richardson reliability values were appropriate and relatively high: 0.70, 0.73 and 0.92 for identifying cause and effect, sequencing, and comparing and contrasting, respectively. The content validity index obtained from three expert judgments equalled or exceeded 0.95. In addition, test-retest reliability showed good, statistically significant correlations ([Formula: see text]). From the above results, the selected 30-item TSCT was found to have sufficient reliability and validity and would therefore represent a useful tool for measuring critical thinking ability among fifth graders in primary science.
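The item statistics reported here - difficulty index p, discrimination index d, and Kuder-Richardson (KR-20) reliability - are standard classical-test-theory quantities that can be computed directly from a 0/1 scored response matrix. A minimal sketch, assuming dichotomously scored items and an upper/lower 27% split for d (the response data are invented):

```python
import numpy as np

def item_analysis(scores: np.ndarray):
    """Classical item analysis for a (students x items) 0/1 matrix."""
    n_students, n_items = scores.shape
    totals = scores.sum(axis=1)

    # Difficulty index p: proportion of students answering each item correctly.
    p = scores.mean(axis=0)

    # Discrimination index d: upper-group minus lower-group proportion correct,
    # using the top and bottom 27% of students ranked by total score.
    k = max(1, int(round(0.27 * n_students)))
    order = np.argsort(totals)
    lower, upper = scores[order[:k]], scores[order[-k:]]
    d = upper.mean(axis=0) - lower.mean(axis=0)

    # KR-20: internal-consistency reliability for dichotomous items.
    var_total = totals.var(ddof=1)
    kr20 = (n_items / (n_items - 1)) * (1 - (p * (1 - p)).sum() / var_total)
    return p, d, kr20

rng = np.random.default_rng(0)
fake = (rng.random((30, 10)) > 0.4).astype(int)  # made-up responses
p, d, kr20 = item_analysis(fake)
print(p.round(2), d.round(2), round(kr20, 2))
```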
Sweet, Robert M; Hananel, David; Lawrenz, Frances
2010-02-01
To present modern educational psychology theory and apply these concepts to the validity and reliability of surgical skills training and assessment. In a series of cross-disciplinary meetings, we applied a unified approach of behavioral science principles and theory to medical technical skills education, given recent advances in theory in the fields of behavioral psychology and statistics. While validation of the individual simulation tools is important, it is only one piece of a multimodal curriculum that in and of itself deserves examination and study. We propose concurrent validation throughout the design of simulation-based curricula rather than once they are complete. We embrace the concept that validity and curriculum development are interdependent, ongoing processes that are never truly complete. Individual predictive, construct, content, and face validity aspects should not be considered separately but as interdependent and complementary toward an end application. Such an approach could help guide our acceptance and appropriate application of these exciting new training and assessment tools for technical skills training in medicine.
Reliability of a science admission test (HAM-Nat) at Hamburg medical school.
Hissbach, Johanna; Klusmann, Dietrich; Hampe, Wolfgang
2011-01-01
The University Hospital in Hamburg (UKE) started to develop a test of knowledge in the natural sciences for admission to medical school in 2005 (Hamburger Auswahlverfahren für Medizinische Studiengänge, Naturwissenschaftsteil, HAM-Nat). This study is a step towards establishing the HAM-Nat. We investigate parallel-forms reliability, the effect of a crash course in chemistry on test results, and correlations of HAM-Nat test results with a test of scientific reasoning (similar to a subtest of the "Test for Medical Studies", TMS). 316 first-year students participated in the study in 2007. They completed different versions of the HAM-Nat test, which consisted of items that had already been used (HN2006) and new items (HN2007). Four weeks later, half of the participants were tested on the HN2007 version of the HAM-Nat again, while the other half completed the test of scientific reasoning. Within this four-week interval students were offered a five-day chemistry course. Parallel-forms reliability for four different test versions ranged from rtt = .53 to rtt = .67. The retest reliabilities of the HN2007 halves were rtt = .54 and rtt = .61. Correlations of the two HAM-Nat versions with the test of scientific reasoning were r = .34 and r = .21. The crash course in chemistry had no effect on HAM-Nat scores. The results suggest that further versions of the test of natural sciences will not easily conform to the standards of internal consistency, parallel-forms reliability and retest reliability. Much care has to be taken in order to assemble items which can be used interchangeably in the construction of new test versions. The test of scientific reasoning and the HAM-Nat tap different constructs. Participation in the chemistry course did not improve students' achievement, probably because the content of the course was not coordinated with the test and many students lacked motivation to do well in the second test.
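Parallel-forms and retest reliability of the kind reported here are ordinarily computed as Pearson correlations between two forms or administrations. A minimal sketch with invented scores, including the Spearman-Brown projection for a doubled-length test:

```python
import numpy as np

# Two made-up score vectors: the same students on form A and form B.
form_a = np.array([12, 15, 9, 20, 14, 11, 17, 13])
form_b = np.array([11, 16, 10, 18, 15, 10, 16, 12])

# Parallel-forms (or test-retest) reliability: Pearson r between the forms.
r_tt = np.corrcoef(form_a, form_b)[0, 1]
print(f"r_tt = {r_tt:.2f}")

# Spearman-Brown prophecy: expected reliability if the test were doubled
# in length by adding parallel items.
r_doubled = 2 * r_tt / (1 + r_tt)
print(f"doubled-length reliability = {r_doubled:.2f}")
```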
John M. Eisenberg Patient Safety Awards. System innovation: Concord Hospital.
Uhlig, Paul N; Brown, Jeffrey; Nason, Anne K; Camelio, Addie; Kendall, Elise
2002-12-01
The Cardiac Surgery Program at Concord Hospital (Concord, NH) restructured clinical teamwork for improved safety and effectiveness on the basis of theory and practice from human factors science, aviation safety, and high-reliability organization theory. A team-based, collaborative rounds process--the Concord Collaborative Care Model--that involved use of a structured communications protocol was conducted daily at each patient's bedside. The entire care team agreed to meet at the same time each day (8:45 AM to 9:30 AM) to share information and develop a plan of care for each patient, with patient and family members as active participants. The cardiac surgery team developed a structured communications protocol adapted from human factors science. To provide a forum for discussion of team goals and progress and to address system-level concerns, a biweekly system rounds process was established. Following implementation of collaborative rounds, mortality of Concord Hospital's cardiac surgery patients declined significantly from expected rates. Satisfaction scores of open heart patients were consistently in the 97th-99th percentile nationally. A quality-of-work-life survey indicated that in every category, providers expressed greater satisfaction with the collaborative care process than with the traditional rounds process. Practice patterns in the Cardiac Surgery Program at Concord Hospital have changed to a much more collaborative and participatory process, with improved outcomes, happier patients, and more satisfied practitioners. A culture of continuous program improvement has been implemented that continues to evolve and produce benefits.
Reliability and validity in a nutshell.
Bannigan, Katrina; Watson, Roger
2009-12-01
To explore and explain the different concepts of reliability and validity as they relate to measurement instruments in social science and health care. The terms reliability and validity encompass several distinct concepts, which are often explained poorly and frequently confused with one another. To develop some clarity about reliability and validity, a conceptual framework was built based on the existing literature. The concepts of reliability, validity and utility are explored and explained. Reliability contains the concepts of internal consistency, stability, and equivalence. Validity contains the concepts of content, face, criterion, concurrent, predictive, construct, convergent (and divergent), factorial and discriminant validity. In addition, for clinical practice and research, it is essential to establish the utility of a measurement instrument. To use measurement instruments appropriately in clinical practice, the extent to which they are reliable, valid and usable must be established.
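Of the reliability concepts listed, internal consistency is the one most often quantified; for multi-point items the usual statistic is Cronbach's alpha. A minimal sketch with invented Likert-type responses (not data from the article):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented 5-point Likert responses: 6 respondents x 4 items.
likert = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(f"alpha = {cronbach_alpha(likert):.2f}")
```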
NASA Astrophysics Data System (ADS)
Saha, Gouranga Chandra
Very often a number of factors, especially time, space and money, deter many science educators from using inquiry-based, hands-on laboratory practical tasks as alternative assessment instruments in science. A shortage of valid inquiry-based laboratory tasks for high school biology has been cited. Driven by this need, this study addressed the following three research questions: (1) How can laboratory-based performance tasks be designed and developed that are doable by the students for whom they are designed/written? (2) Do student responses to the laboratory-based performance tasks validly represent at least some of the intended process skills that new biology learning goals want students to acquire? (3) Are the laboratory-based performance tasks psychometrically consistent as individual tasks and as a set? To answer these questions, three tasks were used from the six biology tasks initially designed and developed through an iterative process of trial testing. Analyses of data from 224 students showed that performance-based laboratory tasks that are doable by all students require a careful and iterative process of development. Although the students demonstrated more skill in performing than in planning and reasoning, their performance at the item level was very poor for some items. Possible reasons for the poor performance are discussed and suggestions made on how to remediate the deficiencies. Empirical evidence for the validity and reliability of the instrument is presented from both the classical and the modern validity-criteria points of view. Limitations of the study are identified. Finally, implications of the study and directions for further research are discussed.
2016-01-01
The distributed and global nature of data science creates challenges for evaluating the quality, import and potential impact of the data and knowledge claims being produced. This has significant consequences for the management and oversight of responsibilities and accountabilities in data science. In particular, it makes it difficult to determine who is responsible for what output, and how such responsibilities relate to each other; what ‘participation’ means and which accountabilities it involves, with regard to data ownership, donation and sharing as well as data analysis, re-use and authorship; and whether the trust placed on automated tools for data mining and interpretation is warranted (especially as data processing strategies and tools are often developed separately from the situations of data use where ethical concerns typically emerge). To address these challenges, this paper advocates a participative, reflexive management of data practices. Regulatory structures should encourage data scientists to examine the historical lineages and ethical implications of their work at regular intervals. They should also foster awareness of the multitude of skills and perspectives involved in data science, highlighting how each perspective is partial and in need of confrontation with others. This approach has the potential to improve not only the ethical oversight for data science initiatives, but also the quality and reliability of research outputs. This article is part of the themed issue ‘The ethical impact of data science’. PMID:28336799
Putnam, James E.; Hansen, Cristi V.
2014-01-01
As the Nation's principal earth-science information agency, the U.S. Geological Survey (USGS) is depended on to collect data of the highest quality. This document is a quality-assurance plan for groundwater activities (GWQAP) of the Kansas Water Science Center. The purpose of this GWQAP is to establish a minimum set of guidelines and practices to be used by the Kansas Water Science Center to ensure quality in groundwater activities. Included within these practices are the assignment of responsibilities for implementing quality-assurance activities in the Kansas Water Science Center and the establishment of review procedures needed to ensure the technical quality and reliability of the groundwater products. In addition, this GWQAP is intended to complement quality-assurance plans for surface-water and water-quality activities and similar plans for the Kansas Water Science Center and general project activities throughout the USGS. This document provides the framework for collecting, analyzing, and reporting groundwater data that are quality assured and quality controlled. This GWQAP presents policies directing the collection, processing, analysis, storage, review, and publication of groundwater data. In addition, policies related to organizational responsibilities, training, project planning, and safety are presented. These policies and practices pertain to all groundwater activities conducted by the Kansas Water Science Center, including data-collection programs and interpretive and research projects. This report also includes the data management plan, which describes the progression of data management from data collection to archiving and publication.
Huffhines, Lindsay; Tunno, Angela M; Cho, Bridget; Hambrick, Erin P; Campos, Ilse; Lichty, Brittany; Jackson, Yo
2016-08-01
State social service agency case files are a common mechanism for obtaining information about a child's maltreatment history, yet these documents are often challenging for researchers to access and then to process in a manner consistent with the requirements of social science research designs. Specifically, accessing and navigating case files is an extensive undertaking, and a task that many researchers have had to maneuver with little guidance. Even after the files are in hand and the research questions and relevant variables have been clarified, case file information about a child's maltreatment exposure can be idiosyncratic, vague, inconsistent, and incomplete, making it difficult to code such information into useful variables for statistical analyses. The Modified Maltreatment Classification System (MMCS) is a popular tool used to guide the process, but though comprehensive, this coding system cannot cover all idiosyncrasies found in case files. It is not clear from the literature how researchers implement this system while accounting for issues outside the purview of the MMCS or that arise during MMCS use. Finally, a large yet reliable file coding team is essential to the process; however, the literature lacks training guidelines and methods for establishing reliability between coders. In an effort to move the field toward a common approach, the purpose of the present discussion is to detail the process used by one large-scale study of child maltreatment, the Studying Pathways to Adjustment and Resilience in Kids (SPARK) project, a longitudinal study of resilience in youth in foster care. The article addresses each phase of case file coding, from accessing case files, to identifying how to measure constructs of interest, to dealing with exceptions to the coding system, to coding variables reliably, to training large teams of coders and monitoring for fidelity. Implications for a comprehensive and efficient approach to case file coding are discussed.
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Preheim, Larry E.
1990-01-01
Data systems requirements in the Earth Observing System (EOS) and Space Station Freedom (SSF) eras indicate increasing data volume, increased discipline interplay, higher complexity, and broader data integration and interpretation. A response to the needs of the interdisciplinary investigator is proposed, considering the increasing complexity and rising costs of scientific investigation. The EOS Data and Information System, conceived to be a widely distributed system with reliable communication links between central processing and the science user community, is described. Details are provided on information architecture, system models, intelligent data management of large complex databases, and standards for archiving ancillary data, using a research library, a laboratory and collaboration services.
Contemporary Science and Worldview-Making
ERIC Educational Resources Information Center
Cordero, Alberto
2009-01-01
This paper discusses the impact of contemporary scientific knowledge on worldviews. The first three sections provide epistemological background for the arguments that follow. Sections 2 and 3 discuss the reliable part of science, specifically the characterization, scope and limits of the present scientific canon. Section 4 deals with the mode of…
How Reliable is the Temperature Forecast?
ERIC Educational Resources Information Center
Christmann, Edwin P.
2005-01-01
Project 2061 suggests "technology provides the eyes and ears of science--and some of the muscle too. The electronic computer, for example, has led to substantial progress in the study of weather systems...." Obviously, now that teachers have access to a kaleidoscope of technological advancements, middle school science teachers can engage students…
Argonne to lead 8 DOE Grid Modernization Projects
Argonne National Laboratory
Among the projects: Performance and Reliability of Combined Transmission-Distribution with High Solar Penetration.
Understanding Mathematics and Science Matters. Studies in Mathematical Thinking and Learning Series
ERIC Educational Resources Information Center
Romberg, Thomas A., Ed.; Carpenter, Thomas P., Ed.; Dremock, Fae, Ed.
2005-01-01
The research reported in this book provides reliable evidence on and knowledge about mathematics and science instruction that emphasizes student understanding--instruction consistent with the needs of students who will be citizens in an increasingly demanding technological world. The National Center for Improving Student Learning in Mathematics…
Bringing climate sciences to the general public with the Climanosco initiative
NASA Astrophysics Data System (ADS)
Bourqui, Michel; Bolduc, Cassandra; Charbonneau, Paul; Charrière, Marie; Hill, Daniel; Lòpez Gladko, Angélica; Loubet, Enrique; Roy, Philippe; Winter, Barbara
2016-04-01
This paper presents the first months of operation of the scientist-initiated Climanosco.org platform. The goal of this initiative is to bridge climate sciences with the general public by building a network of climate scientists and citizens around the world, by stimulating the writing of quality climate science articles in non-scientific language, and by publishing these articles in an open-access, multilingual format. For the climate scientist, this platform offers a simple and reliable channel for disseminating research results to the general public. High standards are enforced by a) requiring that the main author be an active climate scientist, and b) an innovative peer-review process involving scientific and non-scientific referees with distinct roles. Direct participation of non-scientists is allowed through co-authoring, peer-reviewing, and language translation. Furthermore, public engagement is stimulated by allowing non-scientists to invite manuscripts to be written by scientists on topics of their concern. The targeted public includes journalists, teachers, students, local politicians, economists, members of the agriculture sector, and any other citizens from around the world with an interest in climate sciences. The initiative is now several months into operation. In this paper, I discuss what we have achieved so far and what we plan for the near future.
EOSDIS: Archive and Distribution Systems in the Year 2000
NASA Technical Reports Server (NTRS)
Behnke, Jeanne; Lake, Alla
2000-01-01
Earth Science Enterprise (ESE) is a long-term NASA research mission to study the processes leading to global climate change. The Earth Observing System (EOS) is a NASA campaign of satellite observatories that are a major component of ESE. The EOS Data and Information System (EOSDIS) is another component of ESE that will provide the Earth science community with easy, affordable, and reliable access to Earth science data. EOSDIS is a distributed system, with major facilities at seven Distributed Active Archive Centers (DAACs) located throughout the United States. The EOSDIS software architecture is being designed to receive, process, and archive several terabytes of science data on a daily basis. Thousands of science users and perhaps several hundred thousand non-science users are expected to access the system. The first major set of data to be archived in the EOSDIS is from Landsat-7. Another EOS satellite, Terra, was launched on December 18, 1999. With the Terra launch, the EOSDIS will be required to support approximately one terabyte of data into and out of the archives per day. Since EOS is a multi-mission program, including the launch of more satellites and many other missions, the role of the archive systems becomes larger and more critical. In 1995, at the fourth convening of the NASA Mass Storage Systems and Technologies Conference, the development plans for the EOSDIS information system and archive were described. Five years later, many changes have occurred in the effort to field an operational system. It is interesting to reflect on some of the changes driving the archive technology and system development for EOSDIS. This paper principally describes the Data Server subsystem, including how the other subsystems access the archive, the nature of the data repository, and the mass-storage I/O management. The paper reviews the system architecture (both hardware and software) of the basic components of the archive. It discusses the operations concept, code development, and testing phase of the system. Finally, it describes future plans for the archive.
Connecting Climate Science to Policy: from Global Food Production to the US Supreme Court
NASA Astrophysics Data System (ADS)
Battisti, D. S.
2016-12-01
There are myriad ways climate science has been used to inform on global food security and to affect law and policy. In this talk, I will summarize examples that include the application of El Nino - Southern Oscillation science to improve food security in Indonesia and provide water forecasts for agriculture in northwest Mexico, as well as the application of climate change science to project changes in global grain production. In the latter case, reliable information on the impact of increasing greenhouse gases on growing-season temperature is applied to assess the impact of climate change on average crop yields, on the volatility in crop yields, and on the loss of yield due to increasing pest pressure - all of which have acute implications for agricultural policy. In the US, climate change science was of paramount importance for the Supreme Court decision in the case "Massachusetts vs. EPA," which to this day greatly shapes US policy related to climate change - most notably in setting emission standards for vehicles. My colleagues and I have learned several lessons from our experiences in these applications of climate science that I will share, including some thoughts on the nature of interdisciplinary teams for producing reliable and effective products, and on the professional pros and cons of pursuing applied work.
New Trends in E-Science: Machine Learning and Knowledge Discovery in Databases
NASA Astrophysics Data System (ADS)
Brescia, Massimo
2012-11-01
Data mining, or Knowledge Discovery in Databases (KDD), while being the main methodology to extract the scientific information contained in Massive Data Sets (MDS), needs to tackle crucial problems, since it has to orchestrate complex challenges posed by transparent access to different computing environments, scalability of algorithms, and reusability of resources. To achieve a leap forward for the progress of e-science in the data avalanche era, the community needs to implement an infrastructure capable of performing data access, processing and mining in a distributed but integrated context. The increasing complexity of modern technologies has produced a huge volume of data, and the associated warehouse management and the need to optimize analysis and mining procedures have led to a change in the concept of modern science. Classical data exploration, based on local storage of a user's own data and limited computing infrastructure, is no longer efficient in the case of MDS spread worldwide over inhomogeneous data centres and requiring teraflop processing power. In this context, modern experimental and observational science requires a good understanding of computer science, network infrastructures, data mining, etc., i.e., of all those techniques which fall into the domain of the so-called e-science (recently assessed also by the Fourth Paradigm of Science). Such understanding is almost completely absent in the older generations of scientists, and this is reflected in the inadequacy of most academic and research programs. A paradigm shift is needed: statistical pattern recognition, object-oriented programming, distributed computing, and parallel programming need to become an essential part of the scientific background. A possible practical solution is to provide the research community with easy-to-understand, easy-to-use tools, based on Web 2.0 technologies and machine learning methodology: tools where almost all the complexity is hidden from the final user, but which are still flexible and able to produce efficient and reliable scientific results. All these considerations will be described in detail in the chapter. Moreover, examples of modern applications offering a wide variety of e-science communities a large spectrum of computational facilities to exploit the wealth of available massive data sets and powerful machine learning and statistical algorithms will also be introduced.
NASA Astrophysics Data System (ADS)
Kahveci, Ajda
2010-07-01
In this study, multiple thematically based and quantitative analysis procedures were utilized to explore the effectiveness of Turkish chemistry and science textbooks in terms of their reflection of reform. The themes gender equity, questioning level, science vocabulary load, and readability level provided the conceptual framework for the analyses. An unobtrusive research method, content analysis, was used by coding the manifest content and counting the frequency of words, photographs, drawings, and questions by cognitive level. The context was an undergraduate chemistry teacher preparation program at a large public university in a metropolitan area in northwestern Turkey. Forty preservice chemistry teachers were guided to analyze 10 middle school science and 10 high school chemistry textbooks. Overall, the textbooks included unfair gender representations, a considerably higher number of input- and processing-level than output-level questions, and a high load of science terminology. The textbooks failed to provide sufficient empirical evidence to be considered gender-equitable and inquiry-based. The quantitative approach employed for evaluation contrasts with a more interpretive approach and has the potential to depict textbook profiles in a more reliable way, complementing the commonly employed qualitative procedures. Implications suggest that further work in this line is needed on calibrating the analysis procedures with science textbooks used in different international settings. The procedures could be modified and improved to meet specific evaluation needs. In the Turkish context, a next step for research may be the analysis of science textbooks being rewritten for the reform-based curricula, to make cross-comparisons and evaluate possible progression.
Development of Nonelectronic Part Cyclic Failure Rates
1977-12-01
Water Awareness Scale for Pre-Service Science Teachers: Validity and Reliability Study
ERIC Educational Resources Information Center
Filik Iscen, Cansu
2015-01-01
The role of teachers in the formation of environmentally sensitive behaviors in students is quite high. Thus, the water awareness of teachers, who represent role models for students, is rather important. The main purpose of this study is to identify the reliability and validity study outcomes of the Water Awareness Scale, which was developed to…
Interpreting Variance Components as Evidence for Reliability and Validity.
ERIC Educational Resources Information Center
Kane, Michael T.
The reliability and validity of measurement is analyzed by a sampling model based on generalizability theory. A model for the relationship between a measurement procedure and an attribute is developed from an analysis of how measurements are used and interpreted in science. The model provides a basis for analyzing the concept of an error of…
A Study of Developing an Attitude Scale towards Authentic Learning Environments and Evaluation
ERIC Educational Resources Information Center
Çetinkaya, Murat
2018-01-01
The aim of the research is to develop a valid and reliable attitude scale which identifies science teacher candidates' attitudes towards authentic learning environments and evaluation. The study has been designed around the validity and reliability of the scale developed to evaluate authentic learning environments. The research group is…
ERIC Educational Resources Information Center
Stevens, Christopher John; Dascombe, Ben James
2015-01-01
Sports performance testing is one of the most common and important measures used in sport science. Performance testing protocols must have high reliability to ensure any changes are not due to measurement error or inter-individual differences. High validity is also important to ensure test performance reflects true performance. Time-trial…
A visitor study approach to INGV exhibition at Genova Science Festival 2011
NASA Astrophysics Data System (ADS)
Nave, R.; D'Addezio, G.; Carosi, A.
2012-04-01
The Istituto Nazionale di Geofisica e Vulcanologia (INGV) is one of the largest European scientific institutions dealing with Earth Sciences research and seismic and volcanic surveillance. Every year we organize intense educational and outreach activities, focusing in particular on the causes of earthquakes and volcanic eruptions and on how to behave properly and deal with these events. This approach derives from awareness of the social role of correct information on natural hazards and from the conviction that preparedness is the best way to live with and mitigate natural hazard. The Genova Science Festival, held since 2003, is the most remarkable of the Italian science communication events, and for the 2011 edition INGV created an exhibition called COME E' PROFONDO IL MARE, la geofisica in acqua (HOW DEEP THE SEA IS, geophysics in water). The exhibition shows and explains the main geodynamic processes through interactive exhibits and colorful panels, exploring events such as earthquakes, volcanic eruptions and tsunamis and their impact on our territory. To approach a visitor study of this scientific educational path, we developed questionnaires designed for students, for teachers and for the general public. We chose this survey instrument because of its ability to gather a wide variety of information and quantitative data. In developing the questionnaire, three main aspects were taken into account: brevity, clarity of the questions, and an answer structure able to grade different indicators of visitor opinion and exhibition impact. That also allows us to combine indicator scores during the data elaboration phase. The questionnaire covers all the sections of the educational path, seeking feedback on the proposed layout and its efficacy. The Science Festival lasted 2 weeks and was visited by about 8000 people. During the event about 300 questionnaires were handed out and collected, which allows us to make a reliable assessment of the impact of our exhibition. This first visitor study aims to provide a framework for exhibition impact and its edutainment appeal, and represents a starting point for a wider and more specific study of visitor learning. Surveys of this kind contribute to setting up reliable feedback loops between scientists and end users of natural hazards information that can help close the gap between science and society.
NASA Astrophysics Data System (ADS)
Suhir, E.
2014-05-01
The well-known and widely used experimental reliability "passport" of a mass-manufactured electronic or photonic product — the bathtub curve — reflects the combined contribution of statistics-related and reliability-physics (physics-of-failure)-related processes. As time progresses, the first process results in a decreasing failure rate, while the second process, associated with material aging and degradation, leads to an increasing failure rate. An attempt has been made in this analysis to assess the level of the reliability-physics-related aging process from the available bathtub curve (diagram). It is assumed that the products of interest underwent burn-in testing and therefore the obtained bathtub curve does not contain the infant mortality portion. It has also been assumed that the two random processes in question are statistically independent, and that the failure rate of the physical process can be obtained by deducting the theoretically assessed statistical failure rate from the bathtub curve ordinates. In the carried-out numerical example, the Rayleigh distribution for the statistical failure rate was used, for the sake of a relatively simple illustration. The developed methodology can be used in reliability-physics evaluations, when there is a need to better understand the roles of the statistics-related and reliability-physics-related irreversible random processes in reliability evaluations. Future work should include investigations of how powerful and flexible methods and approaches of statistical mechanics can be effectively employed, in addition to reliability-physics techniques, to model the operational reliability of electronic and photonic products.
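Under the abstract's assumptions - the two processes are statistically independent, so their failure rates add - the physics-related ageing rate is obtained by subtracting the assumed statistical failure rate from the observed bathtub ordinates. A minimal numerical sketch (the bathtub data and the statistical component below are invented; the paper's numerical example used a Rayleigh-based form whose exact expression is not reproduced here):

```python
import numpy as np

# Invented post-burn-in bathtub ordinates lambda_obs(t) (failures per unit time).
t = np.linspace(1.0, 10.0, 10)
lam_obs = np.array([0.60, 0.42, 0.33, 0.30, 0.29, 0.30, 0.34, 0.41, 0.52, 0.68])

# Assumed statistics-related component: decreasing with time, as the abstract
# states. This exponential decay is a stand-in, not the paper's actual form.
lam_stat = 0.55 * np.exp(-t / 3.0)

# Statistical independence => the failure rates add, so the physics-of-failure
# (ageing) component is the difference of the ordinates.
lam_phys = lam_obs - lam_stat
for ti, lo, ls, lp in zip(t, lam_obs, lam_stat, lam_phys):
    print(f"t={ti:4.1f}  obs={lo:.3f}  stat={ls:.3f}  phys={lp:.3f}")
```

With these invented numbers the recovered physics-related rate rises monotonically with time, consistent with an ageing/degradation process.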
NASA Astrophysics Data System (ADS)
Wyborn, L.
2012-04-01
The advent of the petascale era, in both storage and compute facilities, will offer new opportunities for earth scientists to transform the way they do their science and to undertake cross-disciplinary science at a global scale. No longer will data have to be averaged and subsampled: they can be analysed at their fullest resolution at national or even global scales. Much larger data volumes can be analysed in single passes and at higher resolution: large-scale cross-domain science is now feasible. However, in general, the earth sciences have been slow to capitalise on the potential of these new petascale compute facilities: many struggle even to use terascale facilities. Making use of these new facilities will require a vast improvement in the management of the full life cycle of data: in reality it will need to be transformed. Many of our current issues with earth science data are historic and stem from the limitations of early data storage systems. Because storage was so expensive, metadata was usually stored separately from the data and attached as a readme file. Likewise, attributes that defined uncertainty, reliability and traceability were recorded in lab notebooks and rarely stored with the data. Data were routinely transferred as files. The new opportunities mean that the traditional discover, display, download-and-process-locally paradigm is too limited. For data access and assimilation to be improved, data will need to be self-describing. For heterogeneous data to be rapidly integrated, attributes such as reliability, uncertainty and traceability will need to be systematically recorded with each observation. The petascale era also requires that individual data files be transformed and aggregated into calibrated data arrays or data cubes. Standards become critical and are the enablers of integration. These changes are common to almost every science discipline. What makes the earth sciences unique is that many domains record time series data, particularly in the environmental geoscience areas (weathering, soil changes, climate change). The data life cycle will be measured in decades and centuries, not years. Preservation over such time spans is quite a challenge for the earth sciences, as data will have to be managed over many evolutions of software and hardware. The focus has to be on managing the data and not the media. Currently storage is not an issue, but it is predicted that data volumes will soon exceed the effective storage media that can be physically manufactured. This means that organisations will have to think about disposal and destruction of data. For the earth sciences, this will be a particularly sensitive issue. Petascale computing offers many new opportunities to the earth sciences, and by 2020 exascale computers will be a reality. To fully realise these opportunities, the earth sciences need to actively and systematically rethink the ramifications these new systems will have on current practices for data storage, discovery, access and assimilation.
NASA Technical Reports Server (NTRS)
Rakow, Glenn; Schnurr, Richard; Dailey, Christopher; Shakoorzadeh, Kamdin
2003-01-01
NASA's James Webb Space Telescope (JWST) faces difficult technical and budgetary challenges to overcome before its scheduled launch in 2010. The Integrated Science Instrument Module (ISIM) shares these challenges. The major challenge addressed in this paper is the data network used to collect, process, compress and store infrared data. A total of 114 Mbps of raw information must be collected from 19 sources and delivered to the two redundant data processing units across a twenty-meter deployed, thermally restricted interface. Further data must be transferred to the solid-state recorder and the spacecraft. The JWST detectors are kept at cryogenic temperatures to obtain the sensitivity necessary to measure faint energy sources. The Focal Plane Electronics (FPE) that sample the detectors, generate packets from the samples, and transmit these packets to the processing electronics must dissipate little power in order to help keep the detectors at these cold temperatures. Separating the low-powered front-end electronics from the higher-powered processing electronics, and using a simple high-speed protocol to transmit the detector data, minimizes the power dissipation near the detectors. Low Voltage Differential Signaling (LVDS) drivers were considered an obvious choice for the physical layer because of their high speed and low power. The mechanical restriction on the number of cables across the thermal interface forces the image packets to be concentrated on two high-speed links. These links connect the many image packet sources, the Focal Plane Electronics (FPE) located near the cryogenic detectors, to the processing electronics on the spacecraft structure. From 12 to 10,000 seconds of raw data are processed to make up an image; various algorithms integrate the pixel data. Loss of commands to configure the detectors, as well as loss of the science data itself, may cause inefficiencies in the use of the telescope that are unacceptable given the high cost of the observatory. This combination of requirements necessitates a redundant, fault-tolerant, high-speed, low-mass, low-power network with a low bit error rate (1E-9 to 1E-12). The ISIM systems team performed many studies of the various network architectures that meet these requirements. The architecture selected uses the Spacewire protocol, with a new transport and network layer added to implement end-to-end reliable transport. The network and reliable transport mechanism must be implemented in hardware because of the high average information rate and the restriction on the ability of the detectors to buffer data due to power and size restrictions. This network and transport mechanism was designed to be compatible with existing Spacewire links and routers so that existing equipment and designs may be leveraged. The transport layer specification is being coordinated with the European Space Agency (ESA) Spacewire Working Group and the Consultative Committee for Space Data Systems (CCSDS) Standard Onboard Interface (SOIF) panel, with the intent of developing a standard for reliable transport for Spacewire. Changes to the protocol presented are likely, since negotiations are ongoing with these groups. A block of RTL VHDL that implements a multi-port Spacewire router with an external user interface will be developed and integrated with an existing Spacewire link design. The external user interface will be the local interface that sources and sinks packets onto and off of the network (Figure 3).
The external user interface implements the network and transport layers and handles acknowledgements and retries of packets for reliable transport over the network. Because the design is written in RTL, it may be ported to any technology, but it will initially be targeted to the new Actel Accelerator series (AX) part. Each link will run at 160 Mbps and the power will be about 0.165 W per link, worst case, in the Actel AX.
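The end-to-end reliable transport described here layers acknowledgements and retries on top of Spacewire packet delivery. The sketch below is not the actual transport-layer specification (which is implemented in hardware RTL); it is a generic stop-and-wait ack/retry loop over a simulated lossy link, written in Python for readability, with receiver-side duplicate suppression omitted.

```python
import random

random.seed(1)

def lossy_channel(packet, loss_prob=0.2):
    """Simulated unreliable link: returns None when the packet is dropped."""
    return None if random.random() < loss_prob else packet

def send_reliable(payloads, max_retries=10):
    """Stop-and-wait ack/retry: each packet is resent until acknowledged."""
    delivered = []
    for seq, payload in enumerate(payloads):
        for attempt in range(max_retries):
            packet = lossy_channel((seq, payload))
            if packet is None:
                continue  # data packet lost; retry
            ack = lossy_channel(packet[0])  # receiver acks the sequence number
            if ack == seq:
                delivered.append(packet)
                break  # acknowledged; move on to the next packet
        else:
            raise RuntimeError(f"packet {seq} undeliverable after retries")
    return delivered

out = send_reliable([b"img0", b"img1", b"img2"])
print([seq for seq, _ in out])  # sequence numbers delivered in order
```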
Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.
Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander
2018-04-10
Many models have been developed for predicting software reliability. These reliability models are restricted to particular types of methodologies and restricted numbers of parameters. There are a number of techniques and methodologies that may be used for reliability prediction, and there is a need to focus on parameter selection when estimating reliability. The reliability of a system may increase or decrease depending on the parameters selected, so there is a need to identify the factors that most heavily affect the reliability of the system. Nowadays, reusability is widely used across research areas. Reusability is the basis of Component-Based Systems (CBS). Cost, time and human effort can be saved using Component-Based Software Engineering (CBSE) concepts. CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Several possibilities are available for applying soft computing techniques to problems in medicine: clinical medicine uses fuzzy logic and neural network methodologies extensively, while basic medical science most frequently and preferably uses neural networks with genetic algorithms. Medical scientists have shown strong interest in using the various soft computing methodologies in the genetics, physiology, radiology, cardiology and neurology disciplines. CBSE encourages users to reuse past and existing software in making new products, providing quality with savings of time, memory space and money. This paper focuses on the assessment of commonly used soft computing techniques: Genetic Algorithms (GA), Neural Networks (NN), Fuzzy Logic, Support Vector Machines (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). The paper presents the workings of these soft computing techniques and assesses them for reliability prediction. The parameters considered in estimating and predicting reliability are also discussed. This study can be used in estimating and predicting the reliability of various instruments used in medical systems, software engineering, computer engineering and mechanical engineering. These concepts can be applied to both software and hardware to predict reliability using CBSE.
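As one concrete instance of the soft-computing reliability prediction this paper surveys, particle swarm optimization (PSO) can fit the parameters of a software reliability growth model to observed failure counts. The sketch below fits the two-parameter Goel-Okumoto mean-value function mu(t) = a(1 - exp(-b t)) with a bare-bones PSO; the failure data and all tuning constants are invented, and this is a generic illustration rather than the authors' method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented cumulative failure counts observed at times t.
t = np.arange(1, 11, dtype=float)
observed = np.array([8, 15, 21, 26, 30, 33, 35, 37, 38, 39], dtype=float)

def mu(params, tt):
    """Goel-Okumoto mean-value function: expected cumulative failures."""
    a, b = params
    return a * (1.0 - np.exp(-b * tt))

def cost(params):
    return np.sum((mu(params, t) - observed) ** 2)  # sum of squared errors

# Bare-bones particle swarm optimization over (a, b).
n_particles, n_iters = 30, 200
lo, hi = np.array([1.0, 0.01]), np.array([100.0, 2.0])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    c = np.array([cost(p) for p in pos])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

a, b = gbest
print(f"fitted a={a:.1f}, b={b:.3f}; predicted failures by t=20: {mu(gbest, 20.0):.1f}")
```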
NASA Technical Reports Server (NTRS)
Eppler, Dean B.
2013-01-01
The scientific success of any future human lunar exploration mission will be strongly dependent on the design of both the systems and the operations practices that underpin crew operations on the lunar surface. Inept surface mission preparation and design will either ensure poor science return or make achieving quality science operations unacceptably difficult for the crew and the mission operations and science teams. In particular, ensuring a robust system for managing real-time science information flow during surface operations, and ensuring the crews receive extensive field training in the geological sciences, are as critical to mission success as reliable spacecraft and a competent operations team.
2009-01-13
Vandenberg Air Force Base, Calif. – In the Astrotech Payload Processing Facility at Vandenberg Air Force Base in California, a technician monitors data during fueling of NASA's Orbiting Carbon Observatory, or OCO, with hydrazine thruster control propellant. The OCO is a new Earth-orbiting mission sponsored by NASA's Earth System Science Pathfinder Program. The OCO mission will collect precise global measurements of carbon dioxide (CO2) in the Earth's atmosphere. Scientists will analyze OCO data to improve our understanding of the natural processes and human activities that regulate the abundance and distribution of this important greenhouse gas. This improved understanding will enable more reliable forecasts of future changes in the abundance and distribution of CO2 in the atmosphere and the effect that these changes may have on the Earth's climate. The launch of OCO is scheduled for Feb. 23 from Vandenberg. Photo credit: Robert Hargreaves Jr., VAFB
2009-01-13
Vandenberg Air Force Base, Calif. – In the Astrotech Payload Processing Facility at Vandenberg Air Force Base in California, preparations are under way to fuel NASA's Orbiting Carbon Observatory, or OCO, with hydrazine thruster control propellant. The OCO is a new Earth-orbiting mission sponsored by NASA's Earth System Science Pathfinder Program. The OCO mission will collect precise global measurements of carbon dioxide (CO2) in the Earth's atmosphere. Scientists will analyze OCO data to improve our understanding of the natural processes and human activities that regulate the abundance and distribution of this important greenhouse gas. This improved understanding will enable more reliable forecasts of future changes in the abundance and distribution of CO2 in the atmosphere and the effect that these changes may have on the Earth's climate. The launch of OCO is scheduled for Feb. 23 from Vandenberg. Photo credit: Robert Hargreaves Jr., VAFB
NASA Astrophysics Data System (ADS)
Guffey, S. K.; Slater, T. F.; Slater, S. J.
2017-12-01
Discipline-based geoscience education researchers have considerable need for criterion-referenced, easy-to-administer, easy-to-score conceptual diagnostic surveys for undergraduates taking introductory science survey courses, so that faculty can better monitor the learning impacts of various interactive teaching approaches. To support ongoing discipline-based science education research to improve teaching and learning across the geosciences, this study establishes the reliability and validity of a 28-item, multiple-choice, pre- and post- Exam of GeoloGy Standards, hereafter simply called EGGS. The content knowledge EGGS addresses is based on 11 consensus concepts derived from a systematic, thematic analysis of the overlapping ideas presented in national science education reform documents, including the Next Generation Science Standards, the AAAS Benchmarks for Science Literacy, the Earth Science Literacy Principles, and the NRC National Science Education Standards. Using community-agreed-upon best practices for creating, field-testing, and iteratively revising modern multiple-choice test items using classical item analysis techniques, EGGS emphasizes natural student language over technical scientific vocabulary, leverages illustrations over students' reading ability, specifically targets students' misconceptions identified in the scholarly literature, and covers the range of topics most geology educators expect general education students to know at the end of their formal science learning experiences. The current version of EGGS is judged to be valid and reliable for college-level introductory science survey students based on both standard quantitative and qualitative measures, including extensive clinical interviews with targeted students and systematic expert review.
Numerical Databases: Their Vital Role in Information Science.
ERIC Educational Resources Information Center
Carter, G. C.
1985-01-01
In this first of two installments, a description of the interactions of numerical databases (NDBs) for science with the various community sectors highlights data evaluation and the roles that NDBs will likely play. Twenty-four studies and articles dealing with needs for reliable physical, chemical, and mechanical property data are noted. (EJS)
The Need for Nuance in the Null Hypothesis Significance Testing Debate
ERIC Educational Resources Information Center
Häggström, Olle
2017-01-01
Null hypothesis significance testing (NHST) provides an important statistical toolbox, but there are a number of ways in which it is often abused and misinterpreted, with bad consequences for the reliability and progress of science. Parts of the contemporary NHST debate, especially in the psychological sciences, are reviewed, and a suggestion is made…
Incorporating Nonparametric Statistics into Delphi Studies in Library and Information Science
ERIC Educational Resources Information Center
Ju, Boryung; Jin, Tao
2013-01-01
Introduction: The Delphi technique is widely used in library and information science research. However, many researchers in the field fail to employ standard statistical tests when using this technique. This makes the technique vulnerable to criticisms of its reliability and validity. The general goal of this article is to explore how…
Misconceptions of Selected Science Concepts Held by Elementary School Students
ERIC Educational Resources Information Center
Doran, Rodney L.
1972-01-01
Describes a test, administered as a motion picture, designed to measure misconceptions about the particle model of matter held by students in grades two through six. Reliability values for tests of eight misconceptions are given and the correlations of misconception scores with measures of IQ, reading, mathematics, and science ability reported.…
Exposing Students to the Idea that Theories Can Change
ERIC Educational Resources Information Center
Hoellwarth, Chance; Moelter, Matthew J.
2011-01-01
The scientific method is arguably the most reliable way to understand the physical world, yet this aspect of science is rarely addressed in introductory science courses. Students typically learn about the theory in its final, refined form, and seldom experience the experiment-to-theory cycle that goes into producing the theory. One exception to…
Evaluating ATM Technology for Distance Education in Library and Information Science.
ERIC Educational Resources Information Center
Stanford, Serena W.
1997-01-01
Investigates the impact of asynchronous transfer mode (ATM) technology in an interactive environment providing distance education in library and information science at two San Jose State University (California) sites. The main purpose of the study was to develop a reliable and valid evaluation instrument. Contains 6 tables. (Author/AEF)
The Geostationary Operational Satellite R Series SpaceWire Based Data System
NASA Technical Reports Server (NTRS)
Anderson, William; Birmingham, Michael; Krimchansky, Alexander; Lombardi, Matthew
2016-01-01
The Geostationary Operational Environmental Satellite R-Series Program (GOES-R, S, T, and U) mission is a joint program between the National Oceanic & Atmospheric Administration (NOAA) and the National Aeronautics & Space Administration (NASA) Goddard Space Flight Center (GSFC). SpaceWire was selected as the science data bus as well as the command and telemetry bus for the GOES instruments. The GOES-R, S, T, and U spacecraft have a mission data-loss requirement for all data transfers between the instruments and spacecraft, which necessitates error detection and correction at the packet level. The GOES-R Reliable Data Delivery Protocol (GRDDP) [1] was developed in house to provide a means of reliably delivering data among various on-board sources and sinks. The GRDDP was presented to and accepted by the European Cooperation for Space Standardization (ECSS) and is part of the ECSS Protocol Identification Standard [2]. GOES-R development and integration is complete and the observatory is scheduled for launch in November 2016. Now that instrument-to-spacecraft integration is complete, the GOES-R Project has reviewed lessons learned to determine how the GRDDP could be revised to improve the integration process. Based on knowledge gained during the instrument-to-spacecraft integration process, the following is presented to help potential GRDDP users improve their system designs and implementations.
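The abstract does not spell out GRDDP's internal mechanics, but the packet-level error detection and recovery it alludes to can be illustrated generically. The Python sketch below shows a stop-and-wait ack/retransmit pattern with a CRC-32 check; it is a toy illustration of that generic pattern only, not the GRDDP specification, and the `LossyLoopback` link with its `transmit`/`wait_ack` interface is an invented stand-in for a real SpaceWire driver.

```python
import random
import zlib

class LossyLoopback:
    """Hypothetical stand-in for a link driver: delivers the frame to a
    receiver that verifies the CRC, then drops the ack 30% of the time."""
    def __init__(self):
        self.last_acked = None

    def transmit(self, frame):
        body, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
        if zlib.crc32(body) == crc and random.random() > 0.3:
            self.last_acked = int.from_bytes(body[:2], "big")

    def wait_ack(self, timeout):
        ack, self.last_acked = self.last_acked, None
        return ack

MAX_RETRIES = 3

def send_packet(link, seq, payload):
    """Stop-and-wait reliable delivery: CRC-32 for error detection,
    sequence numbers plus retransmission for recovery."""
    frame = seq.to_bytes(2, "big") + payload
    frame += zlib.crc32(frame).to_bytes(4, "big")
    for _ in range(MAX_RETRIES):
        link.transmit(frame)
        if link.wait_ack(timeout=0.1) == seq:
            return True          # receiver verified the CRC and acknowledged
    return False                 # give up; report the loss upward

print(send_packet(LossyLoopback(), seq=7, payload=b"science data"))
```

Real flight protocols add windowing, flow control, and duplicate suppression on top of this basic loop; the sketch shows only the core retransmission idea.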
Network-based statistical comparison of citation topology of bibliographic databases
Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko
2014-01-01
Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, only informal notions of their reliability exist. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of the DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of the DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or as a scientific evaluation guideline for governments and research agencies. PMID:25263231
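For readers who want to reproduce this kind of comparison, local and global statistics like those the study mentions are straightforward to compute once a citation network is loaded as a directed graph. A minimal NetworkX sketch follows; the handful of statistics shown (degree, clustering, the fraction of nodes in the largest strongly connected component, which forms the core of a bow-tie decomposition) is illustrative rather than the paper's full suite, and the toy edge list is invented.

```python
import networkx as nx

def citation_stats(edges):
    """A few local/global statistics one might compare across citation
    networks (a simplified sketch, not the paper's complete analysis)."""
    G = nx.DiGraph(edges)                    # edge u -> v means "u cites v"
    n = G.number_of_nodes()
    largest_scc = max(nx.strongly_connected_components(G), key=len)
    return {
        "nodes": n,
        "edges": G.number_of_edges(),
        "mean_in_degree": sum(d for _, d in G.in_degree()) / n,
        "clustering": nx.average_clustering(G.to_undirected()),
        "largest_scc_frac": len(largest_scc) / n,  # core of a bow-tie decomposition
    }

print(citation_stats([(1, 2), (2, 3), (3, 1), (4, 1)]))
```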
Prestigious Science Journals Struggle to Reach Even Average Reliability
Brembs, Björn
2018-01-01
In which journal a scientist publishes is considered one of the most crucial factors determining their career. The underlying common assumption is that only the best scientists manage to publish in a highly selective tier of the most prestigious journals. However, data from several lines of evidence suggest that the methodological quality of scientific experiments does not increase with increasing rank of the journal. On the contrary, an accumulating body of evidence suggests the inverse: methodological quality and, consequently, reliability of published research works in several fields may be decreasing with increasing journal rank. The data supporting these conclusions circumvent confounding factors such as increased readership and scrutiny for these journals, focusing instead on quantifiable indicators of methodological soundness in the published literature, relying, in part, on semi-automated data extraction from often thousands of publications at a time. With the accumulating evidence over the last decade grew the realization that the very existence of scholarly journals, due to their inherent hierarchy, constitutes one of the major threats to publicly funded science: hiring, promoting and funding scientists who publish unreliable science eventually erodes public trust in science. PMID:29515380
ERIC Educational Resources Information Center
Davis, Andrew
2015-01-01
PISA claims that it can extend its reach from its current core subjects of Reading, Science, Maths and problem-solving. Yet given the requirement for high levels of reliability for PISA, especially in the light of its current high stakes character, proposed widening of its subject coverage cannot embrace some important aspects of the social and…
Establishment of a National Wind Energy Center at University of Houston
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Su Su
The DOE-supported project objectives were to establish a national wind energy center (NWEC) at the University of Houston and to conduct research addressing critical science and engineering issues for the development of future large MW-scale wind energy production systems, especially offshore wind turbines. The goals of the project were to: (1) establish a sound scientific/technical knowledge base of solutions to critical science and engineering issues for developing future MW-scale large wind energy production systems, (2) develop a state-of-the-art wind rotor blade research facility at the University of Houston, and (3) through multi-disciplinary research, introduce technology innovations in advanced wind-turbine materials, processing/manufacturing technology, design and simulation, and testing and reliability assessment methods related to future wind turbine systems for cost-effective production of offshore wind energy. To achieve the goals of the project, the following technical tasks were planned and executed during the period from April 15, 2010 to October 31, 2014 at the University of Houston: (1) basic research on large offshore wind turbine systems; (2) applied research on innovative wind turbine rotors for large offshore wind energy systems; (3) integration of offshore wind-turbine design, advanced materials, and manufacturing technologies; (4) integrity and reliability of large offshore wind turbine blades and scaled model testing; (5) education and training of graduate and undergraduate students and post-doctoral researchers; and (6) development of a national offshore wind turbine blade research facility. The research program addressed both basic science and engineering of current and future large wind turbine systems, especially offshore wind turbines, for MW-scale power generation. The results of the research advance current understanding of many important scientific issues and provide technical information for developing future large wind turbines with advanced design, composite materials, integrated manufacturing, and structural reliability and integrity. The educational program has trained many graduate and undergraduate students and post-doctoral researchers in the critical science and engineering of wind energy production systems through graduate-level courses and research, and through participation in various projects in the center's large multi-disciplinary research portfolio. These students and researchers are now employed by the wind industry, national labs, and universities, supporting the US and international wind energy industry. The national offshore wind turbine blade research facility developed in the project has been used to support the technical and training tasks planned in the program, and it is a national asset available for use by domestic and international researchers in the wind energy arena.
NASA Astrophysics Data System (ADS)
Bundrick, David Ray
The relationship between science and religion in American higher education changed significantly over the past two centuries as empiricism and naturalism became the philosophical underpinnings of the university. This philosophical shift contributed significantly to the secularization of the academy, the context in which philosophers of science during the last half-century have theorized a variety of theoretical patterns for relating science and religion. Evidence suggests that science professors operationalize various science-faith paradigms, but no instrument prior to this research had ever been created to measure the constructs. The purpose of this research was to develop a scale, with at least adequate psychometric properties (good validity and initial reliability), able to identify and discriminate among these various science-faith paradigms (in the Western Christian tradition) in practice among college and university science professors in the United States. The researcher conducted a Web-based electronic survey of a stratified random sample of science professors representing a variety of higher education institution types, science disciplines, and religious affiliation. Principal Components Analysis of the survey data produced five factors predicted by the researcher. These factors correspond to five science-faith paradigms: Conflict---Science over Religion; Conflict---Religion over Science; Compartmentalism; Complementarism; and Concordism. Analysis of items loading on each factor produced a 50-item Science-Faith Paradigm Scale (SFPS) that consists of five sub-scales, each having characteristics of good content validity, construct validity, and initial reliability (Cronbach's alpha ranging from .87 to .95). Preliminary exploratory analysis of differences in SFPS sub-scale scores based on demographic variables indicates that the SFPS is capable of discriminating among groups. This research validates the existence of five science-faith paradigms in practice in the Western Christian tradition, enriches the information base on science-faith paradigms in the academy, and makes possible further research in this subject area. The Science-Faith Paradigm Scale is subject to confirmatory analysis through further research and may be employed voluntarily by science faculty for self-understanding that could lead to more effective communication among science professors and greater appreciation for the diversity of scientific-religious perspectives in American higher education.
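For readers unfamiliar with the reliability statistic reported here, Cronbach's alpha can be computed directly from a respondents-by-items score matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below is a minimal illustration of that standard formula; the Likert responses are invented and this is not the SFPS data or analysis.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented 5-point Likert responses: 4 respondents, 3 items of one sub-scale
print(cronbach_alpha([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3]]))
```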
The end of the (forensic science) world as we know it? The example of trace evidence.
Roux, Claude; Talbot-Wright, Benjamin; Robertson, James; Crispino, Frank; Ribaux, Olivier
2015-08-05
The dominant conception of forensic science as a patchwork of disciplines primarily assisting the criminal justice system (i.e. forensics) is in crisis, or at least shows a series of anomalies and serious limitations. In recent years, symptoms of the crisis have been discussed in a number of reports by various commentators, without a doubt epitomized by the 2009 report by the US National Academies of Sciences (NAS 2009, Strengthening Forensic Science in the United States: A Path Forward). Although needed, and often viewed as the solution to these drawbacks, the almost generalized adoption of stricter business models in forensic science casework, compounded with ever-increasing normative and compliance processes, not only places additional pressures on a discipline that already appears in difficulty, but also induces more fragmentation of the different forensic science tasks, a tenet many times denounced by the same NAS report and other similar reviews. One may ask whether these issues are not simply the result of an unfit paradigm. If this is the case, the current problems faced by forensic science may indicate future significant changes for the discipline. To facilitate broader discussion this presentation focuses on trace evidence, an area that is seminal to forensic science for both epistemological and historical reasons. There is, however, little doubt that this area is currently under siege worldwide. Current and future challenges faced by trace evidence are discussed along with some possible answers. The current situation ultimately presents some significant opportunities to re-invent not only trace evidence but also forensic science. Ultimately, a distinctive, more robust and more reliable science may emerge through rethinking the forensics paradigm built on specialisms, revisiting fundamental forensic science principles and adapting them to the twenty-first century. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Breathing life into fisheries stock assessments with citizen science
Fairclough, D. V.; Brown, J. I.; Carlish, B. J.; Crisafulli, B. M.; Keay, I. S.
2014-01-01
Citizen science offers a potentially cost-effective way for researchers to obtain large data sets over large spatial scales. However, it is not used widely to support biological data collection for fisheries stock assessments. Overfishing of demersal fishes along 1,000 km of the west Australian coast led to restrictive management to recover stocks. This diminished opportunities for scientists to cost-effectively monitor stock recovery via fishery-dependent sampling, particularly of the recreational fishing sector. As fishery-independent methods would be too expensive and logistically-challenging to implement, a citizen science program, Send us your skeletons (SUYS), was developed. SUYS asks recreational fishers to voluntarily donate fish skeletons of important species from their catch to allow biological data extraction by scientists to produce age structures and conduct stock assessment analyses. During SUYS, recreational fisher involvement, sample sizes and spatial and temporal coverage of samples have dramatically increased, while the collection cost per skeleton has declined substantially. SUYS is ensuring sampling objectives for stock assessments are achieved via fishery-dependent collection and reliable and timely scientific advice can be provided to managers. The program is also encouraging public ownership through involvement in the monitoring process, which can lead to greater acceptance of management decisions. PMID:25431103
Seeking high reliability in primary care: Leadership, tools, and organization.
Weaver, Robert R
2015-01-01
Leaders in health care increasingly recognize that improving health care quality and safety requires developing an organizational culture that fosters high reliability and continuous process improvement. For various reasons, a reliability-seeking culture is lacking in most health care settings. Developing a reliability-seeking culture requires leaders' sustained commitment to reliability principles using key mechanisms to embed those principles widely in the organization. The aim of this study was to examine how key mechanisms used by a primary care practice (PCP) might foster a reliability-seeking, system-oriented organizational culture. A case study approach was used to investigate the PCP's reliability culture. The study examined four cultural artifacts used to embed reliability-seeking principles across the organization: leadership statements, decision support tools, and two organizational processes. To decipher their effects on reliability, the study relied on observations of work patterns and the tools' use, interactions during morning huddles and process improvement meetings, interviews with clinical and office staff, and a "collective mindfulness" questionnaire. The five reliability principles framed the data analysis. Leadership statements articulated principles that oriented the PCP toward a reliability-seeking culture of care. Reliability principles became embedded in the everyday discourse and actions through the use of "problem knowledge coupler" decision support tools and daily "huddles." Practitioners and staff were encouraged to report unexpected events or close calls, which often initiated a formal "process change" used to adjust routines and prevent adverse events from recurring. Activities that foster reliable patient care became part of the taken-for-granted routine at the PCP. The analysis illustrates the role leadership, tools, and organizational processes play in developing and embedding a reliability-seeking culture across an organization. Progress toward a reliability-seeking, system-oriented approach to care remains ongoing, and movement in that direction requires deliberate and sustained effort by committed leaders in health care.
Representation and Re-Presentation in Litigation Science
Jasanoff, Sheila
2008-01-01
Federal appellate courts have devised several criteria to help judges distinguish between reliable and unreliable scientific evidence. The best known are the U.S. Supreme Court’s criteria offered in 1993 in Daubert v. Merrell Dow Pharmaceuticals, Inc. This article focuses on another criterion, offered by the Ninth Circuit Court of Appeals, that instructs judges to assign lower credibility to “litigation science” than to science generated before litigation. In this article I argue that the criterion-based approach to judicial screening of scientific evidence is deeply flawed. That approach buys into the faulty premise that there are external criteria, lying outside the legal process, by which judges can distinguish between good and bad science. It erroneously assumes that judges can ascertain the appropriate criteria and objectively apply them to challenged evidence before litigation unfolds, and before methodological disputes are sorted out during that process. Judicial screening does not take into account the dynamics of litigation itself, including gaming by the parties and framing by judges, as constitutive factors in the production and representation of knowledge. What is admitted through judicial screening, in other words, is not precisely what a jury would see anyway. Courts are sites of repeated re-representations of scientific knowledge. In sum, the screening approach fails to take account of the wealth of existing scholarship on the production and validation of scientific facts. An unreflective application of that approach thus puts courts at risk of relying upon a “junk science” of the nature of scientific knowledge. PMID:18197311
Mohseni Bandpei, Mohammad A; Rahmani, Nahid; Majdoleslam, Basir; Abdollahi, Iraj; Ali, Shabnam Shah; Ahmad, Ashfaq
2014-09-01
The purpose of this study was to review the literature to determine whether surface electromyography (EMG) is a reliable tool to assess paraspinal muscle fatigue in healthy subjects and in patients with low back pain (LBP). A literature search for the period of 2000 to 2012 was performed, using PubMed, ProQuest, Science Direct, EMBASE, OVID, CINAHL, and MEDLINE databases. Electromyography, reliability, median frequency, paraspinal muscle, endurance, low back pain, and muscle fatigue were used as keywords. The literature search yielded 178 studies using the above keywords. Twelve articles were selected according to the inclusion criteria of the study. In 7 of the 12 studies, surface EMG was applied only in healthy subjects, and in 5 studies, the reliability of surface EMG was investigated in patients with LBP or in comparison with a control group. In all of these studies, median frequency was shown to be a reliable EMG parameter for assessing paraspinal muscle fatigue. There was a wide variation among studies in terms of methodology, surface EMG parameters, electrode location, procedure, and homogeneity of the study population. The results suggest that there is a convincing body of evidence to support the merit of surface EMG in the assessment of paraspinal muscle fatigue in healthy subjects and in patients with LBP. Copyright © 2014 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
de Albuquerque, Priscila Maria Nascimento Martins; de Alencar, Geisa Guimarães; de Oliveira, Daniela Araújo; de Siqueira, Gisela Rocha
2018-01-01
The aim of this study was to examine and interpret the concordance, accuracy, and reliability of photogrammetric protocols available in the literature for evaluating cervical lordosis in an adult population aged 18 to 59 years. A systematic search of 6 electronic databases (MEDLINE via PubMed, LILACS, CINAHL, Scopus, ScienceDirect, and Web of Science) located studies that assessed the reliability and/or concordance and/or accuracy of photogrammetric protocols for evaluating cervical lordosis, compared with radiography. Articles published through April 2016 were selected. Two independent reviewers used critical appraisal tools (QUADAS and QAREL) to assess the quality of the selected studies. Two studies were included in the review and had high levels of reliability (intraclass correlation coefficient: 0.974-0.98). Only 1 study assessed the concordance between the methods, which was calculated using Pearson's correlation coefficient. To date, the accuracy of photogrammetry has not been investigated thoroughly: we encountered no study in the literature that investigated the accuracy of photogrammetry in diagnosing hyperlordosis of the cervical spine. However, both current studies report high levels of intra- and interrater reliability. To increase the level of evidence for photogrammetry in the evaluation of cervical lordosis, it is necessary to conduct further studies using larger samples to increase the external validity of the findings. Copyright © 2018. Published by Elsevier Inc.
Peer assessment of aviation performance: inconsistent for good reasons.
Roth, Wolff-Michael; Mavin, Timothy J
2015-03-01
Research into expertise across many domains is relatively common in cognitive science. However, much less research has examined how experts within the same domain assess the performance of their peer experts. We report the results of a modified think-aloud study conducted with 18 pilots (6 first officers, 6 captains, and 6 flight examiners). Pairs of same-ranked pilots were asked to rate the performance of a captain flying in a critical pre-recorded simulator scenario. Findings reveal (a) considerable variance within performance categories, (b) differences in the processes used as evidence in support of a performance rating, (c) different numbers and types of facts (cues) identified, and (d) differences in how specific performance events affect choice of performance category and gravity of performance assessment. Such variance is consistent with low inter-rater reliability. Because raters exhibited good, albeit imprecise, reasons and facts, a fuzzy mathematical model of performance rating was developed. The model provides good agreement with the observed variations. Copyright © 2014 Cognitive Science Society, Inc.
Dowson, Duncan
2012-01-01
It is now forty-six years since the separate topics of friction, lubrication, wear and bearing design were integrated under the title 'Tribology' [Department of Education and Science, Lubrication (Tribology) Education and Research. A Report on the Present Position and Industry's Needs, HMSO, London, 1966]. Significant developments have been reported in many established and new aspects of tribology during this period. The subject has contributed to improved performance of much familiar equipment, such as reciprocating engines, where there have been vast improvements in engine reliability and efficiency. Nano-tribology has been central to remarkable advances in information processing and digital equipment. Shortly after the widespread introduction of the term tribology, integration with biology and medicine prompted rapid and extensive interest in the fascinating sub-field now known as Bio-tribology [D. Dowson and V. Wright, Bio-tribology, in The Rheology of Lubricants, ed. T. C. Davenport, Applied Science Publishers, Barking, 1973, pp. 81-88]. An outline will be given of some of the developments in the latter field.
Molander, Linda; Hanberg, Annika; Rudén, Christina; Ågerstrand, Marlene; Beronius, Anna
2017-03-01
Different tools have been developed that facilitate systematic and transparent evaluation and handling of toxicity data in the risk assessment process. The present paper sets out to explore the combined use of two web-based tools for study evaluation and identification of reliable data relevant to health risk assessment. For this purpose, a case study was performed using in vivo toxicity studies investigating low-dose effects of bisphenol A on mammary gland development. The reliability of the mammary gland studies was evaluated using the Science in Risk Assessment and Policy (SciRAP) criteria for toxicity studies. The Health Assessment Workspace Collaborative (HAWC) was used for characterizing and visualizing the mammary gland data in terms of type of effects investigated and reported, and the distribution of these effects within the dose interval. It was then investigated whether there was any relationship between study reliability and the type of effects reported and/or their distribution in the dose interval. The combination of the SciRAP and HAWC tools allowed for transparent evaluation and visualization of the studies investigating developmental effects of BPA on the mammary gland. The use of these tools showed that there were no apparent differences in the type of effects and their distribution in the dose interval between the five studies assessed as most reliable and the whole data set. Combining the SciRAP and HAWC tools was found to be a useful approach for evaluating in vivo toxicity studies and identifying reliable and sensitive information relevant to regulatory risk assessment of chemicals. Copyright © 2016 John Wiley & Sons, Ltd.
Singh, Varun Pratap; Singh, Rajkumar
2014-03-01
The aim of this study was to develop a reliable and valid Nepali version of the Psychosocial Impact of Dental Aesthetics Questionnaire (PIDAQ). Cross-sectional descriptive validation study. B.P. Koirala Institute of Health Sciences, Dharan, Nepal. A rigorous translation process including conceptual and semantic evaluation, translation, back translation and pre-testing was carried out. Two hundred and fifty-two undergraduates, including equal numbers of males and females, with ages ranging from 18 to 29 years (mean age: 22.33 ± 2.114 years), participated in this study. Reliability was assessed by Cronbach's alpha coefficient, and the coefficient of correlation was used to assess correlation between items and test-retest reliability. The construct validity was tested by factorial analysis. Convergent construct validity was tested by comparison of PIDAQ scores with the aesthetic component of the index of orthodontic treatment need (IOTN-AC) and the perception of occlusion scale (POS), respectively. Discriminant construct validity was assessed by differences in score between those who demanded treatment and those who did not. The response rate was 100%. One hundred and twenty-three individuals had a demand for orthodontic treatment. The Nepali PIDAQ had excellent reliability, with a Cronbach's alpha of 0.945, corrected item correlations between 0.525 and 0.790, and overall test-retest reliability of 0.978. The construct validity was good, with formation of a new sub-domain, 'Dental self-consciousness'. The scale had good correlation with the IOTN-AC and POS, fulfilling convergent construct validity. The discriminant construct validity was proved by significant differences in scores between subjects with and without demand for treatment. To conclude, the Nepali version of the PIDAQ has good psychometric properties and can be used effectively in this population group for further research.
Carpio, B; Brown, B
1993-01-01
The undergraduate nursing degree program (B.Sc.N.) at McMaster University School of Nursing uses small groups, and is learner-centered and problem-based. A study was conducted during the 1991 admissions cycle to determine the initial reliability and validity of the semi-structured personal interview which constitutes the final component of candidate selection for this program. During the interview, three-member teams assess applicant suitability to the program based on six dimensions: applicant motivation, awareness of the program, problem-solving abilities, ability to relate to others, self-appraisal skills, and career goals. Each interviewer assigns the applicant a global rating using a seven-point scale. For the purposes of this study four interviewer teams were randomly selected from the pool of 31 teams to interview four simulated (preprogrammed) applicants. Using two-factor repeated-measures ANOVA to analyze interview ratings, inter-rater and inter-team intraclass correlation coefficients (ICC) were calculated. Inter-team reliability ranged from .64 to .97 for the individual dimensions, and .66 to .89 on global ratings. Inter-rater ICC for the six dimensions ranged from .81 to .99, and .96 to .99 for the global ratings. The item-to-total correlation coefficients between individual dimensions and global ratings ranged from .8 to 1.0. Pearson correlations between items ranged from .77 to 1.0. The ICC were then calculated for the interview scores of 108 actual applicants to the program. Inter-rater reliability based on global ratings was .79 for the single (1 rater) observation, and .91 for the multiple (3 rater) observation. These findings support the continued use of the interview as a reliable instrument with face validity. Studies of predictive validity will be undertaken.
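The intraclass correlations reported above come from two-factor repeated-measures ANOVA. For the mechanics, the sketch below computes one common variant, ICC(2,1) in the Shrout-Fleiss taxonomy (two-way random effects, absolute agreement, single rater), directly from the ANOVA mean squares. It is a generic illustration with an invented four-applicant, three-rater matrix, not the study's actual data or analysis.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n targets x k raters) matrix of scores."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # targets
    ms_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    resid = (Y - Y.mean(axis=1, keepdims=True)
               - Y.mean(axis=0, keepdims=True) + grand)
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))              # residual
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Invented global ratings: 4 applicants, each rated by the same 3 interviewers
print(icc_2_1([[6, 6, 5], [4, 5, 4], [2, 3, 2], [7, 6, 7]]))
```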
Assessing and Adapting Scientific Results for Space Weather Research to Operations (R2O)
NASA Astrophysics Data System (ADS)
Thompson, B. J.; Friedl, L.; Halford, A. J.; Mays, M. L.; Pulkkinen, A. A.; Singer, H. J.; Stehr, J. W.
2017-12-01
Why doesn't a solid scientific paper necessarily result in a tangible improvement in space weather capability? A well-known challenge in space weather forecasting is investing effort to turn the results of basic scientific research into operational knowledge. This process is commonly known as "Research to Operations," abbreviated R2O. There are several aspects of this process: 1) How relevant is the scientific result to a particular space weather process? 2) If fully utilized, how much will that result improve the reliability of the forecast for the associated process? 3) How much effort will this transition require? Is it already in a relatively usable form, or will it require a great deal of adaptation? 4) How much burden will be placed on forecasters? Is it "plug-and-play" or will it require effort to operate? 5) How can robust space weather forecasting identify challenges for new research? This presentation will cover several approaches that have potential utility in assessing scientific results for use in space weather research. The demonstration of utility is the first step, relating to the establishment of metrics to ensure that there will be a clear benefit to the end user. The presentation will then move to means of determining cost vs. benefit, (where cost involves the full effort required to transition the science to forecasting, and benefit concerns the improvement of forecast reliability), and conclude with a discussion of the role of end users and forecasters in driving further innovation via "O2R."
NASA Astrophysics Data System (ADS)
Taylor, John R.; Stolz, Christopher J.
1993-08-01
Laser system performance and reliability depends on the related performance and reliability of the optical components which define the cavity and transport subsystems. High-average-power and long transport lengths impose specific requirements on component performance. The complexity of the manufacturing process for optical components requires a high degree of process control and verification. Qualification has proven effective in ensuring confidence in the procurement process for these optical components. Issues related to component reliability have been studied and provide useful information to better understand the long term performance and reliability of the laser system.
76 FR 58716 - Interpretation of Transmission Planning Reliability Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-22
... directs NERC and Commission staff to initiate a process to identify any reliability issues, as discussed... established a process to select and certify an ERO,\\5\\ and subsequently certified NERC.\\6\\ On April 4, 2006...
Strategies for Building a Reliable, Diverse Pipeline of Earth Data Scientists
NASA Astrophysics Data System (ADS)
Fowler, R.; Robinson, E.
2015-12-01
The grand challenges facing the geosciences are increasingly data-driven and require large-scale collaboration. Today's geoscience community is primarily self-taught or peer-taught as neither data science nor collaborative skills are traditionally part of the geoscience curriculum. This is not a sustainable model. By increasing understanding of the role of data science and collaboration in the geosciences, and Earth and space science informatics, an increased number of students pursuing STEM degrees may choose careers in these fields. Efforts to build a reliable pipeline of future Earth data scientists must incorporate the following: (1) improved communication: covering not only what data science is, but what a data scientist working in the geosciences does and the impact their work has; (2) effective identification and promotion of the skills and knowledge needed, including possible academic and career paths, the availability and types of jobs in the geosciences, and how to develop the necessary skills for these careers; (3) the employment of recruitment and engagement strategies that result in a diverse data science workforce, especially the recruitment and inclusion of underrepresented minority students; and (4) changing organizational cultures to better retain and advance women and other minority groups in data science. In this presentation we'll discuss strategies to increase the number of women and underrepresented minority students pursuing careers in data science, with an emphasis on effective strategies for recruiting and mentoring these groups, as well as challenges faced and lessons learned.
NASA Astrophysics Data System (ADS)
Thawinkarn, Dawruwan
2018-01-01
This research aims to analyze factors of science teacher leadership in the Thailand World-Class Standard Schools. The research instrument was a five-point rating-scale questionnaire with reliability 0.986. The sample group included 500 science teachers from World-Class Standard Schools who had been selected by using the stratified random sampling technique. Factor analysis of science teacher leadership in the Thailand World-Class Standard Schools was conducted by using Mplus for Windows. The results are as follows: confirmatory factor analysis on science teacher leadership in the Thailand World-Class Standard Schools revealed that the model significantly correlated with the empirical data. The fit indices were χ2 = 105.655, df = 88, p-value = 0.086, TLI = 0.997, CFI = 0.999, RMSEA = 0.022, and SRMR = 0.019. The factor loadings of science teacher leadership were positive, with statistical significance at the 0.01 level, and ranged from 0.871 to 0.996 across the six factors. The highest factor loading was for the professional learning community, followed by child-centered instruction, participation in development, the role model in teaching, transformational leaders, and self-development, with factor loadings of 0.996, 0.928, 0.911, 0.907, 0.901, and 0.871, respectively. The reliability of each factor was 99.1%, 86.0%, 83.0%, 82.2%, 81.0%, and 75.8%, respectively.
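As a rough sanity check on fit reporting like this, the RMSEA point estimate can be recovered from the chi-square, its degrees of freedom, and the sample size. The sketch below uses one common textbook convention, RMSEA = sqrt(max(χ2 - df, 0) / (df (N - 1))); software estimators (including Mplus) apply corrections that can shift the value slightly, which is presumably why this calculation lands near, rather than exactly on, the reported 0.022.

```python
from math import sqrt

def rmsea(chi2, df, n):
    """RMSEA point estimate from the model chi-square, degrees of freedom,
    and sample size (one common convention; packages differ slightly)."""
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Values reported in the abstract (N = 500 science teachers)
print(round(rmsea(105.655, 88, 500), 3))   # ~0.020, close to the reported 0.022
```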
Programmers, professors, and parasites: credit and co-authorship in computer science.
Solomon, Justin
2009-12-01
This article presents an in-depth analysis of past and present publishing practices in academic computer science to suggest the establishment of a more consistent publishing standard. Historical precedent for academic publishing in computer science is established through the study of anecdotes as well as statistics collected from databases of published computer science papers. After examining these facts alongside information about analogous publishing situations and standards in other scientific fields, the article concludes with a list of basic principles that should be adopted in any computer science publishing standard. These principles would contribute to the reliability and scientific nature of academic publications in computer science and would allow for more straightforward discourse in future publications.
NASA Astrophysics Data System (ADS)
Buxner, Sanlyn; Impey, C. D.; Nieberding, M. N.; Romine, J. M.; Antonellis, J. C.; Llull, J.; Tijerino, K.; Collaborations of Astronomy Teaching Scholars (CATS)
2014-01-01
Supported by funding from NSF, we have been investigating the science literacy of undergraduate students using data collected from 1980-2013. To date, we have collected over 12,000 surveys asking students about their foundational science knowledge as well as their attitudes towards science and technology topics. In 2012, we began investigating where students get their information about science, and we have collected 30 interviews and almost 1000 survey responses. Our findings reveal that students' science literacy, as measured by this instrument, has changed very little over the 23 years of data collection despite major educational innovations offered to students. A fraction of students continue to hold onto non-scientific beliefs, coupled with faith-based attitudes and beliefs, which are resistant to formal college instruction. Analysis of students' open-ended responses shows that although students use words often associated with science, they lack understanding of key aspects of science, including the importance of evidence to support arguments and the need for replication of results. These results have important implications for how we teach science and how we assess students' scientific understandings during class. Our recent work has shown that students use online sources to gain information about science for classes and for their own interests. Despite this, they rate professors and researchers as more reliable sources of scientific knowledge than online sources. This disconnect raises questions about how educators can work with students to provide knowledge in ways that are both accessible and reliable, and how to help students sort knowledge in an age where everything can be found online. This material is based in part upon work supported by the National Science Foundation under Grant No. 0715517, a CCLI Phase III Grant for the Collaboration of Astronomy Teaching Scholars (CATS). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The Ames Laboratory conducts fundamental research in the physical, chemical, materials, and mathematical sciences and engineering which underlie energy generation, conversion, transmission, and storage technologies, environmental improvement, and other technical areas essential to national needs. These efforts will be maintained so as to contribute to the achievement of the vision of DOE and, more specifically, to increase the general levels of knowledge and technical capabilities, to prepare engineering and physical sciences students for the future, in both academia and industry, and to develop new technologies and practical applications from our basic scientific programs that will contribute to a strengthening of the US economy. The Laboratory approaches all its operations with the safety and health of all workers as a constant objective and with genuine concern for the environment. The Laboratory relies upon its strengths in materials synthesis and processing, materials reliability, chemical analysis, chemical sciences, photosynthesis, materials sciences, metallurgy, high-temperature superconductivity, and applied mathematical sciences to conduct the long-term basic and intermediate-range applied research needed to solve the complex problems encountered in energy production and utilization as well as environmental restoration and waste management. Ames Laboratory will continue to maintain a very significant and highly beneficial pre-college math and science education program which currently serves both teachers and students at the middle school and high school levels. Our technology transfer program is aided by joint efforts with ISU's technology development and commercialization enterprise and will sustain concerted efforts to implement Cooperative Research and Development Agreements, industrially sponsored Work for Others projects, and scientific personnel exchanges with our various customers.
Celedonio Aguirre-Bravo; Carlos Rodriguez Franco
1999-01-01
The general objective of this Symposium was to build on the best science and technology available to assure that the data and information produced in future inventory and monitoring programs are comparable, quality assured, available, and adequate for their intended purposes, thereby providing a reliable framework for characterization, assessment, and management of...
Adult Science Learners' Mathematical Mistakes: An Analysis of Responses to Computer-Marked Questions
ERIC Educational Resources Information Center
Jordan, Sally
2014-01-01
Inspection of thousands of student responses to computer-marked assessment questions has brought insight into the errors made by adult distance learners of science. Most of the questions analysed were in summative use and required students to construct their own response. Both of these things increased confidence in the reliability of the…
Identifying Exemplary Science Teachers through Students' Perceptions of Their Learning Environment
ERIC Educational Resources Information Center
Waldrip, Bruce G.; Fisher, Darrell L.; Dorman, Jeffrey
2009-01-01
The purpose of this study was to examine students' psychosocial perceptions of their science classroom learning environment in order to identify exemplary teachers. This mixed-method study used the valid and reliable What Is Happening In this Class? (WIHIC) questionnaire with over 3,000 middle school students in 150 classrooms in Australia.…
ERIC Educational Resources Information Center
Lundin, Mattias
2014-01-01
Science education has been characterized as fact-based and built on reliable knowledge. Nevertheless, there are areas that include other aspects. Sexual education is, according to the Swedish syllabus, one such example, and it involves aspects such as love, sexuality and relations. These aspects suggest a possible tension between the biological and…
Sorooshian, Shahryar
2017-04-01
Fake and unethical publishers' activities are known by most of the readers of Science and Engineering Ethics. This letter tries to draw the readers' attention to the hidden side of some of these publishers' business. Here the black market of scholarly articles, which negatively affects the validity and reliability of research in higher education, as well as science and engineering, will be introduced.
ERIC Educational Resources Information Center
DeChenne, Sue Ellen; Enochs, Larry
2010-01-01
An instrument to measure the teaching self-efficacy of science, technology, engineering, and mathematics (STEM) GTAs is adapted from a general college teaching instrument (Prieto Navarro, 2005) for the specific teaching environment of the STEM GTAs. The construct and content validity and reliability of the final instrument are indicated. The final…
ERIC Educational Resources Information Center
Kahveci, Murat; Kahveci, Ajda; Mansour, Nasser; Mohammed, Maher
2016-01-01
The Science Teachers' Pedagogical Discontentment (STPD) scale has formerly been developed in the United States and used since 2006. Based on the perceptions of selected teachers, the scale is deeply rooted in the cultural and national standards. Given these limitations, the measurement integrity of its scores has not yet been conclusively…
Why Can't a Teacher Be More like a Scientist? Science, Pseudoscience and the Art of Teaching
ERIC Educational Resources Information Center
Carter, Mark; Wheldall, Kevin
2008-01-01
In this article, the authors argue the case for scientific evidenced-based practice in education. They consider what differentiates science from pseudoscience and what sources of information teachers typically regard as reliable. The What Works Clearinghouse is discussed with reference to certain limitations of its current operation. Given the…
Questioning reliability assessments of health information on social media.
Dalmer, Nicole K
2017-01-01
This narrative review examines assessments of the reliability of online health information retrieved through social media to ascertain whether health information accessed or disseminated through social media should be evaluated differently than other online health information. Several medical, library and information science, and interdisciplinary databases were searched using terms relating to social media, reliability, and health information. While social media's increasing role in health information consumption is recognized, studies are dominated by investigations of traditional (i.e., non-social media) sites. To more richly assess constructions of reliability when using social media for health information, future research must focus on health consumers' unique contexts, virtual relationships, and degrees of trust within their social networks.
Wikipedia mining of hidden links between political leaders
NASA Astrophysics Data System (ADS)
Frahm, Klaus M.; Jaffrès-Runser, Katia; Shepelyansky, Dima L.
2016-12-01
We describe a new method, the reduced Google matrix, which makes it possible to establish direct and hidden links between a subset of nodes of a large directed network. This approach uses parallels with quantum scattering theory, developed for processes in nuclear and mesoscopic physics and quantum chaos. The method is applied to the Wikipedia networks in different language editions, analyzing several groups of political leaders of the USA, UK, Germany, France, Russia and the G20. We demonstrate that this approach allows one to reliably recover direct and hidden links among political leaders. We argue that the reduced Google matrix method can form the mathematical basis for studies in the social and political sciences analyzing Leader-Member eXchange (LMX).
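The construction can be sketched compactly: for a chosen node subset r with complement s, the reduced Google matrix resums all indirect paths through the discarded nodes, G_R = G_rr + G_rs (1 - G_ss)^{-1} G_sr, which is what surfaces the "hidden" links. The Python sketch below implements that formula for a toy network under common conventions (column-stochastic G, damping 0.85); the adjacency matrix is invented, and sign/normalization conventions vary across papers.

```python
import numpy as np

def google_matrix(A, alpha=0.85):
    """Column-stochastic Google matrix from adjacency A (A[i, j] = 1 for a
    link j -> i); dangling columns are replaced by uniform columns."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    cols = A.sum(axis=0)
    S = np.where(cols > 0, A / np.where(cols > 0, cols, 1), 1.0 / n)
    return alpha * S + (1 - alpha) / n

def reduced_google_matrix(G, subset):
    """G_R = G_rr + G_rs (1 - G_ss)^{-1} G_sr: indirect paths through the
    discarded nodes s become effective (possibly hidden) links within r."""
    r = np.asarray(subset)
    s = np.setdiff1d(np.arange(G.shape[0]), r)
    Grr, Grs = G[np.ix_(r, r)], G[np.ix_(r, s)]
    Gsr, Gss = G[np.ix_(s, r)], G[np.ix_(s, s)]
    return Grr + Grs @ np.linalg.inv(np.eye(len(s)) - Gss) @ Gsr

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 0]])
print(reduced_google_matrix(google_matrix(A), [0, 1]))
```

The matrix inverse exists because, with damping below 1, every column of G_ss sums to strictly less than one, so the path resummation converges.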
NASA Astrophysics Data System (ADS)
Yoo, Jinwon; Choi, Yujun; Cho, Young-Wook; Han, Sang-Wook; Lee, Sang-Yun; Moon, Sung; Oh, Kyunghwan; Kim, Yong-Su
2018-07-01
We present a detailed method to prepare and characterize four-dimensional pure quantum states, or ququarts, using the polarization and time-bin modes of a single photon. In particular, we provide a simple method to generate an arbitrary pure ququart and fully characterize the state with quantum state tomography. We also verify the reliability of the recipe by showing experimental preparation and characterization of 20 ququart states in mutually unbiased bases. As qudits offer advantages over qubits in many fundamental tests of quantum physics and in quantum information processing applications, the presented method will be useful for photonic quantum information science.
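The mutually unbiased bases used here have a simple defining property: every state of one basis has overlap probability exactly 1/d with every state of another. The sketch below checks that condition numerically for ququarts (d = 4) using just the computational basis (polarization x time-bin labels) and the discrete-Fourier basis; the paper's full construction uses five bases (20 states), which this toy example does not reproduce.

```python
import numpy as np

d = 4  # ququart dimension: polarization (2) x time-bin (2)

# Computational basis |H,early>, |H,late>, |V,early>, |V,late> (rows)
comp = np.eye(d, dtype=complex)

# Discrete-Fourier basis, mutually unbiased with the computational one
omega = np.exp(2j * np.pi / d)
fourier = np.array([[omega ** (j * k) for k in range(d)]
                    for j in range(d)]) / np.sqrt(d)

# MUB condition: |<e_j|f_k>|^2 = 1/d for every pair of basis vectors
overlaps = np.abs(comp.conj() @ fourier.T) ** 2
print(np.allclose(overlaps, 1 / d))   # True
```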
Contextual assessment in science education: Background, issues, and policy
NASA Astrophysics Data System (ADS)
Klassen, Stephen
2006-09-01
Contemporary assessment practices in science education have undergone significant changes in recent decades. The basis for these changes and the resulting new assessment practices are the subject of this two-part paper. Part 1 considers the basis of assessment that, more than 25 years ago, was driven by the assumptions of decomposability and decontextualization of knowledge, resulting in a low-inference testing system, often described as traditional. This assessment model was replaced not on account of direct criticism, but rather on account of a larger revolution - the change from behavioral to cognitive psychology, developments in the philosophy of science, and the rise of constructivism. Most notably, the study of the active cognitive processes of the individual resulted in a major emphasis on context in learning and assessment. These changes gave rise to the development of various contextual assessment methodologies in science education, for example, concept mapping assessment, performance assessment, and portfolio assessment. In Part 2, the literature relating to the assessment methods identified in Part 1 is reviewed, revealing that there is not much research that supports their validity and reliability. However, encouraging new work on selected-response tests is forming the basis for reconsideration of past criticisms of this technique. Despite the major developments in contextual assessment methodologies in science education, two important questions remain unanswered, namely, whether grades can be considered as genuine numeric quantities and whether the individual student is the appropriate unit of assessment in public accountability. Given these issues and the requirement for science assessment to satisfy the goals of the individual, the classroom, and the society, tentative recommendations are put forward addressing these parallel needs in the assessment of science learning.
In-Space Propulsion Technology Program Solar Electric Propulsion Technologies
NASA Technical Reports Server (NTRS)
Dankanich, John W.
2006-01-01
NASA's In-Space Propulsion (ISP) Technology Project is developing new propulsion technologies that can enable or enhance near- and mid-term NASA science missions. The Solar Electric Propulsion (SEP) technology area has been investing in NASA's Evolutionary Xenon Thruster (NEXT), the High Voltage Hall Accelerator (HiVHAC), lightweight reliable feed systems, wear testing, and thruster modeling. These investments are specifically targeted to increase planetary science payload capability, expand the envelope of planetary science destinations, and significantly reduce the travel times, risk, and cost of NASA planetary science missions. Status and expected capabilities of the SEP technologies are reviewed in this presentation. The SEP technology area supports numerous mission studies and architecture analyses to determine which investments will give the greatest benefit to science missions. Both the NEXT and HiVHAC thrusters have modified their nominal throttle tables to better utilize diminished solar array power on outbound missions. A new life extension mechanism has been implemented on HiVHAC to increase the throughput capability of low-power systems to meet the needs of cost-capped missions. Lower-complexity, more reliable feed system components common to all electric propulsion (EP) systems are being developed. ISP has also leveraged commercial investments to further validate new ion and Hall thruster technologies and to potentially lower EP mission costs.
Science and ecological literacy in undergraduate field studies education
NASA Astrophysics Data System (ADS)
Mapp, Kim J.
There is an ever-increasing number of issues facing our world today, from climate change, water and food scarcity, to pollution and resource extraction. Science and ecology play fundamental roles in these problems, and yet the understanding of these fields is limited in our society (Miller, 2002; McBride, Brewer, Berkowitz, and Borrie, 2013). Across the nation students are finishing their undergraduate degrees and are expected to enter the workforce and society with the skills needed to succeed. The deficit of science and ecological literacy in these students has been recognized and a call for reform begun (D'Avanzo, 2003 and NRC, 2009). This mixed-methods study looked at how a field studies course could fill the gap in science and ecological literacy in undergraduates. Using grounded theory, five key themes were derived from the data: definitions, systems thinking, humans' role in the environment, impetus for change, and transference. These themes were then triangulated for validity and reliability through qualitative and quantitative assessments. A sixth theme, the learning environment, was also identified. Because the data supporting this theme's development and reliability were limited, it is discussed in Chapter 5 to provide recommendations for further research. Key findings show that this field studies program influenced students' science and ecological literacy through educational theory and practice.
NASA Astrophysics Data System (ADS)
Rusilowati, A.; Nugroho, S. E.; Susilowati, E. S. M.; Mustika, T.; Harfiyani, N.; Prabowo, H. T.
2018-03-01
The research aimed to develop a scientific literacy assessment, establish its validity, reliability, and characteristics, and profile students' scientific literacy skills on the theme of energy. The research was conducted in the 7th grade of a secondary school in Demak, Central Java, Indonesia. The research design used R&D (Research and Development). The results showed that the scientific literacy assessment was valid and reliable, with a reliability of 0.68 in the first try-out and 0.73 in the last try-out. The characteristics of the assessment are its difficulty index and discrimination power: the items were 56.25% easy, 31.25% medium, and 12.5% very difficult, with good discrimination power. The proportion of the scientific literacy categories (science as a body of knowledge, science as a way of investigating, science as a way of thinking, and interaction among science, environment, technology, and society) was 37.5% : 25% : 18.75% : 18.75%. The highest profile of students' scientific literacy skills at the secondary school in Demak was 72%, in the category of science as a way of thinking, and the lowest was 59%, in the category of science as a body of knowledge.
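As a sketch of how such item characteristics are typically computed, the snippet below derives a difficulty index (proportion correct) and an upper-lower-group discrimination power from a 0/1 score matrix; the data and the 27% grouping convention are invented for illustration, not taken from the study.

```python
import numpy as np

# Rows = students, columns = items; 1 = correct, 0 = incorrect (assumed scoring).
scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])

# Difficulty index: proportion of students answering each item correctly.
# Conventionally, p > 0.7 reads as "easy", 0.3-0.7 "medium", p < 0.3 "difficult".
difficulty = scores.mean(axis=0)

# Discrimination power via the upper-lower group method: compare the top and
# bottom 27% of students ranked by total score.
totals = scores.sum(axis=1)
order = np.argsort(totals)
k = max(1, int(round(0.27 * len(scores))))
lower, upper = scores[order[:k]], scores[order[-k:]]
discrimination = upper.mean(axis=0) - lower.mean(axis=0)

for i, (p, d) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"item {i}: difficulty p = {p:.2f}, discrimination D = {d:.2f}")
```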
ERIC Educational Resources Information Center
Gelisli, Yücel; Beisenbayeva, Lyazzat
2017-01-01
The purpose of the current study is to develop a reliable scale to be used to determine the scientific inquiry competency perception of post-graduate students engaged in post-graduate studies in the field of educational sciences and teacher education in Kazakhstan. The study employed the descriptive method. Within the context of the study, a scale…
NASA Astrophysics Data System (ADS)
Buxner, S.; Romine, J.; Impey, C.; Nieberding, M.
2015-11-01
Building on a 25-year study of undergraduate students' science literacy, we have been investigating where students report getting information about science. In this study, we investigated the relationship between students' basic science knowledge, responses about studying something scientifically, and where they report gaining information about science. Data were collected through online surveys of astronomy courses during 2014, with responses from a total of 400 students. Most survey respondents were non-science majors in the first two years of college who had taken 3 or fewer college science courses. Our results show a relationship between lower science literacy scores and reporting online searches and Wikipedia as reliable sources of information, although there was no relationship between science knowledge and where students report getting information about science. Our results suggest that information literacy is an important component of overall science literacy.
NASA Astrophysics Data System (ADS)
Franke, M.; Leubner, S.; Dubavik, A.; George, A.; Savchenko, T.; Pini, C.; Frank, P.; Melnikau, D.; Rakovich, Y.; Gaponik, N.; Eychmüller, A.; Richter, A.
2017-04-01
Microfluidic devices form the basis of modern life sciences and chemical information processing. To control the flow and to allow optical readout, a reliable sensor material that can be easily utilized in microfluidic systems is in demand. Here, we present a new optical readout system for pH sensing based on pH-sensitive, photoluminescent glutathione-capped cadmium telluride quantum dots that are covalently immobilized in a poly(acrylate) hydrogel. For applicable pH sensing, the generated hybrid material is integrated in a microfluidic sensor chip setup. The hybrid material not only allows in situ readout, but also possesses valve properties due to the swelling behavior of the poly(acrylate) hydrogel. In this work, the swelling property of the hybrid material is utilized in a microfluidic valve seat, where a valve opening process is demonstrated by a fluid flow change and monitored in situ by photoluminescence quenching. This discrete photoluminescence detection (ON/OFF) of the fluid flow change (OFF/ON) enables upcoming chemical information processing.
Rasch Validation of a Measure of Reform-Oriented Science Teaching Practices
NASA Astrophysics Data System (ADS)
You, Hye Sun
2016-06-01
Growing evidence from recent curriculum documents and previous research suggests that reform-oriented science teaching practices promote students' conceptual understanding, levels of achievement, and motivation to learn, especially when students are actively engaged in constructing their ideas through scientific inquiries. However, it is difficult to identify to what extent science teachers engage students in reform-oriented teaching practices (RTPs) in their science classrooms. In order to accurately diagnose the current status of science teachers' implementation of the RTPs, a valid and reliable instrument is needed. The principles of validity and reliability are fundamental cornerstones in developing a robust measurement tool. As such, this study was motivated by the desire to point out the limitations of the existing statistical and psychometric analyses and to further examine the validation of the RTP survey instrument. This paper thus aims at calibrating the items of the RTPs for science teachers using the Rasch model. The survey instrument scale was adapted from the 2012 National Survey of Science and Mathematics Education (NSSME) data. A total of 3701 science teachers from 1403 schools across the USA participated in the NSSME survey. After calibrating the RTP items and persons on the same scale, the RTP instrument well represented the population of US science teachers. Model-data fit determined by Infit and Outfit statistics was within an appropriate range (0.5-1.5), supporting the unidimensional structure of the RTPs. The ordered category thresholds and the probability of the thresholds showed that the five-point rating scale functioned well. The results of this study support the use of the RTP measure from the 2012 NSSME in assessing usage of RTPs.
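For readers unfamiliar with the fit statistics cited, the sketch below computes Infit and Outfit mean-squares for a simplified dichotomous Rasch model (the study itself uses a five-point rating-scale variant); all abilities, difficulties, and responses here are invented values.

```python
import numpy as np

# Estimated person abilities (theta) and item difficulties (b), in logits (assumed).
theta = np.array([-1.0, 0.0, 0.5, 1.5])      # persons
b = np.array([-0.5, 0.2, 1.0])               # items

# Observed 0/1 responses, persons x items (invented data).
X = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 0],
              [1, 1, 1]])

# Rasch model: P(X=1) = exp(theta - b) / (1 + exp(theta - b)).
P = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
W = P * (1 - P)                  # model variance of each response
Z2 = (X - P) ** 2 / W            # squared standardized residuals

# Outfit MNSQ: unweighted mean of Z^2. Infit MNSQ: information-weighted mean.
outfit = Z2.mean(axis=0)
infit = ((X - P) ** 2).sum(axis=0) / W.sum(axis=0)
print("item outfit MNSQ:", np.round(outfit, 2))
print("item infit  MNSQ:", np.round(infit, 2))   # 0.5-1.5 is the cited range
```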
Richler, Jennifer J.; Floyd, R. Jackie; Gauthier, Isabel
2014-01-01
Efforts to understand individual differences in high-level vision necessitate the development of measures that have sufficient reliability, which is generally not a concern in group studies. Holistic processing is central to research on face recognition and, more recently, to the study of individual differences in this area. However, recent work has shown that the most popular measure of holistic processing, the composite task, has low reliability. This is particularly problematic for the recent surge in interest in studying individual differences in face recognition. Here, we developed and validated a new measure of holistic face processing specifically for use in individual-differences studies. It avoids some of the pitfalls of the standard composite design and capitalizes on the idea that trial variability allows for better traction on reliability. Across four experiments, we refine this test and demonstrate its reliability. PMID:25228629
Smith, Michelle K; Jones, Francis H M; Gilbert, Sarah L; Wieman, Carl E
2013-01-01
Instructors and the teaching practices they employ play a critical role in improving student learning in college science, technology, engineering, and mathematics (STEM) courses. Consequently, there is increasing interest in collecting information on the range and frequency of teaching practices at department-wide and institution-wide scales. To help facilitate this process, we present a new classroom observation protocol known as the Classroom Observation Protocol for Undergraduate STEM or COPUS. This protocol allows STEM faculty, after a short 1.5-hour training period, to reliably characterize how faculty and students are spending their time in the classroom. We present the protocol, discuss how it differs from existing classroom observation protocols, and describe the process by which it was developed and validated. We also discuss how the observation data can be used to guide individual and institutional change.
Current Development in Treatment and Hydrogen Energy Conversion of Organic Solid Waste
NASA Astrophysics Data System (ADS)
Shin, Hang-Sik
2008-02-01
This manuscript summarizes current developments in continuous hydrogen production technologies researched at the Korea Advanced Institute of Science and Technology (KAIST). Long-term continuous pilot-scale operation of hydrogen-producing processes fed with non-sterile food waste exhibited successful results. Experimental findings obtained by optimizing growth environments for hydrogen-producing bacteria, developing high-rate hydrogen-producing strategies, and testing feasibility for real field application could contribute to the progress of fermentative hydrogen production technologies. Three major technologies, namely controlling the dilution rate depending on the progress of acidogenesis, maintaining solid retention time independently from hydraulic retention time, and decreasing hydrogen partial pressure by carbon dioxide sparging, could enhance hydrogen production using anaerobic leaching bed reactors and anaerobic sequencing batch reactors. These findings could contribute to stable, reliable, and effective performance of pilot-scale reactors treating organic wastes.
Implementation and Testing of Low Cost Uav Platform for Orthophoto Imaging
NASA Astrophysics Data System (ADS)
Brucas, D.; Suziedelyte-Visockiene, J.; Ragauskas, U.; Berteska, E.; Rudinskas, D.
2013-08-01
Implementation of Unmanned Aerial Vehicles for civilian applications is rapidly increasing. Technologies which were expensive and available only for military use have recently spread to the civilian market. A vast number of low-cost open-source components and systems are available for implementation on UAVs. The use of low-cost hobby and open-source components ensures a considerable decrease in UAV price, though in some cases compromising reliability. At the Space Science and Technology Institute (SSTI), in collaboration with Vilnius Gediminas Technical University (VGTU), research has been performed on constructing and implementing small UAVs composed of low-cost open-source components (and our own developments). The most obvious and simple implementation of such UAVs is orthophoto imaging with data download and processing after the flight. The construction and implementation of the UAVs, flight experience, data processing, and data implementation are covered in the paper and presentation.
Partial Support of Meeting of the Board on Mathematical Sciences and Their Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weidman, Scott
2014-08-31
During the performance period, BMSA released the following major reports: Transforming Combustion Research through Cyberinfrastructure (2011); Assessing the Reliability of Complex Models: Mathematical and Statistical Foundations of Verification, Validation, and Uncertainty Quantification (2012); Fueling Innovation and Discovery: The Mathematical Sciences in the 21st Century (2012); Aging and the Macroeconomy: Long-Term Implications of an Older Population (2012); The Mathematical Sciences in 2025 (2013); Frontiers in Massive Data Analysis (2013); and Developing a 21st Century Global Library for Mathematics Research (2014).
Ziatabar Ahmadi, Seyyede Zohreh; Jalaie, Shohreh; Ashayeri, Hassan
2015-09-01
Theory of mind (ToM) or mindreading is an aspect of social cognition that evaluates mental states and beliefs of oneself and others. Validity and reliability are very important criteria when evaluating standard tests; without them, these tests are not usable. The aim of this study was to systematically review the validity and reliability of published English comprehensive ToM tests developed for normal preschool children. We searched MEDLINE (PubMed interface), Web of Science, ScienceDirect, PsycINFO, and also evidence-based medicine (The Cochrane Library) databases from 1990 to June 2015. The search strategy was the Latin transcription of 'Theory of Mind' AND test AND children. Also, we manually studied the reference lists of all final searched articles and carried out a search of their references. Inclusion criteria were as follows: valid and reliable diagnostic ToM tests published from 1990 to June 2015 for normal preschool children. Exclusion criteria were as follows: studies that only used ToM tests and single tasks (false-belief tasks) for ToM assessment and/or had no description of the structure, validity, or reliability of their tests. The methodological quality of the selected articles was assessed using the Critical Appraisal Skills Programme (CASP). In the primary search, we found 1237 articles in the databases. After removing duplicates and applying all inclusion and exclusion criteria, we selected 11 tests for this systematic review. There were few valid, reliable, and comprehensive ToM tests for normal preschool children. However, we had limitations concerning the included articles. The defined ToM tests differed in populations, tasks, modes of presentation, scoring, modes of response, times, and other variables. Also, they had various validities and reliabilities. Therefore, it is recommended that researchers and clinicians select ToM tests according to their psychometric characteristics, validity, and reliability.
75 FR 35011 - Combined Notice of Filings #1
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-21
... Processes Manual Incorporating Proposed Revisions to the Reliability Standards Development Process. Filed..., June 21, 2010. Take notice that the Commission received the following electric reliability filings: Docket Numbers: RR10-12-000. Applicants: North American Electric Reliability Corp. Description: Petition...
Rethinking big data: A review on the data quality and usage issues
NASA Astrophysics Data System (ADS)
Liu, Jianzheng; Li, Jie; Li, Weifeng; Wu, Jiansheng
2016-05-01
The recent explosive publications of big data studies have well documented the rise of big data and its ongoing prevalence. Different types of "big data" have emerged and have greatly enriched spatial information sciences and related fields in terms of breadth and granularity. Studies that were difficult to conduct in the past due to data availability can now be carried out. However, big data brings many "big errors" in data quality and data usage, and it cannot be used as a substitute for sound research design and solid theories. We indicated and summarized the problems faced by current big data studies with regard to data collection, processing, and analysis: inauthentic data collection, information incompleteness and noise of big data, unrepresentativeness, consistency and reliability, and ethical issues. Cases of empirical studies are provided as evidence for each problem. We propose that big data research should closely follow good scientific practice to provide reliable and scientific "stories", as well as explore and develop techniques and methods to mitigate or rectify the "big errors" brought by big data.
Unbiased All-Optical Random-Number Generator
NASA Astrophysics Data System (ADS)
Steinle, Tobias; Greiner, Johannes N.; Wrachtrup, Jörg; Giessen, Harald; Gerhardt, Ilja
2017-10-01
The generation of random bits is of enormous importance in modern information science. Cryptographic security is based on random numbers which require a physical process for their generation. This is commonly performed by hardware random-number generators. These often exhibit a number of problems, namely experimental bias, memory in the system, and other technical subtleties, which reduce the reliability in the entropy estimation. Further, the generated outcome has to be postprocessed to "iron out" such spurious effects. Here, we present a purely optical randomness generator, based on the bistable output of an optical parametric oscillator. Detector noise plays no role and postprocessing is reduced to a minimum. Upon entering the bistable regime, initially the resulting output phase depends on vacuum fluctuations. Later, the phase is rigidly locked and can be well determined versus a pulse train, which is derived from the pump laser. This delivers an ambiguity-free output, which is reliably detected and associated with a binary outcome. The resulting random bit stream resembles a perfect coin toss and passes all relevant randomness measures. The random nature of the generated binary outcome is furthermore confirmed by an analysis of resulting conditional entropies.
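Two of the failure modes mentioned, bias and memory, can be screened for with very simple statistics. A minimal sketch, with a pseudo-random stream standing in for measured detector output:

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=100_000)   # stand-in for a measured bit stream

# Bias: the fraction of ones should be close to 0.5 for a fair coin.
p1 = bits.mean()

# Memory: lag-1 serial correlation should be near zero for independent bits.
x = bits - p1
serial_corr = (x[:-1] * x[1:]).mean() / x.var()

# Shannon entropy per bit (1.0 for an unbiased, memoryless source).
p = np.array([1 - p1, p1])
entropy = -(p * np.log2(p)).sum()

print(f"P(1) = {p1:.4f}, lag-1 corr = {serial_corr:+.4f}, H = {entropy:.5f} bits")
```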
Gorbett, Gregory E; Morris, Sarah M; Meacham, Brian J; Wood, Christopher B
2015-01-01
A new method to characterize the degree of fire damage to gypsum wallboard is introduced, implemented, and tested to determine the efficacy of its application among novices. The method was evaluated by comparing degree-of-fire-damage assessments of novices with and without the method. Thirty-nine "novice" raters assessed damage to a gypsum wallboard surface, completing 66 ratings, first without the method and then again using the method. The inter-rater reliability was evaluated for ratings of damage without and with the method. For novice fire investigators rating degree of damage without the aid of the method, ICC(1,2) = 0.277 with 95% CI (0.211, 0.365), and with the method, ICC(2,1) = 0.593 with 95% CI (0.509, 0.684). Results indicate that the raters were more reliable in their analysis of the degree of fire damage when using the method, which supports the use of standardized processes to decrease the variability in data collection and interpretation.
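For reference, an ICC(2,1) of the kind reported can be computed from two-way ANOVA mean squares (the Shrout and Fleiss form: two-way random effects, single rater, absolute agreement). A sketch with invented ratings, not the study's data:

```python
import numpy as np

# Invented data: rows = wallboard samples (targets), columns = raters.
Y = np.array([[2, 3, 2],
              [4, 4, 5],
              [1, 2, 1],
              [3, 3, 4],
              [5, 4, 5]], dtype=float)
n, k = Y.shape

grand = Y.mean()
MSR = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between targets
MSC = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between raters
SSE = ((Y - Y.mean(axis=1, keepdims=True)
          - Y.mean(axis=0, keepdims=True) + grand) ** 2).sum()
MSE = SSE / ((n - 1) * (k - 1))                              # residual

# ICC(2,1) = (MSR - MSE) / (MSR + (k-1)*MSE + k*(MSC - MSE)/n)
icc21 = (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)
print(f"ICC(2,1) = {icc21:.3f}")
```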
Understanding Climate Uncertainty with an Ocean Focus
NASA Astrophysics Data System (ADS)
Tokmakian, R. T.
2009-12-01
Uncertainty in climate simulations arises from various aspects of the end-to-end process of modeling the Earth's climate. First, there is uncertainty from the structure of the climate model components (e.g. ocean/ice/atmosphere). Even the most complex models are deficient, not only in the complexity of the processes they represent, but in which processes are included in a particular model. Next, uncertainties arise from the inherent error in the initial and boundary conditions of a simulation. Initial conditions are the state of the weather or climate at the beginning of the simulation, and typically come from observations. Finally, there is the uncertainty associated with the values of parameters in the model. These parameters may represent physical constants or effects, such as ocean mixing, or non-physical aspects of modeling and computation. The uncertainty in these input parameters propagates through the non-linear model to give uncertainty in the outputs. The models in 2020 will no doubt be better than today's models, but they will still be imperfect, and development of uncertainty analysis technology is a critical aspect of understanding model realism and prediction capability. Smith [2002] and Cox and Stephenson [2007] discuss the need for methods to quantify the uncertainties within complicated systems so that limitations or weaknesses of the climate model can be understood. In making climate predictions, we need to have available both the most reliable model or simulation and a method to quantify the reliability of a simulation. If quantitative uncertainty questions about the internal model dynamics are to be answered with complex simulations such as AOGCMs, then the only known path forward is based on model ensembles that characterize behavior with alternative parameter settings [e.g. Rougier, 2007]. The relevance and feasibility of using "Statistical Analysis of Computer Code Output" (SACCO) methods for examining uncertainty in ocean circulation due to parameter specification will be described, and early results using the ocean/ice components of the CCSM climate model in a designed-experiment framework will be shown. Cox, P. and D. Stephenson, Climate Change: A Changing Climate for Prediction, 2007, Science 317 (5835), 207, DOI: 10.1126/science.1145956. Rougier, J. C., 2007: Probabilistic Inference for Future Climate Using an Ensemble of Climate Model Evaluations, Climatic Change, 81, 247-264. Smith, L., 2002, What might we learn from climate forecasts? Proc. Nat'l Academy of Sciences, Vol. 99, suppl. 1, 2487-2492, doi:10.1073/pnas.012580599.
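A toy illustration of the parameter-ensemble idea: propagate an assumed prior on a mixing-style parameter through a cheap stand-in model and summarize the output spread. The model form and parameter range here are invented for illustration, not drawn from CCSM:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(kappa, forcing=1.0):
    """Stand-in for an expensive simulation: the equilibrium response
    depends nonlinearly on a 'mixing' parameter kappa (assumed form)."""
    return forcing / (0.5 + np.sqrt(kappa))

# Ensemble with alternative parameter settings drawn from an assumed prior.
kappa = rng.uniform(0.1, 2.0, size=1000)
response = toy_model(kappa)

lo, med, hi = np.percentile(response, [5, 50, 95])
print(f"median response {med:.3f}, 90% range [{lo:.3f}, {hi:.3f}]")
```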
Science should warn people of looming disaster
NASA Astrophysics Data System (ADS)
Kossobokov, Vladimir
2014-05-01
Contemporary Science is responsible for not coping with the challenging changes in Exposures and their Vulnerability inflicted by a growing population, its concentration, etc., which result in a steady increase of Losses from Natural Hazards. Scientists are in debt to Society for this lack of special knowledge, education, and communication. In fact, it appears that few seismic hazard assessment programs and/or methodologies were tested appropriately against real observations before being endorsed for estimation of earthquake-related risks. The fatal evidence and aftermath of the past decades prove that many of the existing internationally accepted methodologies are grossly misleading and are evidently unacceptable for any kind of responsible risk evaluation and knowledgeable disaster prevention. In contrast, the confirmed reliability of pattern recognition aimed at earthquake-prone areas and times of increased probability, along with realistic earthquake scaling and scenario modeling, allows us to conclude that Contemporary Science can do a better job in disclosing Natural Hazards, assessing Risks, and delivering this state-of-the-art knowledge of looming disaster in advance of catastrophic events. In the absence of seismic observations long enough for a reliable probabilistic assessment, or of a comprehensive physical theory of earthquake recurrence, pattern recognition applied to available geophysical and/or geological data sets remains a broad avenue to follow in seismic hazard forecast/prediction. Moreover, better understanding of the seismic process in terms of non-linear dynamics of a hierarchical system of blocks-and-faults and deterministic chaos permits progress toward new approaches to assessing time-dependent seismic hazard, based on multiscale analysis of seismic activity and a reproducible intermediate-term earthquake prediction technique. The algorithms, which make use of the multidisciplinary data available and account for the fractal nature of earthquake distributions in space and time, have confirmed their reliability by durable statistical testing in an on-going regular real-time application that has lasted for more than 20 years. Geoscientists must initiate a shift in the minds of the community from pessimistic disbelief in forecast/prediction products to optimistic, challenging views on Hazard Predictability in space and time, so as not to repeat missed opportunities for disaster preparedness, as happened in advance of the 2009 M6.3 L'Aquila earthquake in Italy and the 2011 M9.0 mega-thrust off the Pacific coast of the Tōhoku region in Japan.
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is: "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based in hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
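One distribution-free technique of the kind described is the Kaplan-Meier estimator applied to failure times with censoring, which suits systems whose subsystems finish testing at different times. A minimal sketch with invented failure data (not GSFC project data):

```python
import numpy as np

# Invented test data: time-to-failure in hours; censored=True means the
# component was still working when its test ended.
times    = np.array([ 50, 120, 120, 200, 340, 400, 400, 520])
censored = np.array([False, False, True, False, False, True, False, True])

# Kaplan-Meier: S(t) = product over failure times t_i <= t of (1 - d_i / n_i),
# where d_i failures occur at t_i and n_i units are still at risk.
S = 1.0
at_risk = len(times)
print("   t    S(t)")
for t in np.unique(times):
    d = ((times == t) & ~censored).sum()   # failures observed at t
    if d > 0:
        S *= 1 - d / at_risk
        print(f"{t:4d}   {S:.3f}")
    at_risk -= (times == t).sum()          # failures + censorings leave the risk set
```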
Do complexity-informed health interventions work? A scoping review.
Brainard, Julii; Hunter, Paul R
2016-09-20
The lens of complexity theory is widely advocated to improve health care delivery. However, empirical evidence that this lens has been useful in designing health care remains elusive. This review assesses whether it is possible to reliably capture evidence for efficacy in results or process within interventions that were informed by complexity science and closely related conceptual frameworks. Systematic searches of scientific and grey literature were undertaken in late 2015/early 2016. Titles and abstracts were screened for interventions (A) delivered by the health services, (B) that explicitly stated that complexity science provided theoretical underpinning, and (C) that also reported specific outcomes. Outcomes had to relate to changes in actual practice, service delivery, or patient clinical indicators. Data extraction and detailed analysis were undertaken for studies in three developed countries: Canada, the UK, and the USA. Data were extracted for intervention format, barriers encountered, and quality aspects (thoroughness or possible biases) of evaluation and reporting. From 5067 initial finds in the scientific literature and 171 items in the grey literature, 22 interventions described in 29 articles were selected. Most interventions relied on facilitating collaboration to find solutions to specific or general problems. Many outcomes were very positive. However, some outcomes were measured only subjectively, one intervention was designed with complexity theory in mind but did not reiterate this in subsequent evaluation, and other interventions were credited as compatible with complexity science but reported no relevant theoretical underpinning. Articles often omitted discussion of implementation barriers or unintended consequences, which suggests that complexity theory was not widely used in evaluation. It is hard to establish cause and effect when attempting to leverage complex adaptive systems, and perhaps even harder to reliably find evidence that confirms whether complexity-informed interventions are usually effective. While it is possible to show that interventions that are compatible with complexity science seem efficacious, it remains difficult to show that explicit planning with complexity in mind was particularly valuable. Recommendations are made to improve future evaluation reports, to establish a better evidence base about whether this conceptual framework is useful in intervention design and implementation.
Improving reliability of a residency interview process.
Peeters, Michael J; Serres, Michelle L; Gundrum, Todd E
2013-10-14
To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. In phase 1 of the study, the authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. In phase 1, the evaluation form had a reliability of 0.98 with person separation of 6.56; the form reproducibly separated applicants into 6 distinct groups. Using that form in phases 2 and 3, our largest variation source was candidates, while content specificity was the next largest variation source. The phase 2 g-coefficient was 0.787, while the confirmatory phase 3 g-coefficient was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact of content specificity.
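The g-coefficients reported come from generalizability theory. The sketch below estimates a relative g-coefficient for a simplified one-facet (candidate x station) crossed design; the scores are invented, and the actual study's design involved additional facets:

```python
import numpy as np

# Invented ratings: rows = candidates, columns = interview stations.
Y = np.array([[7, 8, 7, 6],
              [5, 5, 6, 5],
              [9, 8, 9, 9],
              [6, 7, 6, 7]], dtype=float)
n_p, n_i = Y.shape

grand = Y.mean()
MS_p = n_i * ((Y.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
MS_i = n_p * ((Y.mean(axis=0) - grand) ** 2).sum() / (n_i - 1)
SS_res = ((Y - Y.mean(axis=1, keepdims=True)
             - Y.mean(axis=0, keepdims=True) + grand) ** 2).sum()
MS_res = SS_res / ((n_p - 1) * (n_i - 1))

# Estimated variance components for a crossed p x i design.
var_p   = max(0.0, (MS_p - MS_res) / n_i)    # candidates (object of measurement)
var_res = MS_res                             # candidate-by-station interaction + error

# Relative g-coefficient for a decision based on the mean over n_i stations.
g = var_p / (var_p + var_res / n_i)
print(f"g-coefficient = {g:.3f}")
```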
ERIC Educational Resources Information Center
Brancaccio-Taras, Loretta; Pape-Lindstrom, Pamela; Peteroy-Kelly, Marcy; Aguirre, Karen; Awong-Taylor, Judy; Balser, Teri; Cahill, Michael J.; Frey, Regina F.; Jack, Thomas; Kelrick, Michael; Marley, Kate; Miller, Kathryn G.; Osgood, Marcy; Romano, Sandra; Uzman, J. Akif; Zhao, Jiuqing
2016-01-01
The PULSE Vision & Change Rubrics, version 1.0, assess life sciences departments' progress toward implementation of the principles of the "Vision and Change report." This paper reports on the development of the rubrics, their validation, and their reliability in measuring departmental change aligned with the "Vision and…
Mission Status at Aura Science Team MOWG Meeting: EOS Aura
NASA Technical Reports Server (NTRS)
Fisher, Dominic
2016-01-01
Presentation at the 24797-16 Earth Observing System (EOS) Aura Science Team Meeting (Mission Operations Work Group (MOWG)) at Rotterdam, Netherlands August 29, 2016. Presentation topics include mission summary, spacecraft subsystems summary, recent and planned activities, spacecraft anomalies, data capture, propellant usage and lifetime estimates, spacecraft maneuvers and ground track history, mission highlights and past spacecraft anomalies and reliability estimates.
Integrated management of thesis using clustering method
NASA Astrophysics Data System (ADS)
Astuti, Indah Fitri; Cahyadi, Dedy
2017-02-01
The thesis is one of the major requirements for students pursuing their bachelor degree. In fact, finishing the thesis involves a long process including consultation, writing the manuscript, conducting the chosen method, seminar scheduling, searching for references, and the appraisal process by the board of mentors and examiners. Unfortunately, most students find it hard to match all the lecturers' free time so that they can sit together in a seminar room to examine the thesis. Therefore, the seminar scheduling process should be at the top of the list of priorities to be solved. A manual mechanism for this task no longer fulfills the need. People on campus, including students, staff, and lecturers, demand a system in which all the stakeholders can interact with each other and manage the thesis process without conflicting timetables. A branch of computer science named Management Information Systems (MIS) could be a breakthrough in dealing with thesis management. This research applies a method called clustering to distinguish certain categories using mathematical formulas. A system was then developed along with the method to create a well-managed tool providing some main facilities such as seminar scheduling, consultation and review process, thesis approval, assessment process, and also a reliable database of theses. The database plays an important role for present and future purposes.
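A sketch of the clustering step using plain k-means to group theses into scheduling categories. The feature choice (progress stage and mentor workload) is a hypothetical illustration, since the paper does not specify its features:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: assign points to the nearest centroid, then recenter."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Hypothetical features per thesis: [progress stage 0-1, mentor workload 0-1].
theses = np.array([[0.9, 0.2], [0.8, 0.3], [0.2, 0.9],
                   [0.3, 0.8], [0.5, 0.5], [0.9, 0.1]])
labels, centroids = kmeans(theses, k=2)
print("scheduling groups:", labels)
```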
Center Director's Discretionary Fund 2005 Annual Report
NASA Technical Reports Server (NTRS)
Nurge, Mark; Griffin, Timothy; Arens, Ellen; Calle, Carlos; Quinn, Jacqueline; Wheeler, Raymond; Metzger, Phillip T.; Calle, Luz Marina; Beaver, Justin M.; Williams, Martha;
2007-01-01
The FY 2005 CDDF projects were selected from the following spaceport and range technology and science areas: fluid system technologies; spaceport structures and materials; command, control, and monitoring technologies; and biological sciences (including support for environmental stewardship). The FY 2005 CDDF research projects involved development of the following: a) Capacitance-based moisture sensors to optimize plant growth in reduced gravity; b) Commodity-free calibration methods; c) Application of atmospheric plasma glow discharge to alter the surface properties of polymers for improved electrostatic dissipation characteristics; d) A wipe-on, wipe-off chemical process to remove lead oxides found in paint; e) A robust metabolite profiling platform for better understanding the "law" of biological regulation; f) An explanation of the excavation processes that occur when a jet of gas impinges on a bed of sand; g) "Smart coatings" to detect and control corrosion at an early stage to prevent further corrosion; h) A model that can produce a reliable diagnosis of the quality of a software product; i) The formulation of advanced materials to meet system safety needs to minimize electrostatic charges, flammability, and radiation exposure; j) A lab-based instrument that uses the electro-optic Pockels effect to make static electric fields visible; k) A passive volatile organic compound (VOC) cartridge to filter, identify, and quantify VOCs flowing into or emanating from plant flight experiments.
Demonstration of nanoimprinted hyperlens array for high-throughput sub-diffraction imaging
NASA Astrophysics Data System (ADS)
Byun, Minsueop; Lee, Dasol; Kim, Minkyung; Kim, Yangdoo; Kim, Kwan; Ok, Jong G.; Rho, Junsuk; Lee, Heon
2017-04-01
Overcoming the resolution limit of conventional optics is regarded as the most important issue in optical imaging science and technology. Although hyperlenses, super-resolution imaging devices based on highly anisotropic dispersion relations that allow access to high-wavevector components, have recently achieved far-field sub-diffraction imaging in real time, the previously demonstrated devices have suffered from extreme difficulties in both the fabrication process and the placement of natural, non-artificial objects. This results in restrictions on the practical applications of hyperlens devices. While implementing large-scale hyperlens arrays in conventional microscopy is desirable to solve such issues, it has not been feasible to fabricate such large-scale hyperlens arrays with the previously used nanofabrication methods. Here, we suggest a scalable and reliable fabrication process for a large-scale hyperlens device based on direct pattern transfer techniques. We fabricate a 5 cm × 5 cm hyperlens array and experimentally demonstrate that it can resolve sub-diffraction features down to 160 nm under 410 nm wavelength visible light. The array-based hyperlens device will provide a simple solution for much more practical far-field and real-time super-resolution imaging, which can be widely used in optics, biology, medical science, nanotechnology, and other closely related interdisciplinary fields.
High Strength Steel Weldment Reliability: Weld Metal Hydrogen Trapping.
1998-02-01
[Fragmentary report text; recoverable details: the report "High Strength Steel Weldment Reliability: Weld Metal Hydrogen Trapping" was submitted to the United States Army Research Office, Materials Science Division, P.O. Box 12211, Research Triangle Park; it cites the Conf. Proc. of Welding and Related Technologies for the XXIth Century, November 1998, Kiev, Ukraine ("Hydrogen Assisted Cracking in..."); and it notes that the next TTCP workshop would be held 6-8 October 1998 at CANMET, Ottawa, Ontario, Canada.]
Lausberg, Hedda; Sloetjes, Han
2016-09-01
As visual media spread to all domains of public and scientific life, nonverbal behavior is taking its place as an important form of communication alongside the written and spoken word. An objective and reliable method of analysis for hand movement behavior and gesture is therefore currently required in various scientific disciplines, including psychology, medicine, linguistics, anthropology, sociology, and computer science. However, no adequate common methodological standards have been developed thus far. Many behavioral gesture-coding systems lack objectivity and reliability, and automated methods that register specific movement parameters often fail to show validity with regard to psychological and social functions. To address these deficits, we have combined two methods, an elaborated behavioral coding system and an annotation tool for video and audio data. The NEUROGES-ELAN system is an effective and user-friendly research tool for the analysis of hand movement behavior, including gesture, self-touch, shifts, and actions. Since its first publication in 2009 in Behavior Research Methods, the tool has been used in interdisciplinary research projects to analyze a total of 467 individuals from different cultures, including subjects with mental disease and brain damage. Partly on the basis of new insights from these studies, the system has been revised methodologically and conceptually. The article presents the revised version of the system, including a detailed study of reliability. The improved reproducibility of the revised version makes NEUROGES-ELAN a suitable system for basic empirical research into the relation between hand movement behavior and gesture and cognitive, emotional, and interactive processes and for the development of automated movement behavior recognition methods.
The assessment of fidelity in a motor speech-treatment approach
Hayden, Deborah; Namasivayam, Aravind Kumar; Ward, Roslyn
2015-01-01
Objective: To demonstrate the application of the constructs of treatment fidelity for research and clinical practice in motor speech disorders, using the Prompts for Restructuring Oral Muscular Phonetic Targets (PROMPT) Fidelity Measure (PFM). Treatment fidelity refers to a set of procedures used to monitor and improve the validity and reliability of behavioral intervention. While the concept of treatment fidelity has been emphasized in medical and allied health sciences, documentation of procedures for the systematic evaluation of treatment fidelity in Speech-Language Pathology is sparse. Methods: The development of the PFM and the iterative process used to improve it are discussed. Further, the PFM is evaluated against recommended measurement strategies documented in the literature. This includes evaluating the appropriateness of goals and objectives, and the training of speech-language pathologists, using direct and indirect procedures. Three expert raters scored the PFM to examine inter-rater reliability. Results: Three raters, blinded to each other's scores, completed fidelity ratings on three separate occasions. Inter-rater reliability, using Krippendorff's Alpha, was >80% for the PFM on the final scoring occasion. This indicates strong inter-rater reliability. Conclusion: The development of fidelity measures for the training of service providers and treatment delivery is important in specialized treatment approaches where certain 'active ingredients' (e.g. specific treatment targets and therapeutic techniques) must be present in order for treatment to be effective. The PFM reflects evidence-based practice by integrating treatment delivery and clinical skill into a single quantifiable metric. The PFM enables researchers and clinicians to objectively measure treatment outcomes within the PROMPT approach. PMID:26213623
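Krippendorff's Alpha for nominal ratings, as reported for the PFM, can be computed from a coincidence matrix. A self-contained sketch with invented ratings (`None` marks a missing rating); the data are not from the study:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(data):
    """data: list of units, each a list of category ratings (None = missing)."""
    units = [[v for v in unit if v is not None] for unit in data]
    units = [u for u in units if len(u) >= 2]          # pairable units only
    coincidence = Counter()
    for u in units:
        m = len(u)
        for a, b in permutations(range(m), 2):         # ordered pairs, a != b
            coincidence[(u[a], u[b])] += 1 / (m - 1)
    n_c = Counter()
    for (c, _k), w in coincidence.items():
        n_c[c] += w
    n = sum(n_c.values())
    # Observed and expected disagreement for the nominal metric (delta = 1 if c != k).
    D_o = sum(w for (c, k), w in coincidence.items() if c != k) / n
    D_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1 - D_o / D_e

# Invented example: 3 raters scoring 5 fidelity items on categories 0-2.
ratings = [[0, 0, 0], [1, 1, 2], [2, 2, 2], [0, 0, 1], [1, 1, None]]
print(f"alpha = {krippendorff_alpha_nominal(ratings):.3f}")
```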
PREFACE: International Conference on Applied Sciences 2015 (ICAS2015)
NASA Astrophysics Data System (ADS)
Lemle, Ludovic Dan; Jiang, Yiwen
2016-02-01
The International Conference on Applied Sciences ICAS2015 took place in Wuhan, China on June 3-5, 2015 at the Military Economics Academy of Wuhan. The conference is organized regularly, alternately in Romania and in P.R. China, by Politehnica University of Timişoara, Romania, and the Military Economics Academy of Wuhan, P.R. China, with the joint aims of serving as a platform for the exchange of information between various areas of applied sciences, and of promoting communication between scientists of different nations, countries, and continents. The topics of the conference cover a comprehensive spectrum of issues from: Economic Sciences and Defense (Management Sciences, Business Management, Financial Management, Logistics, Human Resources, Crisis Management, Risk Management, Quality Control, Analysis and Prediction, Government Expenditure, Computational Methods in Economics, Military Sciences, National Security, and others) and Fundamental Sciences and Engineering (Interdisciplinary applications of physics, Numerical approximation and analysis, Computational Methods in Engineering, Metallic Materials, Composite Materials, Metal Alloys, Metallurgy, Heat Transfer, Mechanical Engineering, Mechatronics, Reliability, Electrical Engineering, Circuits and Systems, Signal Processing, Software Engineering, Data Bases, Modeling and Simulation, and others). The conference gathered qualified researchers whose expertise can be used to develop new engineering knowledge with potential applicability in Engineering, Economics, Defense, etc. The number of participants was 120 from 11 countries (China, Romania, Taiwan, Korea, Denmark, France, Italy, Spain, USA, Jamaica, and Bosnia and Herzegovina). During the three days of the conference, four invited and 67 oral talks were delivered. Based on the work presented at the conference, 38 selected papers have been included in this volume of IOP Conference Series: Materials Science and Engineering. These papers present new research in the various fields of Materials Engineering, Mechanical Engineering, Computer Engineering, and Electrical Engineering. It is our great pleasure to present this volume of IOP Conference Series: Materials Science and Engineering to the scientific community to promote further research in these areas. We sincerely hope that the papers published in this volume will contribute to the advancement of knowledge in the respective fields.
Neutral Kaon Photoproduction at LNS, Tohoku University
NASA Astrophysics Data System (ADS)
Kaneta, M.; Chiga, N.; Beckford, B.; Ejima, M.; Fujii, T.; Fujii, Y.; Fujibayashi, T.; Gogami, T.; Futatsukawa, K.; Hashimoto, O.; Hosomi, K.; Hirose, K.; Iguchi, A.; Kameoka, S.; Kanda, H.; Kato, H.; Kawama, D.; Kawasaki, T.; Kimura, C.; Kiyokawa, S.; Koike, T.; Kon, T.; Ma, Y.; Maeda, K.; Maruyama, N.; Matsumura, A.; Miyagi, Y.; Miura, Y.; Miwa, K.; Nakamura, S. N.; Nomura, H.; Okuyama, A.; Ohtani, A.; Otani, T.; Sato, M.; Shichijo, A.; Shirotori, K.; Takahashi, T.; Tamura, H.; Taniya, N.; Tsubota, H.; Tsukada, K.; Terada, N.; Ukai, M.; Uchida, D.; Watanabe, T.; Yamamoto, T.; Yamauchi, H.; Yokota, K.; Ishikawa, T.; Kinoshita, T.; Miyahara, H.; Nakabayashi, T.; Shimizu, H.; Suzuki, K.; Tamae, T.; Terasawa, T.; Yamazaki, H.; Han, Y. C.; Wang, T. S.; Sasaki, A.; Konno, O.; Bydžovský, P.; Sotona, M.
2010-10-01
The elementary photo-strangeness production process has been intensively studied based on the high-quality data of the charged kaon channel, γ + p → K+ + Λ(Σ0). However, there had been no reliable data for the neutral kaon channel γ + n → K0 + Λ(Σ0), and theoretical investigations suffer seriously from the lack of such data. In order to obtain reliable data for neutral kaon photoproduction, substantial effort has been made to measure the γ + n → K0 + Λ process in the π+π- decay channel, using a liquid deuterium target and a tagged photon beam (Eγ = 0.8-1.1 GeV) in the threshold region at the Laboratory of Nuclear Science (LNS), Tohoku University. We took exploratory data quite successfully with the Neutral Kaon Spectrometer (NKS) at LNS-Tohoku in 2003 and 2004. The data are compared to theoretical models and hint that the K0 differential cross section has a backward peak in this energy region. The second generation of the experiment, NKS2, is designed to extend the NKS experiment by considerably upgrading the original neutral kaon spectrometer, fully replacing the spectrometer magnet, tracking detectors, and all the trigger counters. The new spectrometer NKS2 has significantly larger acceptance for neutral kaons compared with NKS, particularly covering forward angles, and much better invariant mass resolution. The estimated acceptance of NKS2 is three (ten) times larger for KS0 (Λ) than that of NKS. The spectrometer was newly constructed and installed at the Laboratory of Nuclear Science, Tohoku University in 2005. Deuterium target data were taken with the tagged photon beam in 2006-2007. We report recent results of NKS2 in this paper. Additionally, the status of the upgrade project, which gives us larger acceptance and the capability of K0 + Λ coincidence measurement, will be presented.
The creation and validation of an instrument to measure school STEM Culture
NASA Astrophysics Data System (ADS)
White, Christopher
Although current research exists on school culture, there is a gap in the literature on specialized aspects of culture such as STEM Culture, defined as the beliefs, values, practices, resources, and challenges in STEM fields (Science, Technology, Engineering, and Mathematics) within a school. The objective of this study was to create a valid and reliable instrument, the STEM Culture Assessment Tool (STEM-CAT), that measures this cultural aspect based on a survey of stakeholder groups within the school community, and to use empirical data to support the use of this instrument to measure STEM Culture. Items were created and face validity was determined through a focus group and expert review before a pilot study was conducted to determine the reliability of the items. Once the items were determined reliable, the survey was given to eight high schools and the results were correlated with the percentage of seniors who self-reported that they intend to pursue STEM fields upon graduation. The results of this study indicate a further need for research to determine how the STEM-CAT correlates with STEM culture, due to some inconsistencies with the dependent variable in this study. Future research could correlate the results of the STEM-CAT with participation in Advanced Placement science and mathematics, SAT/ACT scores in science and mathematics, or the number of students who actually pursue STEM fields rather than a prediction halfway through the 12th grade.
Using meta-quality to assess the utility of volunteered geographic information for science.
Langley, Shaun A; Messina, Joseph P; Moore, Nathan
2017-11-06
Volunteered geographic information (VGI) has strong potential to be increasingly valuable to scientists in collaboration with non-scientists. The abundance of mobile phones and other wireless forms of communication opens up significant opportunities for the public to get involved in scientific research. As these devices and activities become more abundant, questions of uncertainty and error in volunteer data are emerging as critical components for using volunteer-sourced spatial data. Here we present a methodology for using VGI and assessing its sensitivity to three types of error. More specifically, this study evaluates the reliability of data from volunteers based on their historical patterns. The specific context is a case study in surveillance of tsetse flies, a health concern as the primary vector of African trypanosomiasis. Reliability, as measured by a reputation score, determines the threshold for accepting the volunteered data for inclusion in a tsetse presence/absence model. Higher reputation scores are successful in identifying areas of higher modeled tsetse prevalence. A dynamic threshold is needed, but the quality of VGI will improve as more data are collected and the errors in identifying reliable participants will decrease. This system allows for two-way communication between researchers and the public, and a way to evaluate the reliability of VGI. Boosting the public's ability to participate in such work can improve disease surveillance and promote citizen science. In the absence of active surveillance, VGI can provide valuable spatial information given that the data are reliable.
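A sketch of the reputation-score idea: update each volunteer's score from the historical agreement of their reports with verified outcomes, and accept new reports only above a threshold. The exponentially weighted update rule and the threshold value are assumptions for illustration, not the paper's exact model:

```python
from dataclasses import dataclass, field

@dataclass
class Volunteer:
    reputation: float = 0.5          # start neutral (assumed prior)
    history: list = field(default_factory=list)

    def record_outcome(self, report_correct: bool, weight: float = 0.1):
        """Exponentially weighted update toward the observed accuracy."""
        self.history.append(report_correct)
        self.reputation += weight * (float(report_correct) - self.reputation)

def accept_report(v: Volunteer, threshold: float = 0.6) -> bool:
    """Include a report in the presence/absence model only if the
    volunteer's historical reliability exceeds the (dynamic) threshold."""
    return v.reputation >= threshold

v = Volunteer()
for correct in [True, True, False, True, True, True]:
    v.record_outcome(correct)
print(f"reputation = {v.reputation:.3f}, accept = {accept_report(v)}")
```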
Reliability of physical functioning tests in patients with low back pain: a systematic review.
Denteneer, Lenie; Van Daele, Ulrike; Truijen, Steven; De Hertogh, Willem; Meirte, Jill; Stassijns, Gaetane
2018-01-01
The aim of this study was to provide a comprehensive overview of physical functioning tests in patients with low back pain (LBP) and to investigate their reliability. A systematic computerized search was completed in four databases on June 24, 2017: PubMed, Web of Science, Embase, and MEDLINE. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed during all stages of this review. Clinical studies that investigate the reliability of physical functioning tests in patients with LBP were eligible. The methodological quality of the included studies was assessed with the use of the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN) checklist. To come to final conclusions on the reliability of the identified clinical tests, the current review assessed three factors, namely, outcome assessment, methodological quality, and consistency of description. A total of 20 studies were found eligible and 38 clinical tests were identified. Good overall test-retest reliability was concluded for the extensor endurance test (intraclass correlation coefficient [ICC]=0.93-0.97), the flexor endurance test (ICC=0.90-0.97), the 5-minute walking test (ICC=0.89-0.99), the 50-ft walking test (ICC=0.76-0.96), the shuttle walk test (ICC=0.92-0.99), the sit-to-stand test (ICC=0.91-0.99), and the loaded forward reach test (ICC=0.74-0.98). For inter-rater reliability, only one test, namely, the Biering-Sörensen test (ICC=0.88-0.99), could be concluded to have good overall inter-rater reliability. None of the identified clinical tests could be concluded to have good intra-rater reliability. Further investigation should focus on better overall study methodology and the use of identical protocols for the description of clinical tests. The assessment of reliability is only a first step in the recommendation process for the use of clinical tests. In future research, the identified clinical tests in the current review should be further investigated for validity. Only when these clinimetric properties of a clinical test have been thoroughly investigated can a final conclusion regarding the clinical and scientific use of the identified tests be made.
Negotiating boundaries: Encyclopédie, romanticism, and the construction of science.
Fetz, Marcelo
2017-01-01
Natural history in the eighteenth and nineteenth centuries has been widely debated in the field of the social sciences. This paper explores the social negotiation of boundaries in the Encyclopédie and romantic science. Highlighting the importance of imagination and aesthetics to the scientific realm, we arrive at a different comprehension of the scientific field through an empirical study of how scientific demarcation is constructed. Works by Erasmus Darwin, Goethe, and Humboldt illustrate how reliable science was performed through atypical scientific methods. After pointing out the links between literary, artistic, and scientific works, we then debate a series of changes that framed the scientific imagery of romantic and encyclopaedic sciences.
MSRR Rack Materials Science Research Rack
NASA Technical Reports Server (NTRS)
Reagan, Shawn
2017-01-01
The Materials Science Research Rack (MSRR) is a research facility developed under a cooperative research agreement between NASA and the European Space Agency (ESA) for materials science investigations on the International Space Station (ISS). The MSRR is managed at the Marshall Space Flight Center (MSFC) in Huntsville, AL. The MSRR facility subsystems were manufactured by Teledyne Brown Engineering (TBE) and integrated with the ESA/EADS-Astrium-developed Materials Science Laboratory (MSL) at the MSFC Space Station Integration and Test Facility (SSITF) as part of the Systems Development Operations Support (SDOS) contract. MSRR was launched on STS-128 in August 2009 and is currently installed in the U.S. Destiny Laboratory Module on the ISS. Materials science is an integral part of developing new, safer, stronger, more durable materials for use throughout everyday life. The goal of studying materials processing in space is to develop a better understanding of the chemical and physical mechanisms involved, and of how they differ in the microgravity environment of space. To that end, the MSRR accommodates advanced investigations in the microgravity environment of the ISS for basic materials science research in areas such as the solidification of metals and alloys. MSRR allows for the study of a variety of materials, including metals, ceramics, semiconductor crystals, and glasses. Materials science research benefits from the microgravity environment of space, where the researcher can better isolate the chemical and thermal properties of materials from the effects of gravity. With this knowledge, reliable predictions can be made about the conditions required on Earth to achieve improved materials. MSRR is a highly automated facility with a modular design capable of supporting multiple types of investigations. Currently the NASA-provided Rack Support Subsystem provides services (power, thermal control, vacuum access, and command and data handling) to the ESA-developed Materials Science Laboratory (MSL), which accommodates interchangeable Furnace Inserts (FIs). Two ESA-developed FIs are presently available on the ISS: the Low Gradient Furnace (LGF) and the Solidification and Quenching Furnace (SQF). Sample Cartridge Assemblies (SCAs), each containing one or more material samples, are installed in the FI by the crew and can be processed at temperatures up to 1400 °C. Once an SCA is installed, the experiment can be run by automatic command or conducted via telemetry commands from the ground. This facility is available to support materials science investigations through programs such as the US National Laboratory, Technology Development, NASA Research Announcements, and others. TBE and MSFC are currently developing NASA Sample Cartridge Assemblies (SCAs) with a planned availability for launch in 2017.
NASA Astrophysics Data System (ADS)
Buxner, Sanlyn; Impey, Chris; Romine, James; Nieberding, Megan
2015-08-01
Using 25 years of data, we have been conducting a long-term study of undergraduate students' science literacy. Based on questions developed for the National Science Board's survey of the US public, we have gathered data from students enrolled in astronomy courses to help us understand their basic science knowledge as well as their attitudes towards and beliefs about science. Science literacy of students in this study has remained relatively unchanged over a quarter of a century. Additionally, students' beliefs and attitudes were associated with their overall knowledge of science. Less predictive were their self-reported majors, year in school, and number of college science courses taken. Students in this study consistently outperformed the general public surveyed by the NSB. Three years ago we broadened our study to include an investigation of where students get their information about science and which sources they believe are the most and least reliable for that information. This past year, we have collected parallel data from lifelong learners from around the globe enrolled in a Massively Open Online Course (MOOC) in astronomy; 70% of this audience lives outside the US, representing 170 countries. We will present results of these new studies of almost 700 undergraduate students and over 2500 lifelong learners. Overall, the lifelong learners possess a greater interest in science and better knowledge of science despite less overall college science course experience. Use of online sources of scientific information was prevalent for both traditional college students and lifelong learners, although there were distinct differences in how different groups of learners perceived the reliability of online information. We will discuss the implications for teaching science in both traditional in-person college classes and in online learning environments as sources of scientific information and information literacy. This material is based upon work supported by the National Science Foundation under Grant No. 1244799. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Stirling Convertor Fasteners Reliability Quantification
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Kovacevich, Tiodor; Schreiber, Jeffrey G.
2006-01-01
Onboard Radioisotope Power Systems (RPS) being developed for NASA's deep-space science and exploration missions require reliable operation for up to 14 years and beyond. Stirling power conversion is a candidate for use in an RPS because it offers a multifold increase in the conversion efficiency of heat to electric power and a reduced inventory of radioactive material. Structural fasteners are responsible for maintaining the structural integrity of the Stirling power convertor, which is critical to ensure reliable performance during the entire mission. The design of fasteners involves variables related to fabrication, manufacturing, the material behavior of the fasteners and joined parts, the structural geometry of the joined components, the size and spacing of fasteners, mission loads, boundary conditions, etc. These variables have inherent uncertainties, which need to be accounted for in the reliability assessment. This paper describes these uncertainties along with a methodology to quantify reliability, and provides results of the analysis in terms of quantified reliability and the sensitivity of Stirling power conversion reliability to the design variables. Quantification of the reliability includes both structural and functional aspects of the joined components. Based on the results, the paper also describes guidelines to improve reliability and verification testing.
Markov and semi-Markov processes as a failure rate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabski, Franciszek
2016-06-08
In this paper the reliability function is defined by a stochastic failure rate process with non-negative and right-continuous trajectories. Equations for the conditional reliability functions of an object, under the assumption that the failure rate is a semi-Markov process with an at most countable state space, are derived. An appropriate theorem is presented. The linear systems of equations for the corresponding Laplace transforms make it possible to find the reliability functions for the alternating, Poisson, and Furry-Yule failure rate processes.
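The identity underlying these derivations, R(t) = E[exp(-∫₀ᵗ λ(s) ds)] for a random failure-rate process λ, can be checked numerically. The sketch below is a hedged Monte Carlo illustration for a failure rate that jumps upward at the events of a Poisson process; the paper treats such cases analytically via Laplace transforms, and all parameter values here are invented.

```python
# Hedged Monte Carlo sketch of R(t) = E[exp(-integral of lambda(s) ds)],
# where lambda(s) = lam0 + jump * N(s) and N is a Poisson process.
import numpy as np

rng = np.random.default_rng(1)

def reliability(t: float, lam0=0.1, jump=0.05, rate=0.5, n_paths=20_000) -> float:
    total = 0.0
    for _ in range(n_paths):
        # event times of the driving Poisson process on [0, t]
        n_events = rng.poisson(rate * t)
        times = rng.uniform(0.0, t, size=n_events)
        # integrated hazard: lam0 * t, plus jump * (time remaining) per event
        cum_hazard = lam0 * t + jump * np.sum(t - times)
        total += np.exp(-cum_hazard)
    return total / n_paths

print(round(reliability(5.0), 4))
```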
Benefits and challenges of incorporating citizen science into university education.
Mitchell, Nicola; Triska, Maggie; Liberatore, Andrea; Ashcroft, Linden; Weatherill, Richard; Longnecker, Nancy
2017-01-01
A common feature of many citizen science projects is the collection of data by unpaid contributors with the expectation that the data will be used in research. Here we report a teaching strategy that combined citizen science with inquiry-based learning to offer first year university students an authentic research experience. A six-year partnership with the Australian phenology citizen science program ClimateWatch has enabled biology students from the University of Western Australia to contribute phenological data on plants and animals, and to conduct the first research on unvalidated species datasets contributed by public and university participants. Students wrote scientific articles on their findings, peer-reviewed each other's work and the best articles were published online in a student journal. Surveys of more than 1500 students showed that their environmental engagement increased significantly after participating in data collection and data analysis. However, only 31% of students agreed with the statement that "data collected by citizen scientists are reliable" at the end of the project, whereas the rate of agreement was initially 79%. This change in perception was likely due to students discovering erroneous records when they mapped data points and analysed submitted photographs. A positive consequence was that students subsequently reported being more careful to avoid errors in their own data collection, and making greater efforts to contribute records that were useful for future scientific research. Evaluation of our project has shown that by embedding a research process within citizen science participation, university students are given cause to improve their contributions to environmental datasets. If true for citizen scientists in general, enabling participants as well as scientists to analyse data could enhance data quality, and so address a key constraint of broad-scale citizen science programs.
Next generation of spaceborne rain radars: science rationales and technology status
NASA Astrophysics Data System (ADS)
Im, Eastwood; Durden, Stephen L.; Kakar, Ramesh K.; Kummerow, Christian D.; Smith, Eric A.
2003-04-01
Global rainfall is the primary distributor of latent heat through atmospheric circulation. This important atmospheric parameter can only be measured reliably from space. The on-going Tropical Rainfall Measuring Mission (TRMM) is the first space-based mission dedicated to advancing our understanding of tropical precipitation patterns and their implications for global climate and its change. The Precipitation Radar (PR) aboard the satellite is the first rain radar ever flown in space and has provided exciting new data on 3-D rain structures for a variety of scientific applications. The continuing success of TRMM has led to the development of the next generation of spaceborne satellites and sensors for global rainfall and hydrological parameter measurements. From a science and cost-efficiency perspective, these new sensing instruments are expected to provide enhanced capabilities and reduced consumption of spacecraft resources. At NASA, the Earth Science Enterprise has strengthened its investment in instrument technologies to help achieve these two main goals and to obtain the best science value from the new earth science instruments. In this spirit, a notional instrument concept using a dual-frequency rain radar with a deployable 5-meter electronically-scanned membrane antenna and real-time digital signal processing has been developed. This new system, the Second Generation Precipitation Radar (PR-2), has the potential to offer greatly enhanced performance and accuracy while using only a fraction of the mass of the current TRMM PR. During the last two years, several of the technology items associated with this notional instrument have been prototyped. In this paper, the science rationales, the instrument design concept, and the technology status of the PR-2 notional system are presented.
Publishing FAIR Data: An Exemplar Methodology Utilizing PHI-Base.
Rodríguez-Iglesias, Alejandro; Rodríguez-González, Alejandro; Irvine, Alistair G; Sesma, Ane; Urban, Martin; Hammond-Kosack, Kim E; Wilkinson, Mark D
2016-01-01
Pathogen-Host interaction data is core to our understanding of disease processes and their molecular/genetic bases. Facile access to such core data is particularly important for the plant sciences, where individual genetic and phenotypic observations have the added complexity of being dispersed over a wide diversity of plant species vs. the relatively fewer host species of interest to biomedical researchers. Recently, an international initiative interested in scholarly data publishing proposed that all scientific data should be "FAIR"-Findable, Accessible, Interoperable, and Reusable. In this work, we describe the process of migrating a database of notable relevance to the plant sciences-the Pathogen-Host Interaction Database (PHI-base)-to a form that conforms to each of the FAIR Principles. We discuss the technical and architectural decisions, and the migration pathway, including observations of the difficulty and/or fidelity of each step. We examine how multiple FAIR principles can be addressed simultaneously through careful design decisions, including making data FAIR for both humans and machines with minimal duplication of effort. We note how FAIR data publishing involves more than data reformatting, requiring features beyond those exhibited by most life science Semantic Web or Linked Data resources. We explore the value-added by completing this FAIR data transformation, and then test the result through integrative questions that could not easily be asked over traditional Web-based data resources. Finally, we demonstrate the utility of providing explicit and reliable access to provenance information, which we argue enhances citation rates by encouraging and facilitating transparent scholarly reuse of these valuable data holdings.
Emerging Fabric of Science: Persistent Identifiers and Knowledge Networks
NASA Astrophysics Data System (ADS)
Hugo, W.
2017-12-01
There is an increasing emphasis on the use of persistent identifiers in the description of scientific activity, whether this is done to cite scholarly publications and research output, reliably identify role players such as funders and researchers, or to provide long-lasting references to controlled vocabulary. The ICSU World Data System has been promoting the establishment of a "Knowledge Network" to describe research activity, realising that parts of the network will be established as a federated "system" based on linkages between registries of persistent identifiers. In addition, there is a growing focus not only on the relationships between these major role players and associated digital objects, but also on the processes of science: provenance, reproducibility, and re-usability being significant topics of discussion. The paper focuses on a description of the "Fabric of Science" from the perspectives of both structure and processes, and reviews the state of implementation of real services and infrastructure in support of it. A case is made for the inclusion of persistent identifiers in the mainstream activities of scientists and data infrastructure managers, and for the development of services, such as Scholix, to make better use of the relationships between digital objects and major role players. A proposal is made for the adoption of a federated system of services based on a hybrid graph-object framework, similar to Scholix, for recording the activity of scientific research. Finally, links to related ideas are explored: novel ways of representing knowledge (such as Nanopublications) and the possibility that the publication paradigm currently in use may have to be amended.
The Use of Cronbach's Alpha When Developing and Reporting Research Instruments in Science Education
NASA Astrophysics Data System (ADS)
Taber, Keith S.
2017-06-01
Cronbach's alpha is a statistic commonly quoted by authors to demonstrate that tests and scales that have been constructed or adopted for research projects are fit for purpose. Cronbach's alpha is regularly adopted in studies in science education: it was referred to in 69 different papers published in 4 leading science education journals in a single year (2015)—usually as a measure of reliability. This article explores how this statistic is used in reporting science education research and what it represents. Authors often cite alpha values with little commentary to explain why they feel this statistic is relevant and seldom interpret the result for readers beyond citing an arbitrary threshold for an acceptable value. Those authors who do offer readers qualitative descriptors interpreting alpha values adopt a diverse and seemingly arbitrary terminology. More seriously, illustrative examples from the science education literature demonstrate that alpha may be acceptable even when there are recognised problems with the scales concerned. Alpha is also sometimes inappropriately used to claim an instrument is unidimensional. It is argued that a high value of alpha offers limited evidence of the reliability of a research instrument, and that indeed a very high value may actually be undesirable when developing a test of scientific knowledge or understanding. Guidance is offered to authors reporting, and readers evaluating, studies that present Cronbach's alpha statistic as evidence of instrument quality.
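As a companion to the discussion, the formula in question is short: alpha = k/(k-1) * (1 - sum of item variances / total-score variance) for k items. The sketch below computes it for a simulated respondents-by-items matrix; the simulation parameters are invented, and a high alpha here illustrates internal consistency, not unidimensionality.

```python
# Minimal sketch of the statistic the article discusses: Cronbach's alpha for
# a (respondents x items) score matrix. A high value reflects inter-item
# correlation and item count, not that the scale measures a single dimension.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (n_respondents, k_items)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))                    # shared latent trait
items = ability + rng.normal(scale=1.0, size=(200, 8)) # 8 correlated items
print(round(cronbach_alpha(items), 2))                  # roughly 0.89 here
```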
NASA Astrophysics Data System (ADS)
Brooks, Kristine M.
The goal of science education is the preparation of scientifically literate students (Abd-El-Khalick & Lederman, 2000; American Association for the Advancement of Science [AAAS], 1990). In order to instruct students in the nature of science with its history, development, methods, and applications, science teachers use textbooks as the primary organizer for the curriculum (Chippetta, Ganesh, Lee, & Phillips, 2006). Science textbooks are the dominant instructional tool and exert great influence on instructional content and its delivery (Wang, 1998). Science and science literacy require acquiring knowledge about the natural world and understanding its application in society; in other words, the nature of science. An understanding of the nature of science is an important part of science literacy (Abd-El-Khalick & Lederman, 2000; AAAS, 1990). The nature of science has four basic themes or dimensions: science as a body of knowledge, science as a way of thinking, science as a way of investigating, and science in its interaction with technology and society (Chippetta & Koballa, 2006). Textbooks must relay and incorporate these themes to promote science literacy.

The results from this content analysis provide further insights into science textbooks and their content with regard to the inclusion of the nature of science and ethnic diversity. Science textbooks usually downplay human influences (Clough & Olson, 2004), whether as part of the nature of science with its historical development or in its interaction with societies of diverse cultures. Minority students are underperforming in science, and science is divided on ethnic, linguistic, and gender identity (Brown, 2005). Greater representation of diversity in curriculum materials enables minority students to identify with science (Nines, 2000). Textbooks, with their influence on curriculum and presentation, must include links between science and students of diverse cultures.

What is the balance of the four aspects of the nature of science, and what is the balance of ethnic diversity among the participants in science (students and scientists), in physical science textbooks? To answer these questions, this investigation used content analysis. For the balance of the four aspects of the nature of science, the analysis was conducted on random page samples of five physical science textbooks; a random sampling of the pages within the textbooks should be sufficient to represent their content (Garcia, 1985). For the balance of ethnic diversity of the participants in science, the analysis was conducted on all pictures or drawings of students and scientists within the content of the five textbooks. One of these IPC books is currently in use in a large local school district, and the other four were published in the same, or a similar, year.

Coding procedures for the sample used two sets of coders. One set of coders had previously analyzed for the nature of science in a study of middle school science textbooks (Phillips, 2006), and the coders for ethnic diversity are public school teachers who have worked with ethnically diverse students for over ten years. Both sets of coders were trained and the reliability of their coding checked before coding the five textbooks. To check inter-coder reliability, percent agreement, Cohen's kappa, and Krippendorff's alpha were calculated.
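Two of the agreement indices named above are short computations. The following sketch, with invented category labels, shows percent agreement and Cohen's kappa for two coders over the same sampled pages; Krippendorff's alpha requires a coincidence-matrix construction and is omitted here for brevity.

```python
# Hedged sketch of two inter-coder agreement indices for nominal codes.
# The category labels are illustrative, not the study's actual coding scheme.
from collections import Counter

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    po = percent_agreement(a, b)                  # observed agreement
    ca, cb, n = Counter(a), Counter(b), len(a)
    # chance agreement from each coder's marginal category frequencies
    pe = sum(ca[c] * cb[c] for c in set(a) | set(b)) / n ** 2
    return (po - pe) / (1 - pe)

coder1 = ["knowledge", "investigating", "thinking", "knowledge", "technology"]
coder2 = ["knowledge", "investigating", "knowledge", "knowledge", "technology"]
print(percent_agreement(coder1, coder2))          # 0.8
print(round(cohens_kappa(coder1, coder2), 2))     # 0.71
```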
The results from this study indicate that science as a body of knowledge and science as a way of investigating are the prevalent themes of the nature of science in the five physical science textbooks. This investigation also found an imbalance in the ethnic diversity of the students and scientists portrayed within the chapters of the physical science textbooks studied; the ratios are neither equal nor in alignment with the U.S. Census. Given that textbooks are the main sources of information in most classrooms, the imbalance in the nature of science could leave students, and teachers, with an incomplete perception and understanding of the nature of science. This imbalance could also leave students with inadequate skills to develop and process science information and apply it to their world. The ethnic diversity portrayed in the physical science textbooks provides an inadequate link between the students' ethnic backgrounds and the ethnic diversity of the participants in science. Educators and publishers should provide science textbooks that incorporate all four aspects of the nature of science to a degree that science is perceived as more than just facts and information. Science must be recognized as a way of investigating, a way of thinking, and a way of applying knowledge to society. Further, in order to recognize all people who take part in science, students and scientists from a variety of ethnic groups should be portrayed in physical science textbooks.
AMS with light nuclei at small accelerators
NASA Astrophysics Data System (ADS)
Stan-Sion, C.; Enachescu, M.
2017-06-01
AMS applications with lighter nuclei are presented. It will be shown how carbon-14, boron-10, beryllium-10, and tritium can be used to provide valuable information in forensic science, environmental physics, nuclear pollution monitoring, and materials science, and for diagnosis of plasma confinement in fusion reactors. Small accelerators are reliable and efficient, and possess the highest ion-beam transmissions, which confer high precision in measurements.
ERIC Educational Resources Information Center
Wavering, Michael; Mangione, Katherine; McBride, Craig
2013-01-01
A dissertation study looking at preservice teachers' alternative conceptions in earth science was completed by one of the authors. The data used for this study from the dissertation were a series of eleven interviews. (Purpose) The authors of this manuscript wanted to provide more in-depth analysis of these interviews, specifically to provide a…
JPRS Report, Science & Technology, USSR: Science & Technology Policy
1988-04-05
associations—were formulated. Specialists, A. V. Glichev, director of the All-Union Institute of Metrology and Standardization of the USSR State... Various functional subdivisions—laboratories of reliability, metrological laboratories, monitoring and diagnostic centers, and so forth—are... department for standards, metrology, and quality. The latter annually does not approve and sends back for modification up to 20 of the "notebooks of
NASA Astrophysics Data System (ADS)
Vanvyve, E.; Magontier, P.; Vandenberghe, F. C.; Delle Monache, L.; Dickinson, K.
2012-12-01
Wind energy is amongst the fastest growing sources of renewable energy in the U.S. and could supply up to 20% of U.S. power production by 2030. An accurate and reliable wind resource assessment for prospective wind farm sites is a challenging task, yet it is crucial for evaluating the long-term profitability and feasibility of a potential development. We have developed an accurate and computationally efficient wind resource assessment technique for prospective wind farm sites, which incorporates innovative statistical techniques and the new NASA Earth science dataset MERRA. This technique produces a wind resource estimate that is more accurate than that obtained with the wind energy industry's standard technique, while providing a reliable quantification of its uncertainty. The focus now is on evaluating the socio-economic value of this new technique relative to the industry's standard technique. Would it yield lower financing costs? Could it result in lower electricity prices? Are there further down-the-line positive consequences, e.g., job creation, time saved, decreased greenhouse gas emissions? Ultimately, we expect our results will inform efforts to refine and disseminate the new technique to support the development of the U.S. renewable energy infrastructure. In order to address the above questions, we are carrying out a cost-benefit analysis based on the net present worth of the technique. We will describe this approach, including the cash-flow process of wind farm financing and how the wind resource assessment factors in, and will present current results for various hypothetical candidate wind farm sites.
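The net-present-worth comparison at the heart of such a cost-benefit analysis reduces to discounting cash flows. The sketch below uses entirely invented figures to show how a sharper resource estimate propagates into the comparison; it illustrates the arithmetic, not the study's actual financial model.

```python
# Minimal sketch of a net-present-worth comparison between two hypothetical
# wind resource estimates for the same site. All figures are invented.
def npv(cash_flows, rate):
    """cash_flows[0] is the year-0 outlay (negative); rate is the discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

capex = -100e6
standard = [capex] + [12.0e6] * 20   # revenue under the standard estimate
improved = [capex] + [12.5e6] * 20   # revenue under a sharper estimate
for name, flows in [("standard", standard), ("improved", improved)]:
    print(name, round(npv(flows, 0.08) / 1e6, 1), "M$")
```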
Cooperation and dialogical modeling for designing a safe Human space exploration mission to Mars
NASA Astrophysics Data System (ADS)
Grès, Stéphane; Tognini, Michel; Le Cardinal, Gilles; Zalila, Zyed; Gueydan, Guillaume
2014-11-01
This paper proposes an approach for a complex and innovative project requiring international contributions from different communities of knowledge and expertise. Designing a safe and reliable architecture for a manned mission to Mars or the asteroids necessitates strong cooperation during the early stages of design to prevent and reduce risks for the astronauts at each step of the mission. The stake during design is to deal with the contradictions, antagonisms, and paradoxes of the involved partners in the definition and modeling of a shared project of reference. As we see in our research, which analyses the cognitive and social aspects of technological risks in major accidents, in such a project the complexity of the global organization (during design and use) and the integration of a wide and varied range of sciences and innovative technologies is likely to increase systemic risks: human and cultural mistakes, potential defects, failures, and accidents. We identify antiquated centralized models of organization and the operational limits of interdisciplinarity in the sciences as the main dangers. Beyond this, we can see that we need to take human cooperation and the quality of relations between heterogeneous partners carefully into account. Designing an open, self-learning, and reliable exploration system able to self-adapt in dangerous and unforeseen situations implies a collective networked intelligence led by a safe process that organizes interaction between the actors and the aims of the project. Our work, supported by the CNES (French Space Agency), proposes an innovative approach to the coordination of a complex project.
Science Process Skills in Science Curricula Applied in Turkey
ERIC Educational Resources Information Center
Yumusak, Güngör Keskinkiliç
2016-01-01
One of the most important objectives of science curricula is to develop science process skills. Science process skills are the skills that underlie scientific thinking and decision-making. Thus it is important for a science curriculum to be rationalized in such a way that it develops science process skills. New science curricula were…
EOS MLS Science Data Processing System: A Description of Architecture and Capabilities
NASA Technical Reports Server (NTRS)
Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.
2006-01-01
This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.
Improving Reliability of a Residency Interview Process
Serres, Michelle L.; Gundrum, Todd E.
2013-01-01
Objective. To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. Methods. In phase 1 of the study, the authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. Results. In phase 1, the evaluation form had a reliability of 0.98 with a person separation of 6.56; reproducibly, the form separated applicants into 6 distinct groups. Using that form in phases 2 and 3, the largest variation source was candidates, while content specificity was the next largest variation source. The phase 2 g-coefficient was 0.787, while the confirmatory phase 3 value was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations. Conclusion. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact of content specificity. PMID:24159209
Expanded Owens Valley Solar Array Science and Data Products
NASA Astrophysics Data System (ADS)
Gary, Dale E.; Hurford, G. J.; Nita, G. M.; Fleishman, G. D.; McTiernan, J. M.
2010-05-01
The Owens Valley Solar Array (OVSA) has been funded for major expansion, to create a university-based facility serving a broad scientific community, to keep the U.S. competitive in the field of solar radio physics. The project, funded by the National Science Foundation through the MRI-Recovery and Reinvestment program, will result in a world-class facility for scientific research at microwave radio frequencies (1-18 GHz) in solar and space weather physics. The project also includes an exciting program of targeted astronomical science. The solar science to be addressed focuses on the magnetic structure of the solar corona, on transient phenomena resulting from magnetic interactions, including the sudden release of energy and subsequent particle acceleration and heating, and on space weather phenomena. The project will support the scientific community by providing open data access and software tools for analysis of the data, to exploit synergies with on-going solar research in other wavelength bands. The New Jersey Institute of Technology (NJIT) will upgrade OVSA from its current complement of 7 antennas to a total of 15 by adding 8 new antennas, and will reinvest in the existing infrastructure by replacing the existing control systems, signal transmission, and signal processing with modern, far more capable and reliable systems based on new technology developed for the Frequency Agile Solar Radiotelescope (FASR). The project will be completed in time to provide solar-dedicated observations during the upcoming solar maximum in 2013 and beyond. We will detail the new science addressed by the expanded array, and provide an overview of the expected data products.
NASA Astrophysics Data System (ADS)
Yu, Z. P.; Yue, Z. F.; Liu, W.
2018-05-01
With the development of artificial intelligence, more and more reliability experts have noticed the role of subjective information in the reliability design of complex systems. Therefore, based on a certain amount of experimental data and expert judgments, we divide reliability estimation based on a distribution hypothesis into a cognition process and a reliability calculation. As an illustration of this modification, we take information fusion based on intuitionistic fuzzy belief functions as the diagnosis model of the cognition process, and complete the reliability estimation for the opening function of a cabin door affected by imprecise judgments corresponding to the distribution hypothesis.
Photogrammetry - Remote Sensing and Geoinformation
NASA Astrophysics Data System (ADS)
Lazaridou, M. A.; Patmio, E. N.
2012-07-01
Earth and its environment are studied by different scientific disciplines, such as the geosciences, the engineering sciences, the social sciences, geography, etc. Their study, beyond pure scientific interest, is useful for the practical needs of man. Photogrammetry and remote sensing (defined by Statute II of ISPRS) is the art, science, and technology of obtaining reliable information from non-contact imaging and other sensor systems about the Earth and its environment, and other physical objects and processes, through recording, measuring, analyzing, and representation. Therefore, according to this definition, photogrammetry and remote sensing can support studies in the above disciplines for the acquisition of geoinformation. This paper concerns basic concepts of the geosciences (geomorphology, geology, hydrology, etc.) and the fundamentals of photogrammetry and remote sensing, in order to aid understanding of the relationship between photogrammetry-remote sensing and geoinformation, and also to structure a curriculum in a brief, concise, and coherent way. This curriculum can represent an appropriate research and educational outline and help to disseminate knowledge in various directions and at various levels. It resulted from our research and educational experience at the graduate and post-graduate levels (post-graduate studies relating to the protection of the environment and the protection of monuments and historical centers) in the Lab. of Photogrammetry - Remote Sensing in the Civil Engineering Faculty of Aristotle University of Thessaloniki.
Medicine, methodology, and values: trade-offs in clinical science and practice.
Ho, Vincent K Y
2011-01-01
The current guidelines of evidence-based medicine (EBM) presuppose that clinical research and clinical practice should advance from rigorous scientific tests as they generate reliable, value-free knowledge. Under this presupposition, hypotheses postulated by doctors and patients in the process of their decision making are preferably tested in randomized clinical trials (RCTs), and in systematic reviews and meta-analyses summarizing outcomes from multiple RCTs. Since testing under this scheme is predominantly focused on the criteria of generality and precision achieved through methodological rigor, at the cost of the criterion of realism, translating test results to clinical practice is often problematic. Choices concerning which methodological criteria should have priority are inevitable, however, as clinical trials, and scientific research in general, cannot meet all relevant criteria at the same time. Since these choices may be informed by considerations external to science, we must acknowledge that science cannot be value-free in a strict sense, and this invites a more prominent role for value-laden considerations in evaluating clinical research. The urgency for this becomes even more apparent when we consider the important yet implicit role of scientific theories in EBM, which may also be subjected to methodological evaluation and for which selectiveness in methodological focus is likewise inevitable.
Semantically-enabled Knowledge Discovery in the Deep Carbon Observatory
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.; Ma, X.; Erickson, J. S.; West, P.; Fox, P. A.
2013-12-01
The Deep Carbon Observatory (DCO) is a decadal effort aimed at transforming scientific and public understanding of carbon in the complex deep earth system from the perspectives of Deep Energy, Deep Life, Extreme Physics and Chemistry, and Reservoirs and Fluxes. Over the course of the decade DCO scientific activities will generate a massive volume of data across a variety of disciplines, presenting significant challenges in terms of data integration, management, analysis and visualization, and ultimately limiting the ability of scientists across disciplines to make insights and unlock new knowledge. The DCO Data Science Team (DCO-DS) is applying Semantic Web methodologies to construct a knowledge representation focused on the DCO Earth science disciplines, and use it together with other technologies (e.g. natural language processing and data mining) to create a more expressive representation of the distributed corpus of DCO artifacts including datasets, metadata, instruments, sensors, platforms, deployments, researchers, organizations, funding agencies, grants and various awards. The embodiment of this knowledge representation is the DCO Data Science Infrastructure, in which unique entities within the DCO domain and the relations between them are recognized and explicitly identified. The DCO-DS Infrastructure will serve as a platform for more efficient and reliable searching, discovery, access, and publication of information and knowledge for the DCO scientific community and beyond.
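As a flavor of the explicit entity/relation representation described above, the following sketch builds a tiny RDF graph with rdflib; the namespace, class names, and identifiers are hypothetical placeholders, not DCO's actual vocabulary.

```python
# Hedged sketch of explicitly identified entities and relations as RDF.
# Requires rdflib (serialize() returns a string in rdflib >= 6).
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

DCO = Namespace("http://example.org/dco/")  # placeholder namespace

g = Graph()
dataset = URIRef(DCO["dataset/42"])         # hypothetical identifiers
researcher = URIRef(DCO["person/jdoe"])
g.add((researcher, RDF.type, FOAF.Person))
g.add((researcher, FOAF.name, Literal("J. Doe")))
g.add((dataset, RDF.type, DCO.Dataset))
g.add((dataset, DCO.createdBy, researcher))  # explicit relation between entities

print(g.serialize(format="turtle"))
```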
Litigation-Generated Science: Why Should We Care?
Boden, Leslie I.; Ozonoff, David
2008-01-01
Background: In a 1994 Ninth Circuit decision on the remand of Daubert v. Merrell Dow Pharmaceuticals, Inc., Judge Alex Kosinski wrote that science done for the purpose of litigation should be subject to more stringent standards of admissibility than other science. Objectives: We analyze this proposition by considering litigation-generated science as a subset of science involving conflict of interest. Discussion: Judge Kosinski's formulation suggests there may be reasons to treat science involving conflict of interest differently but raises questions about whether litigation-generated science should be singled out. In particular we discuss the similar problems raised by strategically motivated science done in anticipation of possible future litigation or otherwise designed to benefit the sponsor and ask what special treatment, if any, should be given to science undertaken to support existing or potential future litigation. Conclusion: The problems with litigation-generated science are not special. On the contrary, they are very general and apply to much or most science that is relevant and reliable in the courtroom setting. PMID:18197310
Meta-analytic guidelines for evaluating single-item reliabilities of personality instruments.
Spörrle, Matthias; Bekk, Magdalena
2014-06-01
Personality is an important predictor of various outcomes in many social science disciplines. However, when personality traits are not the principal focus of research, for example, in global comparative surveys, it is often not possible to assess them extensively. In this article, we first provide an overview of the advantages and challenges of single-item measures of personality, a rationale for their construction, and a summary of alternative ways of assessing their reliability. Second, using seven diverse samples (total N = 4,263), we develop the SIMP-G, the German adaptation of the Single-Item Measures of Personality, an instrument assessing the Big Five with one item per trait, and evaluate its validity and reliability. Third, we integrate previous research and our data into a first meta-analysis of single-item reliabilities of personality measures, and provide researchers with guidelines and recommendations for the evaluation of single-item reliabilities. © The Author(s) 2013.
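One common way to pool reliability coefficients across samples, and a plausible ingredient of guidelines like those described above, is a sample-size-weighted average after Fisher's z transformation. The sketch below shows that arithmetic with invented per-sample estimates; the article's actual meta-analytic model may differ.

```python
# Hedged sketch: pooling correlation-type reliability estimates across samples
# via Fisher's z transform with approximate inverse-variance weights (n - 3).
import numpy as np

def pooled_reliability(r: np.ndarray, n: np.ndarray) -> float:
    z = np.arctanh(r)                     # Fisher z transform
    z_bar = np.average(z, weights=n - 3)  # weight by sampling precision
    return float(np.tanh(z_bar))          # back-transform to the r metric

r = np.array([0.62, 0.70, 0.55, 0.68])    # invented per-sample estimates
n = np.array([300, 850, 420, 610])        # invented sample sizes
print(round(pooled_reliability(r, n), 2))
```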
Software architecture of INO340 telescope control system
NASA Astrophysics Data System (ADS)
Ravanmehr, Reza; Khosroshahi, Habib
2016-08-01
The software architecture plays an important role in the distributed control systems of astronomical projects, because many subsystems and components must work together in a consistent and reliable way. We have utilized a customized architecture design approach based on the "4+1 view model" in order to design the INOCS software architecture. In this paper, after reviewing the top-level INOCS architecture, we present the software architecture model of INOCS inspired by the "4+1 model". For this purpose we provide logical, process, development, physical, and scenario views of our architecture using different UML diagrams and other illustrative visual charts; each view presents the INOCS software architecture from a different perspective. We finish the paper with the science data operation of INO340 and concluding remarks.
2009-02-10
VANDENBERG AIR FORCE BASE, Calif. -- NASA's Orbiting Carbon Observatory, or OCO, arrives at Space Launch Complex 576-E at Vandenberg Air Force Base in California. The spacecraft is scheduled for launch aboard Orbital Sciences' Taurus XL rocket, being erected at left, on Feb. 23 from Vandenberg. The spacecraft will collect precise global measurements of carbon dioxide (CO2) in the Earth's atmosphere. Scientists will analyze OCO data to improve our understanding of the natural processes and human activities that regulate the abundance and distribution of this important greenhouse gas. This improved understanding will enable more reliable forecasts of future changes in the abundance and distribution of CO2 in the atmosphere and the effect that these changes may have on the Earth's climate. Photo credit: NASA/Randy Beaudoin, VAFB
2009-02-10
VANDENBERG AIR FORCE BASE, Calif. -- NASA's Orbiting Carbon Observatory, or OCO, is transported to Space Launch Complex 576-E at Vandenberg Air Force Base in California. The spacecraft is scheduled for launch aboard Orbital Sciences' Taurus XL rocket on Feb. 23 from Vandenberg. The spacecraft will collect precise global measurements of carbon dioxide (CO2) in the Earth's atmosphere. Scientists will analyze OCO data to improve our understanding of the natural processes and human activities that regulate the abundance and distribution of this important greenhouse gas. This improved understanding will enable more reliable forecasts of future changes in the abundance and distribution of CO2 in the atmosphere and the effect that these changes may have on the Earth's climate. Photo credit: NASA/Randy Beaudoin, VAFB
From Norm Adoption to Norm Internalization
NASA Astrophysics Data System (ADS)
Conte, Rosaria; Andrighetto, Giulia; Villatoro, Daniel
In this presentation, advances in modeling the mental dynamics of norms are described. In particular, the process from norm adoption, possibly yielding new normative goals, to different forms of norm compliance is focused upon, including norm internalization, which has long been studied in the social-behavioral sciences and moral philosophy. Of late, the debate has been revamped within the rationality approach, pointing to the role of norm internalization as a less costly and more reliable enforcement system than social control. So far, little attention has been paid to the mental underpinnings of internalization. In this presentation, a rich cognitive model of different types, degrees, and factors of internalization is shown. The initial implementation of this model on EMIL-A, a normative agent architecture developed and applied to the.
NASA Technical Reports Server (NTRS)
Frazier, Donald O.
2000-01-01
Technically, the field of integrated optics using organic/polymer materials as a new means of information processing has emerged as vitally important to optical computers, optical switching, optical communications, the defense industry, etc. The goal is to replace conventional electronic integrated circuits and wires with equivalent miniaturized optical integrated circuits and fibers, offering larger bandwidths, greater compactness and reliability, immunity to electromagnetic interference, and lower cost. From the Code E perspective, this research area represents an opportunity to marry "front-line" education in science and technology with national scientific and technological interests while maximizing the utilization of human resources. This can be achieved by the development of untapped resources for scientific research, such as minorities, women, and universities traditionally uninvolved in scientific research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pestovich, Kimberly Shay
Harnessing the power of the nuclear sciences for national security and to benefit others is one of Los Alamos National Laboratory's missions. MST-8 focuses on manipulating and studying how the structure, processing, properties, and performance of materials interact at the atomic level under nuclear conditions. Within this group, single crystal scintillators contribute to the safety and reliability of weapons, provide global security safeguards, and build on scientific principles that carry over to medical fields for cancer detection. Improved cladding materials made of ferritic-martensitic alloys support the mission of DOE-NE's Fuel Cycle Research and Development program to close the nuclear fuel cycle, aiming to solve nuclear waste management challenges and thereby increase the performance and safety of current and future reactors.
Composite structural materials. [fiber reinforced composites for aircraft structures
NASA Technical Reports Server (NTRS)
Ansell, G. S.; Loewy, R. G.; Wiberly, S. E.
1981-01-01
Physical properties of fiber reinforced composites; structural concepts and analysis; manufacturing; reliability; and life prediction are subjects of research conducted to determine the long term integrity of composite aircraft structures under conditions pertinent to service use. Progress is reported in (1) characterizing homogeneity in composite materials; (2) developing methods for analyzing composite materials; (3) studying fatigue in composite materials; (4) determining the temperature and moisture effects on the mechanical properties of laminates; (5) numerically analyzing moisture effects; (6) numerically analyzing the micromechanics of composite fracture; (7) constructing the 727 elevator attachment rib; (8) developing the L-1011 engine drag strut (CAPCOMP 2 program); (9) analyzing mechanical joints in composites; (10) developing computer software; and (11) processing science and technology, with emphasis on the sailplane project.
Deng, Jian-qiang; Hou, Yi-ping
2005-08-01
Genetic analysis from forensic microsamples is an urgent, difficult task in forensic science, because it is frequently limited by the amount of specimen available in forensic practice; much effort has been devoted to resolving this difficulty. Whole genome amplification (WGA) technology, which has developed quickly in recent years, is considered a powerful, reliable, and efficient strategy for the analysis of minute amounts of DNA in many fields. In this review, we discuss its application in forensic science.
Wilkins, Aleeza M.; Doebrich, Jeff L.
2016-09-19
The USGS Mineral Resources Program (MRP) delivers unbiased science and information to increase understanding of mineral resource potential, production, and consumption, and how mineral resources interact with the environment. The MRP is the Federal Government’s sole source for this mineral resource science and information. Program goals are to (1) increase understanding of mineral resource formation, (2) provide mineral resource inventories and assessments, (3) broaden knowledge of the effects of mineral resources on the environment and society, and (4) provide analysis on the availability and reliability of mineral supplies.
Foerster, Rebecca M.; Poth, Christian H.; Behler, Christian; Botsch, Mario; Schneider, Werner X.
2016-01-01
Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions including room lighting, stimuli, and viewing-distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices allow to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite to use Oculus Rift for neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen’s visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift allows to measure the processing components as reliably as the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions. PMID:27869220
Verification of Triple Modular Redundancy Insertion for Reliable and Trusted Systems
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Kenneth
2016-01-01
If a system is required to be protected using triple modular redundancy (TMR), improper insertion can jeopardize the reliability and security of the system. Due to the complexity of the verification process and the complexity of digital designs, there are currently no available techniques that can provide complete and reliable confirmation of TMR insertion. We propose a method for TMR insertion verification that satisfies the process for reliable and trusted systems.
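As a concrete reference point for what correct TMR insertion must preserve, the sketch below gives a minimal behavioral model of the 2-of-3 majority voter that TMR places on each protected signal; it illustrates the masking property, not the verification method the paper proposes.

```python
# Minimal behavioral model of a TMR majority voter over redundant copies of a
# signal; a single upset in any one copy is masked by the 2-of-3 vote.
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority over three redundant copies of a signal."""
    return (a & b) | (a & c) | (b & c)

golden = 0b1011
upset = golden ^ 0b0100          # flip one bit in one copy
assert tmr_vote(golden, upset, golden) == golden  # the fault is masked
```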
NASA Astrophysics Data System (ADS)
Hebner, Greg
2010-11-01
Products and consumer goods that utilize low temperature plasmas at some point in their creation touch and enrich our lives on almost a continuous basis. Examples are many but include the tremendous advances in microelectronics and the pervasive nature of the internet, advanced material coatings that increase the strength and reliability of products from turbine engines to potato chip bags, and the recent national emphasis on energy efficient lighting and compact fluorescent bulbs. Each of these products owes its contributions to energy security and international competitiveness to fundamental research investments. However, it would be a mistake to believe that the great commercial success of these products implies a robust understanding of the complicated interactions inherent in plasma systems. Rather, current development of the next generation of low temperature plasma enabled products and processes is clearly exposing a new set of exciting scientific challenges that require leaps in fundamental understanding and interdisciplinary research teams. Emerging applications such as liquid-plasma systems to improve water quality and remediate hazardous chemicals, plasma-assisted combustion to increase energy efficiency and reduce emissions, and medical applications promise to improve our lives and the environment only if difficult science questions are solved. This talk will take a brief look back at the role of low temperature plasma science in enabling entirely new markets and then survey the next generation of emerging plasma applications. The emphasis will be on describing the key science questions and the opportunities for cross-cutting scientific collaborations that underscore the need for increased outreach on the part of the plasma science community to improve visibility at the federal program level. This work is supported by the DOE, Office of Science for Fusion Energy Sciences, and Sandia National Laboratories, a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Analysis of Repeatability and Reliability of Warm IRAC Observations of Transiting Exoplanets
NASA Astrophysics Data System (ADS)
Carey, Sean J.; Krick, Jessica; Ingalls, James
2015-12-01
Extracting information about thermal profiles and composition of the atmospheres of transiting exoplanets is extremely challenging due to the small differential signal of the atmosphere in observations of transits, secondary eclipses, and full phase curves for exoplanets. The relevant signals are often at the level of 100 ppm or smaller and require the removal of significant instrumental systematics in the two infrared instruments currently capable of providing information at this precision, WFC3 on HST and IRAC aboard the Spitzer Space Telescope. For IRAC, the systematics are due to the interplay of residual telescope pointing variation with intra-pixel gain variations in the moderately undersampled camera. There is currently a debate in the community on the reliability of repeated IRAC observations of exoplanets, particularly those in eclipse, from which inferences about atmospheric temperature and pressure profiles can be made. To assess the repeatability and reliability of post-cryogenic observations with IRAC, the Spitzer Science Center, in conjunction with volunteers from the astronomical community, has performed a systematic analysis of the removal of systematics and the repeatability of warm IRAC observations. Recently, a data challenge consisting of the measurement of ten secondary eclipses of XO-3b (see Wong et al. 2014) and a complementary analysis of a synthetic version of the XO-3b data was undertaken. We report on the results of this data challenge. Five different techniques were applied to the data (BLISS mapping [Stevenson et al. (2012)], kernel regression using the science data [Wong et al. (2015)] and calibration data [Krick et al. (2015)], pixel-level decorrelation [Deming et al. (2015)], ICA [Morello et al. (2015)] and Gaussian Processes [Evans et al. (2015)]) and found consistent results in terms of eclipse depth and reliability in both the actual and synthetic data. In addition, each technique obtained the input eclipse depth in the simulated data within the stated measurement uncertainty. The reported uncertainties for each measurement approach the photon noise limit. These findings generally refute the results of Hansen et al. (2014) and suggest that inferences about atmospheric properties can be reasonably made using warm IRAC data. Application of our test methods to future observations using JWST (in particular the MIRI instrument) will be discussed.
Establishing Reliability and Validity of the Criterion Referenced Exam of GeoloGy Standards EGGS
NASA Astrophysics Data System (ADS)
Guffey, S. K.; Slater, S. J.; Slater, T. F.; Schleigh, S.; Burrows, A. C.
2016-12-01
Discipline-based geoscience education researchers have considerable need for a criterion-referenced, easy-to-administer and -score conceptual diagnostic survey for undergraduates taking introductory science survey courses, so that faculty can better monitor the learning impacts of various interactive teaching approaches. To support ongoing education research across the geosciences, we are continuing to work rigorously and systematically to firmly establish the reliability and validity of the recently released Exam of GeoloGy Standards, EGGS. In educational testing, reliability refers to the consistency or stability of test scores, whereas validity refers to the accuracy of the inferences or interpretations one makes from test scores. Several types of reliability measures are being applied to the iterative refinement of the EGGS survey, including test-retest, alternate form, split-half, internal consistency, and interrater reliability measures. EGGS rates strongly on most measures of reliability. For one, Cronbach's alpha provides a quantitative index, based on inter-item correlations, of the extent to which students answer items consistently throughout the test. Traditional item analysis methods, including item difficulty and item discrimination, further quantify the degree to which a particular item reliably assesses students. Validity, on the other hand, is perhaps best described by the word accuracy. For example, content validity is the extent to which a measurement reflects the specific intended domain of the content, stemming from judgments of people who are experts either in the testing of that particular content area or in the content itself. Perhaps more importantly, face validity is a judgment of how representative an instrument is of the science "at face value" and refers to the extent to which a test appears to measure the targeted scientific domain as viewed by laypersons, examinees, test users, the public, and other invested stakeholders.
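As a concrete illustration of the reliability statistics this abstract names (a hedged sketch with fabricated demo data, not the EGGS analysis code), Cronbach's alpha, item difficulty, and item discrimination can all be computed from a students-by-items matrix of 0/1 scores:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (n_students, n_items) matrix of 0/1 item scores."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

def item_stats(scores: np.ndarray):
    """Item difficulty (proportion correct) and item discrimination
    (corrected item-total correlation) for each item."""
    difficulty = scores.mean(axis=0)
    total = scores.sum(axis=1)
    disc = np.array([np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
                     for j in range(scores.shape[1])])
    return difficulty, disc

# Fabricated demo: independent random items, so alpha will be near zero;
# real tests with correlated items yield substantially higher values.
rng = np.random.default_rng(0)
demo = (rng.random((200, 25)) < 0.6).astype(int)
print(cronbach_alpha(demo))
print(item_stats(demo)[0][:5])
```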
Modified personal interviews: resurrecting reliable personal interviews for admissions?
Hanson, Mark D; Kulasegaram, Kulamakan Mahan; Woods, Nicole N; Fechtig, Lindsey; Anderson, Geoff
2012-10-01
Traditional admissions personal interviews provide flexible faculty-student interactions but are plagued by low inter-interview reliability. Axelson and Kreiter (2009) retrospectively showed that multiple independent sampling (MIS) may improve reliability of personal interviews; thus, the authors incorporated MIS into the admissions process for medical students applying to the University of Toronto's Leadership Education and Development Program (LEAD). They examined the reliability and resource demands of this modified personal interview (MPI) format. In 2010-2011, LEAD candidates submitted written applications, which were used to screen for participation in the MPI process. Selected candidates completed four brief (10-12 minutes) independent MPIs, each with a different interviewer. The authors blueprinted MPI questions to (i.e., aligned them with) leadership attributes, and interviewers assessed candidates' eligibility on a five-point Likert-type scale. The authors analyzed inter-interview reliability using generalizability theory. Sixteen candidates submitted applications; 10 proceeded to the MPI stage. Reliability of the written application components was 0.75. The MPI process had overall inter-interview reliability of 0.79. Correlation between the written application and MPI scores was 0.49. A decision study showed acceptable reliability of 0.74 with only three MPIs scored using one global rating. Furthermore, a traditional admissions interview format would take 66% more time than the MPI format. The MPI format, used during the LEAD admissions process, achieved high reliability with minimal faculty resources. The MPI format's reliability and effective resource use were possible through MIS and employment of expert interviewers. MPIs may be useful for other admissions tasks.
Wikipedia : its reliability and social role
NASA Astrophysics Data System (ADS)
Kusaka, Kyuhachi
This article discusses the Japanese Wikipedia's reliability and its social role as a free, collaborative, multilingual Internet encyclopedia supported by the non-profit Wikimedia Foundation. The Japanese Wikipedia's reliability is assessed on the basis of several surveys. The central concern is how the nature of an encyclopedia, and of Wikipedia in particular, affects the quality of Wikipedia's articles. Wikipedia's core content policies, such as verifiability, no original research, and a neutral point of view, tend to make articles better, but incomplete or poorly written first drafts exist because Wikipedia is a work in progress. The article also examines the social role of an online encyclopedia that provides knowledge for all. A knowledge-based and advanced information society requires public understanding of science and other expertise. An online encyclopedia whose content is attributed to reliable, published sources through inline citations can guide anyone to specialized knowledge.
Effect of Entropy Generation on Wear Mechanics and System Reliability
NASA Astrophysics Data System (ADS)
Gidwani, Akshay; James, Siddanth; Jagtap, Sagar; Karthikeyan, Ram; Vincent, S.
2018-04-01
Wear is an irreversible phenomenon. Processes such as mutual sliding and rolling between materials involve entropy generation, and these processes are monotonic with respect to time. The concept of entropy generation is quantified using the Degradation Entropy Generation theorem formulated by Michael D. Bryant. The sliding-wear model can be extrapolated to other settings to support machine prognostics as well as system and process reliability analysis, including processes beyond the purely mechanical. In other words, using the concepts of entropy generation and wear, one can quantify the reliability of a system with respect to time using a thermodynamic variable, which is the basis of this paper. Thus, in the present investigation, a unique attempt has been made to establish the correlation between entropy, wear, and reliability, which can be a useful technique in preventive maintenance.
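To make the entropy-wear link tangible, here is a heavily hedged toy sketch in the spirit of the Degradation Entropy Generation idea (the simple frictional sliding form and every number below are assumptions for illustration, not the authors' model or data):

```python
# Assumed DEG-style sliding-wear form: frictional entropy production rate
# S' = mu * F * v / T, and wear volume rate w' = B * S', with B an
# empirical degradation coefficient fitted from experiments.
mu, F, v, T = 0.3, 100.0, 0.05, 300.0   # friction coeff, load [N], speed [m/s], temp [K] (assumed)
B = 1.0e-9                               # hypothetical degradation coefficient [m^3 K / J]

s_rate = mu * F * v / T                  # entropy production rate [W/K]
wear_rate = B * s_rate                   # wear volume rate [m^3/s]
wear_limit = 1.0e-9                      # assumed allowable wear volume (1 mm^3)
hours_to_limit = wear_limit / wear_rate / 3600.0
print(s_rate, wear_rate, hours_to_limit)
```

The point of the sketch is the structure, not the numbers: once wear is tied to a monotonic thermodynamic variable, a time-to-limit (and hence a reliability estimate) follows directly.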
Vianco, Paul T.
2017-02-01
Soldering technology has made tremendous strides in the past half-century. Whether structural or electronic, all solder joints must provide a level of reliability that is required by the application. This Part 1 report examines the effects of filler metal properties and soldering process on joint reliability. Solder alloy composition must have the appropriate melting and mechanical properties that suit the product's assembly process(es) and use environment. The filler metal must also optimize solderability (wetting-and-spreading) to realize the proper joint geometry. Here, the soldering process also affects joint reliability. The choice of flux and thermal profile support the solderability performance of the molten filler metal to successfully fill the gap and complete the fillet.
Traceability of On-Machine Tool Measurement: A Review.
Mutilba, Unai; Gomez-Acedo, Eneko; Kortaberria, Gorka; Olarra, Aitor; Yagüe-Fabra, Jose A
2017-07-11
Nowadays, errors during the manufacturing process of high value components are not acceptable in driving industries such as energy and transportation. Sectors such as aerospace, automotive, shipbuilding, nuclear power, large science facilities or wind power need complex and accurate components that demand close measurements and fast feedback into their manufacturing processes. New measuring technologies are already available in machine tools, including integrated touch probes and fast interface capabilities. They provide the possibility to measure the workpiece in-machine during or after its manufacture, maintaining the original setup of the workpiece and avoiding interruption of the manufacturing process to transport the workpiece to a measuring position. However, the traceability of the measurement process on a machine tool is not yet ensured, and measurement data are still not reliable enough for process control or product validation. The scientific objective is to determine the uncertainty of a machine tool measurement and, therefore, convert it into a machine-integrated traceable measuring process. For that purpose, an error budget should consider error sources such as the machine tools, components under measurement and the interactions between both of them. This paper reviews all those uncertainty sources, focusing mainly on those related to the machine tool, whether in the process of geometric error assessment of the machine or in the technology employed to probe the measurand.
Johnson Space Center's Risk and Reliability Analysis Group 2008 Annual Report
NASA Technical Reports Server (NTRS)
Valentine, Mark; Boyer, Roger; Cross, Bob; Hamlin, Teri; Roelant, Henk; Stewart, Mike; Bigler, Mark; Winter, Scott; Reistle, Bruce; Heydorn, Dick
2009-01-01
The Johnson Space Center (JSC) Safety & Mission Assurance (S&MA) Directorate's Risk and Reliability Analysis Group provides both mathematical and engineering analysis expertise in the areas of Probabilistic Risk Assessment (PRA), Reliability and Maintainability (R&M) analysis, and data collection and analysis. The fundamental goal of this group is to provide National Aeronautics and Space Administration (NASA) decision makers with the necessary information to make informed decisions when evaluating personnel, flight hardware, and public safety concerns associated with current operating systems as well as with any future systems. The Analysis Group includes a staff of statistical and reliability experts with valuable backgrounds in the statistical, reliability, and engineering fields. This group includes JSC S&MA Analysis Branch personnel as well as S&MA support services contractors, such as Science Applications International Corporation (SAIC) and SoHaR. The Analysis Group's experience base includes nuclear power (both commercial and navy), manufacturing, Department of Defense, chemical, and shipping industries, as well as significant aerospace experience specifically in the Shuttle, International Space Station (ISS), and Constellation Programs. The Analysis Group partners with project and program offices, other NASA centers, NASA contractors, and universities to provide additional resources or information to the group when performing various analysis tasks. The JSC S&MA Analysis Group is recognized as a leader in risk and reliability analysis within the NASA community. Therefore, the Analysis Group is in high demand to help the Space Shuttle Program (SSP) continue to fly safely, assist in designing the next generation spacecraft for the Constellation Program (CxP), and promote advanced analytical techniques. The Analysis Section's tasks include teaching classes and instituting personnel qualification processes to enhance the professional abilities of our analysts as well as performing major probabilistic assessments used to support flight rationale and help establish program requirements. During 2008, the Analysis Group performed more than 70 assessments. Although all these assessments were important, some were instrumental in the decision-making processes for the Shuttle and Constellation Programs. Two of the more significant tasks were the Space Transportation System (STS)-122 Low Level Cutoff PRA for the SSP and the Orion Pad Abort One (PA-1) PRA for the CxP. These two activities, along with the numerous other tasks the Analysis Group performed in 2008, are summarized in this report. This report also highlights several ongoing and upcoming efforts to provide crucial statistical and probabilistic assessments, such as the Extravehicular Activity (EVA) PRA for the Hubble Space Telescope service mission and the first fully integrated PRAs for the CxP's Lunar Sortie and ISS missions.
Experiences in managing the Prometheus Project
NASA Technical Reports Server (NTRS)
Lehman, David H.; Clark, Karla B.; Cook, Beverly A.; Gavit, Sarah A.; Kayali, Sammy A.; McKinney, John C.; Milkovich, David C.; Reh, Kim R.; Taylor, Randall L.; Casani, John R.
2006-01-01
Congress authorized NASA's Prometheus Project in February 2003, with the first Prometheus mission slated to explore the icy moons of Jupiter. The Project had two major objectives: (1) to develop a nuclear reactor that would provide unprecedented levels of power and show that it could be processed safely and operated reliably in space for long-duration, deep-space exploration and (2) to explore the three icy moons of Jupiter - Callisto, Ganymede, and Europa - and return science data that would meet the scientific goals as set forth in the Decadal Survey Report of the National Academy of Sciences. Early in Project planning, it was determined that the development of the Prometheus nuclear powered Spaceship would be complex and require the intellectual knowledge residing at numerous organizations across the country. In addition, because of the complex nature of the Project and the multiple partners, approaches beyond those successfully used to manage a typical JPL project would be needed. This paper will describe the key experiences in managing Prometheus that should prove useful for future projects of similar scope and magnitude.
Case conceptualization research in cognitive behavior therapy: A state of the science review.
Easden, Michael H; Kazantzis, Nikolaos
2018-03-01
Prominent models of cognitive behavior therapy (CBT) assert that case conceptualization is crucial for tailoring interventions to adequately address the needs of the individual client. We aimed to review the research on case conceptualization in CBT. We conducted a systematic search of PsycINFO, MEDLINE, Psychology and Behavioral Science Collection, and CINAHL databases to February 2016. A total of 24 studies that met inclusion criteria were identified. It was notable that studies (a) focused on the assessment function of case conceptualization, (b) employed diverse methodologies, and, overall, (c) there remains a paucity of studies examining the in-session process of using case conceptualization or examining relations with outcome. Results from the existing studies suggest that experienced therapists can reliably construct some elements of case conceptualizations, but importance for the efficacy of case conceptualization in CBT has yet to be demonstrated. Research that involves direct observation of therapist competence in case conceptualization as a predictor of CBT outcomes is recommended as a focus for future hypothesis testing. © 2017 Wiley Periodicals, Inc.
The evolution and provision of expert knowledge and its effective utilisation
NASA Astrophysics Data System (ADS)
Sammonds, Peter
2017-04-01
The specific aims of the Increasing Resilience to Natural Hazards in China programme are (i) to improve hazard forecasting, risk mitigation and preparedness based upon reliable knowledge of the fundamental processes involved and underpinned by basic science and (ii) to improve the uptake of and responses to scientific advice, by developing risk-based approaches to natural hazards in collaboration with the communities at risk. One of the programme's principal goals is to integrate natural and social science research to increase the benefits for those affected by natural hazards. To that end a co-productive approach to research is expected, involving a framework for sharing knowledge and values between natural and social scientists and consultation with policy makers, civil society and other stakeholders. This paper explores knowledge relationships and reflective learning across disciplines. There is commonly a disjunction between the evolution and provision of expert knowledge and its effective utilisation. Building on experience as Strategic Advisor to the Increasing Resilience to Natural Hazards programme, this paper addresses the research needed to assess how scientific knowledge and risk reduction strategies can be most effectively developed and communicated.
NASA Astrophysics Data System (ADS)
Li, Lin; Zeng, Li; Lin, Zi-Jing; Cazzell, Mary; Liu, Hanli
2015-05-01
Test-retest reliability of neuroimaging measurements is an important concern in the investigation of cognitive functions in the human brain. To date, intraclass correlation coefficients (ICCs), originally used in inter-rater reliability studies in the behavioral sciences, have become commonly used metrics in reliability studies on neuroimaging and functional near-infrared spectroscopy (fNIRS). However, as there are six popular forms of ICC, an adequate and comprehensive understanding of ICCs affects how one appropriately selects, uses, and interprets them in a reliability study. We first offer a brief review and tutorial on the statistical rationale of ICCs, including their underlying analysis of variance models and technical definitions, in the context of assessing intertest reliability. Second, we provide general guidelines on the selection and interpretation of ICCs. Third, we illustrate the proposed approach by using an actual research study to assess intertest reliability of fNIRS-based, volumetric diffuse optical tomography of brain activities stimulated by a risk decision-making protocol. Last, special issues that may arise in reliability assessment using ICCs are discussed and solutions are suggested.
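To make one of the six forms concrete, here is a minimal sketch (a common textbook formulation, not the tutorial's own code) of ICC(2,1), the two-way random-effects, absolute-agreement, single-measurement form, computed from ANOVA mean squares of a subjects-by-sessions score matrix:

```python
import numpy as np

def icc2_1(Y: np.ndarray) -> float:
    """ICC(2,1) from a (n_subjects, k_sessions) matrix of scores."""
    n, k = Y.shape
    grand = Y.mean()
    MSR = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-subjects
    MSC = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-sessions
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + grand
    MSE = (resid ** 2).sum() / ((n - 1) * (k - 1))              # residual
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

# Toy test-retest matrix (5 subjects x 2 sessions): high agreement (~0.97).
Y = np.array([[9., 10.], [8., 8.], [6., 7.], [4., 5.], [2., 2.]])
print(round(icc2_1(Y), 3))
```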
Science to support the understanding of Ohio's water resources, 2014-15
Shaffer, Kimberly; Kula, Stephanie P.
2014-01-01
The U.S. Geological Survey (USGS) works in cooperation with local, State, and other Federal agencies, as well as universities, to furnish decision makers, policy makers, USGS scientists, and the general public with reliable scientific information and tools to assist them in management, stewardship, and use of Ohio’s natural resources. The diversity of scientific expertise among USGS personnel enables them to carry out large- and small-scale multidisciplinary studies. The USGS is unique among government organizations because it has neither regulatory nor developmental authority—its sole product is impartial, credible, relevant, and timely scientific information, equally accessible and available to everyone. The USGS Ohio Water Science Center provides reliable hydrologic and water-related ecological information to aid in the understanding of the use and management of the Nation’s water resources, in general, and Ohio’s water resources, in particular. This fact sheet provides an overview of current (2014) or recently completed USGS studies and data activities pertaining to water resources in Ohio. More information regarding projects of the USGS Ohio Water Science Center is available at http://oh.water.usgs.gov/.
Supporting students' learning in the domain of computer science
NASA Astrophysics Data System (ADS)
Gasparinatou, Alexandra; Grigoriadou, Maria
2011-03-01
Previous studies have shown that students with low knowledge understand and learn better from more cohesive texts, whereas high-knowledge students have been shown to learn better from texts of lower cohesion. This study examines whether high-knowledge readers in computer science benefit from a text of low cohesion. Undergraduate students (n = 65) read one of four versions of a text concerning Local Network Topologies, orthogonally varying local and global cohesion. Participants' comprehension was examined through free-recall measure, text-based, bridging-inference, elaborative-inference, problem-solving questions and a sorting task. The results indicated that high-knowledge readers benefited from the low-cohesion text. The interaction of text cohesion and knowledge was reliable for the sorting activity, for elaborative-inference and for problem-solving questions. Although high-knowledge readers performed better in text-based and in bridging-inference questions with the low-cohesion text, the interaction of text cohesion and knowledge was not reliable. The results suggest a more complex view of when and for whom textual cohesion affects comprehension and consequently learning in computer science.
NASA Astrophysics Data System (ADS)
Penner, Joyce E.; Andronova, Natalia; Oehmke, Robert C.; Brown, Jonathan; Stout, Quentin F.; Jablonowski, Christiane; van Leer, Bram; Powell, Kenneth G.; Herzog, Michael
2007-07-01
One of the most important advances needed in global climate models is the development of atmospheric General Circulation Models (GCMs) that can reliably treat convection. Such GCMs require high resolution in local convectively active regions, both in the horizontal and vertical directions. During previous research we have developed an Adaptive Mesh Refinement (AMR) dynamical core that can adapt its grid resolution horizontally. Our approach utilizes a finite volume numerical representation of the partial differential equations with floating Lagrangian vertical coordinates and requires resolving dynamical processes on small spatial scales. For the latter it uses a newly developed general-purpose library, which facilitates 3D block-structured AMR on spherical grids. The library manages neighbor information as the blocks adapt, and handles the parallel communication and load balancing, freeing the user to concentrate on the scientific modeling aspects of their code. In particular, this library defines and manages adaptive blocks on the sphere, provides user interfaces for interpolation routines and supports the communication and load-balancing aspects for parallel applications. We have successfully tested the library in a 2-D (longitude-latitude) implementation. During the past year, we have extended the library to treat adaptive mesh refinement in the vertical direction. Preliminary results are discussed. This research project is characterized by an interdisciplinary approach involving atmospheric science, computer science and mathematical/numerical aspects. The work is done in close collaboration between the Atmospheric Science, Computer Science and Aerospace Engineering Departments at the University of Michigan and NOAA GFDL.
NASA Astrophysics Data System (ADS)
Nguyen, L.; Chee, T.; Minnis, P.; Spangenberg, D.; Ayers, J. K.; Palikonda, R.; Vakhnin, A.; Dubois, R.; Murphy, P. R.
2014-12-01
The processing, storage and dissemination of satellite cloud and radiation products produced at NASA Langley Research Center are key activities for the Climate Science Branch. A constellation of systems operates in sync to accomplish these goals. Because of the complexity involved with operating such intricate systems, there are both high failure rates and high costs for hardware and system maintenance. Cloud computing has the potential to ameliorate cost and complexity issues. Over time, the cloud computing model has evolved and hybrid systems comprising off-site as well as on-site resources are now common. Towards our mission of providing the highest quality research products to the widest audience, we have explored the use of the Amazon Web Services (AWS) Cloud and Storage and present a case study of our results and efforts. This project builds upon NASA Langley Cloud and Radiation Group's experience with operating large and complex computing infrastructures in a reliable and cost effective manner to explore novel ways to leverage cloud computing resources in the atmospheric science environment. Our case study presents the project requirements and then examines the fit of AWS with the LaRC computing model. We also discuss the evaluation metrics, feasibility, and outcomes and close the case study with the lessons we learned that would apply to others interested in exploring the implementation of the AWS system in their own atmospheric science computing environments.
NASA Astrophysics Data System (ADS)
Ramamurthy, M. K.
2016-12-01
Increasingly, the conduct of science requires close international collaborations to share data, information, knowledge, expertise, and other resources. This is particularly true in the geosciences, where the highly connected nature of the Earth system and the need to understand global environmental processes have heightened the importance of scientific partnerships. As geoscience studies become a team effort involving networked scientists and data providers, it is crucial that there is open and reliable access to earth system data of all types, software, tools, models, and other assets. That environment demands close attention to security-related matters, including the creation of trustworthy cyberinfrastructure to facilitate the efficient use of available resources and support the conduct of science. Unidata and EarthCube, both of which are NSF-funded and community-driven programs, recognize the importance of collaborations and the value of networked communities. Unidata, a cornerstone cyberinfrastructure facility for the geosciences, includes users in nearly 180 countries. The EarthCube initiative is aimed at transforming the conduct of geosciences research by creating a well-connected and facile environment for sharing data in an open, transparent, and inclusive manner, and at accelerating our ability to understand and predict the Earth system. We will present the Unidata and EarthCube community perspectives on the approaches to balancing an environment that promotes open and collaborative eScience with the needs for security and communication, including what works, what is needed, the challenges, and opportunities to advance science.
System design of the Pioneer Venus spacecraft. Volume 3: Systems analysis
NASA Technical Reports Server (NTRS)
Fisher, J. N.
1973-01-01
The mission, systems, operations, ground systems, and reliability analysis of the Thor/Delta baseline design used for the Pioneer Space Probe are discussed. Tradeoff decisions concerning spin axis orientation, bus antenna design, communication system design, probe descent, and reduced science payload are analyzed. The reliability analysis is made for the probe bus mission, large probe mission, and small probe mission. Detailed mission sequences were established to identify critical areas and provide phasing of critical operation.
NASA Astrophysics Data System (ADS)
Lagron, C. S.; Ray, A. J.; Barsugli, J. J.
2016-12-01
The Federal Energy Regulatory Commission (FERC) issues licenses for non-federal hydropower projects through its Integrated Licensing Process (ILP). Through this multi-stage, multi-year decision process, NOAA National Marine Fisheries Service (NMFS) can request studies needed to prescribe license conditions to mitigate dams' effects on trust resources, e.g. fish passages and flow requirements. NMFS must understand the combined effects of hydropower projects and climate change to fulfill its mandates to maintain fisheries and protected species. Although 30-50 year hydropower licenses and renewals are within the time frame of anticipated risks from changing climate, FERC has consistently rejected NMFS' climate study requests, stating climate science is "too uncertain," and therefore not actionable. The ILP is an opportunity to incorporate climate change risks in this decision process, and to make decisions now to avoid failures later in the system regarding both hydropower reliability (the concern of FERC and the applicant) and ecosystem health (NMFS's concern). NMFS has partnered with climate scientists at the ESRL Physical Sciences Division to co-produce a climate study request for the relicensing of the Hiram Project on the Saco River in Southern Maine. The Saco hosts Atlantic salmon (Salmo salar) runs which are not currently self-sustaining. This presentation will describe basin-to-basin variability in both historic river analyses (Hydro-Climate Data Network, HCDN) and projected hydrologic responses of New England rivers to climate forcings using the statewide Precipitation-Runoff Modeling System (PRMS), which together demonstrate the need to develop Saco-specific watershed models. Furthermore, although methods for projecting fishery-relevant metrics (heat waves, flood annual exceedance probabilities) have been proven in nearby basins, this modeling has not been conducted at fishery-relevant thresholds. Climate study requests are an example of bridging between science and applications. We argue that the current state of climate science provides actionable information on climate risks in the region, and will articulate the need and required elements for a Saco-specific climate study request.
The planetary spatial data infrastructure for the OSIRIS-REx mission
NASA Astrophysics Data System (ADS)
DellaGiustina, D. N.; Selznick, S.; Nolan, M. C.; Enos, H. L.; Lauretta, D. S.
2017-12-01
The primary objective of the Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer (OSIRIS-REx) mission is to return a pristine sample of carbonaceous material from primitive asteroid (101955) Bennu. Understanding the geospatial context of Bennu is critical to choosing a sample site and to linking the nature of the sample to the global properties of Bennu and the broader asteroid population. We established a planetary spatial data infrastructure (PSDI) to support the primary objective of OSIRIS-REx. OSIRIS-REx is unique among planetary missions in that all remote sensing is performed to support the sample return objective. Prior to sampling, OSIRIS-REx will survey Bennu for nearly two years to select and document the most valuable primary and backup sample sites. During this period, the mission will combine coordinated observations from five science instruments into four thematic maps: deliverability, safety, sampleability, and scientific value. The deliverability map assesses the probability that the flight dynamics team can deliver the spacecraft to the desired location. The safety map indicates the probability that physical hazards are present at the sample site. The sampleability map quantifies the probability that a sample can be successfully collected from the surface. Finally, the scientific value map shows the probability that the collected sample contains organics and volatiles and also places the sample site in a definitive geological context relative to Bennu's history. The OSIRIS-REx Science Processing and Operations Center (SPOC) serves as the operational PSDI for the mission. The SPOC is tasked with intake of all data from the spacecraft and other ground sources and assimilating these data into a single comprehensive system for processing and presentation. The SPOC centralizes all geographic data of Bennu in a relational database and ensures that standardization and provenance are maintained throughout proximity operations. The SPOC is a live system that handles inputs from spacecraft and science instrument telemetry, and science data producers. It includes multiple levels of validation, both automated and manual, to process all data in a robust and reliable manner and eventually deliver it to the NASA Planetary Data System for archive.
DES Science Portal: Computing Photometric Redshifts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gschwend, Julia
An important challenge facing photometric surveys for cosmological purposes, such as the Dark Energy Survey (DES), is the need to produce reliable photometric redshifts (photo-z). The choice of adequate algorithms and configurations and the maintenance of an up-to-date spectroscopic database to build training sets, for example, are challenging tasks when dealing with large amounts of data that are regularly updated and constantly growing. In this paper, we present the first of a series of tools developed by DES, provided as part of the DES Science Portal, an integrated web-based data portal developed to facilitate the scientific analysis of the data, while ensuring the reproducibility of the analysis. We present the DES Science Portal photometric redshift tools, starting from the creation of a spectroscopic sample to training the neural network photo-z codes, to the final estimation of photo-zs for a large photometric catalog. We illustrate this operation by calculating well calibrated photo-zs for a galaxy sample extracted from the DES first year (Y1A1) data. The series of processes mentioned above is run entirely within the Portal environment, which automatically produces validation metrics, and maintains the provenance between the different steps. This system allows us to fine-tune the many steps involved in the process of calculating photo-zs, making sure that we do not lose the information on the configurations and inputs of the previous processes. By matching the DES Y1A1 photometry to a spectroscopic sample, we define different training sets that we use to feed the photo-z algorithms already installed at the Portal. Finally, we validate the results under several conditions, including the case of a sample limited to i<22.5 with the color properties close to the full DES Y1A1 photometric data. This way we compare the performance of multiple methods and training configurations. The infrastructure presented here is an efficient way to test several methods of calculating photo-zs and use them to create different catalogs for portal science workflows.
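The general train-and-validate loop described above can be illustrated with a toy sketch (fabricated data and an assumed scikit-learn regressor; this is not the Portal's photo-z code): fit a model from broadband magnitudes to spectroscopic redshifts, then score it with a standard photo-z dispersion metric such as sigma_NMAD.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Fabricated stand-in for a matched photometric/spectroscopic sample:
# five broadband magnitudes per galaxy and a synthetic "true" redshift.
rng = np.random.default_rng(1)
mags = rng.normal(22.0, 1.0, size=(5000, 5))
z_spec = np.clip(0.1 * mags.sum(axis=1) - 10.0 + rng.normal(0, 0.05, 5000), 0, None)

X_train, X_test, y_train, y_test = train_test_split(mags, z_spec, test_size=0.3, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)
z_phot = model.predict(X_test)

# sigma_NMAD: a widely used robust photo-z scatter metric.
sigma_nmad = 1.48 * np.median(np.abs((z_phot - y_test) / (1 + y_test)))
print(round(sigma_nmad, 4))
```

The Portal's contribution, per the abstract, is less the regressor itself than the provenance tracking around exactly these steps: training-set construction, configuration, and validation metrics.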
Wafer level reliability for high-performance VLSI design
NASA Technical Reports Server (NTRS)
Root, Bryan J.; Seefeldt, James D.
1987-01-01
As very large scale integration architectures require higher package density, the reliability of these devices has approached a critical level. Previous processing techniques allowed a large window for varying reliability. However, as scaling and higher current densities push reliability to its limit, tighter control and instant feedback become critical. Several test structures developed to monitor reliability at the wafer level are described. For example, a test structure was developed to monitor metal integrity in seconds as opposed to weeks or months for conventional testing. Another structure monitors mobile ion contamination at critical steps in the process. Thus reliability jeopardy can be assessed during fabrication, preventing defective devices from ever being placed in the field. Most importantly, the reliability can be assessed on each wafer as opposed to an occasional sample.
Space Shuttle Propulsion System Reliability
NASA Technical Reports Server (NTRS)
Welzyn, Ken; VanHooser, Katherine; Moore, Dennis; Wood, David
2011-01-01
This session includes the following presentations: (1) External Tank (ET) System Reliability and Lessons, (2) Space Shuttle Main Engine (SSME), Reliability Validated by a Million Seconds of Testing, (3) Reusable Solid Rocket Motor (RSRM) Reliability via Process Control, and (4) Solid Rocket Booster (SRB) Reliability via Acceptance and Testing.
An Assessment of Science Teachers' Perceptions of Secondary School Environments in Taiwan
NASA Astrophysics Data System (ADS)
Huang, Shwu-Yong L.
2006-01-01
This study investigates the psychosocial environments of secondary schools from science teachers' perspectives, as well as associated variables. Using a sample of 900 secondary science teachers from 52 schools in Taiwan, the results attest to the validity and reliability of the instrument, the Science Teacher School Environment Questionnaire, and its ability to differentiate among schools. The descriptive results show that a majority of science teachers positively perceived their school environments. The teachers reported high collegiality, good teacher-student relations, effective principal leadership, strong professional interest, and low work pressure—but also low staff freedom. Multiple regression results further indicate that policy-relevant variables like school level, school location, and teachers' intentions to stay in teaching were associated with science teachers' perceptions of their school environments. Qualitative data analysis based on interviews of 34 science teachers confirmed and enriched these findings.
Moore, Amy Lawson; Miller, Terissa M
2018-01-01
The purpose of the current study is to evaluate the validity and reliability of the revised Gibson Test of Cognitive Skills, a computer-based battery of tests measuring short-term memory, long-term memory, processing speed, logic and reasoning, visual processing, as well as auditory processing and word attack skills. This study included 2,737 participants aged 5-85 years. A series of studies was conducted to examine the validity and reliability using the test performance of the entire norming group and several subgroups. The evaluation of the technical properties of the test battery included content validation by subject matter experts, item analysis and coefficient alpha, test-retest reliability, split-half reliability, and analysis of concurrent validity with the Woodcock Johnson III Tests of Cognitive Abilities and Tests of Achievement. Results indicated strong sources of evidence of validity and reliability for the test, including internal consistency reliability coefficients ranging from 0.87 to 0.98, test-retest reliability coefficients ranging from 0.69 to 0.91, split-half reliability coefficients ranging from 0.87 to 0.91, and concurrent validity coefficients ranging from 0.53 to 0.93. The Gibson Test of Cognitive Skills-2 is a reliable and valid tool for assessing cognition in the general population across the lifespan.
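Two of the coefficients reported above can be sketched in a few lines (a hedged illustration assuming a scored examinee-by-item matrix; this is not the study's scoring code): test-retest reliability is the Pearson correlation between total scores on two administrations, and split-half reliability correlates odd- and even-item half-tests and then applies the Spearman-Brown correction to estimate full-test reliability.

```python
import numpy as np

def test_retest(totals_t1: np.ndarray, totals_t2: np.ndarray) -> float:
    """Pearson r between total scores on two administrations."""
    return np.corrcoef(totals_t1, totals_t2)[0, 1]

def split_half(scores: np.ndarray) -> float:
    """Odd-even split-half reliability with Spearman-Brown correction.
    scores: (n_examinees, n_items) matrix of item scores."""
    odd = scores[:, 0::2].sum(axis=1)    # odd-numbered items
    even = scores[:, 1::2].sum(axis=1)   # even-numbered items
    r = np.corrcoef(odd, even)[0, 1]     # correlation of the two half-tests
    return 2 * r / (1 + r)               # step up to full test length
```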
NASA Astrophysics Data System (ADS)
Żurek, Józef; Kaleta, Ryszard; Zieja, Mariusz
2016-06-01
The forecasting of reliability and life of aeronautical hardware requires recognition of the many and various destructive processes that deteriorate its health/maintenance status. The aging of technical components of aircraft as an armament system proves of outstanding significance to the reliability and safety of the whole system. The aging process is usually induced by many and various factors, such as mechanical, biological, climatic, or chemical ones. Aging is an irreversible process and considerably affects (i.e., reduces) the reliability and lifetime of aeronautical equipment. Application of the characteristic function of the aging process is suggested to predict the reliability and lifetime of aeronautical hardware. Increments in the values of diagnostic parameters are introduced and, using the characteristic function and after some rearrangements, a partial differential equation is formulated. An analytical expression for the characteristic function of the aging process is a solution to this equation. With the inverse transformation applied, the density function of the aging of aeronautical hardware is found. Having found the density function, one can determine the aeronautical equipment's reliability and lifetime. In-service data or life-test data are used to attain this goal. Coefficients in this relationship are found using the likelihood function.
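The final step of such an approach can be sketched concretely (an illustrative assumption, not the authors' derivation): if the density of the diagnostic parameter u(t) comes out Gaussian with mean and variance growing linearly in time, as in simple Fokker-Planck-type solutions, then reliability at time t is the probability that u(t) remains below its limit value.

```python
# Hedged sketch: R(t) = P(u(t) < u_limit) for an assumed Gaussian density
# with drift b*t and variance a*t. Parameter values are invented.
from math import erf, sqrt

def reliability(t: float, b: float = 0.02, a: float = 0.001,
                u_limit: float = 1.0) -> float:
    mean, var = b * t, a * t
    z = (u_limit - mean) / sqrt(2.0 * var)
    return 0.5 * (1.0 + erf(z))   # Gaussian CDF evaluated at the limit state

print(reliability(10.0))   # ~1.00: parameter far below the limit
print(reliability(40.0))   # ~0.84: reliability declines as the part ages
```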
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
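The modularization idea described in this abstract can be illustrated with a minimal sketch (a generic series/parallel combination of module reliabilities, not RML or REST itself): compute each module's reliability separately, then combine the modules according to the system structure.

```python
# Combine independent module reliabilities (probabilities of success).
def series(*r: float) -> float:
    """Series structure: the system fails if any module fails."""
    out = 1.0
    for x in r:
        out *= x
    return out

def parallel(*r: float) -> float:
    """Parallel (redundant) structure: fails only if all modules fail."""
    fail = 1.0
    for x in r:
        fail *= (1.0 - x)
    return 1.0 - fail

# Example: two redundant processors in series with a single bus module.
print(series(parallel(0.95, 0.95), 0.99))   # ~0.9875
```

Message passing and failure-mode simulation, as the abstract explains, extend this static picture to cases where one module's failure changes the behavior of others.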
76 FR 16277 - System Restoration Reliability Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-23
... system restoration process. The Commission also approves the NERC's proposal to retire four existing EOP... prepare personnel to enable effective coordination of the system restoration process. The Commission also..., through the Reliability Standard development process, a modification to EOP-005-1 that identifies time...
Mohammadsalehi, Narges; Mohammadbeigi, Abolfazl; Jadidi, Rahmatollah; Anbari, Zohreh; Ghaderi, Ebrahim; Akbari, Mojtaba
2015-09-01
Reliability and validity are the key concepts in measurement processes. The Young internet addiction test (YIAT) is regarded as a valid and reliable questionnaire in English-speaking countries for diagnosis of Internet-related behavior disorders. This study aimed at validating the Persian version of the YIAT in Iranian society. A pilot and a cross-sectional study were conducted on 28 and 254 students of Qom University of Medical Sciences, respectively, in order to validate the Persian version of the YIAT. Forward and backward translations were conducted to develop a Persian version of the scale. Reliability was measured by test-retest, Cronbach's alpha and the intraclass correlation coefficient (ICC). Face, content and construct validity were approved by the importance score index, content validity ratio (CVR), content validity index (CVI), correlation matrix and factor analysis. The SPSS software was used for data analysis. The Cronbach's alpha was 0.917 (CI 95%; 0.901 - 0.931). The average scale-level CVI was calculated to be 0.74; the CVI for each item was higher than 0.83 and the average CVI was equal to 0.89. Factor analysis extracted three factors, including personal activities disorder (PAD), emotional and mood disorder (EMD) and social activities disorder (SAD), explaining more than 55.8% of total variance. The ICCs for the factors of the Persian version of the Young questionnaire were r = 0.884 (CI 95%; 0.861 - 0.904) for PAD, r = 0.766 (CI 95%; 0.718 - 0.808) for EMD, and r = 0.745 (CI 95%; 0.686 - 0.795) for SAD. Our study showed that the Persian version of the YIAT is suitable for use with an Iranian population. The reliability of the instrument was very good. Moreover, the validity of the Persian translated version of the scale was sufficient. In addition, the reliability and validity of the three extracted factors of the YIAT were evaluated and were acceptable.
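For readers unfamiliar with the content-validity indices used above, here is a minimal sketch of the standard Lawshe-style calculations (assumed panel counts, not this study's data):

```python
def cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe's content validity ratio: (n_e - N/2) / (N/2), in [-1, 1],
    where n_e experts rate the item 'essential' out of N experts."""
    return (n_essential - n_experts / 2.0) / (n_experts / 2.0)

def item_cvi(n_relevant: int, n_experts: int) -> float:
    """Item-level CVI: proportion of experts rating the item relevant."""
    return n_relevant / n_experts

print(cvr(9, 10), item_cvi(9, 10))   # 0.8 and 0.9 for a hypothetical panel
```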
Peer-review for selection of oral presentations for conferences: Are we reliable?
Deveugele, Myriam; Silverman, Jonathan
2017-11-01
Although peer review for journal submissions, grant applications and conference submissions has been called 'a cornerstone of science', and even 'the gold standard for evaluating scientific merit', publications on this topic remain scarce. Research that has investigated peer review reveals several issues and criticisms concerning bias, poor quality review, unreliability and inefficiency. The most important weakness of the peer review process is the inconsistency between reviewers, leading to inadequate inter-rater reliability. We report the reliability of ratings for a large international conference and suggest possible solutions to overcome the problem. In 2016, during the International Conference on Communication in Healthcare organized by EACH: International Association for Communication in Healthcare, a calibration exercise was proposed and feedback was reported back to the participants of the exercise. Most abstracts, as well as most peer reviewers, receive and give scores around the median. Contrary to the general assumption that there are high and low scorers, in this group only 3 peer reviewers could be identified with a high mean score, while 7 had a low mean score. Only 2 reviewers gave only high ratings (4 and 5). Of the eight abstracts included in this exercise, only one abstract received a high mean score and one a low mean score. Nevertheless, both of these abstracts received both low and high scores; all other abstracts received all possible scores. Peer review of submissions for conferences is, in accordance with the literature, unreliable. New and creative methods will be needed to give the participants of a conference what they really deserve: a more reliable selection of the best abstracts. Using more raters per abstract improves inter-rater reliability; training of reviewers could be helpful; providing feedback to reviewers can lead to less inter-rater disagreement; and fostering negative peer review (rejecting inappropriate submissions) rather than positive peer review (accepting only the best) could be fruitful for selecting abstracts for conferences. Copyright © 2017 Elsevier B.V. All rights reserved.
Multiple mini-interviews: same concept, different approaches.
Knorr, Mirjana; Hissbach, Johanna
2014-12-01
Increasing numbers of educational institutions in the medical field choose to replace their conventional admissions interviews with a multiple mini-interview (MMI) format because the latter has superior reliability values and reduces interviewer bias. As the MMI format can be adapted to the conditions of each institution, the question of under which circumstances an MMI is most expedient remains unresolved. This article systematically reviews the existing MMI literature to identify the aspects of MMI design that have impact on the reliability, validity and cost-efficiency of the format. Three electronic databases (OVID, PubMed, Web of Science) were searched for any publications in which MMIs and related approaches were discussed. Sixty-six publications were included in the analysis. Forty studies reported reliability values. Generally, raising the number of stations has more impact on reliability than raising the number of raters per station. Other factors with positive influence include the exclusion of stations that are too easy, and the use of normative anchored rating scales or skills-based rater training. Data on criterion-related validities and analyses of dimensionality were found in 31 studies. Irrespective of design differences, the relationship between MMI results and academic measures is small to zero. The McMaster University MMI predicts in-programme and licensing examination performance. Construct validity analyses are mostly exploratory and their results are inconclusive. Seven publications gave information on required resources or provided suggestions on how to save costs. The most relevant cost factors that are additional to those of conventional interviews are the costs of station development and actor payments. The MMI literature provides useful recommendations for reliable and cost-efficient MMI designs, but some important aspects have not yet been fully explored. More theory-driven research is needed concerning dimensionality and construct validity, the predictive validity of MMIs other than those of McMaster University, the comparison of station types, and a cost-efficient station development process. © 2014 John Wiley & Sons Ltd.
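The stations-versus-raters finding can be made concrete with a small decision-study (D-study) calculation in the spirit of generalizability theory. The variance components below are invented for illustration; this is not any reviewed study's data, and the design is a simplified persons-by-stations model with raters nested in stations.

```python
def g_coefficient(var_person: float, var_station: float, var_resid: float,
                  n_stations: int, n_raters: int) -> float:
    """Relative G coefficient: station variance averages out over stations,
    residual variance over stations x raters."""
    return var_person / (var_person
                         + var_station / n_stations
                         + var_resid / (n_stations * n_raters))

var_p, var_s, var_e = 0.30, 0.20, 0.50   # assumed variance components
print(round(g_coefficient(var_p, var_s, var_e, 8, 1), 3))  # ~0.774: 8 stations, 1 rater
print(round(g_coefficient(var_p, var_s, var_e, 4, 2), 3))  # ~0.727: 4 stations, 2 raters
```

With the same total of eight ratings, spreading them over more stations beats doubling raters per station, because only additional stations shrink the station-specific variance term.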
Science and intuition: do both have a place in clinical decision making?
Pearson, Helen
Intuition is widely used in clinical decision making yet its use is underestimated compared to scientific decision-making methods. Information processing is used within scientific decision making and is methodical and analytical, whereas intuition relies more on a practitioner's perception. Intuition is an unconscious process and may be referred to as a 'sixth sense', 'hunch' or 'gut feeling'. It is not underpinned by valid and reliable measures. Expert health professionals use a rapid, automatic process to recognise familiar problems instantly. Intuition could therefore involve pattern recognition, where experts draw on experiences, so could be perceived as a cognitive skill rather than a perception or knowing without knowing how. The NHS places great importance on evidence-based practice but intuition is seemingly becoming an acceptable way of thinking and knowing in clinical decision making. Recognising nursing as an art allows intuition to be used and the environment or situation to be interpreted to help inform decision making. Intuition can be used in conjunction with evidence-based practice and to achieve good outcomes and deserves to be acknowledged within clinical practice.
Unreliable evoked responses in autism
Dinstein, Ilan; Heeger, David J.; Lorenzi, Lauren; Minshew, Nancy J.; Malach, Rafael; Behrmann, Marlene
2012-01-01
Autism has been described as a disorder of general neural processing, but the particular processing characteristics that might be abnormal in autism have mostly remained obscure. Here, we present evidence of one such characteristic: poor evoked response reliability. We compared cortical response amplitude and reliability (consistency across trials) in visual, auditory, and somatosensory cortices of high-functioning individuals with autism and controls. Mean response amplitudes were statistically indistinguishable across groups, yet trial-by-trial response reliability was significantly weaker in autism, yielding smaller signal-to-noise ratios in all sensory systems. Response reliability differences were evident only in evoked cortical responses and not in ongoing resting-state activity. These findings reveal that abnormally unreliable cortical responses, even to elementary non-social sensory stimuli, may represent a fundamental physiological alteration of neural processing in autism. The results motivate a critical expansion of autism research to determine whether (and how) basic neural processing properties such as reliability, plasticity, and adaptation/habituation are altered in autism.
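A trial-by-trial signal-to-noise measure of the kind described can be sketched as follows (assumed definitions for illustration, not the authors' exact computation): the signal is the mean evoked response across trials, and the noise is the trial-to-trial variability around that mean.

```python
import numpy as np

def response_snr(trials: np.ndarray) -> float:
    """trials: (n_trials, n_timepoints) evoked responses for one condition.
    Returns evoked-template amplitude relative to trial-to-trial noise."""
    template = trials.mean(axis=0)          # mean evoked response (signal)
    noise = trials - template               # per-trial deviations (noise)
    return template.std() / noise.std()

# Weaker trial-by-trial reliability lowers this ratio even when the mean
# response amplitude (the template) is unchanged, as reported above.
```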
NASA Astrophysics Data System (ADS)
Rusyati, Lilit; Firman, Harry
2017-05-01
This research was motivated by the importance of multiple-choice questions that capture the elements and sub-elements of critical thinking, and by the implementation of computer-based testing. The method used was descriptive research profiling the validation of a science virtual test to measure students' critical thinking in junior high school. The participants were junior high school students in the 8th grade (14 years old), with science teachers and experts serving as validators. The instruments used to capture the necessary data were an expert judgment sheet, a legibility test sheet, and the science virtual test package in multiple-choice form with four possible answers. There were four steps in validating the science virtual test to measure students' critical thinking on the theme of "Living Things and Environmental Sustainability" in 7th grade junior high school: analysis of core competence and basic competence based on the 2013 curriculum, expert judgment, a legibility test, and trial tests (limited and large-scale). Based on the trial tests, items were classified as accepted, accepted with revision, or rejected. The reliability of the test is α = 0.747, categorized as 'high', meaning the test instrument is reliable and highly consistent. The validity coefficient of Rxy = 0.63 is likewise categorized as 'high' according to the standard interpretation of the correlation value Rxy.
Rapid Prototyping of Nanofluidic Slits in a Silicone Bilayer
Kole, Thomas P.; Liao, Kuo-Tang; Schiffels, Daniel; Ilic, B. Robert; Strychalski, Elizabeth A.; Kralj, Jason G.; Liddle, J. Alexander; Dritschilo, Anatoly; Stavis, Samuel M.
2015-01-01
This article reports a process for rapidly prototyping nanofluidic devices, particularly those comprising slits with microscale widths and nanoscale depths, in silicone. This process consists of designing a nanofluidic device, fabricating a photomask, fabricating a device mold in epoxy photoresist, molding a device in silicone, cutting and punching a molded silicone device, bonding a silicone device to a glass substrate, and filling the device with aqueous solution. By using a bilayer of hard and soft silicone, we have formed and filled nanofluidic slits with depths of less than 400 nm and width-to-depth aspect ratios exceeding 250 without collapse of the slits. An important attribute of this article is its comprehensive description of the rapid prototyping process, presenting context and details relevant to the rational implementation and reliable repetition of the process. Moreover, the process uses equipment commonly found in nanofabrication facilities and research laboratories, facilitating its broad adaptation and application. Therefore, while this article specifically informs users of the Center for Nanoscale Science and Technology (CNST) at the National Institute of Standards and Technology (NIST), we anticipate that this information will be generally useful for the nanofabrication and nanofluidics research communities at large, and particularly useful for neophyte nanofabricators and nanofluidicists. PMID:26958449
Electrical Stimulation of Coleopteran Muscle for Initiating Flight.
Choo, Hao Yu; Li, Yao; Cao, Feng; Sato, Hirotaka
2016-01-01
Researchers have long been interested in reconstructing natural insects into steerable robots or vehicles. However, until recently, these so-called cyborg insects, biobots, or living machines existed only in science fiction. Owing to recent advances in nano/micro manufacturing, data processing, and anatomical and physiological biology, we can now stimulate living insects to induce user-desired motor actions and behaviors. To improve the practicality and applicability of airborne cyborg insects, a reliable and controllable flight initiation protocol is required. This study demonstrates an electrical stimulation protocol that initiates flight in a beetle (Mecynorrhina torquata, Coleoptera). A reliable stimulation protocol was determined by analyzing a pair of dorsal longitudinal muscles (DLMs), the flight muscles that oscillate the wings. DLM stimulation achieved a high success rate (> 90%), a rapid response time (< 1.0 s), and small variation (< 0.33 s, indicating little habituation). Notably, stimulation of the DLMs caused no critical damage to free-flight ability. In contrast, stimulation of the optic lobes, demonstrated earlier as a successful flight initiation protocol, destabilized the beetle in flight. Thus, DLM stimulation is a promising, safe protocol for inducing flight in cyborg insects or biobots. PMID:27050093
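The evaluation metrics quoted above (success rate, response time, and variation across trials) could be tabulated from trial logs with a sketch like the following; the trial data, function name, and timeout threshold are invented for illustration and are not from the study.

```python
import statistics

def summarize_flight_trials(response_times_s, timeout_s=5.0):
    """Summarize a series of flight-initiation trials.

    Each entry is the latency (s) from stimulation onset to wing
    oscillation, or None if the beetle did not respond within timeout_s.
    Returns success rate, mean response time, and its standard deviation
    (a rough habituation indicator, as in the metrics quoted above).
    """
    successes = [t for t in response_times_s if t is not None and t <= timeout_s]
    success_rate = len(successes) / len(response_times_s)
    mean_rt = statistics.mean(successes)
    sd_rt = statistics.stdev(successes) if len(successes) > 1 else 0.0
    return success_rate, mean_rt, sd_rt

# Example: ten hypothetical trials, one non-response.
trials = [0.6, 0.8, 0.7, None, 0.9, 0.5, 0.7, 0.8, 0.6, 0.7]
rate, mean_rt, sd_rt = summarize_flight_trials(trials)
print(f"success {rate:.0%}, response {mean_rt:.2f} s, variation {sd_rt:.2f} s")
```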
Automatic EEG spike detection.
Harner, Richard
2009-10-01
Since the 1970s, advances in science and technology in each succeeding decade have renewed the expectation of efficient, reliable automatic epileptiform spike detection (AESD). But even when reinforced with better, faster tools, clinically reliable unsupervised spike detection remains beyond our reach. Expert-selected spike parameters were the first and are still the most widely used basis for AESD. Thresholds for amplitude, duration, sharpness, rise time, fall time, after-coming slow waves, background frequency, and more have been used. It is still unclear which of these wave parameters are essential, beyond peak-to-peak amplitude and duration. Wavelet parameters are well suited to AESD but need to be combined with other parameters to achieve the desired levels of spike detection efficiency. Artificial Neural Network (ANN) and expert-system methods may have reached peak efficiency. Support Vector Machine (SVM) technology focuses on outliers rather than centroids of spike and nonspike data clusters and should improve AESD efficiency. An exemplary spike/nonspike database is suggested as a tool for assessing parameters and methods for AESD and is available in CSV or Matlab formats from the author at brainvue@gmail.com. Exploratory Data Analysis (EDA) is presented as a graphic method for finding better spike parameters and for step-wise evaluation of the spike detection process.
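A toy illustration of the expert-parameter approach described above: the sketch below flags candidate spikes using only peak-to-peak amplitude and approximate duration, two of the parameters the abstract singles out. The thresholds and the synthetic signal are invented; a clinical AESD system would add sharpness, rise/fall times, after-coming slow waves, and background-frequency criteria.

```python
import numpy as np

def detect_spikes(eeg, fs, amp_uV=80.0, dur_ms=(20, 70)):
    """Toy expert-parameter spike detector (illustrative only).

    Flags local maxima whose surrounding peak-to-peak amplitude exceeds
    amp_uV and whose width-at-half-prominence falls inside dur_ms.
    """
    events = []
    half = int(dur_ms[1] / 1000 * fs / 2)          # half-window in samples
    for i in range(half, len(eeg) - half):
        window = eeg[i - half:i + half]
        if eeg[i] == window.max():                 # candidate local maximum
            p2p = window.max() - window.min()      # peak-to-peak amplitude
            # Width at half prominence as a crude duration estimate.
            above = window > (window.min() + p2p / 2)
            width_ms = above.sum() / fs * 1000
            if p2p >= amp_uV and dur_ms[0] <= width_ms <= dur_ms[1]:
                events.append(i)
    return events

# Example: one synthetic spike injected into background noise (in microvolts).
fs = 256
rng = np.random.default_rng(1)
eeg = rng.normal(scale=10.0, size=2 * fs)
eeg[300:310] += np.hanning(10) * 120.0             # injected "spike"
print(detect_spikes(eeg, fs))
```

Even this two-parameter toy hints at the abstract's point: which parameters beyond amplitude and duration are essential remains an empirical question, one the suggested spike/nonspike database is meant to help answer.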
What’s for dinner? Undescribed species of porcini in a commercial packet
Suz, Laura M.
2014-01-01
Accurate diagnosis of the components of our food and a standard lexicon for clear communication are essential for regulating global food trade and identifying food fraud. Reliable identification of wild-collected foods can be particularly difficult, especially when they originate in under-documented regions or belong to poorly known groups such as Fungi. Porcini, among the most widely traded wild edible mushrooms in the world, are large and conspicuous, and they are used as food both on their own and in processed food products. China is a major exporter of porcini, most of it ending up in Europe. We used DNA sequencing to identify three species of mushroom contained within a commercial packet of dried Chinese porcini purchased in London. Surprisingly, none of the three had ever been formally described by science, and all required new scientific names. This demonstrates the ubiquity of unknown fungal diversity even in widely traded commercial food products from one of the most charismatic and least overlooked groups of mushrooms. Our rapid analysis and description make it possible to reliably identify these species, allowing their harvest to be monitored and their presence tracked in the food chain. PMID:25279259
Scientific Data Management (SDM) Center for Enabling Technologies. 2007-2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ludascher, Bertram; Altintas, Ilkay
Over the past five years, our activities have both established Kepler as a viable scientific workflow environment and demonstrated its value across multiple science applications. We have published numerous peer-reviewed papers on the technologies highlighted in this short paper and have given Kepler tutorials at SC06, SC07, SC08, and SciDAC 2007. Our outreach activities have allowed scientists to learn best practices and better utilize Kepler to address their individual workflow problems. Our contributions to advancing the state of the art in scientific workflows have focused on the following areas, with progress in each described in subsequent sections: workflow development, building a deeper understanding of scientific workflows "in the wild" and of the requirements for support tools that allow easy construction of complex scientific workflows; generic workflow components and templates, developing generic actors (i.e., workflow components and processes) that can be broadly applied to scientific problems; provenance collection and analysis, designing a flexible provenance collection and analysis infrastructure within the workflow environment; and workflow reliability and fault tolerance, improving the reliability and fault tolerance of workflow environments.
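As a schematic illustration of the workflow reliability and fault tolerance theme, the sketch below wraps a workflow step in a retry loop with exponential backoff. This is not Kepler's actual API (Kepler is Java-based, built around actors and directors); the function and the flaky staging step are invented for illustration.

```python
import time

def run_with_retries(step, *args, retries=3, backoff_s=1.0):
    """Re-run a failing workflow step with exponential backoff.

    A schematic stand-in for step-level fault tolerance in a scientific
    workflow engine; real engines also checkpoint state and record
    provenance for each attempt.
    """
    for attempt in range(1, retries + 1):
        try:
            return step(*args)
        except Exception as exc:
            if attempt == retries:
                raise                               # exhausted all attempts
            print(f"step failed ({exc}); retrying (attempt {attempt})")
            time.sleep(backoff_s * 2 ** (attempt - 1))

# Example: a flaky data-staging step that succeeds on the second try.
state = {"calls": 0}
def stage_data():
    state["calls"] += 1
    if state["calls"] < 2:
        raise IOError("transient transfer error")
    return "staged"

print(run_with_retries(stage_data))
```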