Jeffrey T. Morisette; Jaime E. Nickeson; Paul Davis; Yujie Wang; Yuhong Tian; Curtis E. Woodcock; Nikolay Shabanov; Matthew Hansen; Warren B. Cohen; Doug R. Oetter; Robert E. Kennedy
2003-01-01
Phase II of the Scientific Data Purchase (SDP) has provided NASA investigators access to data from four different satellite and airborne data sources. The Moderate Resolution Imaging Spectroradiometer (MODIS) land discipline team (MODLAND) sought to utilize these data in support of land product validation activities with a focus on the EOS Land Validation Core Sites. These...
Validity Evidence for a Learning Progression of Scientific Explanation
ERIC Educational Resources Information Center
Yao, Jian-Xin; Guo, Yu-Ying
2018-01-01
Providing scientific explanations for natural phenomena is a fundamental aim of science; therefore, scientific explanation has been selected as one of the key practices in science education policy documents around the world. To further elaborate on existing educational frameworks of scientific explanation in K-12, we propose a learning progression…
A Primer on Building Teacher Evaluation Instruments.
ERIC Educational Resources Information Center
Bitner, Ted; Kratzner, Ron
This paper presents a primer on building a scientifically oriented teacher evaluation instrument. It stresses the importance of accurate measures and accepts the presupposition that scientific approaches provide the most accurate measures of student teacher performance. The paper discusses the scientific concepts of validity and reliability, and…
A new dataset validation system for the Planetary Science Archive
NASA Astrophysics Data System (ADS)
Manaud, N.; Zender, J.; Heather, D.; Martinez, S.
2007-08-01
The Planetary Science Archive (PSA) is the official archive for the Mars Express mission. It received its first data at the end of 2004. These data are delivered by the PI teams to the PSA team as datasets formatted in conformance with the Planetary Data System (PDS) standard. The PI teams are responsible for analyzing and calibrating the instrument data, for producing reduced and calibrated data, and for the scientific validation of these data. ESA is responsible for long-term data archiving and distribution to the scientific community and must ensure, in this regard, that all archived products meet quality standards. To do so, an archive peer review is used to control the quality of the Mars Express science data archiving process. However, a full validation of the archive's content has been missing. An independent review board recently recommended that the completeness of the archive, as well as the consistency of the delivered data, be validated following well-defined procedures. A new validation software tool is being developed to complete the overall data quality control system. This tool aims to improve the quality of data and services provided to the scientific community through the PSA, and shall make it possible to track anomalies in datasets and to control their completeness. It shall ensure that PSA end-users (1) can rely on the results of their queries, (2) will get data products that are suitable for scientific analysis, and (3) can find all science data acquired during a mission. We define dataset validation as the verification and assessment process that checks dataset content against pre-defined top-level criteria, which represent the general characteristics of good-quality datasets. The content that is checked includes the data and all types of information that are essential in the process of deriving scientific results, as well as those interfacing with the PSA database. The validation software tool is a multi-mission tool designed to give the user the flexibility to define and implement various types of validation criteria, to validate datasets iteratively and incrementally, and to generate validation reports.
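A minimal sketch of the criteria-driven checking such a tool performs is given below; the dataset structure, criteria names, and messages are illustrative assumptions, not the actual PSA implementation.

```python
# Illustrative sketch of criteria-driven dataset validation (not the PSA tool).
from dataclasses import dataclass, field

@dataclass
class Dataset:
    label_files: set = field(default_factory=set)    # PDS labels present
    data_files: set = field(default_factory=set)     # data products delivered
    index_entries: set = field(default_factory=set)  # products listed in the index

def check_completeness(ds: Dataset) -> list:
    """Every indexed product must have both a label and a data file."""
    problems = []
    for product in ds.index_entries:
        if product not in ds.label_files:
            problems.append(f"missing label: {product}")
        if product not in ds.data_files:
            problems.append(f"missing data file: {product}")
    return problems

def check_consistency(ds: Dataset) -> list:
    """No orphan files: every delivered file must appear in the index."""
    orphans = (ds.label_files | ds.data_files) - ds.index_entries
    return [f"orphan file: {name}" for name in sorted(orphans)]

CRITERIA = [check_completeness, check_consistency]

def validate(ds: Dataset) -> list:
    """Run all top-level criteria and collect anomalies for a validation report."""
    report = []
    for criterion in CRITERIA:
        report.extend(criterion(ds))
    return report
```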
Validation of Automated Scoring for a Formative Assessment That Employs Scientific Argumentation
ERIC Educational Resources Information Center
Mao, Liyang; Liu, Ou Lydia; Roohr, Katrina; Belur, Vinetha; Mulholland, Matthew; Lee, Hee-Sun; Pallant, Amy
2018-01-01
Scientific argumentation is one of the core practices for teachers to implement in science classrooms. We developed a computer-based formative assessment to support students' construction and revision of scientific arguments. The assessment is built upon automated scoring of students' arguments and provides feedback to students and teachers.…
Fukui, Sadaaki; Matthias, Marianne S; Salyers, Michelle P
2015-01-01
Shared decision-making (SDM) is imperative to person-centered care, yet little is known about what aspects of SDM are targeted during psychiatric visits. This secondary data analysis (191 psychiatric visits with 11 providers, coded with a validated SDM coding system) revealed two factors (scientific and preference-based discussions) underlying SDM communication. Preference-based discussion occurred less frequently. Both provider and consumer initiation of SDM elements and decision complexity were associated with more discussion of both factors, but were more strongly associated with scientific discussion. Longer visit length correlated only with scientific discussion. Providers' understanding of core domains could facilitate engaging consumers in SDM.
Whose Consensus Is It Anyway? Scientific versus Legalistic Conceptions of Validity
ERIC Educational Resources Information Center
Borsboom, Denny
2012-01-01
Paul E. Newton provides an insightful and scholarly overview of central issues in validity theory. As he notes, many of the conceptual problems in validity theory derive from the fact that the word "validity" has two meanings. First, it indicates "whether a test measures what it purports to measure." This is a factual claim about the psychometric…
Fever: Views in Anthroposophic Medicine and Their Scientific Validity
2016-01-01
Objective. To conduct a scoping review to characterize how fever is viewed in anthroposophic medicine (AM) and discuss the scientific validity of these views. Methods. Systematic searches were run in Medline, Embase, CAMbase, and Google Scholar. Material from anthroposophic medical textbooks and articles was also used. Data was extracted and interpreted. Results. Most of the anthroposophic literature on this subject is in the German language. Anthroposophic physicians hold a beneficial view on fever, rarely suppress fever with antipyretics, and often use complementary means of alleviating discomfort. In AM, fever is considered to have the following potential benefits: promoting more complete recovery; preventing infection recurrences and atopic diseases; providing a unique opportunity for caregivers to provide loving care; facilitating individual development and resilience; protecting against cancer and boosting the anticancer effects of mistletoe products. These views are discussed with regard to the available scientific data. Conclusion. AM postulates that fever can be of short-term and long-term benefit in several ways; many of these opinions have become evidence-based (though still often not practiced) while others still need empirical studies to be validated, refuted, or modified. PMID:27999605
NASA Technical Reports Server (NTRS)
Fargion, Giulietta S.; McClain, Charles R.; Busalacchi, Antonio J. (Technical Monitor)
2001-01-01
The purpose of this technical report is to provide current documentation of the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project activities, NASA Research Announcement (NRA) research status, satellite data processing, data product validation, and field calibration. This documentation is necessary to ensure that critical information is related to the scientific community and NASA management. This critical information includes the technical difficulties and challenges of validating and combining ocean color data from an array of independent satellite systems to form consistent and accurate global bio-optical time series products. This technical report is not meant as a substitute for scientific literature. Instead, it will provide a ready and responsive vehicle for the multitude of technical reports issued by an operational project.
MODIS Validation, Data Merger and Other Activities Accomplished by the SIMBIOS Project: 2002-2003
NASA Technical Reports Server (NTRS)
Fargion, Giulietta S.; McClain, Charles R.
2003-01-01
The purpose of this technical report is to provide current documentation of the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project activities, satellite data processing, and data product validation. This documentation is necessary to ensure that critical information is related to the scientific community and NASA management. This critical information includes the technical difficulties and challenges of validating and combining ocean color data from an array of independent satellite systems to form consistent and accurate global bio-optical time series products. This technical report focuses on the SIMBIOS Project's efforts in support of the Moderate-Resolution Imaging Spectroradiometer (MODIS) on the Earth Observing System (EOS) Terra platform (similar evaluations of MODIS/Aqua are underway). This technical report is not meant as a substitute for scientific literature. Instead, it will provide a ready and responsive vehicle for the multitude of technical reports issued by an operational project.
Chafetz, M D; Williams, M A; Ben-Porath, Y S; Bianchini, K J; Boone, K B; Kirkwood, M W; Larrabee, G J; Ord, J S
2015-01-01
The milestone publication by Slick, Sherman, and Iverson (1999) of criteria for determining malingered neurocognitive dysfunction led to extensive research on validity testing. Position statements by the National Academy of Neuropsychology and the American Academy of Clinical Neuropsychology (AACN) recommended routine validity testing in neuropsychological evaluations. Despite this widespread scientific and professional support, the Social Security Administration (SSA) continued to discourage validity testing, a stance that led to a congressional initiative for SSA to reevaluate their position. In response, SSA commissioned the Institute of Medicine (IOM) to evaluate the science concerning the validation of psychological testing. The IOM concluded that validity assessment was necessary in psychological and neuropsychological examinations (IOM, 2015). The AACN sought to provide independent expert guidance and recommendations concerning the use of validity testing in disability determinations. A panel of contributors to the science of validity testing and its application to the disability process was charged with describing why the disability process for SSA needs improvement, and indicating the necessity for validity testing in disability exams. This work showed how the determination of malingering is a probability proposition, described how different types of validity tests are appropriate, provided evidence concerning non-credible findings in children and low-functioning individuals, and discussed the appropriate evaluation of pain disorders typically seen outside of mental consultations. A scientific plan for validity assessment that additionally protects test security is needed in disability determinations and in research on classification accuracy of disability decisions.
THE USE OF RESEARCH RESULTS IN TEACHING SOCIAL WORK PRACTICE.
ERIC Educational Resources Information Center
LAWRENCE, RICHARD G.
Because the success of intervention depends upon the validity of the propositions employed, and because scientific research assures validity by providing the most systematic and rigorous attention to problems, the utilization of research is important to social work practice. Several factors limit its use--(1) although concepts are clearly defined…
Nine Criteria for a Measure of Scientific Output
Kreiman, Gabriel; Maunsell, John H. R.
2011-01-01
Scientific research produces new knowledge, technologies, and clinical treatments that can lead to enormous returns. Often, the path from basic research to new paradigms and direct impact on society takes time. Precise quantification of scientific output in the short-term is not an easy task but is critical for evaluating scientists, laboratories, departments, and institutions. While there have been attempts to quantify scientific output, we argue that current methods are not ideal and suffer from solvable difficulties. Here we propose criteria that a metric should have to be considered a good index of scientific output. Specifically, we argue that such an index should be quantitative, based on robust data, rapidly updated and retrospective, presented with confidence intervals, normalized by number of contributors, career stage and discipline, impractical to manipulate, and focused on quality over quantity. Such an index should be validated through empirical testing. The purpose of quantitatively evaluating scientific output is not to replace careful, rigorous review by experts but rather to complement those efforts. Because it has the potential to greatly influence the efficiency of scientific research, we have a duty to reflect upon and implement novel and rigorous ways of evaluating scientific output. The criteria proposed here provide initial steps toward the systematic development and validation of a metric to evaluate scientific output. PMID:22102840
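As a toy illustration of two criteria the authors list (normalization by number of contributors, presentation with confidence intervals), the sketch below fractionally credits citations among authors and bootstraps an interval; the data and weighting are invented, not the authors' proposed metric.

```python
# Toy contributor-normalized output index with a bootstrap confidence interval.
# The papers and the fractional-credit weighting are invented for illustration.
import random

papers = [  # (citations, number_of_authors)
    (120, 4), (35, 2), (8, 1), (60, 6), (15, 3),
]

def index(sample):
    # Fractional credit: each paper's citations split evenly among its authors.
    return sum(c / n for c, n in sample) / len(sample)

point = index(papers)
boot = sorted(
    index([random.choice(papers) for _ in papers]) for _ in range(10_000)
)
lo, hi = boot[249], boot[9749]  # approximate 95% bootstrap interval
print(f"index = {point:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```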
Poor reporting of scientific leadership information in clinical trial registers.
Sekeres, Melanie; Gold, Jennifer L; Chan, An-Wen; Lexchin, Joel; Moher, David; Van Laethem, Marleen L P; Maskalyk, James; Ferris, Lorraine; Taback, Nathan; Rochon, Paula A
2008-02-20
In September 2004, the International Committee of Medical Journal Editors (ICMJE) issued a Statement requiring that all clinical trials be registered at inception in a public register in order to be considered for publication. The World Health Organization (WHO) and ICMJE have identified 20 items that should be provided before a trial is considered registered, including contact information. Identifying those scientifically responsible for trial conduct increases accountability. The objective is to examine the proportion of registered clinical trials providing valid scientific leadership information. We reviewed clinical trial entries listing Canadian investigators in the two largest international and public trial registers, the International Standard Randomized Controlled Trial Number (ISRCTN) register, and ClinicalTrials.gov. The main outcome measures were the proportion of clinical trials reporting valid contact information for the trials' Principal Investigator (PI)/Co-ordinating Investigator/Study Chair/Site PI, and trial e-mail contact address, stratified by funding source, recruiting status, and register. A total of 1388 entries (142 from ISRCTN and 1246 from ClinicalTrials.gov) comprised our sample. We found non-compliance with mandatory registration requirements regarding scientific leadership and trial contact information. Non-industry and partial industry funded trials were significantly more likely to identify the individual responsible for scientific leadership (OR = 259, 95% CI: 95-701) and to provide a contact e-mail address (OR = 9.6, 95% CI: 6.6-14) than were solely industry funded trials. Despite the requirements set by WHO and ICMJE, data on scientific leadership and contact e-mail addresses are frequently omitted from clinical trials registered in the two leading public clinical trial registers. To promote accountability and transparency in clinical trials research, public clinical trials registers should ensure adequate monitoring of trial registration to ensure completion of mandatory contact information fields identifying scientific leadership.
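For readers unfamiliar with the statistic reported here, the sketch below shows how an odds ratio and a Wald-type 95% confidence interval are computed from a 2x2 table; the counts are made up, not the study's data.

```python
# Odds ratio with a Wald 95% confidence interval from a 2x2 table.
# Counts are illustrative only, not the study's data.
import math

# rows: funding source; columns: leadership identified yes / no
a, b = 180, 20   # non-industry funded: identified / not identified
c, d = 40, 160   # solely industry funded: identified / not identified

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.1f}, 95% CI: {lo:.1f}-{hi:.1f}")
```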
Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie
This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, and the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems and software, up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools, and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods and non-testing approaches such as predictive computer models, up to entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. the EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised at the international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities. Validation of alternative methods is conducted through scientific studies assessing two key hypotheses: the reliability and the relevance of the test method for a given purpose. Relevance encapsulates the scientific basis of the test method, its capacity to predict adverse effects in the "target system" (i.e. human health or the environment), and its applicability for the intended purpose. In this chapter we focus on the validation of non-animal in vitro alternative testing methods and review the concepts, challenges, processes and tools fundamental to the validation of in vitro methods intended for hazard testing of chemicals. We explore major challenges and peculiarities of validation in this area. Based on the notion that validation per se is a scientific endeavour that needs to adhere to key scientific principles, namely objectivity and appropriate choice of methodology, we examine basic aspects of study design and management, and provide illustrations of statistical approaches to describe the predictive performance of validated test methods as well as their reliability.
RE-EVALUATION OF APPLICABILITY OF AGENCY SAMPLE HOLDING TIMES
Purpose and Rationale:
1) To assess the validity of currently recognized holding times and to provide a scientific basis for changes that may be necessary to the current regulations.
2) While holding times may appear adequate to protect sample integrity and provid...
Cloud computing and validation of expandable in silico livers.
Ropella, Glen E P; Hunt, C Anthony
2010-12-03
In Silico Livers (ISLs) are works in progress. They are used to challenge multilevel, multi-attribute, mechanistic hypotheses about the hepatic disposition of xenobiotics coupled with hepatic responses. To enhance ISL-to-liver mappings, we added discrete time metabolism, biliary elimination, and bolus dosing features to a previously validated ISL and initiated re-validation experiments that required scaling experiments to use more simulated lobules than previously, more than could be achieved using the local cluster technology. Rather than dramatically increasing the size of our local cluster we undertook the re-validation experiments using the Amazon EC2 cloud platform. Doing so required demonstrating the efficacy of scaling a simulation to use more cluster nodes and assessing the scientific equivalence of local cluster validation experiments with those executed using the cloud platform. The local cluster technology was duplicated in the Amazon EC2 cloud platform. Synthetic modeling protocols were followed to identify a successful parameterization. Experiment sample sizes (number of simulated lobules) on both platforms were 49, 70, 84, and 152 (cloud only). Experimental indistinguishability was demonstrated for ISL outflow profiles of diltiazem using both platforms for experiments consisting of 84 or more samples. The process was analogous to demonstration of results equivalency from two different wet-labs. The results provide additional evidence that disposition simulations using ISLs can cover the behavior space of liver experiments in distinct experimental contexts (there is in silico-to-wet-lab phenotype similarity). The scientific value of experimenting with multiscale biomedical models has been limited to research groups with access to computer clusters. The availability of cloud technology coupled with the evidence of scientific equivalency has lowered the barrier and will greatly facilitate model sharing as well as provide straightforward tools for scaling simulations to encompass greater detail with no extra investment in hardware.
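One plausible way to operationalize "experimental indistinguishability" between the two platforms is a two-sample Kolmogorov-Smirnov test on matched outputs; the test choice and the synthetic profiles below are assumptions for illustration, not the authors' actual similarity criterion.

```python
# Sketch: comparing simulated outflow measures from two platforms.
# The KS test and the synthetic data are illustrative; the paper's own
# equivalence criteria may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
local_cluster = rng.lognormal(mean=0.0, sigma=0.3, size=84)  # stand-in outputs
cloud_ec2     = rng.lognormal(mean=0.0, sigma=0.3, size=84)

stat, p = stats.ks_2samp(local_cluster, cloud_ec2)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
# A large p-value fails to reject the hypothesis that both platforms draw
# from the same distribution, i.e. the results are indistinguishable.
```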
Critical validation studies of neurofeedback.
Gruzelier, John; Egner, Tobias
2005-01-01
The field of neurofeedback training has proceeded largely without validation. In this article the authors review studies directed at validating sensory motor rhythm, beta and alpha-theta protocols for improving attention, memory, and music performance in healthy participants. Importantly, benefits were demonstrable with cognitive and neurophysiologic measures that were predicted on the basis of regression models of learning to enhance sensory motor rhythm and beta activity. The first evidence of operant control over the alpha-theta ratio is provided, together with remarkable improvements in artistic aspects of music performance equivalent to two class grades in conservatory students. These are initial steps in providing a much needed scientific basis to neurofeedback.
Data Validation for Earth Probe-Total Ozone Mapping Spectrometer
NASA Technical Reports Server (NTRS)
Stanford, John L.
1995-01-01
This presentation represents the final report for the NASA grant project. The goal of this project was to provide scientific analysis to aid in validation of data sets used in detection of long term global trends of total ozone. Ozone data from the Earth Probe Total Ozone Mapping Spectrometer instrument was compared for validation purposes with features in previous TOMS data. Atmospheric dynamic concepts were used in the analysis. The publications sponsored by the grant are listed along with abstracts.
NASA Technical Reports Server (NTRS)
VanHeukelem, Laurie; Thomas, Crystal S.; Gilbert, Patricia M.; Fargion, Giulietta S. (Editor); McClain, Charles R. (Editor)
2002-01-01
The purpose of this technical report is to provide current documentation of the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project activities, NASA Research Announcement (NRA) research status, satellite data processing, data product validation, and field calibration. This documentation is necessary to ensure that critical information is related to the scientific community and NASA management. This critical information includes the technical difficulties and challenges of validating and combining ocean color data from an array of independent satellite systems to form consistent and accurate global bio-optical time series products. This technical report is not meant as a substitute for scientific literature. Instead, it will provide a ready and responsive vehicle for the multitude of technical reports issued by an operational project. This particular document focuses on the variability in chlorophyll pigment measurements resulting from differences in methodologies and laboratories conducting the pigment analysis.
Evaluation of the Validity and Reliability of the Waterlow Pressure Ulcer Risk Assessment Scale
Charalambous, Charalambos; Koulori, Agoritsa; Vasilopoulos, Aristidis; Roupa, Zoe
2018-01-01
Introduction Prevention is the ideal strategy to tackle the problem of pressure ulcers. Pressure ulcer risk assessment scales are among the most pivotal measures applied to tackle the problem, yet much criticism has been raised regarding the validity and reliability of these scales. Objective To investigate the validity and reliability of the Waterlow pressure ulcer risk assessment scale. Method The methodology used is a narrative literature review; the bibliography was reviewed through Cinahl, Pubmed, EBSCO, Medline and Google Scholar, and 26 scientific articles were identified. The articles were chosen due to their direct correlation with the objective under study and their scientific relevance. Results The construct and face validity of the Waterlow appear adequate, but with regard to content validity, changes in the categories age and gender could be beneficial. The concurrent validity cannot be assessed. The predictive validity of the Waterlow is characterized by high specificity and low sensitivity. The inter-rater reliability has been demonstrated to be inadequate; this may be due to a lack of clear definitions within the categories and differing levels of knowledge between users. Conclusion Due to the limitations presented regarding the validity and reliability of the Waterlow pressure ulcer risk assessment scale, the scale should be used in conjunction with clinical assessment to provide optimum results. PMID:29736104
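The reported pattern of high specificity with low sensitivity falls directly out of a screening 2x2 table, as the sketch below shows; the counts are invented for illustration, not drawn from the reviewed studies.

```python
# Sensitivity and specificity from a screening 2x2 table (invented counts).
tp, fn = 30, 45   # developed an ulcer: flagged at-risk / missed
fp, tn = 20, 405  # no ulcer: flagged at-risk / correctly cleared

sensitivity = tp / (tp + fn)   # proportion of true cases detected
specificity = tn / (tn + fp)   # proportion of non-cases correctly cleared
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
# Low sensitivity with high specificity means many at-risk patients are
# missed, which is why clinical judgment is advised alongside the scale.
```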
Inferring Genetic Ancestry: Opportunities, Challenges, and Implications
Royal, Charmaine D.; Novembre, John; Fullerton, Stephanie M.; Goldstein, David B.; Long, Jeffrey C.; Bamshad, Michael J.; Clark, Andrew G.
2010-01-01
Increasing public interest in direct-to-consumer (DTC) genetic ancestry testing has been accompanied by growing concern about issues ranging from the personal and societal implications of the testing to the scientific validity of ancestry inference. The very concept of “ancestry” is subject to misunderstanding in both the general and scientific communities. What do we mean by ancestry? How exactly is ancestry measured? How far back can such ancestry be defined and by which genetic tools? How do we validate inferences about ancestry in genetic research? What are the data that demonstrate our ability to do this correctly? What can we say and what can we not say from our research findings and the test results that we generate? This white paper from the American Society of Human Genetics (ASHG) Ancestry and Ancestry Testing Task Force builds upon the 2008 ASHG Ancestry Testing Summary Statement in providing a more in-depth analysis of key scientific and non-scientific aspects of genetic ancestry inference in academia and industry. It culminates with recommendations for advancing the current debate and facilitating the development of scientifically based, ethically sound, and socially attentive guidelines concerning the use of these continually evolving technologies. PMID:20466090
Earth Science Enterprise Scientific Data Purchase Project: Verification and Validation
NASA Technical Reports Server (NTRS)
Jenner, Jeff; Policelli, Fritz; Fletcher, Rosea; Holecamp, Kara; Owen, Carolyn; Nicholson, Lamar; Dartez, Deanna
2000-01-01
This paper presents viewgraphs on the Earth Science Enterprise Scientific Data Purchase Project's verification and validation process. The topics include: 1) What is Verification and Validation? 2) Why Verification and Validation? 3) Background; 4) ESE Data Purchase Validation Process; 5) Data Validation System and Ingest Queue; 6) Shipment Verification; 7) Tracking and Metrics; 8) Validation of Contract Specifications; 9) Earth Watch Data Validation; 10) Validation of Vertical Accuracy; and 11) Results of Vertical Accuracy Assessment.
Wodushek, Thomas R; Greher, Michael R
2017-05-01
In the first column in this 2-part series, Performance Validity Testing in Neuropsychology: Scientific Basis and Clinical Application-A Brief Review, the authors introduced performance validity tests (PVTs) and their function, provided a justification for why they are necessary, traced their ongoing endorsement by neuropsychological organizations, and described how they are used and interpreted by ever increasing numbers of clinical neuropsychologists. To enhance readers' understanding of these measures, this second column briefly describes common detection strategies used in PVTs as well as the typical methods used to validate new PVTs and determine cut scores for valid/invalid determinations. We provide a discussion of the latest research demonstrating how neuropsychologists can combine multiple PVTs in a single battery to improve sensitivity/specificity to invalid responding. Finally, we discuss future directions for the research and application of PVTs.
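The logic of combining multiple PVTs can be illustrated with a simple binomial model: requiring failure on two or more tests sharply reduces false positives. The sketch below assumes independent tests with a uniform 90% specificity, a simplification that the PVT literature itself qualifies.

```python
# Probability that a credible examinee fails >= k of n independent PVTs,
# assuming each test alone has 90% specificity (10% false-positive rate).
# Independence and uniform specificity are simplifying assumptions.
from math import comb

def p_fail_at_least(n, k, fp=0.10):
    return sum(comb(n, j) * fp**j * (1 - fp)**(n - j) for j in range(k, n + 1))

print(p_fail_at_least(5, 1))  # ~0.41: one failure among five tests is common
print(p_fail_at_least(5, 2))  # ~0.08: requiring two failures restores specificity
```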
NASA Technical Reports Server (NTRS)
Coleman, E. A.
1980-01-01
Scientific information from previous space flights, space medicine, exercise physiology, and sports medicine was used to prepare a physical fitness manual suitable for use by members of the NASA astronaut population. A variety of scientifically valid exercise programs and activities suitable for the development of physical fitness are provided. Programs, activities, and supportive scientific data are presented in a concise, easy to read format so as to permit the user to select his or her mode of training with confidence and devote time previously spent experimenting with training routines to preparation for space flight. The programs and activities included were tested and shown to be effective and enjoyable.
Environmental risk, precaution, and scientific rationality in the context of WTO/NAFTA trade rules.
Crawford-Brown, Douglas; Pauwelyn, Joost; Smith, Kelly
2004-04-01
This article considers the role of scientific rationality in understanding statements of risk produced by a scientific community. An argument is advanced that, while scientific rationality does impose constraints on valid scientific justifications for restrictions on products and practices, it also provides flexibility in the judgments needed to both develop and apply characterizations of risk. The implications of this flexibility for the understanding of risk estimates in WTO and NAFTA deliberations are explored, with the goal of finding an intermediate ground between the view that science unambiguously justifies or rejects a policy, and the view that science is yet another cultural tool that can be manipulated in support of any decision. The result is a proposal for a dialogical view of scientific rationality in which risk estimates are depicted as confidence distributions that follow from a structured dialogue of scientific panels focused on judgments of evidence, evidential reasoning, and epistemic analysis.
Ground-water models: Validate or invalidate
Bredehoeft, J.D.; Konikow, Leonard F.
1993-01-01
The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification are misleading, at best. These terms should be abandoned by the ground-water community.
Stem Cell Research and Clinical Translation: A Roadmap about Good Clinical Practice and Patient Care
Frati, Paola; Scopetti, Matteo; Santurro, Alessandro; Gatto, Vittorio; Fineschi, Vittorio
2017-01-01
The latest research achievements in the field of stem cells led in 2016 to the publication of “Guidelines for Stem Cell Research and Clinical Translation” by the International Society for Stem Cell Research (ISSCR). Updating the topics covered in previous publications, the new recommendations offer interesting ethical and scientific insights. Under the common principles of research integrity, protection of patient's welfare, respect for the research subjects, transparency and social justice, the centrality of good clinical practice, and informed consent in research and translational medicine is supported. The guidelines implement the abovementioned publications, requiring rigor in all areas of research, promoting the validity of the scientific activity results and emphasizing the need for an accurate and efficient public communication. This paper aims to analyze the aforementioned guidelines in order to provide a valid interpretive tool for experts. In particular, a research activity focused on the bioethical, scientific, and social implications of the new recommendations is carried out in order to provide food for thought. Finally, as an emerging issue of potential impact of current guidelines, an overview on implications of compensation for egg donation is offered. PMID:29090010
The semantics of Chemical Markup Language (CML): dictionaries and conventions.
Murray-Rust, Peter; Townsend, Joe A; Adams, Sam E; Phadungsukanan, Weerapong; Thomas, Jens
2011-10-14
The semantic architecture of CML consists of conventions, dictionaries and units. The conventions conform to a top-level specification and each convention can constrain compliant documents through machine-processing (validation). Dictionaries conform to a dictionary specification which also imposes machine validation on the dictionaries. Each dictionary can also be used to validate data in a CML document, and provide human-readable descriptions. An additional set of conventions and dictionaries are used to support scientific units. All conventions, dictionaries and dictionary elements are identifiable and addressable through unique URIs.
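A rough sketch of the dictionary-based validation described here: collect dictRef attributes from a CML fragment and confirm that each resolves against a dictionary. The fragment, namespace handling, and dictionary contents are simplified assumptions, not the reference CML tooling.

```python
# Sketch: checking that every dictRef in a CML fragment resolves in a dictionary.
# The fragment and the dictionary are toy examples, not official CML resources.
import xml.etree.ElementTree as ET

cml = """
<cml xmlns="http://www.xml-cml.org/schema">
  <property dictRef="cc:temperature"><scalar units="unit:k">298.15</scalar></property>
  <property dictRef="cc:pressure"><scalar units="unit:pa">101325</scalar></property>
</cml>
"""

dictionary = {"cc:temperature", "cc:zpe"}  # entries the toy dictionary defines

root = ET.fromstring(cml)
ns = {"cml": "http://www.xml-cml.org/schema"}
for prop in root.findall(".//cml:property", ns):
    ref = prop.get("dictRef")
    status = "ok" if ref in dictionary else "UNRESOLVED"
    print(f"{ref}: {status}")
```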
VALIDATION GUIDELINES FOR LABORATORIES PERFORMING FORENSIC ANALYSIS OF CHEMICAL TERRORISM
The Scientific Working Group on Forensic Analysis of Chemical Terrorism (SWGFACT) has developed the following guidelines for laboratories engaged in the forensic analysis of chemical evidence associated with terrorism. This document provides a baseline framework and guidance for...
[Summary: Scientific evaluation of EMDR psychotherapy].
Haour, F; de Beaurepaire, C
2016-06-01
The evaluation of psychotherapy methods is made difficult by their practical and theoretical diversity as well as the increasing number of available therapies. Evaluation based on scientific criteria in randomized controlled trials provides the highest level of proof and recognition by health agencies. A recently described integrative psychotherapy, eye movement desensitization and reprocessing (EMDR), developed by F. Shapiro since 1989, has been confronted with the validation procedure used for pharmacological treatments. It was of interest to review the scientific validation steps carried out for EMDR psychotherapy and for its mechanisms of action. The practical and methodological protocol of EMDR psychotherapy for trauma integration is reviewed, as well as clinical results and mechanisms. This therapy, focused on the resolution of traumas, began with the treatment of patients with post-traumatic stress disorder (PTSD). The integrative EMDR protocol demonstrated the highest level of efficacy for PTSD treatment twenty years after its first publication. The efficacy of the protocol is now under study and scientific evaluation for disorders in which traumatic experiences are triggers or maintenance factors: anxiety, depression, phobias, sexual disorders, schizophrenia, etc. This new integrative psychotherapy follows the pathways and the timing observed for the evaluation and validation of other therapies. Copyright © 2016 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
Chandra monitoring, trends, and response
NASA Astrophysics Data System (ADS)
Spitzbart, Brad D.; Wolk, Scott J.; Isobe, Takashi
2002-12-01
The Chandra X-ray Observatory was launched in July, 1999 and has yielded extraordinary scientific results. Behind the scenes, our Monitoring and Trends Analysis (MTA) system has proven to be a valuable resource. With three years' worth of on-orbit data, we have available a vast array of both telescope diagnostic information and analysis of scientific data to assess Observatory performance. As part of Chandra's Science Operations Team (SOT), the primary goal of MTA is to provide tools for effective decision making leading to the most efficient production of quality science output from the Observatory. We occupy a middle ground between flight operations, chiefly concerned with the health and safety of the spacecraft, and validation and verification, concerned with the scientific validity of the data taken and whether or not they fulfill the observer's requirements. In that role we provide and receive support from systems engineers, instrument experts, operations managers, and scientific users. MTA tools, products, and services include real-time monitoring and alert generation for the most mission critical components, long term trending of all spacecraft systems, detailed analysis of various subsystems for life expectancy or anomaly resolution, and creating and maintaining a large SQL database of relevant information. This is accomplished through the use of a wide variety of input data sources and flexible, accessible programming and analysis techniques. This paper will discuss the overall design of the system, its evolution and the resources available.
NASA Astrophysics Data System (ADS)
Carter, Frances D.
2011-12-01
Low participation and performance in science, technology, engineering, and mathematics (STEM) fields by U.S. citizens are widely recognized as major problems with substantial economic, political, and social ramifications. Studies of collegiate interventions designed to broaden participation in STEM fields suggest that participation in undergraduate research is a key program component that enhances such student outcomes as undergraduate GPA, graduation, persistence in a STEM major, and graduate school enrollment. However, little is known about the mechanisms that are responsible for these positive effects. The current study hypothesizes that undergraduate research participation increases scientific self-efficacy and scientific research proficiency. This hypothesis was tested using data obtained from a survey of minority students from several STEM intervention programs that offer undergraduate research opportunities. Students were surveyed both prior to and following the summer of 2010. Factor analysis was used to examine the factor structure of participants' responses on scientific self-efficacy and scientific research proficiency scales. Difference-in-difference analysis was then applied to the resulting factor score differences to estimate the relationship of summer research participation with scientific self-efficacy and scientific research proficiency. Factor analytic results replicate and further validate previous findings of a general scientific self-efficacy construct (Schultz, 2008). While the factor analytic results for the exploratory scientific research proficiency scale suggest that it was also a measureable construct, the factor structure was not generalizable over time. Potential reasons for the lack of generalizability validity for the scientific research proficiency scale are explored and recommendations for emerging scales are provided. Recent restructuring attempts within federal science agencies threaten the future of STEM intervention programs. Causal estimates of the effect of undergraduate research participation on specific and measurable benefits can play an important role in ensuring the sustainability of STEM intervention programs. Obtaining such estimates requires additional studies that, inter alia, incorporate adequate sample sizes, valid measurement scales, and the ability to account for unobserved variables. Political strategies, such as compromise, can also play an important role in ensuring the sustainability of STEM intervention programs.
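A compact sketch of a difference-in-difference estimate applied to pre/post factor scores for research participants versus a comparison group; the group means and scores below are synthetic, not the study's data.

```python
# Difference-in-difference on pre/post factor scores (synthetic data).
# Estimate = (post - pre | participants) - (post - pre | comparison).
import numpy as np

rng = np.random.default_rng(1)
pre_t, post_t = rng.normal(0.0, 1, 200), rng.normal(0.45, 1, 200)  # participants
pre_c, post_c = rng.normal(0.0, 1, 200), rng.normal(0.10, 1, 200)  # comparison

did = (post_t.mean() - pre_t.mean()) - (post_c.mean() - pre_c.mean())
print(f"DiD estimate of the participation effect: {did:.2f}")
```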
IMPROVING THE REPORTING OF THERAPEUTIC EXERCISE INTERVENTIONS IN REHABILITATION RESEARCH.
Page, Phil; Hoogenboom, Barb; Voight, Michael
2017-04-01
The foundation of evidence-based practice lies in clinical research, which is based on the utilization of the scientific method. The scientific method requires that all details of the experiment be provided in publications to support replication of the study in order to evaluate and validate the results. More importantly, clinical research can only be translated into practice when researchers provide explicit details of the study. Too often, rehabilitation exercise intervention studies lack the appropriate detail to allow clinicians to replicate the exercise protocol in their patient populations. Therefore, the purpose of this clinical commentary is to provide guidelines for optimal reporting of therapeutic exercise interventions in rehabilitation research. Level of Evidence: 5.
DOT National Transportation Integrated Search
2006-05-01
This research has provided NCDOT with (1) scientific observations to validate the pollutant removal : performance of selected structural BMPs, (2) a database management option for BMP monitoring and : non-monitoring sites, (3) pollution prevention pl...
NASA Astrophysics Data System (ADS)
Develaki, Maria
2017-11-01
Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.
Science games and the development of scientific possible selves.
Beier, Margaret; Miller, Leslie; Wang, Shu
2012-12-01
Serious scientific games, especially those that include a virtual apprenticeship component, provide players with realistic experiences in science. This article discusses how science games can influence learning about science and the development of science-oriented possible selves through repeated practice in professional play and through social influences (e.g., peer groups). We first review the theory of possible selves (Markus and Nurius 1986) and discuss the potential of serious scientific games for influencing the development of scientific possible selves. As part of our review, we present a forensic game that inspired our work. Next we present a measure of scientific possible selves and assess its reliability and validity with a sample of middle-school students (N=374). We conclude by discussing the promise of science games and the development of scientific possible selves on both the individual and group levels as a means of inspiring STEM careers among adolescents.
NASA Astrophysics Data System (ADS)
Arieska, M.; Syamsurizal, S.; Sumarmin, R.
2018-04-01
Students have difficulty identifying and describing vertebrate animals and are insufficiently skilled in the science process skills exercised during practical work. One way to build these scientific skills is through practicum activities using a practicum guide based on a scientific approach. This study aims to produce a valid vertebrate taxonomy practicum guide for biology education students at STKIP PGRI West Sumatra. The study uses the Plomp development model, consisting of three phases: initial investigation, development or prototyping, and assessment. The data collection instrument used in this study is a practicum guide validation sheet. Data were analyzed descriptively based on data obtained from the field. The developed vertebrate taxonomy practicum guide obtained a validity value of 3.22, in the very valid category. This research and development has thus produced a very valid vertebrate taxonomy practicum guide based on a scientific approach.
Development and validation of an instrument for evaluating inquiry-based tasks in science textbooks
NASA Astrophysics Data System (ADS)
Yang, Wenyuan; Liu, Enshan
2016-12-01
This article describes the development and validation of an instrument that can be used for content analysis of inquiry-based tasks. According to the theories of educational evaluation and qualities of inquiry, four essential functions that inquiry-based tasks should serve are defined: (1) assisting in the construction of understandings about scientific concepts, (2) providing students opportunities to use inquiry process skills, (3) being conducive to establishing understandings about scientific inquiry, and (4) giving students opportunities to develop higher order thinking skills. An instrument - the Inquiry-Based Tasks Analysis Inventory (ITAI) - was developed to judge whether inquiry-based tasks perform these functions well. To test the reliability and validity of the ITAI, 4 faculty members were invited to use the ITAI to collect data from 53 inquiry-based tasks in the 3 most widely adopted senior secondary biology textbooks in Mainland China. The results indicate that (1) the inter-rater reliability reached 87.7%, (2) the grading criteria have high discriminant validity, (3) the items possess high convergent validity, and (4) the Cronbach's alpha reliability coefficient reached 0.792. The study concludes that the ITAI is valid and reliable. Because of its solid foundations in theoretical and empirical argumentation, the ITAI is trustworthy.
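Cronbach's alpha, reported here as 0.792, is computed from an items-by-cases score matrix as in the sketch below; the scores are fabricated to show the formula, not the ITAI data.

```python
# Cronbach's alpha from a (cases x items) score matrix (fabricated scores).
import numpy as np

scores = np.array([   # rows: tasks rated; columns: instrument items
    [3, 2, 3, 2],
    [1, 1, 2, 1],
    [4, 3, 4, 4],
    [2, 2, 1, 2],
    [3, 3, 3, 2],
])

k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1).sum()     # sum of per-item variances
total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
alpha = k / (k - 1) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```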
Validation of alternative methods for toxicity testing.
Bruner, L H; Carr, G J; Curren, R D; Chamberlain, M
1998-01-01
Before nonanimal toxicity tests may be officially accepted by regulatory agencies, it is generally agreed that the validity of the new methods must be demonstrated in an independent, scientifically sound validation program. Validation has been defined as the demonstration of the reliability and relevance of a test method for a particular purpose. This paper provides a brief review of the development of the theoretical aspects of the validation process and updates current thinking about objectively testing the performance of an alternative method in a validation study. Validation of alternative methods for eye irritation testing is a specific example illustrating important concepts. Although discussion focuses on the validation of alternative methods intended to replace current in vivo toxicity tests, the procedures can be used to assess the performance of alternative methods intended for other uses. PMID:9599695
A Chemistry Concept Reasoning Test
ERIC Educational Resources Information Center
Cloonan, Carrie A.; Hutchinson, John S.
2011-01-01
A Chemistry Concept Reasoning Test was created and validated providing an easy-to-use tool for measuring conceptual understanding and critical scientific thinking of general chemistry models and theories. The test is designed to measure concept understanding comparable to that found in free-response questions requiring explanations over…
Toward an Interdisciplinary Science of Culture
ERIC Educational Resources Information Center
Hayes, Linda J.; Fryling, Mitch J.
2009-01-01
Cultural events are of interest to scientists working in many scientific domains. Given this, an interdisciplinary science of culture may provide a more thorough understanding of cultural phenomena. However, interdisciplinary sciences depend upon the validity and vitality of the participating disciplines. This article reviews the nature of…
NASA Technical Reports Server (NTRS)
Fargion, Giulietta S.; McClain, Charles R.
2002-01-01
The purpose of this technical report is to provide current documentation of the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project activities, NASA Research Announcement (NRA) research status, satellite data processing, data product validation, and field calibration. This documentation is necessary to ensure that critical information is relayed to the scientific community and NASA management. This critical information includes the technical difficulties and challenges of validating and combining ocean color data from an array of independent satellite systems to form consistent and accurate global bio-optical time series products. This technical report is not meant as a substitute for scientific literature. Instead, it will provide a ready and responsive vehicle for the multitude of technical reports issued by an operational project. The SIMBIOS Science Team Principal Investigators' (PIs) original contributions to this report are in chapters four and above. The purpose of these contributions is to describe the current research status of the SIMBIOS-NRA-96 funded research. The contributions are published as submitted, with the exception of minor edits to correct obvious grammatical or clerical errors.
Support Net for Frontline Providers
2016-03-01
influencing members’ continuance intentions in professional virtual communities - a longitudinal study. Journal of Information Science, 33(4), 451-467...of law, no person shall be subject to any penalty for failing to comply with a collection of information if it does not display a currently valid OMB...from a scientific and theoretically based manner. Results from this project provide critical prevalence information, theoretical development, and
Chinsembu, Kazhila C
2009-01-01
Many people with Human Immunodeficiency Virus/Acquired Immunodeficiency Syndrome (HIV/AIDS) in Namibia have access to antiretroviral drugs but some still use traditional medicines to treat opportunistic infections and offset side-effects from antiretroviral medication. Namibia has a rich biodiversity of indigenous plants that could contain novel anti-HIV agents. However, such medicinal plants have not been identified and properly documented. Various ethnomedicines used to treat HIV/AIDS opportunistic infections have not been scientifically validated for safety and efficacy. These limitations are mostly attributable to the lack of collaboration between biomedical scientists and traditional healers. This paper presents a five-step contextual model for initiating collaboration with Namibian traditional healers in order that candidate plants that may contain novel anti-HIV agents are identified, and traditional medicines used to treat HIV/AIDS opportunistic infections are subjected to scientific validation. The model includes key structures and processes used to initiate collaboration with traditional healers in Namibia; namely, the National Biosciences Forum, a steering committee with the University of Namibia (UNAM) as the focal point, a study tour to Zambia and South Africa where other collaborative frameworks were examined, commemorations of the African Traditional Medicine Day (ATMD), and consultations with stakeholders in north-eastern Namibia. Experiences from these structures and processes are discussed. All traditional healers in north-eastern Namibia were willing to collaborate with UNAM in order that their traditional medicines could be subjected to scientific validation. The current study provides a framework for future collaboration with traditional healers and the selection of candidate anti-HIV medicinal plants and ethnomedicines for scientific testing in Namibia. PMID:19852791
Evolving herbal formulations in management of dengue fever.
Singh, Pawan Kumar; Rawat, Pooja
Dengue is endemic in more than 100 countries, and it is estimated that over 390 million infections occur globally each year. Between 1996 and 2015, the number of dengue cases reported in India increased by more than 500 per cent. To date, there is no specific, globally accepted treatment for dengue fever in any system of medicine. Dengue does not cause very high mortality if properly handled and is currently managed by clinicians through various adjuvant and alternative therapeutic options. Various plant-based preparations have been used in different parts of India to combat dengue and are simultaneously being scientifically validated by researchers. However, the number of such scientific validation studies on phytomedicines in India remains small. Of the twenty-two plants reported against dengue, only four have been studied scientifically. Azadirachta indica, Carica papaya, Hippophae rhamnoides and Cissampelos pareira extracts were found effective, demonstrating improvement in clinical symptoms and a direct inhibitory effect on dengue virus. A clinical trial of C. papaya showed an increase in platelet count and faster recovery. These plants may be explored further as probable candidates for drug discovery against dengue. There is a need to identify more such herbal formulations in local practice, document them properly, and validate them scientifically to confirm efficacy, mechanism of action and safety before use. The herbal formulations already used by communities are low-hanging fruit that may provide alternative or adjuvant therapy if proper validation, value addition and product development steps are followed. This paper reviews the recent status of dengue cases and deaths and the evolving curative herbal solutions adopted and reported from India to combat the disease. Copyright © 2017 Transdisciplinary University, Bangalore and World Ayurveda Foundation. Published by Elsevier B.V. All rights reserved.
Verhagen, H; Aruoma, O I; van Delft, J H M; Dragsted, L O; Ferguson, L R; Knasmüller, S; Pool-Zobel, B L; Poulsen, H E; Williamson, G; Yannai, S
2003-05-01
There is increasing evidence that chemicals/test substances can not only have adverse effects, but that many substances can (also) have a beneficial effect on health. As this journal regularly publishes papers in this area and has every intention of continuing to do so in the near future, it has become essential that studies reported in this journal reflect an adequate level of scientific scrutiny. Therefore a set of essential characteristics of studies has been defined. These basic requirements are default properties rather than non-negotiables: deviations are possible and useful, provided they can be justified on scientific grounds. The 10 basic requirements for a scientific paper reporting antioxidant, antimutagenic or anticarcinogenic potential of test substances in in vitro experiments and animal studies in vivo concern the following areas: (1) Hypothesis-driven study design; (2) The nature of the test substance; (3) Valid and invalid test systems; (4) The selection of dose levels and gender; (5) Reversal of the effects induced by oxidants, carcinogens and mutagens; (6) Route of administration; (7) Number and validity of test variables; (8) Repeatability and reproducibility; (9) Statistics; and (10) Quality Assurance.
Providing a Science Base for the Evaluation of Tobacco Products
Berman, Micah L.; Connolly, Greg; Cummings, K. Michael; Djordjevic, Mirjana V.; Hatsukami, Dorothy K.; Henningfield, Jack E.; Myers, Matthew; O'Connor, Richard J.; Parascandola, Mark; Rees, Vaughan; Rice, Jerry M.
2015-01-01
Objective Evidence-based tobacco regulation requires a comprehensive scientific framework to guide the evaluation of new tobacco products and health-related claims made by product manufacturers. Methods The Tobacco Product Assessment Consortium (TobPRAC) employed an iterative process involving consortia investigators, consultants, a workshop of independent scientists and public health experts, and written reviews in order to develop a conceptual framework for evaluating tobacco products. Results The consortium developed a four-phased framework for the scientific evaluation of tobacco products. The four phases addressed by the framework are: (1) pre-market evaluation, (2) pre-claims evaluation, (3) post-market activities, and (4) monitoring and re-evaluation. For each phase, the framework proposes the use of validated testing procedures that will evaluate potential harms at both the individual and population level. Conclusions While the validation of methods for evaluating tobacco products is an ongoing and necessary process, the proposed framework need not wait for fully validated methods to be used in guiding tobacco product regulation today. PMID:26665160
Mejia, Christian R; Valladares-Garrido, Mario J; Miñan-Tapia, Armando; Serrano, Felipe T; Tobler-Gómez, Liz E; Pereda-Castro, William; Mendoza-Flores, Cynthia R; Mundaca-Manay, Maria Y; Valladares-Garrido, Danai
2017-01-01
Sci-Hub is a useful web portal for people working in science as it provides access to millions of free scientific articles. Satisfaction and usage should be explored in the Latino student population. The objective of this study was to evaluate the use, knowledge, and perception of the scientific contribution of Sci-Hub in medical students from Latin America. A multicenter, observational, analytical study was conducted in 6632 medical students from 6 countries in Latin America. We administered a previously validated instrument, delving into knowledge, monthly average usage, satisfaction level, and perception of the scientific contributions provided by Sci-Hub. Frequencies and percentages are described, and generalized linear models were used to establish statistical associations. Only 19.2% of study participants knew of Sci-Hub and its function, while the median use was twice a month. 29.9% of Sci-Hub-aware participants claimed they always find the desired scientific information in their Sci-Hub search; 62.5% of participants affirmed that Sci-Hub contributes to scientific investigation; only 2.2% reported that Sci-Hub does not contribute to science. The majority of Latino students are not aware of Sci-Hub.
78 FR 37228 - Cooperative Agreement To Support the Western Center for Food Safety
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-20
... Davis main campus and experimental stations provide invaluable access to one of the leading food... sites for experimental trials is instrumental to FDA receiving the most current scientifically validated... facilitate industry compliance with preventive control standards. Information gleaned from this research has...
Placing Science into Its Human Context: Using Scientific Autobiography to Teach Chemistry
NASA Astrophysics Data System (ADS)
Carroll, Felix A.; Seeman, Jeffrey I.
2001-12-01
Scientific autobiography and biography can improve chemistry learning by helping students relate otherwise abstract concepts to important events in the lives of fellow human beings. In advanced courses, reading scientific autobiography and biography can help students see how scientific collaboration, advances in instrumentation, and major events in human lives influence the development of chemical ideas over time. In addition, studying many years of an individual's research program can demonstrate the progress of science, the connectivity of research findings, and the validity of experimental results over many decades. This paper describes the use of an autobiography of an eminent chemist in an advanced undergraduate chemistry course. This approach not only enhances the teaching of chemical concepts, but it also provides students with expanded opportunities for cooperative and self-directed learning activities.
ERIC Educational Resources Information Center
Jurecki, Karenann; Wander, Matthew C. F.
2012-01-01
In this work, we present an approach for teaching students to evaluate scientific literature and other materials critically. We use four criteria divided into two tiers: original research, authority, objectivity, and validity. The first tier, originality and authority, assesses the quality of the source. The second tier, objectivity and validity,…
ERIC Educational Resources Information Center
Lin, Tzung-Jin; Tsai, Chin-Chung
2017-01-01
The purpose of this study was to develop and validate two survey instruments to evaluate high school students' scientific epistemic beliefs and goal orientations in learning science. The initial relationships between the sampled students' scientific epistemic beliefs and goal orientations in learning science were also investigated. A final valid…
Quantifying falsifiability of scientific theories
NASA Astrophysics Data System (ADS)
Nemenman, Ilya
I argue that the notion of falsifiability, a key concept in defining a valid scientific theory, can be quantified using Bayesian Model Selection, which is a standard tool in modern statistics. This relates falsifiability to the quantitative version of the statistical Occam's razor, and allows transforming some long-running arguments about validity of scientific theories from philosophical discussions to rigorous mathematical calculations.
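A sketch of the standard Bayesian Model Selection quantities the abstract alludes to (the notation is mine, not the author's): the evidence for a model is its likelihood marginalized over the prior, and two theories are compared via the ratio of their evidences.

    % Evidence (marginal likelihood) of model M for data D:
    \[ P(D \mid M) = \int P(D \mid \theta, M)\, P(\theta \mid M)\, d\theta \]
    % Bayes factor comparing theories M_1 and M_2:
    \[ K = \frac{P(D \mid M_1)}{P(D \mid M_2)} \]

The marginalization is the quantitative Occam's razor: a theory whose prior parameter volume vastly exceeds the region compatible with the data is penalized, so a theory flexible enough to fit anything accrues the least evidence and is, in that sense, the least falsifiable.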
Web Based Semi-automatic Scientific Validation of Models of the Corona and Inner Heliosphere
NASA Astrophysics Data System (ADS)
MacNeice, P. J.; Chulaki, A.; Taktakishvili, A.; Kuznetsova, M. M.
2013-12-01
Validation is a critical step in preparing models of the corona and inner heliosphere for future roles supporting either or both the scientific research community and the operational space weather forecasting community. Validation of forecasting quality tends to focus on a short list of key features in the model solutions, with an unchanging order of priority. Scientific validation exposes a much larger range of physical processes and features, and as the models evolve to better represent features of interest, the research community tends to shift its focus to other areas which are less well understood and modeled. Given the more comprehensive and dynamic nature of scientific validation, and the limited resources available to the community to pursue this, it is imperative that the community establish a semi-automated process which engages the model developers directly in an ongoing and evolving validation process. In this presentation we describe the ongoing design and development of a web-based facility to enable this type of validation of models of the corona and inner heliosphere, its application to the growing list of model results being generated, and the strategies we have been developing to account for model results that incorporate adaptively refined numerical grids.
Archer, Edward; Pavela, Gregory; Lavie, Carl J
2015-07-01
The Scientific Report of the 2015 Dietary Guidelines Advisory Committee was primarily informed by memory-based dietary assessment methods (M-BMs) (eg, interviews and surveys). The reliance on M-BMs to inform dietary policy continues despite decades of unequivocal evidence that M-BM data bear little relation to actual energy and nutrient consumption. Data from M-BMs are defended as valid and valuable despite no empirical support and no examination of the foundational assumptions regarding the validity of human memory and retrospective recall in dietary assessment. We assert that uncritical faith in the validity and value of M-BMs has wasted substantial resources and constitutes the greatest impediment to scientific progress in obesity and nutrition research. Herein, we present evidence that M-BMs are fundamentally and fatally flawed owing to well-established scientific facts and analytic truths. First, the assumption that human memory can provide accurate or precise reproductions of past ingestive behavior is indisputably false. Second, M-BMs require participants to submit to protocols that mimic procedures known to induce false recall. Third, the subjective (ie, not publicly accessible) mental phenomena (ie, memories) from which M-BM data are derived cannot be independently observed, quantified, or falsified; as such, these data are pseudoscientific and inadmissible in scientific research. Fourth, the failure to objectively measure physical activity in analyses renders inferences regarding diet-health relationships equivocal. Given the overwhelming evidence in support of our position, we conclude that M-BM data cannot be used to inform national dietary guidelines and that the continued funding of M-BMs constitutes an unscientific and major misuse of research resources. Copyright © 2015 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
Evaluation of Scientific Journal Validity, It's Articles and Their Authors.
Masic, Izet; Begic, Edin
2016-01-01
The science that deals with the evaluation of scientific articles by finding quantitative indicators (indexes) of scientific research success is called scientometrics. Scientometrics is part of scientology (the science of science) that analyzes scientific papers and their citations in a selected sample of scientific journals. There are four indexes by which it is possible to measure the validity of scientific research: the number of articles, the impact factor of the journal, the number and order of authors, and the number of citations. Every scientific article is a record of data written according to rules recommended by several scientific associations and committees. The growing number of authors, including many authors sharing the same name and surname, led to the introduction of a necessary identification agent: the ORCID number.
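Of the four indexes, the journal impact factor is the most mechanical to compute; a minimal sketch of the classic two-year definition (the counts below are invented):

    def impact_factor(citations_in_year, citable_items_prev_two_years):
        """Two-year journal impact factor: citations received this year to
        items the journal published in the previous two years, divided by
        the number of citable items published in those two years."""
        return citations_in_year / citable_items_prev_two_years

    # Hypothetical journal: 240 citations in 2016 to its 2014-2015 papers,
    # of which there were 120 citable items.
    print(impact_factor(240, 120))  # 2.0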
NASA Astrophysics Data System (ADS)
Svedholm, Annika M.; Lindeman, Marjaana
2013-03-01
Lay conceptions of energy often conflict with scientific knowledge, hinder science learning and scientific literacy, and provide a basis for ungrounded beliefs. In a sample of Finnish upper secondary school students, energy was attributed with features of living and animate beings and thought of as a mental property. These ontologically confused conceptions (OCC) were associated with trust in complementary and alternative medicine (CAM), and independent of scientifically valid conceptions. Substance-based energy conceptions followed the correlational pattern of OCC, rather than scientific conceptions. OCC and CAM decreased both during the regular school physics curriculum and after a lesson targeted at the ontological confusions. OCC and CAM were slightly less common among students with high actively open-minded thinking, low trust in intuition and high need for cognition. The findings are discussed in relation to the goals of scientific education.
NASA Astrophysics Data System (ADS)
Foglini, F.
2016-12-01
The EVER-EST project aims to develop a generic Virtual Research Environment (VRE) tailored to the needs of, and validated by, the Earth Science domain. To achieve this the EVER-EST VRE provides earth scientists with the means to seamlessly manage both the data involved in their computationally intensive disciplines and the scientific methods applied in their observations and modellings, which lead to the specific results that need to be attributable, validated and shared within the community, e.g. in the form of scholarly communications. Central to this approach is the concept of Research Objects (ROs) as semantically rich aggregations of resources that bring together data, methods and people in scientific investigations. ROs enable the creation of digital artifacts that can encapsulate scientific knowledge and provide a mechanism for sharing and discovering reusable research and scientific assets as first-class citizens. The EVER-EST VRE is the first RO-centric native infrastructure leveraging the notion of ROs and their application in observational rather than experimental disciplines, particularly in Earth Science. The Institute of MARine Science (ISMAR-CNR) is a scientific partner of the EVER-EST project providing useful and applicable contributions to the identification and definition of variables indicated by the European Commission in the Marine Strategy Framework Directive (MSFD) to achieve Good Environmental Status (GES). The VRC is willing to deliver practical methods, procedures and protocols to support coherent and widely accepted interpretation of the MSFD. The use cases deal with (1) the Posidonia meadows along the Apulian coast, (2) the deep-sea corals along the Apulian continental slope, and (3) jellyfish abundance in Italian waters. The SeaMonitoring VRC created specific ROs for assessing deep-sea coral suitability and Posidonia meadow occurrences, and for detecting jellyfish density along the Italian coast. The VRC also developed a specific RO for bathymetric data, implementing a data preservation plan and a specific vocabulary for metadata.
NASA Technical Reports Server (NTRS)
Chien, Steve; Kandt, R. Kirk; Roden, Joseph; Burleigh, Scott; King, Todd; Joy, Steve
1992-01-01
Scientific data preparation is the process of extracting usable scientific data from raw instrument data. This task involves noise detection (and subsequent noise classification and flagging or removal), extracting data from compressed forms, and constructing derivative or aggregate data (e.g. spectral densities or running averages). A software system called PIPE provides intelligent assistance to users developing scientific data preparation plans using a programming language called Master Plumber. PIPE provides this assistance capability by using a process description to create a dependency model of the scientific data preparation plan. This dependency model can then be used to verify syntactic and semantic constraints on processing steps to perform limited plan validation. PIPE also provides capabilities for using this model to assist in debugging faulty data preparation plans. In this case, the process model is used to focus the developer's attention upon those processing steps and data elements that were used in computing the faulty output values. Finally, the dependency model of a plan can be used to perform plan optimization and runtime estimation. These capabilities allow scientists to spend less time developing data preparation procedures and more time on scientific analysis tasks. Because the scientific data processing modules (called fittings) evolve to match scientists' needs, issues regarding maintainability are of prime importance in PIPE. This paper describes the PIPE system and explains how maintainability concerns affected the knowledge representation used in PIPE to capture knowledge about the behavior of fittings.
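A toy sketch of the dependency-model idea described above (plain Python, not PIPE or Master Plumber; the step names are invented): each step records its inputs, so a faulty output can be traced back to exactly the steps and data used to compute it.

    # Each processing step lists the steps/data it consumes.
    steps = {
        "raw":      [],                      # instrument data as delivered
        "denoised": ["raw"],                 # noise detection and flagging
        "spectra":  ["denoised"],            # spectral densities
        "averages": ["denoised"],            # running averages
        "report":   ["spectra", "averages"],
    }

    def upstream(step, graph):
        """All steps/data a given output depends on, transitively."""
        seen, stack = set(), list(graph[step])
        while stack:
            s = stack.pop()
            if s not in seen:
                seen.add(s)
                stack.extend(graph[s])
        return seen

    # If 'report' looks wrong, focus debugging on its ancestry only:
    print(sorted(upstream("report", steps)))
    # ['averages', 'denoised', 'raw', 'spectra']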
A Primer on Observational Measurement.
Girard, Jeffrey M; Cohn, Jeffrey F
2016-08-01
Observational measurement plays an integral role in a variety of scientific endeavors within biology, psychology, sociology, education, medicine, and marketing. The current article provides an interdisciplinary primer on observational measurement; in particular, it highlights recent advances in observational methodology and the challenges that accompany such growth. First, we detail the various types of instrument that can be used to standardize measurements across observers. Second, we argue for the importance of validity in observational measurement and provide several approaches to validation based on contemporary validity theory. Third, we outline the challenges currently faced by observational researchers pertaining to measurement drift, observer reactivity, reliability analysis, and time/expense. Fourth, we describe recent advances in computer-assisted measurement, fully automated measurement, and statistical data analysis. Finally, we identify several key directions for future observational research to explore.
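As one concrete instance of the reliability analysis the article discusses, Cohen's kappa corrects two observers' raw agreement for chance agreement; a minimal sketch (the behavior codes below are invented, not the article's data):

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two observers' categorical
        codes of the same events."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
        return (observed - expected) / (1 - expected)

    a = ["smile", "frown", "smile", "neutral", "smile", "frown"]
    b = ["smile", "frown", "neutral", "neutral", "smile", "smile"]
    print(round(cohens_kappa(a, b), 2))  # about 0.48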
RE-EVALUATION OF APPLICABILITY OF AGENCY SAMPLE HOLDING TIMES
The purpose and rationale is to assess the validity of currently recognized holding times and to provide a scientific basis for changes that may be necessary to the current regulations.
While holding times may appear adequate to protect sample integrity and provi...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-22
... on rigorous scientifically based research methods to assess the effectiveness of a particular... activities and programs; and (B) Includes research that-- (i) Employs systematic, empirical methods that draw... or observational methods that provide reliable and valid data across evaluators and observers, across...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-15
... that is based on rigorous scientifically based research methods to assess the effectiveness of a...) Relies on measurements or observational methods that provide reliable and valid data across evaluators... of innovative, cohesive models that are based on research and have demonstrated that they effectively...
First-Year Teacher Knowledge of Phonemic Awareness and Its Instruction
ERIC Educational Resources Information Center
Cheesman, Elaine A.; McGuire, Joan M.; Shankweiler, Donald; Coyne, Michael
2009-01-01
Converging evidence has identified phonemic awareness (PA) as one of five essential components of beginning reading instruction. Evidence suggests that many teachers do not have the recommended knowledge or skills sufficient to provide effective PA instruction within the context of scientifically validated reading education. This study examines…
Rural Leadership Development: A Synthesis of Research
ERIC Educational Resources Information Center
Kaufman, Eric K.; Rudd, Rick D.
2006-01-01
With millions of dollars being invested in adult rural leadership development, it is essential that research be conducted to determine the effectiveness of this investment. Such research can validate the investment and provide guidance for future programming. However, an extensive review of literature in Cambridge Scientific Abstracts yielded only…
Overview of SCIAMACHY validation: 2002 2004
NASA Astrophysics Data System (ADS)
Piters, A. J. M.; Bramstedt, K.; Lambert, J.-C.; Kirchhoff, B.
2005-08-01
SCIAMACHY, on board Envisat, has now been in operation for almost three years. This UV/visible/NIR spectrometer measures the solar irradiance, the earthshine radiance scattered at nadir and from the limb, and the attenuation of solar radiation by the atmosphere during sunrise and sunset, from 240 to 2380 nm and at moderate spectral resolution. Vertical columns and profiles of a variety of atmospheric constituents are inferred from the SCIAMACHY radiometric measurements by dedicated retrieval algorithms. With the support of ESA and several international partners, a methodical SCIAMACHY validation programme has been developed jointly by Germany, the Netherlands and Belgium (the three instrument-providing countries) to face complex requirements in terms of measured species, altitude range, spatial and temporal scales, geophysical states and intended scientific applications. This summary paper describes the approach adopted to address those requirements. The actual validation of the operational SCIAMACHY processors established at DLR on behalf of ESA has been hampered by data distribution and processor problems. Since the first data releases in summer 2002, operational processors have been upgraded regularly and some data products - level-1b spectra, level-2 O3, NO2, BrO and cloud data - have improved significantly. Validation results summarised in this paper conclude that for limited periods and geographical domains they can already be used for atmospheric research. Nevertheless, remaining processor problems cause major errors that prevent scientific use in other periods and domains. Independent of the constraints of operational processing, seven scientific institutes (BIRA-IASB, IFE, IUP-Heidelberg, KNMI, MPI, SAO and SRON) have developed their own retrieval algorithms and generated SCIAMACHY data products, together addressing nearly all targeted constituents. Most of the UV-visible data products (both columns and profiles) already have acceptable, if not excellent, quality. Several near-infrared column products are still in development, but they have already demonstrated their potential for a variety of applications. In any case, scientific users are advised to read validation reports carefully before using the data. It is required and anticipated that SCIAMACHY validation will continue throughout the instrument lifetime and beyond. The actual amount of work will obviously depend on funding considerations.
Nikolian, Vahagn C; Ibrahim, Andrew M
2017-09-01
Journals fill several important roles within academic medicine, including building knowledge, validating the quality of methods, and communicating research. This section provides an overview of these roles and highlights innovative approaches journals have taken to enhance the dissemination of research. As journals move away from print formats and embrace web-based content, design-centered thinking will allow for engagement of a larger audience. Examples of recent efforts in this realm are provided, as well as simplified strategies for developing visual abstracts to improve dissemination via social media. Finally, we home in on the principles of learning and education that have driven these advances in multimedia-based communication in scientific research.
Formoso, Giulio; Rizzini, Paolo; Bassi, Maurizio; Bonfanti, Paolo; Rizzardini, Giuliano; Campomori, Annalisa; Mosconi, Paola
2016-09-01
The wide offer of information on pharmaceuticals often does not fulfill physicians' needs: problems of relevance, access, quality and applicability are widely recognized, and doctors often rely on their own experience and expert opinions rather than on available evidence. A quali-quantitative study was carried out in Italy to provide an overview of the information-seeking behavior and information needs of doctors, in particular of infectious disease specialists, and to suggest an action plan for improving the relevance, quality and usability of scientific information. We conducted a quantitative survey and three focus groups. Two hundred infectious disease specialists answered a 24-item questionnaire aimed at investigating features of the scientific information they receive and their ratings of its completeness, quality and usability. Subsequent focus groups, each involving eight specialists, investigated their opinions on information sources and materials, and their suggestions on how these could better support their information needs. The quantitative survey indicated doctors' appreciation of traditional channels (especially drug representatives) and information materials (brochures), but also their attitude toward autonomous searching for information and their wish to have more digital channels available. Focus groups provided more depth and, not surprisingly, revealed that physicians consider it critical to get complete, comparative and specific information quickly, but also that they would like to discuss their doubts with expert colleagues. Quite strikingly, limited concerns were expressed about information validity, potential biases and conflicts of interest, as scientific validity seems to be related to the perceived authoritativeness of information sources rather than to the availability of a transparent evaluation framework. Although this research investigated the views of infectious disease specialists, we believe that their opinions and perceived needs should not substantially differ from those of other clinicians, either in primary or in secondary care. In participants' view, the ideal information framework should provide quick and tailored answers through available evidence and favor the exchange of information between practitioners and trusted experts. The general consensus existing within the scientific and medical community on the need for integrating available evidence and experience is confirmed, although the issues of information validity and conflicts of interest appear to be largely overlooked.
ERIC Educational Resources Information Center
Romine, William L.; Sadler, Troy D.; Kinslow, Andrew T.
2017-01-01
We describe the development and validation of the Quantitative Assessment of Socio-scientific Reasoning (QuASSR) in a college context. The QuASSR contains 10 polytomous, two-tiered items crossed between two scenarios, and is based on theory suggesting a four-pronged structure for SSR (complexity, perspective taking, inquiry, and skepticism). In…
Scientific Reporting: Raising the Standards.
McLeroy, Kenneth R; Garney, Whitney; Mayo-Wilson, Evan; Grant, Sean
2016-10-01
This article is based on a presentation that was made at the 2014 annual meeting of the editorial board of Health Education & Behavior. The article addresses critical issues related to standards of scientific reporting in journals, including concerns about external and internal validity and reporting bias. It reviews current reporting guidelines, effects of adopting guidelines, and offers suggestions for improving reporting. The evidence about the effects of guideline adoption and implementation is briefly reviewed. Recommendations for adoption and implementation of appropriate guidelines, including considerations for journals, are provided. © 2016 Society for Public Health Education.
The Scientific Status of Projective Techniques.
Lilienfeld, S O; Wood, J M; Garb, H N
2000-11-01
Although projective techniques continue to be widely used in clinical and forensic settings, their scientific status remains highly controversial. In this monograph, we review the current state of the literature concerning the psychometric properties (norms, reliability, validity, incremental validity, treatment utility) of three major projective instruments: Rorschach Inkblot Test, Thematic Apperception Test (TAT), and human figure drawings. We conclude that there is empirical support for the validity of a small number of indexes derived from the Rorschach and TAT. However, the substantial majority of Rorschach and TAT indexes are not empirically supported. The validity evidence for human figure drawings is even more limited. With a few exceptions, projective indexes have not consistently demonstrated incremental validity above and beyond other psychometric data. In addition, we summarize the results of a new meta-analysis intended to examine the capacity of these three instruments to detect child sexual abuse. Although some projective instruments were better than chance at detecting child sexual abuse, there were virtually no replicated findings across independent investigative teams. This meta-analysis also provides the first clear evidence of substantial file drawer effects in the projectives literature, as the effect sizes from published studies markedly exceeded those from unpublished studies. We conclude with recommendations regarding the (a) construction of projective techniques with adequate validity, (b) forensic and clinical use of projective techniques, and (c) education and training of future psychologists regarding projective techniques. © 2000 Association for Psychological Science.
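The file drawer effect noted above is often quantified with Rosenthal's fail-safe N, the number of unpublished null-result studies needed to erase a combined result; a minimal sketch of that generic formula (the Z-scores are invented, and this is not the monograph's own analysis):

    def failsafe_n(z_scores, z_alpha=1.645):
        """Rosenthal's fail-safe N: how many unpublished null studies would
        be needed to drag a combined effect below significance."""
        return sum(z_scores) ** 2 / z_alpha ** 2 - len(z_scores)

    # Hypothetical Z-scores from 5 published studies.
    print(round(failsafe_n([2.1, 1.8, 2.5, 1.2, 2.0]), 1))
    # about 29 hidden null studies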
Mochizuki, Ayumi; Ieki, Katsunori; Kamimori, Hiroshi; Nagao, Akemi; Nakai, Keiko; Nakayama, Akira; Nanba, Eitaro
2018-04-01
The guidance and several guidelines on bioanalytical method validation, which were issued by the US FDA, EMA and Ministry of Health, Labour and Welfare, list the 'full' validation parameters; however, none of these provide any details for 'partial' validation. Japan Bioanalysis Forum approved a total of three annual discussion groups from 2012 to 2014. In the discussion groups, members from pharmaceutical companies and contract research organizations discussed the details of partial validation from a risk assessment viewpoint based on surveys focusing on bioanalysis of small molecules using LC-MS/MS in Japan. This manuscript presents perspectives and recommendations for most conceivable changes that can be made to full and partial validations by members of the discussion groups based on their experiences and discussions at the Japan Bioanalysis Forum Symposium.
Design and validation of general biology learning program based on scientific inquiry skills
NASA Astrophysics Data System (ADS)
Cahyani, R.; Mardiana, D.; Noviantoro, N.
2018-03-01
Scientific inquiry is highly recommended for teaching science. The reality in schools and colleges is that many educators still have not implemented inquiry learning because of their lack of understanding. This study aims to (1) analyze students' difficulties in learning General Biology, (2) design a General Biology learning program based on multimedia-assisted scientific inquiry learning, and (3) validate the proposed design. The method used was Research and Development. The subjects of the study were 27 pre-service students of general elementary schools/Islamic elementary schools. The workflow of the program design includes identifying learning difficulties in General Biology, designing course programs, and designing instruments and assessment rubrics. The program design covers four lecture sessions. Validation of all learning tools was performed by expert judges. The results showed that: (1) several problems were identified in General Biology lectures; (2) the designed products include learning programs, multimedia characteristics, worksheet characteristics, and scientific attitudes; and (3) expert validation shows that all program designs are valid and can be used with minor revisions.
Driver Education Curriculum Guide. Alcohol and Other Drugs.
ERIC Educational Resources Information Center
Ohio State Dept. of Education, Columbus.
Designed to provide instructors and students with reliable and scientifically validated information about alcohol and other drugs, this curriculum guide presents lessons in six major areas: (1) drugs and traffic safety; (2) alcohol: what it is and how it works; (3) alcohol: use, abuse, and moderation; (4) drugs other than alcohol: types, uses, and…
The development of alternative methods for toxicity testing is driven by the need for scientifically valid data that can be obtained in a rapid and cost-efficient manner. In vitro systems provide a model in which chemical effects on cellular events can be examined using technique...
Users guide for ERB 7 MAT (including the first year quality control)
NASA Technical Reports Server (NTRS)
Groveman, B.
1984-01-01
The first section of this report provides background information for the use of the ERB-7 Master Archival Tapes (MAT). The second section gives details regarding the scientific validity and quality of the MAT. The MAT data analyzed cover the period from November 16, 1978 to October 31, 1979.
SIMBIOS Project; 2003 Annual Report
NASA Technical Reports Server (NTRS)
McClain, Charles R.; Fargion, Giulietta S.
2003-01-01
The purpose of this technical report is to provide current documentation of the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project activities, NASA Research Announcement (NRA) research status, satellite data processing, data product validation, and field calibration. This documentation is necessary to ensure that critical information is relayed to the scientific community and NASA management. This critical information includes the technical difficulties and challenges of validating and combining ocean color data from an array of independent satellite systems to form consistent and accurate global bio-optical time series products. This technical report is not meant as a substitute for scientific literature. Instead, it will provide a ready and responsive vehicle for the multitude of technical reports issued by an operational project. The SIMBIOS Science Team Principal Investigators' (PIs) original contributions to this report are in chapters four and above. The purpose of these contributions is to describe the current research status of the SIMBIOS-NRA-99 funded research. The contributions are published as submitted, with the exception of minor edits to correct obvious grammatical or clerical errors.
Sun, Xiaoling; Kaur, Jasleen; Milojević, Staša; Flammini, Alessandro; Menczer, Filippo
2013-01-01
The birth and decline of disciplines are critical to science and society. How do scientific disciplines emerge? No quantitative model to date allows us to validate competing theories on the different roles of endogenous processes, such as social collaborations, and exogenous events, such as scientific discoveries. Here we propose an agent-based model in which the evolution of disciplines is guided mainly by social interactions among agents representing scientists. Disciplines emerge from splitting and merging of social communities in a collaboration network. We find that this social model can account for a number of stylized facts about the relationships between disciplines, scholars, and publications. These results provide strong quantitative support for the key role of social interactions in shaping the dynamics of science. While several "science of science" theories exist, this is the first account for the emergence of disciplines that is validated on the basis of empirical data.
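A toy reconstruction of the core mechanism as described in the abstract (this is my own minimal sketch, not the authors' model): communities of agents exchange members through collaboration, merge when their memberships overlap heavily, and split when they grow too large.

    import random

    random.seed(7)

    # Four seed fields of eight agents each; agents are just integer ids.
    fields = [set(range(i * 8, i * 8 + 8)) for i in range(4)]

    def evolve(fields, rounds=200, split_size=20):
        for _ in range(rounds):
            # Collaboration: an agent from one field starts working in another.
            if len(fields) > 1:
                a, b = random.sample(range(len(fields)), 2)
                fields[b].add(random.choice(sorted(fields[a])))
            # Merge: two fields sharing most of their members become one.
            for i in range(len(fields)):
                for j in range(i + 1, len(fields)):
                    overlap = len(fields[i] & fields[j])
                    if overlap > min(len(fields[i]), len(fields[j])) / 2:
                        fields[i] |= fields[j]
                        del fields[j]
                        break
                else:
                    continue
                break
            # Split: a field that has grown too large divides in two.
            for i, f in enumerate(fields):
                if len(f) > split_size:
                    members = sorted(f)
                    random.shuffle(members)
                    half = len(members) // 2
                    fields[i] = set(members[:half])
                    fields.append(set(members[half:]))
                    break
        return fields

    print([len(f) for f in evolve(fields)])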
A standardized set of 3-D objects for virtual reality research and applications.
Peeters, David
2018-06-01
The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.
Bautista, Ami C; Zhou, Lei; Jawa, Vibha
2013-10-01
Immunogenicity support during nonclinical biotherapeutic development can be resource intensive if supported by conventional methodologies. A universal indirect species-specific immunoassay can eliminate the need for biotherapeutic-specific anti-drug antibody immunoassays without compromising quality. By implementing the R's of sustainability (reduce, reuse, rethink), conservation of resources and greener laboratory practices were achieved in this study. Statistical analysis across four biotherapeutics supported identification of consistent product performance standards (cut points, sensitivity and reference limits) and a streamlined universal anti-drug antibody immunoassay method implementation strategy. We propose an efficient, fit-for-purpose, scientifically and statistically supported nonclinical immunogenicity assessment strategy. Utilization of a universal method and streamlined validation, while retaining comparability to conventional immunoassays and meeting the industry recommended standards, provides environmental credits in the scientific laboratory. Collectively, individual reductions in critical material consumption, energy usage, waste and non-environment friendly consumables, such as plastic and paper, support a greener laboratory environment.
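One of the performance standards mentioned above, the screening cut point, is conventionally derived from drug-naive (negative control) signals so that roughly 5% of true negatives screen positive. A hedged sketch of that textbook convention (the signals are invented, and this is not the authors' validated procedure):

    import statistics

    # Hypothetical optical-density readings from drug-naive samples.
    negative_controls = [0.081, 0.095, 0.102, 0.088, 0.110, 0.092, 0.099, 0.085]
    mean = statistics.mean(negative_controls)
    sd = statistics.stdev(negative_controls)
    # Normal-theory 95th percentile, targeting a ~5% false-positive rate.
    cut_point = mean + 1.645 * sd
    print(round(cut_point, 3))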
NASA Astrophysics Data System (ADS)
Goff, Kevin David
This pilot study evaluated the validity of a new quantitative, closed-response instrument for assessing student conceptual change regarding the theory of evolution. The instrument has two distinguishing design features. First, it is designed not only to gauge student mastery of the scientific model of evolution, but also to elicit a trio of deeply intuitive tendencies that are known to compromise many students' understanding: the projection of intentional agency, teleological directionality, and immutable essences onto biological phenomena. Second, in addition to a section of conventional multiple choice questions, the instrument contains a series of items where students may simultaneously endorse both scientifically normative propositions and intuitively appealing yet unscientific propositions, without having to choose between them. These features allow for the hypothesized possibility that the three intuitions are partly innate, themselves products of cognitive evolution in our hominin ancestors, and thus may continue to inform students' thinking even after instruction and conceptual change. The test was piloted with 340 high school students from diverse schools and communities. Confirmatory factor analysis and other statistical methods provided evidence that the instrument already has strong potential for validly distinguishing students who hold a correct scientific understanding from those who do not, but that revision and retesting are needed to render it valid for gauging students' adherence to intuitive misconceptions. Ultimately the instrument holds promise as a tool for classroom intervention studies by conceptual change researchers, for diagnostic testing and data gathering by instructional leaders, and for provoking classroom dialogue and debate by science teachers.
High-throughput neuroimaging-genetics computational infrastructure
Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D.; Franco, Joseph; Toga, Arthur W.
2014-01-01
Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations, which are not readily visible by human exploration of the raw dataset. Result interpretation includes scientific visualization, community validation of findings and reproducible findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications using this infrastructure. PMID:24795619
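A hypothetical sketch of the portable-XML workflow idea (the element, step, and tool names below are invented for illustration and are not the actual Pipeline schema): the XML object carries the execution instructions from the client to remote pipeline servers.

    import xml.etree.ElementTree as ET

    # Build a tiny three-step workflow as an XML document.
    wf = ET.Element("pipeline", name="demo_shape_analysis")
    for step, tool, needs in [
        ("skullstrip", "bet", ""),
        ("segment", "fast", "skullstrip"),
        ("stats", "shape_stats", "segment"),
    ]:
        ET.SubElement(wf, "module", id=step, tool=tool, dependsOn=needs)

    # The serialized string is what would travel to a remote server.
    print(ET.tostring(wf, encoding="unicode"))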
SIMBIOS Project 1999 Annual Report
NASA Technical Reports Server (NTRS)
McClain, Charles R.; Fargion, Giulietta S.
1999-01-01
The purpose of this technical memorandum is to provide current documentation of the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project activities, NASA Research Announcement (NRA) research status, satellite data processing, data product validation, and field calibration. This documentation is necessary to ensure that critical information is relayed to the scientific community and NASA management. This critical information includes the technical difficulties and challenges of combining ocean color data from an array of independent satellite systems to form consistent and accurate global bio-optical time series products. This technical report is not meant as a substitute for scientific literature. Instead, it will provide a ready and responsive vehicle for the multitude of technical reports issued by an operational project.
SIMBIOS Project 1998 Annual Report
NASA Technical Reports Server (NTRS)
McClain, Charles R.; Fargion, Giulietta S.
1999-01-01
The purpose of this series of technical reports is to provide current documentation of the Sensor Intercomparison and Merger for Biological and Interdisciplinary Ocean Studies (SIMBIOS) Project activities, NASA Research Announcement (NRA) research status, satellite data processing, data product validation and field calibration. This documentation is necessary to ensure that critical information is relayed to the scientific community and NASA management. This critical information includes the technical difficulties and challenges of combining ocean color data from an array of independent satellite systems to form consistent and accurate global bio-optical time series products. This technical report is not meant to substitute for scientific literature. Instead, it will provide a ready and responsive vehicle for the multitude of technical reports issued by an operational project.
Threats to desert tortoise populations: a critical review of the literature
Boarman, William I.
2002-01-01
Decisions in resource management are generally based on a combination of sociopolitical, economic, and environmental factors, and may be biased by personal values. These three components often contradict each other, resulting in controversy. Controversies can usually be reduced when solid scientific evidence is used to support or refute a decision. However, it is important to recognize that data often do little to alter antagonists' positions when differences in values are the basis of the dispute. But supporting data can make the decision more defensible, both legally and ethically, especially if the data supporting all opposing viewpoints are included in the decision-making process. Resource management decisions must be made using the best scientific information currently available. However, scientific data vary in two important measures of quality: reliability and validity. The reliability of the data is a measure of the degree to which the observations or conclusions can be repeated. The validity of the data is a measure of the degree to which the observation or conclusion reflects what actually occurs in nature. How the data are collected strongly affects the reliability and validity of the ecological conclusions that can be drawn. Research data potentially relevant to management come from different sources, and the source often provides clues to the reliability and, to a certain extent, the validity of the data. Understanding the quality of the data being used to make management decisions helps to separate the philosophical or value-based aspects of arguments from the objective ones, thus helping to clarify the decisions and judgments that need to be made. The West Mojave Plan is a multispecies, bioregional plan for the management of natural resources within a 9.4 million-acre area of the Mojave Desert in California. The plan addresses the legal requirements for the recovery of the desert tortoise (Gopherus agassizii), a threatened species, but also covers approximately 80 additional species of plants and animals assigned special status by the Bureau of Land Management, U.S. Fish and Wildlife Service, and California Department of Fish and Game. Within the planning area, 28 separate jurisdictions (counties, cities, towns, military installations, etc.) seek programmatic prescriptions that will facilitate streamlined environmental review, result in expedited authorization for development projects, and protect listed and unlisted species into the foreseeable future to avoid or minimize conflicts between proposed development and species' conservation and recovery. All of the scientific data available concerning the biology and management of these approximately 80 species and their habitats must be evaluated to develop a scientifically credible plan. This document provides an overview and evaluation of what is known about the major threats to the persistence and recovery of desert tortoise populations. I was specifically asked to evaluate the scientific veracity of the data and reports available. I summarize the data presently available, with particular focus on the West Mojave Desert, evaluate the scientific integrity of the data, and identify major gaps in the available knowledge. I do not attempt to provide in-depth details on each study or threat; for more details I encourage the reader to consult the individual papers and reports cited throughout this report (many of which are available at most university libraries and at the West Mojave Plan office in Riverside, California).
I also do not attempt to characterize or evaluate the past or present management actions, except where they have direct bearing on evaluation of threats, nor do I attempt, for the most part, to acquire, generate, or evaluate new or existing, but uninterpreted data.
The Sardinia Radio Telescope. From a technological project to a radio observatory
NASA Astrophysics Data System (ADS)
Prandoni, I.; Murgia, M.; Tarchi, A.; Burgay, M.; Castangia, P.; Egron, E.; Govoni, F.; Pellizzoni, A.; Ricci, R.; Righini, S.; Bartolini, M.; Casu, S.; Corongiu, A.; Iacolina, M. N.; Melis, A.; Nasir, F. T.; Orlati, A.; Perrodin, D.; Poppi, S.; Trois, A.; Vacca, V.; Zanichelli, A.; Bachetti, M.; Buttu, M.; Comoretto, G.; Concu, R.; Fara, A.; Gaudiomonte, F.; Loi, F.; Migoni, C.; Orfei, A.; Pilia, M.; Bolli, P.; Carretti, E.; D'Amico, N.; Guidetti, D.; Loru, S.; Massi, F.; Pisanu, T.; Porceddu, I.; Ridolfi, A.; Serra, G.; Stanghellini, C.; Tiburzi, C.; Tingay, S.; Valente, G.
2017-12-01
Context. The Sardinia Radio Telescope (SRT) is the new 64 m dish operated by the Italian National Institute for Astrophysics (INAF). Its active surface, comprised of 1008 separate aluminium panels supported by electromechanical actuators, will allow us to observe at frequencies of up to 116 GHz. At the moment, three receivers, one per focal position, have been installed and tested: a 7-beam K-band receiver, a mono-feed C-band receiver, and a coaxial dual-feed L/P band receiver. The SRT was officially opened in September 2013, upon completion of its technical commissioning phase. In this paper, we provide an overview of the main science drivers for the SRT, describe the main outcomes from the scientific commissioning of the telescope, and discuss a set of observations demonstrating the scientific capabilities of the SRT. Aims: The scientific commissioning phase, carried out in the 2012-2015 period, proceeded in stages following the implementation and/or fine-tuning of advanced subsystems such as the active surface, the derotator, new releases of the acquisition software, etc. One of the main objectives of scientific commissioning was the identification of deficiencies in the instrumentation and/or in the telescope subsystems for further optimization. As a result, the overall telescope performance has been significantly improved. Methods: As part of the scientific commissioning activities, different observing modes were tested and validated, and the first astronomical observations were carried out to demonstrate the science capabilities of the SRT. In addition, we developed astronomer-oriented software tools to support future observers on site. In the following, we refer to the overall scientific commissioning and software development activities as astronomical validation. Results: The astronomical validation activities were prioritized based on technical readiness and scientific impact. The highest priority was to make the SRT available for joint observations as part of European networks. As a result, the SRT started to participate (in shared-risk mode) in European VLBI Network (EVN) and Large European Array for Pulsars (LEAP) observing sessions in early 2014. The validation of single-dish operations for the suite of SRT first light receivers and backends continued in the following year, and was concluded with the first call for shared-risk early-science observations issued at the end of 2015. As discussed in the paper, the SRT capabilities were tested (and optimized when possible) for several different observing modes: imaging, spectroscopy, pulsar timing, and transients.
NASA Astrophysics Data System (ADS)
Anggraini, R.; Darvina, Y.; Amir, H.; Murtiani, M.; Yulkifli, Y.
2018-04-01
The availability of modules in schools is currently lacking, and learners have not used modules as a source in the learning process. The 2013 curriculum demands that learning be conducted using a scientific approach, be loaded with character values, and draw on interactive learning resources. The solution to this problem is to create an interactive module using a scientific approach charged with character values. This interactive module can be used by learners outside the classroom or in the classroom. It contains the straight motion, parabolic motion and circular motion material of high school physics class X, semester 1. The purpose of this research is to produce an interactive module with a character-charged scientific approach and to determine its validity and practicality. The research design is Research and Development (R&D). This study was conducted only up to the validity and practicality tests. The validity test was conducted by three lecturers of Physics of FMIPA UNP as experts. The instruments used in this research are a validation sheet and a practicality sheet. The data analysis technique used is product validity analysis. The object of this research is the electronic module, while the subjects are the three validators.
NASA Astrophysics Data System (ADS)
Lin, Tzung-Jin; Tsai, Chin-Chung
2017-11-01
The purpose of this study was to develop and validate two survey instruments to evaluate high school students' scientific epistemic beliefs and goal orientations in learning science. The initial relationships between the sampled students' scientific epistemic beliefs and goal orientations in learning science were also investigated. A final valid sample of 600 volunteer Taiwanese high school students participated in this survey by responding to the Scientific Epistemic Beliefs Instrument (SEBI) and the Goal Orientations in Learning Science Instrument (GOLSI). Through both exploratory and confirmatory factor analyses, the SEBI and GOLSI were proven to be valid and reliable for assessing the participants' scientific epistemic beliefs and goal orientations in learning science. The path analysis results indicated that, by and large, the students with more sophisticated epistemic beliefs in various dimensions such as Development of Knowledge, Justification for Knowing, and Purpose of Knowing tended to adopt both Mastery-approach and Mastery-avoidance goals. Some interesting results were also found. For example, the students tended to set a learning goal to outperform others or merely demonstrate competence (Performance-approach) if they had more informed epistemic beliefs in the dimensions of Multiplicity of Knowledge, Uncertainty of Knowledge, and Purpose of Knowing.
Misuses of biology in the context of the paranormal.
Hewitt, G C
1988-04-15
Public suspicion of science stems from science's challenging of perceptions and myths about reality, and from a public fear of new technology. The result is a susceptibility to pseudoscience. In claiming that creation 'science' is as valid as evolution, the creationists misquote scientists and seek to spread their own 'scientific' myths concerning a young age for the earth, an act of creation based on a particular literalist interpretation of the Christian Bible, and a single worldwide flood. They use methods of debate and politics, rather than scientific research. A selection of their arguments is examined and the nature of the evidence for evolution is discussed. Problems with the creation 'science' model are noted. In the myth of the hundredth monkey phenomenon, original research is misquoted to denigrate scientific research and support sentimental ideas of paranormal events. The misuse of science is seen as damaging to society because it reduces the effective gathering and application of scientific information. However, pseudoscience provides a valuable guide to gaps in public scientific education.
Chang, Jasper O; Levy, Susan S; Seay, Seth W; Goble, Daniel J
2014-05-01
Recent guidelines advocate that sports medicine professionals use balance tests to assess sensorimotor status in the management of concussions. The present study sought to determine whether a low-cost balance board could provide a valid, reliable, and objective means of performing this balance testing. Design: criterion validity testing relative to a gold standard and 7-day test-retest reliability. Setting: university biomechanics laboratory. Participants: thirty healthy young adults. Balance ability was assessed on 2 days separated by 1 week using (1) a gold standard measure (ie, scientific grade force plate), (2) a low-cost Nintendo Wii Balance Board (WBB), and (3) the Balance Error Scoring System (BESS). Validity of the WBB center of pressure path length and BESS scores was determined relative to the force plate data. Test-retest reliability was established based on intraclass correlation coefficients. Composite scores for the WBB had excellent validity (r = 0.99) and test-retest reliability (R = 0.88). Both the validity (r = 0.10-0.52) and test-retest reliability (r = 0.61-0.78) were lower for the BESS. These findings demonstrate that a low-cost balance board can provide improved balance testing accuracy/reliability compared with the BESS. This approach provides a potentially more valid/reliable, yet affordable, means of assessing sports-related concussion compared with current methods.
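The two statistics reported above are straightforward to compute; the sketch below, on synthetic data standing in for the force-plate and balance-board measurements, estimates criterion validity as a Pearson correlation and test-retest reliability as a one-way intraclass correlation. The abstract does not state which ICC form was used, so ICC(1,1) here is an assumption, and all data are placeholders.

```python
import numpy as np
from scipy.stats import pearsonr

def icc_1_1(session1, session2):
    """One-way random-effects ICC(1,1) for two test sessions.
    NOTE: the ICC form used in the paper is not stated; this one is an assumption."""
    x = np.column_stack([session1, session2])
    n, k = x.shape
    grand = x.mean()
    subject_means = x.mean(axis=1)
    # Between-subjects and within-subject mean squares from one-way ANOVA
    ms_between = k * np.sum((subject_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((x - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical center-of-pressure path lengths (cm) for 30 participants
rng = np.random.default_rng(0)
force_plate = rng.normal(60, 10, 30)            # gold-standard measure
wbb_day1 = force_plate + rng.normal(0, 1, 30)   # low-cost board, day 1
wbb_day8 = force_plate + rng.normal(0, 4, 30)   # same board, one week later

r, _ = pearsonr(force_plate, wbb_day1)          # criterion validity
print(f"validity r = {r:.2f}, test-retest ICC = {icc_1_1(wbb_day1, wbb_day8):.2f}")
```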
Diabetic foot infections: recent literature and cornerstones of management.
Uçkay, Ilker; Gariani, Karim; Dubois-Ferrière, Victor; Suvà, Domizio; Lipsky, Benjamin A
2016-04-01
Diabetes mellitus has reached pandemic levels and will continue to increase worldwide. Physicians and surgeons should know how to manage one of its most prevalent complications, the diabetic foot infection (DFI), in a scientifically based and resource-sparing way. We performed a nonsystematic review of recent scientific literature to provide guidance on management of DFIs. Studies in the past couple of years provide data on which recommendations for diagnosing and treating DFI are based, especially with validated guidelines and reviews of the microbiology and selected aspects of the complex DFI problem. Recent literature provides approaches to prevention, and studies support more conservative surgical treatment. Unfortunately, there have been virtually no new therapeutic molecules, antibiotic regimens, randomized trials, or surgical techniques introduced in the recent past; we briefly discuss how this may change in the future. Recent scientific evidence on DFI strongly supports the value of multidisciplinary and some new care models, guideline-based management, and more preventive approaches, and confirms several established therapeutic concepts. In contrast, there has been almost no new substantial information regarding the optimal antibiotic or surgical management in recent literature.
Actual curriculum development practices instrument: Testing for factorial validity
NASA Astrophysics Data System (ADS)
Foi, Liew Yon; Bakar, Kamariah Abu; Hamzah, Mohd Sahandri Gani; Alwi, Nor Hayati
2014-09-01
The Actual Curriculum Development Practices Instrument (ACDP-I) was developed, and its factorial validity was tested (n = 107) using exploratory factor analysis procedures in the earlier work of [1]. Although the ACDP-I appears to be a content- and construct-valid instrument with very high internal reliability for use in Malaysia, accumulated evidence is still needed to provide a sound scientific basis for the proposed score interpretations. Therefore, the present study addresses this concern by utilising confirmatory factor analysis to further confirm the theoretical structure of the variable Actual Curriculum Development Practices (ACDP) and enrich the psychometric properties of the ACDP-I. Results of this study have practical implications for both researchers and educators whose concerns focus on teachers' classroom practices and the instrument development and validation process.
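For readers unfamiliar with the confirmatory step, a minimal sketch follows. It assumes the semopy package and its lavaan-style model syntax; the factor and item names are purely illustrative, not the actual ACDP-I structure.

```python
import pandas as pd
from semopy import Model, calc_stats

# Illustrative two-factor measurement model; factor/item names are hypothetical,
# not those of the ACDP-I.
MODEL_DESC = """
Planning =~ item1 + item2 + item3
Delivery =~ item4 + item5 + item6
"""

def confirm_structure(responses: pd.DataFrame):
    """Fit the hypothesized measurement model and return global fit indices."""
    model = Model(MODEL_DESC)
    model.fit(responses)        # responses: one column per questionnaire item
    return calc_stats(model)    # fit statistics such as CFI, TLI, RMSEA

# usage (hypothetical file): fit = confirm_structure(pd.read_csv("acdp_responses.csv"))
```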
Snodgrass, Melinda R; Chung, Moon Y; Meadan, Hedda; Halle, James W
2018-03-01
Single-case research (SCR) has been a valuable methodology in special education research. Montrose Wolf (1978), an early pioneer in single-case methodology, coined the term "social validity" to refer to the social importance of the goals selected, the acceptability of procedures employed, and the effectiveness of the outcomes produced in applied investigations. Since 1978, many contributors to SCR have included social validity as a feature of their articles and several authors have examined the prevalence and role of social validity in SCR. We systematically reviewed all SCR published in six highly-ranked special education journals from 2005 to 2016 to establish the prevalence of social validity assessments and to evaluate their scientific rigor. We found relatively low, but stable prevalence with only 28 publications addressing all three factors of the social validity construct (i.e., goals, procedures, outcomes). We conducted an in-depth analysis of the scientific rigor of these 28 publications. Social validity remains an understudied construct in SCR, and the scientific rigor of social validity assessments is often lacking. Implications and future directions are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.
Hypoderma sinense: solving a century-old enigma.
Otranto, D; Colwell, D D; Pape, T
2005-09-01
Among the species of Hypoderma (Diptera: Oestridae) that have been described and named over the last three centuries, Hypoderma sinense Pleske has been the subject of several scientific discussions. Hypoderma sinense was described by T. Pleske in 1926 on the basis of only three females collected by the Russian explorer P. K. Kozlov nearly 25 years earlier during a scientific expedition to China (1900-1901). This species was examined by the foremost oestrid authorities and synonymized with H. lineatum. Recently a unique, unidentified species of Hypoderma was observed to infect cattle and yaks in China. Molecular and morphological observations confirmed the unique nature of the third-stage larvae. These data initiated a debate within the scientific community regarding the proper name of this species, in particular with reference to previous taxonomic discussion on the validity of H. sinense. The present work provides a historical overview of the Russian scientific expeditions that collected the specimens and of the explorers and the entomologists who contributed to the description of H. sinense. The morphological examination of the original type material of H. sinense and the comparison with females of H. lineatum indicated that the H. sinense lectotype, deposited at the Zoological Institute of the Russian Academy of Sciences, St Petersburg, was within the range of variation of H. lineatum. Comparisons of the cox1 (688 bp) sequence obtained from the leg of a paralectotype of H. sinense with those of H. bovis (Linnaeus), H. lineatum (De Villers) and of a sixth valid species of Hypoderma identified as "H. sinense" available in GenBank revealed differences of 9.7%, 7.2% and 0.3%, respectively. On the basis of these results, we concluded that the nominal species H. sinense should be treated as valid.
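The percentage differences quoted for the cox1 fragment are uncorrected pairwise distances; a minimal sketch of that computation is below. The sequences are placeholders, and a real comparison would of course use the aligned 688-bp GenBank sequences.

```python
def p_distance(seq_a: str, seq_b: str) -> float:
    """Uncorrected pairwise distance: fraction of differing sites,
    ignoring alignment positions where either sequence has a gap."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    compared = [(a, b) for a, b in zip(seq_a, seq_b)
                if a != '-' and b != '-']
    diffs = sum(1 for a, b in compared if a != b)
    return diffs / len(compared)

# Placeholder fragments; the study compared aligned 688-bp cox1 sequences.
print(f"{100 * p_distance('ATGCTTACGA', 'ATGCTAACGT'):.1f}% difference")  # 20.0%
```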
Learning from Science and Sport - How we, Safety, "Engage with Rigor"
NASA Astrophysics Data System (ADS)
Herd, A.
2012-01-01
As the world of spaceflight safety is relatively small and potentially inward-looking, we need to be aware of the "outside world". We should then try to remind ourselves to be open to the possibility that data, knowledge or experience from outside of the spaceflight community may provide some constructive alternate perspectives. This paper will assess aspects from two seemingly tangential fields, science and sport, and align these with the world of safety. In doing so, some useful insights will be given into the challenges we face, which may suggest solutions relevant to our everyday work of safety engineering. Sport, particularly a contact sport such as rugby union, requires direct interaction between members of two opposing teams: professional, accurately timed and positioned interaction for a desired outcome. These interactions, whilst an essential part of the game, are not without their constraints. The rugby scrum has constraints as to the formation and engagement of the two teams. The controlled engagement provides for an interaction between the two teams in a safe manner. The constraints arise from the reality that an incorrect engagement could cause serious injury to members of either team. In academia, scientific rigor is applied to assure that the arguments provided and the conclusions drawn in academic papers presented for publication are valid, legitimate and credible. The scientific need for rigor may be expressed in the example of achieving a statistically relevant sample size, n, in order to assure analysis validity of the data pool. A failure to apply rigor could then place the entire study at risk of failing to have the respective paper published. This paper will consider the merits of these two different aspects, scientific rigor and sports engagement, and offer a reflective look at how this may provide a "modus operandi" for safety engineers at any level, whether at their desks (creating or reviewing safety assessments) or in a safety review meeting (providing a verbal critique of the presented safety case).
Third Molars on the Internet: A Guide for Assessing Information Quality and Readability.
Hanna, Kamal; Brennan, David; Sambrook, Paul; Armfield, Jason
2015-10-06
Directing patients suffering from third molars (TMs) problems to high-quality online information is not only medically important, but could also enable better engagement in shared decision making. This study aimed to develop a scale that measures the scientific information quality (SIQ) of online information concerning wisdom tooth problems and to conduct a quality evaluation of online TMs resources. In addition, the study evaluated whether a specific piece of readability software (Readability Studio Professional 2012) might be reliable in measuring information comprehension, and explored predictors for the SIQ Scale. A cross-sectional sample of websites was retrieved using certain keywords and phrases such as "impacted wisdom tooth problems" in 3 popular search engines. The retrieved websites (n=150) were filtered. The retained 50 websites were evaluated to assess their characteristics, usability, accessibility, trust, readability, SIQ, and their credibility using DISCERN and Health on the Net Code (HoNCode). Websites' mean scale scores varied significantly across website affiliation groups such as governmental, commercial, and treatment provider bodies. The SIQ Scale had good internal consistency (alpha=.85) and was significantly correlated with DISCERN (r=.82, P<.01) and HoNCode (r=.38, P<.01). Less than 25% of websites had SIQ scores above 75%. The mean readability grade (10.3, SD 1.9) was above the recommended level, and was significantly correlated with the Scientific Information Comprehension Scale (r=.45, P<.01), which provides evidence for convergent validity. Website affiliation and DISCERN were significantly associated with SIQ (P<.01) and explained 76% of the SIQ variance. The developed SIQ Scale was found to demonstrate reliability and initial validity. Website affiliation, DISCERN, and HoNCode were significant predictors of the quality of scientific information. The Readability Studio software estimates were associated with scientific information comprehensiveness measures.
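Readability grades like the mean of 10.3 reported above can be approximated with open tooling; the sketch below uses the textstat package's Flesch-Kincaid grade as a stand-in for the commercial Readability Studio software used in the study (the sample text is invented).

```python
import textstat  # open-source stand-in for Readability Studio

def mean_readability_grade(pages: list[str]) -> float:
    """Average U.S. school-grade reading level across a set of web-page texts."""
    grades = [textstat.flesch_kincaid_grade(text) for text in pages]
    return sum(grades) / len(grades)

sample = ("Impacted wisdom teeth may cause pain, swelling and infection. "
          "Your dentist can advise whether removal is the best option.")
print(textstat.flesch_kincaid_grade(sample))  # grade level for one passage
```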
MEDES clinical research facility as a tool to prepare ISSA space flights
NASA Astrophysics Data System (ADS)
Maillet, A.; Traon, A. Pavy-Le
This new multi-disciplinary medical experimentation center provides the ideal scientific, medical and technical environment required for research programs and for the preparation of International Space Station Alpha (ISSA) missions, where space and healthcare industries can share their expertise. Different models are available to simulate space flight effects (bed rest, confinement, …). This is of particular interest for research in human psychology, physiology, physiopathology and ergonomics, the validation of biomedical materials and procedures, and the testing of drugs and other healthcare-related products. This clinical research facility (CRF) provides valuable services in various fields of human research requiring healthy volunteers. The CRF is widely accessible to national and international scientific, medical and industrial organisations. Furthermore, users have at their disposal the multi-disciplinary skills of MEDES staff and all MEDES partners on a single site.
EMDataBank unified data resource for 3DEM.
Lawson, Catherine L; Patwardhan, Ardan; Baker, Matthew L; Hryc, Corey; Garcia, Eduardo Sanz; Hudson, Brian P; Lagerstedt, Ingvar; Ludtke, Steven J; Pintilie, Grigore; Sala, Raul; Westbrook, John D; Berman, Helen M; Kleywegt, Gerard J; Chiu, Wah
2016-01-04
Three-dimensional Electron Microscopy (3DEM) has become a key experimental method in structural biology for a broad spectrum of biological specimens from molecules to cells. The EMDataBank project provides a unified portal for deposition, retrieval and analysis of 3DEM density maps, atomic models and associated metadata (emdatabank.org). We provide here an overview of the rapidly growing 3DEM structural data archives, which include maps in EM Data Bank and map-derived models in the Protein Data Bank. In addition, we describe progress and approaches toward development of validation protocols and methods, working with the scientific community, in order to create a validation pipeline for 3DEM data. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Development and Validation of a Multimedia-Based Assessment of Scientific Inquiry Abilities
ERIC Educational Resources Information Center
Kuo, Che-Yu; Wu, Hsin-Kai; Jen, Tsung-Hau; Hsu, Ying-Shao
2015-01-01
The potential of computer-based assessments for capturing complex learning outcomes has been discussed; however, relatively little is understood about how to leverage such potential for summative and accountability purposes. The aim of this study is to develop and validate a multimedia-based assessment of scientific inquiry abilities (MASIA) to…
ERIC Educational Resources Information Center
Berzonsky, William A.; Richardson, Katherine D.
2008-01-01
Accessibility of online scientific literature continues to expand due to the advent of scholarly databases and search engines. Studies have shown that undergraduates favor using online scientific literature to address research questions, but they often do not have the skills to assess the validity of research articles. Undergraduates generally are…
The future of yogurt: scientific and regulatory needs.
German, J Bruce
2014-05-01
Lactation biology, microbial selection, and human diversity are central themes that could guide investment in scientific research, industrial innovation, and regulatory policy oversight to propel yogurt into the central role for health-promoting food products. The ability of yogurt to provide the nourishing properties of milk together with the live microorganisms from fermentation provides a unique combination of food assets. Academic research must now define the various targets on which these biological assets act to improve health and develop the metrics that can quantitatively document their benefits. The food industry must reconcile that yogurt and its microorganisms cannot be expected to provide measurable benefits for all consumers, at all doses, and at all times. A supportive regulatory oversight must demand safety and yet encourage innovations that support a value proposition for yogurt in health. Health valuation in the marketplace will be driven by parallel innovations, including accurate assessment technologies, validated microbial ingredients, and health-aware consumers.
Determining History of Victimization and Potential for Abusive Behavior in U.S. Navy Recruits
1993-04-20
child abuse, spouse abuse, and sexual/physical aggression to provide a scientific basis for a study to survey Navy recruits for their history of and...been found to be associated with abusive behavior and to ascertain the reliability, validity, and appropriateness for use of relevant instruments. Keywords: child abuse, spouse abuse, sexual aggression, sexual abuse.
ERIC Educational Resources Information Center
Donmoyer, Robert; Galloway, Fred
2010-01-01
In recent years, policy makers and researchers once again have embraced the traditional idea that quasi-experimental research designs (or reasonable facsimiles) can provide the sort of valid and generalizable knowledge about "what works" that educational researchers had promised--but never really produced--during the previous century. Although…
Midwest Structural Sciences Center, 2006-2013
2013-09-01
This report is published in the interest of scientific and...also be used for making predictions of future flights. Fig. 1.1: Development of future high ...methods were developed to provide validation-quality data for coupled high-temperature and acoustic loading environments, and to quantitatively study
Gormally, Cara; Brickman, Peggy; Lutz, Mary
2012-01-01
Life sciences faculty agree that developing scientific literacy is an integral part of undergraduate education and report that they teach these skills. However, few measures of scientific literacy are available to assess students' proficiency in using scientific literacy skills to solve scenarios in and beyond the undergraduate biology classroom. In this paper, we describe the development, validation, and testing of the Test of Scientific Literacy Skills (TOSLS) in five general education biology classes at three undergraduate institutions. The test measures skills related to major aspects of scientific literacy: recognizing and analyzing the use of methods of inquiry that lead to scientific knowledge and the ability to organize, analyze, and interpret quantitative data and scientific information. Measures of validity included correspondence between items and scientific literacy goals of the National Research Council and Project 2061, findings from a survey of biology faculty, expert biology educator reviews, student interviews, and statistical analyses. Classroom testing contexts varied both in terms of student demographics and pedagogical approaches. We propose that biology instructors can use the TOSLS to evaluate their students' proficiencies in using scientific literacy skills and to document the impacts of curricular reform on students' scientific literacy.
ERIC Educational Resources Information Center
O'Sullivan, Connie; O'Sullivan, Michael
2005-01-01
Intelligent design lacks scientific validity and has been repudiated by every leading scientific organization, including the American Association for the Advancement of Science and the National Academy of Sciences, both of which assert that design theory lacks any scientific merit and cannot be supported by scientific research. Teaching it would…
DESCQA: An Automated Validation Framework for Synthetic Sky Catalogs
NASA Astrophysics Data System (ADS)
Mao, Yao-Yuan; Kovacs, Eve; Heitmann, Katrin; Uram, Thomas D.; Benson, Andrew J.; Campbell, Duncan; Cora, Sofía A.; DeRose, Joseph; Di Matteo, Tiziana; Habib, Salman; Hearin, Andrew P.; Bryce Kalmbach, J.; Krughoff, K. Simon; Lanusse, François; Lukić, Zarija; Mandelbaum, Rachel; Newman, Jeffrey A.; Padilla, Nelson; Paillas, Enrique; Pope, Adrian; Ricker, Paul M.; Ruiz, Andrés N.; Tenneti, Ananth; Vega-Martínez, Cristian A.; Wechsler, Risa H.; Zhou, Rongpu; Zu, Ying; The LSST Dark Energy Science Collaboration
2018-02-01
The use of high-quality simulated sky catalogs is essential for the success of cosmological surveys. The catalogs have diverse applications, such as investigating signatures of fundamental physics in cosmological observables, understanding the effect of systematic uncertainties on measured signals and testing mitigation strategies for reducing these uncertainties, aiding analysis pipeline development and testing, and survey strategy optimization. The list of applications is growing with improvements in the quality of the catalogs and the details that they can provide. Given the importance of simulated catalogs, it is critical to provide rigorous validation protocols that enable both catalog providers and users to assess the quality of the catalogs in a straightforward and comprehensive way. For this purpose, we have developed the DESCQA framework for the Large Synoptic Survey Telescope Dark Energy Science Collaboration as well as for the broader community. The goal of DESCQA is to enable the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. In this paper, we present the design concept and first implementation of DESCQA. In order to establish and demonstrate its full functionality we use a set of interim catalogs and validation tests. We highlight several important aspects, both technical and scientific, that require thoughtful consideration when designing a validation framework, including validation metrics and how these metrics impose requirements on the synthetic sky catalogs.
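The central design idea above, a common interface through which heterogeneous catalogs are run against pluggable validation tests, can be sketched as follows. The class and method names here are illustrative assumptions for the sketch, not DESCQA's actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class TestResult:
    passed: bool
    score: float
    summary: str

class ValidationTest(ABC):
    """One validation criterion; subclasses encode a metric and its pass/fail rule."""
    @abstractmethod
    def run(self, catalog, output_dir: str) -> TestResult: ...

class NumberDensityTest(ValidationTest):
    """Hypothetical test: the catalog's mean galaxy number density must lie
    within a fractional tolerance of a reference value from validation data."""
    def __init__(self, reference: float, tolerance: float = 0.1):
        self.reference, self.tolerance = reference, tolerance

    def run(self, catalog, output_dir: str) -> TestResult:
        density = catalog.number_density()  # assumed catalog interface
        dev = abs(density - self.reference) / self.reference
        return TestResult(dev <= self.tolerance, dev,
                          f"fractional deviation {dev:.3f}")

def run_grid(catalogs: dict, tests: list[ValidationTest], output_dir: str) -> dict:
    """Run every test against every catalog and collect results in one table,
    mirroring the inspect-and-compare workflow the framework describes."""
    return {(cat_name, type(test).__name__): test.run(cat, output_dir)
            for cat_name, cat in catalogs.items() for test in tests}
```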
The Contingency of Laws of Nature in Science and Theology
NASA Astrophysics Data System (ADS)
Jaeger, Lydia
2010-10-01
The belief that laws of nature are contingent played an important role in the emergence of the empirical method of modern physics. During the scientific revolution, this belief was based on the idea of voluntary creation. Taking up Peter Mittelstaedt’s work on laws of nature, this article explores several alternative answers which do not overtly make use of metaphysics: some laws are laws of mathematics; macroscopic laws can emerge from the interplay of numerous subsystems without any specific microscopic nomic structures (John Wheeler’s “law without law”); laws are the preconditions of scientific experience (Kant); laws are theoretical abstractions which only apply in very limited circumstances (Nancy Cartwright). Whereas Cartwright’s approach is in tension with modern scientific methodology, the first three strategies count as illuminating, though partial answers. It is important for the empirical method of modern physics that these three strategies, even when taken together, do not provide a complete explanation of the order of nature. Thus the question of why laws are valid is still relevant. In the concluding section, I argue that the traditional answer, based on voluntary creation, provides the right balance of contingency and coherence which is in harmony with modern scientific method.
Modern data science for analytical chemical data - A comprehensive review.
Szymańska, Ewa
2018-10-22
Efficient and reliable analysis of chemical analytical data is a great challenge due to the increase in data size, variety and velocity. New methodologies, approaches and methods are being proposed not only by chemometrics but also by other data science communities to extract relevant information from big datasets and provide their value to different applications. Besides the common goal of big data analysis, different perspectives on and terms for big data are being discussed in scientific literature and public media. The aim of this comprehensive review is to present common trends in the analysis of chemical analytical data across different data scientific fields together with their data type-specific and generic challenges. Firstly, common data science terms used in different data scientific fields are summarized and discussed. Secondly, systematic methodologies to plan and run big data analysis projects are presented together with their steps. Moreover, different analysis aspects like assessing data quality, selecting data pre-processing strategies, data visualization and model validation are considered in more detail. Finally, an overview of standard and new data analysis methods is provided and their suitability for big analytical chemical datasets briefly discussed. Copyright © 2018 Elsevier B.V. All rights reserved.
Ovadje, Pamela; Roma, Alessia; Steckle, Matthew; Nicoletti, Leah; Arnason, John Thor; Pandey, Siyaram
2015-01-01
Natural health products (NHPs) are defined as natural extracts containing polychemical mixtures; they play a leading role in the discovery and development of drugs for disease treatment. More than 50% of current cancer therapeutics are derived from natural sources. However, the efficacy of natural extracts in treating cancer has not been explored extensively. Scientific research into the validity and mechanism of action of these products is needed to develop NHPs as mainstream cancer therapy. The preclinical and clinical validation of NHPs would be essential for this development. This review summarizes some of the recent advancements in the area of NHPs with anticancer effects. This review also focuses on various NHPs that have been studied to scientifically validate their claims as anticancer agents. Furthermore, this review emphasizes the efficacy of these NHPs in targeting the multiple vulnerabilities of cancer cells for a more selective efficacious treatment. The studies reviewed here have paved the way for the introduction of more NHPs from traditional medicine to the forefront of modern medicine, in order to provide alternative, safer, and cheaper complementary treatments for cancer therapy and possibly improve the quality of life of cancer patients. PMID:25883673
Weintraub, Sandra; Dikmen, Sureyya S; Heaton, Robert K; Tulsky, David S; Zelazo, Philip David; Slotkin, Jerry; Carlozzi, Noelle E; Bauer, Patricia J; Wallner-Allen, Kathleen; Fox, Nathan; Havlik, Richard; Beaumont, Jennifer L; Mungas, Dan; Manly, Jennifer J; Moy, Claudia; Conway, Kevin; Edwards, Emmeline; Nowinski, Cindy J; Gershon, Richard
2014-07-01
This study introduces a special series on validity studies of the Cognition Battery (CB) from the U.S. National Institutes of Health Toolbox for the Assessment of Neurological and Behavioral Function (NIHTB) (Gershon, Wagster et al., 2013) in an adult sample. This first study in the series describes the sample, each of the seven instruments in the NIHTB-CB briefly, and the general approach to data analysis. Data are provided on test-retest reliability and practice effects, and raw scores (mean, standard deviation, range) are presented for each instrument and the gold standard instruments used to measure construct validity. Accompanying papers provide details on each instrument, including information about instrument development, psychometric properties, age and education effects on performance, and convergent and discriminant construct validity. One study in the series is devoted to a factor analysis of the NIHTB-CB in adults and another describes the psychometric properties of three composite scores derived from the individual measures representing fluid and crystallized abilities and their combination. The NIHTB-CB is designed to provide a brief, comprehensive, common set of measures to allow comparisons among disparate studies and to improve scientific communication.
Nestle, Frank O
2008-01-01
Psoriasis is one of the most common chronic inflammatory disorders with a strong genetic background. Recent progress in the understanding of both the immunological as well as the genetic basis has provided an unprecedented opportunity to move scientific insights from the bench to bedside. Based on insights from laboratory research, targeted immunotherapies are now available for the benefit of patients suffering from psoriasis. The success of these therapies has validated insights into disease pathogenesis and also provides the opportunity to increase our understanding about the pathways underpinning autoimmune-type inflammation in the skin.
ERIC Educational Resources Information Center
Benjamin, Thomas E.; Marks, Bryant; Demetrikopoulos, Melissa K.; Rose, Jordan; Pollard, Ethen; Thomas, Alicia; Muldrow, Lycurgus L.
2017-01-01
Although a major goal of Science, Technology, Engineering, and Mathematics (STEM) education is to develop scientific literacy, prior efforts at measuring scientific literacy have not attempted to link scientific literacy with success in STEM fields. The current Scientific Literacy Survey for College Preparedness in STEM (SLSCP-STEM) scale was…
dREL: a relational expression language for dictionary methods.
Spadaccini, Nick; Castleden, Ian R; du Boulay, Doug; Hall, Sydney R
2012-08-27
The provision of precise metadata is an important but largely underrated challenge for modern science [Nature 2009, 461, 145]. We describe here a dictionary methods language, dREL, that has been designed to enable complex data relationships to be expressed as formulaic scripts in data dictionaries written in DDLm [Spadaccini and Hall, J. Chem. Inf. Model. 2012, doi:10.1021/ci300075z]. dREL describes data relationships in a simple but powerful canonical form that is easy to read and understand and can be executed computationally to evaluate or validate data. The execution of dREL expressions is not a substitute for traditional scientific computation; it provides precise data-dependency information to domain-specific definitions and a means for cross-validating data. Some scientific fields apply conventional programming languages to methods scripts, but these tend to inhibit both dictionary development and accessibility. dREL removes the programming barrier and encourages the production of the metadata needed for seamless data archiving and exchange in science.
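The core idea, executable derivation rules attached to dictionary definitions and used to cross-validate stored values, can be illustrated with a short sketch. The rule registry and item names below are illustrative Python, not actual dREL syntax (real dREL scripts live inside DDLm dictionaries).

```python
import math

# Hypothetical registry: data name -> executable derivation rule, standing in
# for dREL methods embedded in a DDLm dictionary.
METHODS = {
    # Monoclinic cell (alpha = gamma = 90 deg): V = a * b * c * sin(beta)
    "cell.volume": lambda d: (d["cell.a"] * d["cell.b"] * d["cell.c"]
                              * math.sin(math.radians(d["cell.beta"]))),
}

def cross_validate(data: dict, rel_tol: float = 1e-3) -> list[str]:
    """Recompute every derivable item and flag stored values that disagree."""
    problems = []
    for name, rule in METHODS.items():
        if name in data:
            derived = rule(data)
            if not math.isclose(data[name], derived, rel_tol=rel_tol):
                problems.append(f"{name}: stored {data[name]}, derived {derived:.2f}")
    return problems

cell = {"cell.a": 10.0, "cell.b": 12.0, "cell.c": 8.0,
        "cell.beta": 105.0, "cell.volume": 927.2}
print(cross_validate(cell))  # empty list -> stored volume is consistent
```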
Smith, Gregory T.; McCarthy, Denis M.; Zapolski, Tamika C. B.
2010-01-01
The authors argue for a significant shift in how clinical psychology researchers conduct construct validation and theory validation tests. They argue that sound theory and validation tests can best be conducted on measures of unidimensional or homogeneous constructs. Hierarchical organizations of such constructs are useful descriptively and theoretically, but higher order composites do not refer to definable psychological processes. Application of this perspective to the approach of the Diagnostic and Statistical Manual of Mental Disorders to describing psychopathology calls into doubt the traditional use of the syndromal approach, in which single scores reflect the presence of multidimensional disorders. For many forms of psychological dysfunction, this approach does not appear optimal and may need to be discarded. The authors note that their perspective represents a straightforward application of existing psychometric theory, they demonstrate the practical value of adopting this perspective, and they provide evidence that this shift is already under way among clinical researchers. Description in terms of homogeneous dimensions provides improved validity, utility, and parsimony. In contrast, the use of composite diagnoses can retard scientific progress and hamper clinicians' efforts to understand and treat dysfunction. PMID:19719340
Observations on CFD Verification and Validation from the AIAA Drag Prediction Workshops
NASA Technical Reports Server (NTRS)
Morrison, Joseph H.; Kleb, Bil; Vassberg, John C.
2014-01-01
The authors provide observations from the AIAA Drag Prediction Workshops that have spanned over a decade and from a recent validation experiment at NASA Langley. These workshops provide an assessment of the predictive capability of forces and moments, focused on drag, for transonic transports. It is very difficult to manage the consistency of results in a workshop setting to perform verification and validation at the scientific level, but it may be sufficient to assess it at the level of practice. Observations thus far: 1) due to simplifications in the workshop test cases, wind tunnel data are not necessarily the “correct” results that CFD should match, 2) an average of core CFD data are not necessarily a better estimate of the true solution as it is merely an average of other solutions and has many coupled sources of variation, 3) outlier solutions should be investigated and understood, and 4) the DPW series does not have the systematic build up and definition on both the computational and experimental side that is required for detailed verification and validation. Several observations regarding the importance of the grid, effects of physical modeling, benefits of open forums, and guidance for validation experiments are discussed. The increased variation in results when predicting regions of flow separation and increased variation due to interaction effects, e.g., fuselage and horizontal tail, point out the need for validation data sets for these important flow phenomena. Experiences with a recent validation experiment at NASA Langley are included to provide guidance on validation experiments.
Airborne Validation of Spatial Properties Measured by the CALIPSO Lidar
NASA Technical Reports Server (NTRS)
McGill, Matthew J.; Vaughan, Mark A.; Trepte, Charles Reginald; Hart, William D.; Hlavka, Dennis L.; Winker, David M.; Keuhn, Ralph
2007-01-01
The primary payload onboard the Cloud-Aerosol Lidar Infrared Pathfinder Satellite Observations (CALIPSO) satellite is a dual-wavelength backscatter lidar designed to provide vertical profiling of clouds and aerosols. Launched in April 2006, the satellite returned its first data in June 2006. As with any new satellite measurement capability, an immediate post-launch requirement is to verify that the data being acquired are correct, lest scientific conclusions begin to be drawn based on flawed data. A standard approach to verifying satellite data is to take a similar, or validation, instrument and fly it onboard a research aircraft. Using an aircraft allows the validation instrument to get directly under the satellite so that both the satellite instrument and the aircraft instrument are sensing the same region of the atmosphere. Although there are almost always some differences in the sampling capabilities of the two instruments, it is nevertheless possible to directly compare the measurements. To validate the measurements from the CALIPSO lidar, a similar instrument, the Cloud Physics Lidar, was flown onboard the NASA high-altitude ER-2 aircraft during July-August 2006. This paper presents results to demonstrate that the CALIPSO lidar is properly calibrated and the CALIPSO Level 1 data products are correct. The importance of the results is to demonstrate to the research community that CALIPSO Level 1 data can be confidently used for scientific research.
NASA Technical Reports Server (NTRS)
Fargion, Giulietta S.; Barnes, Robert; McClain, Charles
2001-01-01
The purpose of this technical report is to provide current documentation of the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project Office activities on in situ aerosol optical thickness (i.e., protocols, and data QC and analysis). This documentation is necessary to ensure that critical information is relayed to the scientific community and NASA management. This critical information includes the technical difficulties and challenges of validating and combining ocean color data from an array of independent satellite systems to form consistent and accurate global bio-optical time series products. This technical report is not meant as a substitute for scientific literature. Instead, it will provide a ready and responsive vehicle for the multitude of technical reports issued by an operational project.
Statistical primer: how to deal with missing data in scientific research?
Papageorgiou, Grigorios; Grant, Stuart W; Takkenberg, Johanna J M; Mokhles, Mostafa M
2018-05-10
Missing data are a common challenge encountered in research which can compromise the results of statistical inference when not handled appropriately. This paper aims to introduce basic concepts of missing data to a non-statistical audience, to list and compare some of the most popular approaches for handling missing data in practice, and to provide guidelines and recommendations for dealing with and reporting missing data in scientific research. Complete case analysis and single imputation are simple approaches for handling missing data and are popular in practice; however, in most cases they are not guaranteed to provide valid inferences. Multiple imputation is a robust and general alternative which is appropriate for data missing at random, surpassing the disadvantages of the simpler approaches, but should always be conducted with care. The aforementioned approaches are illustrated and compared in an example application using Cox regression.
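A minimal sketch of the multiple-imputation workflow recommended above follows, using scikit-learn's IterativeImputer to generate m completed datasets and pooling the point estimate across imputations. For brevity the analysis model is ordinary least squares on toy data rather than the Cox regression used in the paper's example, and only the between-imputation variance is reported (the full Rubin variance also needs each fit's within-imputation variance, which plain LinearRegression does not expose).

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

def pooled_coefficient(X, y, m=20):
    """Impute X m times, fit OLS on each completed dataset, and pool the first
    coefficient as the mean across imputations (Rubin's rules point estimate)."""
    estimates = []
    for seed in range(m):
        imputer = IterativeImputer(sample_posterior=True, random_state=seed)
        X_complete = imputer.fit_transform(X)
        estimates.append(LinearRegression().fit(X_complete, y).coef_[0])
    estimates = np.asarray(estimates)
    return estimates.mean(), estimates.var(ddof=1)  # estimate, between-imputation var

# Toy data with ~20% of one covariate missing at random
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 1.5 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=200)
X[rng.random(200) < 0.2, 0] = np.nan

est, between_var = pooled_coefficient(X, y)
print(f"pooled estimate: {est:.2f} (between-imputation variance {between_var:.4f})")
```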
Self-report: psychology's four-letter word.
Haeffel, Gerald J; Howard, George S
2010-01-01
Self-report continues to be one of the most widely used measurement strategies in psychology despite longstanding concerns about its validity and scientific rigor. In this article, the merits of self-report are examined from a philosophy of science perspective. A framework is also provided for evaluating self-report measures. Specifically, four issues are presented that can be used as a decision aid when making choices about measurement.
NASA Technical Reports Server (NTRS)
Smith, Charles M.
2003-01-01
This report provides results of an independent assessment of the geopositional accuracy of the Earth Satellite (EarthSat) Corporation's GeoCover, Orthorectified Landsat Thematic Mapper (TM) imagery over Northeast Asia. This imagery was purchased through NASA's Earth Science Enterprise (ESE) Scientific Data Purchase (SDP) program.
ERIC Educational Resources Information Center
Bromme, Rainer; Scharrer, Lisa; Stadtler, Marc; Hömberg, Johanna; Torspecken, Ronja
2015-01-01
Scientific texts are a genre in which adherence to specific discourse conventions allows for conclusions on the scientific integrity of the information and thus on its validity. This study examines whether genre-typical features of scientific discourse influence how laypeople handle conflicting science-based knowledge claims. In two experiments…
The bottom-up approach to integrative validity: a new perspective for program evaluation.
Chen, Huey T
2010-08-01
The Campbellian validity model and the traditional top-down approach to validity have had a profound influence on research and evaluation. That model includes the concepts of internal and external validity and within that model, the preeminence of internal validity as demonstrated in the top-down approach. Evaluators and researchers have, however, increasingly recognized that in an evaluation, the over-emphasis on internal validity reduces that evaluation's usefulness and contributes to the gulf between academic and practical communities regarding interventions. This article examines the limitations of the Campbellian validity model and the top-down approach and provides a comprehensive, alternative model, known as the integrative validity model for program evaluation. The integrative validity model includes the concept of viable validity, which is predicated on a bottom-up approach to validity. This approach better reflects stakeholders' evaluation views and concerns, makes external validity workable, and becomes therefore a preferable alternative for evaluation of health promotion/social betterment programs. The integrative validity model and the bottom-up approach enable evaluators to meet scientific and practical requirements, facilitate in advancing external validity, and gain a new perspective on methods. The new perspective also furnishes a balanced view of credible evidence, and offers an alternative perspective for funding. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Overview of SCIAMACHY validation: 2002-2004
NASA Astrophysics Data System (ADS)
Piters, A. J. M.; Bramstedt, K.; Lambert, J.-C.; Kirchhoff, B.
2006-01-01
SCIAMACHY, on board Envisat, has been in operation now for almost three years. This UV/visible/NIR spectrometer measures the solar irradiance, the earthshine radiance scattered at nadir and from the limb, and the attenuation of solar radiation by the atmosphere during sunrise and sunset, from 240 to 2380 nm and at moderate spectral resolution. Vertical columns and profiles of a variety of atmospheric constituents are inferred from the SCIAMACHY radiometric measurements by dedicated retrieval algorithms. With the support of ESA and several international partners, a methodical SCIAMACHY validation programme has been developed jointly by Germany, the Netherlands and Belgium (the three instrument providing countries) to face complex requirements in terms of measured species, altitude range, spatial and temporal scales, geophysical states and intended scientific applications. This summary paper describes the approach adopted to address those requirements.
Since provisional releases of limited data sets in summer 2002, operational SCIAMACHY processors established at DLR on behalf of ESA have been upgraded regularly, and some data products - level-1b spectra, level-2 O3, NO2, BrO and cloud data - have improved significantly. Validation results summarised in this paper and also reported in this special issue conclude that for limited periods and geographical domains they can already be used for atmospheric research. Nevertheless, current processor versions still experience known limitations that hamper scientific usability in other periods and domains. Free from the constraints of operational processing, seven scientific institutes (BIRA-IASB, IFE/IUP-Bremen, IUP-Heidelberg, KNMI, MPI, SAO and SRON) have developed their own retrieval algorithms and generated SCIAMACHY data products, together addressing nearly all targeted constituents. Most of the UV-visible data products - O3, NO2, SO2, H2O total columns; BrO, OClO slant columns; O3, NO2, BrO profiles - already have acceptable, if not excellent, quality. Provisional near-infrared column products - CO, CH4, N2O and CO2 - have already demonstrated their potential for a variety of applications. Cloud and aerosol parameters are also retrieved but, with the exception of cloud cover, suffer from calibration problems. In any case, scientific users are advised to read the validation reports carefully before using the data. It is required and anticipated that SCIAMACHY validation will continue throughout the instrument's lifetime and beyond and will accompany regular processor upgrades.
ERIC Educational Resources Information Center
Merma Molina, Gladys; Peña Alfaro, Hilda; Peña Alfaro González, Silvia Rosa
2017-01-01
In this study, the researchers will explore the process of designing and validating a rubric to evaluate the adaptation of scientific articles in the format of the "American Psychological Association" (APA). The rubric will evaluate certain aspects of the APA format that allow authors, editors, and evaluators to decide if the scientific…
NASA Astrophysics Data System (ADS)
Marelli, Fulvio; Glaves, Helen; Albani, Mirko
2017-04-01
Advances in technologies and measuring techniques in the Earth science and Earth observation domains have resulted in huge amounts of data about our planet having been acquired. Making this data readily discoverable and accessible, and providing researchers with the necessary processing power, tools, and technologies to work collaboratively and share the results with their peers, will create new opportunities and innovative approaches for cross-disciplinary research. The EVER-EST project aims to support these advancements in scientific research by developing a generic Virtual Research Environment (VRE) which is tailored to the needs of the Earth Science domain. It will provide scientists with the means to manage, share and preserve the data and methodologies applied in their research, and lead to results that are validated, attributable and can be shared within and beyond their often geographically dispersed communities, e.g. in the form of scholarly communications. The EVER-EST VRE is being implemented as a Service Oriented Architecture (SOA) that is based on loosely coupled services which can be differentiated as being either generic or specific to the requirements of the Earth Science domain. Central to the EVER-EST approach is the concept of the Research Object (RO), which provides a semantically rich mechanism to aggregate related resources about a scientific investigation so that they can be shared together using a single unique identifier. Although the concept of Research Objects has previously been validated by other experimental disciplines, this application in the Earth Sciences represents its first implementation in observational research. The EVER-EST e-infrastructure will be validated by four virtual research communities (VRCs) covering different multidisciplinary Earth Science domains: ocean monitoring, selected natural hazards (flooding, ground instability and extreme weather events), land monitoring, and risk management (volcanoes and seismicity). Each of the VRCs represents a different collaborative use case for the VRE according to its own specific requirements for data, software, best practice and community engagement. The diverse use cases will demonstrate how the VRE can be used for a range of activities from straightforward data/software sharing to investigating ways to improve cooperative working. Development of the EVER-EST VRE will leverage the results of several previous projects which have produced state-of-the-art technologies for scientific data management and curation, as well as those initiatives which have developed models, techniques and tools for the preservation of scientific methods and their implementation in computational forms such as scientific workflows. The EVER-EST project is funded by the European Union's Horizon 2020 research and innovation programme under grant agreement no 674907. The project is led by the European Space Agency (ESA), and involves some of the major European Earth Science data providers/users including NERC, DLR, INGV, CNR and SatCEN.
Conceptual-level workflow modeling of scientific experiments using NMR as a case study
Verdi, Kacy K; Ellis, Heidi JC; Gryk, Michael R
2007-01-01
Background Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. Results We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Conclusion Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions in the models. In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy. PMID:17263870
Conceptual-level workflow modeling of scientific experiments using NMR as a case study.
Verdi, Kacy K; Ellis, Heidi Jc; Gryk, Michael R
2007-01-30
Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions in the models. In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy.
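As a rough sketch of conceptual-level workflow capture, the code below models each experimental phase with explicit inputs, outputs, and a human-participation flag, then checks that the data flow is well-formed. The step names loosely echo NMR processing but are simplified stand-ins; this is not the authors' three-technique notation.

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One experimental phase, with data flow made explicit."""
    name: str
    inputs: list
    outputs: list
    manual: bool = False  # True where human intuition or judgment is required

workflow = [
    Step("acquire FID", inputs=["sample"], outputs=["fid.raw"], manual=True),
    Step("Fourier transform", inputs=["fid.raw"], outputs=["spectrum.ft"]),
    Step("peak picking", inputs=["spectrum.ft"], outputs=["peaks.tbl"], manual=True),
]

# Control-flow check: every input must be primary data or produced upstream.
available = {"sample"}
for step in workflow:
    missing = [i for i in step.inputs if i not in available]
    assert not missing, f"{step.name} lacks {missing}"
    available.update(step.outputs)
print("workflow is well-formed")
```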
Malhotra, Ashutosh; Gündel, Michaela; Rajput, Abdul Mateen; Mevissen, Heinz-Theodor; Saiz, Albert; Pastor, Xavier; Lozano-Rubi, Raimundo; Martinez-Lapiscina, Elena H; Zubizarreta, Irati; Mueller, Bernd; Kotelnikova, Ekaterina; Toldo, Luca; Hofmann-Apitius, Martin; Villoslada, Pablo
2015-01-01
In order to retrieve useful information from scientific literature and electronic medical records (EMR) we developed an ontology specific for Multiple Sclerosis (MS). The MS Ontology was created using scientific literature and expert review under the Protégé OWL environment. We developed a dictionary with semantic synonyms and translations to different languages for mining EMR. The MS Ontology was integrated with other ontologies and dictionaries (diseases/comorbidities, gene/protein, pathways, drug) into the text-mining tool SCAIView. We analyzed the EMRs from 624 patients with MS using the MS Ontology dictionary in order to identify drug usage and comorbidities in MS. Testing competency questions and functional evaluation using F statistics further validated the usefulness of the MS Ontology. Validation of the lexicalized ontology by means of named entity recognition-based methods showed an adequate performance (F score = 0.73). The MS Ontology retrieved 80% of the genes associated with MS from scientific abstracts and identified additional pathways targeted by approved disease-modifying drugs (e.g. apoptosis pathways associated with mitoxantrone, rituximab and fingolimod). The analysis of the EMR from patients with MS identified current usage of disease-modifying drugs and symptomatic therapy as well as comorbidities, which are in agreement with recent reports. The MS Ontology provides a semantic framework that is able to automatically extract information from both scientific literature and EMR from patients with MS, revealing new pathogenesis insights as well as new clinical information.
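The F score reported above summarizes named entity recognition performance as the harmonic mean of precision and recall. Below is a minimal sketch of such an evaluation over gold and predicted entity sets; the entity tuples are invented for illustration, and the paper's exact matching protocol may differ.

```python
def ner_f_score(gold, predicted):
    """Precision, recall, and F1 over sets of (doc_id, span, label) tuples."""
    tp = len(gold & predicted)  # exact matches count as true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {("doc1", (0, 18), "DISEASE"), ("doc1", (40, 52), "DRUG"), ("doc2", (5, 12), "GENE")}
pred = {("doc1", (0, 18), "DISEASE"), ("doc2", (5, 12), "GENE"), ("doc2", (30, 38), "DRUG")}
print("P=%.2f R=%.2f F=%.2f" % ner_f_score(gold, pred))  # P=0.67 R=0.67 F=0.67
```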
[Issues on business of genetic testing in near future].
Takada, Fumio
2009-06-01
Since the 1990s, a business model in which companies sell genetic testing services directly to consumers, without going through a medical facility, so-called "direct-to-consumer (DTC) genetic testing", has emerged. Providers offer genetic testing for obesity, disease susceptibility, paternity, etc. There are serious problems in this kind of business. Most providers do not sell face-to-face but through the internet instead, and do not provide genetic counseling by a certified genetic counselor or clinical geneticist. Most DTC genetic testing services for disease susceptibility or predispositions, including obesity, lack scientific validity, clinical validity and clinical utility. Together with paternity testing, these services also carry risks of ethical, legal and social issues (ELSI) relating to genetic discrimination and/or eugenics. A problem specific to Japan is that the healthcare section of the government has still not paid attention to, or taken seriously, the need to deploy a safety net.
Smart Data Infrastructure: The Sixth Generation of Mediation for Data Science
NASA Astrophysics Data System (ADS)
Fox, P. A.
2014-12-01
In the emergent "fourth paradigm" (data-driven) science, the scientific method is enhanced by the integration of significant data sources into the practice of scientific research. To address Big Science, there are challenges in understanding the role of data in enabling researchers to attack not just disciplinary issues, but also the system-level, large-scale, and transdisciplinary global scientific challenges facing society. Recognizing that the volume of data is only one of many dimensions to be considered, there is a clear need for improved data infrastructures to mediate data and information exchange, which we contend will need to be powered by semantic technologies. One clear need is to provide computational approaches for researchers to discover appropriate data resources, rapidly integrate data collections from heterogeneous resources or multiple data sets, and inter-compare results to allow generation and validation of hypotheses. Another trend is toward automated tools that allow researchers to better find and reuse data that they currently don't know they need, let alone know how to find. Again, semantic technologies will be required. Finally, to turn data analytics from "art to science", technical solutions are needed for cross-dataset validation, reproducibility studies on data-driven results, and the concomitant citation of data products allowing recognition for those who curate and share important data resources.
Durham, Mary F.; Knight, Jennifer K.; Couch, Brian A.
2017-01-01
The Scientific Teaching (ST) pedagogical framework provides various approaches for science instructors to teach in a way that more closely emulates how science is practiced by actively and inclusively engaging students in their own learning and by making instructional decisions based on student performance data. Fully understanding the impact of ST requires having mechanisms to quantify its implementation. While many useful instruments exist to document teaching practices, these instruments only partially align with the range of practices specified by ST, as described in a recently published taxonomy. Here, we describe the development, validation, and implementation of the Measurement Instrument for Scientific Teaching (MIST), a survey derived from the ST taxonomy and designed to gauge the frequencies of ST practices in undergraduate science courses. MIST showed acceptable validity and reliability based on results from 7767 students in 87 courses at nine institutions. We used factor analyses to identify eight subcategories of ST practices and used these categories to develop a short version of the instrument amenable to joint administration with other research instruments. We further discuss how MIST can be used by instructors, departments, researchers, and professional development programs to quantify and track changes in ST practices. PMID:29196428
The naphthalene state of the science symposium: objectives, organization, structure, and charge.
Belzer, Richard B; Bus, James S; Cavalieri, Ercole L; Lewis, Steven C; North, D Warner; Pleus, Richard C
2008-07-01
This report provides a summary of the objectives, organization, structure and charge for the naphthalene state of the science symposium (NS(3)), Monterey, CA, October 9-12, 2006. A 1-day preliminary conference was held followed by a 3-day state of the science symposium covering four topics judged by the Planning Committee to be crucial for developing valid and reliable scientific estimates of low-dose human cancer risk from naphthalene. The Planning Committee reviewed the relevant scientific literature to identify singularly knowledgeable researchers and a pool of scientists qualified to serve as expert panelists. In two cases, independent scientists were commissioned to develop comprehensive reviews of the relevant science in a specific area for which no leading researcher could be identified. Researchers and expert panelists alike were screened for conflicts of interest. All policy issues related to risk assessment practices and risk management were scrupulously excluded. NS(3) was novel in several ways and provides an innovative model for the effective use of peer review to identify scientific uncertainties and propose research strategies for reducing or eliminating them prior to the conduct of risk assessment.
2018-01-01
Although it is becoming increasingly popular to monitor parameters related to training, recovery, and health with wearable sensor technology (wearables), scientific evaluation of the reliability, sensitivity, and validity of such data is limited and, where available, has involved a wide variety of approaches. To improve the trustworthiness of data collected by wearables and facilitate comparisons, we have outlined recommendations for standardized evaluation. We discuss the wearable devices themselves, as well as experimental and statistical considerations. Adherence to these recommendations should be beneficial not only for the individual, but also for regulatory organizations and insurance companies. PMID:29712629
Zucchetti, Giulia; Rossi, Francesca; Chamorro Vina, Carolina; Bertorello, Nicoletta; Fagioli, Franca
2018-05-01
An exercise program (EP) during cancer treatment seems to be a valid strategy against physiological and quality-of-life impairments, but scientific evidence of benefits among pediatric patients is still limited. This review summarizes the literature focused on randomized controlled trials of EP offered to patients during leukemia and lymphoma treatment. Studies published up to June 2017 were selected from multiple databases and assessed by three independent reviewers for methodological validity. The review identified eight studies, but several types of bias have to be avoided to provide evidence-based recommendations accessible to patients, families, and professionals. © 2018 Wiley Periodicals, Inc.
DESCQA: An Automated Validation Framework for Synthetic Sky Catalogs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao, Yao-Yuan; Kovacs, Eve; Heitmann, Katrin
The use of high-quality simulated sky catalogs is essential for the success of cosmological surveys. The catalogs have diverse applications, such as investigating signatures of fundamental physics in cosmological observables, understanding the effect of systematic uncertainties on measured signals and testing mitigation strategies for reducing these uncertainties, aiding analysis pipeline development and testing, and survey strategy optimization. The list of applications is growing with improvements in the quality of the catalogs and the details that they can provide. Given the importance of simulated catalogs, it is critical to provide rigorous validation protocols that enable both catalog providers and users to assess the quality of the catalogs in a straightforward and comprehensive way. For this purpose, we have developed the DESCQA framework for the Large Synoptic Survey Telescope Dark Energy Science Collaboration as well as for the broader community. The goal of DESCQA is to enable the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. In this paper, we present the design concept and first implementation of DESCQA. In order to establish and demonstrate its full functionality we use a set of interim catalogs and validation tests. We highlight several important aspects, both technical and scientific, that require thoughtful consideration when designing a validation framework, including validation metrics and how these metrics impose requirements on the synthetic sky catalogs.
DESCQA: An Automated Validation Framework for Synthetic Sky Catalogs
Mao, Yao-Yuan; Kovacs, Eve; Heitmann, Katrin; ...
2018-02-08
The use of high-quality simulated sky catalogs is essential for the success of cosmological surveys. The catalogs have diverse applications, such as investigating signatures of fundamental physics in cosmological observables, understanding the effect of systematic uncertainties on measured signals and testing mitigation strategies for reducing these uncertainties, aiding analysis pipeline development and testing, and survey strategy optimization. The list of applications is growing with improvements in the quality of the catalogs and the details that they can provide. Given the importance of simulated catalogs, it is critical to provide rigorous validation protocols that enable both catalog providers and users tomore » assess the quality of the catalogs in a straightforward and comprehensive way. For this purpose, we have developed the DESCQA framework for the Large Synoptic Survey Telescope Dark Energy Science Collaboration as well as for the broader community. The goal of DESCQA is to enable the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. Here in this paper, we present the design concept and first implementation of DESCQA. In order to establish and demonstrate its full functionality we use a set of interim catalogs and validation tests. We highlight several important aspects, both technical and scientific, that require thoughtful consideration when designing a validation framework, including validation metrics and how these metrics impose requirements on the synthetic sky catalogs.« less
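The "common interface within an automated framework" described above can be pictured as a base test class that every validation check implements and that any catalog can be run through. The sketch below is an assumed simplification for illustration; the actual DESCQA API, catalog readers, and test criteria differ.

```python
from abc import ABC, abstractmethod

class ValidationTest(ABC):
    """Common interface: every test scores a catalog against pre-defined criteria."""
    @abstractmethod
    def run(self, catalog) -> dict:
        """Return a summary with at least 'passed' and a diagnostic score."""

class StellarMassFunctionTest(ValidationTest):
    """Hypothetical test comparing a catalog summary statistic to a tolerance."""
    def __init__(self, max_allowed_deviation):
        self.max_allowed_deviation = max_allowed_deviation

    def run(self, catalog):
        deviation = abs(catalog["smf_log_offset"])  # placeholder statistic
        return {"passed": deviation <= self.max_allowed_deviation,
                "deviation": deviation}

# Run one test uniformly over an inhomogeneous set of catalogs:
catalogs = {"catalog_A": {"smf_log_offset": 0.05}, "catalog_B": {"smf_log_offset": 0.30}}
test = StellarMassFunctionTest(max_allowed_deviation=0.1)
for name, cat in catalogs.items():
    print(name, test.run(cat))
```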
Costa, Fabrizio; Cramer, Grant; Finnegan, E Jean
2017-11-10
The inclusive threshold policy for publication in BMC journals including BMC Plant Biology means that editorial decisions are largely based on the soundness of the research presented rather than the novelty or potential impact of the work. Here we discuss what is required to ensure that research meets the requirement of scientific soundness. BMC Plant Biology and the other BMC-series journals ( https://www.biomedcentral.com/p/the-bmc-series-journals ) differ in policy from many other journals as they aim to provide a home for all publishable research. The inclusive threshold policy for publication means that editorial decisions are largely based on the soundness of the research presented rather than the novelty or potential impact of the work. The emphasis on scientific soundness ( http://blogs.biomedcentral.com/bmcseriesblog/2016/12/05/vital-importance-inclusive/ ) rather than novelty or impact is important because it means that manuscripts that may be judged to be of low impact due to the nature of the study, as well as those reporting negative results or that largely replicate earlier studies, all of which can be difficult to publish elsewhere, are available to the research community. Here we discuss the importance of the soundness of research and provide some basic guidelines to assist authors to determine whether their research is appropriate for submission to BMC Plant Biology. Prior to a research article being sent out for review, the handling editor will first determine whether the research presented is scientifically valid. To be valid the research must address a question of biological significance using suitable methods and analyses, and must follow community-agreed standards relevant to the research field.
[Human rights. Right to health. Right to health information. The Venezuelan biomedical journals].
Stegemann, Herbert
2013-06-01
Venezuelan biomedical journals have been confronting, for several years, a gradual decline, both from the standpoint of their management and in the quality of their editorial content. At its peak, Venezuela had about sixty different titles. Irregular financial support, as well as the lack of a clear official policy regarding these scientific activities, were among the reasons that have contributed to this decline. Several recent Venezuelan and international documents provide an important legal basis for the design of new official policies and government responsibilities. There is now a valid opportunity to profit from new tools to evaluate and improve the quality of our scientific and editorial activities.
NASA Astrophysics Data System (ADS)
Çalik, Muammer; Coll, Richard Kevin
2012-08-01
In this paper, we describe the Scientific Habits of Mind Survey (SHOMS) developed to explore public, science teachers', and scientists' understanding of habits of mind (HoM). The instrument contains 59 items and captures the seven scientific habits of mind identified by Gauld. The SHOMS was validated by administration to two cohorts of pre-service science teachers: primary science teachers with little science background or interest (n = 145), and secondary school science teachers (who also were science graduates) with stronger science knowledge (n = 145). Face validity was confirmed by the use of a panel of experts and a pilot study employing participants similar in demographics to the intended sample. Convergent and discriminant validity were examined using confirmatory factor analysis, and reliability coefficients were calculated. Statistical data and other data gathered from interviews suggest that the SHOMS will prove to be a useful tool for educators and researchers who wish to investigate HoM for a variety of participants.
Scientific Inquiry: A Model for Online Searching.
ERIC Educational Resources Information Center
Harter, Stephen P.
1984-01-01
Explores scientific inquiry as philosophical and behavioral model for online search specialist and information retrieval process. Nature of scientific research is described and online analogs to research concepts of variable, hypothesis formulation and testing, operational definition, validity, reliability, assumption, and cyclical nature of…
Containerless Processing on ISS: Ground Support Program for EML
NASA Technical Reports Server (NTRS)
Diefenbach, Angelika; Schneider, Stephan; Willnecker, Rainer
2012-01-01
EML is an electromagnetic levitation facility planned for the ISS, aimed at processing and investigating liquid metals or semiconductors using the electromagnetic levitation technique under microgravity, with reduced electromagnetic fields and convection conditions. Its diagnostics and processing methods allow thermophysical properties to be measured in the liquid state over an extended temperature range and solidification phenomena in undercooled melts to be investigated. The EML project is a joint effort of the European Space Agency (ESA) and the German Space Agency DLR. The Microgravity User Support Centre (MUSC) at Cologne, Germany, has been assigned responsibility for EML operations. For EML experiment preparation, an extensive scientific ground support program has been established at MUSC, providing scientific and technical services in the preparation, performance and evaluation of the experiments. Its final output is the transcription of the scientific goals and requirements into validated facility control parameters for the experiment execution onboard the ISS.
Disease management research using event graphs.
Allore, H G; Schruben, L W
2000-08-01
Event Graphs, conditional representations of stochastic relationships between discrete events, simulate disease dynamics. In this paper, we demonstrate how Event Graphs, at an appropriate abstraction level, also extend and organize scientific knowledge about diseases. They can identify promising treatment strategies and directions for further research and provide enough detail for testing combinations of new medicines and interventions. Event Graphs can be enriched to incorporate and validate data and test new theories to reflect an expanding dynamic scientific knowledge base and establish performance criteria for the economic viability of new treatments. To illustrate, an Event Graph is developed for mastitis, a costly dairy cattle disease, for which extensive scientific literature exists. With only a modest amount of imagination, the methodology presented here can be seen to apply modeling to any disease, human, plant, or animal. The Event Graph simulation presented here is currently being used in research and in a new veterinary epidemiology course. Copyright 2000 Academic Press.
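To illustrate the Event Graph style of simulation referenced above, here is a minimal discrete-event sketch in which events conditionally schedule other events. The two-event structure and rate parameters are toy assumptions, far simpler than the paper's mastitis model.

```python
import heapq
import random
from itertools import count

random.seed(1)
clock, queue, tie = 0.0, [], count()          # tie-breaker keeps the heap orderable
herd = {"infected": 0, "susceptible": 100}

def schedule(delay, event):
    heapq.heappush(queue, (clock + delay, next(tie), event))

def infection():
    # Edge condition: a new case occurs only while susceptible animals remain.
    if herd["susceptible"] > 0:
        herd["susceptible"] -= 1
        herd["infected"] += 1
        schedule(random.expovariate(1 / 30), recovery)   # infection -> recovery edge
        schedule(random.expovariate(1 / 10), infection)  # self-scheduling edge: next case

def recovery():
    herd["infected"] -= 1
    herd["susceptible"] += 1

schedule(0.0, infection)
while queue and clock < 365:                  # simulate one year of herd dynamics
    clock, _, event = heapq.heappop(queue)
    event()
print(herd)
```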
Article-level assessment of influence and translation in biomedical research
Santangelo, George M.
2017-01-01
Given the vast scale of the modern scientific enterprise, it can be difficult for scientists to make judgments about the work of others through careful analysis of the entirety of the relevant literature. This has led to a reliance on metrics that are mathematically flawed and insufficiently diverse to account for the variety of ways in which investigators contribute to scientific progress. An urgent, critical first step in solving this problem is replacing the Journal Impact Factor with an article-level alternative. The Relative Citation Ratio (RCR), a metric that was designed to serve in that capacity, measures the influence of each publication on its respective area of research. RCR can serve as one component of a multifaceted metric that provides an effective data-driven supplement to expert opinion. Developing validated methods that quantify scientific progress can help to optimize the management of research investments and accelerate the acquisition of knowledge that improves human health. PMID:28559438
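A simplified sketch of the idea behind the RCR: an article's citation rate benchmarked against the expected citation rate of its field. The published method derives the field rate from the article's co-citation network with NIH benchmarks, so the one-line formula and the numbers below are illustrative only.

```python
def relative_citation_ratio(citations, years, field_citation_rate):
    """Toy RCR: the article's citations per year divided by the expected
    citations per year in its field (the real metric estimates the field
    rate from the article's co-citation network, benchmarked to NIH papers)."""
    return (citations / years) / field_citation_rate

# An article cited 60 times in 5 years, in a field averaging 4 citations/year:
print(relative_citation_ratio(60, 5, 4.0))  # -> 3.0, three times its field's norm
```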
Developing an Instrument of Scientific Literacy Assessment on the Cycle Theme
ERIC Educational Resources Information Center
Rusilowati, Ani; Kurniawati, Lina; Nugroho, Sunyoto E.; Widiyatmoko, Arif
2016-01-01
The purpose of this study is to develop scientific literacy evaluation instrument that tested its validity, reliability, and characteristics to measure the skill of student's scientific literacy used four scientific literacy, categories as follow:science as a body of knowledge (category A), science as a way of thinking (category B), science as a…
Middle School Students' Learning about Genetic Inheritance through On-Line Scaffolding Supports
ERIC Educational Resources Information Center
Manokore, Viola
2010-01-01
The main goal of school science is to enable learners to become scientifically literate through their participation in scientific discourses (McNeill & Krajcik, 2009). One of the key elements of scientific discourses is the ability to construct scientific explanations that consist of valid claims supported by appropriate evidence (e.g., McNeill &…
Biomedical surveillance: rights conflict with rights.
Atherley, G; Johnston, N; Tennassee, M
1986-10-01
Medical screening and biomedical monitoring violate individual rights. Such conflicts of right with right are acted upon synergistically by uncertainty which, in some important respects, increases rather than decreases as a result of research. Issues of rightness and wrongness, ethical issues, arise because the human beings who are subjects of medical screening and biological monitoring often have little or no option whether to be subjected to them. We identify issues of rightness and wrongness of biomedical surveillance for various purposes of occupational health and safety. We distinguish between social validity and scientific validity. We observe that principles are well established for scientific validity, but not for social validity. We support guidelines as a way forward.
ERIC Educational Resources Information Center
Brigham, Frederick J.; Gustashaw, William E., III; Wiley, Andrew L.; Brigham, Michele St. Peter
2004-01-01
The authors provide an analysis of why the controversies surrounding educational treatment are likely to continue even with scientific validation of practices as called for in the No Child Left Behind (NCLB) act. They describe how bias in human judgment makes it difficult to trust others and also difficult to doubt oneself relative to important…
Impact of imaging measurements on response assessment in glioblastoma clinical trials
Reardon, David A.; Ballman, Karla V.; Buckner, Jan C.; Chang, Susan M.; Ellingson, Benjamin M.
2014-01-01
We provide historical and scientific guidance on imaging response assessment for incorporation into clinical trials to stimulate effective and expedited drug development for recurrent glioblastoma by addressing 3 fundamental questions: (i) What is the current validation status of imaging response assessment, and when are we confident assessing response using today's technology? (ii) What imaging technology and/or response assessment paradigms can be validated and implemented soon, and how will these technologies provide benefit? (iii) Which imaging technologies need extensive testing, and how can they be prospectively validated? Assessment of T1 +/− contrast, T2/FLAIR, diffusion, and perfusion-imaging sequences are routine and provide important insight into underlying tumor activity. Nonetheless, utility of these data within and across patients, as well as across institutions, are limited by challenges in quantifying measurements accurately and lack of consistent and standardized image acquisition parameters. Currently, there exists a critical need to generate guidelines optimizing and standardizing MRI sequences for neuro-oncology patients. Additionally, more accurate differentiation of confounding factors (pseudoprogression or pseudoresponse) may be valuable. Although promising, diffusion MRI, perfusion MRI, MR spectroscopy, and amino acid PET require extensive standardization and validation. Finally, additional techniques to enhance response assessment, such as digital T1 subtraction maps, warrant further investigation. PMID:25313236
Advancing Partnerships Towards an Integrated Approach to Oil Spill Response
NASA Astrophysics Data System (ADS)
Green, D. S.; Stough, T.; Gallegos, S. C.; Leifer, I.; Murray, J. J.; Streett, D.
2015-12-01
Oil spills can cause enormous ecological and economic devastation, necessitating application of the best science and technology available, and remote sensing is playing a growing, critical role in the detection and monitoring of oil spills, as well as facilitating validation of remote sensing oil spill products. The FOSTERRS (Federal Oil Science Team for Emergency Response Remote Sensing) interagency working group seeks to ensure that during an oil spill, remote sensing assets (satellite/aircraft/instruments) and analysis techniques are quickly, effectively, appropriately, and seamlessly available to oil spill responders. Yet significant challenges remain: spilled oils span a vast range of chemical properties, spills may occur anywhere from the Tropics to the Arctic, and algorithms and scientific understanding need advances to keep up with technology. Thus, FOSTERRS promotes enabling scientific discovery to ensure robust utilization of available technology as well as identifying technologies moving up the TRL (Technology Readiness Level) scale. A recent FOSTERRS-facilitated support activity involved deployment of AVIRIS-NG (Airborne Visible/Infrared Imaging Spectrometer - Next Generation) during the Santa Barbara oil spill to validate the potential of airborne hyperspectral imaging to map beach tar coverage in real time, supported by surface validation data. Many developing airborne technologies have the potential to transition to space-based platforms, providing global readiness.
Monitoring Coastal Marshes for Persistent Saltwater Intrusion
NASA Technical Reports Server (NTRS)
Kalcic, Maria; Hall, Callie; Fletcher, Rose; Russell, Jeff
2009-01-01
Primary goal: Provide resource managers with remote sensing products that support ecosystem forecasting models requiring salinity and inundation data. Work supports the habitat-switching modules in the Coastal Louisiana Ecosystem Assessment and Restoration (CLEAR) model, which provides scientific evaluation for restoration management (Visser et al., 2008). Ongoing work to validate flooding with radar (NWRC/USGS) and enhance persistence estimates through "fusion" of MODIS and Landsat time series (ROSES A.28 Gulf of Mexico). Additional work will also investigate relationship between saltwater dielectric constant and radar returns (Radarsat) (ROSES A.28 Gulf of Mexico).
Horton, pipe hydraulics, and the atmospheric boundary layer (The Robert E. Horton Memorial Lecture)
NASA Technical Reports Server (NTRS)
Brutsaert, Wilfried
1993-01-01
The early stages of Horton's scientific career, which provided the opportunity and stimulus to delve into the origins of some contemporary concepts on the atmospheric boundary layer, are reviewed. The study of Saph and Schoder provided the basis for the experimental verification and validation of similarity by Blasius and by Stanton and Pannell, and for the subsequent developments that led to the present understanding of the turbulent boundary layer. Particular attention is given to the incorporation of similarity and scaling in the analysis of turbulent flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, J.; Herner, K.; Jayatilaka, B.
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
The flaws and human harms of animal experimentation.
Akhtar, Aysha
2015-10-01
Nonhuman animal ("animal") experimentation is typically defended by arguments that it is reliable, that animals provide sufficiently good models of human biology and diseases to yield relevant information, and that, consequently, its use provides major human health benefits. I demonstrate that a growing body of scientific literature critically assessing the validity of animal experimentation generally (and animal modeling specifically) raises important concerns about its reliability and predictive value for human outcomes and for understanding human physiology. The unreliability of animal experimentation across a wide range of areas undermines scientific arguments in favor of the practice. Additionally, I show how animal experimentation often significantly harms humans through misleading safety studies, potential abandonment of effective therapeutics, and direction of resources away from more effective testing methods. The resulting evidence suggests that the collective harms and costs to humans from animal experimentation outweigh potential benefits and that resources would be better invested in developing human-based testing methods.
Data preservation at the Fermilab Tevatron
Boyd, J.; Herner, K.; Jayatilaka, B.; ...
2015-12-23
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
Data preservation at the Fermilab Tevatron
NASA Astrophysics Data System (ADS)
Boyd, J.; Herner, K.; Jayatilaka, B.; Roser, R.; Sakumoto, W.
2015-12-01
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. These efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
Third Molars on the Internet: A Guide for Assessing Information Quality and Readability
Brennan, David; Sambrook, Paul; Armfield, Jason
2015-01-01
Background Directing patients suffering from third molars (TMs) problems to high-quality online information is not only medically important, but also could enable better engagement in shared decision making. Objectives This study aimed to develop a scale that measures the scientific information quality (SIQ) for online information concerning wisdom tooth problems and to conduct a quality evaluation for online TMs resources. In addition, the study evaluated whether a specific piece of readability software (Readability Studio Professional 2012) might be reliable in measuring information comprehension, and explored predictors for the SIQ Scale. Methods A cross-sectional sample of websites was retrieved using certain keywords and phrases such as “impacted wisdom tooth problems” using 3 popular search engines. The retrieved websites (n=150) were filtered. The retained 50 websites were evaluated to assess their characteristics, usability, accessibility, trust, readability, SIQ, and their credibility using DISCERN and Health on the Net Code (HoNCode). Results Websites’ mean scale scores varied significantly across website affiliation groups such as governmental, commercial, and treatment provider bodies. The SIQ Scale had a good internal consistency (alpha=.85) and was significantly correlated with DISCERN (r=.82, P<.01) and HoNCode (r=.38, P<.01). Less than 25% of websites had SIQ scores above 75%. The mean readability grade (10.3, SD 1.9) was above the recommended level, and was significantly correlated with the Scientific Information Comprehension Scale (r=.45, P<.01), which provides evidence for convergent validity. Website affiliation and DISCERN were significantly associated with SIQ (P<.01) and explained 76% of the SIQ variance. Conclusion The developed SIQ Scale was found to demonstrate reliability and initial validity. Website affiliation, DISCERN, and HoNCode were significant predictors for the quality of scientific information. The Readability Studio software estimates were associated with scientific information comprehensiveness measures. PMID:26443470
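Readability grades like the mean of 10.3 reported above are computed from sentence and word statistics. The exact indices used by Readability Studio are not specified here, so as an assumed example, the standard Flesch-Kincaid grade-level formula can be sketched as follows:

```python
def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level, a widely used readability index."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# A hypothetical dental-health passage: 300 words, 15 sentences, 510 syllables.
print(round(flesch_kincaid_grade(300, 15, 510), 1))  # -> 12.3, above a grade-8 target
```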
Validity and reliability of the Diagnostic Adaptive Behaviour Scale.
Tassé, M J; Schalock, R L; Balboni, G; Spreat, S; Navas, P
2016-01-01
The Diagnostic Adaptive Behaviour Scale (DABS) is a new standardised adaptive behaviour measure that provides information for evaluating limitations in adaptive behaviour for the purpose of determining a diagnosis of intellectual disability. This article presents validity evidence and reliability data for the DABS. Validity evidence was based on comparing DABS scores with scores obtained on the Vineland Adaptive Behaviour Scale, second edition. The stability of the test scores was measured using a test and retest, and inter-rater reliability was assessed by computing the inter-respondent concordance. The DABS convergent validity coefficients ranged from 0.70 to 0.84, while the test-retest reliability coefficients ranged from 0.78 to 0.95, and the inter-rater concordance as measured by intraclass correlation coefficients ranged from 0.61 to 0.87. All obtained validity and reliability indicators were strong and comparable with the validity and reliability coefficients of the most commonly used adaptive behaviour instruments. These results and the advantages of the DABS for clinician and researcher use are discussed. © 2015 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
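The convergent validity and test-retest coefficients reported above are, at base, correlations between paired score sets. Here is a sketch with invented scores; the study's actual data, and the intraclass correlations it used for inter-rater concordance, are not reproduced.

```python
import numpy as np
from scipy import stats

dabs = np.array([72, 85, 64, 90, 77, 59, 81, 68])      # hypothetical DABS scores
vineland = np.array([70, 88, 61, 94, 74, 62, 79, 71])  # same respondents, Vineland-II

r, p = stats.pearsonr(dabs, vineland)                  # convergent validity coefficient
print(f"convergent validity r = {r:.2f} (p = {p:.4f})")

retest = np.array([74, 83, 66, 89, 75, 60, 82, 66])    # same respondents, later occasion
r_tt, _ = stats.pearsonr(dabs, retest)                 # test-retest stability
print(f"test-retest r = {r_tt:.2f}")
```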
[Reliability and validity of Driving Anger Scale in professional drivers in China].
Li, Z; Yang, Y M; Zhang, C; Li, Y; Hu, J; Gao, L W; Zhou, Y X; Zhang, X J
2017-11-10
Objective: To assess the reliability and validity of the Chinese version of the Driving Anger Scale (DAS) in professional drivers in China and provide a scientific basis for the application of the scale in drivers in China. Methods: Professional drivers, including taxi drivers, bus drivers, truck drivers and school bus drivers, were selected to complete the questionnaire. Cronbach's α and split-half reliability were calculated to evaluate the reliability of the DAS, and content, construct, discriminant and convergent validity analyses were performed to measure the validity of the scale. Results: The overall Cronbach's α of the DAS was 0.934 and the split-half reliability was 0.874. The correlation coefficient of each subscale with the total scale was 0.639-0.922. The simplified version of the DAS supported a presupposed six-factor structure, explaining 56.371% of the total variance revealed by exploratory factor analysis. The DAS had good convergent and discriminant validity, with a calibration experiment success rate of 100%. Conclusion: The DAS has good reliability and validity in professional drivers in China, and its use in drivers is worth promoting.
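Cronbach's α and split-half reliability, the two coefficients reported above, can be computed as sketched below; the simulated item matrix is an assumption for demonstration and does not mirror the DAS item count or response data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def split_half(items):
    """Spearman-Brown corrected odd/even split-half reliability."""
    odd, even = items[:, ::2].sum(axis=1), items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Simulate 200 respondents answering 14 Likert items driven by one latent trait.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(200, 14))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.3f}, split-half = {split_half(scores):.3f}")
```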
The predictive validity of the BioMedical Admissions Test for pre-clinical examination performance.
Emery, Joanne L; Bell, John F
2009-06-01
Some medical courses in the UK have many more applicants than places and almost all applicants have the highest possible previous and predicted examination grades. The BioMedical Admissions Test (BMAT) was designed to assist in the student selection process specifically for a number of 'traditional' medical courses with clear pre-clinical and clinical phases and a strong focus on science teaching in the early years. It is intended to supplement the information provided by examination results, interviews and personal statements. This paper reports on the predictive validity of the BMAT and its predecessor, the Medical and Veterinary Admissions Test. Results from the earliest 4 years of the test (2000-2003) were matched to the pre-clinical examination results of those accepted onto the medical course at the University of Cambridge. Correlation and logistic regression analyses were performed for each cohort. Section 2 of the test ('Scientific Knowledge') correlated more strongly with examination marks than did Section 1 ('Aptitude and Skills'). It also had a stronger relationship with the probability of achieving the highest examination class. The BMAT and its predecessor demonstrate predictive validity for the pre-clinical years of the medical course at the University of Cambridge. The test identifies important differences in skills and knowledge between candidates, not shown by their previous attainment, which predict their examination performance. It is thus a valid source of additional admissions information for medical courses with a strong scientific emphasis when previous attainment is very high.
NASA Astrophysics Data System (ADS)
Eastwood, Jennifer L.; Sadler, Troy D.; Sherwood, Robert D.; Schlegel, Whitney M.
2013-06-01
The purpose of this study was to examine whether Socioscientific Issues (SSI) based learning environments affect university students' epistemological understanding of scientific inquiry differently from traditional science educational contexts. We identify and compare conceptions of scientific inquiry of students participating in an interdisciplinary, SSI-focused undergraduate human biology major (SSI) and those participating in a traditional biology major (BIO). Forty-five SSI students and 50 BIO students completed an open-ended questionnaire examining their understanding of scientific inquiry. Eight general themes including approximately 60 subthemes emerged from questionnaire responses, and the numbers of students including each subtheme in their responses were statistically compared between groups. A subset of students participated in interviews, which were used to validate and triangulate questionnaire data and probe students' understanding of scientific inquiry in relation to their majors. We found that both groups provided very similar responses, differing significantly in only five subthemes. Results indicated that both groups held generally adequate understandings of inquiry, but also a number of misconceptions. Small differences between groups supported by both questionnaires and interviews suggest that the SSI context contributed to nuanced understandings, such as a more interdisciplinary and problem-centered conception of scientific inquiry. Implications for teaching and research are discussed.
Is the statistic value all we should care about in neuroimaging?
Chen, Gang; Taylor, Paul A; Cox, Robert W
2017-02-15
Here we address an important issue that has been embedded within the neuroimaging community for a long time: the absence of effect estimates in results reporting in the literature. The statistic value itself, as a dimensionless measure, does not provide information on the biophysical interpretation of a study, and it certainly does not represent the whole picture of a study. Unfortunately, in contrast to standard practice in most scientific fields, effect (or amplitude) estimates are usually not provided in most results reporting in the current neuroimaging publications and presentations. Possible reasons underlying this general trend include (1) lack of general awareness, (2) software limitations, (3) inaccurate estimation of the BOLD response, and (4) poor modeling due to our relatively limited understanding of FMRI signal components. However, as we discuss here, such reporting damages the reliability and interpretability of the scientific findings themselves, and there is in fact no overwhelming reason for such a practice to persist. In order to promote meaningful interpretation, cross validation, reproducibility, meta and power analyses in neuroimaging, we strongly suggest that, as part of good scientific practice, effect estimates should be reported together with their corresponding statistic values. We provide several easily adaptable recommendations for facilitating this process. Published by Elsevier Inc.
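As a sketch of the reporting practice the authors recommend, the toy single-regressor model below prints the effect estimate (in interpretable units) alongside its t statistic rather than the statistic alone. The data are simulated and the design is far simpler than a real FMRI model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 40
condition = np.repeat([0.0, 1.0], n // 2)               # baseline vs. task regressor
bold = 0.8 * condition + rng.normal(scale=1.2, size=n)  # simulated % signal change

X = np.column_stack([np.ones(n), condition])            # intercept + condition
beta, res, *_ = np.linalg.lstsq(X, bold, rcond=None)
dof = n - X.shape[1]
sigma2 = res[0] / dof                                   # residual variance
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])     # SE of the condition effect
t = beta[1] / se

# Report the effect estimate together with the statistic, not the statistic alone.
print(f"effect = {beta[1]:.2f} % signal change (SE {se:.2f}), t({dof}) = {t:.2f}")
```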
ERIC Educational Resources Information Center
Wei, Silin; Liu, Xiufeng; Jia, Yuane
2014-01-01
Scientific models and modeling play an important role in science, and students' understanding of scientific models is essential for their understanding of scientific concepts. The measurement instrument of "Students' Understanding of Models in Science" (SUMS), developed by Treagust, Chittleborough & Mamiala ("International…
Young, Jasmine Y; Westbrook, John D; Feng, Zukang; Sala, Raul; Peisach, Ezra; Oldfield, Thomas J; Sen, Sanchayita; Gutmanas, Aleksandras; Armstrong, David R; Berrisford, John M; Chen, Li; Chen, Minyu; Di Costanzo, Luigi; Dimitropoulos, Dimitris; Gao, Guanghua; Ghosh, Sutapa; Gore, Swanand; Guranovic, Vladimir; Hendrickx, Pieter M S; Hudson, Brian P; Igarashi, Reiko; Ikegawa, Yasuyo; Kobayashi, Naohiro; Lawson, Catherine L; Liang, Yuhe; Mading, Steve; Mak, Lora; Mir, M Saqib; Mukhopadhyay, Abhik; Patwardhan, Ardan; Persikova, Irina; Rinaldi, Luana; Sanz-Garcia, Eduardo; Sekharan, Monica R; Shao, Chenghua; Swaminathan, G Jawahar; Tan, Lihua; Ulrich, Eldon L; van Ginkel, Glen; Yamashita, Reiko; Yang, Huanwang; Zhuravleva, Marina A; Quesada, Martha; Kleywegt, Gerard J; Berman, Helen M; Markley, John L; Nakamura, Haruki; Velankar, Sameer; Burley, Stephen K
2017-03-07
OneDep, a unified system for deposition, biocuration, and validation of experimentally determined structures of biological macromolecules to the PDB archive, has been developed as a global collaboration by the worldwide PDB (wwPDB) partners. This new system was designed to ensure that the wwPDB could meet the evolving archiving requirements of the scientific community over the coming decades. OneDep unifies deposition, biocuration, and validation pipelines across all wwPDB, EMDB, and BMRB deposition sites with improved focus on data quality and completeness in these archives, while supporting growth in the number of depositions and increases in their average size and complexity. In this paper, we describe the design, functional operation, and supporting infrastructure of the OneDep system, and provide initial performance assessments. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Farrar, Cathy
As part of the National Science Foundation Science Literacy through Science Journalism (SciJourn) research and development initiative (http://www.scijourn.org; Polman, Saul, Newman, and Farrar, 2008), a quasi-experimental design was used to investigate what impact incorporating science journalism activities had on students' scientific literacy. Over the course of a school year students participated in a variety of activities culminating in the production of science news articles for SciJourner, a regional print and online high school science news magazine. Participating teachers and SciJourn team members collaboratively developed activities focused on five aspects of scientific literacy: placing information into context, recognizing relevance, evaluating factual accuracy, use of multiple credible sources and information seeking processes. This study details the development process for the Scientific Literacy Assessment (SLA) including validity and reliability studies, evaluates student scientific literacy using the SLA, examines student SLA responses to provide a description of high school students' scientific literacy, and outlines implications of the findings in relation to the National Research Council's A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas (2012) and classroom science teaching practices. Scientifically literate adults acting as experts in the assessment development phase informed the creation of a scoring guide that was used to analyze student responses. Experts tended to draw on both their understanding of science concepts and life experiences to formulate answers, paying close attention to scientific factual inaccuracies, sources of information, how new information fit into their view of science and society, as well as targeted strategies for information seeking. Novices (i.e., students), in contrast, tended to ignore factual inaccuracies, showed little understanding about source credibility and suggested unproductive information seeking strategies. However, similar to the experts, novices made references to both scientific and societal contexts. The expert/novice comparison provides a rough description of a developmental continuum of scientific literacy. The findings of this study, including student results and Generalized Linear Mixed Modeling, suggest that the incorporation of science journalism activities focused on STEM issues can improve student scientific literacy. Incorporation of a wide variety of strategies raised scores on the SLA. Teachers who included a writing and revision process that prioritized content had significantly larger gains in student scores. Future studies could broaden the description of high school student scientific literacy as measured by the SLA and provide alternative pathways for developing scientific literacy as envisioned by SciJourn and the NRC Frameworks.
NASA Astrophysics Data System (ADS)
Yusliana Ekawati, Elvin
2017-01-01
This study aimed to produce a model of scientific attitude assessment, based on observation, for physics learning using the scientific approach (a case study of the dynamic fluid topic in high school). Development of the instruments in this study was an adaptation of the Plomp model; the procedure includes the initial investigation, design, construction, testing, evaluation and revision. Testing was done in Surakarta, and the data obtained were analyzed using Aiken's formula to determine the content validity of the instrument, Cronbach's alpha to determine its reliability, and confirmatory factor analysis with the LISREL 8.50 program to establish construct validity. The results of this research were a conceptual model, instruments and guidelines for scientific attitude assessment by observation. The constructs assessed by the instruments include components of curiosity, objectivity, suspended judgment, open-mindedness, honesty and perseverance. The construct validity of the instruments met requirements (factor loadings > 0.3). The reliability of the model is quite good, with an alpha value of 0.899 (> 0.7). The test showed that the theoretical model fits and is supported by the empirical data, with p-value 0.315 (≥ 0.05) and RMSEA 0.027 (≤ 0.08).
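Aiken's formula mentioned above quantifies content validity from expert judgments; for one item, V = S / (n(c - 1)), where S sums each rating's distance from the lowest category, n is the number of judges, and c the number of rating categories. A small sketch with hypothetical expert ratings:

```python
def aikens_v(ratings, categories):
    """Aiken's V for one item: expert ratings on a 1..categories scale."""
    s = sum(r - 1 for r in ratings)  # each rating's distance from the lowest category
    return s / (len(ratings) * (categories - 1))

# Eight hypothetical experts rate one observation-sheet item on a 1-5 relevance scale:
print(round(aikens_v([5, 4, 5, 4, 5, 5, 4, 5], categories=5), 3))  # -> 0.906
```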
Fajkowska, Małgorzata; Domaradzka, Ewa; Wytykowska, Agata
2018-01-01
The present paper addresses (1) the validation of a recently proposed typology of anxiety and depression, and (2) the presentation of a new tool—the Anxiety and Depression Questionnaire (ADQ)—based on this typology. Empirical data collected across two stages—construction and validation—allowed us to offer the final form of the ADQ, designed to measure arousal anxiety, apprehension anxiety, valence depression, anhedonic depression, and mixed types of anxiety and depression. The results support the proposed typology of anxiety and depression and provide evidence that the ADQ is a reliable and valid self-rating measure of affective types, and accordingly its use in scientific research is recommended. PMID:29410638
NASA/MSFC FY92 Earth Science and Applications Program Research Review
NASA Technical Reports Server (NTRS)
Arnold, James E. (Editor); Leslie, Fred W. (Editor)
1993-01-01
A large amount of attention has recently been given to global issues such as the ozone hole, tropospheric temperature variability, etc. A scientific challenge is to better understand atmospheric processes on a variety of spatial and temporal scales in order to predict environmental changes. Measurement of geophysical parameters such as wind, temperature, and moisture are needed to validate theories, provide analyzed data sets, and initialize or constrain numerical models. One of NASA's initiatives is the Mission to Planet Earth Program comprised of an Earth Observation System (EOS) and the scientific strategy to analyze these data. This work describes these efforts in the context of satellite data analysis and fundamental studies of atmospheric dynamics which examine selected processes important to the global circulation.
Tsai, Alexander C.
2011-01-01
In 1992, researchers from the University of California-Los Angeles published a study on the scientific merit and validity of pharmaceutical advertisements in medical journals. Their results led them to conclude, provocatively, that many pharmaceutical advertisements contained deficiencies in areas in which the U.S. Food and Drug Administration had established explicit standards of quality. This article provides a detailed account of third-party reactions to the study following its publication in the Annals of Internal Medicine, as well as the implications for those involved, including the authors, editors, and publisher. The increasingly diverging interests between medical journal editors and publishers are also discussed and highlighted by two recent cases of editors’ departures from prominent general-interest medical journals. PMID:14758858
Horne, Justine; Madill, Janet; Gilliland, Jason
2017-11-01
The 'Theory of Planned Behavior' (TPB) has been tested and validated in the scientific literature across multiple disciplines and is arguably the most widely accepted theory among behavior change academics. Despite this widespread acceptability, the TPB has yet to be incorporated into personalized healthcare behavior change research. Several prominent personalized healthcare researchers suggest that personalizing healthcare recommendations has a positive impact on changes in lifestyle habits. However, research in this area has demonstrated conflicting findings. We provide a scientific and theoretical basis to support a proposed expansion of the TPB to include personalization, and issue a call to action for personalized healthcare behavior change researchers to test this expansion. Specific recommendations for study design are included.
NASA Astrophysics Data System (ADS)
Brasseur, Pierre
2015-04-01
The MyOcean projects, supported by the European Commission, were developed during the 2008-2015 period to build an operational service providing ocean physical state and ecosystem information to intermediate and downstream users in the areas of marine safety, marine resources, marine and coastal environment, and weather, climate and seasonal forecasting. The "core" information provided to users is obtained through the combination of satellite and in situ observations, eddy-resolving modelling of the global ocean and regional European seas, biochemistry, ecosystem and sea-ice modelling, and data assimilation for global to basin-scale circulation. A comprehensive R&D plan was established in 2010 to ensure the collection and provision of information of the best possible quality for daily estimates of the ocean state (real-time), its short-term evolution, and its history over the past (reanalyses). A service validation methodology was further developed to ensure proper scientific evaluation and routine monitoring of the accuracy of MyOcean products. In this presentation, we will give an overview of the main scientific advances achieved in MyOcean using the NEMO modelling platform, ensemble-based assimilation schemes, coupled circulation-ecosystem and sea-ice assimilative models, and probabilistic methodologies for ensemble validation. We will further highlight the key areas that will require additional innovation effort to support the evolution of the Marine Copernicus service.
Nimbus/TOMS Science Data Operations Support
NASA Technical Reports Server (NTRS)
Childs, Jeff
1998-01-01
1. Participate in and provide analysis of laboratory and in-flight calibration of UV sensors used for space observations of backscattered UV radiation. 2. Provide support to the TOMS Science Operations Center, including generating instrument command lists and analysis of TOMS health and safety data. 3. Develop and maintain software and algorithms designed to capture and process raw spacecraft and instrument data, convert the instrument output into measured radiance and irradiances, and produce scientifically valid products. 4. Process the TOMS data into Level 1, Level 2, and Level 3 data products. 5. Provide analysis of the science data products in support of NASA GSFC Code 916's research.
Nimbus/TOMS Science Data Operations Support
NASA Technical Reports Server (NTRS)
1998-01-01
Projected goals include the following: (1) Participate in and provide analysis of laboratory and in-flight calibration of UV sensors used for space observations of backscattered UV radiation; (2) Provide support to the TOMS Science Operations Center, including generating instrument command lists and analysis of TOMS health and safety data; (3) Develop and maintain software and algorithms designed to capture and process raw spacecraft and instrument data, convert the instrument output into measured radiance and irradiances, and produce scientifically valid products; (4) Process the TOMS data into Level 1, Level 2, and Level 3 data products; (5) Provide analysis of the science data products in support of NASA GSFC Code 916's research.
NASA Astrophysics Data System (ADS)
Wardani, K. U.; Mulyani, S.; Wiji
2018-04-01
The aim of this study was to develop an intertextual learning strategy with guided inquiry on the solubility equilibrium concept to enhance students' scientific processing skills. The study was motivated by various earlier studies, which found that students' weak process skills in learning chemistry stem from chemistry being taught merely as concepts. The method used in this study was Research and Development, generating the intertextual learning strategy with guided inquiry. The instruments were validation sheets, used to determine the congruence of the learning activities with the steps of guided inquiry learning and with aspects of scientific processing skills. The validation results showed that the learning activities were in line with the indicators of scientific processing skills.
ERIC Educational Resources Information Center
van der Graaf, Joep; Segers, Eliane; Verhoeven, Ludo
2015-01-01
A dynamic assessment tool was developed and validated using Mokken scale analysis to assess the extent to which kindergartners are able to construct unconfounded experiments, an essential part of scientific reasoning. Scientific reasoning is one of the learning processes happening within science education. A commonly used, hands-on,…
A Simple Exercise Reveals the Way Students Think about Scientific Modeling
ERIC Educational Resources Information Center
Ruebush, Laura; Sulikowski, Michelle; North, Simon
2009-01-01
Scientific modeling is an integral part of contemporary science, yet many students have little understanding of how models are developed, validated, and used to predict and explain phenomena. A simple modeling exercise led to significant gains in understanding key attributes of scientific modeling while revealing some stubborn misconceptions.…
Assessing Scientific and Technological Enquiry Skills at Age 11 Using the E-Scape System
ERIC Educational Resources Information Center
Davies, Dan; Collier, Chris; Howe, Alan
2012-01-01
This article reports on the outcomes from the "e-scape Primary Scientific and Technological Understanding Assessment Project" (2009-2010), which aimed to support primary teachers in developing valid portfolio-based tasks to assess pupils' scientific and technological enquiry skills at age 11. This was part of the wider…
[Strengthening the methodology of study designs in scientific researches].
Ren, Ze-qin
2010-06-01
Many problems in study design have seriously affected the validity of scientific research. We must understand research methodology, especially clinical epidemiology and biostatistics, and recognize the urgency of selecting and implementing the right study design. Only then can we promote research capability and improve the overall quality of scientific research.
NASA Astrophysics Data System (ADS)
Hultquist, C.; Cervone, G.
2015-12-01
Citizen-led movements producing scientific environmental information are increasingly common during hazards. After the Japanese earthquake-triggered tsunami in 2011, the government produced airborne remote sensing data of the radiation levels following the Fukushima nuclear reactor failures. Advances in technology enabled citizens to monitor radiation with innovative mobile devices built from components bought on the Internet. The citizen-led Safecast project measured on-ground radiation levels in the Fukushima prefecture, totaling 14 million entries to date in Japan. This non-authoritative citizen science collection of radiation levels recorded at specific coordinates and times is available online, yet the reliability and validity of the data had not been assessed. The nuclear incident provided a case for assessment with comparable dimensions of citizen science and authoritative data. To perform a comparison of the datasets, standardization was required. The sensors were calibrated scientifically but collected data using different units of measure. Radiation decays over time, so temporal interpolation was necessary to compare measurements as if they shared the same time frame. Finally, GPS-located points were selected within an overlapping spatial extent of 500 meters. This study spatially analyzes and statistically compares citizen-volunteered and government-generated radiation data. Quantitative measures are used to assess the similarity and difference between the datasets. Radiation measurements from the same geographic extents show similar spatial variations, which suggests that citizen science data can be comparable with government-generated measurements. Validation of Safecast demonstrates that we can derive scientific data from unstructured and unvetted sources. Citizen science can provide real-time data for situational awareness, which is crucial for decision making during disasters. This project provides a methodology for comparing datasets of radiological measurements over time and space. Integrating data for assessment from different earth sensing systems is an essential step toward addressing the big data challenges of volume, velocity, variety, and veracity.
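As a rough illustration of the standardization steps described above (decay correction to a common reference time, then spatial matching of GPS points within 500 meters), here is a minimal Python sketch. It assumes a single-isotope exponential-decay model (Cs-134 is used as a stand-in), and all function and variable names are illustrative, not Safecast's actual code:

    import math

    HALF_LIFE_DAYS = 753.0  # Cs-134 half-life; an assumed single-isotope model

    def decay_correct(dose_rate, days_since_reference):
        # Project a measured dose rate back to the common reference date
        return dose_rate * math.exp(math.log(2) * days_since_reference / HALF_LIFE_DAYS)

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in meters between two GPS points
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def match_points(citizen, government, radius_m=500.0):
        # Each record: (lat, lon, dose_rate, days_since_reference)
        pairs = []
        for c in citizen:
            for g in government:
                if haversine_m(c[0], c[1], g[0], g[1]) <= radius_m:
                    pairs.append((decay_correct(c[2], c[3]),
                                  decay_correct(g[2], g[3])))
        return pairs

The matched, decay-corrected pairs can then be fed to any standard similarity statistic (correlation, mean difference) of the kind the abstract mentions.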
Chandra X-ray Center Science Data Systems Regression Testing of CIAO
NASA Astrophysics Data System (ADS)
Lee, N. P.; Karovska, M.; Galle, E. C.; Bonaventura, N. R.
2011-07-01
The Chandra Interactive Analysis of Observations (CIAO) is a software system developed for the analysis of Chandra X-ray Observatory observations. An important component of a successful CIAO release is the repeated testing of the tools across various platforms to ensure consistent and scientifically valid results. We describe the procedures of the scientific regression testing of CIAO and the enhancements made to the testing system to increase the efficiency of run time and result validation.
The Principles for Successful Scientific Data Management Revisited
NASA Astrophysics Data System (ADS)
Walker, R. J.; King, T. A.; Joy, S. P.
2005-12-01
It has been 23 years since the National Research Council's Committee on Data Management and Computation (CODMAC) published its famous list of principles for successful scientific data management, which has provided the framework for modern space science data management. CODMAC outlined seven principles: 1. Scientific Involvement in all aspects of space science missions. 2. Scientific Oversight of all scientific data-management activities. 3. Data Availability - Validated data should be made available to the scientific community in a timely manner, including appropriate ancillary data and complete documentation. 4. Facilities - A proper balance between cost and scientific productivity should be maintained. 5. Software - Transportable, well-documented software should be available to process and analyze the data. 6. Scientific Data Storage - The data should be preserved in retrievable form. 7. Data System Funding - Adequate data funding should be made available at the outset of missions and protected from overruns. In this paper we review the lessons learned in trying to apply these principles to space-derived data. The Planetary Data System created the concept of data curation to carry out the CODMAC principles. Data curators are scientists and technologists who work directly with the mission scientists to create data products. The efficient application of the CODMAC principles requires that data curators and the mission team start early in a mission to plan for data access and archiving. To build the data products, the planetary discipline adopted data access and documentation standards and has adhered to them. The data curators and mission team work together to produce data products and make them available. However, even with early planning and agreement on standards, the needs of the science community frequently far exceed the available resources. This is especially true for smaller, principal-investigator-run missions. We argue that one way to make data systems for small missions more effective is for the data curators to provide software tools to help develop the mission data system.
Olorisade, Babatunde Kazeem; Brereton, Pearl; Andras, Peter
2017-09-01
Independent validation of published scientific results through study replication is a pre-condition for accepting the validity of such results. In computational research, full replication is often unrealistic for independent results validation; therefore, study reproduction has been justified as the minimum acceptable standard to evaluate the validity of scientific claims. The application of text mining techniques to citation screening in the context of systematic literature reviews is a relatively young and growing computational field with high relevance for software engineering, medical research and other fields. However, there is little work so far on reproduction studies in the field. In this paper, we investigate the reproducibility of studies in this area based on information contained in published articles, and we propose reporting guidelines that could improve reproducibility. The study was approached in two ways. Initially, we attempted to reproduce results from six studies, which were based on the same raw dataset. Then, based on this experience, we identified steps considered essential to successful reproduction of text mining experiments and characterized them to measure how reproducible a study is, given the information provided on these steps. Thirty-three articles were systematically assessed for reproducibility using this approach. Our work revealed that it is currently difficult, if not impossible, to independently reproduce the results published in any of the studies investigated. The lack of information about the datasets used limits the reproducibility of about 80% of the studies assessed. Information about the machine learning algorithms is also inadequate in about 27% of the papers. On the plus side, the third-party software tools used are mostly free and available. The reproducibility potential of most of the studies could be significantly improved if more attention were paid to the information provided on the datasets used, how they were partitioned and utilized, and how any randomization was controlled. We introduce a checklist of information that needs to be provided in order to ensure that a published study can be reproduced. Copyright © 2017 Elsevier Inc. All rights reserved.
On-Orbit Prospective Echocardiography on International Space Station
NASA Technical Reports Server (NTRS)
Hamilton, Douglas R.; Sargsyan, Ashot E.; Martin, David; Garcia, Kathleen M.; Melton, Shannon; Feiverson, Alan; Dulchavsky, Scott A.
2010-01-01
A number of echocardiographic research projects and experiments have been flown on almost every space vehicle since 1970, but validation of standard methods and the determination of Space Normal cardiac function has not been reported to date. The Advanced Diagnostics in Microgravity (ADUM) remote-guided echocardiographic technique provides a novel and effective approach to on-board assessment of cardiac physiology and structure, using a just-in-time training algorithm and real-time remote guidance aboard the International Space Station (ISS). The validation of remotely guided echocardiographic techniques provides the procedures and protocols to perform scientific and clinical echocardiography on the ISS and the Moon. The objectives of this study were: 1. To confirm the ability of non-physician astronaut/cosmonaut crewmembers to perform clinically relevant remotely guided echocardiography using the Human Research Facility on board the ISS. 2. To compare the preflight, postflight and in-flight echocardiographic parameters commonly used in clinical medicine.
Learning Crude Oil by Using Scientific Literacy Comics
NASA Astrophysics Data System (ADS)
Aisyah, R.; Zakiyah, I. A.; Farida, I.; Ramdhani, M. A.
2017-09-01
Research has been conducted to create a crude oil learning medium in the form of a scientific literacy-oriented comic. The research included several phases, namely: concept analysis, transformation of the material into a concept map, and identification of indicators and scientific literacy aspects. The product was made based on flowcharts and storyboards that had been validated by expert validators. The product has the following characteristics: 1) it develops indicators and aspects of scientific literacy; 2) it presents the material as a story in the science fiction genre; 3) its characters adopt levels of scientific literacy; 4) it offers optional storylines, depending on the questions asked to develop scientific literacy in terms of content, context, process and attitude. Based on the feasibility test, the product is feasible for use as a learning medium. An expanded experiment is suggested to examine its effectiveness in improving scientific literacy and growing students' awareness of energy-crisis issues and the environmental impacts of fossil fuel use.
NASA Astrophysics Data System (ADS)
Rusilowati, A.; Nugroho, S. E.; Susilowati, E. S. M.; Mustika, T.; Harfiyani, N.; Prabowo, H. T.
2018-03-01
The research aimed to develop a scientific literacy assessment, determine its validity, reliability, and characteristics, and profile students' scientific literacy skills on the theme of Energy. The research was conducted in the 7th grade of a secondary school in Demak, Central Java, Indonesia. The research design used R&D (Research and Development). The results showed that the scientific literacy assessment was valid and reliable, with a value of 0.68 in the first try-out and 0.73 in the last try-out. The characteristics of the scientific literacy assessment are its difficulty index and discrimination power: 56.25% of items were easy, 31.25% medium, and 12.5% very difficult, with good discrimination power. The proportions across the categories science as a body of knowledge, science as a way of investigating, science as a way of thinking, and the interaction among science, environment, technology, and society were 37.5%:25%:18.75%:18.75%. The highest profile of students' scientific literacy skills at the secondary school in Demak was 72%, in the category science as a way of thinking, and the lowest was 59%, in the category science as a body of knowledge.
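For context on the item statistics above: a difficulty index is simply the proportion of students answering an item correctly, and discrimination power can be estimated as the correlation between an item score and the rest-of-test score. A minimal numpy sketch, with illustrative names (not the authors' code):

    import numpy as np

    def item_statistics(responses):
        # responses: (n_students, n_items) array of 0/1 item scores
        responses = np.asarray(responses, dtype=float)
        total = responses.sum(axis=1)
        difficulty = responses.mean(axis=0)  # proportion answering each item correctly
        discrimination = []
        for j in range(responses.shape[1]):
            rest = total - responses[:, j]   # rest-of-test score, excluding item j
            discrimination.append(np.corrcoef(responses[:, j], rest)[0, 1])
        return difficulty, np.array(discrimination)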
Long-term predictions using natural analogues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, R.C.
1995-09-01
One of the unique and scientifically most challenging aspects of nuclear waste isolation is the extrapolation of short-term laboratory data (hours to years) to the long time periods (10^3-10^5 years) required by regulatory agencies for performance assessment. The direct validation of these extrapolations is not possible, but methods must be developed to demonstrate compliance with government regulations and to satisfy the lay public that there is a demonstrable and reasonable basis for accepting the long-term extrapolations. Natural systems (e.g., "natural analogues") provide perhaps the only means of partial "validation," as well as data that may be used directly in the models that are used in the extrapolation. Natural systems provide data on very large spatial (nm to km) and temporal (10^3-10^8 years) scales and in highly complex terranes in which unknown synergisms may affect radionuclide migration. This paper reviews the application (and most importantly, the limitations) of data from natural analogue systems to the "validation" of performance assessments.
Hamilton, David P; Carey, Cayelan C.; Arvola, Lauri; Arzberger, Peter; Brewer, Carol A.; Cole, Jon J; Gaiser, Evelyn; Hanson, Paul C.; Ibelings, Bas W; Jennings, Eleanor; Kratz, Tim K; Lin, Fang-Pang; McBride, Christopher G.; de Motta Marques, David; Muraoka, Kohji; Nishri, Ami; Qin, Boqiang; Read, Jordan S.; Rose, Kevin C.; Ryder, Elizabeth; Weathers, Kathleen C.; Zhu, Guangwei; Trolle, Dennis; Brookes, Justin D
2014-01-01
A Global Lake Ecological Observatory Network (GLEON; www.gleon.org) has formed to provide a coordinated response to the need for scientific understanding of lake processes, utilising technological advances available from autonomous sensors. The organisation embraces a grassroots approach to engage researchers from varying disciplines, sites spanning geographic and ecological gradients, and novel sensor and cyberinfrastructure to synthesise high-frequency lake data at scales ranging from local to global. The high-frequency data provide a platform to rigorously validate process-based ecological models because model simulation time steps are better aligned with sensor measurements than with lower-frequency, manual samples. Two case studies from Trout Bog, Wisconsin, USA, and Lake Rotoehu, North Island, New Zealand, are presented to demonstrate that in the past, ecological model outputs (e.g., temperature, chlorophyll) have been relatively poorly validated based on a limited number of directly comparable measurements, both in time and space. The case studies demonstrate some of the difficulties of mapping sensor measurements directly to model state variable outputs as well as the opportunities to use deviations between sensor measurements and model simulations to better inform process understanding. Well-validated ecological models provide a mechanism to extrapolate high-frequency sensor data in space and time, thereby potentially creating a fully 3-dimensional simulation of key variables of interest.
Exploration of Korean Students' Scientific Imagination Using the Scientific Imagination Inventory
NASA Astrophysics Data System (ADS)
Mun, Jiyeong; Mun, Kongju; Kim, Sung-Won
2015-09-01
This article reports on the study of the components of scientific imagination and describes the scales used to measure scientific imagination in Korean elementary and secondary students. In this study, we developed an inventory, which we call the Scientific Imagination Inventory (SII), in order to examine aspects of scientific imagination. We identified three conceptual components of scientific imagination, which were composed of (1) scientific sensitivity, (2) scientific creativity, and (3) scientific productivity. We administered SII to 662 students (4th-8th grades) and confirmed validity and reliability using exploratory factor analysis and Cronbach α coefficient. The characteristics of Korean elementary and secondary students' overall scientific imagination and difference across gender and grade level are discussed in the results section.
Cognition and the menopause transition.
Maki, Pauline M; Henderson, Victor W
2016-07-01
Complaints about forgetfulness, "brain fog," and difficulty concentrating are common in women transitioning through menopause. Women with these cognitive complaints often express concern about whether these problems are normal, related to menopause, or represent a symptom of Alzheimer disease or another serious cognitive disorder. In this Practice Pearl, we provide a brief summary of the scientific literature on the frequency of cognitive complaints in midlife women, the validity of complaints in relation to performance on standardized cognitive tests, and the influence of menopause on cognitive performance. We then offer recommendations for healthcare providers and women to address cognitive concerns.
Validation and Error Characterization for the Global Precipitation Measurement
NASA Technical Reports Server (NTRS)
Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.
2003-01-01
The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle with specific goals of improving the understanding and the predictions of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of Global Precipitation Measurement to provide quantitative estimates on the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. With the identification of error sources, improvements will be made to the respective calibration, assumption, or algorithm. The instrumentation and techniques of the Supersites will be discussed. The GPM core satellite, with its dual-frequency radar and conically scanning radiometer, will provide insight into precipitation drop-size distributions and potentially increased measurement capabilities of light rain and snowfall. The ground validation program will include instrumentation and techniques commensurate with these new measurement capabilities.
NASA Astrophysics Data System (ADS)
Antrakusuma, B.; Masykuri, M.; Ulfa, M.
2018-04-01
The evolution of Android technology can be applied to chemistry learning; one complex chemistry concept is solubility equilibrium, which requires science process skills (SPS). This study aims to: 1) characterize a scientific-based chemistry Android module for empowering SPS, and 2) establish the validity of the module based on content validity and a feasibility test. This research uses a Research and Development (R&D) approach. Research subjects were 135 students and three teachers at three high schools in Boyolali, Central Java. The content validity of the module was tested by seven experts using Aiken's V technique, and the module's feasibility was tested with students and teachers in each school. The chemistry module can be accessed using an Android device. Content validation of the module yielded V = 0.89 (valid), and the feasibility test obtained 81.63% (from students) and 73.98% (from teachers), indicating that the module meets good criteria.
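The content-validity figure above can be read against Aiken's V formula, V = sum(s) / (n(c-1)), where s is each expert's rating minus the lowest rating category, n is the number of raters, and c is the number of rating categories. A minimal sketch with a hypothetical rating vector for one item (the 1-5 scale is an assumption, not stated in the abstract):

    def aikens_v(ratings, lo=1, c=5):
        # ratings: one rating per expert on a lo..(lo + c - 1) scale
        s = [r - lo for r in ratings]
        return sum(s) / (len(ratings) * (c - 1))

    # Example: seven hypothetical expert ratings for one item
    print(round(aikens_v([5, 4, 5, 5, 4, 5, 4]), 2))  # -> 0.89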
Recent Evolutions of the GEOSCOPE Broadband Seismic Observatory
NASA Astrophysics Data System (ADS)
Stutzmann, E.; Vallee, M.; Zigone, D.; Bonaime, S.; Thore, J. Y.; Pesqueira, F.; Pardo, C.; Bernard, A.; Maggi, A.; Vincent, D.; Sayadi, J.
2017-12-01
The GEOSCOPE observatory provides 36 years of continuous broadband data to the scientific community. The 32 operational GEOSCOPE stations are installed in 17 countries, across all continents and on islands throughout the oceans. They are equipped with three-component very broadband seismometers (STS1 or STS2) and 24- or 26-bit digitizers (Q330HR). Seismometers are installed with warpless base plates, which decrease long-period noise on horizontal components by up to 15 dB. All stations send data in real time to the IPGP data center, from which they are automatically transmitted to other data centers (IRIS-DMC and RESIF) and tsunami warning centers. Recent improvements include a new station in Wallis and Futuna (FUTU, South-Western Pacific Ocean) and the re-installation of the WUS station in Western China. Data from the stations are technically validated by IPGP (25 stations) or EOST (6 stations) in order to check their continuity and integrity. A scientific data validation is also performed by analyzing the seismic noise level of the continuous data and by comparing real and synthetic earthquake waveforms (body waves). After these validations, data are archived by the IPGP data center in Paris. They are made available to the international scientific community through different interfaces (see details on http://geoscope.ipgp.fr). All GEOSCOPE data are in MiniSEED format but follow various conventions. Important technical work is being done to homogenize the MiniSEED formats of the whole GEOSCOPE database, in order to ease data duplication at the IRIS-DMC and RESIF data centers. The GEOSCOPE observatory also provides near-real-time information on large world seismicity (above magnitude 5.5-6) through the automated use of the SCARDEC method. Earthquake parameters (depth, moment magnitude, focal mechanism, source time function) are determined about 45 minutes after the occurrence of the event. A specific webpage is then generated, which also includes information for a non-seismologist audience (past seismicity, foreshocks and aftershocks, 3D representations of the fault motion…). This information is also disseminated in real time through mailing lists and social networks. Examples for recent earthquakes can be seen at http://geoscope.ipgp.fr/index.php/en/data/earthquake-data/latest-earthquakes.
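The MiniSEED homogenization described above could, for instance, be scripted with ObsPy, a widely used Python seismology library; the file names and the STEIM2 encoding choice below are assumptions for illustration, not GEOSCOPE's actual procedure:

    from obspy import read

    # Read a MiniSEED file written with heterogeneous conventions,
    # then rewrite it with one uniform encoding
    st = read("GEOSCOPE_station_day.mseed")   # hypothetical input path
    st.write("homogenized.mseed", format="MSEED", encoding="STEIM2")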
Recent evolutions of the GEOSCOPE broadband seismic observatory
NASA Astrophysics Data System (ADS)
Vallee, M.; Leroy, N.; Bonaime, S.; Zigone, D.; Stutzmann, E.; Thore, J. Y.; Pardo, C.; Bernard, A.; Pesqueira, F.; Maggi, A.; Vincent, D.
2016-12-01
The GEOSCOPE observatory provides 34 years of continuous broadband data to the scientific community. The 31 operational GEOSCOPE stations are installed in 17 countries, across all continents and on islands throughout the oceans. They are equipped with three-component very broadband seismometers (STS1 or STS2) and 24- or 26-bit digitizers (Q330HR). Seismometers are installed with warpless base plates, which decrease long-period noise on horizontal components by up to 15 dB. All stations send data in real time to the GEOSCOPE data center, from which they are automatically transmitted to other data centers (IRIS-DMC and RESIF) and tsunami warning centers. In 2016, a new station was installed in Wallis and Futuna (FUTU, South-Western Pacific Ocean), and final work is underway to reinstall the WUS station in Western China. Data from the stations are technically validated by IPGP (25 stations) or EOST (6 stations) in order to check their continuity and integrity. A scientific data validation is also performed by analyzing the seismic noise level of the continuous data and by comparing real and synthetic earthquake waveforms (body waves). After these validations, data are archived by the GEOSCOPE data center in Paris. They are made available to the international scientific community through different interfaces (see details on http://geoscope.ipgp.fr). Important technical work is now underway to homogenize the data formats of the whole GEOSCOPE database, in order to ease data duplication at the IRIS-DMC and RESIF data centers. The GEOSCOPE broadband seismic observatory also provides near-real-time information on large world seismicity (above magnitude 5.5-6) through the automated application of the SCARDEC method. By using global data from the FDSN - in particular from GEOSCOPE and IRIS/USGS stations - earthquake source parameters (depth, moment magnitude, focal mechanism, source time function) are determined about 45 minutes after the occurrence of the event. A specific webpage is then generated for each earthquake, which also includes information for a non-seismologist audience (past seismicity, foreshocks and aftershocks, 3D representations of the fault motion…). Examples for recent earthquakes can be seen at http://geoscope.ipgp.fr/index.php/en/data/earthquake-data/latest-earthquakes
The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation
2013-05-20
Charles River’s Metronome framework. This framework is built on top of the same Equinox libraries that the popular Eclipse Development Environment uses... the names are fully visible (see Figure 8). The Metronome framework also provides functionality for undo and redo, so the user can easily correct... mistakes. (Figure 8 caption: Changing pane sizes and layouts in the new Metronome-enhanced MAT.) This period, we also improved the MAT project file format so...
Maclean, Katherine A.; Leoutsakos, Jeannie-Marie S.; Johnson, Matthew W.; Griffiths, Roland R.
2012-01-01
A large body of historical evidence describes the use of hallucinogenic compounds, such as psilocybin mushrooms, for religious purposes. But few scientific studies have attempted to measure or characterize hallucinogen-occasioned spiritual experiences. The present study examined the factor structure of the Mystical Experience Questionnaire (MEQ), a self-report measure that has been used to assess the effects of hallucinogens in laboratory studies. Participants (N=1602) completed the 43-item MEQ in reference to a mystical or profound experience they had had after ingesting psilocybin. Exploratory factor analysis of the MEQ retained 30 items and revealed a 4-factor structure covering the dimensions of classic mystical experience: unity, noetic quality, sacredness (F1); positive mood (F2); transcendence of time/space (F3); and ineffability (F4). MEQ factor scores showed good internal reliability and correlated with the Hood Mysticism Scale, indicating convergent validity. Participants who endorsed having had a mystical experience on psilocybin, compared to those who did not, had significantly higher factor scores, indicating construct validity. The 4-factor structure was confirmed in a second sample (N=440) and demonstrated superior fit compared to alternative models. The results provide initial evidence of the validity, reliability, and factor structure of a 30-item scale for measuring single, hallucinogen-occasioned mystical experiences, which may be a useful tool in the scientific study of mysticism. PMID:23316089
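The internal-reliability check reported above is conventionally a Cronbach's alpha computed over the item scores within each factor. A minimal numpy sketch (generic, not the study's code):

    import numpy as np

    def cronbach_alpha(items):
        # items: (n_respondents, n_items) array of item scores for one scale
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)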
QSAR modeling: where have you been? Where are you going to?
Cherkasov, Artem; Muratov, Eugene N; Fourches, Denis; Varnek, Alexandre; Baskin, Igor I; Cronin, Mark; Dearden, John; Gramatica, Paola; Martin, Yvonne C; Todeschini, Roberto; Consonni, Viviana; Kuz'min, Victor E; Cramer, Richard; Benigni, Romualdo; Yang, Chihae; Rathman, James; Terfloth, Lothar; Gasteiger, Johann; Richard, Ann; Tropsha, Alexander
2014-06-26
Quantitative structure-activity relationship modeling is one of the major computational tools employed in medicinal chemistry. However, throughout its entire history it has drawn both praise and criticism concerning its reliability, limitations, successes, and failures. In this paper, we discuss (i) the development and evolution of QSAR; (ii) the current trends, unsolved problems, and pressing challenges; and (iii) several novel and emerging applications of QSAR modeling. Throughout this discussion, we provide guidelines for QSAR development, validation, and application, which are summarized in best practices for building rigorously validated and externally predictive QSAR models. We hope that this Perspective will help communications between computational and experimental chemists toward collaborative development and use of QSAR models. We also believe that the guidelines presented here will help journal editors and reviewers apply more stringent scientific standards to manuscripts reporting new QSAR studies, as well as encourage the use of high quality, validated QSARs for regulatory decision making.
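One concrete reading of the "externally predictive" requirement above is to hold out compounds never used in model building and report the coefficient of determination on them. A minimal scikit-learn sketch with synthetic placeholder descriptors and activities (not a real QSAR dataset, and not the authors' workflow):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))                       # placeholder molecular descriptors
    y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=200)  # placeholder activities

    # External set: compounds never seen during model building
    X_train, X_ext, y_train, y_ext = train_test_split(X, y, test_size=0.25,
                                                      random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
    print("external R^2:", r2_score(y_ext, model.predict(X_ext)))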
Young, Jasmine Y.; Westbrook, John D.; Feng, Zukang; Sala, Raul; Peisach, Ezra; Oldfield, Thomas J.; Sen, Sanchayita; Gutmanas, Aleksandras; Armstrong, David R.; Berrisford, John M.; Chen, Li; Chen, Minyu; Di Costanzo, Luigi; Dimitropoulos, Dimitris; Gao, Guanghua; Ghosh, Sutapa; Gore, Swanand; Guranovic, Vladimir; Hendrickx, Pieter MS; Hudson, Brian P.; Igarashi, Reiko; Ikegawa, Yasuyo; Kobayashi, Naohiro; Lawson, Catherine L.; Liang, Yuhe; Mading, Steve; Mak, Lora; Mir, M. Saqib; Mukhopadhyay, Abhik; Patwardhan, Ardan; Persikova, Irina; Rinaldi, Luana; Sanz-Garcia, Eduardo; Sekharan, Monica R.; Shao, Chenghua; Swaminathan, G. Jawahar; Tan, Lihua; Ulrich, Eldon L.; van Ginkel, Glen; Yamashita, Reiko; Yang, Huanwang; Zhuravleva, Marina A.; Quesada, Martha; Kleywegt, Gerard J.; Berman, Helen M.; Markley, John L.; Nakamura, Haruki; Velankar, Sameer; Burley, Stephen K.
2017-01-01
OneDep, a unified system for deposition, biocuration, and validation of experimentally determined structures of biological macromolecules to the Protein Data Bank (PDB) archive, has been developed as a global collaboration by the Worldwide Protein Data Bank (wwPDB) partners. This new system was designed to ensure that the wwPDB could meet the evolving archiving requirements of the scientific community over the coming decades. OneDep unifies deposition, biocuration, and validation pipelines across all wwPDB, EMDB, and BMRB deposition sites with improved focus on data quality and completeness in these archives, while supporting growth in the number of depositions and increases in their average size and complexity. In this paper, we describe the design, functional operation, and supporting infrastructure of the OneDep system, and provide initial performance assessments. PMID:28190782
QSAR Modeling: Where have you been? Where are you going to?
Cherkasov, Artem; Muratov, Eugene N.; Fourches, Denis; Varnek, Alexandre; Baskin, Igor I.; Cronin, Mark; Dearden, John; Gramatica, Paola; Martin, Yvonne C.; Todeschini, Roberto; Consonni, Viviana; Kuz'min, Victor E.; Cramer, Richard; Benigni, Romualdo; Yang, Chihae; Rathman, James; Terfloth, Lothar; Gasteiger, Johann; Richard, Ann; Tropsha, Alexander
2014-01-01
Quantitative Structure-Activity Relationship modeling is one of the major computational tools employed in medicinal chemistry. However, throughout its entire history it has drawn both praise and criticism concerning its reliability, limitations, successes, and failures. In this paper, we discuss: (i) the development and evolution of QSAR; (ii) the current trends, unsolved problems, and pressing challenges; and (iii) several novel and emerging applications of QSAR modeling. Throughout this discussion, we provide guidelines for QSAR development, validation, and application, which are summarized in best practices for building rigorously validated and externally predictive QSAR models. We hope that this Perspective will help communications between computational and experimental chemists towards collaborative development and use of QSAR models. We also believe that the guidelines presented here will help journal editors and reviewers apply more stringent scientific standards to manuscripts reporting new QSAR studies, as well as encourage the use of high quality, validated QSARs for regulatory decision making. PMID:24351051
2014-09-01
The NATO Science and Technology Organization. Science & Technology (S&T) in the NATO context is defined as the selective and rigorous... generation and application of state-of-the-art, validated knowledge for defence and security purposes. S&T activities embrace scientific research... engineering, operational research and analysis, synthesis, integration and validation of knowledge derived through the scientific method. In NATO, S&T is...
Science and Creationism: A View from the National Academy of Sciences.
ERIC Educational Resources Information Center
National Academy of Sciences, Washington, DC.
Five central scientific issues are critical to consideration of the treatment in school curricula of the origin and evolution of the universe and of life on earth. These issues are: (1) the nature of science; (2) scientific evidence on the origin of the universe and the earth; (3) the consistent and validated scientific evidence for biological…
ERIC Educational Resources Information Center
Iding, Marie; Klemm, E. Barbara
2005-01-01
The present study addresses the need for teachers to critically evaluate the credibility, validity, and cognitive load associated with scientific information on Web sites, in order to effectively teach students to evaluate scientific information on the World Wide Web. A line of prior research investigating high school and university students'…
ERIC Educational Resources Information Center
Yang, Kuay-Keng; Lin, Shu-Fen; Hong, Zuway-R; Lin, Huann-shyang
2016-01-01
The purposes of this study were to (a) develop and validate instruments to assess elementary students' scientific creativity and science inquiry, (b) investigate the relationship between the two competencies, and (c) compare the two competencies among different grade level students. The scientific creativity test was composed of 7 open-ended items…
NREL and CSIRO Validating Advanced Microgrid Control Solution
NREL and Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) are validating an advanced microgrid control solution. This technology helps hybrid microgrids to automatically recognize when solar
Hoffman, Steven J; Justicz, Victoria
2016-07-01
To develop and validate a method for automatically quantifying the scientific quality and sensationalism of individual news records. After retrieving 163,433 news records mentioning the Severe Acute Respiratory Syndrome (SARS) and H1N1 pandemics, a maximum entropy model for inductive machine learning was used to identify relationships among 500 randomly sampled news records that correlated with systematic human assessments of their scientific quality and sensationalism. These relationships were then computationally applied to automatically classify 10,000 additional randomly sampled news records. The model was validated by randomly sampling 200 records and comparing human assessments of them to the computer assessments. The computer model correctly assessed the relevance of 86% of news records, the quality of 65% of records, and the sensationalism of 73% of records, as compared to human assessments. Overall, the scientific quality of SARS and H1N1 news media coverage had potentially important shortcomings, but coverage was not too sensationalizing. Coverage slightly improved between the two pandemics. Automated methods can evaluate news records faster, cheaper, and possibly better than humans. The specific procedure implemented in this study can at the very least identify subsets of news records that are far more likely to have particular scientific and discursive qualities. Copyright © 2016 Elsevier Inc. All rights reserved.
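A maximum entropy model of the kind described above is, in practice, equivalent to a (multinomial) logistic regression over text features. A minimal scikit-learn sketch of such a pipeline, with toy stand-ins for the human-coded training records (the features and labels here are illustrative, not the study's):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy stand-ins for the 500 human-coded training records
    train_texts = ["randomized trial shows modest benefit, experts caution",
                   "miracle cure will shock you, doctors stunned"]
    train_labels = ["not_sensational", "sensational"]

    # Maximum entropy classification = logistic regression over n-gram features
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
    clf.fit(train_texts, train_labels)
    print(clf.predict(["new vaccine data reviewed by independent panel"]))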
Bret, Patrice
2016-04-01
Eighteenth-century scientific translation was not just a linguistic or intellectual affair. It included numerous material aspects requiring a social organization to marshal the indispensable human and non-human actors. Paratexts and actors' correspondence provide a good vantage point for information about aspects such as shipments and routes, processes of translation and language acquisition (dictionaries, grammars and other helpful materials, such as translated works in both languages), and text acquisition and dissemination (including authors' additions and corrections, oral presentations in academic meetings and announcements of forthcoming translations). The nature of scientific translation changed in France during the second half of the eighteenth century. Beside solitary translators, it also became a collective enterprise, dedicated to providing abridgements (Collection académique, 1755-79) or enriching the learned journals with full translations of the most recent foreign texts (Guyton de Morveau's 'Bureau de traduction de Dijon', devoted to chemistry and mineralogy, 1781-90). That new trend clearly had a decisive influence on the nature of the scientific press itself. As a way to establish science as a social activity in the provincial capital of Dijon, translation required a local and international network for acquiring the linguistic and scientific expertise, along with the original texts, as quickly as possible. Laboratory results and mineralogical observations were used to compare material facts (colour, odour, shape of crystals, etc.) with those described in the original text. By providing a double kind of validation - with both the experiments and the translations - the laboratory thus came to play a major role in translation.
Validity threats: overcoming interference with proposed interpretations of assessment data.
Downing, Steven M; Haladyna, Thomas M
2004-03-01
Factors that interfere with the ability to interpret assessment scores or ratings in the proposed manner threaten validity. To be interpreted in a meaningful manner, all assessments in medical education require sound, scientific evidence of validity. The purpose of this essay is to discuss 2 major threats to validity: construct under-representation (CU) and construct-irrelevant variance (CIV). Examples of each type of threat for written, performance and clinical performance examinations are provided. The CU threat to validity refers to undersampling the content domain. Using too few items, cases or clinical performance observations to adequately generalise to the domain represents CU. Variables that systematically (rather than randomly) interfere with the ability to meaningfully interpret scores or ratings represent CIV. Issues such as flawed test items written at inappropriate reading levels or statistically biased questions represent CIV in written tests. For performance examinations, such as standardised patient examinations, flawed cases or cases that are too difficult for student ability contribute CIV to the assessment. For clinical performance data, systematic rater error, such as halo or central tendency error, represents CIV. The term face validity is rejected as representative of any type of legitimate validity evidence, although the fact that the appearance of the assessment may be an important characteristic other than validity is acknowledged. There are multiple threats to validity in all types of assessment in medical education. Methods to eliminate or control validity threats are suggested.
NASA Astrophysics Data System (ADS)
Salvi, S.; Trasatti, E.; Rubbia, G.; Romaniello, V.; Spinetti, C.; Corradini, S.; Merucci, L.
2016-12-01
The EU's H2020 EVER-EST project is dedicated to the realization of a Virtual Research Environment (VRE) for Earth Science researchers during 2015-2018. EVER-EST implements state-of-the-art technologies in the areas of Earth Science data catalogues, data access/processing and long-term data preservation, together with models, techniques and tools for computational methods such as scientific workflows. The VRE is designed with the aim of providing the Earth Science user community with an innovative virtual environment to enhance their ability to interoperate and share knowledge and experience, also exploiting the Research Object concept. The GEO Geohazard Supersites community is one of the four research communities chosen to validate the e-infrastructure. EVER-EST will help exploit the full potential of the GEO Geohazard Supersites and Natural Laboratories (GSNL) initiative, demonstrating the use case in the permanent Supersites of Mt Etna, Campi Flegrei-Vesuvius, and the Icelandic volcanoes. Besides providing tools for active volcano monitoring and studies, we intend to demonstrate how a more organized and collaborative research environment, such as a VRE, can improve the quality of the scientific research on the Geohazard Supersites, addressing at the same time the problem of the slow uptake of scientific research findings in Disaster Risk Management. Presently, the full exploitation of the in situ and satellite data made available for each Supersite is delayed by difficult access (especially for researchers in developing countries) to intensive processing and modeling capabilities. EVER-EST is designed to provide these means and also a friendly virtual environment for the easy transfer of scientific knowledge as soon as it is acquired, promoting collaboration among researchers located in distant regions of the world. A further benefit will be to increase the societal impact of the scientific advancements obtained in the Supersites, allowing a more uniform interface towards the different user communities, who will use part of the services provided by EVER-EST during research result uptake. We show a few test cases of the use of the Geohazard Supersite VRE at its current state of development, and outline its future development.
Update of Standard Practices for New Method Validation in Forensic Toxicology.
Wille, Sarah M R; Coucke, Wim; De Baere, Thierry; Peters, Frank T
2017-01-01
International agreement concerning validation guidelines is important to obtain quality forensic bioanalytical research and routine applications, as it all starts with the reporting of reliable analytical data. Standards for fundamental validation parameters are provided in guidelines such as those from the US Food and Drug Administration (FDA), the European Medicines Agency (EMA), the German-speaking Gesellschaft für Toxikologie und Forensische Chemie (GTFCh) and the Scientific Working Group of Forensic Toxicology (SWGTOX). These validation parameters include selectivity, matrix effects, method limits, calibration, accuracy and stability, as well as other parameters such as carryover, dilution integrity and incurred sample reanalysis. It is, however, not easy for laboratories to implement these guidelines into practice, as these international guidelines remain nonbinding protocols that depend on the applied analytical technique and need to be updated according to the analyst's method requirements and the application type. In this manuscript, a review of the current guidelines and literature concerning bioanalytical validation parameters in a forensic context is given and discussed. In addition, suggestions for the experimental set-up, the pros and cons of statistical approaches, and adequate acceptance criteria for the validation of bioanalytical applications are given. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Brouard, Benoit; Bardo, Pascale; Bonnet, Clément; Mounier, Nicolas; Vignot, Marina; Vignot, Stéphane
2016-11-01
Mobile applications represent promising tools in management of chronic diseases, both for patients and healthcare professionals, and especially in oncology. Among the large number of mobile health (mhealth) applications available in mobile stores, it could be difficult for users to identify the most relevant ones. This study evaluated the business model and the scientific validation for mobile applications related to oncology. A systematic review was performed over the two major marketplaces. Purpose, scientific validation, and source of funding were evaluated according to the description of applications in stores. Results were stratified according to targeted audience (general population/patients/healthcare professionals). Five hundred and thirty-nine applications related to oncology were identified: 46.8% dedicated to healthcare professionals, 31.5% to general population, and 21.7% to patients. A lack of information about healthcare professionals' involvement in the development process was noted, since only 36.5% of applications mentioned an obvious scientific validation. Most apps were free (72.2%) and without explicit support by industry (94.2%). There is a need to enforce independent review of mhealth applications in oncology. The economic model could be questioned and the source of funding should be clarified. Meanwhile, patients and healthcare professionals should remain cautious about applications' contents. Key messages: A systematic review was performed to describe the mobile applications related to oncology and it revealed a lack of information on scientific validation and funding. Independent scientific review and the reporting of conflicts of interest should be encouraged. Users, and all health professionals, should be aware that health applications, whatever the quality of their content, do not actually embrace such an approach.
NASA Astrophysics Data System (ADS)
Bhakti, Satria Seto; Samsudin, Achmad; Chandra, Didi Teguh; Siahaan, Parsaoran
2017-05-01
The aim of this research is to develop multiple-choice test items as tools for measuring scientific generic skills on the solar system. To achieve this aim, the researchers used the ADDIE model, consisting of Analysis, Design, Development, Implementation, and Evaluation, as the research method. The scientific generic skills were limited to five indicators: (1) indirect observation, (2) awareness of scale, (3) logical inference, (4) causal relations, and (5) mathematical modeling. The participants were 32 students at a junior high school in Bandung. The results show that the constructed multiple-choice test items were declared valid by the expert validators and, after testing, were able to measure scientific generic skills on the solar system.
NASA Astrophysics Data System (ADS)
Silva, F.; Maechling, P. J.; Goulet, C.; Somerville, P.; Jordan, T. H.
2013-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving SCEC researchers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform is open-source scientific software that can generate broadband (0-100Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Broadband Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms of a historical earthquake for which observed strong ground motion data is available. Also in validation mode, the Broadband Platform calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. During the past year, we have modified the software to enable the addition of a large number of historical events, and we are now adding validation simulation inputs and observational data for 23 historical events covering the Eastern and Western United States, Japan, Taiwan, Turkey, and Italy. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. By establishing an interface between scientific modules with a common set of input and output files, the Broadband Platform facilitates the addition of new scientific methods, which are written by earth scientists in a number of languages such as C, C++, Fortran, and Python. The Broadband Platform's modular design also supports the reuse of existing software modules as building blocks to create new scientific methods. Additionally, the Platform implements a wrapper around each scientific module, converting input and output files to and from the specific formats required (or produced) by individual scientific codes. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes the addition of 3 new simulation methods and several new data products, such as map and distance-based goodness of fit plots. Finally, as the number and complexity of scenarios simulated using the Broadband Platform increase, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.
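As a generic illustration of a seismogram goodness-of-fit measure (not necessarily one of the Broadband Platform's actual metrics), here is a root-mean-square misfit between observed and synthetic traces of equal length and sampling; all names are illustrative:

    import numpy as np

    def rms_misfit(observed, synthetic):
        # Root-mean-square misfit between two equally sampled traces
        observed = np.asarray(observed, dtype=float)
        synthetic = np.asarray(synthetic, dtype=float)
        return float(np.sqrt(np.mean((observed - synthetic) ** 2)))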
An appraisal of the DSM-III system.
Adamson, J
1989-05-01
DSM-III is a major document in the history of psychiatry. The DSM-III system is here seen as an instrument that promotes the scientific development of psychiatry and the clarity of communication among psychiatrists. However, a major theme of this review is that reliability does not ensure validity. While making this point, it is recognized that the major defects in the DSM-III system result from scientific inadequacies inherent in present-day psychiatry. This review may also be taken as an amplification of the statement in DSM-III-R that it is not a textbook. In particular, the data required to arrive at diagnoses in the DSM-III system do not provide sufficient information to arrive at a comprehensive biopsychosocial case formulation, a shortcoming that has relevance for teaching and clinical practice.
Progress towards development of an HIV vaccine: report of the AIDS Vaccine 2009 Conference.
Ross, Anna Laura; Bråve, Andreas; Scarlatti, Gabriella; Manrique, Amapola; Buonaguro, Luigi
2010-05-01
The search for an HIV/AIDS vaccine is steadily moving ahead, generating and validating new concepts in terms of novel vectors for antigen delivery and presentation, new vaccine and adjuvant strategies, alternative approaches to designing HIV-1 antigens that elicit protective cross-neutralising antibodies, and identification of key mechanisms in HIV infection and modulation of the immune system. All these different perspectives are contributing to the unprecedented challenge of developing a protective HIV-1 vaccine. The high scientific value of this massive effort is its great impact on vaccinology as a whole, providing invaluable scientific information for the current and future development of new preventive vaccines, as well as knowledge-based therapeutic vaccines for infectious diseases and cancer. Copyright 2010 Elsevier Ltd. All rights reserved.
Hesselbach, Renee A; Petering, David H; Berg, Craig A; Tomasiewicz, Henry; Weber, Daniel
2012-12-01
This article presents a detailed guide for high school through graduate level instructors that leads students to write effective and well-organized scientific papers. Interesting research emerges from the ability to ask questions, define problems, design experiments, analyze and interpret data, and make critical connections. This process is incomplete unless new results are communicated to others, because science fundamentally requires peer review and criticism to validate or discard proposed new knowledge. Thus, a concise and clearly written research paper is a critical step in the scientific process and is important for young researchers as they are mastering how to express scientific concepts and understanding. Moreover, learning to write a research paper provides a tool to improve science literacy as indicated in the National Research Council's National Science Education Standards (1996) and A Framework for K-12 Science Education (2011), the underlying foundation for the Next Generation Science Standards currently being developed. Background information explains the importance of peer review and communicating results, along with details of each critical component: the Abstract, Introduction, Methods, Results, and Discussion. Specific steps essential to helping students write clear and coherent research papers that follow a logical format, use effective communication, and develop scientific inquiry are described.
Assessing altimetry close to the coast
NASA Astrophysics Data System (ADS)
Quartly, G. D.; Nencioli, F.; Conley, D.; Abdalla, S.
2017-10-01
Radar altimetry provides measurements of sea surface elevation, wind speed and wave height, which are used operationally by many agencies and businesses, as well as for scientific research to understand changes in the ocean-atmosphere interface. For the data to be trustworthy, they need to be assessed for consistency and for bias relative to various validation datasets. Sentinel-3A, launched in Feb. 2016, promises, through new technology, to be better able to retrieve useful measurements in the coastal zone; the purpose of this paper is to develop ideas on how the performance of this instrument can be assessed in that specific environment. We investigate the magnitude of short-term variability in wave height and range, and explain how two validation facilities in the southwest UK may be used.
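One simple way to quantify the short-term variability mentioned above is to difference consecutive along-track samples. The sketch below is illustrative only -- the 1 Hz significant-wave-height values are made up, not Sentinel-3A data -- and assumes uncorrelated noise between adjacent samples.

    import numpy as np

    # Hypothetical 1 Hz along-track significant wave height (m) near the coast
    swh = np.array([2.1, 2.0, 2.2, 2.1, 1.9, 2.0, 2.3, 2.2, 2.1, 2.0])

    # First differences isolate sample-to-sample variability; dividing the
    # standard deviation of the differences by sqrt(2) converts it to a
    # per-sample noise estimate under the uncorrelated-noise assumption.
    noise = np.diff(swh).std(ddof=1) / np.sqrt(2.0)
    print(f"Estimated SWH noise: {noise:.2f} m")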
STS-41 crewmembers conduct DSO 0472 Intraocular Pressure on OV-103's middeck
1990-10-10
STS-41 crewmembers conduct Detailed Supplementary Objective (DSO) 0472 Intraocular Pressure on the middeck of Discovery, Orbiter Vehicle (OV) 103. Mission Specialist (MS) William M. Shepherd rests his head on the stowed treadmill while Pilot Robert D. Cabana, holding Shepherd's eye open, prepares to measure Shepherd's intraocular pressure using a tono pen (in his right hand). Objectives include: establishing a database of changes in intraocular pressures that can be used to evaluate crew health; validating ten-degree head-down bedrest as a model for cephalad fluid shifts in microgravity; facilitating the interpretation of data by providing a quantitative measure of microgravity-induced cephalad fluid shifts; and validating the tono pen as an effective tool for diagnostic and scientific data collection.
Bamidis, P D; Lithari, C; Konstantinidis, S T
2010-01-01
With the number of scientific papers published in journals, conference proceedings, and the international literature ever increasing, authors and reviewers not only benefit from an abundance of information but are also continuously confronted with the risks associated with erroneous copying of another's material. In parallel, Information Communication Technology (ICT) tools provide researchers with novel and increasingly effective ways to analyze and present their work. Software tools for statistical analysis offer scientists the chance to validate their work and enhance the quality of published papers. Moreover, from the reviewer's and the editor's perspective, it is now possible to check the (text-content) originality of a scientific article with automated software tools for plagiarism detection. In this paper, we provide a step-by-step demonstration of two categories of tools, namely, statistical analysis and plagiarism detection. The aim is not to come up with a specific tool recommendation, but rather to provide useful guidelines on the proper use and efficiency of either category of tools. In the context of this special issue, this paper offers a useful tutorial on specific problems concerned with scientific writing and review discourse. A specific neuroscience experimental case example is utilized to illustrate the young researcher's statistical analysis burden, while a test scenario is purpose-built using open access journal articles to exemplify the use and comparative outputs of seven plagiarism detection software pieces. PMID:21487489
Yue, Lilly Q; Campbell, Gregory; Lu, Nelson; Xu, Yunling; Zuckerman, Bram
2016-01-01
Regulatory decisions are made based on the assessment of risk and benefit of medical devices at the time of pre-market approval and subsequently, when the post-market risk-benefit balance needs reevaluation. Such assessments depend on scientific evidence obtained from pre-market studies, post-approval studies, post-market surveillance studies, patient perspective information, as well as other real world data such as national and international registries. Such registries provide real world evidence and are playing an increasingly important role in enhancing the safety and effectiveness evaluation of medical devices. While these registries provide large quantities of data reflecting real world practice and can potentially reduce the cost of clinical trials, challenges arise concerning (1) data quality adequate for regulatory decision-making, (2) bias introduced at every stage and aspect of a study, (3) the scientific validity of study designs, and (4) the reliability and interpretability of study results. This article discusses related statistical and regulatory challenges and opportunities, with examples encountered in medical device regulatory reviews.
Agnotology: learning from mistakes
NASA Astrophysics Data System (ADS)
Benestad, R. E.; Hygen, H. O.; van Dorland, R.; Cook, J.; Nuccitelli, D.
2013-05-01
Replication is an important part of science, and by repeating past analyses, we show that a number of papers in the scientific literature contain severe methodological flaws which can easily be identified through simple tests and demonstrations. In many cases, shortcomings are related to a lack of robustness, leading to results that are not universally valid but rather an artifact of a particular experimental set-up. Some examples presented here have ignored data that do not fit the conclusions, and in several other cases, inappropriate statistical methods have been adopted or conclusions have been based on misconceived physics. These papers may serve as educational case studies for why certain analytical approaches sometimes are unsuitable for providing reliable answers. They also highlight the merit of replication. A lack of common replication has repercussions for the quality of the scientific literature, and may be a reason why some controversial questions remain unanswered even when ignorance could be reduced. Agnotology is the study of such ignorance. Free, open-source software is provided for demonstration purposes.
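A minimal sketch of the kind of robustness test the paper advocates: refit a trend after leaving out blocks of the record and check whether the conclusion survives. The data, model, and leave-out scheme below are invented purely for demonstration.

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1980, 2020)
    series = 0.02 * (years - 1980) + rng.normal(0.0, 0.1, years.size)  # synthetic

    def slope(x, y):
        return np.polyfit(x, y, 1)[0]   # least-squares linear trend

    full = slope(years, series)
    # Leave out each decade in turn; a robust result should not hinge on one block.
    for start in range(1980, 2020, 10):
        keep = (years < start) | (years >= start + 10)
        print(f"without {start}s: slope = {slope(years[keep], series[keep]):+.4f}"
              f" (full record: {full:+.4f})")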
NASA Astrophysics Data System (ADS)
Rahayu, S.; Meyliana, M.; Arlingga, A.; Reny, R.; Siahaan, P.; Hernani, H.
2017-09-01
The aim of this study is to develop lesson plans and student worksheets based on socio-scientific issues for the environmental pollution topic for seventh-grade junior high school students. The environmental pollution topic is split into several subtopics, namely air pollution, water pollution, and soil pollution. The lesson plans were developed around socio-scientific issues in six stages, namely (1) Motivate; (2) Challenge; (3) Collect scientific evidence; (4) Analyse the evidence; (5) Build knowledge and make connections; and (6) Use evidence. The student worksheets contain articles on socio-scientific issues, practice exercises, and a few questions to gauge students' reasoning. The method used in this research is research and development (R&D). The development model used in this study is Plomp's model, which consists of five stages, namely (1) initial research; (2) design; (3) realization or construction; (4) testing, evaluation, and revision; and (5) implementation; the research reported here was limited to the fourth stage. The lesson plans and student worksheets based on socio-scientific issues were validated through expert validation. The results showed that the lesson plans and student worksheets based on socio-scientific issues for the pollution theme are highly suitable and can be applied in the science classroom.
The Nature of Science Instrument-Elementary (NOSI-E): the end of the road?
Peoples, Shelagh M; O'Dwyer, Laura M
2014-01-01
This research continues prior work published in this journal (Peoples, O'Dwyer, Shields and Wang, 2013). The first paper described the scale development, psychometric analyses and part-validation of a theoretically-grounded Rasch-based instrument, the Nature of Science Instrument-Elementary (NOSI-E). The NOSI-E was designed to measure elementary students' understanding of the Nature of Science (NOS). In the first paper, evidence was provided for three of the six validity aspects (content, substantive and generalizability) needed to support the construct validity of the NOSI-E. The research described in this paper examines two additional validity aspects (structural and external). The purpose of this study was to determine which of three competing internal models provides reliable, interpretable, and responsive measures of students' understanding of NOS. One postulate is that the NOS construct is unidimensional; alternatively, the NOS construct is composed of five independent unidimensional constructs (the consecutive approach). Lastly, the NOS construct is multidimensional and composed of five inter-related but separate dimensions. The vast body of evidence supported the claim that the NOS construct is multidimensional. Measures from the multidimensional model were positively related to student science achievement and students' perceptions of their classroom environment; this provided supporting evidence for the external validity aspect of the NOS construct. As US science education moves toward students learning science through engaging in authentic scientific practices and building learning progressions (NRC, 2012), it will be important to assess whether this new approach to teaching science is effective, and the NOSI-E may be used as a measure of the impact of this reform.
Development of Scientific Approach Based on Discovery Learning Module
NASA Astrophysics Data System (ADS)
Ellizar, E.; Hardeli, H.; Beltris, S.; Suharni, R.
2018-04-01
The scientific approach is a learning process designed to make students actively construct their own knowledge through the stages of the scientific method. The scientific approach can be implemented in the learning process through learning modules. One such learning model is discovery-based learning. Discovery learning is a model in which students discover valuable things through various activities, such as observation, experience, and reasoning. In practice, however, students' activity in constructing their own knowledge was not optimal, because the available learning modules were not in line with the scientific approach. The purpose of this study was to develop a scientific-approach, discovery-based learning module on acids and bases, and on electrolyte and non-electrolyte solutions. The development of these chemistry modules used the Plomp model with three main stages: preliminary research, the prototyping stage, and the assessment stage. The subjects of this research were 10th- and 11th-grade senior high school students (SMAN 2 Padang). Validity was assessed by expert chemistry lecturers and teachers. Practicality of the modules was tested through a questionnaire. Effectiveness was tested through an experimental procedure comparing student achievement between experiment and control groups. Based on the findings, it can be concluded that the developed scientific-approach, discovery-based learning module significantly improves students' learning of the acid-base and electrolyte topics. The data analysis indicated that the chemistry module was valid in content, construct, and presentation. The module also has a good level of practicality and fits the available time. The module was effective as well, because it helped students understand the learning material, as shown by the students' learning results. It can therefore be concluded that the discovery-learning, scientific-approach chemistry module on electrolyte and non-electrolyte solutions and on acids and bases for 10th- and 11th-grade senior high school students is valid, practical, and effective.
Strawman Philosophical Guide for Developing International Network of GPM GV Sites
NASA Technical Reports Server (NTRS)
Smith, Eric A.
2005-01-01
The creation of an international network of ground validation (GV) sites that will support the Global Precipitation Measurement (GPM) Mission's international science programme will require detailed planning of mechanisms for exchanging technical information, GV data products, and scientific results. An important component of the planning will be the philosophical guide under which the network will grow and emerge as a successful element of the GPM Mission. This philosophical guide should be able to serve the mission in developing scientific pathways for ground validation research which will ensure the highest possible quality measurement record of global precipitation products. The philosophical issues, in this regard, partly stem from the financial architecture under which the GV network will be developed, i.e., each participating country will provide its own financial support through committed institutions -- regardless of whether a national or international space agency is involved. At the 1st International GPM Ground Validation Workshop held in Abingdon, UK, in November 2003, most of the basic tenets behind the development of the international GV network were identified and discussed. Therefore, with this progress in mind, this presentation is intended to put forth a strawman philosophical guide supporting the development of the international network of GPM GV sites, noting that the initial progress has been reported in the Proceedings of the 1st International GPM GV Workshop -- available online. The central philosophical issues all flow from the fact that each participating institution can only bring to the table GV facilities and scientific personnel that are affordable to the sanctioning (funding) national agency (be that a research, research-support, or operational agency). This situation imposes on the network heterogeneity in the measuring sensors, data collection periods, data collection procedures, data latencies, and data reporting capabilities. Therefore, in order for the network to be effective in supporting the central scientific goals of the GPM mission, there must be a basic agreed-upon doctrine under which the network participants function vis-a-vis: (1) an overriding set of general scientific requirements, (2) a minimal set of policies governing the free flow of GV data between the scientific participants, (3) a few basic definitions concerning the prioritization of measurements and their respective value to the mission, (4) a few basic procedures concerning data formats, data reporting procedures, data access, and data archiving, and (5) a simple means to differentiate GV sites according to their level of effort and ability to perform near real-time data acquisition and data reporting tasks. Most important, should a site choose to operate as a near real-time data collection and distribution site, it would be expected to operate under a fairly narrowly defined protocol needed to ensure smooth GV support operations. This presentation will suggest measures responsive to items (1)-(5) from which to proceed. In addition, this presentation will seek to stimulate discussion and debate concerning how much heterogeneity is tolerable within the eventual GV site network, given that any individual GV site can only be considered scientifically useful if it supports the achievement of the central GPM Mission goals.
Only ground validation research that has a direct connection to the space mission should be considered justifiable given the overarching scientific goals of the mission. Therefore each site will have to seek some level of accommodation to what the GPM Mission requires in the way of retrieval error characterization, retrieval error detection and reporting, and generation of GV data products that support assessment and improvement of the mission's standard precipitation retrieval algorithms. These are all important scientific issues that will be best resolved in open scientific debate.
Information Quality in Regulatory Decision Making: Peer Review versus Good Laboratory Practice.
McCarty, Lynn S; Borgert, Christopher J; Mihaich, Ellen M
2012-07-01
There is an ongoing discussion on the provenance of toxicity testing data regarding how best to ensure its validity and credibility. A central argument is whether journal peer-review procedures are superior to Good Laboratory Practice (GLP) standards employed for compliance with regulatory mandates. We sought to evaluate the rationale for regulatory decision making based on peer-review procedures versus GLP standards. We examined pertinent published literature regarding how scientific data quality and validity are evaluated for peer review, GLP compliance, and development of regulations. Some contend that peer review is a coherent, consistent evaluative procedure providing quality control for experimental data generation, analysis, and reporting sufficient to reliably establish relative merit, whereas GLP is seen as merely a tracking process designed to thwart investigator corruption. This view is not supported by published analyses pointing to subjectivity and variability in peer-review processes. Although GLP is not designed to establish relative merit, it is an internationally accepted quality assurance, quality control method for documenting experimental conduct and data. Neither process is completely sufficient for establishing relative scientific soundness. However, changes occurring both in peer-review processes and in regulatory guidance resulting in clearer, more transparent communication of scientific information point to an emerging convergence in ensuring information quality. The solution to determining relative merit lies in developing a well-documented, generally accepted weight-of-evidence scheme to evaluate both peer-reviewed and GLP information used in regulatory decision making where both merit and specific relevance inform the process.
FDA regulation of labeling and promotional claims in therapeutic color vision devices: a tutorial.
Drum, Bruce
2004-01-01
The Food and Drug Administration (FDA) is responsible for determining whether medical device manufacturers have provided reasonable assurance, based on valid scientific evidence, that new devices are safe and effective for their intended use before they are introduced into the U.S. market. Most existing color vision devices pose so little risk that their manufacturers are not required to submit a premarket notification [510(k)] to FDA prior to market. However, even low-risk devices may not be acceptable if they are marketed on the basis of misleading or excessive claims. Although most color vision devices are diagnostic, two types that are therapeutic rather than diagnostic are colored lenses intended to improve deficient color vision and colored lenses intended to improve reading performance. Both of these devices have presented special regulatory challenges to FDA because the intended uses and effectiveness claims initially proposed by the manufacturers were not supported by valid scientific evidence. In each instance, however, FDA worked with the manufacturer to restrict labeling and promotional claims in ways that were consistent with the available device performance data and that allowed for the legal marketing of the device.
NASA Astrophysics Data System (ADS)
Sheffield Guy, L.; Wiggins, H. V.; Schreck, M. B.; Metcalf, V. K.
2017-12-01
The Sea Ice for Walrus Outlook (SIWO) provides Alaskan Native subsistence walrus hunters and Bering Strait coastal communities with weekly reports on spring sea ice and weather conditions to promote hunter safety, food security, and preservation of cultural heritage. These reports integrate scientific and Indigenous knowledge into a co-produced tool that is used by both local and scientific communities. SIWO is a team effort led by the Arctic Research Consortium of the U.S. (ARCUS, with funding from NSF Arctic Sciences Section), with the Eskimo Walrus Commission, National Weather Service - Alaska Sea Ice Program, University of Alaska Fairbanks - International Arctic Research Center, and local observers. For each weekly outlook, the National Weather Service provides location-specific weather and sea ice forecasts and regional satellite imagery. Local observations of sea ice, weather, and hunting conditions are provided by observers from five Alaskan communities in the Bering Strait region: Wales, Shishmaref, Nome, Gambell, and Savoonga. These observations typically include a written description of conditions accompanied by photographs of sea ice or subsistence activities. Outlooks are easily accessible and provide a platform for sharing of knowledge among hunters in neighboring communities. The opportunity to contribute is open, and Indigenous language and terms are encouraged. These observations from local hunters and community members also provide a valuable tool for validation of weather forecasts, satellite products, and other information for scientists. This presentation will discuss the process, products, and mutually beneficial outcomes of the Sea Ice for Walrus Outlook.
Lauffer, A; Solé, L; Bernstein, S; Lopes, M H; Francisconi, C F
2013-01-01
The development and validation of questionnaires for evaluating quality of life (QoL) has become an important area of research. However, there is a proliferation of non-validated measuring instruments in the health setting that do not contribute to advances in scientific knowledge. The aim is to present, through the analysis of available validated questionnaires, a checklist of the practical aspects of how to carry out the cross-cultural adaptation of QoL questionnaires (generic or disease-specific) so that no step is overlooked in the evaluation process, and thus help prevent the elaboration of insufficient or incomplete validations. We consulted basic textbooks and the PubMed database using the following keywords: quality of life, questionnaires, and gastroenterology, confined to «validation studies» in English, Spanish, and Portuguese, with no time limit, for the purpose of analyzing the translation and validation of the questionnaires available through the Mapi Institute and PROQOLID websites. A checklist is presented to aid in the planning and carrying out of the cross-cultural adaptation of QoL questionnaires, in conjunction with a glossary of key terms in the area of knowledge. The acronym DSTAC was used, which refers to each of the five stages involved in the recommended procedure. In addition, we provide a table of the QoL instruments that have been validated in Spanish. This article provides information on how to adapt QoL questionnaires from a cross-cultural perspective, as well as how to minimize common errors. Copyright © 2012 Asociación Mexicana de Gastroenterología. Published by Masson Doyma México S.A. All rights reserved.
A DBMS architecture for global change research
NASA Astrophysics Data System (ADS)
Hachem, Nabil I.; Gennert, Michael A.; Ward, Matthew O.
1993-08-01
The goal of this research is the design and development of an integrated system for the management of very large scientific databases, cartographic/geographic information processing, and exploratory scientific data analysis for global change research. The system will represent both spatial and temporal knowledge about natural and man-made entities on the earth's surface, following an object-oriented paradigm. A user will be able to derive, modify, and apply procedures to perform operations on the data, including comparison, derivation, prediction, validation, and visualization. This work represents an effort to extend database technology with an intrinsic class of operators, which is extensible and responds to the growing needs of scientific research. Of significance is the integration of many diverse forms of data into the database, including cartography, geography, hydrography, hypsography, images, and urban planning data. Equally important is the maintenance of metadata, that is, data about the data, such as coordinate transformation parameters, map scales, and audit trails of previous processing operations. This project will impact the fields of geographical information systems and global change research as well as the database community. It will provide an integrated database management testbed for scientific research, and a testbed for the development of analysis tools to understand and predict global change.
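The metadata and audit-trail requirement can be illustrated with a short sketch. This is a hypothetical fragment, not the described system's actual design: every derived dataset carries its coordinate-transformation parameters and a record of the operations that produced it.

    from dataclasses import dataclass, field

    @dataclass
    class GeoDataset:
        """Hypothetical object-oriented dataset carrying its own metadata."""
        name: str
        map_scale: int            # e.g. 1:250000 stored as 250000
        transform: dict           # coordinate transformation parameters
        audit_trail: list = field(default_factory=list)

        def apply(self, operation, **params):
            # Each derivation appends to the audit trail, so provenance
            # (data about the data) travels with the derived product.
            return GeoDataset(
                name=f"{self.name}/{operation}",
                map_scale=self.map_scale,
                transform=dict(self.transform),
                audit_trail=self.audit_trail + [(operation, params)],
            )

    base = GeoDataset("hydrography", 250000, {"datum": "WGS84"})
    result = base.apply("predict", model="runoff-v1").apply("validate", against="gauges")
    print(result.audit_trail)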
15 CFR 904.509 - Disposal of forfeited property.
Code of Federal Regulations, 2010 CFR
2010-01-01
... (Continued) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE GENERAL REGULATIONS CIVIL... such property for scientific, educational, and public display purposes; and for other valid reasons. In... requesting such property for scientific, educational, or public display purposes. Property will be loaned...
15 CFR 904.509 - Disposal of forfeited property.
Code of Federal Regulations, 2011 CFR
2011-01-01
... (Continued) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE GENERAL REGULATIONS CIVIL... such property for scientific, educational, and public display purposes; and for other valid reasons. In... requesting such property for scientific, educational, or public display purposes. Property will be loaned...
Modern software approaches applied to a Hydrological model: the GEOtop Open-Source Software Project
NASA Astrophysics Data System (ADS)
Cozzini, Stefano; Endrizzi, Stefano; Cordano, Emanuele; Bertoldi, Giacomo; Dall'Amico, Matteo
2017-04-01
The GEOtop hydrological scientific package is an integrated hydrological model that simulates the heat and water budgets at and below the soil surface. It describes the three-dimensional water flow in the soil and the energy exchange with the atmosphere, considering the radiative and turbulent fluxes. Furthermore, it reproduces the highly non-linear interactions between the water and energy balance during soil freezing and thawing, and simulates the temporal evolution of snow cover, soil temperature and moisture. The core components of the package were presented in version 2.0 (Endrizzi et al., 2014), which was released as a free, open-source software project. However, despite the high scientific quality of the project, a modern software engineering approach was still missing. This weakness hindered its scientific potential and its use both as a standalone package and, more importantly, in integration with other hydrological software tools. In this contribution we present our recent software re-engineering efforts to create a robust and stable scientific software package open to the hydrological community, easily usable by researchers and experts, and interoperable with other packages. The activity takes as its starting point version 2.0, scientifically tested and published. This version, together with several test cases based on recently published or available GEOtop applications (Cordano and Rigon, 2013, WRR; Kollet et al., 2016, WRR), provides the baseline code and a number of referenced results as benchmarks. Comparison and scientific validation can then be performed for each software re-engineering step applied to the package. To keep track of every change, the package is published on its own GitHub repository, geotopmodel.github.io/geotop/, under the GPL v3.0 license. A continuous integration mechanism using Travis-CI has been enabled on the repository's master and main development branches. The use of the CMake configuration tool and the test suite (easily managed with ctest) greatly reduces the installation burden and enhances portability across compilers and operating system platforms. The package is also complemented by several software tools that provide web-based visualization of results based on R packages, in particular "shiny" (Chang et al., 2016) and the "geotopbricks" and "geotopOptim2" packages (Cordano et al., 2016), which allow rapid and efficient scientific validation of new examples and tests. The software re-engineering activities are still under development. However, our first results are promising enough to eventually reach a robust and stable software project that manages a complex state-of-the-art hydrological model like GEOtop in a flexible way and integrates it into wider workflows.
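The benchmark-based validation described above amounts to regression testing: after each re-engineering step, the model's output is compared against the referenced results. A minimal sketch follows; the file names and tolerance are invented, and the actual GEOtop suite is driven by CMake/ctest rather than by this fragment.

    import numpy as np

    def check_against_baseline(new_output, baseline, rtol=1e-5):
        """Compare a re-engineered run against a referenced benchmark result;
        any difference beyond tolerance flags the change for review."""
        new = np.loadtxt(new_output, delimiter=",")
        ref = np.loadtxt(baseline, delimiter=",")
        if new.shape != ref.shape:
            return False, "shape mismatch"
        ok = bool(np.allclose(new, ref, rtol=rtol))
        return ok, "within tolerance" if ok else "exceeds tolerance"

    # ok, msg = check_against_baseline("run/soil_temp.csv", "benchmarks/soil_temp.csv")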
Demonstrating Experimenter "Ineptitude" as a Means of Teaching Internal and External Validity
ERIC Educational Resources Information Center
Treadwell, Kimberli R.H.
2008-01-01
Internal and external validity are key concepts in understanding the scientific method and fostering critical thinking. This article describes a class demonstration of a "botched" experiment to teach validity to undergraduates. Psychology students (N = 75) completed assessments at the beginning of the semester, prior to and immediately following…
Political homogeneity can nurture threats to research validity.
Chambers, John R; Schlenker, Barry R
2015-01-01
Political homogeneity within a scientific field nurtures threats to the validity of many research conclusions by allowing ideologically compatible values to influence interpretations, by minimizing skepticism, and by creating premature consensus. Although validity threats can crop up in any research, the usual corrective activities in science are more likely to be minimized and delayed.
NASA Astrophysics Data System (ADS)
Narici, Livo; Berger, Thomas; Burmeister, Sönke; Di Fino, Luca; Rizzo, Alessandro; Matthiä, Daniel; Reitz, Günther
2017-08-01
Human exploration of the solar system requires successfully dealing with the issue of radiation exposure. The scientific aspect of this issue is twofold: knowing the radiation environment the astronauts are going to face, and linking radiation exposure to health risks. Here we focus on the first aspect. It is generally agreed that the final tool to describe the radiation environment in a space habitat will be a model featuring the amount of detail needed to perform a meaningful risk assessment. The model should also take into account shielding changes due to the movement of materials inside the habitat, which in turn produce changes in the radiation environment. This model will have to undergo a final validation with a radiation field of similar complexity. The International Space Station (ISS) is a space habitat whose interior radiation environment is similar to what will be found in habitats in deep space, if we use only measurements acquired during high-latitude passages (where the effects of the Earth's magnetic field are reduced). Active detectors, which provide time information and can easily select data from different orbital sections, best fulfill the requirements for these kinds of measurements. The exploitation of the radiation measurements performed in the ISS by all the available instruments is therefore mandatory to provide the largest possible database to the scientific community, to be merged with detailed Computer Aided Design (CAD) models, in the quest for a full model validation. While some efforts at comparing results from multiple active detectors have been made, a thorough study of a procedure to merge the data into a single data matrix, in order to provide the best validation set for radiation environment models, has never been attempted. The aim of this paper is to provide such a procedure, to apply it to two of the best-performing active detector systems on the ISS -- the Anomalous Long Term Effects in Astronauts (ALTEA) instrument and the DOSimetry TELescope (DOSTEL) detectors, operated in the frame of the DOSIS and DOSIS 3D projects onboard the ISS -- and to present combined results exploiting the features of each of the two apparatuses.
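The merging procedure can be sketched schematically: rebin each instrument's time series onto a common grid and keep only high-latitude intervals, where geomagnetic shielding is reduced. Everything below (field names, the latitude cut, the simple averaging) is an illustrative assumption, not the procedure actually derived in the paper.

    import numpy as np

    def merge_high_latitude(t1, rate1, t2, rate2, t_lat, lat,
                            lat_cut=50.0, bin_s=60.0):
        """Combine two detectors' dose-rate series on a common time grid,
        keeping only samples taken above |lat_cut| degrees latitude."""
        grid = np.arange(min(t1[0], t2[0]), max(t1[-1], t2[-1]), bin_s)
        r1 = np.interp(grid, t1, rate1)       # detector 1 on the common grid
        r2 = np.interp(grid, t2, rate2)       # detector 2 on the common grid
        lat_g = np.interp(grid, t_lat, lat)   # spacecraft latitude on the grid
        high = np.abs(lat_g) >= lat_cut       # reduced geomagnetic shielding
        return grid[high], 0.5 * (r1[high] + r2[high])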
Testing and validating environmental models
Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.
1996-01-01
Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series against data time series, and plotting predicted versus observed values) have little diagnostic power. We propose that it may be more useful to statistically extract the relationships of primary interest from the time series, and test the model directly against them.
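The paper's point about explicit benchmarks can be made concrete: a model's skill is meaningful only relative to an alternative. The sketch below uses synthetic data (all values invented) and scores a model against a naive mean benchmark instead of claiming 'acceptable' agreement in isolation.

    import numpy as np

    def sse(pred, obs):
        return float(np.sum((pred - obs) ** 2))   # sum of squared errors

    rng = np.random.default_rng(1)
    x = np.linspace(0, 6, 50)
    obs = np.sin(x) + rng.normal(0, 0.2, x.size)  # synthetic observations
    model = np.sin(x)                             # model predictions
    benchmark = np.full(x.size, obs.mean())       # naive alternative: the mean

    # Skill relative to the benchmark: 1 = perfect, 0 = no better than the mean.
    skill = 1.0 - sse(model, obs) / sse(benchmark, obs)
    print(f"skill vs. mean benchmark: {skill:.2f}")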
Zombie algorithms: a timesaving remote sensing systems engineering tool
NASA Astrophysics Data System (ADS)
Ardanuy, Philip E.; Powell, Dylan C.; Marley, Stephen
2008-08-01
In modern horror fiction, zombies are generally undead corpses brought back from the dead by supernatural or scientific means, and are rarely under anyone's direct control. They typically have very limited intelligence, and hunger for the flesh of the living [1]. A typical spectroradiometric or hyperspectral instrument provides calibrated radiances to a number of remote sensing algorithms. The algorithms typically must meet specified latency and availability requirements while yielding products at the required quality. These systems, whether research, operational, or a hybrid, are typically cost constrained. Complexity of the algorithms can be high, and may evolve and mature over time as sensor characterization changes, product validation occurs, and areas of scientific-basis improvement are identified and completed. This suggests the need for a systems engineering process for algorithm maintenance that is agile, cost efficient, repeatable, and predictable. Experience on remote sensing science data systems suggests the benefits of "plug-n-play" concepts of operation. The concept, while intuitively simple, can be challenging to implement in practice. The use of zombie algorithms -- empty shells that outwardly resemble the form, fit, and function of a "complete" algorithm without the implemented theoretical basis -- provides the ground system advantages equivalent to those obtained by integrating sensor engineering models onto the spacecraft bus. Combined with a mature, repeatable process for incorporating the theoretical basis, or scientific core, into the "head" of the zombie algorithm, along with associated scripting and registration, this approach provides an easy "on ramp" for the rapid and low-risk integration of scientific applications into operational systems.
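A hypothetical sketch of the zombie-algorithm idea in Python (all names invented): a shell that satisfies the operational interface -- same form, fit, and function as a complete algorithm -- while the scientific core is a placeholder to be registered later.

    class ZombieRetrieval:
        """Empty shell matching the product interface of a 'complete'
        algorithm, so the ground system can be integrated end to end."""

        product_name = "surface_reflectance"   # form/fit: same outputs and metadata

        def run(self, calibrated_radiances):
            # Placeholder scientific core: echo inputs with the right structure.
            return {"product": self.product_name,
                    "data": calibrated_radiances,
                    "quality_flag": "ZOMBIE_NO_SCIENCE"}

    def register_science(zombie, science_fn):
        """Swap the validated scientific core into the shell's 'head'
        (the plug-n-play step)."""
        zombie.run = lambda radiances: {"product": zombie.product_name,
                                        "data": science_fn(radiances),
                                        "quality_flag": "NOMINAL"}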
Hengartner, Michael P
2017-01-01
Major scientific flaws such as reporting and publication biases are well documented, even though acknowledgment of their importance appears to be lacking in various psychological and medical fields. Subtle and less obvious biases including selective reviews of the literature and empirically unsupported conclusions and recommendations have received even less attention. Using the literature on the association between transition to menopause, hormones and the onset of depression as a guiding example, I outline how such scientific fallacies undermine the validity of neuroendocrinological research. It is shown that in contrast to prominent claims, first, most prospective studies do not support the notion that the menopausal transition relates to increased risk for depression, second, that associations between hormone levels and depression are largely inconsistent and irreproducible, and, third, that the evidence for the efficacy of hormone therapy for the treatment of depression is very weak and at best inconclusive. I conclude that a direct and uniform association between female sex hormones and depression is clearly not supported by the literature and that more attention should be paid to the manifold scientific biases that undermine the validity of findings in psychological and medical research, with a specific focus on the behavioral neurosciences.
Khoury, Muin J.; McBride, Colleen M.; Schully, Sheri D.; Ioannidis, John P. A.; Feero, W. Gregory; Janssens, A. Cecile J. W.; Gwinn, Marta; Simons-Morton, Denise G.; Bernhardt, Jay M.; Cargill, Michele; Chanock, Stephen J.; Church, George M.; Coates, Ralph J.; Collins, Francis S.; Croyle, Robert T.; Davis, Barry R.; Downing, Gregory J.; DuRoss, Amy; Friedman, Susan; Gail, Mitchell H.; Ginsburg, Geoffrey S.; Green, Robert C.; Greene, Mark H.; Greenland, Philip; Gulcher, Jeffrey R.; Hsu, Andro; Hudson, Kathy L.; Kardia, Sharon L. R.; Kimmel, Paul L.; Lauer, Michael S.; Miller, Amy M.; Offit, Kenneth; Ransohoff, David F.; Roberts, J. Scott; Rasooly, Rebekah S.; Stefansson, Kari; Terry, Sharon F.; Teutsch, Steven M.; Trepanier, Angela; Wanke, Kay L.; Witte, John S.; Xu, Jianfeng
2010-01-01
The increasing availability of personal genomic tests has led to discussions about the validity and utility of such tests and the balance of benefits and harms. A multidisciplinary workshop was convened by the National Institutes of Health and the Centers for Disease Control and Prevention to review the scientific foundation for using personal genomics in risk assessment and disease prevention and to develop recommendations for targeted research. The clinical validity and utility of personal genomics is a moving target with rapidly developing discoveries but little translation research to close the gap between discoveries and health impact. Workshop participants made recommendations in five domains: (1) developing and applying scientific standards for assessing personal genomic tests; (2) developing and applying a multidisciplinary research agenda, including observational studies and clinical trials to fill knowledge gaps in clinical validity and utility; (3) enhancing credible knowledge synthesis and information dissemination to clinicians and consumers; (4) linking scientific findings to evidence-based recommendations for use of personal genomics; and (5) assessing how the concept of personal utility can affect health benefits, costs, and risks by developing appropriate metrics for evaluation. To fulfill the promise of personal genomics, a rigorous multidisciplinary research agenda is needed. PMID:19617843
The need for scientific software engineering in the pharmaceutical industry
NASA Astrophysics Data System (ADS)
Luty, Brock; Rose, Peter W.
2017-03-01
Scientific software engineering is a distinct discipline from both computational chemistry project support and research informatics. A scientific software engineer not only has a deep understanding of the science of drug discovery but also the desire, skills and time to apply good software engineering practices. A good team of scientific software engineers can create a software foundation that is maintainable, validated and robust. If done correctly, this foundation enables the organization to investigate new and novel computational ideas with a very high level of efficiency.
Choosing a Control Group in Effectiveness Trials of Behavioral Drug Abuse Treatments
Brigham, Gregory S.; Feaster, Daniel J.; Wakim, Paul G.; Dempsey, Catherine L.
2009-01-01
Effectiveness trials are an important step in the scientific process of developing and evaluating behavioral treatments. The focus on effectiveness research presents a different set of requirements on the research design when compared with efficacy studies. The choice of a control condition has many implications for a clinical trial's internal and external validity. The purpose of this manuscript is to provide a discussion of the issues involved in choosing a control group for effectiveness trials of behavioral interventions in substance abuse treatment. The authors provide a description of four trial designs and a discussion of the advantages and disadvantages of each. PMID:19553062
Accelerator Based Tools of Stockpile Stewardship
NASA Astrophysics Data System (ADS)
Seestrom, Susan
2017-01-01
The Manhattan Project had to solve difficult challenges in physics and materials science. During the cold war a large nuclear stockpile was developed. In both cases, the approach was largely empirical. Today that stockpile must be certified without nuclear testing, a task that becomes more difficult as the stockpile ages. I will discuss the role of modern accelerator based experiments, such as x-ray radiography, proton radiography, neutron and nuclear physics experiments, in stockpile stewardship. These new tools provide data of exceptional sensitivity and are answering questions about the stockpile, improving our scientific understanding, and providing validation for the computer simulations that are relied upon to certify today's stockpile.
Using EO-1 Hyperion Images to Prototype Environmental Products for Hyspiri
NASA Technical Reports Server (NTRS)
Middleton, Elizabeth M.; Campbell, Petya K. E.; Ungar, Stephen G.; Ong, Lawrence; Zhang, Qingyuan; Huemmrich, K. Fred; Mandl, Daniel J.; Frye, Stuart W.
2011-01-01
In November 2010, the Earth Observing One (EO-1) Satellite Mission will successfully complete a decade of Earth imaging by its two unique instruments, the Hyperion and the Advanced Land Imager (ALI). Both instruments are serving as prototypes for new orbital sensors, and the EO-1 is a heritage platform for the upcoming German mission, EnMAP. We provide an overview of the mission's lifetime. We briefly describe calibration & validation activities and overview the technical and scientific accomplishments of this mission. Some examples of the Mission Science Office (MSO) products are provided, as is an example image collected for disaster monitoring.
Embedded assessment algorithms within home-based cognitive computer game exercises for elders.
Jimison, Holly; Pavel, Misha
2006-01-01
With the recent consumer interest in computer-based activities designed to improve cognitive performance, there is a growing need for scientific assessment algorithms to validate the potential contributions of cognitive exercises. In this paper, we present a novel methodology for incorporating dynamic cognitive assessment algorithms within computer games designed to enhance cognitive performance. We describe how this approach works for a variety of computer applications and describe cognitive monitoring results for one of the computer game exercises. The real-time cognitive assessments also provide a control signal for adapting the difficulty of the game exercises and providing tailored help for elders of varying abilities.
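The control-signal idea can be sketched simply: track a running success rate and nudge the game difficulty to keep the player in a target band. The update rule and thresholds below are invented for illustration and are not the paper's actual assessment algorithm.

    def update_difficulty(difficulty, success, perf,
                          alpha=0.1, target_low=0.6, target_high=0.8):
        """Exponential moving average of success drives difficulty so the
        player stays in a target performance band."""
        perf = (1 - alpha) * perf + alpha * (1.0 if success else 0.0)
        if perf > target_high:
            difficulty += 1                    # too easy: raise difficulty
        elif perf < target_low and difficulty > 1:
            difficulty -= 1                    # too hard: ease off, offer help
        return difficulty, perf

    # difficulty, perf = 3, 0.7
    # for outcome in (True, True, False, True):
    #     difficulty, perf = update_difficulty(difficulty, outcome, perf)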
McClure, Kimberley A; McGuire, Katherine L; Chapan, Denis M
2018-05-07
Policy on officer-involved shootings is critically reviewed and errors in applying scientific knowledge are identified. Identifying and evaluating the science most relevant to a field-based problem is challenging. Law enforcement administrators with a clear understanding of valid science and its application are in a better position to utilize scientific knowledge for the benefit of their organizations and officers. A framework is proposed for considering the validity of science and its application. Valid science emerges via hypothesis testing, replication, and extension, and is marked by peer review, known error rates, and general acceptance in its field of origin. Valid application of behavioral science requires an understanding of the methodology employed, the measures used, and the participants recruited, to determine whether the science is ready for application. Fostering a science-practitioner partnership and an organizational culture that embraces quality, empirically based policy and practices improves science-to-practice translation. © 2018 American Academy of Forensic Sciences.
Science and Learning Disabilities.
ERIC Educational Resources Information Center
Stanovich, Keith E.
1988-01-01
Reactions to H. Lee Swanson's paper "Toward a Metatheory of Learning Disabilities" are outlined, and his arguments are applied to reading disabilities, focusing on the importance of the scientific attitude, the misuse of ecological validity, interpretation of Thomas Kuhn's work, modularity and reading disability, and scientific progress…
Reitmaier, Sandra; Graichen, Friedmar; Shirazi-Adl, Aboulfazl; Schmidt, Hendrik
2017-10-04
Approximately 5,168 large animals (pigs, sheep, goats, and cattle) were used for intervertebral disc research in identified studies published between 1985 and 2016. Most of the reviewed studies revealed a low scientific impact, a lack of sound justifications for the animal models, and a number of deficiencies in the documentation of the animal experimentation. The scientific community should take suitable measures to investigate the presumption that animal models have translational value in intervertebral disc research. Recommendations for future investigations are provided to improve the quality, validity, and usefulness of animal studies for intervertebral disc research. More in vivo studies are warranted to comprehensively evaluate the suitability of animal models in various applications and help place animal models as an integral, complementary part of intervertebral disc research.
Fermi-LAT View of Bright Flaring Gamma-Ray Blazars
NASA Astrophysics Data System (ADS)
Bastieri, D.; Ciprini, S.; Gasparrini, D.
2011-06-01
The Fermi LAT provides continuous and uniform monitoring of the Universe in the gamma-ray band. During the first year, many gamma-ray blazar flares, some unidentified transients, and emission from the Sun while in a quiet state were promptly detected. This is mainly due to the design of the mission, featuring a detector, the LAT, with a wide field of view, and to the operation of the spacecraft itself, which can cover every region of the sky every 3 hours. Nevertheless, the scientific exploitation of this monitoring is more fruitful when early information about transients reaches a broader community. In this respect, the indefatigable activity of flare advocates, who worked on weekly shifts to validate the results and quickly broadcast information about flares and new detections, was the key to most scientific results.
Critical appraisal of published literature
Umesh, Goneppanavar; Karippacheril, John George; Magazine, Rahul
2016-01-01
With the large output of medical literature coming out every year, it is impossible for readers to read every article. Critical appraisal of scientific literature is an important skill to be mastered not only by academic medical professionals but also by those involved in clinical practice. Before incorporating changes into the management of their patients, a thorough evaluation of the current or published literature is an important step in clinical practice. It is necessary for assessing the published literature for its scientific validity and generalizability to the specific patient community and the reader's work environment. Simple steps have been provided by the Consolidated Standards of Reporting Trials (CONSORT) statement, the Scottish Intercollegiate Guidelines Network, and several other resources which, if implemented, may help the reader avoid reading flawed literature and prevent the incorporation of biased or untrustworthy information into our practice. PMID:27729695
Zakharov, Sergey
2011-03-01
The relevance and admissibility of expert medical testimony in relation to medical malpractice suits requires a more successful development of formal criteria and a more intentional compliance with efficient judicial procedures. The American judicial system provides an excellent model for implementation of a critical approach to knowledge collection, the evaluation of the validity of scientifically sound information, and the examination of expert's testimony on the basis of a sound methodology. An analysis of the assessment and application of reliability yields evidence that assuring standards to improve the quality of expert medical testimony will increase the overall probability of a fair outcome during the judicial process. Applying these beneficial strategies in medical malpractice cases will continue to support further considerations of promoting justice and solving problems through sufficient scientific means.
Data preservation at the Fermilab Tevatron
NASA Astrophysics Data System (ADS)
Amerio, S.; Behari, S.; Boyd, J.; Brochmann, M.; Culbertson, R.; Diesburg, M.; Freeman, J.; Garren, L.; Greenlee, H.; Herner, K.; Illingworth, R.; Jayatilaka, B.; Jonckheere, A.; Li, Q.; Naymola, S.; Oleynik, G.; Sakumoto, W.; Varnes, E.; Vellidis, C.; Watts, G.; White, S.
2017-04-01
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. These efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.
Diagnostic microbiology in veterinary dermatology: present and future.
Guardabassi, Luca; Damborg, Peter; Stamm, Ivonne; Kopp, Peter A; Broens, Els M; Toutain, Pierre-Louis
2017-02-01
The microbiology laboratory can be perceived as a service provider rather than an integral part of the healthcare team. The aim of this review is to discuss the current challenges of providing a state-of-the-art diagnostic veterinary microbiology service including the identification (ID) and antimicrobial susceptibility testing (AST) of key pathogens in veterinary dermatology. The Study Group for Veterinary Microbiology (ESGVM) of the European Society of Clinical Microbiology and Infectious Diseases (ESCMID) identified scientific, technological, educational and regulatory issues impacting the predictive value of AST and the quality of the service offered by microbiology laboratories. The advent of mass spectrometry has significantly reduced the time required for ID of key pathogens such as Staphylococcus pseudintermedius. However, the turnaround time for validated AST methods has remained unchanged for many years. Beyond scientific and technological constraints, AST methods are not harmonized and clinical breakpoints for some antimicrobial drugs are either missing or inadequate. Small laboratories, including in-clinic laboratories, are usually not adequately equipped to run up-to-date clinical microbiologic diagnostic tests. ESGVM recommends the use of laboratories employing mass spectrometry for ID and broth micro-dilution for AST, and offering assistance by expert microbiologists on pre- and post-analytical issues. Setting general standards for veterinary clinical microbiology, promoting antimicrobial stewardship, and the development of new, validated and rapid diagnostic methods, especially for AST, are among the missions of ESGVM. © 2017 The Authors. Veterinary Dermatology published by John Wiley & Sons Ltd on behalf of the ESVD and ACVD.
Interoperability challenges in river discharge modelling: A cross domain application scenario
NASA Astrophysics Data System (ADS)
Santoro, Mattia; Andres, Volker; Jirka, Simon; Koike, Toshio; Looser, Ulrich; Nativi, Stefano; Pappenberger, Florian; Schlummer, Manuela; Strauch, Adrian; Utech, Michael; Zsoter, Ervin
2018-06-01
River discharge is a critical water cycle variable, as it integrates all the processes (e.g. runoff and evapotranspiration) occurring within a river basin and provides a hydrological output variable that can be readily measured. Its prediction is of invaluable help for many water-related tasks including water resources assessment and management, flood protection, and disaster mitigation. Observations of river discharge are important to calibrate and validate hydrological or coupled land, atmosphere and ocean models. This requires using datasets from different scientific domains (Water, Weather, etc.). Typically, such datasets are provided using different technological solutions. This complicates the integration of new hydrological data sources into application systems. Therefore, a considerable effort is often spent on data access issues instead of the actual scientific question. This paper describes the work performed to address multidisciplinary interoperability challenges related to river discharge modeling and validation. This includes definition and standardization of domain specific interoperability standards for hydrological data sharing and their support in global frameworks such as the Global Earth Observation System of Systems (GEOSS). The research was developed in the context of the EU FP7-funded project GEOWOW (GEOSS Interoperability for Weather, Ocean and Water), which implemented a "River Discharge" application scenario. This scenario demonstrates the combination of river discharge observations data from the Global Runoff Data Centre (GRDC) database and model outputs produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) predicting river discharge based on weather forecast information in the context of the GEOSS.
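The domain-specific standards referred to above include OGC services for hydrological time series such as the Sensor Observation Service (SOS). As a minimal sketch of what standards-based access enables (the endpoint URL, observed property and station identifiers below are placeholders, not actual GRDC/GEOWOW service addresses), a discharge time series could be requested as follows:

```python
import requests

# Hedged sketch of fetching river discharge observations from an OGC Sensor
# Observation Service (SOS 2.0, KVP binding). Endpoint and identifiers are
# placeholders, not actual GRDC/GEOWOW service addresses.
SOS_ENDPOINT = "https://example.org/sos"  # hypothetical service URL

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "observedProperty": "Discharge",         # placeholder property name
    "featureOfInterest": "station-6435060",  # placeholder station ID
    "temporalFilter": ("om:phenomenonTime,"
                       "2017-01-01T00:00:00Z/2017-12-31T23:59:59Z"),
}
response = requests.get(SOS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()
print(response.text[:500])  # time-series XML payload (e.g. WaterML-style)
```

Because both the request and the response format are standardized, the same client code can address any conformant service, which is precisely the interoperability benefit described above.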
Boison, Joe O
2016-05-01
The Joint Food and Agriculture Organization and World Health Organization (FAO/WHO) Expert Committee on Food Additives (JECFA) is one of three Codex committees tasked with applying risk analysis and relying on independent scientific advice provided by expert bodies organized by FAO/WHO when developing standards. While not officially part of the Codex Alimentarius Commission structure, JECFA provides independent scientific advice to the Commission and its specialist committees such as the Codex Committee on Residues of Veterinary Drugs in Foods (CCRVDF) in setting maximum residue limits (MRLs) for veterinary drugs. Codex methods of analysis (Types I, II, III, and IV) are defined in the Codex Procedural Manual as are criteria to be used for selecting methods of analysis. However, if a method is to be used under a single laboratory condition to support regulatory work, it must be validated according to an internationally recognized protocol and the use of the method must be embedded in a quality assurance system in compliance with ISO/IEC 17025:2005. This paper examines the attributes of the methods used to generate residue depletion data for drug registration and/or licensing and for supporting regulatory enforcement initiatives that experts consider to be useful and appropriate in their assessment of methods of analysis. Copyright © 2016 Her Majesty the Queen in Right of Canada. Drug Testing and Analysis © 2016 John Wiley & Sons, Ltd.
Is Echinococcus intermedius a valid species?
USDA-ARS?s Scientific Manuscript database
Medical and veterinary sciences require scientific names to discriminate pathogenic organisms in our living environment. Various species concepts have been proposed for metazoan animals. There are, however, constant controversies over their validity because of lack of a common criterion to define ...
NASA Astrophysics Data System (ADS)
Serevina, V.; Muliyati, D.
2018-05-01
This research aims to develop a students' performance assessment instrument, based on the scientific approach, that is valid and reliable for assessing student performance in the basic physics laboratory on Simple Harmonic Motion (SHM). The study uses the ADDIE model, consisting of the stages Analyze, Design, Development, Implementation, and Evaluation. The instrument developed can be used to measure students' skills in observing, questioning, conducting experiments, associating, and communicating experimental results, i.e. the '5M' stages of the scientific approach. Each assessment item in the instrument was validated by an instrument expert, with the result that all items were judged eligible for use (100% eligibility). The instrument was then rated for quality of construction, material, and language by a panel of lecturers, with the results: construction 85% (very good), material 87.5% (very good), and language 83% (very good). A small-group trial yielded an instrument reliability of 0.878 (high category, r-table = 0.707), and a large-group trial a reliability of 0.889 (high category, r-table = 0.320). The instrument was declared valid and reliable at the 5% significance level. Based on these results, it can be concluded that the student performance assessment instrument based on the scientific approach is valid and reliable for assessing student skills in SHM experimental activities.
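The abstract does not state which reliability coefficient underlies the reported values of 0.878 and 0.889; a common choice for multi-item performance instruments is Cronbach's alpha, sketched below on a hypothetical (students x items) score matrix:

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x items) score matrix."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                         # number of assessment items
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 8 students scored on 5 performance items (1-4 scale).
scores = np.array([
    [3, 4, 3, 4, 3],
    [2, 2, 3, 2, 2],
    [4, 4, 4, 3, 4],
    [3, 3, 2, 3, 3],
    [1, 2, 2, 1, 2],
    [4, 3, 4, 4, 4],
    [2, 3, 2, 2, 3],
    [3, 3, 3, 4, 3],
])
print(f"alpha = {cronbach_alpha(scores):.3f}")  # compared against an r-table threshold
```

The resulting coefficient would then be compared against the r-table threshold for the given sample size, as done in the study.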
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-26
... Scientific Advisory Committee on Alternative Toxicological Methods; Announcement of Meeting; Request for... Toxicological Methods (SACATM). SACATM advises the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the National Toxicology Program (NTP) Interagency Center for the Evaluation of...
Theory of Multiple Intelligences: Is It a Scientific Theory?
ERIC Educational Resources Information Center
Chen, Jie-Qi
2004-01-01
This essay discusses the status of multiple intelligences (MI) theory as a scientific theory by addressing three issues: the empirical evidence Gardner used to establish MI theory, the methodology he employed to validate MI theory, and the purpose or function of MI theory.
On the Undecidability of the Ontological Truth of the Reality of Fundamental Physics
NASA Astrophysics Data System (ADS)
Amaya, J. M.; Carbonell, M. V.; Martínez, E.; Flórez, M.
2007-04-01
In the philosophy of contemporary science, different currents of thought have arisen, each of which has developed methodologies comprising a set of normative rules for deciding the acceptance or rejection of already elaborated scientific theories. These rival methodologies are: "Inductivism", "Conventionalism", "Falsificationism" and "theories as structures" (Kuhn and Lakatos). These methodologies for evaluating the quality and validity of scientific knowledge (or metatheories of science) attach an epistemological validation to the extent to which science, at its limit or frontier, presents and describes the observable world as well as what lies behind appearances. A consequence has been the appearance of different evaluative interpretations of the representation that scientific theories give us of what is called reality, such as "scientific realism", "antirealism", "conjectural realism", "structural realism", etc., which have based their theses on the problem of language, truth and reality.
Food for Thought ... Mechanistic Validation
Hartung, Thomas; Hoffmann, Sebastian; Stephens, Martin
2013-01-01
Summary Validation of new approaches in regulatory toxicology is commonly defined as the independent assessment of the reproducibility and relevance (the scientific basis and predictive capacity) of a test for a particular purpose. In large ring trials, the emphasis to date has been mainly on reproducibility and predictive capacity (comparison to the traditional test), with less attention given to the scientific or mechanistic basis. Assessing predictive capacity is difficult for novel approaches (which are based on mechanism), such as pathways of toxicity or the complex networks within the organism (systems toxicology). This is highly relevant for implementing Toxicology for the 21st Century, either by high-throughput testing in the ToxCast/Tox21 project or omics-based testing in the Human Toxome Project. This article explores the mostly neglected assessment of a test's scientific basis, which moves mechanism and causality to the foreground when validating/qualifying tests. Such mechanistic validation faces the problem of establishing causality in complex systems. However, pragmatic adaptations of the Bradford Hill criteria, as well as bioinformatic tools, are emerging. As critical infrastructures of the organism are perturbed by a toxic mechanism, we argue that by focusing on the target of toxicity and its vulnerability, in addition to the way it is perturbed, we can anchor the identification of the mechanism and its verification. PMID:23665802
Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali
Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel's second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.
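The SKOPE-based KNL model itself is not reproduced in this abstract; as a minimal illustration of the analytical-modeling idea, the sketch below projects a kernel's lower-bound runtime from a roofline-style balance of compute and memory traffic (the peak numbers are assumed, illustrative figures, not vendor-validated KNL specifications):

```python
# Minimal roofline-style projection, in the spirit of analytical hardware
# models (not the actual SKOPE/KNL model from the paper).

def roofline_time(flops: float, bytes_moved: float,
                  peak_gflops: float, peak_gbs: float) -> float:
    """Lower-bound execution time (s): max of compute time and memory time."""
    t_compute = flops / (peak_gflops * 1e9)
    t_memory = bytes_moved / (peak_gbs * 1e9)
    return max(t_compute, t_memory)

# Assumed KNL-like peaks (illustrative numbers only):
PEAK_GFLOPS = 3000.0      # double-precision GFLOP/s
PEAK_MCDRAM_GBS = 400.0   # high-bandwidth memory GB/s

# Hypothetical stream-like kernel: 2 flops and 24 bytes moved per element.
n = 10**8
t = roofline_time(flops=2 * n, bytes_moved=24 * n,
                  peak_gflops=PEAK_GFLOPS, peak_gbs=PEAK_MCDRAM_GBS)
print(f"projected lower bound: {t*1e3:.2f} ms (memory-bound regime)")
```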
TIE: an ability test of emotional intelligence.
Śmieja, Magdalena; Orzechowski, Jarosław; Stolarski, Maciej S
2014-01-01
The Test of Emotional Intelligence (TIE) is a new ability scale based on a theoretical model that defines emotional intelligence as a set of skills responsible for the processing of emotion-relevant information. Participants are provided with descriptions of emotional problems and asked to indicate which emotion is most probable in a given situation, or to suggest the most appropriate action. Scoring is based on the judgments of experts: professional psychotherapists, trainers, and HR specialists. The validation study showed that the TIE is a reliable and valid test, suitable for both scientific research and individual assessment. Its internal consistency measures were as high as .88. In line with the theoretical model of emotional intelligence, the results of the TIE shared about 10% of common variance with a general intelligence test and were independent of major personality dimensions.
NASA Technical Reports Server (NTRS)
Livingston, John M.
2004-01-01
NASA Cooperative Agreement NCC2-1251 provided funding from April 2001 through December 2003 for Mr. John Livingston of SRI International to collaborate with NASA Ames Research Center scientists and engineers in the acquisition and analysis of airborne sunphotometer measurements during various atmospheric field studies. Mr. Livingston participated in instrument calibrations at Mauna Loa Observatory, pre-mission hardware and software preparations, acquisition and analysis of sunphotometer measurements during the missions, and post-mission analysis of data and reporting of scientific findings. The atmospheric field missions included the spring 2001 Intensive of the Asian Pacific Regional Aerosol Characterization Experiment (ACE-Asia), the Asian Dust Above Monterey-2003 (ADAM-2003) experiment, and the winter 2003 Second SAGE III Ozone Loss and Validation Experiment (SOLVE II).
Düking, Peter; Fuss, Franz Konstantin; Holmberg, Hans-Christer; Sperlich, Billy
2018-04-30
Although it is becoming increasingly popular to monitor parameters related to training, recovery, and health with wearable sensor technology (wearables), scientific evaluation of the reliability, sensitivity, and validity of such data is limited and, where available, has involved a wide variety of approaches. To improve the trustworthiness of data collected by wearables and facilitate comparisons, we have outlined recommendations for standardized evaluation. We discuss the wearable devices themselves, as well as experimental and statistical considerations. Adherence to these recommendations should be beneficial not only for the individual, but also for regulatory organizations and insurance companies. ©Peter Düking, Franz Konstantin Fuss, Hans-Christer Holmberg, Billy Sperlich. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 30.04.2018.
NASA Astrophysics Data System (ADS)
Wu, Huiquan; Khan, Mansoor
2012-08-01
As an emerging technology, THz spectroscopy has gained increasing attention in the pharmaceutical area during the last decade. This attention is due to the fact that (1) it provides a promising alternative approach for in-depth understanding of both intermolecular interaction among pharmaceutical molecules and pharmaceutical product quality attributes; (2) it provides a promising alternative approach for enhanced process understanding of certain pharmaceutical manufacturing processes; and (3) it responds to the FDA pharmaceutical quality initiatives, most noticeably the Process Analytical Technology (PAT) initiative. In this work, the current status and progress made so far on using THz spectroscopy for pharmaceutical development and pharmaceutical PAT applications are reviewed. In the spirit of demonstrating the utility of the first-principles modeling approach for addressing the model validation challenge and reducing unnecessary model validation "burden" to facilitate THz pharmaceutical PAT applications, two scientific case studies based on published THz spectroscopy measurement results are created and discussed. Furthermore, other technical challenges and opportunities associated with adopting THz spectroscopy as a pharmaceutical PAT tool are highlighted.
Scientific Reporting: Raising the Standards
ERIC Educational Resources Information Center
McLeroy, Kenneth R.; Garney, Whitney; Mayo-Wilson, Evan; Grant, Sean
2016-01-01
This article is based on a presentation that was made at the 2014 annual meeting of the editorial board of "Health Education & Behavior." The article addresses critical issues related to standards of scientific reporting in journals, including concerns about external and internal validity and reporting bias. It reviews current…
Vedantic view on life and consciousness: BN Shanta is correct.
Jagannadham, Medicharla Venkata
2016-01-01
The explanation of Vedanta offered by Bhakti Niskama Shanta (BNS)1 is valid on both scientific and philosophical grounds. The published critique of Shanta's paper by Gustavo Caetano-Anollés (GCA)2 appears purely emotional and offers no valid scientific or philosophical justification. In his rebuttal to Caetano-Anollés's critique, Shanta3 highlighted how the concept of the 'Organic Whole' in Vedanta is completely different from that of the Creationist Movement and Intelligent Design. Thus Caetano-Anollés's attempt to equate Vedanta with the Creationist Movement and Intelligent Design is merely superfluous. This article highlights the validity of the argument made by Bhakti Niskama Shanta1 and thus also intends to clarify why the Caetano-Anollés critique is groundless. PMID:27829974
A History of NASA Remote Sensing Contributions to Archaeology
NASA Technical Reports Server (NTRS)
Giardino, Marco J.
2010-01-01
During its long history of developing and deploying remote sensing instruments, NASA has provided scientific data that have benefited a variety of scientific applications, among them archaeology. Multispectral and hyperspectral instruments mounted on orbiting and suborbital platforms have provided new and important information for the discovery, delineation and analysis of archaeological sites worldwide. Since the early 1970s, several of the ten NASA centers have collaborated with archaeologists to refine and validate the use of active and passive remote sensing for archaeological use. The Stennis Space Center (SSC), located in Mississippi, USA, has been the NASA leader in archaeological research. Together with colleagues from Goddard Space Flight Center (GSFC), Marshall Space Flight Center (MSFC), and the Jet Propulsion Laboratory (JPL), SSC scientists have provided the archaeological community with useful images and sophisticated processing that have pushed the technological frontiers of archaeological research and applications. Successful projects include identifying prehistoric roads in Chaco Canyon, identifying sites from the Lewis and Clark Corps of Discovery exploration, and assessing prehistoric settlement patterns in southeast Louisiana. The Scientific Data Purchase (SDP) stimulated commercial companies to collect archaeological data. At present, NASA formally solicits "space archaeology" proposals through its Earth Science Directorate and continues to assist archaeologists and cultural resource managers in doing their work more efficiently and effectively. This paper focuses on passive remote sensing and does not consider the significant contributions made by NASA active sensors. Hyperspectral data offer new opportunities for future archaeological discoveries.
Assessment Methodology for Process Validation Lifecycle Stage 3A.
Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Chen, Shu; Ingram, Marzena; Spes, Jana
2017-07-01
The paper introduces evaluation methodologies and associated statistical approaches for process validation lifecycle Stage 3A. The assessment tools proposed can be applied to newly developed and launched small-molecule as well as bio-pharma products, where substantial process and product knowledge has been gathered. The following elements may be included in Stage 3A: determination of the number of Stage 3A batches; evaluation of critical material attributes, critical process parameters, and critical quality attributes; in vivo-in vitro correlation; estimation of inherent process variability (IPV) and the PaCS index; the process capability and quality dashboard (PCQd); and an enhanced control strategy. The US FDA guidance on Process Validation: General Principles and Practices, January 2011, encourages applying previous credible experience with suitably similar products and processes. A complete Stage 3A evaluation is a valuable resource for product development and future risk mitigation of similar products and processes. Elements of the 3A assessment were developed to address industry and regulatory guidance requirements. The conclusions made provide sufficient information to make a scientific and risk-based decision on product robustness.
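The PaCS index and PCQd are specific to the paper and not defined in this abstract; as a generic illustration of the kind of statistical evaluation involved, the sketch below computes a standard process performance index (Ppk) from hypothetical Stage 3A batch data:

```python
import numpy as np

def ppk(samples: np.ndarray, lsl: float, usl: float) -> float:
    """Process performance index from observed batch data."""
    mu, sigma = samples.mean(), samples.std(ddof=1)
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# Hypothetical assay values (%) from Stage 3A batches, spec limits 95.0-105.0.
assay = np.array([99.2, 100.1, 99.8, 100.4, 99.5, 100.0, 99.9, 100.6,
                  99.7, 100.2, 99.4, 100.3])
print(f"Ppk = {ppk(assay, 95.0, 105.0):.2f}")  # >= 1.33 is a common capability target
```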
EVER-EST: a virtual research environment for Earth Sciences
NASA Astrophysics Data System (ADS)
Marelli, Fulvio; Albani, Mirko; Glaves, Helen
2016-04-01
There is an increasing requirement for researchers to work collaboratively using common resources whilst being geographically dispersed. By creating a virtual research environment (VRE) using a service oriented architecture (SOA) tailored to the needs of Earth Science (ES) communities, the EVER-EST project will provide a range of both generic and domain-specific data management services to support a dynamic approach to collaborative research. EVER-EST will provide the means to overcome existing barriers to the sharing of Earth Science data and information, allowing research teams to discover, access, share and process heterogeneous data, algorithms, results and experiences within and across their communities, including domains beyond Earth Science. Researchers will be able to seamlessly manage both the data involved in their computationally intensive disciplines and the scientific methods applied in their observations and modelling, which lead to the specific results that need to be attributable, validated and shared both within the community and more widely, e.g. in the form of scholarly communications. Central to the EVER-EST approach is the concept of the Research Object (RO), which provides a semantically rich mechanism to aggregate related resources about a scientific investigation so that they can be shared together using a single unique identifier. Although several e-laboratories are incorporating the research object concept in their infrastructure, the EVER-EST VRE will be the first infrastructure to leverage the concept of Research Objects and their application in observational rather than experimental disciplines. Development of the EVER-EST VRE will leverage the results of several previous projects which have produced state-of-the-art technologies for scientific data management and curation, as well as those which have developed models, techniques and tools for the preservation of scientific methods and their implementation in computational forms such as scientific workflows. The EVER-EST data processing infrastructure will be based on a Cloud Computing approach, in which new applications can be integrated using "virtual machines" that have their own specifications (disk size, processor speed, operating system etc.) and run on shared private (physical deployment over local hardware) or commercial Cloud infrastructures. The EVER-EST e-infrastructure will be validated by four virtual research communities (VRC) covering different multidisciplinary Earth Science domains including: ocean monitoring, natural hazards, land monitoring and risk management (volcanoes and seismicity). Each VRC will use the virtual research environment according to its own specific requirements for data, software, best practice and community engagement. This user-centric approach will allow an assessment to be made of the capability of the proposed solution to satisfy the heterogeneous needs of a variety of Earth Science communities for more effective collaboration, and higher efficiency and creativity in research. EVER-EST is funded by the European Commission's H2020 for three years starting in October 2015. The project is led by the European Space Agency (ESA) and involves some of the major European Earth Science data providers/users including NERC, DLR, INGV, CNR and SatCEN.
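As a purely illustrative sketch of the Research Object idea, aggregating heterogeneous resources under a single unique identifier (the field names below are hypothetical stand-ins, not the actual EVER-EST or wf4ever RO model):

```python
import json
import uuid

# Hypothetical, simplified research-object manifest; the real RO models are
# richer. This only illustrates aggregation under one identifier.
research_object = {
    "id": f"urn:uuid:{uuid.uuid4()}",  # single unique identifier
    "title": "River discharge validation study",
    "creator": "example-researcher",
    "aggregates": [
        {"uri": "data/discharge_obs.csv", "role": "dataset"},
        {"uri": "workflows/calibration.cwl", "role": "workflow"},
        {"uri": "results/validation_report.pdf", "role": "result"},
    ],
    "provenance": {"createdBy": "EVER-EST VRE (hypothetical serialisation)"},
}
print(json.dumps(research_object, indent=2))
```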
NASA Astrophysics Data System (ADS)
Shao, Hongbing
Software testing of scientific software systems often suffers from the test oracle problem, i.e., a lack of test oracles. The Amsterdam discrete dipole approximation code (ADDA) is a scientific software system that can be used to simulate light scattering by scatterers of various types, and testing of ADDA suffers from the test oracle problem. In this thesis work, I established a framework for testing scientific software systems and evaluated this framework using ADDA as a case study. To test ADDA, I first used the CMMIE code as a pseudo-oracle for simulating light scattering by a homogeneous sphere scatterer; comparable results were obtained from ADDA and the CMMIE code, validating ADDA for use with homogeneous sphere scatterers. I then used an experimental result for light scattering by a homogeneous sphere to further validate ADDA for sphere scatterers; ADDA produced light scattering simulations comparable to the experimentally measured result. Then I used metamorphic testing to generate test cases covering scatterers of various geometries, orientations, and homogeneous or non-homogeneous compositions. ADDA was tested under each of these test cases and all tests passed. The use of statistical analysis together with metamorphic testing is discussed as a future direction. In short, using ADDA as a case study, I established a testing framework, including the use of pseudo-oracles, experimental results and metamorphic testing techniques, to test scientific software systems that suffer from test oracle problems. Each of these techniques is necessary and contributes to the testing of the software under test.
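A minimal sketch of the metamorphic-testing idea follows, using a stand-in simulator with a Rayleigh-like scaling law so the harness runs without ADDA; the `simulate` wrapper and its signature are hypothetical, and in practice it would invoke the ADDA binary and parse its output. The metamorphic relation exploits the fact that jointly scaling scatterer size and wavelength preserves the size parameter, so cross sections must scale as the square of the scale factor, and no independent oracle is needed:

```python
import numpy as np

def simulate(radius_um: float, wavelength_um: float) -> float:
    # Stand-in model (Rayleigh-like scaling) so the test harness runs;
    # a real test would invoke the ADDA binary and parse its output.
    x = 2 * np.pi * radius_um / wavelength_um   # size parameter
    return (radius_um ** 2) * x ** 4            # toy scattering cross section

def test_scale_invariance(radius=0.1, wavelength=0.5, k=2.0, rtol=1e-6):
    """Metamorphic relation: scaling scatterer size and wavelength together
    leaves the size parameter unchanged, so cross sections scale as k**2."""
    base = simulate(radius, wavelength)
    scaled = simulate(k * radius, k * wavelength)
    assert np.isclose(scaled, (k ** 2) * base, rtol=rtol), "relation violated"

test_scale_invariance()
print("metamorphic relation holds for the stub simulator")
```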
Interoperable Data Sharing for Diverse Scientific Disciplines
NASA Astrophysics Data System (ADS)
Hughes, John S.; Crichton, Daniel; Martinez, Santa; Law, Emily; Hardman, Sean
2016-04-01
For diverse scientific disciplines to interoperate they must be able to exchange information based on a shared understanding. To capture this shared understanding, we have developed a knowledge representation framework using ontologies and ISO-level archive and metadata registry reference models. This framework provides multi-level governance, evolves independently of implementation technologies, and promotes agile development, namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. The knowledge representation framework is populated through knowledge acquisition from discipline experts. It is also extended to meet specific discipline requirements. The result is a formalized and rigorous knowledge base that addresses data representation, integrity, provenance, context, quantity, and their relationships within the community. The contents of the knowledge base are translated and written to files in appropriate formats to configure system software and services, provide user documentation, validate ingested data, and support data analytics. This presentation will provide an overview of the framework, present the Planetary Data System's PDS4 as a use case that has been adopted by the international planetary science community, describe how the framework is being applied to other disciplines, and share some important lessons learned.
NASA Astrophysics Data System (ADS)
van Leeuwen, F.; de Bruijne, J. H. J.; Arenou, F.; Bakker, J.; Blomme, R.; Busso, G.; Cacciari, C.; Castañeda, J.; Cellino, A.; Clotet, M.; Comoretto, G.; Eyer, L.; González-Núñez, J.; Guy, L.; Hambly, N.; Hobbs, D.; van Leeuwen, M.; Luri, X.; Manteiga, M.; Pourbaix, D.; Roegiers, T.; Salgado, J.; Sartoretti, P.; Tanga, P.; Ulla, A.; Utrilla Molina, E.; Abreu, A.; Altmann, M.; Andrae, R.; Antoja, T.; Audard, M.; Babusiaux, C.; Bailer-Jones, C. A. L.; Barache, C.; Bastian, U.; Beck, M.; Berthier, J.; Bianchi, L.; Biermann, M.; Bombrun, A.; Bossini, D.; Breddels, M.; Brown, A. G. A.; Busonero, D.; Butkevich, A.; Cantat-Gaudin, T.; Carrasco, J. M.; Cheek, N.; Clementini, G.; Creevey, O.; Crowley, C.; David, M.; Davidson, M.; De Angeli, F.; De Ridder, J.; Delbò, M.; Dell'Oro, A.; Diakité, S.; Distefano, E.; Drimmel, R.; Durán, J.; Evans, D. W.; Fabricius, C.; Fabrizio, M.; Fernández-Hernández, J.; Findeisen, K.; Fleitas, J.; Fouesneau, M.; Galluccio, L.; Gracia-Abril, G.; Guerra, R.; Gutiérrez-Sánchez, R.; Helmi, A.; Hernandez, J.; Holl, B.; Hutton, A.; Jean-Antoine-Piccolo, A.; Jevardat de Fombelle, G.; Joliet, E.; Jordi, C.; Juhász, Á.; Klioner, S.; Löffler, W.; Lammers, U.; Lanzafame, A.; Lebzelter, T.; Leclerc, N.; Lecoeur-Taïbi, I.; Lindegren, L.; Marinoni, S.; Marrese, P. M.; Mary, N.; Massari, D.; Messineo, R.; Michalik, D.; Mignard, F.; Molinaro, R.; Molnár, L.; Montegriffo, P.; Mora, A.; Mowlavi, N.; Muinonen, K.; Muraveva, T.; Nienartowicz, K.; Ordenovic, C.; Pancino, E.; Panem, C.; Pauwels, T.; Petit, J.; Plachy, E.; Portell, J.; Racero, E.; Regibo, S.; Reylé, C.; Rimoldini, L.; Ripepi, V.; Riva, A.; Robichon, N.; Robin, A.; Roelens, M.; Romero-Gómez, M.; Sarro, L.; Seabroke, G.; Segovia, J. C.; Siddiqui, H.; Smart, R.; Smith, K.; Sordo, R.; Soria, S.; Spoto, F.; Stephenson, C.; Turon, C.; Vallenari, A.; Veljanoski, J.; Voutsinas, S.
2018-04-01
The second Gaia data release, Gaia DR2, encompasses astrometry, photometry, radial velocities, astrophysical parameters (stellar effective temperature, extinction, reddening, radius, and luminosity), and variability information plus astrometry and photometry for a sample of pre-selected bodies in the solar system. The data collected during the first 22 months of the nominal, five-year mission have been processed by the Gaia Data Processing and Analysis Consortium (DPAC), resulting in this second data release. A summary of the release properties is provided in Gaia Collaboration et al. (2018b). The overall scientific validation of the data is described in Arenou et al. (2018). Background information on the mission and the spacecraft can be found in Gaia Collaboration et al. (2016), with a more detailed presentation of the Radial Velocity Spectrometer (RVS) in Cropper et al. (2018). In addition, Gaia DR2 is accompanied by various dedicated papers that describe the processing and validation of the various data products. Four more Gaia Collaboration papers present a glimpse of the scientific richness of the data. In addition to this set of refereed publications, this documentation provides a detailed, complete overview of the processing and validation of the Gaia DR2 data. Gaia data, from both Gaia DR1 and Gaia DR2, can be retrieved from the Gaia archive, which is accessible from https://archives.esac.esa.int/gaia. The archive also provides various tutorials on data access and data queries plus an integrated data model (i.e., description of the various fields in the data tables). In addition, Luri et al. (2018) provide concrete advice on how to deal with Gaia astrometry, with recommendations on how best to estimate distances from parallaxes. The Gaia archive features an enhanced visualisation service which can be used for quick initial explorations of the entire Gaia DR2 data set. Pre-computed cross matches between Gaia DR2 and a selected set of large surveys are provided. Gaia DR2 represents a major advance with respect to Gaia DR1 in terms of survey completeness, precision and accuracy, and the richness of the published data. Nevertheless, Gaia DR2 is still an early release based on a limited amount of input data, simplifications in the data processing, and imperfect calibrations. Many limitations hence exist which the user of Gaia DR2 should be aware of; they are described in Gaia Collaboration et al. (2018b).
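As a minimal sketch of programmatic access to the archive (one of several routes; it assumes the astroquery package is installed, and the TAP/ADQL endpoint can also be queried directly), a simple ADQL query against the DR2 source table looks like this:

```python
# Minimal sketch of querying the Gaia archive with the astroquery package.
# Table and column names follow the Gaia DR2 data model; the TOP 10 limit
# and the parallax quality cut are arbitrary illustrative choices.
from astroquery.gaia import Gaia

query = """
SELECT TOP 10 source_id, ra, dec, parallax, phot_g_mean_mag
FROM gaiadr2.gaia_source
WHERE parallax_over_error > 10
ORDER BY phot_g_mean_mag
"""
job = Gaia.launch_job(query)   # synchronous; use launch_job_async for large queries
results = job.get_results()    # returns an astropy Table
print(results)
```

Per the advice in Luri et al. (2018), derived quantities such as distances should be estimated from the parallax column with proper treatment of its uncertainty rather than by simple inversion.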
System verification and validation: a fundamental systems engineering task
NASA Astrophysics Data System (ADS)
Ansorge, Wolfgang R.
2004-09-01
Systems Engineering (SE) is the discipline within a project management team that transfers the user's operational needs and justifications for an Extremely Large Telescope (ELT), or any other telescope, into a set of validated required system performance characteristics; subsequently transfers these validated required system performance characteristics into a validated system configuration; and eventually delivers the assembled, integrated telescope system with verified performance characteristics, providing "objective evidence that the particular requirements for the specified intended use are fulfilled". The latter is the ISO Standard 8402 definition of "validation". This presentation describes the verification and validation processes of an ELT project and outlines the key role Systems Engineering plays in these processes throughout all project phases. If these processes are implemented correctly in the project execution and are started at the proper time, namely at the very beginning of the project, and if all capabilities of experienced systems engineers are used, the project costs and the life-cycle costs of the telescope system can be reduced by between 25 and 50%. The intention of this article is to motivate and encourage project managers of astronomical telescopes and scientific instruments to involve the entire spectrum of Systems Engineering capabilities, performed by trained and experienced SYSTEM engineers, for the benefit of the project, by explaining to them the importance of Systems Engineering in the AIV and validation processes.
Fernandez-Hermida, Jose Ramon; Calafat, Amador; Becoña, Elisardo; Tsertsvadze, Alexander; Foxcroft, David R
2012-09-01
To assess external validity characteristics of studies from two Cochrane Systematic Reviews of the effectiveness of universal family-based prevention of alcohol misuse in young people. Two reviewers used an a priori developed external validity rating form and independently assessed three external validity dimensions of generalizability, applicability and predictability (GAP) in randomized controlled trials. The majority (69%) of the 29 included studies were rated 'unclear' on the reporting of sufficient information for judging generalizability from sample to study population. Ten studies (35%) were rated 'unclear' on the reporting of sufficient information for judging applicability to other populations and settings. No study provided an assessment of the validity of the trial end-point measures for subsequent mortality, morbidity, quality of life or other economic or social outcomes. Similarly, no study reported on the validity of surrogate measures using established criteria for assessing surrogate end-points. Studies evaluating the benefits of family-based prevention of alcohol misuse in young people are generally inadequate at reporting information relevant to the generalizability of the findings or their implications for health or social outcomes. Researchers, study authors, peer reviewers, journal editors and scientific societies should take steps to improve the reporting of information relevant to external validity in prevention trials. © 2012 The Authors. Addiction © 2012 Society for the Study of Addiction.
The GEOSCOPE broadband seismic observatory
NASA Astrophysics Data System (ADS)
Douet, Vincent; Vallée, Martin; Zigone, Dimitri; Bonaimé, Sébastien; Stutzmann, Eléonore; Maggi, Alessia; Pardo, Constanza; Bernard, Armelle; Leroy, Nicolas; Pesqueira, Frédéric; Lévêque, Jean-Jacques; Thoré, Jean-Yves; Bes de Berc, Maxime; Sayadi, Jihane
2016-04-01
The GEOSCOPE observatory has provided continuous broadband data to the scientific community for the past 34 years. The 31 operational GEOSCOPE stations are installed in 17 countries, across all continents and on islands throughout the oceans. They are equipped with three-component very broadband seismometers (STS1, T240 or STS2) and 24- or 26-bit digitizers (Q330HR). Seismometers are installed with warpless base plates, which decrease long-period noise on horizontal components by up to 15 dB. All stations send data in real time to the IPGP data center, which transmits them automatically to other data centers (FDSN/IRIS-DMC and RESIF) and to tsunami warning centers. In 2016, three stations are expected to be installed or re-installed: in Western China (WUS station), on Saint Pierre and Miquelon Island (off the east coast of Canada) and in Wallis and Futuna (Southwest Pacific Ocean). The waveform data are technically validated by IPGP (25 stations) or EOST (6 stations) in order to check their continuity and integrity. Scientific data validation is also performed by analyzing the seismic noise level of the continuous data and by comparing real and synthetic earthquake waveforms (body waves). After these validations, data are archived by the IPGP data center in Paris. They are made available to the international scientific community through different interfaces (see details on http://geoscope.ipgp.fr). Data are duplicated at the FDSN/IRIS-DMC data center and a similar duplication at the French national data center RESIF will be operational in 2016. The GEOSCOPE broadband seismic observatory also provides near-real-time information on global moderate-to-large seismicity (above magnitude 5.5-6) through the automated application of the SCARDEC method (Vallée et al., 2011). By using global data from the FDSN - in particular from GEOSCOPE and IRIS/USGS stations - earthquake source parameters (depth, moment magnitude, focal mechanism, source time function) are determined about 45 minutes after the occurrence of the event. A specific webpage is then generated for each earthquake, which also includes information for a non-seismologist audience (past seismicity, foreshocks and aftershocks, 3D representations of the fault motion…). Examples for recent earthquakes can be seen on http://geoscope.ipgp.fr/index.php/en/data/earthquake-data/latest-earthquakes.
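Since GEOSCOPE data are distributed through standard FDSN web services, waveforms can be fetched with generic clients. A minimal sketch with ObsPy follows; the network code G is GEOSCOPE, while the station, location and channel codes below are illustrative and should be checked against the station pages on http://geoscope.ipgp.fr:

```python
# Minimal sketch of retrieving GEOSCOPE waveforms through FDSN web services.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IPGP")  # GEOSCOPE data are served by the IPGP data center
t0 = UTCDateTime("2015-09-16T22:54:32")  # approximate Illapel earthquake origin time
st = client.get_waveforms(network="G", station="SSB", location="00",
                          channel="BHZ", starttime=t0, endtime=t0 + 3600)
st.plot()  # one hour of vertical broadband data
```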
Recent evolutions of the GEOSCOPE broadband seismic observatory
NASA Astrophysics Data System (ADS)
Vallée, Martin; Zigone, Dimitri; Bonaimé, Sébastien; Thoré, Jean-Yves; Pesqueira, Frédéric; Pardo, Constanza; Bernard, Armelle; Stutzmann, Eléonore; Maggi, Alessia; Douet, Vincent; Sayadi, Jihane; Lévêque, Jean-Jacques
2017-04-01
The GEOSCOPE observatory provides 35 years of continuous broadband data to the scientific community. The 32 operational GEOSCOPE stations are installed in 17 countries, across all continents and on islands throughout the oceans. They are equipped with three-component very broadband seismometers (STS1 or STS2) and 24- or 26-bit digitizers (Q330HR). Seismometers are installed with warpless base plates, which decrease long-period noise on horizontal components by up to 15 dB. All stations send data in real time to the GEOSCOPE data center, from which they are automatically transmitted to other data centers (IRIS-DMC and RESIF) and to tsunami warning centers. In 2016, a new station was installed in Wallis and Futuna (FUTU, South-Western Pacific Ocean), and the WUS station was reinstalled in Western China. Station data are technically validated by IPGP (25 stations) or EOST (6 stations) in order to check their continuity and integrity. Scientific data validation is also performed by analyzing the seismic noise level of the continuous data and by comparing real and synthetic earthquake waveforms (body waves). After these validations, data are archived by the GEOSCOPE data center in Paris. They are made available to the international scientific community through different interfaces (see details on http://geoscope.ipgp.fr). Important technical work is being done to homogenize the data formats of the whole GEOSCOPE database, in order to ease data duplication at the IRIS-DMC and RESIF data centers. The GEOSCOPE broadband seismic observatory also provides near-real-time information on the world's large seismicity (above magnitude 5.5-6) through the automated application of the SCARDEC method. By using global data from the FDSN - in particular from GEOSCOPE and IRIS/USGS stations - earthquake source parameters (depth, moment magnitude, focal mechanism, source time function) are determined about 45 minutes after the occurrence of the event. A specific webpage is then generated for each earthquake, which also includes information for a non-seismologist audience (past seismicity, foreshocks and aftershocks, 3D representations of the fault motion…). Examples for recent earthquakes can be seen at http://geoscope.ipgp.fr/index.php/en/data/earthquake-data/latest-earthquakes. This procedure has also been applied to past earthquakes since 1992, resulting in a database of more than 3000 source time functions (http://scardec.projects.sismo.ipgp.fr/).
[The ethical aspects of physiological experiment].
Al'bertin, S V
2014-01-01
A modern classification of invasive procedures, developed according to international bioethical principles, is presented. The experimental data convincingly demonstrate that the use of noninvasive approaches and techniques makes it possible to reduce the number of animals recruited into an experiment as well as to preserve the animals' normal (non-distressed) physiological functions. The data presented stress that the development of noninvasive techniques is closely related to both the scientific and the social aspects of our life, allowing scientists to ensure high validity of the experimental data obtained while preserving their own humanity.
Bertucci, W; Duc, S; Villerius, V; Pernin, J N; Grappe, F
2005-12-01
The SRM power measuring crank system is nowadays a popular device for cycling power output (PO) measurements in the field and in laboratories. The PowerTap (CycleOps, Madison, USA) is a more recent and less well-known device that allows mobile PO measurement via the rear wheel hub. The aim of this study is to test the validity and reliability of the PowerTap by comparing it with the most accurate version (i.e. the scientific model) of the SRM system. The validity of the PowerTap is tested during i) sub-maximal incremental intensities (ranging from 100 to 420 W) on a treadmill with different pedalling cadences (45 to 120 rpm) and cycling positions (standing and seated) on different grades, ii) a continuous sub-maximal intensity lasting 30 min, iii) a maximal intensity (8-s sprint), and iv) real road cycling. The reliability is assessed by repeating the sub-maximal incremental and continuous tests ten times. The results show good validity of the PowerTap during sub-maximal intensities between 100 and 450 W (mean PO difference -1.2 +/- 1.3 %) when it is compared to the scientific SRM model, but less validity for maximal PO during sprint exercise, where the validity appears to depend on the gear ratio. The reliability of the PowerTap during sub-maximal intensities is similar to that of the scientific SRM model (the coefficient of variation is 0.9 to 2.9 % for the PowerTap and 0.7 to 2.1 % for the SRM). The PowerTap can be considered a suitable device for PO measurement during sub-maximal real road cycling and in sub-maximal laboratory tests.
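For readers unfamiliar with the statistics quoted, the sketch below computes the two headline quantities, the mean power difference against the SRM and the trial-to-trial coefficient of variation, on made-up numbers (not the study's data):

```python
import numpy as np

# Illustrative validity/reliability metrics of the kind reported in the
# abstract; the power values below are invented, not the study's data.
powertap = np.array([251.0, 249.3, 252.1, 250.4, 248.8, 251.6])  # W, repeated trials
srm      = np.array([253.2, 252.0, 254.1, 252.7, 251.5, 253.9])  # W, same trials

bias_pct = 100 * np.mean((powertap - srm) / srm)       # mean PO difference (%)
cv_pct = 100 * powertap.std(ddof=1) / powertap.mean()  # coefficient of variation (%)
print(f"mean difference vs SRM: {bias_pct:+.1f} %")
print(f"trial-to-trial CV: {cv_pct:.1f} %")
```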
NASA Astrophysics Data System (ADS)
Cegnar, T.; Benestad, R.; Billard, C.
2010-09-01
The EMS Media team recognises that: Scientific knowledge is valuable for society, but it also becomes fragile in a media-dominated society where the distortion of facts clouds the validity of the information. The use of scientific titles in communication normally brings expectations of high standards regarding the information content. Freedom of speech is fragile in the sense that it can be diluted by a high proportion of false information. The value of scientific and scholastic titles is degraded when they are used to give the impression of false validity. Science communication is powerful, and implies a certain responsibility and ethical standard. The scientific community operates with a more or less tacit ethics code in all areas touching the scientists' activities. Even though many scientific questions cannot be completely resolved, there is a set of established and unequivocal scientific practices, methods, and tests, on which our scientific knowledge rests. Scientists are assumed to master these scientific practices, methods, and tests. High standards in science-related communication and media exposure, openness, and honesty will increase the relevance of science, academies, and scientists in society, in addition to benefiting society itself. Science communication is important to maintain and enhance the general appreciation of science. The value of the role of science is likely to increase with a reduced distance between scientists and society and a lower knowledge barrier. An awareness of the ethical aspects of science and science communication may aid scientists in making decisions about how and what to say. Scientists are often not trained in communication or ethics. A set of guidelines may lower the barrier for scientists concerned about tacit codes to come forward and talk to the media.

Recommendations: The mass media should seek more insight into scientific knowledge, history, principles, and societies. Journalists and artists should be encouraged and receive support to attend the large scientific conferences organised by e.g. the EMS, EGU, AMS, and the AGU. National meteorological societies can contribute by promoting the idea of media participation, e.g. through statements and letters of opinion to newspapers, on TV and on radio. They can point to media awards and best-practice examples (such as the Norwegian collaboration between the national broadcasting corporation and the meteorological service, yr.no). Tacit ethics codes and expectations of scientists should be spelled out. The role of scientists should be clear, and national academies and member organisations are encouraged to provide a clear list of expectations. Statements drawing on the authority of science should have a basis in well-established and unequivocal scientific practices, methods, and tests. This means, for instance, that analysis and statistics must conform to well-established robust methods, avoiding 'cherry picking' and the misrepresentation of data. The information should also, to the greatest possible degree, be based on open and transparent methods and data.
Ethnicity in Dutch health research: situating scientific practice.
Helberg-Proctor, Alana; Meershoek, Agnes; Krumeich, Anja; Horstman, Klasien
2016-10-01
A growing body of work is examining the role health research itself plays in the construction of 'ethnicity.' We discuss the results of our investigation as to how the political, social, and institutional dynamics of the context in which health research takes place affect the manner in which knowledge about ethnicity and health is produced. Qualitative content analysis of academic publications, interviews with biomedical and health researchers, and participant observation at various conferences and scientific events. We identified four aspects of the context in which Dutch research takes place that are relevant to biomedical and health-research practices: firstly, the 'diversity' and 'inclusion' policies of the major funding institution; secondly, the official Dutch national ethnic registration system; thirdly, the size of the Netherlands and the problem of small sample sizes; and lastly, the need for researchers to use meaningful ethnic categories when publishing in English-language journals. Our analysis facilitates the understanding of how specific ethnicities are constructed in this field and provides fruitful insight into the socio-scientific co-production of ethnicity, and specifically into the manner in which common-sense ethnic categories and hierarchies are granted scientific validity through academic publication and are subsequently used in clinical guidelines and policy.
Specifics on a XML Data Format for Scientific Data
NASA Astrophysics Data System (ADS)
Shaya, E.; Thomas, B.; Cheung, C.
An XML-based data format for the interchange and archiving of scientific data would benefit in many ways from the features standardized in XML. Foremost of these features is the world-wide acceptance and adoption of XML. Applications coming out of industry and academia, such as browsers, XQL and XSQL advanced query, XML editing, and CSS or XSLT transformation, can be easily adopted and provide startling new benefits and features. We have designed a prototype of a core format for holding, in a very general way, parameters, tables, scalar and vector fields, atlases, animations and complex combinations of these. This eXtensible Data Format (XDF) makes use of XML functionalities such as: self-validation of document structure, default values for attributes, XLink hyperlinks, entity replacements, internal referencing, inheritance, and XSLT transformation. An API is available to aid in detailed assembly, extraction, and manipulation. Conversion tools to and from FITS and other existing data formats are under development. In the future, we hope to provide object-oriented interfaces to C++, Java, Python, IDL, Mathematica, Maple, and various databases. http://xml.gsfc.nasa.gov/XDF
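As a purely illustrative sketch (the element and attribute names below are hypothetical stand-ins, not the published XDF schema), a parameter-plus-table dataset of the kind described could be assembled and serialised like this:

```python
import xml.etree.ElementTree as ET

# Purely illustrative XDF-like document; element/attribute names are
# hypothetical stand-ins, not the published XDF schema.
root = ET.Element("XDF", name="example-dataset")
param = ET.SubElement(root, "parameter", name="exposure", units="s")
param.text = "300"

table = ET.SubElement(root, "table", name="photometry")
ET.SubElement(table, "field", name="ra", units="deg")
ET.SubElement(table, "field", name="mag", units="mag")
data = ET.SubElement(table, "data")
data.text = "187.25 14.2\n187.31 15.6"

print(ET.tostring(root, encoding="unicode"))
```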
Can Mary Shelley's Frankenstein be read as an early research ethics text?
Davies, H
2004-06-01
The current popular view of the novel Frankenstein is that it describes the horrors consequent upon scientific experimentation; the pursuit of science leading inevitably to tragedy. In reality the importance of the book is far from this. Although the evil and tragedy resulting from one medical experiment are its theme, a critical and fair reading finds a more balanced view that includes science's potential to improve the human condition and reasons why such an experiment went awry. The author argues that Frankenstein is an early and balanced text on the ethics of research upon human subjects and that it provides insights that are as valid today as when the novel was written. As a narrative it provides a gripping story that merits careful analysis by those involved in medical research and its ethical review, and it is more enjoyable than many current textbooks! To support this thesis, the author will place the book in historical, scientific context, analyse it for lessons relevant to those involved in research ethics today, and then draw conclusions.
Baur, Xaver; Budnik, Lygia Therese; Ruff, Kathleen; Egilman, David S; Lemen, Richard A; Soskolne, Colin L
2015-01-01
Clinical and public health research, education, and medical practice are vulnerable to influence by corporate interests driven by the for-profit motive. Developments over the last 10 years have shown that transparency and self-reporting of corporate ties do not always mitigate bias. In this article, we provide examples of how sound scientific reasoning and evidence-gathering are undermined through compromised scientific enquiry, resulting in misleading science, decision-making, and policy intervention. Various medical disciplines provide reference literature essential for informing public, environmental, and occupational health policy. Published literature impacts clinical and laboratory methods, the validity of respective clinical guidelines, and the development and implementation of public health regulations. Said literature is also used in expert testimony related to resolving tort actions on work-related illnesses and environmental risks. We call for increased sensitivity, full transparency, and the implementation of effective ethical and professional praxis rules at all relevant regulatory levels to root out inappropriate corporate influence in science. This is needed because the integrity of scientists exposed to such influence cannot be depended upon.
Lunar International Science Coordination/Calibration Targets
NASA Technical Reports Server (NTRS)
Head, J. W.; Issacson, P.; Petro, N.; Runyon, C.; Ohtake, M.; Foing, B.; Grande, M.
2007-01-01
A new era of international lunar exploration has begun and will expand over the next four years with data acquired from at least four sophisticated remote sensing missions: KAGUYA (SELENE) [Japan], Chang'E [China], Chandrayaan-1 [India], and LRO [United States]. It is recognized that this combined activity at the Moon with modern sophisticated sensors will provide unprecedented new information about the Moon and will dramatically improve our understanding of Earth's nearest neighbor. It is anticipated that the blooming of scientific exploration of the Moon by nations involved in space activities will seed and foster peaceful international coordination and cooperation that will benefit all. Summarized here are eight Lunar International Science Coordination/Calibration Targets (L-ISCT) that are intended to a) allow cross-calibration of diverse multi-national instruments and b) provide a focus for training young scientists about a range of lunar science issues. The targets, discussed at several scientific forums, were selected for coordinated science and instrument calibration of orbital data. All instrument teams are encouraged to participate in a coordinated activity of early-release data that will improve calibration and validation of data across independent and diverse instruments.
NASA Astrophysics Data System (ADS)
Frailis, M.; Maris, M.; Zacchei, A.; Morisset, N.; Rohlfs, R.; Meharga, M.; Binko, P.; Türler, M.; Galeotta, S.; Gasparo, F.; Franceschi, E.; Butler, R. C.; D'Arcangelo, O.; Fogliani, S.; Gregorio, A.; Lowe, S. R.; Maggio, G.; Malaspina, M.; Mandolesi, N.; Manzato, P.; Pasian, F.; Perrotta, F.; Sandri, M.; Terenzi, L.; Tomasi, M.; Zonca, A.
2009-12-01
The Level 1 of the Planck LFI Data Processing Centre (DPC) is devoted to the handling of the scientific and housekeeping telemetry. It is a critical component of the Planck ground segment, which has to strictly commit to the project schedule to be ready for the launch and flight operations. In order to guarantee the quality necessary to achieve the objectives of the Planck mission, the design and development of the Level 1 software have followed the ESA Software Engineering Standards. A fundamental step in the software life cycle is the verification and validation of the software. The purpose of this work is to show an example of procedures, test development and analysis successfully applied to a key software project of an ESA mission. We present the end-to-end validation tests performed on the Level 1 of the LFI-DPC, detailing the methods used and the results obtained. Different approaches have been used to test the scientific and housekeeping data processing. Scientific data processing has been tested by injecting signals with known properties directly into the acquisition electronics, in order to generate a test dataset of real telemetry data and reproduce nominal conditions as closely as possible. For the housekeeping telemetry processing, validation software has been developed to inject known parameter values into a set of real housekeeping packets and perform a comparison with the corresponding timelines generated by the Level 1. With the proposed validation and verification procedure, where the on-board and ground processing are viewed as a single pipeline, we demonstrated that the scientific and housekeeping processing of the Planck-LFI raw data is correct and meets the project requirements.
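The injection-and-comparison strategy described for the housekeeping telemetry reduces to a simple round-trip pattern: encode known values, run them through the decoder, and compare. The sketch below illustrates the pattern with a toy packet layout (the real Planck packet structure is far richer and is not reproduced here):

```python
import struct

# Toy round-trip check in the spirit of the LFI Level 1 HK validation:
# known parameter values are injected into a packet, run through the
# decoder, and the reconstructed timeline is compared with the input.
# The packet layout here is hypothetical, not the real Planck format.

def encode_hk(values):
    """Pack a list of 16-bit housekeeping readings into a packet body."""
    return struct.pack(f">{len(values)}H", *values)

def decode_hk(packet, count):
    """Level-1-style decoding back into engineering values."""
    return list(struct.unpack(f">{count}H", packet))

injected = [1023, 2047, 4095, 512, 256]      # known test values
timeline = decode_hk(encode_hk(injected), len(injected))
assert timeline == injected, "HK processing corrupted the injected values"
print("end-to-end HK round trip validated:", timeline)
```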
Clickstream data yields high-resolution maps of science.
Bollen, Johan; Van de Sompel, Herbert; Hagberg, Aric; Bettencourt, Luis; Chute, Ryan; Rodriguez, Marko A; Balakireva, Lyudmila
2009-01-01
Intricate maps of science have been created from citation data to visualize the structure of scientific activity. However, most scientific publications are now accessed online. Scholarly web portals record detailed log data at a scale that exceeds the number of all existing citations combined. Such log data is recorded immediately upon publication and keeps track of the sequences of user requests (clickstreams) that are issued by a variety of users across many different domains. Given these advantages of log datasets over citation data, we investigate whether they can produce high-resolution, more current maps of science. Over the course of 2007 and 2008, we collected nearly 1 billion user interactions recorded by the scholarly web portals of some of the most significant publishers, aggregators and institutional consortia. The resulting reference data set covers a significant part of world-wide use of scholarly web portals in 2006, and provides a balanced coverage of the humanities, social sciences, and natural sciences. A journal clickstream model, i.e. a first-order Markov chain, was extracted from the sequences of user interactions in the logs. The clickstream model was validated by comparing it to the Getty Research Institute's Architecture and Art Thesaurus. The resulting model was visualized as a journal network that outlines the relationships between various scientific domains and clarifies the connection of the social sciences and humanities to the natural sciences. Maps of science resulting from large-scale clickstream data provide a detailed, contemporary view of scientific activity and correct the underrepresentation of the social sciences and humanities that is commonly found in citation data.
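The journal clickstream model mentioned above is a first-order Markov chain over journals, with transition probabilities estimated from consecutive requests within user sessions. As an illustration only (the sessions and journal names below are invented, and this is not the authors' code), a minimal Python sketch:

from collections import defaultdict

def markov_transitions(sessions):
    # Estimate first-order Markov transition probabilities from
    # clickstream sessions (ordered lists of requested journals).
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for src, dst in zip(session, session[1:]):
            counts[src][dst] += 1
    # Normalize each row so outgoing probabilities sum to one.
    return {src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
            for src, dsts in counts.items()}

# Hypothetical sessions: each is one user's sequence of journal requests.
sessions = [["Nature", "Science", "Cell"],
            ["Nature", "Cell"],
            ["Science", "Nature", "Science"]]
print(markov_transitions(sessions)["Nature"])  # {'Science': 0.66..., 'Cell': 0.33...}

A row of the resulting matrix gives, for each journal, the probability that the next request in a session lands on each other journal; the visualized journal network is derived from such transition weights.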
The On-Line Uv/Vis Spectra Data Base An Example For Interactive Access To Scientific Information
NASA Astrophysics Data System (ADS)
Noelle, A.; Hartmann, G.; Richter, A.
2003-04-01
The basic concept of the on-line "UV/Vis Spectra Data Base" is to provide useful information to the scientific community on a proper basis, especially in times where scientific information becomes more and more a commercial product and is therefore often not within the financial means of those people who actually generated the information. Besides the EGS activities in peer reviewed open access e-publishing (e.g. the journal "Atmospheric Chemistry and Physics", ACP) this concept can help the community to reduce the "digital divide" for scientific and technical information. The on-line data base is maintained by a team consisting of the data base providers, the data producers and the users. The long-term scientific success depends on the close cooperation of this team. Therefore all scientists are encouraged to join this cooperative effort and support the data base either actively or passively. Active support means the provision of missing or newly measured validated spectral data for inclusion in the data base. Although there is a moderate annual maintenance fee for the data base utilization, those scientists who actively support the data base can use the data base free-of-charge. There is also the possibility to support the data base passively by subscription to the data base. Even those scientists who do not support the data base can benefit from the "Literature Service" which is free-of-charge. This data base concept differs from other commercial activities in this area and matches the philosophy of Copernicus.
Working towards a consensus for antibody validation.
Reiss, Peter D; Min, Danxi; Leung, Mei Y
2014-01-01
Commercial research antibodies are the most commonly used product in the life science tools market, and their applications represent a significant investment of time and resources for researchers. Frequently, however, the quality of antibodies does not meet the expectations of consumers, causing loss of valuable time and money. This can delay research efforts and scientific discovery, or even lead to false, irreproducible results being published in the scientific literature. This raises the question of whether there should be universal standards for validating antibodies. During the 1st International Antibody Validation Forum, hosted by St John's Laboratory Ltd on October 15th, 2014 at Queen Mary University of London, scientists from academia and industry presented data highlighting quality issues arising from lack of antibody validation. While the forum identified significant current problems in the antibody market, it also discussed future opportunities for improved quality and transparency by encouraging data disclosure and data sharing. This article highlights the key issues and conclusions reached at the forum.
Data preservation at the Fermilab Tevatron
Amerio, S.; Behari, S.; Boyd, J.; ...
2017-01-22
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. Lastly, these efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.
Lessons learned in managing crowdsourced data in the Alaskan Arctic.
NASA Astrophysics Data System (ADS)
Mastracci, Diana
2017-04-01
There is perhaps no place in which the consequences of global climate change can be felt more acutely than the Arctic. However, due to lack of measurements at the high latitudes, validation processes are often problematic. Citizen science projects, co-designed together with Native communities at the interface of traditional knowledge and scientific research, could play a major role in climate change adaptation strategies by advancing knowledge of the Arctic system, strengthening inter-generational bonds and facilitating improved knowledge transfer. This presentation shares lessons learned from a pilot project in the Alaskan Arctic, in which innovative approaches were used to design climate change adaptation strategies to support young subsistence hunters in taking in-situ measurements whilst out on the sea-ice. Both the socio-cultural and hardware/software challenges presented here could provide useful guidance for future programs that aim to integrate citizens' observations with scientific data in Arctic communities.
Debating complexity in modeling
Hunt, Randall J.; Zheng, Chunmiao
1999-01-01
As scientists trying to understand the natural world, how should our effort be apportioned? We know that the natural world is characterized by complex and interrelated processes. Yet do we need to explicitly incorporate these intricacies to perform the tasks we are charged with? In this era of expanding computer power and development of sophisticated preprocessors and postprocessors, are bigger machines making better models? Put another way, do we understand the natural world better now with all these advancements in our simulation ability? Today the public's patience for long-term projects producing indeterminate results is wearing thin. This increases pressure on the investigator to use the appropriate technology efficiently. On the other hand, bringing scientific results into the legal arena opens up a new dimension to the issue: to the layperson, a tool that includes more of the complexity known to exist in the real world is expected to provide the more scientifically valid answer.
Greher, Michael R; Wodushek, Thomas R
2017-03-01
Performance validity testing refers to neuropsychologists' methodology for determining whether neuropsychological test performances completed in the course of an evaluation are valid (ie, the results of true neurocognitive function) or invalid (ie, overly impacted by the patient's effort/engagement in testing). This determination relies upon the use of either standalone tests designed for this sole purpose, or specific scores/indicators embedded within traditional neuropsychological measures that have demonstrated this utility. In response to a greater appreciation for the critical role that performance validity issues play in neuropsychological testing and the need to measure this variable to the best of our ability, the scientific base for performance validity testing has expanded greatly over the last 20 to 30 years. As such, the majority of current day neuropsychologists in the United States use a variety of measures for the purpose of performance validity testing as part of everyday forensic and clinical practice and address this issue directly in their evaluations. The following is the first article of a 2-part series that will address the evolution of performance validity testing in the field of neuropsychology, both in terms of the science as well as the clinical application of this measurement technique. The second article of this series will review performance validity tests in terms of methods for development of these measures, and maximizing of diagnostic accuracy.
Sabour, Siamak
2018-03-08
The purpose of this letter, in response to Hall, Mehta, and Fackrell (2017), is to provide important knowledge about methodology and statistical issues in assessing the reliability and validity of an audiologist-administered tinnitus loudness matching test and a patient-reported tinnitus loudness rating. The author uses reference textbooks and published articles regarding scientific assessment of the validity and reliability of a clinical test to discuss the statistical test and the methodological approach in assessing validity and reliability in clinical research. Depending on the type of the variable (qualitative or quantitative), well-known statistical tests can be applied to assess reliability and validity. The qualitative variables of sensitivity, specificity, positive predictive value, negative predictive value, false positive and false negative rates, likelihood ratio positive and likelihood ratio negative, as well as odds ratio (i.e., ratio of true to false results), are the most appropriate estimates to evaluate validity of a test compared to a gold standard. In the case of quantitative variables, depending on distribution of the variable, Pearson r or Spearman rho can be applied. Diagnostic accuracy (validity) and diagnostic precision (reliability or agreement) are two completely different methodological issues. Depending on the type of the variable (qualitative or quantitative), well-known statistical tests can be applied to assess validity.
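For readers unfamiliar with these estimates, a minimal Python sketch computes the validity measures named above from the four cells of a 2x2 table of test results against a gold standard; the counts are invented for illustration:

def diagnostic_validity(tp, fp, fn, tn):
    # Validity estimates for a qualitative test versus a gold standard.
    sens = tp / (tp + fn)              # sensitivity
    spec = tn / (tn + fp)              # specificity
    ppv = tp / (tp + fp)               # positive predictive value
    npv = tn / (tn + fn)               # negative predictive value
    lr_pos = sens / (1 - spec)         # likelihood ratio positive
    lr_neg = (1 - sens) / spec         # likelihood ratio negative
    odds = (tp * tn) / (fp * fn)       # odds ratio (true to false results)
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                lr_pos=lr_pos, lr_neg=lr_neg, odds_ratio=odds)

# Hypothetical counts: 80 true positives, 10 false positives,
# 20 false negatives, 90 true negatives.
print(diagnostic_validity(tp=80, fp=10, fn=20, tn=90))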
Semantic Web Compatible Names and Descriptions for Organisms
NASA Astrophysics Data System (ADS)
Wang, H.; Wilson, N.; McGuinness, D. L.
2012-12-01
Modern scientific names are critical for understanding the biological literature and provide a valuable way to understand evolutionary relationships. To validly publish a name, a description is required to separate the described group of organisms from those described by other names at the same level of the taxonomic hierarchy. The frequent revision of descriptions due to new evolutionary evidence has led to situations where a single given scientific name may over time have multiple descriptions associated with it and a given published description may apply to multiple scientific names. Because of these many-to-many relationships between scientific names and descriptions, the usage of scientific names as a proxy for descriptions is inevitably ambiguous. Another issue lies in the fact that the precise application of scientific names often requires careful microscopic work, or increasingly, genetic sequencing, as scientific names are focused on the evolutionary relatedness between and within named groups such as species, genera, families, etc. This is problematic for many audiences, especially field biologists, who often do not have access to the instruments and tools required to make identifications on a microscopic or genetic basis. To better connect scientific names to descriptions and find a more convenient way to support computer-assisted identification, we propose the Semantic Vernacular System, a novel naming system that creates named, machine-interpretable descriptions for groups of organisms, and is compatible with the Semantic Web. Unlike the evolutionary-relationship-based scientific naming system, it emphasizes the observable features of organisms. By independently naming the descriptions composed of sets of observational features, as well as maintaining connections to scientific names, it preserves the observational data used to identify organisms. The system is designed to support a peer-review mechanism for creating new names, and uses a controlled vocabulary encoded in the Web Ontology Language to represent the observational features. A prototype of the system is currently under development in collaboration with the Mushroom Observer website. It allows users to propose new names and descriptions for fungi, provide feedback on those proposals, and ultimately have them formally approved. It relies on SPARQL queries and semantic reasoning for data management. This effort will offer the mycology community a knowledge base of fungal observational features and a tool for identifying fungal observations. It will also serve as an operational specification of how the Semantic Vernacular System can be used in practice in one scientific community (in this case mycology).
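To make the prototype's data management concrete, here is a minimal Python sketch using rdflib; the svf: vocabulary, the example fungus and its features are invented placeholders, not the actual Semantic Vernacular System ontology:

from rdflib import Graph

# Tiny, hypothetical Turtle snippet: a vernacular name described by
# observable features and linked to a scientific name.
data = """
@prefix svf: <http://example.org/svf#> .
svf:RedCappedBolete
    svf:hasFeature svf:RedCap, svf:PorousUnderside ;
    svf:linkedScientificName "Boletus rubriceps" .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Find all vernacular names whose descriptions include a red cap.
query = """
PREFIX svf: <http://example.org/svf#>
SELECT ?name ?sci WHERE {
    ?name svf:hasFeature svf:RedCap ;
          svf:linkedScientificName ?sci .
}
"""
for name, sci in g.query(query):
    print(name, sci)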
NASA Technical Reports Server (NTRS)
Brubaker, N.; Jedlovec, G. J.
2004-01-01
With the preliminary release of AIRS Level 1 and 2 data to the scientific community, there is a growing need for an accurate AIRS cloud mask for data assimilation studies and for producing products derived from cloud-free radiances. Current cloud information provided with the AIRS data is limited or based on simplified threshold tests. A multispectral cloud detection approach has been developed for AIRS that utilizes its hyperspectral capabilities to detect clouds based on specific cloud signatures across the shortwave and longwave infrared window regions. This new AIRS cloud mask has been validated against the existing AIRS Level 2 cloud product and cloud information derived from MODIS. Preliminary results for both day and night applications over the continental U.S. are encouraging. Details of the cloud detection approach and validation results will be presented at the conference.
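The abstract does not disclose the actual AIRS cloud signatures or thresholds, but the flavor of an infrared window test that multispectral cloud masks build on can be sketched in Python; the threshold values below are invented placeholders, not AIRS parameters:

import numpy as np

def simple_ir_cloud_test(bt11, bt12, bt11_min=270.0, btd_max=2.5):
    # Flag a pixel cloudy if its 11-um brightness temperature is cold
    # (opaque cloud) or the 11-12 um difference is large (thin cirrus).
    bt11 = np.asarray(bt11, dtype=float)
    bt12 = np.asarray(bt12, dtype=float)
    cold = bt11 < bt11_min
    cirrus = (bt11 - bt12) > btd_max
    return cold | cirrus

# Hypothetical brightness temperatures (kelvin) for four pixels.
print(simple_ir_cloud_test([295.0, 265.0, 290.0, 288.0],
                           [294.0, 264.0, 286.5, 287.5]))
# [False  True  True False]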
Pediatric Cancer Survivorship Research: Experience of the Childhood Cancer Survivor Study
Leisenring, Wendy M.; Mertens, Ann C.; Armstrong, Gregory T.; Stovall, Marilyn A.; Neglia, Joseph P.; Lanctot, Jennifer Q.; Boice, John D.; Whitton, John A.; Yasui, Yutaka
2009-01-01
The Childhood Cancer Survivor Study (CCSS) is a comprehensive multicenter study designed to quantify and better understand the effects of pediatric cancer and its treatment on later health, including behavioral and sociodemographic outcomes. The CCSS investigators have published more than 100 articles in the scientific literature related to the study. As with any large cohort study, high standards for methodologic approaches are imperative for valid and generalizable results. In this article we describe methodological issues of study design, exposure assessment, outcome validation, and statistical analysis. Methods for handling missing data, intrafamily correlation, and competing risks analysis are addressed; each with particular relevance to pediatric cancer survivorship research. Our goal in this article is to provide a resource and reference for other researchers working in the area of long-term cancer survivorship. PMID:19364957
TIE: An Ability Test of Emotional Intelligence
Śmieja, Magdalena; Orzechowski, Jarosław; Stolarski, Maciej S.
2014-01-01
The Test of Emotional Intelligence (TIE) is a new ability scale based on a theoretical model that defines emotional intelligence as a set of skills responsible for the processing of emotion-relevant information. Participants are provided with descriptions of emotional problems, and asked to indicate which emotion is most probable in a given situation, or to suggest the most appropriate action. Scoring is based on the judgments of experts: professional psychotherapists, trainers, and HR specialists. The validation study showed that the TIE is a reliable and valid test, suitable for both scientific research and individual assessment. Its internal consistency measures were as high as .88. In line with the theoretical model of emotional intelligence, the results of the TIE shared about 10% of common variance with a general intelligence test, and were independent of major personality dimensions. PMID:25072656
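An internal consistency of .88 conventionally refers to Cronbach's alpha; as a hedged illustration (simulated data, not the TIE item scores), a minimal Python sketch:

import numpy as np

def cronbach_alpha(scores):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated scores: 200 respondents on four correlated items.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
items = base + 0.5 * rng.normal(size=(200, 4))
print(round(cronbach_alpha(items), 2))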
Science Requirements Document for OMI-EOS. 2
NASA Technical Reports Server (NTRS)
Bhartia, P. K.; Chance, K.; Isaksen, I.; Levelt, P. F.; Boersma, F.; Brinksma, E.; Carpay, J.; vanderA, R.; deHaan, J.; Hilsenrath, E.
2000-01-01
A Dutch-Finnish scientific and industrial consortium is supplying the Ozone Monitoring Instrument (OMI) for Earth Observing System-Aura (EOS-Aura). EOS-Aura is the next NASA mission to study the Earth's atmosphere extensively, and successor to the highly successful UARS (Upper Atmospheric Research Satellite) mission. The 'Science Requirements Document for OMI-EOS' presents an overview of the Aura and OMI mission objectives. It describes how OMI fits into the Aura mission and it reviews the synergy with the other instruments onboard Aura to fulfill the mission. This leads to the Scientific Requirements for OMI (Chapter 3), stating which trace gases have to be measured with what accuracy in order for OMI to meet Aura's objectives. The most important data product of OMI, the ozone vertical column densities, shall have better accuracy and improved global coverage compared with the predecessor instruments TOMS (Total Ozone Mapping Spectrometer) and GOME (Global Ozone Monitoring Experiment), achieved among other things by a better signal-to-noise ratio, improved calibration and a wide field-of-view. Moreover, in order to fulfill its role on Aura, OMI shall measure trace gases, such as NO2, OClO, BrO, HCHO and SO2, aerosols, cloud top height and cloud coverage. Improved accuracy, better coverage, and a finer ground grid than achieved in the past are goals for OMI. After the scientific requirements are defined, three sets of subordinate requirements are derived. These are: the algorithm requirements, i.e. what the algorithms need in order to meet the scientific requirements; the instrument and calibration requirements, i.e. what has to be measured and how accurately in order to provide the quality of data necessary for deriving the data products; and the validation requirements, i.e. a strategy for how the OMI program will assure that its data products are valid in the atmosphere, at least to the required accuracy.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-18
... 29-30, 2011, to evaluate the validation status of the LUMI-CELL® (BG1Luc ER TA) test method...: http://iccvam.niehs.nih.gov/docs/endo_docs/EDPRPRept2011.pdf or by contacting NICEATM (see ADDRESSES). The report contains (1) the Panel's evaluation of the validation status of the test method and (2) the...
Integrated Disposal Facility FY 2016: ILAW Verification and Validation of the eSTOMP Simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freedman, Vicky L.; Bacon, Diana H.; Fang, Yilin
2016-05-13
This document describes two sets of simulations carried out to further verify and validate the eSTOMP simulator. In this report, a distinction is made between verification and validation, and the focus is on verifying eSTOMP through a series of published benchmarks on cementitious wastes, and validating eSTOMP based on a lysimeter experiment for the glassified waste. These activities are carried out within the context of a scientific view of validation that asserts that models can only be invalidated, and that model validation (and verification) is a subjective assessment.
Gravitational Biology Facility on Space Station: Meeting the needs of space biology
NASA Technical Reports Server (NTRS)
Allen, Katherine; Wade, Charles
1992-01-01
The Gravitational Biology Facility (GBF) is a set of generic laboratory equipment needed to conduct research on Space Station Freedom (SSF), focusing on Space Biology Program science (Cell and Developmental Biology and Plant Biology). The GBF will be functional from the earliest utilization flights through the permanent manned phase. Gravitational biology research will also make use of other Life Sciences equipment on the space station as well as existing equipment developed for the space shuttle. The facility equipment will be developed based on requirements derived from experiments proposed by the scientific community to address critical questions in the Space Biology Program. This requires that the facility have the ability to house a wide variety of species, various methods of observation, and numerous methods of sample collection, preservation, and storage. The selection of the equipment will be done by the members of a scientific working group (5 members representing cell biology, 6 developmental biology, and 6 plant biology) who also provide requirements to design engineers to ensure that the equipment will meet scientific needs. All equipment will undergo extensive ground based experimental validation studies by various investigators addressing a variety of experimental questions. Equipment will be designed to be adaptable to other space platforms. The theme of the Gravitational Biology Facility effort is to provide optimal and reliable equipment to answer the critical questions in Space Biology as to the effects of gravity on living systems.
Handling missing values in the MDS-UPDRS.
Goetz, Christopher G; Luo, Sheng; Wang, Lu; Tilley, Barbara C; LaPelle, Nancy R; Stebbins, Glenn T
2015-10-01
This study was undertaken to define the number of missing values permissible to render valid total scores for each Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS) part. To handle missing values, imputation strategies serve as guidelines to reject an incomplete rating or create a surrogate score. We tested a rigorous, scale-specific, data-based approach to handling missing values for the MDS-UPDRS. From two large MDS-UPDRS datasets, we sequentially deleted item scores, either consistently (same items) or randomly (different items) across all subjects. Lin's Concordance Correlation Coefficient (CCC) compared scores calculated without missing values with prorated scores based on sequentially increasing missing values. The maximal number of missing values retaining a CCC greater than 0.95 determined the threshold for rendering a valid prorated score. A second confirmatory sample was selected from the MDS-UPDRS international translation program. To provide valid part scores applicable across all Hoehn and Yahr (H&Y) stages when the same items are consistently missing, one missing item from Part I, one from Part II, three from Part III, but none from Part IV can be allowed. To provide valid part scores applicable across all H&Y stages when random item entries are missing, one missing item from Part I, two from Part II, seven from Part III, but none from Part IV can be allowed. All cutoff values were confirmed in the validation sample. These analyses are useful for constructing valid surrogate part scores for MDS-UPDRS when missing items fall within the identified threshold and give scientific justification for rejecting partially completed ratings that fall below the threshold. © 2015 International Parkinson and Movement Disorder Society.
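Lin's CCC, the retention criterion used above, rewards both correlation and absolute agreement between two score series. A minimal Python sketch on invented part scores (not MDS-UPDRS data):

import numpy as np

def lins_ccc(x, y):
    # CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical complete part scores versus prorated scores with one
# item imputed; a CCC above 0.95 would keep the proration valid.
complete = [32, 45, 18, 27, 39, 22, 30, 41]
prorated = [33, 44, 18, 28, 38, 23, 30, 42]
print(round(lins_ccc(complete, prorated), 3))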
ERIC Educational Resources Information Center
Morin, Olivier; Simonneaux, Laurence; Tytler, Russell
2017-01-01
Scientific expertise and outcomes often give rise to controversy. An educational response that equips students to take part in socioscientific discussions is the teaching of Socially Acute Questions (SAQs). Students engaging with SAQs need to engage with socioscientific reasoning, which involves reasoning with evidence from a variety of fields…
Towards a Trans-Disciplinary Methodology for a Game-Based Intervention Development Process
ERIC Educational Resources Information Center
Arnab, Sylvester; Clarke, Samantha
2017-01-01
The application of game-based learning adds play into educational and instructional contexts. Even though there is a lack of standard methodologies or formulaic frameworks to better inform game-based intervention development, there exist scientific and empirical studies that can serve as benchmarks for establishing scientific validity in terms of…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-17
... intends to post the status of the test orders, including recipients' responses, on the EPA Web site so... screening program using appropriate validated test systems and other scientifically relevant information to... chemicals. Scientific research and development services (NAICS code 5417), e.g., persons who conduct testing...
Measuring Graph Comprehension, Critique, and Construction in Science
ERIC Educational Resources Information Center
Lai, Kevin; Cabrera, Julio; Vitale, Jonathan M.; Madhok, Jacquie; Tinker, Robert; Linn, Marcia C.
2016-01-01
Interpreting and creating graphs plays a critical role in scientific practice. The K-12 Next Generation Science Standards call for students to use graphs for scientific modeling, reasoning, and communication. To measure progress on this dimension, we need valid and reliable measures of graph understanding in science. In this research, we designed…
ERIC Educational Resources Information Center
Hyatt, Keith J.; Stephenson, Jennifer; Carter, Mark
2009-01-01
Children with disabilities have frequently participated in various interventions before the efficacy of those practices was scientifically validated. When subsequent scientific evidence failed to support particular practices, those that had already made inroads into the educational arena frequently continued to be used. Given the current emphasis…
Two Cultures in Modern Science and Technology: For Safety and Validity Does Medicine Have to Update?
Becker, Robert E
2016-01-11
Two different scientific cultures go unreconciled in modern medicine. Each culture accepts that scientific knowledge and technologies are vulnerable to and easily invalidated by methods and conditions of acquisition, interpretation, and application. How these vulnerabilities are addressed separates the 2 cultures and potentially explains medicine's difficulties eradicating errors. A traditional culture, dominant in medicine, leaves error control in the hands of individual and group investigators and practitioners. A competing modern scientific culture accepts errors as inevitable, pernicious, and pervasive sources of adverse events throughout medical research and patient care too malignant for individuals or groups to control. Error risks to the validity of scientific knowledge and safety in patient care require systemwide programming able to support a culture in medicine grounded in tested, continually updated, widely promulgated, and uniformly implemented standards of practice for research and patient care. Experiences from successes in other sciences and industries strongly support the need for leadership from the Institute of Medicine's recommended Center for Patient Safety within the Federal Executive branch of government.
Couderc, Jean-Philippe
2011-01-01
We present an initiative supported by the National Heart, Lung, and Blood Institute and the Food and Drug Administration for the development of a repository containing continuous electrocardiographic information to be shared with the worldwide scientific community. We believe that sharing data reinforces open scientific inquiry. It encourages diversity of analysis and opinion while promoting new research and facilitating the education of new researchers. In this paper, we present the resources available in this initiative for the scientific community. We describe the set of ECG signals currently hosted and we briefly discuss the associated clinical information (medical history, disease, and study-specific endpoints) and the software tools we propose. Currently, the repository contains more than 250 GB of data from eight clinical studies including healthy individuals and cardiac patients. These data are available for the development, implementation and validation of technologies related to body-surface ECGs. To conclude, the Telemetric and Holter ECG Warehouse (THEW) is an initiative developed to benefit the scientific community and to advance the field of quantitative electrocardiography and cardiac safety. PMID:21097349
Best Practices: How to Evaluate Psychological Science for Use by Organizations.
Fiske, Susan T; Borgida, Eugene
2011-01-01
We discuss how organizations can evaluate psychological science for its potential usefulness to their own purposes. Common sense is often the default but inadequate alternative, and benchmarking supplies only collective hunches instead of validated principles. External validity is an empirical process of identifying moderator variables, not a simple yes-no judgment about whether lab results replicate in the field. Hence, convincing criteria must specify what constitutes high-quality empirical evidence for organizational use. First, we illustrate some theories and science that have potential use. Then we describe generally accepted criteria for scientific quality and consensus, starting with peer review for quality, and scientific agreement in forms ranging from surveys of experts to meta-analyses to National Research Council consensus reports. Linkages of basic science to organizations entail communicating expert scientific consensus, motivating managerial interest, and translating broad principles to specific contexts. We close with parting advice to both sides of the researcher-practitioner divide.
Validating concepts of mental disorder: precedents from the history of science.
Miller, Robert
2014-10-01
A fundamental issue in any branch of the natural sciences is validating the basic concepts for use in that branch. In psychiatry, this issue has not yet been resolved, and indeed, the proper nature of the problem has scarcely been recognised. As a result, psychiatry (or at least those parts of the discipline which aspire to scientific status) still cannot claim to be a part of scientific medicine, or to be incorporated within the common language of the natural sciences. While this creates difficulties within the discipline, and its standing in relation to other branches of medicine, it makes it an exciting place for "frontiersmen" (and women). This is one of the key growing points in the natural science tradition. In this essay, which moves from the early history of that tradition to today's debates in scientific psychiatry, I give my views about how these fundamental issues can move towards resolution.
SOIL Geo-Wiki: A tool for improving soil information
NASA Astrophysics Data System (ADS)
Skalský, Rastislav; Balkovic, Juraj; Fritz, Steffen; See, Linda; van der Velde, Marijn; Obersteiner, Michael
2014-05-01
Crowdsourcing is increasingly being used as a way of collecting data for scientific research, e.g. species identification, classification of galaxies and unravelling of protein structures. The WorldSoilProfiles.org database at ISRIC is a global collection of soil profiles, which have been 'crowdsourced' from experts. This system, however, requires contributors to have a priori knowledge about soils. Yet many soil parameters, such as stone content, soil depth or color, can be observed in the field without specific knowledge or equipment. By crowdsourcing this information over thousands of locations, the uncertainty in current soil datasets could be radically reduced, particularly in areas currently without information or where multiple interpretations are possible from different existing soil maps. Improved information on soils could benefit many research fields and applications. Better soil data could enhance assessments of soil ecosystem services (e.g. soil carbon storage) and facilitate improved process-based ecosystem modeling from local to global scales. Geo-Wiki is a crowdsourcing tool that was developed at IIASA for land cover validation using satellite imagery. Several branches are now available focused on specific aspects of land cover validation, e.g. validating cropland extent or urbanized areas. Geo-Wiki Pictures is a smartphone application for collecting land cover related information on the ground. The extension of Geo-Wiki to a mobile environment provides a tool for experts in land cover validation but is also a way of reaching the general public in the validation of land cover. Here we propose a Soil Geo-Wiki tool that builds on the existing functionality of the Geo-Wiki application, which will be largely designed for the collection and sharing of soil information. Two distinct applications are envisaged: an expert-oriented application mainly for scientific purposes, which will use soil science related language (e.g. WRB or any other global reference soil classification system) and allow experts to upload and share scientifically rigorous soil data; and an application oriented towards the general public, which will be more focused on describing well-observed, individual soil properties using simplified classification keys. The latter application will avoid the use of soil science related terminology and focus on the most useful soil parameters such as soil surface features, stone content, soil texture, soil plasticity, calcium carbonate presence, soil color, soil pH, soil repellency, and soil depth. Collection of soil and landscape pictures will also be supported in Soil Geo-Wiki to allow for comprehensive data collection while simultaneously allowing for quality checking by experts.
DoSSiER: Database of scientific simulation and experimental results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, Hans; Yarba, Julia; Genser, Krzystof
2016-08-01
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in json or xml exchange formats. In this paper, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
Difficulties in the neuroscience of creativity: jazz improvisation and the scientific method.
McPherson, Malinda; Limb, Charles J
2013-11-01
Creativity is a fundamental and remarkable human capacity, yet the scientific study of creativity has been limited by the difficulty of reconciling the scientific method and creative processes. We outline several hurdles and considerations that should be addressed when studying the cognitive neuroscience of creativity and suggest that jazz improvisation may be one of the most useful experimental models for the study of spontaneous creativity. More broadly, we argue that studying creativity in a way that is both scientifically and ecologically valid requires collaboration between neuroscientists and artists. © 2013 New York Academy of Sciences.
NASA Astrophysics Data System (ADS)
Boger, R. A.; Low, R.; Paull, S.; Anyamba, A.; Soebiyanto, R. P.
2017-12-01
Temperature and precipitation are important drivers of mosquito population dynamics, and a growing set of models have been proposed to characterize these relationships. Validation of these models, and development of broader theories across mosquito species and regions could nonetheless be improved by comparing observations from a global dataset of mosquito larvae with satellite-based measurements of meteorological variables. Citizen science data can be particularly useful for two such aspects of research into the meteorological drivers of mosquito populations: i) Broad-scale validation of mosquito distribution models and ii) Generation of quantitative hypotheses regarding changes to mosquito abundance and phenology across scales. The recently released GLOBE Observer Mosquito Habitat Mapper (GO-MHM) app engages citizen scientists in identifying vector taxa, mapping breeding sites and decommissioning non-natural habitats, and provides a potentially useful new tool for validating mosquito ubiquity projections based on the analysis of remotely sensed environmental data. Our early work with GO-MHM data focuses on two objectives: validating citizen science reports of Aedes aegypti distribution through comparison with accepted scientific data sources, and exploring the relationship between extreme temperature and precipitation events and subsequent observations of mosquito larvae. Ultimately the goal is to develop testable hypotheses regarding the shape and character of this relationship between mosquito species and regions.
Mikulak, Anna
2011-06-01
As differentiation within scientific disciplines increases, so does differentiation between the sciences and other ways of knowing. This distancing between 'scientific' and 'non-scientific' cultures reflects differences in what are considered valid and reliable approaches to acquiring knowledge and has played a major role in recent science-oriented controversies. Scientists' reluctance to actively engage in science communication, coupled with journalists' reliance on the norms of balance, conflict, and human interest in covering scientific issues, have combined to exacerbate public mistrust of science on issues like the measles-mumps-rubella (MMR) vaccine. The failure of effective communications between scientists and non-scientists has hindered the progress of both effective science and effective policy. In order to better bridge the gap between the 'scientific' and 'non-scientific' cultures, renewed efforts must be made to encourage substantive public engagement, with the ultimate goal of facilitating an open, democratic policy-making process.
NASA Astrophysics Data System (ADS)
Javier Romualdez, Luis
Scientific balloon-borne instrumentation offers an attractive, competitive, and effective alternative to space-borne missions when considering the overall scope, cost, and development timescale required to design and launch scientific instruments. In particular, the balloon-borne environment provides a near-space regime that is suitable for a number of modern astronomical and cosmological experiments, where the atmospheric interference suffered by ground-based instrumentation is negligible at stratospheric altitudes. This work is centered around the analytical strategies and implementation considerations for the attitude determination and control of SuperBIT, a scientific balloon-borne payload capable of meeting the strict sub-arcsecond pointing and image stability requirements demanded by modern cosmological experiments. Broadly speaking, the designed stability specifications of SuperBIT coupled with its observational efficiency, image quality, and accessibility rivals state-of-the-art astronomical observatories such as the Hubble Space Telescope. To this end, this work presents an end-to-end design methodology for precision pointing balloon-borne payloads such as SuperBIT within an analytical yet implementationally grounded context. Simulation models of SuperBIT are analytically derived to aid in pre-assembly trade-off and case studies that are pertinent to the dynamic balloon-borne environment. From these results, state estimation techniques and control methodologies are extensively developed, leveraging the analytical framework of simulation models and design studies. This pre-assembly design phase is physically validated during assembly, integration, and testing through implementation in real-time hardware and software, which bridges the gap between analytical results and practical application. SuperBIT attitude determination and control is demonstrated throughout two engineering test flights that verify pointing and image stability requirements in flight, where the post-flight results close the overall design loop by suggesting practical improvements to pre-design methodologies. Overall, the analytical and practical results presented in this work, though centered around the SuperBIT project, provide generically useful and implementationally viable methodologies for high precision balloon-borne instrumentation, all of which are validated, justified, and improved both theoretically and practically. As such, the continuing development of SuperBIT, built from the work presented in this thesis, strives to further the potential for scientific balloon-borne astronomy in the near future.
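The thesis abstract does not detail SuperBIT's actual controller; as a generic illustration of the kind of feedback loop used for attitude stabilization, a minimal discrete PID sketch in Python with invented gains and a toy plant:

class PID:
    # Minimal discrete PID loop; gains and rates are illustrative only
    # and do not reflect SuperBIT's actual control design.
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a 1-D pointing offset (arcsec) toward zero at 100 Hz.
pid = PID(kp=0.8, ki=0.2, kd=0.05, dt=0.01)
attitude = 5.0  # initial pointing offset, arcsec
for _ in range(500):
    attitude += pid.update(0.0, attitude) * 0.01  # toy plant response
print(f"residual offset: {attitude:.3f} arcsec")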
Oh, Deborah M; Kim, Joshua M; Garcia, Raymond E; Krilowicz, Beverly L
2005-06-01
There is increasing pressure, both from institutions central to the national scientific mission and from regional and national accrediting agencies, on natural sciences faculty to move beyond course examinations as measures of student performance and to instead develop and use reliable and valid authentic assessment measures for both individual courses and for degree-granting programs. We report here on a capstone course developed by two natural sciences departments, Biological Sciences and Chemistry/Biochemistry, which engages students in an important culminating experience, requiring synthesis of skills and knowledge developed throughout the program while providing the departments with important assessment information for use in program improvement. The student work products produced in the course (a written grant proposal and an oral summary of the proposal) provide a rich source of data regarding student performance on an authentic assessment task. The validity and reliability of the instruments and the resulting student performance data were demonstrated by collaborative review by content experts and a variety of statistical measures of interrater reliability, including percentage agreement, intraclass correlations, and generalizability coefficients. The high interrater reliability reported when the assessment instruments were used for the first time by a group of external evaluators suggests that the assessment process and instruments reported here will be easily adopted by other natural science faculty.
Development and validation of a nursing professionalism evaluation model in a career ladder system.
Kim, Yeon Hee; Jung, Young Sun; Min, Ja; Song, Eun Young; Ok, Jung Hui; Lim, Changwon; Kim, Kyunghee; Kim, Ji-Su
2017-01-01
The clinical ladder system categorizes degrees of nursing professionalism and the associated rewards, and is an important human resource tool for nursing management. We developed a model to evaluate nursing professionalism, which determines the clinical ladder system levels, and verified its validity. Data were collected using a clinical competence tool developed in this study, and existing methods such as the nursing professionalism evaluation tool, peer reviews, and face-to-face interviews used to evaluate promotions and verify the presented content in a medical institution. Reliability and convergent and discriminant validity of the clinical competence evaluation tool were verified using SmartPLS software. The validity of the model for evaluating overall nursing professionalism was also analyzed. Clinical competence was determined by five dimensions of nursing practice: scientific, technical, ethical, aesthetic, and existential. The structural model explained 66% of the variance. Clinical competence scales, peer reviews, and face-to-face interviews directly determined nursing professionalism levels. The evaluation system can be used for evaluating nurses' professionalism in actual medical institutions from a nursing practice perspective. A conceptual framework for establishing a human resources management system for nurses and a tool for evaluating nursing professionalism at medical institutions is provided.
Journalism Degree Motivations: The Development of a Scale
ERIC Educational Resources Information Center
Carpenter, Serena; Grant, August E.; Hoag, Anne
2016-01-01
Scientific knowledge should reflect valid, consistent measurement. It is argued research on scale development needs to be more systematic and prevalent. The intent of this article is to address scale development by creating and validating a construct that measures the underlying reasons why undergraduate students seek a degree in journalism, the…
ERIC Educational Resources Information Center
Wrigley, William J.; Emmerson, Stephen B.
2013-01-01
This study investigated ways to improve the quality of music performance evaluation in an effort to address the accountability imperative in tertiary music education. An enhanced scientific methodology was employed incorporating ecological validity and using recognized qualitative methods involving grounded theory and quantitative methods…
NASA Astrophysics Data System (ADS)
Widowati, A.; Anjarsari, P.; Zuhdan, K. P.; Dita, A.
2018-03-01
The challenges of the 21st century require innovative solutions. Education must be able to foster an understanding of science that leads to the formation of scientific literacy in learners. This research was conducted to produce a prototype science worksheet based on the Nature of Science (NoS) within an inquiry approach and to determine the product's effectiveness for developing scientific literacy. The study used a research and development design, drawing on the Four-D model and the Borg & Gall model. There were 4 main phases (define, design, develop, disseminate) and additional phases (preliminary field testing, main product revision, main field testing, and operational product revision). Research subjects were junior high school students in Yogyakarta. The instruments used included a product-validation questionnaire sheet and a scientific literacy test. The validation data were analyzed descriptively. The test results were analyzed with an N-gain score. The results showed that the worksheet applying NoS within an inquiry-based learning approach was judged appropriate, being rated excellent by experts and teachers, and that students' scientific literacy improved, reaching the high category with an N-gain score of 0.71, through use of the student worksheet based on NoS within an inquiry approach.
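The N-gain reported here is conventionally Hake's normalized gain, the fraction of the maximum possible improvement actually achieved; assuming that usual definition, in LaTeX notation:

% Hake's normalized gain from pre- and post-test scores
\[
  \langle g \rangle = \frac{S_{\mathrm{post}} - S_{\mathrm{pre}}}{S_{\mathrm{max}} - S_{\mathrm{pre}}}
\]

A gain of 0.71 exceeds the common 0.7 cutoff for the high category.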
Modelling guidelines--terminology and guiding principles
NASA Astrophysics Data System (ADS)
Refsgaard, Jens Christian; Henriksen, Hans Jørgen
2004-01-01
Some scientists argue, with reference to Popper's scientific philosophical school, that models cannot be verified or validated. Other scientists and many practitioners nevertheless use these terms, but with very different meanings. As a result of an increasing number of examples of model malpractice and mistrust of the credibility of models, several modelling guidelines have been elaborated in recent years with the aim of improving the quality of modelling studies. This gap between the lack of consensus experienced in the scientific community and the strongly perceived need for commonly agreed modelling guidelines is constraining the optimal use and benefits of models. This paper proposes a framework for quality assurance guidelines, including a consistent terminology and a foundation for a methodology bridging the gap between scientific philosophy and pragmatic modelling. A distinction is made between the conceptual model, the model code and the site-specific model. A conceptual model is subject to confirmation or falsification like scientific theories. A model code may be verified within given ranges of applicability and ranges of accuracy, but it can never be universally verified. Similarly, a model may be validated, but only with reference to site-specific applications and to pre-specified performance (accuracy) criteria. Thus, a model's validity will always be limited in terms of space, time, boundary conditions and types of application. This implies a continuous interaction between manager and modeller in order to establish suitable accuracy criteria and predictions associated with uncertainty analysis.
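The paper's notion of validating a site-specific model only against pre-specified performance criteria can be made concrete with a short Python sketch; the RMSE criterion and the groundwater-head numbers are invented for illustration:

import math

def validate_model(simulated, observed, rmse_criterion):
    # Declare the model conditionally valid only if it meets an
    # accuracy criterion agreed on before the test.
    rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed))
                     / len(observed))
    return rmse, rmse <= rmse_criterion

# Hypothetical groundwater heads (m): model output versus field data.
sim = [10.2, 11.1, 9.8, 10.5]
obs = [10.0, 11.4, 9.9, 10.2]
rmse, valid = validate_model(sim, obs, rmse_criterion=0.5)
print(f"RMSE = {rmse:.2f} m -> {'accepted' if valid else 'rejected'}")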
AIRS Retrieval Validation During the EAQUATE
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Smith, William L.; Cuomo, Vincenzo; Taylor, Jonathan P.; Barnet, Christopher D.; DiGirolamo, Paolo; Pappalardo, Gelsomina; Larar, Allen M.; Liu, Xu; Newman, Stuart M.
2006-01-01
Atmospheric and surface thermodynamic parameters retrieved with advanced hyperspectral remote sensors of Earth observing satellites are critical for weather prediction and scientific research. The retrieval algorithms and retrieved parameters from satellite sounders must be validated to demonstrate the capability and accuracy of both observation and data processing systems. The European AQUA Thermodynamic Experiment (EAQUATE) was conducted mainly for validation of the Atmospheric InfraRed Sounder (AIRS) on the AQUA satellite, but also for assessment of validation systems of both ground-based and aircraft-based instruments which will be used for other satellite systems such as the Infrared Atmospheric Sounding Interferometer (IASI) on the European MetOp satellite, the Cross-track Infrared Sounder (CrIS) from the NPOESS Preparatory Project and the following NPOESS series of satellites. Detailed inter-comparisons were conducted and presented using different retrieval methodologies: measurements from airborne ultraspectral Fourier transform spectrometers, aircraft in-situ instruments, dedicated dropsondes and radiosondes, and ground based Raman Lidar, as well as from the European Center for Medium range Weather Forecasting (ECMWF) modeled thermal structures. The results of this study not only illustrate the quality of the measurements and retrieval products but also demonstrate the capability of these validation systems which are put in place to validate current and future hyperspectral sounding instruments and their scientific products.
Aßmann, C
2016-06-01
Besides the large effort involved in field work, provision of valid databases requires statistical and informational infrastructure to enable long-term access to longitudinal data sets on height, weight and related issues. To foster use of longitudinal data sets within the scientific community, provision of valid databases has to address data-protection regulations. It is, therefore, of major importance to prevent the identification of individuals from publicly available databases. To reach this goal, one possible strategy is to provide a synthetic database to the public allowing for pretesting strategies for data analysis. The synthetic databases can be established using multiple imputation tools. Given the approval of the strategy, verification is based on the original data. Multiple imputation by chained equations is illustrated to facilitate provision of synthetic databases as it allows for capturing a wide range of statistical interdependencies. Also missing values, typically occurring within longitudinal databases for reasons of item non-response, can be addressed via multiple imputation when providing databases. The provision of synthetic databases using multiple imputation techniques is one possible strategy to ensure data protection, increase visibility of longitudinal databases and enhance the analytical potential.
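Multiple imputation by chained equations can be sketched with scikit-learn's IterativeImputer; this is an assumed tooling choice for illustration, not the author's implementation, and the height/weight data below are simulated:

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Simulated longitudinal records: height (cm) and weight (kg) at two
# waves, with item non-response coded as NaN.
rng = np.random.default_rng(1)
data = rng.multivariate_normal(
    mean=[150.0, 45.0, 162.0, 55.0],
    cov=[[25, 12, 20, 10],
         [12, 16, 10, 12],
         [20, 10, 30, 15],
         [10, 12, 15, 20]],
    size=500)
mask = rng.random(data.shape) < 0.1          # ~10% missing at random
incomplete = np.where(mask, np.nan, data)

# Chained-equations imputation; sample_posterior=True draws from the
# predictive distribution, so repeated fits yield multiple synthetic
# completions rather than one deterministic fill-in.
imputer = IterativeImputer(sample_posterior=True, random_state=2)
synthetic = imputer.fit_transform(incomplete)
print(np.isnan(synthetic).sum())  # 0 -> all gaps filled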
NASA Astrophysics Data System (ADS)
Walker, David; Forsythe, Nathan; Parkin, Geoff; Gowing, John
2016-07-01
This study shows how community-based hydrometeorological monitoring programmes can provide reliable high-quality measurements comparable to formal observations. Time series of daily rainfall, river stage and groundwater levels obtained by a local community in Dangila woreda, northwest Ethiopia, have passed accepted quality control standards and have been statistically validated against formal sources. In a region of low-density and declining formal hydrometeorological monitoring networks, a situation shared by much of the developing world, community-based monitoring can fill the observational void, providing improved spatial and temporal characterisation of rainfall, river flow and groundwater levels. Such time series data are invaluable in water resource assessment and management, particularly where, as shown here, gridded rainfall datasets provide gross under- or over-estimations of rainfall and where groundwater level data are non-existent. Discussions with the local community, during workshops held at the setup of the monitoring programme and since then, have demonstrated that the community has become engaged in the project and has benefited from a greater hydrological knowledge and sense of ownership of their resources. This increased understanding and empowerment is at the relevant scale required for effective community-based participatory management of shallow groundwater and river catchments.
Examining students' views about validity of experiments: From introductory to Ph.D. students
NASA Astrophysics Data System (ADS)
Hu, Dehui; Zwickl, Benjamin M.
2018-06-01
We investigated physics students' epistemological views on measurements and validity of experimental results. The roles of experiments in physics have been underemphasized in previous research on students' personal epistemology, and there is a need for a broader view of personal epistemology that incorporates experiments. An epistemological framework incorporating the structure, methodology, and validity of scientific knowledge guided the development of an open-ended survey. The survey was administered to students in algebra-based and calculus-based introductory physics courses, upper-division physics labs, and physics Ph.D. students. Within our sample, we identified several differences in students' ideas about validity and uncertainty in measurement. The majority of introductory students justified the validity of results through agreement with theory or with results from others. Alternatively, Ph.D. students frequently justified the validity of results based on the quality of the experimental process and repeatability of results. When asked about the role of uncertainty analysis, introductory students tended to focus on the representational roles (e.g., describing imperfections, data variability, and human mistakes). However, advanced students focused on the inferential roles of uncertainty analysis (e.g., quantifying reliability, making comparisons, and guiding refinements). The findings suggest that lab courses could emphasize a variety of approaches to establish validity, such as by valuing documentation of the experimental process when evaluating the quality of student work. In order to emphasize the role of uncertainty in an authentic way, labs could provide opportunities to iterate, make repeated comparisons, and make decisions based on those comparisons.
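The inferential use of uncertainty that the advanced students describe, making comparisons, can be illustrated with a minimal Python sketch; the measurements and coverage factor are invented:

def consistent(x1, u1, x2, u2, k=2.0):
    # Two values agree when their difference is within k times the
    # combined standard uncertainty (k ~ 2 gives roughly 95% coverage).
    return abs(x1 - x2) <= k * (u1 ** 2 + u2 ** 2) ** 0.5

# Hypothetical measurements of g (m/s^2) by two lab groups.
print(consistent(9.79, 0.03, 9.83, 0.02))  # True: agreement within 2 sigma
print(consistent(9.79, 0.01, 9.87, 0.01))  # False: discrepant results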
Real-time remote scientific model validation
NASA Technical Reports Server (NTRS)
Frainier, Richard; Groleau, Nicolas
1994-01-01
This paper describes flight results from the use of a CLIPS-based validation facility to compare analyzed data from a space life sciences (SLS) experiment to an investigator's preflight model. The comparison, performed in real time, either confirms or refutes the model and its predictions. This result then becomes the basis for continuing or modifying the investigator's experiment protocol. Typically, neither the astronaut crew in Spacelab nor the ground-based investigator team is able to react to experiment data in real time. This facility, part of a larger science advisor system called Principal Investigator in a Box, was flown on the space shuttle in October 1993. The software system aided the conduct of a human vestibular physiology experiment and was able to outperform humans in the tasks of data integrity assurance, data analysis, and scientific model validation. Of the twelve preflight hypotheses associated with the investigator's model, seven were confirmed and five were rejected or compromised.
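The flight system encoded its checks as CLIPS rules; purely to illustrate the confirm-or-refute logic, here is a hedged Python re-creation in which the hypothesis names and predicted ranges are invented:

```python
# Illustrative re-creation of confirming/refuting preflight hypotheses
# against in-flight data (not the CLIPS implementation flown in 1993).
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    low: float    # predicted lower bound for the measured quantity
    high: float   # predicted upper bound

def evaluate(hypotheses, measurements):
    """Label each hypothesis confirmed or refuted by the observed value."""
    results = {}
    for h in hypotheses:
        value = measurements[h.name]
        results[h.name] = "confirmed" if h.low <= value <= h.high else "refuted"
    return results

preflight = [Hypothesis("nystagmus_gain", 0.4, 0.7),
             Hypothesis("sway_amplitude_cm", 1.0, 3.0)]
inflight = {"nystagmus_gain": 0.55, "sway_amplitude_cm": 4.2}
print(evaluate(preflight, inflight))   # one confirmed, one refuted
```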
On the nature of psychodynamic science.
Brookes, Crittenden E
2004-01-01
In a previous article (Brookes, 2003), it was suggested that a science devoted to the subjective data obtained in psychodynamic therapy would require classification of psyche, the object of study, as a hypothetical construct, a classic concept in the philosophy of psychological science. The present article argues for and outlines a necessarily new and unique scientific paradigm for psychodynamics by further suggesting (1) that positivistic science is not appropriate to the phenomenological data which psychodynamic concepts explain, (2) that retroductive inferences are preferable to inductive or deductive inferences in handling such data, (3) that the concept of meaning as scientific validation is more suitable to psychodynamic science than are the concepts of measurement and operational validation, and (4) that meaningful validation is best elaborated through the application of the concepts of erroneous meaning, of synchronicity, and of numinosity. These and other ideas are briefly described, and will be elaborated further in subsequent papers.
ERIC Educational Resources Information Center
Roberts, Ros
2016-01-01
This article considers what might be taught to meet a widely held curriculum aim of students being able to understand research in a discipline. Expertise, which may appear as a "chain of practice," is widely held to be underpinned by networks of understanding. Scientific research expertise is considered from this perspective. Within…
ERIC Educational Resources Information Center
Phillipson, Robert
2011-01-01
Humphrey Tonkin's article (2011, this issue) refers to many relevant parameters in the current dominance of English in science. His conclusion that publication in English "is erroneously equated with scientific advancement in general" is a disturbingly valid generalization that ultimately reflects ignorance, prejudice, and myopia.…
Measurement of Global Precipitation: Introduction to International GPM Program
NASA Technical Reports Server (NTRS)
Hwang, P.
2004-01-01
The Global Precipitation Measurement (GPM) Program is an international cooperative effort whose objectives are to (a) obtain a better understanding of rainfall processes, and (b) make frequent rainfall measurements on a global basis. The National Aeronautics and Space Administration (NASA) of the United States and the Japan Aerospace Exploration Agency (JAXA) have entered into a cooperative agreement for the formulation and development of GPM. This agreement is a continuation of the partnership that developed the highly successful Tropical Rainfall Measuring Mission (TRMM), launched in November 1997; this mission continues to provide valuable scientific and meteorological information on rainfall and the associated processes. International collaboration on GPM from other space agencies has been solicited, and discussions regarding their participation are currently in progress. NASA has taken lead responsibility for the planning and formulation of GPM. Key elements of the Program to be provided by NASA include a Core satellite instrumented with a multi-channel microwave radiometer, a Ground Validation System and a ground-based Precipitation Processing System (PPS). JAXA will provide a Dual-frequency Precipitation Radar for installation on the Core satellite, as well as launch services. Other United States agencies and international partners may participate in a number of ways, such as providing rainfall measurements obtained from their own national space-borne platforms, providing local rainfall measurements to support the ground validation activities, or providing hardware or launch services for GPM constellation spacecraft.
Harding, Anna K.; Daston, George P.; Boyd, Glen R.; Lucier, George W.; Safe, Stephen H.; Stewart, Juarine; Tillitt, Donald E.; Van Der Kraak, Glen
2006-01-01
At the request of the U.S. Environmental Protection Agency (EPA) Office of Research and Development, a subcommittee of the Board of Scientific Counselors Executive Committee conducted an independent and open peer review of the Endocrine Disrupting Chemicals Research Program (EDC Research Program) of the U.S. EPA. The subcommittee was charged with reviewing the design, relevance, progress, scientific leadership, and resources of the program. The subcommittee found that the long-term goals and science questions in the EDC Program are appropriate and represent an understandable and solid framework for setting research priorities, representing a combination of problem-driven and core research. Long-term goal (LTG) 1, dealing with the underlying science surrounding endocrine disruptors, provides a solid scientific foundation for conducting risk assessments and making risk management decisions. LTG 2, dealing with defining the extent of the impact of endocrine-disrupting chemicals (EDCs), has shown greater progress on ecologic effects of EDCs compared with that on human health effects. LTG 3, which involves support of the Endocrine Disruptor Screening and Testing Program of the U.S. EPA, has two mammalian tests already through a validation program and soon available for use. Despite good progress, we recommend that the U.S. EPA a) strengthen their expertise in wildlife toxicology, b) expedite validation of the Endocrine Disruptors Screening and Testing Advisory Committee tests, c) continue dependable funding for the EDC Research Program, d) take a leadership role in the application of “omics” technologies to address many of the science questions critical for evaluating environmental and human health effects of EDCs, and e) continue to sponsor multidisciplinary intramural research and interagency collaborations.
Durham, Mary F; Knight, Jennifer K; Couch, Brian A
2017-01-01
The Scientific Teaching (ST) pedagogical framework provides various approaches for science instructors to teach in a way that more closely emulates how science is practiced by actively and inclusively engaging students in their own learning and by making instructional decisions based on student performance data. Fully understanding the impact of ST requires having mechanisms to quantify its implementation. While many useful instruments exist to document teaching practices, these instruments only partially align with the range of practices specified by ST, as described in a recently published taxonomy. Here, we describe the development, validation, and implementation of the Measurement Instrument for Scientific Teaching (MIST), a survey derived from the ST taxonomy and designed to gauge the frequencies of ST practices in undergraduate science courses. MIST showed acceptable validity and reliability based on results from 7767 students in 87 courses at nine institutions. We used factor analyses to identify eight subcategories of ST practices and used these categories to develop a short version of the instrument amenable to joint administration with other research instruments. We further discuss how MIST can be used by instructors, departments, researchers, and professional development programs to quantify and track changes in ST practices. © 2017 M. F. Durham et al. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
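To make the factor-analysis step concrete, here is a hedged sketch of grouping survey items into latent categories; the simulated responses and scikit-learn's FactorAnalysis stand in for MIST's actual data and estimation details, which may differ:

```python
# Hedged sketch: grouping survey items by their strongest factor loading.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_students, n_items, n_factors = 500, 24, 8

# Simulate Likert-style responses driven by 8 latent practice dimensions.
loadings = rng.normal(0, 1, (n_items, n_factors))
latent = rng.normal(0, 1, (n_students, n_factors))
responses = latent @ loadings.T + rng.normal(0, 0.5, (n_students, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(responses)

# Assign each item to the factor on which it loads most strongly.
assignment = np.abs(fa.components_).argmax(axis=0)
for factor in range(n_factors):
    items = np.where(assignment == factor)[0]
    print(f"factor {factor}: items {items.tolist()}")
```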
The SCEC Broadband Platform: Open-Source Software for Strong Ground Motion Simulation and Validation
NASA Astrophysics Data System (ADS)
Silva, F.; Goulet, C. A.; Maechling, P. J.; Callaghan, S.; Jordan, T. H.
2016-12-01
The Southern California Earthquake Center (SCEC) Broadband Platform (BBP) is a carefully integrated collection of open-source scientific software programs that can simulate broadband (0-100 Hz) ground motions for earthquakes at regional scales. The BBP can run earthquake rupture and wave propagation modeling software to simulate ground motions for well-observed historical earthquakes and to quantify how well the simulated broadband seismograms match the observed seismograms. The BBP can also run simulations for hypothetical earthquakes. In this case, users input an earthquake location and magnitude description, a list of station locations, and a 1D velocity model for the region of interest, and the BBP software then calculates ground motions for the specified stations. The BBP scientific software modules implement kinematic rupture generation, low- and high-frequency seismogram synthesis using wave propagation through 1D layered velocity structures, several ground motion intensity measure calculations, and various ground motion goodness-of-fit tools. These modules are integrated into a software system that provides user-defined, repeatable, calculation of ground-motion seismograms, using multiple alternative ground motion simulation methods, and software utilities to generate tables, plots, and maps. The BBP has been developed over the last five years in a collaborative project involving geoscientists, earthquake engineers, graduate students, and SCEC scientific software developers. The SCEC BBP software released in 2016 can be compiled and run on recent Linux and Mac OS X systems with GNU compilers. It includes five simulation methods, seven simulation regions covering California, Japan, and Eastern North America, and the ability to compare simulation results against empirical ground motion models (aka GMPEs). The latest version includes updated ground motion simulation methods, a suite of new validation metrics and a simplified command line user interface.
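The BBP ships its own goodness-of-fit modules; as a simplified illustration of the underlying idea, the sketch below scores a simulation by the mean and scatter of log residuals of an intensity measure across stations (the station values are invented):

```python
# Hedged sketch: log-residual goodness-of-fit for an intensity measure.
import numpy as np

def log_residual_gof(observed, simulated):
    """Mean and std of ln(obs/sim); a mean near 0 indicates little bias."""
    res = np.log(np.asarray(observed) / np.asarray(simulated))
    return float(res.mean()), float(res.std(ddof=1))

obs_pga = [0.21, 0.35, 0.12, 0.08, 0.27]   # g, hypothetical stations
sim_pga = [0.18, 0.40, 0.10, 0.09, 0.25]
bias, scatter = log_residual_gof(obs_pga, sim_pga)
print(f"model bias={bias:+.3f}, scatter={scatter:.3f} (natural-log units)")
```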
Choosing phenomenology as a guiding philosophy for nursing research.
Matua, Gerald Amandu
2015-03-01
To provide an overview of important methodological considerations that nurse researchers need to adhere to when choosing phenomenology as a guiding philosophy and research method. Phenomenology is a major philosophy and research method in the humanities, human sciences and arts disciplines with a central goal of describing people's experiences. However, many nurse researchers continue to grapple with methodological issues related to their choice of phenomenological method. The author conducted online and manual searches of relevant research books and electronic databases. Using an integrative method, peer-reviewed research and discussion papers published between January 1990 and December 2011 and listed in the CINAHL, Science Direct, PubMed and Google Scholar databases were reviewed. In addition, textbooks that addressed research methodologies such as phenomenology were used. Although phenomenology is widely used today to broaden understanding of human phenomena relevant to nursing practice, nurse researchers often fail to adhere to acceptable scientific and phenomenological standards. Cognisant of these challenges, researchers are expected to indicate in their work the focus of their investigations, designs, and approaches to collecting and analysing data. They are also expected to present their findings in an evocative and expressive manner. Choosing phenomenology requires researchers to understand it as a philosophy, including basic assumptions and tenets of phenomenology as a research method. This awareness enables researchers, especially novices, to make important methodological decisions, particularly those necessary to indicate the study's scientific rigour and phenomenological validity. This paper adds to the discussion of phenomenology as a guiding philosophy for nursing research. It aims to guide new researchers on important methodological decisions they need to make to safeguard their study's scientific rigour and phenomenological validity.
Farrohknia, Nasim; Castrén, Maaret; Ehrenberg, Anna; Lind, Lars; Oredsson, Sven; Jonsson, Håkan; Asplund, Kjell; Göransson, Katarina E
2011-06-30
Emergency department (ED) triage is used to identify patients' level of urgency and treat them based on their triage level. The global advancement of triage scales in the past two decades has generated considerable research on the validity and reliability of these scales. This systematic review aims to investigate the scientific evidence for published ED triage scales. The following questions are addressed: 1. Does assessment of individual vital signs or chief complaints affect mortality during the hospital stay or within 30 days after arrival at the ED? 2. What is the level of agreement between clinicians' triage decisions compared to each other or to a gold standard for each scale (reliability)? 3. How valid is each triage scale in predicting hospitalization and hospital mortality? A systematic search of the international literature published from 1966 through March 31, 2009 explored the British Nursing Index, Business Source Premier, CINAHL, Cochrane Library, EMBASE, and PubMed. Inclusion was limited to controlled studies of adult patients (≥ 15 years) visiting EDs for somatic reasons. Outcome variables were death in the ED or hospital and need for hospitalization (validity). Methodological quality and clinical relevance of each study were rated as high, medium, or low. The results from the studies that met the inclusion criteria and quality standards were synthesized applying the internationally developed GRADE system. Each conclusion was then assessed as having strong, moderately strong, limited, or insufficient scientific evidence. If studies were not available, this was also noted. We found ED triage scales to be supported, at best, by limited and often insufficient evidence. The ability of the individual vital signs included in the different scales to predict outcome is seldom, if at all, studied in the ED setting. The scientific evidence to assess interrater agreement (reliability) was limited for one triage scale and insufficient or lacking for all other scales. Two of the scales yielded limited scientific evidence, and one scale yielded insufficient evidence, on which to assess the risk of early death or hospitalization in patients assigned to the two lowest triage levels on a 5-level scale (validity).
NASA Astrophysics Data System (ADS)
Mendoza, A. M.; Bakshi, S.; Berrios, D.; Chulaki, A.; Evans, R. M.; Kuznetsova, M. M.; Lee, H.; MacNeice, P. J.; Maddox, M. M.; Mays, M. L.; Mullinix, R. E.; Ngwira, C. M.; Patel, K.; Pulkkinen, A.; Rastaetter, L.; Shim, J.; Taktakishvili, A.; Zheng, Y.
2012-12-01
The Community Coordinated Modeling Center (CCMC) was established to enhance basic solar-terrestrial research and to aid in the development of models for specifying and forecasting conditions in the space environment. In pursuit of this goal, CCMC has developed and provides the scientific community with a set of innovative tools, including the Integrated Space Weather Analysis (iSWA) web-based dissemination system for space weather information; the Runs-On-Request System, providing access to a unique collection of state-of-the-art solar and space physics models (unmatched anywhere in the world); advanced online visualization and analysis tools for more accurate interpretation of model results; standard data formats for simulation data downloads; and, recently, mobile apps (iPhone/Android) to view space weather data anywhere. The number of runs requested and the number of resulting scientific publications and presentations from the research community are not only an indication of the broad scientific usage of the CCMC and effective participation by space scientists and researchers, but also guarantee active collaboration and coordination amongst the space weather research community. In the course of its activities, CCMC also supports community-wide model validation challenges and research focus group projects for a broad range of programs, such as the multi-agency National Space Weather Program and NSF's CEDAR (Coupling, Energetics and Dynamics of Atmospheric Regions), GEM (Geospace Environment Modeling) and SHINE (Solar Heliospheric and INterplanetary Environment) programs. In addition to performing research and model development, CCMC supports space science education by hosting summer students through local universities; providing simulations in support of classroom programs such as the Heliophysics Summer School (with a student research contest) and CCMC workshops; training the next generation of junior scientists in space weather forecasting; and educating the general public about the importance and impacts of space weather effects. Although CCMC is organizationally composed of United States federal agencies, CCMC services are open to members of the international science community, and interagency and international collaboration is encouraged. In this poster, we provide an overview of using CCMC tools and services to support worldwide space weather scientific communities and networks.
The biomaterials conundrum in tissue engineering.
Williams, David F
2014-04-01
The development of biomaterials for use in tissue engineering processes has not so far followed a scientifically valid pathway; there have been no properly constituted specifications for these biomaterials, whose choice has often been dictated by the perceived need to comply with prior FDA approval for use of the materials in nontissue engineering applications. This short essay discusses the difficulties that have resulted in this approach and provides both conceptual and practical solutions for the future, based on sound principles of biocompatibility and the need to use tissue engineering templates that replicate the niche of the target cells.
The clinical case: validity, values and strategies to approach its writing.
Mellado, J M; Packer, C D
The case report is used to communicate the experience acquired by its authors with a patient. Although its relevance has been doubted, the case report deserves to be vindicated and contextualized. We review the case report's historical tradition, recent evolution and current formats. We describe its utility as a scientific tool, a continuing education resource and an aid to diagnosis. We reflect on the teaching potential its writing entails. Finally, we provide strategies to address the writing of a radiological case report. Copyright © 2017 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation
2014-11-20
Government Contract N00014-12-C-0653, Charles River Analytics. The new MAT system can be downloaded from our FTP site with a username and password that we provide. We also updated our web site to tell visitors about the new release and to tell them how to request a copy of the new software.
An ontology-based framework for bioinformatics workflows.
Digiampietri, Luciano A; Perez-Alcazar, Jose de J; Medeiros, Claudia Bauzer
2007-01-01
The proliferation of bioinformatics activities brings new challenges: how to understand and organise these resources, how to exchange and reuse successful experimental procedures, and how to provide interoperability among data and tools. This paper describes an effort toward these directions. It is based on combining research on ontology management, AI and scientific workflows to design, reuse and annotate bioinformatics experiments. The resulting framework supports automatic or interactive composition of tasks based on AI planning techniques and takes advantage of ontologies to support the specification and annotation of bioinformatics workflows. We validate our proposal with a prototype running on real data.
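As a toy illustration of composing tasks from ontology-style input/output annotations (not the paper's AI planner, which is more capable), consider this naive forward-chaining composer with invented tool names and data types:

```python
# Hedged sketch: greedy forward-chaining composition of annotated tools.
def compose(tools, available, goal):
    """tools: {name: (set(inputs), set(outputs))}. Returns an ordered plan."""
    available, plan = set(available), []
    while goal not in available:
        for name, (ins, outs) in tools.items():
            # Apply a tool whose inputs are satisfied and whose outputs
            # add something new; re-scan after each step.
            if name not in plan and ins <= available and not outs <= available:
                plan.append(name)
                available |= outs
                break
        else:
            raise ValueError("no plan reaches the goal")
    return plan

tools = {
    "align":    ({"dna_sequence"}, {"alignment"}),
    "tree":     ({"alignment"}, {"phylogeny"}),
    "annotate": ({"dna_sequence"}, {"gene_annotation"}),
}
print(compose(tools, {"dna_sequence"}, "phylogeny"))  # ['align', 'tree']
```

A real planner adds backtracking and cost-aware search; the greedy loop here only shows how type annotations make composition mechanical.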
Bigham, Blair; Welsford, Michelle
2015-05-01
The practice of emergency medicine (EM) has been intertwined with emergency medical services (EMS) for more than 40 years. In this commentary, we explore the practice of translating hospital-based evidence into the prehospital setting. We will challenge both EMS and EM dogma: bringing hospital care to patients in the field is not always better. In providing examples of therapies championed in hospitals that have failed to translate into the field, we will discuss the unique prehospital environment, and why evidence from the hospital setting cannot necessarily be translated to the prehospital field. Paramedicine is maturing so that the capability now exists to conduct practice-specific research that can inform best practices. Before translation from the hospital environment is implemented, evidence must be evaluated by people with expertise in three domains: critical appraisal, EM, and EMS. Scientific evidence should be assessed for: quality and bias; directness, generalizability, and validity to the EMS population; effect size and anticipated benefit from prehospital application; feasibility (including economic evaluation and human resource availability in the mobile environment); and patient and provider safety.
Iavicoli, S; Natali, E; Rondinone, B M; Castaldi, T; Persechino, B
2010-01-01
In recent years, stress has been recognized as a potential work-related risk factor. Unfortunately, work-related stress is a very delicate subject, especially because it is difficult to assess objectively and in broadly acceptable terms. In fact, work-related stress is a subjective personal response to a specific work environment, and is of multifactorial origin. In order to provide a practical tool for the assessment of work-related stress, the authors carried out a thorough benchmarking analysis of the various models adopted by EU countries to manage work-stress problems. As a result, the authors have chosen to apply and implement the Health and Safety Executive (HSE) Management Standards approach in the Italian context. In compliance with the European Framework Agreement signed on October 8, 2004, the HSE Management Standards call for the coordinated and integrated involvement of workers and safety personnel, and represent a valid assessment approach based on principles widely acknowledged in the scientific literature.
NASA Astrophysics Data System (ADS)
Smith, L. C.; Gleason, C. J.; Pietroniro, A.; Fiset, J. M.
2016-12-01
The NASA/CNES/CSA Surface Water and Ocean Topography (SWOT) satellite mission holds strong promise to be a transformational mission for land surface hydrology in much the same way that conventional radar altimetry transformed physical oceanography following the launch of Seasat in 1978. However, to achieve this potential, key pre-launch tasks remain, including 1) establishing benchmark monitoring sites, standardized measurement protocols, and international partnerships for quality calibration/validation of SWOT hydrology products; 2) demonstration that SWOT inundation area mapping for rivers, lakes, and wetlands is feasible; 3) demonstration that quality SWOT discharge retrievals for large rivers are feasible; and 4) demonstration of exciting new science from SWOT-like measurements. To these ends we present a new U.S.-Canada partnership to establish new SWOT calibration/validation sites, collect unique "SWOT-like" field and remote sensing datasets, conduct phenomenology studies of potentially important impacts (vegetation, sedimentary deposits, ice, and wind) on SWOT backscatter and water surface elevation (WSE) retrievals, and gain scientific knowledge of the impact of permafrost on the form, hydraulics, and water surface elevations of northern rivers and lakes. This U.S.-Canada partnership will establish scientifically interesting calibration/validation sites along three to four major Canadian rivers (current candidates: Saskatchewan, Athabasca, Arctic Red, Slave/Peace, or Ottawa Rivers). Field sites will be selected to optimize scientific impact, logistics, and location inside the nominal planned orbits of the SWOT Fast Sampling Phase.
Cocoa Bioactive Compounds: Significance and Potential for the Maintenance of Skin Health
Scapagnini, Giovanni; Davinelli, Sergio; Di Renzo, Laura; De Lorenzo, Antonino; Olarte, Hector Hugo; Micali, Giuseppe; Cicero, Arrigo F.; Gonzalez, Salvador
2014-01-01
Cocoa has a rich history in human use. Skin is prone to the development of several diseases, and the mechanisms in the pathogenesis of aged skin are still poorly understood. However, a growing body of evidence from clinical and bench research has begun to provide scientific validation for the use of cocoa-derived phytochemicals as an effective approach for skin protection. Although the specific molecular and cellular mechanisms of the beneficial actions of cocoa phytochemicals remain to be elucidated, this review will provide an overview of the current literature emphasizing potential cytoprotective pathways modulated by cocoa and its polyphenolic components. Moreover, we will summarize in vivo studies showing that bioactive compounds of cocoa may have a positive impact on skin health. PMID:25116848
Monitoring the southwestern Wyoming landscape—A foundation for management and science
Manier, Daniel J.; Anderson, Patrick J.; Assal, Timothy J.; Chong, Geneva W.; Melcher, Cynthia P.
2017-08-29
Natural resource monitoring involves repeated collections of resource condition data and analyses to detect possible changes and identify underlying causes of changes. For natural resource agencies, monitoring provides the foundation for management and science. Specifically, analyses of monitoring data allow managers to better understand effects of land-use and other changes on important natural resources and to achieve their conservation and management goals. Examples of natural resources monitored on public lands include wildlife habitats, plant productivity, animal movements and population trends, soil chemistry, and water quality and quantity. Broader definitions of monitoring also recognize the need for scientifically valid data to help support planning efforts and informed decisions, to develop adaptive management strategies, and to provide the means for evaluating management outcomes.
[TOPICS-MDS: a versatile resource for generating scientific and social knowledge for elderly care].
van den Brink, Danielle; Lutomski, Jennifer E; Qin, Li; den Elzen, Wendy P J; Kempen, Gertrudis I J M; Krabbe, Paul F M; Steyerberg, Ewout W; Muntinga, Maaike; Moll van Charante, Eric P; Bleijenberg, Nienke; Olde Rikkert, Marcel G M; Melis, René J F
2015-04-01
Developed as part of the National Care for the Elderly Programme (NPO), TOPICS-MDS is a uniform, national database on the health and wellbeing of older persons and caregivers who participated in NPO-funded projects. The TOPICS-MDS Consortium has gained extensive experience in constructing a standardized questionnaire to collect relevant health care data on quality of life, health services utilization, and informal care use. A proactive approach has been undertaken to ensure not only the standardization and validation of instruments but also the infrastructure for external data requests. Efforts have been made to promote scientifically and socially responsible use of TOPICS-MDS; data have been available for secondary use since early 2014. Through this data sharing initiative, researchers can explore health issues in a broader framework than may have been possible within individual NPO projects; this broader framework is highly relevant for influencing health policy. In this article, we provide an overview of the development and on-going progress of TOPICS-MDS. We further describe how information derived from TOPICS-MDS can be applied to facilitate future scientific innovations and public health initiatives to improve care for frail older persons and their caregivers.
Application of Metamorphic Testing to Supervised Classifiers
Xie, Xiaoyuan; Ho, Joshua; Kaiser, Gail; Xu, Baowen; Chen, Tsong Yueh
2010-01-01
Many applications in the field of scientific computing - such as computational biology, computational linguistics, and others - depend on Machine Learning algorithms to provide important core functionality to support solutions in the particular problem domains. However, it is difficult to test such applications because often there is no “test oracle” to indicate what the correct output should be for arbitrary input. To help address the quality of such software, in this paper we present a technique for testing the implementations of supervised machine learning classification algorithms on which such scientific computing software depends. Our technique is based on an approach called “metamorphic testing”, which has been shown to be effective in such cases. More importantly, we demonstrate that our technique not only serves the purpose of verification, but also can be applied in validation. In addition to presenting our technique, we describe a case study we performed on a real-world machine learning application framework, and discuss how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also discuss how our findings can be of use to other areas outside scientific computing, as well. PMID:21243103
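As a minimal example of the technique (our illustration, not the paper's exact test suite), the sketch below checks a metamorphic relation for k-nearest neighbors: permuting the order of the training examples must leave predictions unchanged, assuming no exact distance ties:

```python
# Hedged sketch: a metamorphic test for a supervised classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_test = rng.normal(size=(50, 5))

baseline = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train).predict(X_test)

# Follow-up input: the same training set in a different order. For kNN the
# neighbor set is order-independent (barring exact distance ties), so the
# relation acts as a partial oracle even without known-correct labels.
perm = rng.permutation(len(X_train))
followup = KNeighborsClassifier(n_neighbors=5).fit(
    X_train[perm], y_train[perm]).predict(X_test)

assert np.array_equal(baseline, followup), "metamorphic relation violated"
print("permutation relation holds on", len(X_test), "test points")
```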
Ciabatti, I; Marchesi, U; Froiio, A; Paternò, A; Ruggeri, M; Amaddeo, D
2005-08-01
The National Reference Centre for Genetically Modified Organisms (GMO) detection was established in 2002 within the Istituto Zooprofilattico Sperimentale Lazio e Toscana, with the aim of providing scientific and technical support to the National Health System and to the Ministry of Health within the scope of the regulation of GMO use in food and feed.The recently adopted EU legislation on GMOs (Regulation CE no. 1829/2003 and no. 1830/2003) introduced more rigorous procedures for the authorisation, labelling and analytical control of food and feed consisting, containing or derived from GMOs. The National Reference Centre, besides its institutional tasks as one of the laboratories of the Italian National Health System, collects and analyses data and results of the national official control of GMOs; carries out scientific research aimed at developing, improving, validating and harmonising detection and quantification methods, in cooperation with other scientific institutions, the Community Reference Laboratory and within the European Network of GMOs laboratories (ENGL); collaborates with the Ministry of Health in the definition of control programmes and promotes educational and training initiatives. Objectives defined for 2004-2006, activities in progress and goals already achieved are presented.
On the map: Nature and Science editorials.
Waaijer, Cathelijn J F; van Bochove, Cornelis A; van Eck, Nees Jan
2011-01-01
Bibliometric mapping of scientific articles based on keywords and technical terms in abstracts is now frequently used to chart scientific fields. In contrast, no significant mapping has been applied to the full texts of non-specialist documents. Editorials in Nature and Science are such non-specialist documents, reflecting the views of the two most read scientific journals on science, technology and policy issues. We use the VOSviewer mapping software to chart the topics of these editorials. A term map and a document map are constructed and clusters are distinguished in both of them. The validity of the document clustering is verified by a manual analysis of a sample of the editorials. This analysis confirms the homogeneity of the clusters obtained by mapping and augments the latter with further detail. As a result, the analysis provides reliable information on the distribution of the editorials over topics, and on differences between the journals. The most striking difference is that Nature devotes more attention to internal science policy issues and Science more to the political influence of scientists. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s11192-010-0205-9) contains supplementary material, which is available to authorized users.
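VOSviewer implements its own unified mapping-and-clustering technique; purely to illustrate the idea of grouping documents by their term profiles, here is a rough TF-IDF plus k-means sketch with invented editorial snippets:

```python
# Hedged sketch: clustering short texts by term profiles (illustrative only;
# not the VOSviewer algorithm used in the study).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

editorials = [
    "funding agencies must sustain basic research budgets",
    "peer review reform and research funding policy",
    "scientists should advise government on climate policy",
    "political influence of scientists in public debates",
]
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(editorials)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for doc, label in zip(editorials, km.labels_):
    print(label, doc)
```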
“Living proof” and the pseudo-science of alternative cancer treatments
Vickers, Andrew J.; Cassileth, Barrie R.
2008-01-01
Michael Gearin-Tosh was an English Professor at Oxford University who was diagnosed with multiple myeloma in 1994. He rejected conventional chemotherapeutic approaches and turned to a variety of alternative cancer treatments, particularly those involving nutritional supplements and dietary change. In 2002, Dr Gearin-Tosh published a book, “Living Proof”, recounting his experiences. The book gained significant public and media attention. One chapter was written by Carmen Wheatley, an advocate of alternative cancer treatments. In distinction to Dr Gearin-Tosh’s personal story, Dr Wheatley makes general claims about cancer treatment that are supposedly based on the research literature. This appears to provide scientific validation for a highly unconventional program of cancer care. However, the scientific case made for alternative cancer treatments in “Living Proof” does not bear serious examination. There are numerous inaccuracies, omissions and misrepresentations. Many important claims are either entirely unsubstantiated or not supported by the literature cited. In conclusion, a highly publicized book gives the impression that alternative cancer treatments are supported by scientific research. It also suggests that little progress has been made in the conventional treatment of myeloma. This is highly misleading and may lead to cancer patients rejecting effective treatments. PMID:18302909
The Dartmouth Database of Children’s Faces: Acquisition and Validation of a New Face Stimulus Set
Dalrymple, Kirsten A.; Gomez, Jesse; Duchaine, Brad
2013-01-01
Facial identity and expression play critical roles in our social lives. Faces are therefore frequently used as stimuli in a variety of areas of scientific research. Although several extensive and well-controlled databases of adult faces exist, few databases include children's faces. Here we present the Dartmouth Database of Children's Faces, a set of photographs of 40 male and 40 female Caucasian children between 6 and 16 years of age. Models posed eight facial expressions and were photographed from five camera angles under two lighting conditions. Models wore black hats and black gowns to minimize extra-facial variables. To validate the images, independent raters identified facial expressions, rated their intensity, and provided an age estimate for each model. The Dartmouth Database of Children's Faces is freely available for research purposes and can be downloaded by contacting the corresponding author by email. PMID:24244434
A new approach to assess movements and isometric postures of spine and trunk at the workplace.
Wunderlich, Max; Rüther, Thomas; Essfeld, Dieter; Erren, Thomas C; Piekarski, Claus; Leyk, Dieter
2011-08-01
Low back pain is regarded as the primary cause of occupational disability in many countries worldwide. However, there is a lack of valid assessment of kinematic spine and trunk parameters to provide further insight into occupational spine loads. A new 3-dimensional mobile measurement system (3D-SpineMoveGuard) was developed and evaluated by means of repeated dynamic and isometric trunk positions performed by 10 male and 10 female volunteers. The intraclass correlation coefficient indicates high test-retest reliability (r = 0.975-0.999) of the 3D-SpineMoveGuard. Moreover, the analysis of validity revealed almost identical results for the new measurement system. The evaluation study indicates good scientific quality for use in occupational task analyses. The objective assessment of indirectly measured spine and trunk kinematics will provide further insight to help predict and prevent job-related spine loads.
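For readers unfamiliar with the statistic, the test-retest intraclass correlation can be computed directly from a subjects-by-sessions matrix; the sketch below implements ICC(3,1) in the Shrout and Fleiss scheme on simulated angles (not the 3D-SpineMoveGuard data):

```python
# Hedged sketch: ICC(3,1), two-way mixed, consistency definition.
import numpy as np

def icc_3_1(ratings):
    """ratings: (n subjects, k sessions)."""
    ratings = np.asarray(ratings, float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between sessions
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

rng = np.random.default_rng(4)
true_angle = rng.uniform(10, 60, 20)                 # 20 subjects
sessions = np.column_stack([true_angle + rng.normal(0, 1, 20),
                            true_angle + rng.normal(0, 1, 20)])
print(f"ICC(3,1) = {icc_3_1(sessions):.3f}")         # close to 1
```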
A campaign to end animal testing: introducing the PETA International Science Consortium Ltd.
Stoddart, Gilly; Brown, Jeffrey
2014-12-01
The successful development and validation of non-animal techniques, or the analysis of existing data to satisfy regulatory requirements, provide no guarantee that this information will be used in place of animal experiments. In order to advocate for the replacement of animal-based testing requirements, the PETA International Science Consortium Ltd (PISC) liaises with industry, regulatory and research agencies to establish and promote clear paths to validation and regulatory use of non-animal techniques. PISC and its members use an approach that identifies, promotes and verifies the implementation of good scientific practices in place of testing on animals. Examples of how PISC and its members have applied this approach to minimise the use of animals for the Registration, Evaluation, Authorisation and Restriction of Chemicals regulation in the EU and testing of cosmetics on animals in India, are described. 2014 FRAME.
Combining accuracy assessment of land-cover maps with environmental monitoring programs
Stephen V. Stehman; Raymond L. Czaplewski; Sarah M. Nusser; Limin Yang; Zhiliang Zhu
2000-01-01
A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring...
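The error-matrix summaries at the core of such an assessment are straightforward to compute; a brief sketch with an invented matrix (overall accuracy, Cohen's kappa, user's and producer's accuracy):

```python
# Hedged sketch: standard error-matrix summaries for map accuracy assessment.
import numpy as np

def accuracy_summary(error_matrix):
    """Rows = map classes, columns = reference classes (sample counts)."""
    m = np.asarray(error_matrix, float)
    total = m.sum()
    overall = np.trace(m) / total
    chance = (m.sum(axis=1) * m.sum(axis=0)).sum() / total**2
    kappa = (overall - chance) / (1 - chance)
    users = np.diag(m) / m.sum(axis=1)      # commission errors by map class
    producers = np.diag(m) / m.sum(axis=0)  # omission errors by reference class
    return overall, kappa, users, producers

matrix = [[80, 5, 2],    # e.g., forest / grassland / water
          [6, 70, 4],
          [1, 3, 60]]
overall, kappa, users, producers = accuracy_summary(matrix)
print(f"overall={overall:.3f}, kappa={kappa:.3f}")
```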
The Development and Validation of a Learning Progression for Argumentation in Science
ERIC Educational Resources Information Center
Osborne, Jonathan F.; Henderson, J. Bryan; MacPherson, Anna; Szu, Evan; Wild, Andrew; Yao, Shi-Ying
2016-01-01
Given the centrality of argumentation in the Next Generation Science Standards, there is an urgent need for an empirically validated learning progression of this core practice and the development of high-quality assessment items. Here, we introduce a hypothesized three-tiered learning progression for scientific argumentation. The learning…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koh, J. H.; Ng, E. Y. K.; Robertson, Amy
As part of a collaboration of the National Renewable Energy Laboratory (NREL) and SWAY AS, NREL installed scientific wind, wave, and motion measurement equipment on the spar-type 1/6.5th-scale prototype SWAY floating offshore wind system. The equipment enhanced SWAY's data collection and allowed SWAY to verify the concept and NREL to validate a FAST model of the SWAY design in an open-water condition. Nanyang Technological University (NTU), in collaboration with NREL, assisted with the validation. This final report gives an overview of the SWAY prototype and NREL and NTU's efforts to validate a model of the system. The report provides a summary of the different software tools used in the study, the modeling strategies, and the development of a FAST model of the SWAY prototype wind turbine, including justification of the modeling assumptions. Because of uncertainty in system parameters and modeling assumptions due to the complexity of the design, several system properties were tuned to better represent the system and improve the accuracy of the simulations. Calibration was performed using data from a static equilibrium test and free-decay tests.
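One of the tuning steps mentioned, calibration against free-decay tests, typically reduces to estimating a damping ratio from successive oscillation peaks; here is a hedged sketch using the logarithmic decrement, with invented signal parameters rather than the SWAY data:

```python
# Hedged sketch: damping ratio from a free-decay test via log decrement.
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 120, 6000)                 # s
zeta_true, f_n = 0.05, 0.08                   # damping ratio, natural freq (Hz)
wn = 2 * np.pi * f_n
decay = np.exp(-zeta_true * wn * t) * np.cos(wn * np.sqrt(1 - zeta_true**2) * t)

peaks, _ = find_peaks(decay)                  # successive oscillation peaks
amps = decay[peaks]
delta = np.mean(np.log(amps[:-1] / amps[1:])) # logarithmic decrement
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
print(f"estimated damping ratio {zeta_est:.4f} (true {zeta_true})")
```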
Studying Sexual Aggression: A Review of the Evolution and Validity of Laboratory Paradigms
Davis, Kelly Cue; George, William H.; Nagayama Hall, Gordon C.; Parrott, Dominic J.; Tharp, Andra Teten; Stappenbeck, Cynthia A.
2018-01-01
Objective: Researchers have endeavored for decades to develop and implement experimental assessments of sexual aggression and its precursors to capitalize on the many scientific advantages offered by laboratory experiments, such as rigorous control of key variables and identification of causal relationships. The purpose of this review is to provide an overview of and commentary on the evolution of these laboratory-based methods. Conclusions: To date, two primary types of sexual aggression laboratory studies have been developed: those that involve behavioral analogues of sexual aggression and those that assess postulated precursors to sexually aggressive behavior. Although the study of sexual aggression in the laboratory is fraught with methodological challenges, validity concerns, and ethical considerations, advances in the field have resulted in greater methodological rigor, more precise dependent measures, and improved experimental validity, reliability, and realism. Because highly effective sexual aggression prevention strategies remain elusive, continued laboratory-based investigation of sexual aggression, coupled with translation of critical findings to the development and modification of sexual aggression prevention programs, remains an important task for the field. PMID:29675289
AERONET's Development and Contributions through Two Decades of Aerosol Research
NASA Astrophysics Data System (ADS)
Holben, B. N.
2016-12-01
The name Brent Holben has been synonymous with AERONET since its inception nearly two and a half decades ago. As with most scientific endeavors, progress has relied on collaboration, persistence and the occasional good idea at the right time. And so it is with AERONET. I will use this opportunity to trace the history of AERONET's development and the scientific achievements that we, as a community, have developed and profited from in our research and understanding of aerosols; describe measurements from this simple instrument, applied on a grand scale, that created new research opportunities; and, most importantly, acknowledge those who have been and continue to be key to AERONET's contributions to aerosol science. Born from a need to remove atmospheric effects in remotely sensed data in the 1980s, molded at a confluence of ideas and shaped as a public domain database, the program has grown from a prototype instrument in 1992, designed to routinely monitor biomass burning aerosol optical depth, to over 600 globally distributed sites providing near real-time aerosol properties for satellite validation, assimilation in models and access for numerous research projects. Although standardization and calibration are fundamental elements of scientific success, the scientific needs of the community drive new approaches for reprocessing archival data and making new measurements. I'll discuss these and glimpse into the future for AERONET.
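At the heart of each sun photometer retrieval is a Beer-Lambert inversion; the sketch below shows a deliberately simplified version (real AERONET processing also handles calibration transfer, ozone and other gas corrections, and refined airmass factors), with illustrative numbers:

```python
# Hedged sketch: simplified aerosol optical depth from sun photometer voltages.
import numpy as np

def aerosol_optical_depth(V, V0, solar_zenith_deg, tau_rayleigh):
    """tau_aer = ln(V0/V)/m - tau_Rayleigh, with plane-parallel airmass m."""
    m = 1.0 / np.cos(np.radians(solar_zenith_deg))   # crude airmass
    tau_total = np.log(V0 / V) / m
    return tau_total - tau_rayleigh

# Hypothetical 500 nm channel: V0 from a Langley calibration, V measured;
# 0.143 is roughly the sea-level Rayleigh optical depth at 500 nm.
aod = aerosol_optical_depth(V=7200, V0=10500, solar_zenith_deg=45,
                            tau_rayleigh=0.143)
print(f"AOD(500 nm) = {aod:.3f}")
```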
Validation of the Hospital Ethical Climate Survey for older people care.
Suhonen, Riitta; Stolt, Minna; Katajisto, Jouko; Charalambous, Andreas; Olson, Linda L
2015-08-01
The exploration of the ethical climate in care settings for older people is highlighted in the literature, and it has been associated with various aspects of clinical practice and nurses' jobs. However, ethical climate is seldom studied in the older people care context. Valid, reliable and feasible measures are needed for the measurement of ethical climate. This study aimed to test the reliability, validity, and sensitivity of the Hospital Ethical Climate Survey in healthcare settings for older people. A non-experimental cross-sectional study design was employed, and a survey using questionnaires, including the Hospital Ethical Climate Survey, was used for data collection. Data were analyzed using descriptive statistics, inferential statistics, and multivariable methods. Survey data were collected from a sample of nurses working in care settings for older people in Finland (N = 1513, n = 874, response rate = 58%) in 2011. This study was conducted according to good scientific inquiry guidelines, and ethical approval was obtained from the university ethics committee. The mean score for the Hospital Ethical Climate Survey total was 3.85 (standard deviation = 0.56). Cronbach's alpha was 0.92. Principal component analysis provided evidence for factorial validity. LISREL provided evidence for construct validity based on goodness-of-fit statistics. Pearson's correlations of 0.68-0.90 were found between the sub-scales and the Hospital Ethical Climate Survey. The Hospital Ethical Climate Survey was found to discriminate across care settings and proved to be a valid and reliable tool for measuring ethical climate in care settings for older people, sensitive enough to reveal variations across various clinical settings. The Finnish version of the Hospital Ethical Climate Survey, previously used mainly in hospital settings, proved to be a valid instrument for use in care settings for older people. Further studies are needed to analyze the factor structure and some items of the Hospital Ethical Climate Survey. © The Author(s) 2014.
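The reported Cronbach's alpha of 0.92 is simple to reproduce on any response matrix; a short sketch on simulated Likert-style data:

```python
# Hedged sketch: Cronbach's alpha for internal consistency.
import numpy as np

def cronbach_alpha(items):
    """items: (n respondents, k items). Classic tau-equivalent reliability."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(5)
trait = rng.normal(size=(300, 1))                       # latent climate score
responses = trait + rng.normal(0, 0.7, size=(300, 10))  # 10 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```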
ERIC Educational Resources Information Center
Yang, Fang-Ying; Chang, Cheng-Chieh; Chen, Li-Ling; Chen, Yi-Chun
2016-01-01
The main purpose of this study was to explore learners' beliefs about science reading and scientific epistemic beliefs, and how these beliefs were associated with their understanding of science texts. About 400 10th graders were involved in the development and validation of the Beliefs about Science Reading Inventory (BSRI). To find the effects…
ERIC Educational Resources Information Center
Martins, Isabel P.; Veiga, Luisa
2001-01-01
Argues that science education is a fundamental tool for global education and that it must be introduced in early years as a first step to a scientific culture for all. Describes testing validity of a didactic strategy for developing the learning of concepts, which was based upon an experimental work approach using everyday life contexts. (Author)
Should you recommend a low-carb, high-protein diet?
Tapper-Gardzina, Yvonne; Cotugna, Nancy; Vickery, Connie E
2002-04-01
Despite the billions of dollars spent each year on weight-loss diets and products, few individuals maintain their weight loss after initiating popular diet programs. One diet that has raised safety concerns among the scientific community is the low-carbohydrate, high-protein diet. This article evaluates the scientific validity of this diet so that clinicians can appropriately advise patients.
ERIC Educational Resources Information Center
Adeleke, A. A.; Joshua, E. O.
2015-01-01
Physics literacy plays a crucial part in global technological development, as several aspects of science and technology apply concepts and principles of physics in their operations. However, the acquisition of scientific literacy in physics in our society today falls short of the desirable standard. Therefore, this study focuses on…
Political orientations do not cancel out, and politics is not about truth.
Pfister, Hans-Rüdiger; Böhm, Gisela
2015-01-01
Duarte et al. propose that divergent political biases cancel each other out such that increasing political diversity will improve scientific validity. We argue that this idea is misguided. Their recommendations for improving political diversity in academia bear the danger of imposing political interests on science. Scientific scrutiny and criticism are the only viable remedies for bad science.
On Being a Scientist: A Guide to Responsible Conduct in Research--Third Edition
ERIC Educational Resources Information Center
National Academies Press, 2009
2009-01-01
The scientific research enterprise is built on a foundation of trust. Scientists trust that the results reported by others are valid. Society trusts that the results of research reflect an honest attempt by scientists to describe the world accurately and without bias. But this trust will endure only if the scientific community devotes itself to…
A systematic review of health care efficiency measures.
Hussey, Peter S; de Vries, Han; Romley, John; Wang, Margaret C; Chen, Susan S; Shekelle, Paul G; McGlynn, Elizabeth A
2009-06-01
To review and characterize existing health care efficiency measures in order to facilitate a common understanding about the adequacy of these methods. Review of the MedLine and EconLit databases for articles published from 1990 to 2008, as well as search of the "gray" literature for additional measures developed by private organizations. We performed a systematic review for existing efficiency measures. We classified the efficiency measures by perspective, outputs, inputs, methods used, and reporting of scientific soundness. We identified 265 measures in the peer-reviewed literature and eight measures in the gray literature, with little overlap between the two sets of measures. Almost all of the measures did not explicitly consider the quality of care. Thus, if quality varies substantially across groups, which is likely in some cases, the measures reflect only the costs of care, not efficiency. Evidence on the measures' scientific soundness was mostly lacking: evidence on reliability or validity was reported for six measures (2.3 percent) and sensitivity analyses were reported for 67 measures (25.3 percent). Efficiency measures have been subjected to few rigorous evaluations of reliability and validity, and methods of accounting for quality of care in efficiency measurement are not well developed at this time. Use of these measures without greater understanding of these issues is likely to engender resistance from providers and could lead to unintended consequences.
Dautzenberg, B; Adler, M; Garelik, D; Loubrieu, J F; Mathern, G; Peiffer, G; Perriot, J; Rouquet, R M; Schmitt, A; Underner, M; Urban, T
2017-02-01
A group of 11 French medical experts has developed guidelines, through a progressive Delphi consensus, on smoking management in the e-cigarette era. The lack of scientific data about e-cigarettes led the experts to set out recommendations based mainly on clinical practice, pending scientific validation. The validated smoking cessation treatments keep first place in the prevention and treatment of tobacco-induced damage. The e-cigarette, which a large proportion of smokers have tried, is a safer product than tobacco. The health professional must answer patients' questions about e-cigarettes: (1) A smoker who asks about e-cigarettes should receive information. Even if data are lacking, e-cigarettes carry much lower risks than tobacco. (2) A dual user is at high risk of returning to exclusive tobacco use; he should optimize other nicotine intakes by combining nicotine replacement therapy and/or optimizing the nicotine intake through the e-cigarette. (3) A smoker who wishes to use the e-cigarette in order to quit, with or without associated pharmacological treatment, should be accompanied and not discouraged. (4) A vaper who is tired of vaping should be accompanied in quitting. Specific guidelines are also provided for adolescents, pregnant women, patients during perioperative periods, and for pulmonary, cardiac and schizophrenic patients. Copyright © 2017 SPLF. Published by Elsevier Masson SAS. All rights reserved.
Cambronero Saiz, Belén; Ruiz Cantero, María Teresa; Papí Gálvez, Natalia
2012-01-01
To review the scientific literature on pharmaceutical advertising aimed at health professionals in order to determine whether gender bias has decreased and the quality of information in pharmaceutical advertising has improved over time. We performed a content analysis of original articles dealing with medical drug promotion (1998-2008), according to quality criteria such as (a) the number, validity and accessibility of bibliographic references provided in pharmaceutical advertising and (b) the extent to which gender representations were consistent with the prevalence of the diseases. Databases: PUBMED, Medline, Scopus, Sociological Abstract, Eric and LILACS. We reviewed 31 articles that analyzed advertising in medical journals from 1975-2005 and were published between 1998 and 2008. We found that the number of references used to support pharmaceutical advertising claims increased from 1975 but that 50% of these references were not valid. There was a tendency to depict men in paid productive roles, while women appeared inside the home or in non-occupational social contexts. Advertisements for psychotropic and cardiovascular drugs overrepresented women and men respectively. The use of bibliographic references increased between 1998 and 2008. However, representation of traditional male-female roles was similar in 1975 and 2005. Pharmaceutical advertisements may contribute to reinforcing the perception that certain diseases are associated with the most frequently portrayed sex. Copyright © 2011 SESPAS. Published by Elsevier Espana. All rights reserved.
Traditional uses, phytochemistry and pharmacology of wild banana (Musa acuminata Colla): A review.
Mathew, Nimisha Sarah; Negi, Pradeep Singh
2017-01-20
Musa acuminata, the wild species of banana, is a plant of the tropical and subtropical regions. Over the past few decades, the health benefits of M. acuminata have received much attention. All parts of the plant, including fruits, peel, pseudostem, corm, flowers, leaves, sap and roots, have found use in the treatment of many diseases in traditional medicine. The literature review indicated use of M. acuminata in the treatment of various diseases such as fever, cough, bronchitis, dysentery, allergic infections, sexually transmitted infections, and some non-communicable diseases. The reported pharmacological activities of M. acuminata include antioxidant, antidiabetic, immunomodulatory, hypolipidemic, anticancer, and antimicrobial (especially anti-HIV) activities. This review presents information on the phytochemicals and pharmacological studies that validate the traditional use of different parts of M. acuminata in various diseases and ailments. A comprehensive assessment of the biological activities of M. acuminata extracts is included, and the possible mechanisms and phytochemicals involved have been correlated to provide effective intervention strategies for preventing or managing diseases. A literature search was performed on M. acuminata using ethnobotanical textbooks, published articles in peer-reviewed journals, local magazines, unpublished materials, and scientific databases such as PubMed, Scopus, Web of Science, ScienceDirect, and Google Scholar. The Plant List, Promusa, Musalit, and the Integrated Taxonomic Information System (ITIS) databases were used to validate the scientific names and to provide information on the subspecies and cultivars of M. acuminata. The edible part of M. acuminata provides energy, vitamins and minerals, and all other parts of the plant have been used in the treatment of many diseases in traditional medicine. The rich diversity of phytochemicals present in them probably contributes to their beneficial effects and validates the use of M. acuminata plant parts by various tribes and ethnic groups across the geographical areas of the world. Pharmacological studies support the traditional uses of the plant, and some studies on animal models provide evidence of the efficacy of M. acuminata as a therapeutic agent. These observations suggest that M. acuminata plant parts possess pluripharmacological properties and can be used in designing potent therapeutic agents. However, individual bioactive constituents from different parts of this plant need further investigation to confirm the various pharmacological claims, and to explore the potential of M. acuminata in the development of drugs and use in functional foods. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Scientific goals of the Cooperative Multiscale Experiment (CME)
NASA Technical Reports Server (NTRS)
Cotton, William
1993-01-01
Mesoscale Convective Systems (MCS) form the focus of CME. Recent developments in global climate models, the urgent need to improve the representation of the physics of convection, radiation, the boundary layer, and orography, and the surge of interest in coupling hydrologic, chemistry, and atmospheric models of various scales, have emphasized the need for a broad interdisciplinary and multi-scale approach to understanding and predicting MCS's and their interactions with processes at other scales. The role of mesoscale systems in the large-scale atmospheric circulation, the representation of organized convection and other mesoscale flux sources in terms of bulk properties, and the mutually consistent treatment of water vapor, clouds, radiation, and precipitation, are all key scientific issues concerning which CME will seek to increase understanding. The manner in which convective, mesoscale, and larger scale processes interact to produce and organize MCS's, the moisture cycling properties of MCS's, and the use of coupled cloud/mesoscale models to better understand these processes, are also major objectives of CME. Particular emphasis will be placed on the multi-scale role of MCS's in the hydrological cycle and in the production and transport of chemical trace constituents. The scientific goals of the CME consist of the following: understand how the large and small scales of motion influence the location, structure, intensity, and life cycles of MCS's; understand processes and conditions that determine the relative roles of balanced (slow manifold) and unbalanced (fast manifold) circulations in the dynamics of MCS's throughout their life cycles; assess the predictability of MCS's and improve the quantitative forecasting of precipitation and severe weather events; quantify the upscale feedback of MCS's to the large-scale environment and determine interrelationships between MCS occurrence and variations in the large-scale flow and surface forcing; provide a data base for initialization and verification of coupled regional, mesoscale/hydrologic, mesoscale/chemistry, and prototype mesoscale/cloud-resolving models for prediction of severe weather, ceilings, and visibility; provide a data base for initialization and validation of cloud-resolving models, and for assisting in the fabrication, calibration, and testing of cloud and MCS parameterization schemes; and provide a data base for validation of four dimensional data assimilation schemes and algorithms for retrieving cloud and state parameters from remote sensing instrumentation.
Educational and Scientific Applications of Climate Model Diagnostic Analyzer
NASA Astrophysics Data System (ADS)
Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Kubar, T. L.; Zhang, J.; Bao, Q.
2016-12-01
Climate Model Diagnostic Analyzer (CMDA) is a web-based information system designed for the climate modeling and model analysis community to analyze climate data from models and observations. CMDA provides tools to diagnostically analyze climate data for model validation and improvement, and to systematically manage analysis provenance for sharing results with other investigators. CMDA utilizes cloud computing resources, multi-threaded computing, machine-learning algorithms, web service technologies, and provenance-supporting technologies to address technical challenges that the Earth science modeling and model analysis community faces in evaluating and diagnosing climate models. As the CMDA infrastructure and technology have matured, we have developed educational and scientific applications of CMDA. Educationally, CMDA has supported the summer school of the JPL Center for Climate Sciences for three years, starting in 2014. In the summer school, students work on group research projects for which CMDA provides datasets and analysis tools; each student is assigned a virtual machine with CMDA installed in Amazon Web Services. A provenance management system for CMDA was developed to keep track of students' usage of CMDA and to recommend datasets and analysis tools for their research topics. The provenance system also allows students to revisit their analysis results and share them with their group. Scientifically, we have developed several science use cases of CMDA covering various topics, datasets, and analysis types. Each use case is documented in terms of its scientific goal, the datasets and analysis tools used, the scientific results discovered, representative analysis outputs such as plots and data files, and a link to the exact analysis service call with all input arguments filled in. For example, one science use case is the evaluation of the NCAR CAM5 model against MODIS total cloud fraction: the analysis service used is the Difference Plot Service of Two Variables, and the datasets used are NCAR CAM total cloud fraction and MODIS total cloud fraction. The scientific highlight of this use case is that the CAM5 model overall does a fairly decent job of simulating total cloud cover, though it simulates too few clouds, especially near and offshore of the eastern ocean basins where low clouds are dominant.
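The abstract does not give the service-call syntax, so the following is a purely hypothetical sketch of what a parameterized analysis call of this kind might look like; the endpoint, parameter names and response field are all invented for illustration.

    import requests

    # hypothetical endpoint, parameter names and response field; the real CMDA
    # call syntax is not given in the abstract
    params = {
        "service": "twoVarDifferencePlot",
        "model": "NCAR_CAM5", "modelVar": "totalCloudFraction",
        "obs": "MODIS", "obsVar": "totalCloudFraction",
        "start": "2004-01", "end": "2005-12",
    }
    r = requests.get("https://cmda.example.org/svc/diffPlot", params=params, timeout=60)
    r.raise_for_status()
    print(r.json()["plotUrl"])        # assumed response field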
An open source workflow for 3D printouts of scientific data volumes
NASA Astrophysics Data System (ADS)
Loewe, P.; Klump, J. F.; Wickert, J.; Ludwig, M.; Frigeri, A.
2013-12-01
As the amount of scientific data continues to grow, researchers need new tools to help them visualize complex data. Immersive data visualisations are helpful, yet fail to provide the tactile feedback and the sensory feedback on spatial orientation that tangible objects provide. This gap in sensory feedback from virtual objects has led to the development of tangible representations of geospatial information to solve real-world problems. Examples are animated globes [1], interactive environments like tangible GIS [2], and on-demand 3D prints. The production of a tangible representation of a scientific data set is one step in a line of scientific thinking, leading from the physical world into scientific reasoning and back: the process starts with a physical observation, or with a data stream generated by an environmental sensor. This data stream is turned into a geo-referenced data set, which in turn is turned into a volume representation that is converted into command sequences for the printing device, leading to the creation of a 3D printout. As a last but crucial step, this new object has to be documented, linked to the associated metadata, and curated in long-term repositories to preserve its scientific meaning and context. The workflow to produce tangible 3D data prints from science data at the German Research Centre for Geosciences (GFZ) was implemented as software based on the Free and Open Source geoinformatics tools GRASS GIS and Paraview. The workflow was successfully validated in various application scenarios at GFZ using a RapMan printer to create 3D specimens of elevation models, geological underground models, ice-penetrating radar soundings for planetology, and space-time stacks for Tsunami model quality assessment. While these first pilot applications have demonstrated the feasibility of the overall approach [3], current research focuses on the provision of the workflow as Software as a Service (SaaS), thematic generalisation of information content, and long-term curation. [1] http://www.arcscience.com/systemDetails/omniTechnology.html [2] http://video.esri.com/watch/53/landscape-design-with-tangible-gis [3] Löwe et al. (2013), Geophysical Research Abstracts, Vol. 15, EGU2013-1544-1.
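The abstract names GRASS GIS and Paraview as the actual implementation; purely as a sketch of the core step from gridded data to printer-ready geometry, the following converts a toy elevation grid into an ASCII STL surface (surface only; a printable solid would also need side walls and a base).

    import numpy as np

    def dem_to_ascii_stl(z, cell=1.0, path="dem.stl"):
        """Triangulate a 2D elevation grid and write an ASCII STL surface."""
        ny, nx = z.shape
        with open(path, "w") as f:
            f.write("solid dem\n")
            for j in range(ny - 1):
                for i in range(nx - 1):
                    # the four corners of one grid cell, split into two triangles
                    p00 = (i * cell, j * cell, z[j, i])
                    p10 = ((i + 1) * cell, j * cell, z[j, i + 1])
                    p11 = ((i + 1) * cell, (j + 1) * cell, z[j + 1, i + 1])
                    p01 = (i * cell, (j + 1) * cell, z[j + 1, i])
                    for tri in ((p00, p10, p11), (p00, p11, p01)):
                        a, b, c = (np.array(v) for v in tri)
                        n = np.cross(b - a, c - a)
                        n = n / (np.linalg.norm(n) or 1.0)
                        f.write(f"  facet normal {n[0]:e} {n[1]:e} {n[2]:e}\n")
                        f.write("    outer loop\n")
                        for v in (a, b, c):
                            f.write(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}\n")
                        f.write("    endloop\n  endfacet\n")
            f.write("endsolid dem\n")

    # toy synthetic "hill" standing in for a real elevation model
    yy, xx = np.mgrid[0:50, 0:50]
    dem_to_ascii_stl(10.0 * np.exp(-((xx - 25.0)**2 + (yy - 25.0)**2) / 200.0))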
Benson, Sarah J; Lennard, Christopher J; Hill, David M; Maynard, Philip; Roux, Claude
2010-01-01
A significant amount of research has been conducted into the use of stable isotopes to assist in determining the origin of various materials. The research conducted in the forensic field shows the potential of isotope ratio mass spectrometry (IRMS) to provide a level of discrimination not achievable using traditional forensic techniques. Despite this research, there have been few, if any, publications addressing the validation and measurement uncertainty of the technique for forensic applications. This study, the first in a planned series, presents validation data for the measurement of bulk nitrogen isotope ratios in ammonium nitrate (AN) using a DELTA(plus)XP (Thermo Finnigan) IRMS instrument equipped with a ConFlo III interface and a FlashEA 1112 elemental analyzer (EA). Appropriate laboratory standards, analytical methods and correction calculations were developed and evaluated, and a validation protocol was developed in line with the guidelines provided by the National Association of Testing Authorities, Australia (NATA). Performance characteristics including accuracy, precision/repeatability, reproducibility/ruggedness, robustness, linear range, and measurement uncertainty were evaluated for the measurement of nitrogen isotope ratios in AN. AN (99.5%) and ammonium thiocyanate (99.99+%) were determined to be the most suitable laboratory standards and were calibrated against international standards (certified reference materials). All performance characteristics were within an acceptable range when potential uncertainties, including the manufacturer's stated uncertainty of the technique and standards, were taken into account. The experiments described in this article could be used as a model for validating other instruments for similar purposes. Later studies in this series will address the more general issue of demonstrating that the IRMS technique is scientifically sound and fit for purpose in the forensic explosives analysis field.
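A minimal sketch of the delta-notation and repeatability arithmetic underlying such a validation; the replicate ratios are invented, and the air-N2 reference ratio is the commonly cited delta-scale zero point.

    import numpy as np

    R_AIR = 0.0036765                  # 15N/14N of atmospheric N2, the delta-scale zero

    def delta15N(r_sample):
        """Measured 15N/14N ratio -> per-mil delta value relative to air N2."""
        return (r_sample / R_AIR - 1.0) * 1000.0

    # invented replicate ratio measurements of one ammonium nitrate standard
    ratios = np.array([0.0036841, 0.0036838, 0.0036845, 0.0036840, 0.0036843,
                       0.0036839, 0.0036844, 0.0036842, 0.0036840, 0.0036841])
    d = delta15N(ratios)
    sd = d.std(ddof=1)                 # repeatability, 1 standard deviation
    print(f"d15N = {d.mean():+.2f} permil, 1-sd repeatability = {sd:.2f} permil (n={len(d)})")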
Development and evaluation of the Expressions of Moral Injury Scale-Military Version.
Currier, Joseph M; Farnsworth, Jacob K; Drescher, Kent D; McDermott, Ryon C; Sims, Brook M; Albright, David L
2018-05-01
There is consensus that military personnel can encounter a far more diverse set of challenges than researchers and clinicians have historically appreciated. Moral injury (MI) represents an emerging construct to capture behavioural, social, and spiritual suffering that may transcend and overlap with mental health diagnoses (e.g., post-traumatic stress disorder and major depressive disorder). The Expressions of Moral Injury Scale-Military Version (EMIS-M) was developed to provide a reliable and valid means of assessing the warning signs of MI in military populations. Drawing on independent samples of veterans who had served in a war-zone environment, factor analytic results revealed 2 distinct factors related to MI expressions directed at self (9 items) and at others (8 items). These subscales demonstrated excellent internal consistency and temporal stability over a 6-month period. When compared with measures of post-traumatic stress disorder, major depressive disorder, and other theoretically relevant constructs (e.g., forgiveness, social support, moral emotions, and combat exposure), EMIS-M scores demonstrated strong convergent, divergent, and incremental validity. In addition, although structural equation modelling findings supported a possible general MI factor in Study 2, the patterns of associations for self- and other-directed expressions yielded evidence of differential validity with varying forms of forgiveness and combat exposure. As such, the EMIS-M provides a face-valid, psychometrically validated tool for assessing expressions of apparent MI subtypes in research and clinical settings. Looking ahead, the EMIS-M will hopefully advance the scientific understanding of MI while supporting innovation for clinicians to tailor evidence-based treatments and/or develop novel approaches for addressing MI in their work. Copyright © 2017 John Wiley & Sons, Ltd.
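Internal consistency of subscales like these is conventionally summarized with Cronbach's alpha; a minimal sketch on simulated Likert data (the EMIS-M responses themselves are not reproduced here).

    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, n_items) array of item scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var_sum = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var_sum / total_var)

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))                       # one underlying factor
    items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(200, 9))), 1, 5)
    print(f"alpha = {cronbach_alpha(items):.2f}")            # high for correlated items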
Dark Targets, Aerosols, Clouds and Toys
NASA Astrophysics Data System (ADS)
Remer, L. A.
2015-12-01
Today if you use the Thomson-Reuters Science Citation Index to search for "aerosol*", across all scientific disciplines and years, with no constraints, and sort by number of citations, you will find a 2005 paper published in the Journal of the Atmospheric Sciences in the top 20: "The MODIS Aerosol Algorithm, Products and Validation". Although I am the first author, there are in total 12 co-authors, each of whom made a significant intellectual contribution to the paper or to the algorithm, products and validation described. This paper, that algorithm, and those people lie at the heart of a lineage of scientists whose collaborations and linked individual pursuits have made a significant contribution to our understanding of radiative transfer and climate, of aerosol properties and the global aerosol system, of cloud physics and aerosol-cloud interaction, and of how to measure these parameters and maximize the science that can be obtained from those measurements. The 'lineage' had its origins across the globe, from Soviet Russia to France, from the U.S. to Israel, from the Himalayas, the Sahel, the metropolises of Sao Paulo, Taipei, and the cities of east and south Asia. It came together in the 1990s and 2000s at the NASA Goddard Space Flight Center, using cultural diversity as a strength to form a common culture of scientific creativity that continues to this day. The original algorithm has spawned daughter algorithms that are being applied to new satellite and airborne sensors. The original MODIS products have been fundamental to analyses as diverse as air quality monitoring and aerosol-cloud forcing. AERONET, designed originally for the needs of validation, is now its own thriving institution, and the lineage continues to push forward to provide new technology for the coming generations.
Leong, James; McAuslane, Neil; Walker, Stuart; Salek, Sam
2013-09-01
To explore the current status of and need for a universal benefit-risk framework for medicines in regulatory agencies and pharmaceutical companies. A questionnaire was developed and sent to 14 mature regulatory agencies and 24 major companies. The data were analysed using descriptive statistics, preceded for a minority of questions by manual grouping of the responses. The overall response rate was 82%, and study participants included key decision makers from agencies and companies. None used a fully quantitative system, and most companies preferred a qualitative method; the major reason this group gave for not using semi-quantitative or quantitative systems was the lack of a universal and scientifically validated framework. The main advantages of a benefit-risk framework were that it provided a systematic, standardised approach to decision-making and that it acted as a tool to enhance the quality of communication. It was also reported that a framework should be of value to both agencies and companies throughout the life cycle of a product. Participants believed that it is possible to develop an overarching benefit-risk framework and that relevant stakeholders should be involved in the development, validation and application of such a universal framework. The entire cohort indicated that common barriers to implementing a framework were resource limitations and the lack of both knowledge and a scientifically validated, acceptable framework. Stakeholders prefer a semi-quantitative, overarching framework that incorporates a toolbox of different methodologies, and a coordinating committee of relevant stakeholders should be formed to guide its development and implementation. Through engaging the stakeholders, these outcomes confirm the sentiment in favour of, and the need for, developing a universal benefit-risk assessment framework. Copyright © 2013 John Wiley & Sons, Ltd.
Whelan, Maurice; Eskes, Chantra
Validation is essential for the translation of newly developed alternative approaches to animal testing into tools and solutions suitable for regulatory applications. Formal approaches to validation have emerged over the past 20 years or so and although they have helped greatly to progress the field, it is essential that the principles and practice underpinning validation continue to evolve to keep pace with scientific progress. The modular approach to validation should be exploited to encourage more innovation and flexibility in study design and to increase efficiency in filling data gaps. With the focus now on integrated approaches to testing and assessment that are based on toxicological knowledge captured as adverse outcome pathways, and which incorporate the latest in vitro and computational methods, validation needs to adapt to ensure it adds value rather than hinders progress. Validation needs to be pursued both at the method level, to characterise the performance of in vitro methods in relation to their ability to detect any association of a chemical with a particular pathway or key toxicological event, and at the methodological level, to assess how integrated approaches can predict toxicological endpoints relevant for regulatory decision making. To facilitate this, more emphasis needs to be given to the development of performance standards that can be applied to classes of methods and integrated approaches that provide similar information. Moreover, the challenge of selecting the right reference chemicals to support validation needs to be addressed more systematically, consistently and in a manner that better reflects the state of the science. Above all, however, validation requires true partnership between the development and user communities of alternative methods and the appropriate investment of resources.
Validation of Aura Data: Needs and Implementation
NASA Astrophysics Data System (ADS)
Froidevaux, L.; Douglass, A. R.; Schoeberl, M. R.; Hilsenrath, E.; Kinnison, D. E.; Kroon, M.; Sander, S. P.
2003-12-01
We describe the needs for validation of the Aura scientific data products expected in 2004 and for several years thereafter, as well as the implementation plan to fulfill these needs. Many profiles of stratospheric and tropospheric composition are expected from the combination of four instruments aboard Aura, along with column abundances and aerosol and cloud information. The Aura validation working group and the Aura Project have been developing programs and collaborations that are expected to lead to a significant number of validation activities after the Aura launch (in early 2004). Spatial and temporal variability in the lower stratosphere and troposphere present challenges to the validation of Aura measurements, even where cloud contamination effects can be minimized. Data from ground-based networks, balloons, and other satellites will contribute in a major way to Aura data validation. In addition, plans are in place to obtain correlative data for special conditions, such as profiles of O3 and NO2 in polluted areas. Several aircraft campaigns planned for the 2004-2007 time period will provide additional tropospheric and lower stratospheric validation opportunities for Aura; some atmospheric science goals will be addressed by the eventual combination of these data sets. A team of "Aura liaisons" will assist in the dissemination of information about the various correlative measurements expected in the above timeframe, along with any needed protocols and agreements on data exchange and file formats. A data center is being established at the Goddard Space Flight Center to collect and distribute the various data files to be used in the validation of the Aura data.
Best Practices: How to Evaluate Psychological Science for Use by Organizations
Fiske, Susan T.; Borgida, Eugene
2014-01-01
We discuss how organizations can evaluate psychological science for its potential usefulness to their own purposes. Common sense is often the default but inadequate alternative, and benchmarking supplies only collective hunches instead of validated principles. External validity is an empirical process of identifying moderator variables, not a simple yes-no judgment about whether lab results replicate in the field. Hence, convincing criteria must specify what constitutes high-quality empirical evidence for organizational use. First, we illustrate some theories and science that have potential use. Then we describe generally accepted criteria for scientific quality and consensus, starting with peer review for quality, and scientific agreement in forms ranging from surveys of experts to meta-analyses to National Research Council consensus reports. Linkages of basic science to organizations entail communicating expert scientific consensus, motivating managerial interest, and translating broad principles to specific contexts. We close with parting advice to both sides of the researcher-practitioner divide. PMID:24478533
The NIH 3D Print Exchange: A Public Resource for Bioscientific and Biomedical 3D Prints.
Coakley, Meghan F; Hurt, Darrell E; Weber, Nick; Mtingwa, Makazi; Fincher, Erin C; Alekseyev, Vsevelod; Chen, David T; Yun, Alvin; Gizaw, Metasebia; Swan, Jeremy; Yoo, Terry S; Huyen, Yentram
2014-09-01
The National Institutes of Health (NIH) has launched the NIH 3D Print Exchange, an online portal for discovering and creating bioscientifically relevant 3D models suitable for 3D printing, to provide both researchers and educators with a trusted source to discover accurate and informative models. There are a number of online resources for 3D prints, but there is a paucity of scientific models, and the expertise required to generate and validate such models remains a barrier. The NIH 3D Print Exchange fills this gap by providing novel, web-based tools that empower users with the ability to create ready-to-print 3D files from molecular structure data, microscopy image stacks, and computed tomography scan data. The NIH 3D Print Exchange facilitates open data sharing in a community-driven environment, and also includes various interactive features, as well as information and tutorials on 3D modeling software. As the first government-sponsored website dedicated to 3D printing, the NIH 3D Print Exchange is an important step forward to bringing 3D printing to the mainstream for scientific research and education.
Cognition in multiple sclerosis
Benedict, Ralph; Enzinger, Christian; Filippi, Massimo; Geurts, Jeroen J.; Hamalainen, Paivi; Hulst, Hanneke; Inglese, Matilde; Leavitt, Victoria M.; Rocca, Maria A.; Rosti-Otajarvi, Eija M.; Rao, Stephen
2018-01-01
Cognitive decline is recognized as a prevalent and debilitating symptom of multiple sclerosis (MS), especially deficits in episodic memory and processing speed. The field aims to (1) incorporate cognitive assessment into standard clinical care and clinical trials, (2) utilize state-of-the-art neuroimaging to more thoroughly understand neural bases of cognitive deficits, and (3) develop effective, evidence-based, clinically feasible interventions to prevent or treat cognitive dysfunction, which are lacking. There are obstacles to these goals. Our group of MS researchers and clinicians with varied expertise took stock of the current state of the field, and we identify several important practical and theoretical challenges, including key knowledge gaps and methodologic limitations related to (1) understanding and measurement of cognitive deficits, (2) neuroimaging of neural bases and correlates of deficits, and (3) development of effective treatments. This is not a comprehensive review of the extensive literature, but instead a statement of guidelines and priorities for the field. For instance, we provide recommendations for improving the scientific basis and methodologic rigor for cognitive rehabilitation research. Toward this end, we call for multidisciplinary collaborations toward development of biologically based theoretical models of cognition capable of empirical validation and evidence-based refinement, providing the scientific context for effective treatment discovery. PMID:29343470
Claim validity of print advertisements found in otolaryngology journals.
Del Signore, Anthony; Murr, Andrew H; Lustig, Lawrence R; Platt, Michael P; Jalisi, Scharukh; Pratt, Loring W; Spiegel, Jeffrey H
2011-08-01
To evaluate the accuracy and scientific evidence supporting product claims made in print advertisements within otolaryngology journals. Cross-sectional survey with literature review and multiple-reviewer evaluation. Fifty claims made within 23 unique advertisements found in prominent otolaryngology journals were selected. References to support the claims were provided within the advertisements or obtained through direct request to the manufacturer. Five academic otolaryngologists with distinct training and geographic practice locations reviewed the claims and supporting evidence. Each physician had substantial experience as an editorial reviewer, and several had specific training in research methodology and scientific methods. Of the 50 claims, only 14 (28%) were determined to be based on strong evidence. Of the supporting references, 32 (76%) were published sources, while 3 (7%) were package inserts and/or prescribing information. Interobserver agreement among the reviewers was poor overall; however, when 3 or more of the reviewers were in agreement, only 10% of the claims were deemed correct (n = 5). Reviewers also noted that only 6% of the claims were considered well supported (n = 3). Advertisers make claims that appear in respectable journals, but more than half of the claims reviewed were not supported by the provided reference materials.
Empirical agreement in model validation.
Jebeile, Julie; Barberousse, Anouk
2016-04-01
Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion, as a model can be adjusted to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate the uses of empirical agreement within the process of model validation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bibliometrics for Social Validation.
Hicks, Daniel J
2016-01-01
This paper introduces a bibliometric, citation network-based method for assessing the social validation of novel research, and applies this method to the development of high-throughput toxicology research at the US Environmental Protection Agency. Social validation refers to the acceptance of novel research methods by a relevant scientific community; it is formally independent of the technical validation of methods, and is frequently studied in history, philosophy, and social studies of science using qualitative methods. The quantitative methods introduced here find that high-throughput toxicology methods are spread throughout a large and well-connected research community, which suggests high social validation. Further assessment of social validation involving mixed qualitative and quantitative methods are discussed in the conclusion. PMID:28005974
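A toy sketch of the kind of citation-network statistic such an assessment might compute, using networkx; the graph and the connectivity measure are illustrative, not the paper's actual analysis.

    import networkx as nx

    # toy citation network: an edge (a, b) means paper a cites paper b
    G = nx.DiGraph()
    G.add_edges_from([
        ("study_A", "method_paper"), ("study_B", "method_paper"),
        ("study_C", "study_A"), ("study_C", "method_paper"),
        ("method_paper", "precursor"), ("unrelated_1", "unrelated_2"),
    ])

    # uptake of the novel method and connectedness of the community citing it
    citers = set(G.predecessors("method_paper"))
    giant = max(nx.weakly_connected_components(G), key=len)
    print(f"direct citers: {len(citers)}; "
          f"fraction inside the main component: {len(citers & giant) / len(citers):.2f}")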
Models, validation, and applied geochemistry: Issues in science, communication, and philosophy
Nordstrom, D. Kirk
2012-01-01
Models have become so fashionable that many scientists and engineers cannot imagine working without them. The predominant use of computer codes to execute model calculations has blurred the distinction between code and model. The recent controversy regarding model validation has brought into question what we mean by a ‘model’ and by ‘validation.’ It has become apparent that the usual meaning of validation may be common in engineering practice and seems useful in legal practice but it is contrary to scientific practice and brings into question our understanding of science and how it can best be applied to such problems as hazardous waste characterization, remediation, and aqueous geochemistry in general. This review summarizes arguments against using the phrase model validation and examines efforts to validate models for high-level radioactive waste management and for permitting and monitoring open-pit mines. Part of the controversy comes from a misunderstanding of ‘prediction’ and the need to distinguish logical from temporal prediction. Another problem stems from the difference in the engineering approach contrasted with the scientific approach. The reductionist influence on the way we approach environmental investigations also limits our ability to model the interconnected nature of reality. Guidelines are proposed to improve our perceptions and proper utilization of models. Use of the word ‘validation’ is strongly discouraged when discussing model reliability.
McAuliff, Bradley D; Kovera, Margaret Bull; Nunez, Gabriel
2009-06-01
This study examined the ability of jury-eligible community members (N = 248) to detect internal validity threats in psychological science presented during a trial. Participants read a case summary in which an expert testified about a study that varied in internal validity (valid, missing control group, confound, and experimenter bias) and ecological validity (high, low). Ratings of expert evidence quality and expert credibility were higher for the valid versus missing control group versions only. Internal validity did not influence verdict or ratings of plaintiff credibility and no differences emerged as a function of ecological validity. Expert evidence quality, expert credibility, and plaintiff credibility were positively correlated with verdict. Implications for the scientific reasoning literature and for trials containing psychological science are discussed.
GEOSCOPE network : continuous recording over 29 years
NASA Astrophysics Data System (ADS)
Pardo, C.; Bonaime, S.; Stutzmann, E.; Maggi, A.; Geoscope Team
2011-12-01
The GEOSCOPE observatory was created in 1982 and has now provided the scientific community with nearly 30 years of continuous broadband recordings. The 33 GEOSCOPE stations are installed in 19 countries, across all continents and on islands throughout the oceans. They are equipped with three-component very-broadband seismometers (STS1 or STS2) and 24- or 26-bit digitizers. Stations are progressively being equipped with warpless base plates, which decrease long-period noise on the horizontal components by up to 15 dB. In most stations, a pressure gauge and a thermometer are also installed. In 2011, three stations were upgraded: COYC and PEL in Chile and ATD in Djibouti. Two new stations were installed in Ivituut (station IVI, Greenland, GLISN project) and in the Vanuatu islands (station SANVU). Currently, 27 stations transmit data in real or near-real time to the GEOSCOPE Data Center and to tsunami warning centers. Continuous data from all stations are collected by the GEOSCOPE Data Center in Paris, where they are validated, archived and made available to the international scientific community. Data are freely available to users via different interfaces according to data type (http://geoscope.ipgp.fr): continuous data in real time through the SeedLink protocol, validated continuous waveforms through the NetDC system and the Data Handler Interface, and a selection of large-earthquake seismograms through the GEOSCOPE web portal. GEOSCOPE/IPGP is one of the four primary nodes of EIDA (European Integrated Data Archive), and the data are also accessible through http://www.seismicportal.eu. Noise levels for the last 20 years of continuous data are likewise available via the GEOSCOPE web site. Stations in both hemispheres show stronger noise amplitude during local winter, except for station DRV, which is surrounded by sea ice in winter.
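Validated waveforms of this kind can now also be retrieved programmatically; a minimal sketch using ObsPy's FDSN client, which postdates the NetDC-era access described above (station, location and channel codes are illustrative).

    from obspy import UTCDateTime
    from obspy.clients.fdsn import Client

    # GEOSCOPE is FDSN network code "G", served by IPGP; the station, location
    # and channel codes below are illustrative examples
    client = Client("IPGP")
    t0 = UTCDateTime("2011-03-11T05:46:24")     # 2011 Tohoku earthquake origin time
    st = client.get_waveforms(network="G", station="SSB", location="00",
                              channel="LHZ", starttime=t0, endtime=t0 + 3600)
    print(st)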
Fortier, Isabel; Burton, Paul R; Robson, Paula J; Ferretti, Vincent; Little, Julian; L’Heureux, Francois; Deschênes, Mylène; Knoppers, Bartha M; Doiron, Dany; Keers, Joost C; Linksted, Pamela; Harris, Jennifer R; Lachance, Geneviève; Boileau, Catherine; Pedersen, Nancy L; Hamilton, Carol M; Hveem, Kristian; Borugian, Marilyn J; Gallagher, Richard P; McLaughlin, John; Parker, Louise; Potter, John D; Gallacher, John; Kaaks, Rudolf; Liu, Bette; Sprosen, Tim; Vilain, Anne; Atkinson, Susan A; Rengifo, Andrea; Morton, Robin; Metspalu, Andres; Wichmann, H Erich; Tremblay, Mark; Chisholm, Rex L; Garcia-Montero, Andrés; Hillege, Hans; Litton, Jan-Eric; Palmer, Lyle J; Perola, Markus; Wolffenbuttel, Bruce HR; Peltonen, Leena; Hudson, Thomas J
2010-01-01
Background Vast sample sizes are often essential in the quest to disentangle the complex interplay of the genetic, lifestyle, environmental and social factors that determine the aetiology and progression of chronic diseases. The pooling of information between studies is therefore of central importance to contemporary bioscience. However, there are many technical, ethico-legal and scientific challenges to be overcome if an effective, valid, pooled analysis is to be achieved. Perhaps most critically, any data that are to be analysed in this way must be adequately ‘harmonized’. This implies that the collection and recording of information and data must be done in a manner that is sufficiently similar in the different studies to allow valid synthesis to take place. Methods This conceptual article describes the origins, purpose and scientific foundations of the DataSHaPER (DataSchema and Harmonization Platform for Epidemiological Research; http://www.datashaper.org), which has been created by a multidisciplinary consortium of experts that was pulled together and coordinated by three international organizations: P3G (Public Population Project in Genomics), PHOEBE (Promoting Harmonization of Epidemiological Biobanks in Europe) and CPT (Canadian Partnership for Tomorrow Project). Results The DataSHaPER provides a flexible, structured approach to the harmonization and pooling of information between studies. Its two primary components, the ‘DataSchema’ and ‘Harmonization Platforms’, together support the preparation of effective data-collection protocols and provide a central reference to facilitate harmonization. The DataSHaPER supports both ‘prospective’ and ‘retrospective’ harmonization. Conclusion It is hoped that this article will encourage readers to investigate the project further: the more the research groups and studies are actively involved, the more effective the DataSHaPER programme will ultimately be. PMID:20813861
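A minimal sketch of what retrospective harmonization onto a common variable can look like; the DataSchema coding and both study formats are invented for illustration.

    SCHEMA = {"never": 0, "former": 1, "current": 2}   # hypothetical DataSchema coding

    def harmonize_study_a(code):
        # study A recorded smoking status as a single letter
        return SCHEMA[{"N": "never", "F": "former", "C": "current"}[code]]

    def harmonize_study_b(packs_per_day, ever_smoked):
        # study B recorded current consumption plus a lifetime flag
        if packs_per_day > 0:
            return SCHEMA["current"]
        return SCHEMA["former"] if ever_smoked else SCHEMA["never"]

    # the two encodings now map onto one pooled variable
    assert harmonize_study_a("F") == harmonize_study_b(0, True) == SCHEMA["former"]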
Criminal profiling as expert witness evidence: The implications of the profiler validity research.
Kocsis, Richard N; Palermo, George B
The use and development of the investigative tool colloquially known as criminal profiling has steadily increased over the past five decades throughout the world. Coupled with this growth has been a diversification in the suggested range of applications for this technique. Possibly the most notable of these has been the attempted transition of the technique from a tool intended to assist police investigations into a form of expert witness evidence admissible in legal proceedings. Whilst case law in various jurisdictions has considered with mutual disinclination the evidentiary admissibility of criminal profiling, a disjunction has evolved between these judicial examinations and the scientifically vetted research testing the accuracy (i.e., validity) of the technique. This article offers an analysis of the research directly testing the validity of the criminal profiling technique and the extant legal principles considering its evidentiary admissibility. This analysis reveals that research findings concerning the validity of criminal profiling are surprisingly compatible with the extant legal principles. The overall conclusion is that a discrete form of crime behavioural analysis is supported by the profiler validity research and could be regarded as potentially admissible expert witness evidence. Finally, a number of theoretical connections are also identified concerning the skills and qualifications of individuals who may feasibly provide such expert testimony. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, P., E-mail: paolo.ricci@epfl.ch; Riva, F.; Theiler, C.
In the present work, a Verification and Validation procedure is presented and applied showing, through a practical example, how it can contribute to advancing our physics understanding of plasma turbulence. Bridging the gap between plasma physics and other scientific domains, in particular the computational fluid dynamics community, a rigorous methodology for the verification of a plasma simulation code is presented, based on the method of manufactured solutions. This methodology assesses that the model equations are correctly solved, within the order of accuracy of the numerical scheme. The technique to carry out a solution verification is described to provide a rigorous estimate of the uncertainty affecting the numerical results. A methodology for plasma turbulence code validation is also discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The Verification and Validation methodology is then applied to the study of plasma turbulence in the basic plasma physics experiment TORPEX [Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulations carried out with the GBS code [Ricci et al., Plasma Phys. Controlled Fusion 54, 124047 (2012)]. The validation procedure allows progress in the understanding of the turbulent dynamics in TORPEX, by pinpointing the presence of a turbulent regime transition, due to the competition between the resistive and ideal interchange instabilities.
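The method of manufactured solutions can be illustrated on a far simpler model than a plasma turbulence code; the sketch below (a minimal illustration, not the GBS verification itself) manufactures u(x,t) = sin(pi x) exp(-t) for the 1D heat equation u_t = u_xx + S, derives the source S that makes it exact, and checks that a simple explicit finite-difference solver recovers the scheme's nominal second-order accuracy in dx.

    import numpy as np

    # Manufactured solution for u_t = u_xx + S on x in [0, 1]:
    # choose u_m(x, t) = sin(pi x) exp(-t); then
    # S = u_m_t - u_m_xx = (pi^2 - 1) sin(pi x) exp(-t).
    def solve(nx, t_end=0.1):
        dx = 1.0 / (nx - 1)
        dt = 0.25 * dx**2                        # stable explicit time step
        x = np.linspace(0.0, 1.0, nx)
        u = np.sin(np.pi * x)                    # u_m at t = 0
        t = 0.0
        while t < t_end - 1e-12:
            S = (np.pi**2 - 1.0) * np.sin(np.pi * x) * np.exp(-t)
            u[1:-1] += dt * ((u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2 + S[1:-1])
            t += dt                              # boundaries stay at the exact value 0
        return x, u, t

    errs, grids = [], (17, 33, 65)
    for nx in grids:
        x, u, t = solve(nx)
        errs.append(np.max(np.abs(u - np.sin(np.pi * x) * np.exp(-t))))
    # the printed orders should approach 2, the nominal order of the scheme
    print("observed orders:", [np.log2(errs[i] / errs[i + 1]) for i in range(2)])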
Guidelines for the Design and Conduct of Clinical Studies in Knee Articular Cartilage Repair
Mithoefer, Kai; Saris, Daniel B.F.; Farr, Jack; Kon, Elizaveta; Zaslav, Kenneth; Cole, Brian J.; Ranstam, Jonas; Yao, Jian; Shive, Matthew; Levine, David; Dalemans, Wilfried; Brittberg, Mats
2011-01-01
Objective: To summarize current clinical research practice and develop methodological standards for objective scientific evaluation of knee cartilage repair procedures and products. Design: A comprehensive literature review was performed of high-level original studies providing information relevant for the design of clinical studies on articular cartilage repair in the knee. Analysis of cartilage repair publications and synopses of ongoing trials were used to identify important criteria for the design, reporting, and interpretation of studies in this field. Results: Current literature reflects the methodological limitations of the scientific evidence available for articular cartilage repair. However, clinical trial databases of ongoing trials document a trend suggesting improved study designs and clinical evaluation methodology. Based on the current scientific information and standards of clinical care, detailed methodological recommendations were developed for the statistical study design, patient recruitment, control group considerations, study endpoint definition, documentation of results, use of validated patient-reported outcome instruments, and inclusion and exclusion criteria for the design and conduct of scientifically sound cartilage repair study protocols. A consensus statement among the International Cartilage Repair Society (ICRS) and contributing authors experienced in clinical trial design and implementation was achieved. Conclusions: High-quality clinical research methodology is critical for the optimal evaluation of current and new cartilage repair technologies. In addition to generally applicable principles for orthopedic study design, specific criteria and considerations apply to cartilage repair studies. Systematic application of these criteria and considerations can facilitate study designs that are scientifically rigorous, ethical, practical, and appropriate for the question(s) being addressed in any given cartilage repair research project. PMID:26069574
ERIC Educational Resources Information Center
Goldstein, Barry L.; Patterson, Patrick O.
1988-01-01
Refers to Title VII of the Civil Rights Act of 1964 and the Supreme Court's disparate impact interpretation of Title VII in Griggs versus Duke Power Company. Contends that attacks on the Griggs decision are legally unsound and that claims made by advocates of validity generalization are scientifically unsupported. (Author/NB)
Design, Development and Validation of a Model of Problem Solving for Egyptian Science Classes
ERIC Educational Resources Information Center
Shahat, Mohamed A.; Ohle, Annika; Treagust, David F.; Fischer, Hans E.
2013-01-01
Educators and policymakers envision the future of education in Egypt as enabling learners to acquire scientific inquiry and problem-solving skills. In this article, we describe the validation of a model for problem solving and the design of instruments for evaluating new teaching methods in Egyptian science classes. The instruments were based on…
NASA Astrophysics Data System (ADS)
Buchholz, Bernhard; Ebert, Volker
2018-01-01
Highly accurate water vapor measurements are indispensable for understanding a variety of scientific questions as well as industrial processes. While in metrology water vapor concentrations can be defined, generated, and measured with relative uncertainties in the single-percent range, field-deployable airborne instruments deviate by up to 10-20 % even under quasi-static laboratory conditions. The novel SEALDH-II hygrometer, a calibration-free tuneable diode laser spectrometer, bridges this gap by implementing a new holistic concept to achieve higher accuracy levels in the field. We present in this paper the absolute validation of SEALDH-II against a traceable humidity generator during 23 days of permanent operation at 15 different H2O mole fraction levels between 5 and 1200 ppmv. At each mole fraction level, we studied the pressure dependence at six different gas pressures between 65 and 950 hPa. Further, we describe the setup for this metrological validation, the challenges to overcome when assessing water vapor measurements at a high accuracy level, and the comparison results. With this validation, SEALDH-II is the first airborne, metrologically validated humidity transfer standard, linking several scientific airborne and laboratory measurement campaigns to the international metrological water vapor scale.
Candidate Quality Measures for Hand Surgery.
2017-11-01
Quality measures are tools used by physicians, health care systems, and payers to evaluate performance, monitor the outcomes of interventions, and inform quality improvement efforts. A paucity of quality measures exists that address hand surgery care. We completed a RAND/UCLA (University of California Los Angeles) Delphi Appropriateness process with the goal of developing and evaluating candidate hand surgery quality measures to be used in national quality measure development efforts. A consortium of 9 academic upper limb surgeons completed the RAND/UCLA Delphi Appropriateness process to evaluate the importance, scientific acceptability, usability, and feasibility of 44 candidate quality measures, which addressed the hand problems the panelists felt were most appropriate for quality measure development. Panelists rated the measures on an ordinal scale from 1 (definitely not valid) to 9 (definitely valid) in 2 rounds (preliminary and final) with an intervening face-to-face discussion. Ratings from 1 to 3 were considered not valid, 4 to 6 equivocal or uncertain, and 7 to 9 valid. If no more than 2 of the 9 ratings were outside the 3-point range that included the median (1-3, 4-6, or 7-9), the panelists were considered to be in agreement. If 3 or more of the panelists' ratings of a measure were within the 1 to 3 range and 3 or more ratings were in the 7 to 9 range, the panelists were considered to be in disagreement. There was agreement on 43% (19) of the measures as important, 27% (12) as scientifically sound, 48% (21) as usable, and 59% (26) as feasible to complete. Ten measures met all 4 of these criteria and were therefore considered valid measurements of quality. The quality measures developed address outcomes (patient-reported outcomes for assessment and improvement of function) and processes of care (utilization rates of imaging, antibiotics, occupational therapy, ultrasound, and operative treatment). The consortium developed 10 measures of hand surgery quality using a validated methodology; these measures merit further development. Quality measures can be used to evaluate the quality of care provided by physicians and health systems and can inform quality- and value-based reimbursement models. Copyright © 2017 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
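The rating rules above translate directly into code; the following sketch (one reading of the stated rules, not the consortium's actual scoring procedure) classifies a panel of nine ratings.

    def classify_ratings(ratings):
        """Classify nine 1-9 appropriateness ratings per the rules in the abstract."""
        assert len(ratings) == 9
        med = sorted(ratings)[4]                            # median of nine ratings
        band = next(b for b in ((1, 3), (4, 6), (7, 9)) if b[0] <= med <= b[1])
        if sum(r <= 3 for r in ratings) >= 3 and sum(r >= 7 for r in ratings) >= 3:
            return "disagreement"
        outside = sum(not (band[0] <= r <= band[1]) for r in ratings)
        label = {(1, 3): "not valid", (4, 6): "equivocal", (7, 9): "valid"}[band]
        return f"{label} ({'agreement' if outside <= 2 else 'no agreement'})"

    print(classify_ratings([8, 9, 7, 8, 8, 7, 9, 6, 8]))    # -> valid (agreement)
    print(classify_ratings([1, 2, 2, 5, 5, 8, 8, 9, 5]))    # -> disagreement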
Missed Programs (You Can't TiVo This One): Why Psychologists Should Study Media.
Okdie, Bradley M; Ewoldsen, David R; Muscanell, Nicole L; Guadagno, Rosanna E; Eno, Cassie A; Velez, John A; Dunn, Robert A; O'Mally, Jamie; Smith, Lauren Reichart
2014-03-01
Media psychology involves the scientific examination of the cognitive processes and behavior involved in the selection, use, interpretation, and effects of communication across a variety of media (e.g., via the Internet, television, telephone, film). Media are central to people's lives, with estimates indicating that the average person spent over 3,515 hours using media in 2012. New technologies are increasing the importance of media. Data from two content analyses demonstrate the underrepresentation of media psychology in the mainstream psychological literature and in undergraduate and graduate psychology course offerings. We argue for the importance of a psychological approach to the study of media because of media's presence in people's lives and because psychologists use media in their research, where their choices may affect the external validity of their findings. We provide a useful framework from which psychologists can approach the study of media, and we conclude with recommendations for further areas of scientific inquiry relevant to psychological science. © The Author(s) 2014.
Concept of Science Data Management for the Korea Pathfinder Lunar Orbiter
NASA Astrophysics Data System (ADS)
Kim, Joo Hyeon
2016-10-01
South Korea plans to explore the Moon in 2018 or 2019. To this end, the Korea Aerospace Research Institute, a government-funded research institute, kicked off the Korea Lunar Exploration Development Program in January 2016 with the support of the Ministry of Science, ICT and Future Planning, South Korea. The first-stage mission of the program, the Korea Pathfinder Lunar Orbiter (KPLO), will acquire high-resolution images and science data for investigation of the lunar environment, as well as demonstrate and validate core technologies for space exploration. The scientific payload consists of three domestically developed science instruments, in addition to an imaging instrument, and several foreign-provided instruments. We are developing a science data management plan to encourage scientific activities using the science data acquired by these instruments. I introduce the Korean domestically developed science instruments and present the concept of the science data management plan for data delivery, processing, and distribution.
An explanation of resisted discoveries based on construal-level theory.
Fang, Hui
2015-02-01
New discoveries and theories are crucial for the development of science, but they are often initially resisted by the scientific community. This paper analyses resistance to scientific discoveries that supplement previous research results or conclusions with new phenomena, such as long chains in macromolecules, Alfvén waves, parity nonconservation in weak interactions, and quasicrystals. Construal-level theory is used to explain how the probability of new discoveries may be underestimated because of psychological distance: when the scope of an accepted theory has been insufficiently examined, its range of applicability may be overstated and the probability of undiscovered counter-examples underestimated. Psychological activity can therefore lead people to instinctively resist new discoveries. Direct evidence can help people judge the validity of a hypothesis with rational thinking. The effects of authorities and textbooks on resistance to discoveries are also discussed. From the results of our analysis, suggestions are provided to reduce resistance to real discoveries, which will benefit the development of science.
2017-12-08
This February 8, 2016 composite image reveals the complex distribution of phytoplankton in one of Earth's eastern boundary upwelling systems — the California Current. Recent work suggests that our warming climate may be increasing the intensity of upwelling in such regions, with possible repercussions for the species that comprise those ecosystems. NASA's OceanColor Web is supported by the Ocean Biology Processing Group (OBPG) at NASA's Goddard Space Flight Center. Its responsibilities include the collection, processing, calibration, validation, archiving and distribution of ocean-related products from a large number of operational, satellite-based remote-sensing missions, providing ocean color, sea surface temperature and sea surface salinity data to the international research community since 1996. Credit: NASA/Goddard/Suomi-NPP/VIIRS
Ribera, Josep M; Cardellach, Francesc; Selva, Albert
2005-12-01
The decision-making process includes a series of activities undertaken in biomedical journals from the moment a manuscript is received until it is accepted or rejected. Firstly, the manuscript is evaluated by the members of the Editorial Board, who analyze both its suitability for the journal and its scientific quality. After this initial evaluation, the article is evaluated by peer reviewers, an essential process to guarantee its scientific validity. Both the Editorial Board and the peer reviewers usually use checklists which are of enormous help in this task. Once the biomedical article has been accepted, the publication process is started, which in turn includes a series of steps, beginning with technical and medical review of the article's contents and ending with the article's publication in the journal. The present article provides a detailed description of the main technical and ethical issues involved in the processes of decision-making and publication of biomedical articles.
EVEREST: a virtual research environment for the Earth Sciences
NASA Astrophysics Data System (ADS)
Glaves, H. M.; Marelli, F.; Albani, M.
2015-12-01
There is an increasing requirement for researchers to work collaboratively using common resources whilst being geographically dispersed. By creating a virtual research environment (VRE) using a service-oriented architecture (SOA) tailored to the needs of Earth Science (ES) communities, the EVER-EST project will provide a range of both generic and domain-specific data management services to support a dynamic approach to collaborative research. EVER-EST will provide the means to overcome existing barriers to the sharing of Earth Science data and information, allowing research teams to discover, access, share and process heterogeneous data, algorithms, results and experiences within and across their communities, including domains beyond Earth Science. Data providers will also be able to monitor user experiences and collect feedback through the VRE, improving their capacity to adapt to the changing requirements of their end-users. The EVER-EST e-infrastructure will be validated by four virtual research communities (VRCs) covering different multidisciplinary ES domains, including ocean monitoring, selected natural hazards (flooding, ground instability and extreme weather events), land monitoring, and risk management (volcanoes and seismicity). Each VRC represents a different collaborative use case for the VRE, with its own specific requirements for data, software, best practice and community engagement. The diverse use cases will demonstrate how the VRE can be used for a range of activities, from straightforward data/software sharing to investigating ways to improve cooperative working. Development of the EVER-EST VRE will leverage the results of several previous projects that have produced state-of-the-art technologies for scientific data management and curation, as well as initiatives that have developed models, techniques and tools for the preservation of scientific methods and their implementation in computational forms such as scientific workflows.
Laboratory Experimental Design for a Glycomic Study.
Ugrina, Ivo; Campbell, Harry; Vučković, Frano
2017-01-01
Proper attention to study design before, careful conduct of procedures during, and appropriate inference from results after scientific experiments are important in all scientific studies to ensure that valid, and sometimes definitive, conclusions can be made. The design of experiments, also called experimental design, addresses the challenge of structuring and conducting experiments to answer the questions of interest as clearly and efficiently as possible.
ERIC Educational Resources Information Center
Gelisli, Yücel; Beisenbayeva, Lyazzat
2017-01-01
The purpose of the current study is to develop a reliable scale to be used to determine the scientific inquiry competency perception of post-graduate students engaged in post-graduate studies in the field of educational sciences and teacher education in Kazakhstan. The study employed the descriptive method. Within the context of the study, a scale…
ERIC Educational Resources Information Center
Pepiton, M. Brianna; Alvis, Lindsey J.; Allen, Kenneth; Logid, Gregory
2012-01-01
This article reviews a recent book arguing how a concept known as parental alienation syndrome--now parental alienation disorder--should be included in official psychiatric/psychological and medical classification diagnostic manuals. Anecdotal cases and opinion are presented as research and scientific evidence, and stories are presented as…
Argument within a Scientific Debate: The Case of the DRD2 A1 Allele as a Gene for Alcoholism.
ERIC Educational Resources Information Center
Wastyn, Ronald O.; Wastyn, M. Linda
1997-01-01
Investigates how opposing parties advanced arguments to the scientific community about the validity of DRD2 A1 allele as a gene causing alcoholism. Demonstrates to what extent scientists debate each other in journals by advancing opposing viewpoints with rigor and insight. Reveals what it means when scientists label a discovery in terms of finding…
Situational awareness of hazards: Validation of multi-source radiation measurements
NASA Astrophysics Data System (ADS)
Hultquist, C.; Cervone, G.
2016-12-01
Citizen-led movements producing scientific hazard data during disasters are increasingly common. After the Japanese earthquake-triggered tsunami in 2011, and the resulting radioactive releases at the damaged Fukushima Daiichi nuclear power plant, citizens monitored on-ground levels of radiation with innovative mobile devices built from off-the-shelf components. To date, the citizen-led SAFECAST project has recorded 50 million radiation measurements worldwide, with the majority of these measurements from Japan. The analysis of data which are multi-dimensional, unvetted, and produced by multiple devices presents big data challenges due to their volume, velocity, variety, and veracity. While the SAFECAST project produced massive open-source radiation measurements at specific coordinates and times, the reliability and validity of the overall data have not yet been assessed. The nuclear disaster provides a case for assessing the SAFECAST data against official aerial remote sensing radiation data jointly collected by the governments of the United States and Japan. A spatial and statistical assessment of SAFECAST requires several preprocessing steps. First, the SAFECAST ionizing radiation sensors collected data using different units of measure than the government data, so the measurements had to be converted. Second, naturally occurring background radiation and the decay rates of cesium from deposition surveys were used to properly compare measurements in space and time. Finally, the GPS-located points were selected within overlapping extents at multiple spatial resolutions. Quantitative measures were used to assess the similarities and differences in the observed measurements. Radiation measurements from the same geographic extents show similar spatial variations and statistically significant correlations. The results suggest that actionable scientific data for disasters and emergencies can be inferred from non-traditional and unvetted data generated through citizen science projects. This project provides a methodology for comparing datasets of radiological measurements over time and space. Integrating data for assessment from different Earth sensing systems is paramount for societal and environmental problems.
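As an illustration of those first two preprocessing steps, a minimal sketch of unit conversion and decay correction is given below. The CPM-to-dose-rate factor shown is the commonly cited value for the LND-7317 tube used in SAFECAST bGeigie devices, and the single-isotope Cs-137 correction is a simplifying assumption for illustration, not the study's actual procedure:

```python
import math

CS137_HALF_LIFE_Y = 30.17   # Cs-137 half-life in years
CPM_PER_USV_H = 334.0       # approximate factor for the LND-7317 tube (assumption)

def cpm_to_usv_per_h(cpm):
    """Convert raw counts per minute to an approximate dose rate in uSv/h."""
    return cpm / CPM_PER_USV_H

def decay_correct(dose_rate, years_elapsed, half_life_y=CS137_HALF_LIFE_Y):
    """Project a measured dose rate back to a common reference date,
    assuming the signal is dominated by a single isotope (simplification)."""
    decay_constant = math.log(2) / half_life_y
    return dose_rate * math.exp(decay_constant * years_elapsed)

# Example: a 668 CPM reading taken 2 years after the reference survey
print(decay_correct(cpm_to_usv_per_h(668.0), years_elapsed=2.0))  # ~2.09 uSv/h
```

Only after such harmonization do point-by-point spatial and statistical comparisons between the citizen and government datasets become meaningful.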
NASA Astrophysics Data System (ADS)
Douglass, D. H.; Kalnay, E.; Li, H.; Cai, M.
2005-05-01
Carbon monoxide (CO) is present in the troposphere as a product of fossil fuel combustion, biomass burning and the oxidation of volatile hydrocarbons. It is the principal sink of the hydroxyl radical (OH), thereby affecting the concentrations of greenhouse gases such as CH4 and O3. In addition, CO has a lifetime of 1-3 months, making it a good tracer for studying the long range transport of pollution. Satellite observations present a valuable tool in the investigation of tropospheric CO. The Atmospheric InfraRed Sounder (AIRS), onboard the Aqua satellite, is sensitive to tropospheric CO in a number of its 2378 channels. This sensitivity to CO, combined with the daily global coverage provided by AIRS, makes AIRS a potentially useful instrument for observing CO sources and transport. A maximum a posteriori (MAP) retrieval scheme (Rodgers 2000) has been developed for AIRS to provide CO profiles from near-surface altitudes to around 150 hPa. An extensive validation data set, consisting of over 50 in-situ aircraft CO profiles, has been constructed. This data set combines CO data from a number of independent aircraft campaigns. Results from this validation study and comparisons with the AIRS level 2 CO product will be presented. Rodgers, C. D. (2000), Inverse Methods for Atmospheric Sounding: Theory and Practice, World Scientific, Singapore.
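For context, the MAP retrieval in the Rodgers (2000) optimal-estimation framework has a standard closed form for a linearized forward model; the expression below is that textbook form, not a description of the AIRS-specific implementation:

```latex
% MAP estimate for y = Kx + \epsilon, given an a priori profile x_a:
\hat{x} = x_a + \left( K^{\top} S_{\epsilon}^{-1} K + S_a^{-1} \right)^{-1}
                K^{\top} S_{\epsilon}^{-1} \left( y - K x_a \right)
```

Here \(x_a\) and \(S_a\) are the a priori profile and its covariance, \(S_{\epsilon}\) is the measurement-error covariance, and \(K\) is the weighting-function (Jacobian) matrix mapping the CO profile to the observed radiances.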
López-Díaz, Cristina; Fraille-Calle, Luis; Herrero-Rosado, Marta; Arnés-Muñoz, Vanessa; De-Dios-De-Dios, Teresa
2016-01-01
The Guides of Good Practice (GGP) are necessary tools in universal healthcare and in clinical management, providing the user/patient with higher-quality care by optimizing and reinforcing individualized attention based on the best scientific evidence. The literature provides various references on the development of GGP, but little is known about professionals' attitudes towards them, since most existing studies are qualitative. The aim of this work was therefore to construct and validate a Likert scale to assess nurses' attitudes towards GGP. The methodology was quantitative, descriptive, cross-sectional, opinion-based and anonymous, and the scale was validated via the following measurements: content validation by experts, correlation between items, external reliability, internal consistency, stability and exploratory factor analysis. The result was a scale consisting of 20 items on attitudes towards GGP, with agreement among experts above 75% on all items and a significant Pearson correlation between pre-test and post-test for all variables except three. Internal consistency, measured by Cronbach's alpha, was 0.878. These results are acceptable in terms of the psychometric characteristics of the instrument, which is easy and fast to administer and simple to interpret, allowing nurses' attitudes towards GGP to be quantified and knowledge about them to be generated.
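Cronbach's alpha, the internal-consistency statistic reported above, is simple to compute from an item-response matrix; a minimal sketch with invented Likert responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses from 5 nurses to 4 Likert items (1-5)
scores = [[4, 5, 4, 4],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 3, 2, 3],
          [4, 4, 5, 4]]
print(round(cronbach_alpha(scores), 3))
```

Values around 0.8 or higher, as in the study's 0.878, are conventionally read as good internal consistency.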
SMAP Verification and Validation Project - Final Report
NASA Technical Reports Server (NTRS)
Murry, Michael
2012-01-01
In 2007, the National Research Council (NRC) released the Decadal Survey of Earth science. The survey identified 15 new space missions of significant scientific and application value for the National Aeronautics and Space Administration (NASA) to undertake in the coming decade. One of these missions was the Soil Moisture Active Passive (SMAP) mission, which NASA assigned to the Jet Propulsion Laboratory (JPL) in 2008. The goal of SMAP is to provide global, high-resolution mapping of soil moisture and its freeze/thaw states. The SMAP project recently passed its Critical Design Review and is proceeding with its fabrication and testing phase. Verification and Validation (V&V) is widely recognized as a critical component in systems engineering and is vital to the success of any space mission. V&V is a process that is used to check that a system meets its design requirements and specifications in order to fulfill its intended purpose. Verification often refers to the question "Have we built the system right?" whereas Validation asks "Have we built the right system?" Currently the SMAP V&V team is verifying design requirements through inspection, demonstration, analysis, or testing. An example of the SMAP V&V process is the verification of the antenna pointing accuracy with mathematical models, since it is not possible to provide the appropriate micro-gravity environment for testing the antenna on Earth before launch.
Handwriting Examination: Moving from Art to Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarman, Kristin H.; Hanlen, Richard C.; Manzolillo, P. A.
The scientific basis for handwriting individuality and the expertise of handwriting examiners has been questioned in several court cases and law review articles. The criticisms were originally directed at the proficiency and expertise of forensic document examiners (FDEs). However, these criticisms also illustrate the lack of empirical data to support and validate the premises and methodology of handwriting examination. As a result, the admissibility and weight of FDE testimony has been called into question. These assaults on the scientific integrity of handwriting analysis have created an urgent need for the forensic document examination community to develop objective standards, measurable criteria and a uniform methodology supported by properly controlled studies that evaluate and validate the significance of measurable handwriting characteristics.
Mother-child bonding assessment tools
Perrelli, Jaqueline Galdino Albuquerque; Zambaldi, Carla Fonseca; Cantilino, Amaury; Sougey, Everton Botelho
2014-01-01
Objective: To identify and describe research tools used to evaluate bonding between mother and child up to one year of age, as well as to provide information on reliability and validity measures related to these tools. Data source: Research studies available on PUBMED, LILACS, ScienceDirect, PsycINFO and CINAHL databases with the following descriptors: mother-child relations and mother infant relationship, as well as the expressions validity, reliability and scale. Data synthesis: 23 research studies were selected and fully analyzed. Thirteen evaluation research tools concerning mother and child attachment were identified: seven scales, three questionnaires, two inventories and one observation method. Of all tools analyzed, the Prenatal Attachment Inventory presented the highest validity and reliability measures for assessing the mother-fetus relation during pregnancy. Concerning the puerperal period, better consistency coefficients were found for the Maternal Attachment Inventory and the Postpartum Bonding Questionnaire. Moreover, the latter revealed a greater sensitivity for identifying both mild and severe disorders in the affective relations between mother and child. Conclusions: The majority of research tools are reliable for studying the phenomenon presented, although there are some limitations regarding construct and criterion validity. In addition, only two of them are translated into Portuguese and adapted to women and children populations in Brazil, a decisive gap in scientific production in this area. PMID:25479859
Assessing Predictive Validity of Pressure Ulcer Risk Scales- A Systematic Review and Meta-Analysis
PARK, Seong-Hi; LEE, Hea Shoon
2016-01-01
Background: The purpose of this study was to provide a scientific basis for pressure ulcer risk scales (Cubbin & Jackson, modified Braden, Norton, and Waterlow) as nursing diagnostic tools by evaluating their predictive validity for pressure sores. Methods: Articles published between 1966 and 2013 from periodicals indexed in the Ovid Medline, Embase, CINAHL, KoreaMed, NDSL, and other databases were selected using the key word "pressure ulcer". QUADAS-II was applied to assess the internal validity of the diagnostic studies. Selected studies were analyzed using meta-analysis with MetaDisc 1.4. Results: Seventeen diagnostic studies with high methodological quality, involving 5,185 patients, were included. In the meta-analysis, the sROC AUC of the Braden, Norton, and Waterlow scales was over 0.7, showing moderate predictive validity, although interpretation is limited by significant differences between studies. In addition, the Waterlow scale is insufficient as a screening tool owing to its low sensitivity compared with the other scales. Conclusion: Contemporary pressure ulcer risk scales are not suitable for uniform application to patients under standardized criteria. Therefore, in order to provide more effective nursing care for bedsores, a new or modified pressure ulcer risk scale should be developed based on the strengths and weaknesses of existing tools. PMID:27114977
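For readers unfamiliar with the accuracy measures pooled in such a meta-analysis, a minimal sketch of how per-study sensitivity, specificity and the diagnostic odds ratio are derived from 2x2 counts (the counts here are invented, not taken from the review; tools such as MetaDisc then pool these across studies into an sROC curve):

```python
# Per-study accuracy of a risk scale against observed ulcer outcomes,
# from 2x2 counts (tp, fp, fn, tn).
def diagnostic_measures(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # Diagnostic odds ratio: odds of a positive score among patients who
    # developed an ulcer versus among those who did not.
    dor = (tp * tn) / (fp * fn)
    return sensitivity, specificity, dor

studies = {"Study A": (30, 40, 5, 125), "Study B": (22, 60, 8, 210)}
for name, counts in studies.items():
    se, sp, dor = diagnostic_measures(*counts)
    print(f"{name}: sensitivity={se:.2f} specificity={sp:.2f} DOR={dor:.1f}")
```

A screening tool with low sensitivity, as reported for the Waterlow scale, misses a large share of the patients who go on to develop ulcers, which is why sensitivity is weighted heavily for screening use.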
HÖner, Oliver; Votteler, Andreas; Schmid, Markus; Schultz, Florian; Roth, Klaus
2015-01-01
The utilisation of motor performance tests for talent identification in youth sports is discussed intensively in talent research. This article examines the reliability, differential stability and validity of the motor diagnostics conducted nationwide by the German football talent identification and development programme and provides reference values for a standardised interpretation of the diagnostics results. Highly selected players (the top 4% of their age groups, U12-U15) took part in the diagnostics at 17 measurement points between spring 2004 and spring 2012 (N = 68,158). The heterogeneous test battery measured speed abilities and football-specific technical skills (sprint, agility, dribbling, ball control, shooting, juggling). For all measurement points, the overall score and the speed tests showed high internal consistency, high test-retest reliability and satisfying differential stability. The diagnostics demonstrated satisfying factorial-related validity with plausible and stable loadings on the two empirical factors "speed" and "technical skills". The score, and the technical skills dribbling and juggling, differentiated the most among players of different performance levels and thus showed the highest criterion-related validity. Satisfactory psychometric properties for the diagnostics are an important prerequisite for a scientifically sound rating of players' actual motor performance and for the future examination of the prognostic validity for success in adulthood.
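Reference values of this kind are typically used to standardise an individual's raw result against the distribution of his or her age group; a minimal sketch of that standardisation (the age-group norms below are invented, not the programme's published values):

```python
# Standardise a raw 20 m sprint time against hypothetical age-group norms,
# so results can be read on a common z-score / percentile scale.
from statistics import NormalDist

NORMS = {"U12": (3.45, 0.14), "U13": (3.36, 0.13)}  # (mean, SD) in seconds

def z_score(value, age_group, lower_is_better=True):
    mean, sd = NORMS[age_group]
    z = (value - mean) / sd
    return -z if lower_is_better else z  # flip so higher z = better performance

z = z_score(3.20, "U13")
print(f"z = {z:.2f}, percentile = {NormalDist().cdf(z) * 100:.0f}")
```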
Evaluation of animal models of neurobehavioral disorders
van der Staay, F Josef; Arndt, Saskia S; Nordquist, Rebecca E
2009-01-01
Animal models play a central role in all areas of biomedical research. The process of animal model building, development and evaluation has rarely been addressed systematically, despite the long history of using animal models in the investigation of neuropsychiatric disorders and behavioral dysfunctions. An iterative, multi-stage trajectory for developing animal models and assessing their quality is proposed. The process starts with defining the purpose(s) of the model, preferentially based on hypotheses about brain-behavior relationships. Then, the model is developed and tested. The evaluation of the model takes scientific and ethical criteria into consideration. Model development requires a multidisciplinary approach. Preclinical and clinical experts should establish a set of scientific criteria which a model must meet. The scientific evaluation consists of assessing the replicability/reliability, predictive, construct and external validity/generalizability, and relevance of the model. We emphasize the role of (systematic and extended) replications in the course of the validation process. One may apply a multiple-tiered 'replication battery' to estimate the reliability/replicability, validity, and generalizability of results. Compromised welfare is inherent in many deficiency models in animals. Unfortunately, 'animal welfare' is a vaguely defined concept, making it difficult to establish exact evaluation criteria. Weighing the animal's welfare and considering whether action is indicated to reduce the discomfort must accompany the scientific evaluation at every stage of the model building and evaluation process. Animal model building should be discontinued if the model does not meet the preset scientific criteria, or when animal welfare is severely compromised. The application of the evaluation procedure is exemplified using the rat with neonatal hippocampal lesion as a proposed model of schizophrenia. In a manner congruent to that for improving animal models, guided by the procedure expounded upon in this paper, the development and evaluation procedure itself may be improved by careful definition of the purpose(s) of a model and by defining better evaluation criteria, based on the proposed use of the model. PMID:19243583
Alijani, Rahim
2015-01-01
In recent years emphasis has been placed on evaluation studies and the publication of scientific papers in national and international journals. In this regard the publication of scientific papers in journals in the Institute for Scientific Information (ISI) database is highly recommended. The evaluation of scientific output via articles in journals indexed in the ISI database will enable the Iranian research authorities to allocate and organize research budgets and human resources in a way that maximises efficient science production. The purpose of the present paper is to present a general and valid view of science production in the field of stem cells. In this research, outputs in the field of stem cell research are evaluated by survey research, using the method of science assessment known as scientometrics. A total of 1528 documents were extracted from the ISI database and analysed using descriptive statistics in Excel. The results of this research showed that 1528 papers in the stem cell field in the Web of Knowledge database were produced by Iranian researchers. The top ten Iranian researchers in this field have produced 936 of these papers, equivalent to 61.3% of the total. Among the top ten, Soleimani M. occupies first place with 181 papers. Regarding international scientific participation, Iranian researchers have cooperated to publish papers with researchers from 50 countries. Nearly 32% (452 papers) of the total research output in this field has been published in the top 10 journals. These results show that a small number of researchers have published the majority of papers in the stem cell field. International participation in this field of research is unacceptably low. Such participation provides the opportunity to import modern science and international experience into Iran. This not only fosters scientific growth, but also improves research and enhances opportunities for employment and professional development. Iranian scientific outputs from stem cell research should not be limited to only a few specific journals.
Castrejon, I; Carmona, L; Agrinier, N; Andres, M; Briot, K; Caron, M; Christensen, R; Consolaro, A; Curbelo, R; Ferrer, Montserrat; Foltz, Violaine; Gonzalez, C; Guillemin, F; Machado, P M; Prodinger, Birgit; Ravelli, A; Scholte-Voshaar, M; Uhlig, T; van Tuyl, L H D; Zink, A; Gossec, L
2015-01-01
Patient-reported outcomes (PROs) are relevant in rheumatology. Variable accessibility and validity of commonly used PROs are obstacles to homogeneity in evidence synthesis. The objective of this project was to provide a comprehensive library of "validated PROs". A launch meeting with rheumatologists, PRO methodological experts, and patients was held to define the library's aims, scope, and basic requirements. To feed the library we performed systematic reviews on selected diseases and domains. Relevant information on PROs was collected using standardised data collection forms based on the COSMIN checklist. The EULAR Outcomes Measures Library (OML), whose aim is to provide and advise on PROs in a user-friendly manner grounded in scientific evidence, has been launched and made accessible to all. PROs currently included cover any domain and are either generic or specific to the following diseases: rheumatoid arthritis, osteoarthritis, spondyloarthritis, low back pain, systemic lupus erythematosus, gout, osteoporosis, juvenile idiopathic arthritis, and fibromyalgia. Up to 236 instruments (106 generic and 130 specific) have been identified, evaluated, and included. The systematic review for SLE, which yielded 10 specific instruments, is presented here as an example. The OML website includes, for each PRO, information on the construct being measured and the extent of validation, recommendations for use, and available versions; it also contains a glossary of common validation terms. The OML is an in-progress library led by rheumatologists, related professionals and patients that will help users to better understand and apply PROs in rheumatic and musculoskeletal diseases.
Secure Peer-to-Peer Networks for Scientific Information Sharing
NASA Technical Reports Server (NTRS)
Karimabadi, Homa
2012-01-01
The most common means of remote scientific collaboration today include the trio of e-mail for electronic communication, FTP for file sharing, and personalized Web sites for dissemination of papers and research results. With the growth of broadband Internet, there has been a desire to share large files (such as movies and scientific data files) over the Internet. E-mail has limits on the size of files that can be attached and transmitted. FTP is often used to share large files, but this requires the user to set up an FTP site for which it is hard to set group privileges, the process is not straightforward for everyone, and the content is not searchable. Peer-to-peer (P2P) technology, which has been overwhelmingly successful in popular content distribution, is the basis for development of a scientific collaboratory called Scientific Peer Network (SciPerNet). This technology combines social networking with P2P file sharing. SciPerNet will be a standalone application, written in Java and Swing, thus ensuring portability to a number of different platforms. Some of the features include user authentication, search capability, seamless integration with a data center, the ability to create groups and social networks, and on-line chat. In contrast to P2P networks such as Gnutella, BitTorrent, and others, SciPerNet incorporates three design elements that are critical to the application of P2P for scientific purposes: user authentication, data integrity validation, and reliable searching. SciPerNet also provides a complementary solution to virtual observatories by enabling distributed collaboration and sharing of downloaded and/or processed data among scientists. This will, in turn, increase scientific returns from NASA missions. As such, SciPerNet can serve a two-fold purpose for NASA: cost-saving software as well as a productivity tool for scientists working with data from NASA missions.
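Of those three elements, data integrity validation is the most mechanical: in BitTorrent-style systems a file is split into fixed-size pieces and each piece's cryptographic digest is published, so a peer can verify every downloaded piece independently. The sketch below illustrates that general mechanism only; it is not SciPerNet's actual implementation (which the abstract does not specify), and SHA-256 is chosen here purely for illustration:

```python
import hashlib

PIECE_SIZE = 256 * 1024  # 256 KiB pieces, a typical choice

def piece_hashes(data: bytes, piece_size: int = PIECE_SIZE) -> list[str]:
    """Digest of every piece, published alongside the file by its owner."""
    return [hashlib.sha256(data[i:i + piece_size]).hexdigest()
            for i in range(0, len(data), piece_size)]

def verify_piece(piece: bytes, expected_hex: str) -> bool:
    """Run by a downloader on each received piece before accepting it."""
    return hashlib.sha256(piece).hexdigest() == expected_hex

data = b"scientific dataset bytes..." * 10000
manifest = piece_hashes(data)                       # published by the owner
ok = verify_piece(data[:PIECE_SIZE], manifest[0])   # checked by a downloader
print(len(manifest), ok)
```

Because each piece is checked against a trusted manifest, a corrupted or malicious peer can at worst waste bandwidth, never silently alter the shared data.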
NREL Spectrum of Clean Energy Innovation (Brochure)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2011-09-01
This brochure describes the NREL Spectrum of Clean Energy Innovation, which includes analysis and decision support, fundamental science, market relevant research, systems integration, testing and validation, commercialization and deployment. Through deep technical expertise and an unmatched breadth of capabilities, the National Renewable Energy Laboratory (NREL) leads an integrated approach across the spectrum of renewable energy innovation. From scientific discovery to accelerating market deployment, NREL works in partnership with private industry to drive the transformation of our nation's energy systems. NREL integrates the entire spectrum of innovation, including fundamental science, market relevant research, systems integration, testing and validation, commercialization, and deployment. Our world-class analysis and decision support informs every point on the spectrum. The innovation process at NREL is inter-dependent and iterative. Many scientific breakthroughs begin in our own laboratories, but new ideas and technologies may come to NREL at any point along the innovation spectrum to be validated and refined for commercial use.
Iyioha, Ireh
2011-01-01
This paper examines the (in)compatibility between the diagnostic and therapeutic theories of complementary and alternative medicine (CAM) and a science-based regulatory framework. Specifically, the paper investigates the nexus between statutory legitimacy and scientific validation of health systems, with an examination of its impact on the development of complementary and alternative therapies. The paper evaluates competing theories for validating CAM, ranging from the RCT methodology to anthropological perspectives, and contends that while the RCT method might be beneficial in the regulation of many CAM therapies, dogmatic adherence to this paradigm as the exclusive method for legitimizing CAM will be adverse to the independent development of many CAM therapies whose philosophies and mechanisms of action are not scientifically interpretable. Drawing on history and research evidence to support this argument, the paper argues for a regulatory model that accommodates different evidential paradigms in support of a pluralistic healthcare system that balances the imperative of quality assurance with the need to ensure access. PMID:20953428
Planetary geomorphology: Some historical/analytical perspectives
NASA Astrophysics Data System (ADS)
Baker, V. R.
2015-07-01
Three broad themes from the history of planetary geomorphology provide lessons in regard to the logic (valid reasoning processes) for the doing of that science. The long controversy over the origin of lunar craters, which was dominated for three centuries by the volcanic hypothesis, provides examples of reasoning on the basis of authority and a priori presumptions. Percival Lowell's controversy with geologists over the nature of linear markings on the surface of Mars illustrates the role of tenacity in regard to the beliefs of some individual scientists. Finally, modern controversies over the role of water in shaping the surface of Mars illustrate how the a priori method, i.e., belief produced according to reason, can seductively cloud the scientific openness to the importance of brute facts that deviate from a prevailing paradigm.
Krueger, Robert F; Eaton, Nicholas R
2010-04-01
We were sincerely flattered to discover that John Gunderson, Michael First, Paul Costa, Robert McCrae, Michael Hallquist, and Paul Pilkonis provided commentaries on our target article. In this brief response, we cannot hope to discuss the myriad points raised by this august group. Such a task would be particularly daunting given the diversity of the commentaries. Indeed, the diversity of the commentaries provides a kind of "metacommentary" on the state of personality and psychopathology research. That is, the intellectual diversity contained in the commentaries underlines the substantial challenges that lie ahead of us, in terms of articulating a model of personality and psychopathology with both scientific validity and clinical applicability. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Flight Demonstrations of Orbital Space Plane (OSP) Technologies
NASA Technical Reports Server (NTRS)
Turner, Susan
2003-01-01
The Orbital Space Plane (OSP) Program embodies NASA's priority to transport Space Station crews safely, reliably, and affordably, while it empowers the Nation's greater strategies for scientific exploration and space leadership. As early in the development cycle as possible, the OSP will provide crew rescue capability, offering an emergency ride home from the Space Station, while accommodating astronauts who are deconditioned due to long-duration missions, or those that may be ill or injured. As the OSP Program develops a fully integrated system, it will use existing technologies and employ computer modeling and simulation. Select flight demonstrator projects will provide valuable data on launch, orbital, reentry, and landing conditions to validate thermal protection systems, autonomous operations, and other advancements, especially those related to crew safety and survival.
Pairis-Garcia, M; Moeller, S J
2017-03-01
The Common Swine Industry Audit (CSIA) was developed and scientifically evaluated through the combined efforts of a task force consisting of university scientists, veterinarians, pork producers, packers, processors, and retail and food service personnel to provide stakeholders throughout the pork chain with a consistent, reliable, and verifiable system to ensure on-farm swine welfare and food safety. The CSIA tool was built from the framework of the Pork Quality Assurance Plus (PQA Plus) site assessment program with the purpose of developing a single, common audit platform for the U.S. swine industry. Twenty-seven key aspects of swine care are captured and evaluated in CSIA and cover the specific focal areas of animal records, animal observations, facilities, and caretakers. Animal-based measures represent approximately 50% of CSIA evaluation criteria and encompass critical failure criteria, including observation of willful acts of abuse and determination of timely euthanasia. Objective, science-based measures of animal well-being parameters (e.g., BCS, lameness, lesions, hernias) are assessed within CSIA using statistically validated sample sizes providing a detection ability of 1% with 95% confidence. The common CSIA platform is used to identify care issues and facilitate continuous improvement in animal care through a validated, repeatable, and feasible animal-based audit process. Task force members provide continual updates to the CSIA tool with a specific focus toward 1) identification and interpretation of appropriate animal-based measures that provide inherent value to pig welfare, 2) establishment of acceptability thresholds for animal-based measures, and 3) interpretation of CSIA data for use and improvement of welfare within the U.S. swine industry.
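Detection-level sampling of the kind described ("1% with 95% confidence") is conventionally derived from the detect-at-least-one formula; the sketch below uses that textbook large-herd approximation, which may differ in detail from the task force's actual sampling tables:

```python
import math

def detection_sample_size(prevalence: float, confidence: float) -> int:
    """Smallest n such that, if a condition affects `prevalence` of a large
    herd, a random sample of n animals contains at least one affected animal
    with probability `confidence`: solve 1 - (1 - p)**n >= c for n."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - prevalence))

print(detection_sample_size(0.01, 0.95))  # ~299 animals for 1% prevalence, 95%
```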
Networking Technologies Enable Advances in Earth Science
NASA Technical Reports Server (NTRS)
Johnson, Marjory; Freeman, Kenneth; Gilstrap, Raymond; Beck, Richard
2004-01-01
This paper describes an experiment to prototype a new way of conducting science by applying networking and distributed computing technologies to an Earth Science application. A combination of satellite, wireless, and terrestrial networking provided geologists at a remote field site with interactive access to supercomputer facilities at two NASA centers, thus enabling them to validate and calibrate remotely sensed geological data in near-real time. This represents a fundamental shift in the way that Earth scientists analyze remotely sensed data. In this paper we describe the experiment and the network infrastructure that enabled it, analyze the data flow during the experiment, and discuss the scientific impact of the results.
HoPaCI-DB: host-Pseudomonas and Coxiella interaction database
Bleves, Sophie; Dunger, Irmtraud; Walter, Mathias C.; Frangoulidis, Dimitrios; Kastenmüller, Gabi; Voulhoux, Romé; Ruepp, Andreas
2014-01-01
Bacterial infectious diseases are the result of multifactorial processes affected by the interplay between virulence factors and host targets. The host-Pseudomonas and Coxiella interaction database (HoPaCI-DB) is a publicly available manually curated integrative database (http://mips.helmholtz-muenchen.de/HoPaCI/) of host–pathogen interaction data from Pseudomonas aeruginosa and Coxiella burnetii. The resource provides structured information on 3585 experimentally validated interactions between molecules, bioprocesses and cellular structures extracted from the scientific literature. Systematic annotation and interactive graphical representation of disease networks make HoPaCI-DB a versatile knowledge base for biologists and network biology approaches. PMID:24137008
Native American medicine and cardiovascular disease.
Nauman, Eileen
2007-01-01
Native American medicine provides an approach to the treatment of cardiovascular disease that is unique and that can complement modern medicine treatments. Although specific practices among the various Native American tribes (Nations) can vary, there is a strong emphasis on the power of shamanism that can be supplemented by the use of herbal remedies, sweat lodges, and special ceremonies. Most of the practices are passed down by oral tradition, and there is specific training regarding the Native American healer. Native American medicine has strong testimonial experiences to suggest benefit in cardiac patients; however, critical scientific scrutiny is necessary to confirm the validity of the benefits shown to date.
NASA Technical Reports Server (NTRS)
Hicks, K.; Steele, W.
1974-01-01
The SEASAT program will provide scientific and economic benefits from global remote sensing of the ocean's dynamic and physical characteristics. The program as presently envisioned consists of: (1) SEASAT A; (2) SEASAT B; and (3) Operational SEASAT. The purpose of this economic assessment was to identify, rationalize, quantify and validate the economic benefits evolving from SEASAT. These benefits will arise from improvements in the operating efficiency of systems that interface with the ocean. SEASAT data will be combined with data from other ocean and atmospheric sampling systems and then processed through analytical models of the interaction between oceans and atmosphere to yield accurate global measurements and global long-range forecasts of ocean conditions and weather.
Gonthier, Paolo; Visentin, Ivan; Valentino, Danila; Tamietti, Giacomo; Cardinale, Francesca
2017-04-01
When two or more scientists independently describe the same species under different valid Latin names, a case of synonymy occurs. In such a case, the international nomenclature rules stipulate that the first name to appear in a peer-reviewed publication has priority over the others. Based on a recent episode involving priority determination between two competing names for the same fungal plant pathogen, this letter wishes to open a discussion on the ethics of scientific publication and points out the necessity of correct management of the information provided through personal communications, whose traceability would prevent their fraudulent or accidental manipulation.
Bracketing as a skill in conducting unstructured qualitative interviews.
Sorsa, Minna Anneli; Kiikkala, Irma; Åstedt-Kurki, Päivi
2015-03-01
To provide an overview of bracketing as a skill in unstructured qualitative research interviews. Researchers affect the qualitative research process. Bracketing in descriptive phenomenology entails researchers setting aside their pre-understanding and acting non-judgementally. In interpretative phenomenology, previous knowledge is used intentionally to create new understanding. A literature search of bracketing in phenomenology and qualitative research. This is a methodology paper examining the researchers' impact on creating data in qualitative research. Self-knowledge, sensitivity and reflexivity of the researcher enable bracketing. Skilled and experienced researchers are needed to use bracketing in unstructured qualitative research interviews. Bracketing adds scientific rigour and validity to any qualitative study.
The coexistence of alternative and scientific conceptions in physics
NASA Astrophysics Data System (ADS)
Ozdemir, Omer F.
The purpose of this study was to inquire about the simultaneous coexistence of alternative and scientific conceptions in the domain of physics. This study was particularly motivated by several arguments put forward in opposition to the Conceptual Change Model. In the simplest form, these arguments state that people construct different domains of knowledge and different modes of perception in different situations. Therefore, holding different conceptualizations is unavoidable and expecting a replacement in an individual's conceptual structure is not plausible in terms of instructional practices. The following research questions were generated to inquire about this argument: (1) Do individuals keep their alternative conceptions after they have acquired scientific conceptions? (2) Assuming that individuals who acquired scientific conceptions also have alternative conceptions, how are these different conceptions nested in their conceptual structure? (3) What kind of knowledge, skills, and reasoning are necessary to transfer scientific principles instead of alternative ones in the construction of a valid model? Analysis of the data collected from the non-physics group indicated that the nature of alternative conceptions is framed by two types of reasoning: reasoning by mental simulation and semiformal reasoning. Analysis of the data collected from the physics group revealed that mental images or scenes feeding reasoning by mental simulation had not disappeared after the acquisition of scientific conceptions. The analysis of data also provided enough evidence to conclude that alternative principles feeding semiformal reasoning have not necessarily disappeared after the acquisition of scientific conceptions. However, in regard to semiformal reasoning, compartmentalization was not as clear as the case demonstrated in reasoning by mental simulation; instead semiformal and scientific reasoning are intertwined in a way that the components of semiformal reasoning can easily take their place among the components of scientific reasoning. In spite of the fact that the coexistence of multiple conceptions might obstruct the transfer of scientific conceptions in problem-solving situations, several factors stimulating the use of scientific conceptions were noticed explicitly. These factors were categorized as follows: (a) the level of individuals' domain specific knowledge in the corresponding field, (b) the level of individuals' knowledge about the process of science (how science generates its knowledge claims), (c) the level of individuals' awareness of different types of reasoning and conceptions, and (d) the context in which the problem is situated. (Abstract shortened by UMI.)
Nurturing reliable and robust open-source scientific software
NASA Astrophysics Data System (ADS)
Uieda, L.; Wessel, P.
2017-12-01
Scientific results are increasingly the product of software. The reproducibility and validity of published results cannot be ensured without access to the source code of the software used to produce them. Therefore, the code itself is a fundamental part of the methodology and must be published along with the results. With such a reliance on software, it is troubling that most scientists do not receive formal training in software development. Tools such as version control, continuous integration, and automated testing are routinely used in industry to ensure the correctness and robustness of software. However, many scientists do not even know of their existence (although efforts like Software Carpentry are having an impact on this issue; software-carpentry.org). Publishing the source code is only the first step in creating an open-source project. For a project to grow it must provide documentation, participation guidelines, and a welcoming environment for new contributors. Expanding the project community is often more challenging than the technical aspects of software development. Maintainers must invest time to enforce the rules of the project and to onboard new members, which can be difficult to justify in the context of the "publish or perish" mentality. This problem will continue as long as software contributions are not recognized as valid scholarship by hiring and tenure committees. Furthermore, there are still unsolved problems in providing attribution for software contributions. Many journals and metrics of academic productivity do not recognize citations to sources other than traditional publications. Thus, some authors choose to publish an article about the software and use it as a citation marker. One issue with this approach is that updating the reference to include new contributors involves writing and publishing a new article. A better approach would be to cite a permanent archive of individual versions of the source code in services such as Zenodo (zenodo.org). However, citations to these sources are not always recognized when computing citation metrics. In summary, the widespread development of reliable and robust open-source software relies on the creation of formal training programs in software development best practices and the recognition of software as a valid form of scholarship.
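As a concrete illustration of the automated-testing practice advocated here, a minimal pytest-style unit test is sketched below (the function and file names are invented for the example; a continuous-integration service would run such tests on every commit):

```python
# test_geodesy.py -- run with `pytest` so regressions are caught automatically.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def test_same_point_is_zero():
    assert haversine_km(10.0, 20.0, 10.0, 20.0) == 0.0

def test_quarter_meridian():
    # Pole to equator along a meridian is ~10,008 km on a spherical Earth.
    assert abs(haversine_km(0.0, 0.0, 90.0, 0.0) - 10008) < 10
```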
Module Validity of Peer Counselor Character Service in State University of Medan
ERIC Educational Resources Information Center
Dewi, Rosmala; Rahmadana, Muhammad Fitri; Dalimunthe, Muhammad Bukhori
2016-01-01
Many approaches can be used to address students' problems, one of them involving the students themselves (peer counselors). This requires a standard model that can be applied by students as a guideline for implementing the guidance. Validation of the module must be carried out according to the rules of various scientific tests. State University of…
Developing a Science Process Skills Test for Secondary Students: Validity and Reliability Study
ERIC Educational Resources Information Center
Feyzioglu, Burak; Demirdag, Baris; Akyildiz, Murat; Altun, Eralp
2012-01-01
Science process skills are claimed to enable individuals to improve their own life visions and to give them a scientific view/literacy as a standard of their understanding about the nature of science. The main purpose of this study was to develop a valid, reliable and practical test for measuring Science Process Skills (SPS) in secondary…
ERIC Educational Resources Information Center
Rubilar, Álvaro Sebastián Bustos; Badillo, Gonzalo Zubieta
2017-01-01
In this article, we report how a geometric task based on the ACODESA methodology (collaborative learning, scientific debate and self-reflection) promotes the reformulation of the students' validations and allows revealing the students' aims in each of the stages of the methodology. To do so, we present the case of a team and, particularly, one of…
ERIC Educational Resources Information Center
Herodotou, Christothea; Kyza, Eleni A.; Nicolaidou, Iolie; Hadjichambis, Andreas; Kafouris, Dimitris; Terzian, Freda
2012-01-01
Genetically modified organisms (GMOs) is a rapidly evolving area of scientific innovation and an issue receiving global attention. Attempts to devise usable instruments that assess people's attitudes towards this innovation have been rare and non-systematic. The aim of this paper is to present the development and validation of the genetically…
ERIC Educational Resources Information Center
Medina Munoz, Arlette Zamarie
2013-01-01
In Puerto Rico, there is no survey that collects parents' perceptions of the available services for gifted children. Considering this, in this investigation an instrument was created and scientifically validated to collect parents' perceptions of the educational services available. The instrument was validated using internal and external…
Standardization of Distant Intercessory Prayer for Research on Health and Well-Being
ERIC Educational Resources Information Center
Parkinson, Kathleen Elizabeth
2007-01-01
In recent years, distant (remote) intercessory prayer has been put up against the scientific method of research. Studies are few, variable, and tend to be nongeneralizable. Lack of construct validity of the variable prayer is one of the weaknesses that opens up the research to valid critique and scrutiny. The belief that research in this field is…
Elder Abuse: Global Situation, Risk Factors, and Prevention Strategies
Pillemer, Karl; Burnes, David; Riffin, Catherine; Lachs, Mark S.
2016-01-01
Purpose: Elder mistreatment is now recognized internationally as a pervasive and growing problem, urgently requiring the attention of health care systems, social welfare agencies, policymakers, and the general public. In this article, we provide an overview of global issues in the field of elder abuse, with a focus on prevention. Design and Methods: This article provides a scoping review of key issues in the field from an international perspective. Results: By drawing primarily on population-based studies, this scoping review provided a more valid and reliable synthesis of current knowledge about prevalence and risk factors than has been available. Despite the lack of scientifically rigorous intervention research on elder abuse, the review also identified 5 promising strategies for prevention. Implications: The findings highlight a growing consensus across studies regarding the extent and causes of elder mistreatment, as well as the urgent need for efforts to make elder mistreatment prevention programs more effective and evidence based. PMID:26994260
Best practices for evaluating single nucleotide variant calling methods for microbial genomics
Olson, Nathan D.; Lund, Steven P.; Colman, Rebecca E.; Foster, Jeffrey T.; Sahl, Jason W.; Schupp, James M.; Keim, Paul; Morrow, Jayne B.; Salit, Marc L.; Zook, Justin M.
2015-01-01
Innovations in sequencing technologies have allowed biologists to make incredible advances in understanding biological systems. As experience grows, researchers increasingly recognize that analyzing the wealth of data provided by these new sequencing platforms requires careful attention to detail for robust results. Thus far, much of the scientific community's focus in bacterial genomics has been on evaluating genome assembly algorithms and rigorously validating assembly program performance. Missing, however, is a focus on critical evaluation of variant callers for these genomes. Variant calling is essential for comparative genomics as it yields insights into nucleotide-level organismal differences. Variant calling is a multistep process with a host of potential error sources that may lead to incorrect variant calls. Identifying and resolving these incorrect calls is critical for bacterial genomics to advance. The goal of this review is to provide guidance on validating algorithms and pipelines used in variant calling for bacterial genomics. First, we will provide an overview of the variant calling procedures and the potential sources of error associated with the methods. We will then identify appropriate datasets for use in evaluating algorithms and describe statistical methods for evaluating algorithm performance. As variant calling moves from basic research to the applied setting, standardized methods for performance evaluation and reporting are required; it is our hope that this review provides the groundwork for the development of these standards. PMID:26217378
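One standard statistical evaluation of caller performance compares the call set against a trusted truth set and reports precision, recall and F1; a minimal sketch of that comparison (the positions and alleles below are invented):

```python
# Evaluate called SNVs against a truth set of (contig, position, alt allele).
truth = {("chr1", 1_204_512, "A"), ("chr1", 2_991_004, "T"),
         ("chr2", 101_220, "G")}                      # made-up truth set
calls = {("chr1", 1_204_512, "A"), ("chr2", 101_220, "C"),
         ("chr2", 555_001, "T")}                      # made-up call set

tp = len(truth & calls)   # correct calls (position and allele both match)
fp = len(calls - truth)   # false positives, including wrong-allele calls
fn = len(truth - calls)   # missed variants

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```

Note that a call at the right position but with the wrong allele counts as both a false positive and a false negative under this strict matching, one of the definitional choices a standardized reporting method would need to fix.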
Lost in space: design of experiments and scientific exploration in a Hogarth Universe.
Lendrem, Dennis W; Lendrem, B Clare; Woods, David; Rowland-Jones, Ruth; Burke, Matthew; Chatfield, Marion; Isaacs, John D; Owen, Martin R
2015-11-01
A Hogarth, or 'wicked', universe is an irregular environment generating data to support erroneous beliefs. Here, we argue that development scientists often work in such a universe. We demonstrate that exploring these multidimensional spaces using small experiments guided by scientific intuition alone gives rise to an illusion of validity and a misplaced confidence in that scientific intuition. By contrast, design of experiments (DOE) permits the efficient mapping of such complex, multidimensional spaces. We describe simulation tools that enable research scientists to explore these spaces in relative safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
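To make the contrast concrete: where one-factor-at-a-time probing samples a thin slice of the space, even the simplest DOE construction, a two-level full factorial, covers every corner of it. A minimal sketch with invented process factors:

```python
# A two-level full factorial design visits every combination of factor
# settings, instead of varying one factor at a time from a baseline.
from itertools import product

factors = {                      # hypothetical process factors
    "temperature_C": (30, 40),
    "pH": (6.5, 7.5),
    "stir_rpm": (200, 400),
}

design = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for run, settings in enumerate(design, start=1):   # 2**3 = 8 runs
    print(run, settings)
```

Because every factor is varied against every level of the others, such a design can estimate interaction effects that sequential intuition-guided experiments in a wicked environment systematically miss.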
NASA Technical Reports Server (NTRS)
Mcmaster, Leonard R.; Chu, William P.; Rowland, Michael W.
1992-01-01
A guide for using the data products from the Stratospheric Aerosol and Gas Experiment 1 (SAGE 1) for scientific investigations of stratospheric chemistry related to aerosol, ozone, nitrogen dioxide, dynamics, and climate change is presented. A detailed description of the aerosol profile tape, the ozone profile tape, and the nitrogen dioxide profile tape is included. These tapes are the SAGE 1 data products containing aerosol extinction data and ozone and nitrogen dioxide concentration data for use in the different scientific investigations. Brief descriptions of the instrument operation, data collection, processing, and validation, and some of the scientific analyses that were conducted are also included.
NASA Astrophysics Data System (ADS)
Weible, Jennifer L.; Toomey Zimmerman, Heather
2016-05-01
Although curiosity is considered an integral aspect of science learning, researchers have debated how to define, measure, and support its development in individuals. Prior measures of curiosity include questionnaire type scales (primarily for adults) and behavioral measures. To address the need to measure scientific curiosity, the Science Curiosity in Learning Environments (SCILE) scale was created and validated as a 12-item scale to measure scientific curiosity in youth. The scale was developed through (a) adapting the language of the Curiosity and Exploration Inventory-II [Kashdan, T. B., Gallagher, M. W., Silvia, P. J., Winterstein, B. P., Breen, W. E., Terhar, D., & Steger, M. F. (2009). The curiosity and exploration inventory-II: Development, factor structure, and psychometrics. Journal of Research in Personality, 43(6), 987-998] for youth and (b) crafting new items based on scientific practices drawn from U.S. science standards documents. We administered a preliminary set of 30 items to 663 youth ages 8-18 in the U.S.A. Exploratory and confirmatory factor analysis resulted in a three-factor model: stretching, embracing, and science practices. The findings indicate that the SCILE scale is a valid measure of youth's scientific curiosity for boys and girls as well as elementary, middle school, and high school learners.
NASA Astrophysics Data System (ADS)
Masson, V.; Le Moigne, P.; Martin, E.; Faroux, S.; Alias, A.; Alkama, R.; Belamari, S.; Barbu, A.; Boone, A.; Bouyssel, F.; Brousseau, P.; Brun, E.; Calvet, J.-C.; Carrer, D.; Decharme, B.; Delire, C.; Donier, S.; Essaouini, K.; Gibelin, A.-L.; Giordani, H.; Habets, F.; Jidane, M.; Kerdraon, G.; Kourzeneva, E.; Lafaysse, M.; Lafont, S.; Lebeaupin Brossier, C.; Lemonsu, A.; Mahfouf, J.-F.; Marguinaud, P.; Mokhtari, M.; Morin, S.; Pigeon, G.; Salgado, R.; Seity, Y.; Taillefer, F.; Tanguy, G.; Tulet, P.; Vincendon, B.; Vionnet, V.; Voldoire, A.
2013-07-01
SURFEX is a new externalized land and ocean surface platform that describes the surface fluxes and the evolution of four types of surfaces: nature, town, inland water and ocean. It is mostly based on pre-existing, well-validated scientific models that are continuously improved. The motivation for building SURFEX is to use strictly identical scientific models in a wide range of applications in order to pool the research and development efforts. SURFEX can be run in offline mode (0-D or 2-D runs) or in coupled mode (from mesoscale models to numerical weather prediction and climate models). An assimilation mode is included for numerical weather prediction and monitoring. In addition to momentum, heat and water fluxes, SURFEX is able to simulate fluxes of carbon dioxide, chemical species, continental aerosols, sea salt and snow particles. The main principles of the organisation of the surface are described first. Then, a survey is made of the scientific modules (including the coupling strategy). Finally, the main applications of the code are summarised. The validation work undertaken shows that replacing the pre-existing surface models with SURFEX in these applications is usually associated with improved skill, as the numerous scientific developments contained in this community code are used to good advantage.
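The four surface types are handled with a tiling approach, in which each grid cell's flux is the area-weighted sum of the fluxes computed for its tiles. A minimal sketch of that aggregation step (the fractions and flux values are invented, and this simplifies the actual coupling considerably):

```python
# Aggregate a grid-cell surface flux from per-tile fluxes, weighted by the
# fraction of the cell covered by each surface type.
def cell_flux(fractions: dict[str, float], tile_fluxes: dict[str, float]) -> float:
    assert abs(sum(fractions.values()) - 1.0) < 1e-9  # fractions must cover the cell
    return sum(fractions[t] * tile_fluxes[t] for t in fractions)

fractions = {"nature": 0.55, "town": 0.25, "inland_water": 0.05, "ocean": 0.15}
sensible_heat = {"nature": 80.0, "town": 150.0,
                 "inland_water": 20.0, "ocean": 10.0}   # W m-2, hypothetical
print(cell_flux(fractions, sensible_heat))              # 84.0 W m-2
```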
Scientific writing of novice researchers: what difficulties and encouragements do they encounter?
Shah, Jatin; Shah, Anand; Pietrobon, Ricardo
2009-04-01
Writing scientific articles is a daunting task for novice researchers. In this qualitative study carried out in 2007, the authors evaluated the experiences of a group of novice researchers engaged in the writing process, to elucidate the main difficulties and sources of encouragement they encountered. Sixteen novice researchers were interviewed. Most were women (10), and most were enrolled in programs of medicine (9), followed by nursing (4) and physical therapy (3). These were drawn via convenience sampling from a randomized controlled trial in which 48 participants were equally assigned to either an online or a face-to-face course of instruction. On completion, interviews were conducted in focus groups of four students each. The interviews were transcribed and read independently by two of the authors, who then encoded the material based on the principles of grounded theory. Initial categories were converted to major emerging themes, which were validated when participants were asked to review the findings. Triangulation of results was carried out by discussing the emerging themes in an online forum with five specialists in college writing education. Classifying the diverse responses of participants led to the emergence of four major themes: cognitive burden, group support and mentoring, difficulty in distinguishing between content and structure, and backward design of manuscripts. The themes produced by this study provide some insight into the challenges faced by novice researchers in their early attempts at scientific writing. Remedies that address these challenges are needed to substantially improve scientific writing instruction.
Animal behavior and well-being symposium: Farm animal welfare assurance: science and application.
Rushen, J; Butterworth, A; Swanson, J C
2011-04-01
Public and consumer pressure for assurances that farm animals are raised humanely has led to a range of private and public animal welfare standards, and for methods to assess compliance with these standards. The standards usually claim to be science based, but even though researchers have developed measures of animal welfare and have tested the effects of housing and management variables on welfare within controlled laboratory settings, there are challenges in extending this research to develop on-site animal welfare standards. The standards need to be validated against a definition of welfare that has broad support and which is amenable to scientific investigation. Ensuring that such standards acknowledge scientific uncertainty is also challenging, and balanced input from all scientific disciplines dealing with animal welfare is needed. Agencies providing animal welfare audit services need to integrate these scientific standards and legal requirements into successful programs that effectively measure and objectively report compliance. On-farm assessment of animal welfare requires a combination of animal-based measures to assess the actual state of welfare and resource-based measures to identify risk factors. We illustrate this by referring to a method of assessing welfare in broiler flocks. Compliance with animal welfare standards requires buy-in from all stakeholders, and this will be best achieved by a process of inclusion in the development of pragmatic assessment methods and the development of audit programs verifying the conditions and continuous improvement of farm animal welfare.
Examining the Predictive Validity of NIH Peer Review Scores
Lindner, Mark D.; Nakamura, Richard K.
2015-01-01
The predictive validity of peer review at the National Institutes of Health (NIH) has not yet been demonstrated empirically. It might be assumed that the most efficient and expedient test of the predictive validity of NIH peer review would be an examination of the correlation between percentile scores from peer review and bibliometric indices of the publications produced from funded projects. The present study used a large dataset to examine the rationale for such a study, to determine if it would satisfy the requirements for a test of predictive validity. The results show significant restriction of range in the applications selected for funding. Furthermore, those few applications that are funded with slightly worse peer review scores are not selected at random, nor are they representative of other applications in the same range. The funding institutes also negotiate with applicants to address issues identified during peer review. Therefore, the peer review scores assigned to the submitted applications, especially for those few funded applications with slightly worse peer review scores, do not reflect the changed and improved projects that are eventually funded. In addition, citation metrics by themselves are not valid or appropriate measures of scientific impact. The use of bibliometric indices on their own to measure scientific impact would likely increase the inefficiencies and problems with replicability already largely attributed to the current over-emphasis on bibliometric indices. Therefore, retrospective analyses of the correlation between percentile scores from peer review and bibliometric indices of the publications resulting from funded grant applications are not valid tests of the predictive validity of peer review at the NIH. PMID:26039440
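The restriction-of-range problem described here can be demonstrated with a small simulation: even when review scores genuinely predict outcomes across the whole applicant pool, the correlation computed only within the narrow funded band is far weaker (all numbers below are invented for illustration):

```python
# Restriction of range: a correlation present in the full applicant pool
# nearly vanishes when computed only among the top-scored (funded) subset.
import random
random.seed(42)

n = 10_000
score = [random.gauss(0, 1) for _ in range(n)]        # peer review merit score
outcome = [s + random.gauss(0, 1) for s in score]     # noisy future impact

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

funded = [(s, o) for s, o in zip(score, outcome) if s > 1.5]  # top ~7%
print(corr(score, outcome))               # ~0.71 across all applications
print(corr(*map(list, zip(*funded))))     # much smaller within the funded band
```

The attenuated within-band correlation says little about predictive validity across the full score range, which is the paper's central methodological point.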
There is no ``I'' in referee: Why referees should be anonymous
NASA Astrophysics Data System (ADS)
Ucko, Daniel
2015-03-01
From the early days of modern science, it has been recognized that scientific claims must be verified by someone who is not the maker of those claims and who has no stake in the matter; in other words, claims need to be evaluated objectively, by the community. The way in which this tends to be done is peer review conducted by journals. Peer review as currently practiced touches on themes of trust, where the trust is in institutions and procedures that emerge from expert communities. The practice of peer review is viewed as a citizenly duty of scientists, because all scientists take turns serving as authors, referees, and editors in the peer review process. We lack the resources to have a work evaluated by the entire community, so we substitute a representative. Yet, in most examples of scientific review, the referee or referees are anonymous. The question of anonymity is particularly important when the peer review process is brought to bear on matters beyond scientific validity - more ``subjective'' criteria such as relative importance and broadness of interest, which do not appear to have an objective standard of comparison and validation. I will show that the anonymity of referees, far from endangering this trust, actually strengthens it; that this anonymity is crucial to maintaining any objectivity in scientific peer review; and why authors should not try to unmask the referee. Also at American Physical Society (APS).
Lee, Preston V; Dinu, Valentin
2015-11-04
Our publication of the BitTorious portal [1] demonstrated the ability to create a privatized distributed data warehouse of sufficient magnitude for real-world bioinformatics studies using minimal changes to the standard BitTorrent tracker protocol. In this second phase, we release a new server-side specification to accept anonymous philanthropic storage donations by the general public, wherein a small portion of each user's local disk may be used for archival of scientific data. We have implemented the server-side announcement and control portions of this BitTorrent extension in v3.0.0 of the BitTorious portal, upon which compatible clients may be built. Automated test cases for the BitTorious Volunteer extensions have been added to the portal's v3.0.0 release, supporting validation of the "peer affinity" concept and announcement protocol introduced by this specification. Additionally, a separate reference implementation of affinity calculation has been provided in C++ for informaticians wishing to integrate into libtorrent-based projects. The BitTorrent "affinity" extensions as provided in the BitTorious portal reference implementation allow data publishers to crowdsource the extreme storage prerequisites for research in "big data" fields. With sufficient awareness and adoption of BitTorious Volunteer-based clients by the general public, the BitTorious portal may be able to provide peta-scale storage resources to the scientific community at relatively insignificant financial cost.
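The actual affinity formula and announcement protocol are defined by the portal release and its C++ reference implementation, which are not reproduced here. Purely as a hedged illustration of the general idea, a tracker could score each volunteer by the fraction of a dataset's pieces it already replicates and announce the least-replicating peers first, steering donated disk space toward rare data; every name and the scoring rule below are hypothetical.

```python
# Hypothetical sketch of a piece-replication "affinity" score; the real
# BitTorious Volunteer specification defines its own calculation.
def affinity(pieces_held: set, total_pieces: int) -> float:
    """Fraction of a dataset's pieces this volunteer currently stores."""
    return len(pieces_held) / total_pieces if total_pieces else 0.0

def announce_order(peers: dict, total_pieces: int) -> list:
    """Announce low-affinity volunteers first so spare donated disk space
    is steered toward under-replicated data."""
    return sorted(peers, key=lambda p: affinity(peers[p], total_pieces))

peers = {"vol-a": {0, 1, 2, 3}, "vol-b": {0, 1}}
print(announce_order(peers, total_pieces=8))  # ['vol-b', 'vol-a']
```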
Habitability research priorities for the International Space Station and beyond.
Whitmore, M; Adolf, J A; Woolford, B J
2000-09-01
Advanced technology and the desire to explore space have resulted in increasingly longer manned space missions. Long Duration Space Flights (LDSF) have provided a considerable amount of scientific research on the ability of humans to adapt and function in microgravity environments. In addition, studies conducted in analogous environments, such as winter-over expeditions in Antarctica, have complemented the scientific understanding of human performance in LDSF. These findings indicate long duration missions may take a toll on the individual, both physiologically and psychologically, with potential impacts on performance. Significant factors in any manned LDSF are habitability, workload and performance. They are interrelated and influence one another, and therefore necessitate an integrated research approach. An integral part of this approach will be identifying and developing tools not only for assessment of habitability, workload, and performance, but also for prediction of these factors as well. In addition, these tools will be used to identify and provide countermeasures to minimize decrements and maximize mission success. The purpose of this paper is to identify research goals and methods for the International Space Station (ISS) in order to identify critical factors and level of impact on habitability, workload, and performance, and to develop and validate countermeasures. Overall, this approach will provide the groundwork for creating an optimal environment in which to live and work onboard ISS as well as preparing for longer planetary missions.
Hyder, Adnan A; Allen, Katharine A; Peters, David H; Chandran, Aruna; Bishai, David
2013-01-01
The growing burden of road traffic injuries, which kill over 1.2 million people yearly, falls mostly on low- and middle-income countries (LMICs). Despite this, evidence on the effectiveness of road safety interventions in LMIC settings remains scarce. This paper explores a scientific approach for evaluating road safety programmes in LMICs and introduces one such multi-country initiative, the Road Safety in 10 Countries Project (RS-10). Building on existing evaluation frameworks, we develop a scientific approach for evaluating large-scale road safety programmes in LMIC settings. The approach also draws on '13 lessons' of large-scale programme evaluation: defining the evaluation scope; selecting study sites; maintaining objectivity; developing an impact model; utilising multiple data sources; using multiple analytic techniques; maximising external validity; ensuring an appropriate time frame; the importance of flexibility and a stepwise approach; continuous monitoring; providing feedback to implementers and policy-makers; promoting the uptake of evaluation results; and understanding evaluation costs. The use of relatively new approaches for the evaluation of real-world programmes allows for the production of relevant knowledge. The RS-10 project affords an important opportunity to scientifically test these approaches in a real-world, large-scale road safety evaluation and to generate new knowledge for the field of road safety.
Promoting the Multidimensional Character of Scientific Reasoning.
Bradshaw, William S; Nelson, Jennifer; Adams, Byron J; Bell, John D
2017-04-01
This study reports part of a long-term program to help students improve scientific reasoning using higher-order cognitive tasks set in the discipline of cell biology. This skill was assessed using problems requiring the construction of valid conclusions drawn from authentic research data. We report here efforts to confirm the hypothesis that data interpretation is a complex, multifaceted exercise. Confirmation was obtained using a statistical treatment showing that various such problems rank students differently; each contains a unique set of cognitive challenges. Additional analyses of performance results have allowed us to demonstrate that individuals differ in their capacity to navigate five independent generic elements that constitute successful data interpretation: biological context, connection to course concepts, experimental protocols, data inference, and integration of isolated experimental observations into a coherent model. We offer these aspects of scientific thinking as a "data analysis skills inventory," along with usable sample problems that illustrate each element. Additionally, we show that this kind of reasoning is rigorous in that it is difficult for most novice students, who are unable to intuitively implement strategies for improving these skills. Instructors armed with knowledge of the specific challenges presented by different types of problems can provide specific, helpful feedback during formative practice. The use of this instructional model is most likely to require changes in traditional classroom instruction.
NASA Astrophysics Data System (ADS)
Nieminen, Pasi; Savinainen, Antti; Viiri, Jouni
2010-07-01
This study investigates students’ ability to interpret multiple representations consistently (i.e., representational consistency) in the context of the force concept. For this purpose we developed the Representational Variant of the Force Concept Inventory (R-FCI), which makes use of nine items from the 1995 version of the Force Concept Inventory (FCI). These original FCI items were redesigned using various representations (such as motion map, vectorial and graphical), yielding 27 multiple-choice items concerning four central concepts underpinning the force concept: Newton’s first, second, and third laws, and gravitation. We provide some evidence for the validity and reliability of the R-FCI; this analysis is limited to the student population of one Finnish high school. The students took the R-FCI at the beginning and at the end of their first high school physics course. We found that students’ (n=168) representational consistency (whether scientifically correct or not) varied considerably depending on the concept. On average, representational consistency and scientifically correct understanding increased during the instruction, although in the post-test only a few students performed consistently both in terms of representations and scientifically correct understanding. We also compared students’ (n=87) results of the R-FCI and the FCI, and found that they correlated quite well.
Planetary cubesats - mission architectures
NASA Astrophysics Data System (ADS)
Bousquet, Pierre W.; Ulamec, Stephan; Jaumann, Ralf; Vane, Gregg; Baker, John; Clark, Pamela; Komarek, Tomas; Lebreton, Jean-Pierre; Yano, Hajime
2016-07-01
Miniaturisation of technologies over the last decade has made cubesats a valid solution for deep space missions. For example, a spectacular set of 13 cubesats will be delivered in 2018 to a high lunar orbit within the frame of SLS' first flight, referred to as Exploration Mission-1 (EM-1). Each of them will autonomously perform valuable scientific or technological investigations. Other situations are encountered, such as the auxiliary landers/rovers and autonomous camera that will be carried in 2018 to asteroid 1999 JU3 by JAXA's Hayabusa 2 probe, and will provide complementary scientific return to their mothership. In this case, cubesats depend on a larger spacecraft for deployment and other resources, such as telecommunication relay or propulsion. For both situations, we describe in this paper how cubesats can be used as remote observatories (such as NEO detection missions) and as technology demonstrators, and how they can perform or contribute to all steps in the deep space exploration sequence: measurements during deep space cruise, body fly-bys, body orbiters, atmospheric probes (Jupiter probes, Venus atmospheric probes, ...), static landers, mobile landers (such as balloons, wheeled rovers, small body rovers, drones, penetrators, floating devices, ...), and sample return. We elaborate on mission architectures for the most promising concepts, where cubesat-sized devices offer an advantage in terms of affordability, feasibility, and increased scientific return.
Associations between attitudes towards scientific misconduct and self-reported behavior.
Holm, Søren; Hofmann, Bjørn
2018-06-25
We investigate the relationship between doctoral students' attitudes towards scientific misconduct and their self-reported behavior. 203 questionnaires were distributed to doctoral candidates at the Faculty of Medicine, University of Oslo, in 2016/2017; the response rate was 74%. The results show a correlation between attitudes towards misconduct and self-reported problematic behaviors among doctoral students in biomedicine. The four most commonly reported misbehaviors are adding author(s) who did not qualify for authorship (17.9%), collecting more data after seeing that the results were almost statistically significant (11.8%), turning a blind eye to colleagues' use of flawed data or questionable interpretation of data (11.2%), and reporting an unexpected finding as having been hypothesized from the start (10.5%). We find correlations between scientific misbehavior and both the location of undergraduate studies and whether the respondents had previously attended science ethics lectures. The study provides evidence for the concurrent validity of the two instruments used to measure attitudes and behavior, i.e. the Kalichman scale and the Research Misbehavior Severity Score (RMSS). Although the direction of causality between attitudes and misbehavior cannot be determined in this study, the correlation between the two indicates that it can be important to engender the right attitudes in early career researchers.
Article-level assessment of influence and translation in biomedical research.
Santangelo, George M
2017-06-01
Given the vast scale of the modern scientific enterprise, it can be difficult for scientists to make judgments about the work of others through careful analysis of the entirety of the relevant literature. This has led to a reliance on metrics that are mathematically flawed and insufficiently diverse to account for the variety of ways in which investigators contribute to scientific progress. An urgent, critical first step in solving this problem is replacing the Journal Impact Factor with an article-level alternative. The Relative Citation Ratio (RCR), a metric that was designed to serve in that capacity, measures the influence of each publication on its respective area of research. RCR can serve as one component of a multifaceted metric that provides an effective data-driven supplement to expert opinion. Developing validated methods that quantify scientific progress can help to optimize the management of research investments and accelerate the acquisition of knowledge that improves human health. © 2017 Santangelo. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
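The published RCR derives each article's expected citation rate from its co-citation network and benchmarks the resulting ratio against NIH-funded papers; the toy function below captures only the core ratio idea, with illustrative numbers, and is not the iCite implementation.

```python
# Simplified sketch of the Relative Citation Ratio idea (not the iCite method).
def relative_citation_ratio(citations: int, years_since_publication: float,
                            field_citation_rate: float) -> float:
    """Citations per year, normalized by the expected per-year citation
    rate of the article's field; RCR ~ 1.0 reads as 'typical for the field'."""
    article_citation_rate = citations / max(years_since_publication, 1e-9)
    return article_citation_rate / field_citation_rate

# A paper cited 45 times over 5 years in a field averaging 6 citations/year:
print(relative_citation_ratio(45, 5.0, 6.0))  # 1.5
```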
Gondola development for CNES stratospheric balloons
NASA Astrophysics Data System (ADS)
Vargas, A.; Audoubert, J.; Cau, M.; Evrard, J.; Verdier, N.
The CNES has been supporting scientific ballooning since its establishment in 1962. The two main parts of the balloon system, or aerostat, are the balloon itself and the flight train, comprising the housekeeping gondola, which controls the balloon flight (localization and operational telemetry & telecommand - TM/TC), and the scientific gondola with its dedicated telecommunication system. For zero-pressure balloons, the development of new TM/TC systems for housekeeping and science data transmission has been ongoing since 1999. The main concepts are: - for balloon housekeeping and low-rate scientific telemetry, the ELITE system, based on a single I2C bus standardizing communication between the different components of the system: trajectography, balloon control, power supply, scientific TM/TC, etc. In this concept, radio-frequency links are developed between the housekeeping gondola and the components of the aerostat (balloon valve, ballast machine, balloon gas temperature measurements, ...). The main objectives are to simplify flight-train preparation in terms of gondola testing before flight, and to reduce the number of long electrical cables integrated in the balloon and the flight train; - for high-rate scientific telemetry, the use of the Internet Protocol (IP) in interface with the radio-frequency link. The main idea is to use off-the-shelf IP hardware (routers, industrial PCs, ...) and IP software (Telnet, FTP, Web-HTTP, ...) to reduce development costs; - for increased safety, the addition to the flight train of a totally independent housekeeping gondola based on the Inmarsat M and Iridium satellite telecommunication systems, which permits real-time communication between the on-board mobile terminal and a ground station reduced to a PC with a modem connected to the phone network. These GEO and LEO telecommunication systems also give the capability to operate balloon flights over longer distances (beyond the line of sight) than a dedicated RF system, which requires balloon visibility from the ground station. For long-duration flights (3 months) of infrared Montgolfières, a housekeeping gondola has been developed using the Inmarsat C standard to communicate all around the world (up to 80° N or S latitude), with automatic switching between the four geostationary Inmarsat satellites. After validation flights performed from Bauru, Brazil (2000 & 2001) and Kiruna, Sweden (2002), the first operational flights took place from Bauru in February 2003 during the ENVISAT validation campaign. The next flights will be carried out within the framework of the Hibiscus campaign planned for February 2004 in Bauru. The Balloon Division was involved in the Franco-Japanese HSFD II project, which consisted of dropping a mock-up of the Japanese HOPE-X space shuttle from a stratospheric balloon to validate its flight from an altitude of 30 km. We developed a specific gondola as a service module for the HOPE-X shuttle, providing power and a GPS radio-frequency signal during the balloon flight phase, telemetry and remote-control radio-frequency links, and a separation system with pyrotechnic cutters for the drop of the shuttle. A successful flight was performed at Kiruna in July 2003. Concerning gondolas with pointing systems, the study of a large gamma-ray telescope (8 m focal length) started at the end of 2002. For this 1-ton gondola, the telescope stabilization system will be based on a control moment gyro (CMG). The CMG system has been designed and will be manufactured and validated during 2004. The first flight of this gamma-ray gondola is planned for 2006. The progress, status, and future plans concerning these gondola developments will be presented.
Schechtman, Leonard M
2002-01-01
Toxicological testing in the current regulatory environment is steeped in a history of using animals to answer questions about the safety of products to which humans are exposed. That history forms the basis for the testing strategies that have evolved to satisfy the needs of the regulatory bodies that render decisions that affect, for the most part, virtually all phases of premarket product development and evaluation and, to a lesser extent, postmarketing surveillance. Only relatively recently have the levels of awareness of, and responsiveness to, animal welfare issues reached current proportions. That paradigm shift, although sluggish, has nevertheless been progressive. New and alternative toxicological methods for hazard evaluation and risk assessment have now been adopted and are being viewed as a means to address those issues in a manner that considers humane treatment of animals yet maintains scientific credibility and preserves the goal of ensuring human safety. To facilitate this transition, regulatory agencies and regulated industry must work together toward improved approaches. They will need assurance that the methods will be reliable and the results comparable with, or better than, those derived from the current classical methods. That confidence will be a function of the scientific validation and resultant acceptance of any given method. In the United States, to fulfill this need, the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) and its operational center, the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM), have been constituted as prescribed in federal law. Under this mandate, ICCVAM has developed a process and established criteria for the scientific validation and regulatory acceptance of new and alternative methods. The role of ICCVAM in the validation and acceptance process and the criteria instituted toward that end are described. Also discussed are the participation of the US Food and Drug Administration (FDA) in the ICCVAM process and that agency's approach to the application and implementation of ICCVAM-recommended methods.
Development and validation of a notational system to study the offensive process in football.
Sarmento, Hugo; Anguera, Teresa; Campaniço, Jorge; Leitão, José
2010-01-01
The most striking change in the development of football is the application of science to its problems, in particular the use of increasingly sophisticated technology that, supported by scientific data, allows us to establish a "code of reading" the reality of the game. This study therefore describes the development and validation of an ad hoc system of categorization that allows the different methods of offensive play in football, and their interactions, to be analyzed. In an exploratory phase of the study, we identified 10 core ("backbone") criteria and the respective observable behaviors for each criterion. A panel of five experts was consulted for content validation. The resulting instrument is characterized by a combination of field formats and systems of categories. The reliability of the instrument was calculated from intraobserver agreement, and values above 0.95 were achieved for all criteria. Two FC Barcelona games were coded and analyzed, which allowed the detection of various T-patterns. The results show that the instrument serves the purpose for which it was developed and can provide important information for the understanding of game interaction in football.
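The abstract does not name the agreement statistic; as a hedged illustration, Cohen's kappa is one standard way to express intraobserver agreement for a categorical coding scheme of this kind, with values above 0.95 indicating near-perfect re-coding agreement. The category labels below are invented.

```python
# Minimal sketch: Cohen's kappa between two codings of the same match
# by the same observer (intraobserver agreement).
from collections import Counter

def cohens_kappa(coding1, coding2):
    assert len(coding1) == len(coding2) and coding1
    n = len(coding1)
    observed = sum(a == b for a, b in zip(coding1, coding2)) / n
    c1, c2 = Counter(coding1), Counter(coding2)
    expected = sum(c1[k] * c2[k] for k in c1) / n ** 2  # chance agreement
    return (observed - expected) / (1 - expected)

first_pass  = ["counter", "counter", "positional", "fast_break", "positional"]
second_pass = ["counter", "counter", "positional", "fast_break", "fast_break"]
print(round(cohens_kappa(first_pass, second_pass), 2))  # 0.71 for this toy data
```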
Twilight reloaded: the peptide experience
Weichenberger, Christian X.; Pozharski, Edwin; Rupp, Bernhard
2017-01-01
The de facto commoditization of biomolecular crystallography as a result of almost disruptive instrumentation automation and continuing improvement of software allows any sensibly trained structural biologist to conduct crystallographic studies of biomolecules with reasonably valid outcomes: that is, models based on properly interpreted electron density. Robust validation has led to major mistakes in the protein part of structure models becoming rare, but some depositions of protein–peptide complex structure models, which generally carry significant interest to the scientific community, still contain erroneous models of the bound peptide ligand. Here, the protein small-molecule ligand validation tool Twilight is updated to include peptide ligands. (i) The primary technical reasons and potential human factors leading to problems in ligand structure models are presented; (ii) a new method used to score peptide-ligand models is presented; (iii) a few instructive and specific examples, including an electron-density-based analysis of peptide-ligand structures that do not contain any ligands, are discussed in detail; (iv) means to avoid such mistakes and the implications for database integrity are discussed and (v) some suggestions as to how journal editors could help to expunge errors from the Protein Data Bank are provided. PMID:28291756
Kampa, Nele; Köller, Olaf
2016-09-01
National and international large-scale assessments (LSA) have a major impact on educational systems, which raises fundamental questions about the validity of the measures regarding their internal structure and their relations to relevant covariates. Given its importance, research on the validity of instruments specifically developed for LSA is still sparse, especially in science and its subdomains biology, chemistry, and physics. However, policy decisions for the improvement of educational quality based on LSA can only be helpful if valid information on students' achievement levels is provided. In the present study, the nature of the measurement instruments based on the German Educational Standards in Biology is examined. On the basis of data from 3,165 students in Grade 10, we present dimensional analyses and report the relationship between different subdimensions of biology literacy and cognitive covariates such as general cognitive abilities and verbal skills. A theory-driven two-dimensional model fitted the data best. Content knowledge and scientific inquiry, two subdimensions of biology literacy, are highly correlated and show differential correlational patterns to the covariates. We argue that the underlying structure of biology should be incorporated into curricula, teacher training and future assessments.
Solecki, Roland; Kortenkamp, Andreas; Bergman, Åke; Chahoud, Ibrahim; Degen, Gisela H; Dietrich, Daniel; Greim, Helmut; Håkansson, Helen; Hass, Ulla; Husoy, Trine; Jacobs, Miriam; Jobling, Susan; Mantovani, Alberto; Marx-Stoelting, Philip; Piersma, Aldert; Ritz, Vera; Slama, Remy; Stahlmann, Ralf; van den Berg, Martin; Zoeller, R Thomas; Boobis, Alan R
2017-02-01
Endocrine disruption is a specific form of toxicity, where natural and/or anthropogenic chemicals, known as "endocrine disruptors" (EDs), trigger adverse health effects by disrupting the endogenous hormone system. There is a need to harmonize guidance on the regulation of EDs, but this has been hampered by what appeared to be a lack of consensus among scientists. This publication provides summary information about a consensus reached by a group of world-leading scientists that can serve as the basis for the development of ED criteria in relevant EU legislation. Twenty-three international scientists from different disciplines discussed principles and open questions on ED identification, as outlined in a draft consensus paper, at an expert meeting hosted by the German Federal Institute for Risk Assessment (BfR) in Berlin, Germany on 11-12 April 2016. Participants reached a consensus regarding scientific principles for the identification of EDs. The paper discusses the consensus reached on background, the definition of an ED and related concepts, sources of uncertainty, scientific principles important for ED identification, and research needs. It highlights the difficulty of retrospectively reconstructing ED exposure, the insufficient range of validated test systems for EDs, and several issues affecting the evaluation of risk from EDs, such as non-monotonic dose-response and thresholds, modes of action, and exposure assessment. This report provides the consensus statement on EDs agreed among all participating scientists. The meeting facilitated a productive debate and reduced a number of differences in views. It is expected that the consensus reached will serve as an important basis for the development of regulatory ED criteria.
Management Effectiveness of the World's Marine Fisheries
Mora, Camilo; Coll, Marta; Libralato, Simone; Pitcher, Tony J.; Sumaila, Rashid U.; Zeller, Dirk; Watson, Reg; Gaston, Kevin J.; Worm, Boris
2009-01-01
Ongoing declines in production of the world's fisheries may have serious ecological and socioeconomic consequences. As a result, a number of international efforts have sought to improve management and prevent overexploitation, while helping to maintain biodiversity and a sustainable food supply. Although these initiatives have received broad acceptance, the extent to which corrective measures have been implemented and are effective remains largely unknown. We used a survey approach, validated with empirical data, and enquiries to over 13,000 fisheries experts (of which 1,188 responded) to assess the current effectiveness of fisheries management regimes worldwide; for each of those regimes, we also calculated the probable sustainability of reported catches to determine how management affects fisheries sustainability. Our survey shows that 7% of all coastal states undergo rigorous scientific assessment for the generation of management policies, 1.4% also have a participatory and transparent process to convert scientific recommendations into policy, and 0.95% also provide robust mechanisms to ensure compliance with regulations; none, in addition, is free of the effects of excess fishing capacity, subsidies, or access to foreign fishing. A comparison of fisheries management attributes with the sustainability of reported fisheries catches indicated that the conversion of scientific advice into policy, through a participatory and transparent process, is at the core of achieving fisheries sustainability, regardless of other attributes of the fisheries. Our results illustrate the great vulnerability of the world's fisheries and the urgent need to meet well-identified guidelines for sustainable management; they also provide a baseline against which future changes can be quantified. PMID:19547743
The Community Earth System Model-Polar Climate Working Group and the status of CESM2.
NASA Astrophysics Data System (ADS)
Bailey, D. A.; Holland, M. M.; DuVivier, A. K.
2017-12-01
The Polar Climate Working Group (PCWG) is a consortium of scientists who are interested in modeling and understanding the climate in the Arctic and the Antarctic, and how polar climate processes interact with and influence climate at lower latitudes. Our members come from universities and laboratories, and our interests span all elements of polar climate, from the ocean depths to the top of the atmosphere. In addition to conducting scientific modeling experiments, we are charged with contributing to the development and maintenance of the state-of-the-art sea ice model component (CICE) used in the Community Earth System Model (CESM). A recent priority for the PCWG has been to come up with innovative ways to bring the observational and modeling communities together. This will allow for more robust validation of climate model simulations, the development and implementation of more physically-based model parameterizations, improved data assimilation capabilities, and the better use of models to design and implement field experiments. These have been informed by topical workshops and scientific visitors that we have hosted in these areas. These activities will be discussed and information on how the better integration of observations and models has influenced the new version of the CESM, which is due to be released in late 2017, will be provided. Additionally, we will address how enhanced interactions with the observational community will contribute to model developments and validation moving forward.
Vending machine assessment methodology. A systematic review.
Matthews, Melissa A; Horacek, Tanya M
2015-07-01
The nutritional quality of food and beverage products sold in vending machines has been implicated as a contributing factor to the development of an obesogenic food environment. How comprehensive, reliable, and valid are the current assessment tools for vending machines to support or refute these claims? A systematic review was conducted to summarize, compare, and evaluate the current methodologies and available tools for vending machine assessment. A total of 24 relevant research studies published between 1981 and 2013 met inclusion criteria for this review. The methodological variables reviewed in this study include assessment tool type, study location, machine accessibility, product availability, healthfulness criteria, portion size, price, product promotion, and quality of scientific practice. There were wide variations in the depth of the assessment methodologies and product healthfulness criteria utilized among the reviewed studies. Of the reviewed studies, 39% evaluated machine accessibility, 91% evaluated product availability, 96% established healthfulness criteria, 70% evaluated portion size, 48% evaluated price, 52% evaluated product promotion, and 22% evaluated the quality of scientific practice. Of all reviewed articles, 87% reached conclusions that provided insight into the healthfulness of vended products and/or vending environment. Product healthfulness criteria and complexity for snack and beverage products was also found to be variable between the reviewed studies. These findings make it difficult to compare results between studies. A universal, valid, and reliable vending machine assessment tool that is comprehensive yet user-friendly is recommended. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cloke, Jonathan; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron
2014-01-01
The Thermo Scientific SureTect Listeria species Assay is a new real-time PCR assay for the detection of all species of Listeria in food and environmental samples. This validation study was conducted under the AOAC Research Institute (RI) Performance Tested Methods program to validate the SureTect Listeria species Assay against the reference method detailed in International Organization for Standardization 11290-1:1996, including amendment 1:2004, in a variety of foods plus plastic and stainless steel surfaces. The food matrixes validated were smoked salmon, processed cheese, fresh bagged spinach, cantaloupe, cooked prawns, cooked sliced turkey meat, cooked sliced ham, salami, pork frankfurters, and raw ground beef. All matrixes were tested by Thermo Fisher Scientific, Microbiology Division, Basingstoke, UK. In addition, three matrixes (pork frankfurters, fresh bagged spinach, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled independent laboratory study by the University of Guelph, Canada. Using probability of detection statistical analysis, a significant difference in favour of the SureTect assay was demonstrated between the SureTect and reference methods for high-level spiked samples of pork frankfurters, smoked salmon, cooked prawns, and stainless steel, and for low-level spiked samples of salami. For all other matrixes, no significant difference was seen between the two methods during the study. Inclusivity testing was conducted with 68 different isolates of Listeria species, all of which were detected by the SureTect Listeria species Assay. None of the 33 exclusivity isolates were detected by the SureTect Listeria species Assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation, and demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay's shelf life.
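A hedged sketch of the probability-of-detection (POD) arithmetic behind such comparisons: each method's POD at a spike level is the fraction of replicates it detects, and the two methods differ significantly when the confidence interval on the difference (dPOD) excludes zero. The replicate counts below are invented, and the AOAC procedure prescribes specific interval rules that this sketch only approximates with a Wilson interval.

```python
# Illustrative POD / dPOD calculation for a candidate vs. reference method.
import math

def wilson_ci(x: int, n: int, z: float = 1.96):
    """Approximate 95% Wilson score interval for a detection proportion x/n."""
    p = x / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

def dpod(x_candidate: int, x_reference: int, n: int) -> float:
    """Difference in probability of detection between the two methods."""
    return x_candidate / n - x_reference / n

# 20 spiked replicates per method at one contamination level (invented):
print(dpod(18, 13, 20))   # 0.25
print(wilson_ci(18, 20))  # CI on the candidate method's POD
```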
Update on simulation-based surgical training and assessment in ophthalmology: a systematic review.
Thomsen, Ann Sofia S; Subhi, Yousif; Kiilgaard, Jens Folke; la Cour, Morten; Konge, Lars
2015-06-01
This study reviews the evidence behind simulation-based surgical training of ophthalmologists to determine (1) the validity of the reported models and (2) the ability to transfer skills to the operating room. Simulation-based training is established widely within ophthalmology, although it often lacks a scientific basis for implementation. We conducted a systematic review of trials involving simulation-based training or assessment of ophthalmic surgical skills among health professionals. The search included 5 databases (PubMed, EMBASE, PsycINFO, Cochrane Library, and Web of Science) and was completed on March 1, 2014. Overall, the included trials were divided into animal, cadaver, inanimate, and virtual-reality models. Risk of bias was assessed using the Cochrane Collaboration's tool. Validity evidence was evaluated using a modern validity framework (Messick's). We screened 1368 reports for eligibility and included 118 trials. The most common surgery simulated was cataract surgery. Most validity trials investigated only 1 or 2 of 5 sources of validity (87%). Only 2 trials (48 participants) investigated transfer of skills to the operating room; 4 trials (65 participants) evaluated the effect of simulation-based training on patient-related outcomes. Because of heterogeneity of the studies, it was not possible to conduct a quantitative analysis. The methodologic rigor of trials investigating simulation-based surgical training in ophthalmology is inadequate. To ensure effective implementation of training models, evidence-based knowledge of validity and efficacy is needed. We provide a useful tool for implementation and evaluation of research in simulation-based training. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
CYGNSS Surface Wind Validation and Characteristics in the Maritime Continent
NASA Astrophysics Data System (ADS)
Asharaf, S.; Waliser, D. E.; Zhang, C.; Wandala, A.
2017-12-01
Surface wind over tropical oceans plays a crucial role in many local and regional weather and climate processes and helps to shape the global climate system. However, there is a lack of consistent, high-quality observations of surface winds. The newly launched NASA Cyclone Global Navigation Satellite System (CYGNSS) mission provides near-surface wind speed over the tropical ocean with sampling that accounts for the diurnal cycle. In the early phase of the mission, validation is a critical task, and over-ocean validation is typically challenging due to a lack of robust validation resources that cover a variety of environmental conditions. In addition, it can be challenging to obtain in-situ observation resources and to extract co-located CYGNSS records for some of the more scientifically interesting regions, such as the Maritime Continent (MC). The MC is regarded as a key tropical driver of the mean global circulation as well as of important large-scale circulation variability such as the Madden-Julian Oscillation (MJO). The focus of this project and analysis is to take advantage of local in-situ resources from the MC region (e.g., volunteer shipping, marine buoys, and the Year of the Maritime Continent (YMC) campaign) to quantitatively characterize and validate the CYGNSS-derived winds in the MC region, and in turn to work toward unraveling the complex multi-scale interactions between the MJO and the MC. This presentation will show preliminary results of a comparison between the CYGNSS and in-situ surface wind measurements focusing on the MC region. Details about the validation methods, uncertainties, and planned work will be discussed.
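A typical over-ocean match-up validation of this kind collocates each satellite retrieval with in-situ reports inside chosen time and distance windows, then summarizes the paired differences as bias and RMSE. The sketch below is illustrative only; the window sizes, field names, and data layout are all assumptions, not the project's actual procedure.

```python
# Illustrative match-up validation: collocate satellite winds with in-situ
# winds, then compute bias and RMSE of the paired differences.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def collocate(sat_obs, insitu_obs, max_km=25.0, max_minutes=30.0):
    """Pair observations falling within the distance/time windows (assumed)."""
    return [(s["wind"], b["wind"])
            for s in sat_obs for b in insitu_obs
            if haversine_km(s["lat"], s["lon"], b["lat"], b["lon"]) <= max_km
            and abs(s["minutes"] - b["minutes"]) <= max_minutes]

def bias_and_rmse(pairs):
    diffs = [s - b for s, b in pairs]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rmse

sat = [{"lat": -2.1, "lon": 105.3, "minutes": 12.0, "wind": 6.8}]
buoy = [{"lat": -2.0, "lon": 105.2, "minutes": 20.0, "wind": 6.2}]
print(bias_and_rmse(collocate(sat, buoy)))  # bias ~0.6, rmse ~0.6 m/s here
```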
ERIC Educational Resources Information Center
Struyf, Elke; Adriaensens, Stefanie; Meynen, Karen
2011-01-01
Society has become more complex in recent decades, and this has increased the demands placed on the educational system and the teaching profession. This study focuses on the development and validation of an instrument that measures the basic skills of beginning teachers. The instrument was developed according to scientific knowledge on teacher…
Code of Federal Regulations, 2013 CFR
2013-04-01
... International Environmental and Scientific Affairs, will pay the claimant the amount calculated under § 33.9... shall thereafter constitute a valid, but non-interest bearing obligation of the Government. Delays in... issue. ...
Code of Federal Regulations, 2012 CFR
2012-04-01
... International Environmental and Scientific Affairs, will pay the claimant the amount calculated under § 33.9... shall thereafter constitute a valid, but non-interest bearing obligation of the Government. Delays in... issue. ...
Code of Federal Regulations, 2014 CFR
2014-04-01
... International Environmental and Scientific Affairs, will pay the claimant the amount calculated under § 33.9... shall thereafter constitute a valid, but non-interest bearing obligation of the Government. Delays in... issue. ...