Sample records for validation methodologies volume

  1. Airport Landside - Volume III : ALSIM Calibration and Validation.

    DOT National Transportation Integrated Search

    1982-06-01

    This volume discusses calibration and validation procedures applied to the Airport Landside Simulation Model (ALSIM), using data obtained at Miami, Denver and LaGuardia Airports. Criteria for the selection of a validation methodology are described. T...

  2. LOX/hydrocarbon rocket engine analytical design methodology development and validation. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Niiya, Karen E.; Walker, Richard E.; Pieper, Jerry L.; Nguyen, Thong V.

    1993-01-01

    This final report includes a discussion of the work accomplished during the period from Dec. 1988 through Nov. 1991. The objective of the program was to assemble existing performance and combustion stability models into a usable design methodology capable of designing and analyzing high-performance and stable LOX/hydrocarbon booster engines. The methodology was then used to design a validation engine. The capabilities and validity of the methodology were demonstrated using this engine in an extensive hot fire test program. The engine used LOX/RP-1 propellants and was tested over a range of mixture ratios, chamber pressures, and acoustic damping device configurations. This volume contains time domain and frequency domain stability plots which indicate the pressure perturbation amplitudes and frequencies from approximately 30 tests of a 50K thrust rocket engine using LOX/RP-1 propellants over a range of chamber pressures from 240 to 1750 psia and mixture ratios from 1.2 to 7.5. The data is from test configurations which used both bitune and monotune acoustic cavities and from tests with no acoustic cavities. The engine had a length of 14 inches and a contraction ratio of 2.0 using a 7.68 inch diameter injector. The data was taken from both stable and unstable tests. All combustion instabilities were spontaneous in the first tangential mode. Although stability bombs were used and generated overpressures of approximately 20 percent, no tests were driven unstable by the bombs. The stability instrumentation included six high-frequency Kistler transducers in the combustion chamber, a high-frequency Kistler transducer in each propellant manifold, and tri-axial accelerometers. Performance data, both characteristic velocity efficiencies and energy release efficiencies, are presented for those tests of sufficient duration to record steady-state values.
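    The geometric figures quoted above follow from standard definitions. A minimal sketch, assuming the contraction ratio is the usual chamber-to-throat area ratio (the relations are textbook, not the report's own derivation, and the function names are illustrative):

    ```python
    import math

    # Contraction ratio CR = A_chamber / A_throat, so a 7.68 in diameter
    # chamber/injector with CR = 2.0 implies a throat diameter of D / sqrt(CR).

    def throat_diameter(chamber_diameter_in: float, contraction_ratio: float) -> float:
        """Throat diameter implied by a circular chamber and its contraction ratio."""
        return chamber_diameter_in / math.sqrt(contraction_ratio)

    d_t = throat_diameter(7.68, 2.0)
    a_t = math.pi * d_t ** 2 / 4.0
    print(f"implied throat diameter = {d_t:.2f} in, throat area = {a_t:.1f} in^2")

    # The characteristic velocity efficiency reported per test is conventionally
    # eta_cstar = cstar_measured / cstar_theoretical, with cstar = Pc * At * g0 / mdot.
    ```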

  3. Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2

    NASA Technical Reports Server (NTRS)

    Muylaert, Jean (Editor); Kumar, Ajay (Editor); Dujarric, Christian (Editor)

    1998-01-01

    The results of the phase 2 effort conducted under AGARD Working Group 18 on Hypersonic Experimental and Computational Capability, Improvement and Validation are presented in this report. The first volume, published in May 1996, mainly focused on the design methodology, plans and some initial results of experiments that had been conducted to serve as validation benchmarks. The current volume presents the detailed experimental and computational data base developed during this effort.

  4. Validation of On-Orbit Methodology for the Assessment of Cardiac Function and Changes in the Circulating Volume Using Ultrasound and "Braslet-M" Occlusion Cuffs

    NASA Technical Reports Server (NTRS)

    Bogomolov, V. V.; Duncan, J. M.; Alferova, I. V.; Dulchavsky, S. A.; Ebert, D.; Hamilton, D. R.; Matveev, V. P.; Sargsyan, A. E.

    2008-01-01

    Recent advances in remotely guided imaging techniques on ISS allow the acquisition of high quality ultrasound data using crewmember operators with no medical background and minimal training. However, ongoing efforts are required to develop and validate methodology for complex imaging protocols to ensure their repeatability, efficiency, and suitability for use aboard the ISS. This Station Developmental Test Objective (SDTO) tests a cardiovascular evaluation methodology that takes advantage of the ISS Ultrasound capability, the Braslet-M device, and modified respiratory maneuvers (Valsalva and Mueller) to broaden the spectrum of anatomical and functional information on the human cardiovascular system during long-duration space missions. The proposed methodology optimizes and combines new and previously demonstrated methods, and is expected to benefit medically indicated assessments, operational research protocols, and data collections for science. Braslet-M is a current Russian operational countermeasure that compresses the upper thigh to impede venous return from the lower extremities. The goal of the SDTO is to establish and validate a repeatable ultrasound-based methodology for the assessment of a number of cardiovascular criteria in microgravity. The Braslet-M device is used as a means to acutely alter volume distribution while focused ultrasound measurements are performed. Modified respiratory maneuvers are performed during these volume manipulations to record commensurate changes in anatomical and functional parameters. The overall cardiovascular effects of the Braslet-M device are not completely understood, and although not a primary objective of this SDTO, this effort will provide pilot data regarding the suitability of Braslet-M for its intended purpose, its effects, and the indications for its use.

  5. Methodologies for Verification and Validation of Space Launch System (SLS) Structural Dynamic Models: Appendices

    NASA Technical Reports Server (NTRS)

    Coppolino, Robert N.

    2018-01-01

    Verification and validation (V&V) is a highly challenging undertaking for SLS structural dynamics models due to the magnitude and complexity of SLS assemblies and subassemblies. Responses to challenges associated with V&V of Space Launch System (SLS) structural dynamics models are presented in Volume I of this paper. Four methodologies addressing specific requirements for V&V are discussed: (1) Residual Mode Augmentation (RMA), (2) Modified Guyan Reduction (MGR) and Harmonic Reduction (HR, introduced in 1976), (3) Mode Consolidation (MC), and (4) Experimental Mode Verification (EMV). This document contains the appendices to Volume I.
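    For orientation, classical Guyan (static) reduction condenses a stiffness/mass model onto a retained "master" set of degrees of freedom; the report's Modified Guyan Reduction and Harmonic Reduction build on this idea. The sketch below shows only the textbook form on a toy 3-DOF chain (matrices and DOF choice are illustrative, not SLS data):

    ```python
    import numpy as np

    def guyan_reduce(K, M, master):
        """Classical Guyan (static) reduction of stiffness K and mass M onto the
        master DOF set; remaining (slave) DOFs are condensed statically."""
        n = K.shape[0]
        master = np.asarray(master)
        slave = np.setdiff1d(np.arange(n), master)
        Kss = K[np.ix_(slave, slave)]
        Ksm = K[np.ix_(slave, master)]
        # Static constraint: x_slave = -Kss^{-1} Ksm x_master
        G = -np.linalg.solve(Kss, Ksm)
        T = np.zeros((n, master.size))
        T[master, np.arange(master.size)] = 1.0
        T[slave, :] = G
        return T.T @ K @ T, T.T @ M @ T

    # Toy 3-DOF spring-mass chain, keeping DOFs 0 and 2 as masters.
    K = np.array([[ 2., -1.,  0.],
                  [-1.,  2., -1.],
                  [ 0., -1.,  1.]])
    M = np.eye(3)
    Kr, Mr = guyan_reduce(K, M, master=[0, 2])
    print(np.round(Kr, 3), np.round(Mr, 3), sep="\n")
    ```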

  6. Validation of On-Orbit Methodology for the Assessment of Cardiac Function and Changes in the Circulating Volume Using Ultrasound and Braslet-M Occlusion Cuffs

    NASA Technical Reports Server (NTRS)

    Hamilton, Douglas; Sargsyan, Ashot E.; Ebert, Douglas; Duncan, Michael; Bogomolov, Valery V.; Alferova, Irina V.; Matveev, Vladimir P.; Dulchavsky, Scott A.

    2010-01-01

    The objective of this joint U.S. - Russian project was the development and validation of an in-flight methodology to assess a number of cardiac and vascular parameters associated with circulating volume and its manipulation in long-duration space flight. Responses to modified Valsalva and Mueller maneuvers were measured by cardiac and vascular ultrasound (US) before, during, and after temporary volume reduction by means of Braslet-M thigh occlusion cuffs (Russia). Materials and Methods: The study protocol was conducted in 14 sessions on 9 ISS crewmembers, with an average exposure to microgravity of 122 days. Baseline cardiovascular measurements were taken by echocardiography in multiple modes (including tissue Doppler of both ventricles) and femoral and jugular vein imaging on the International Space Station (ISS). The Braslet devices were then applied and measurements were repeated after >10 minutes. The cuffs were then released and the hemodynamic recovery process was monitored. Modified Valsalva and Mueller maneuvers were used throughout the protocol. All US data were acquired by the HDI-5000 ultrasound system aboard the ISS (ATL/Philips, USA) during remotely guided sessions. The study protocol, including the use of Braslet-M for this purpose, was approved by the ISS Human Research Multilateral Review Board (HRMRB). Results: The effects of fluid sequestration on a number of echocardiographic and vascular parameters were readily detectable by in-flight US, as were responses to respiratory maneuvers. The overall volume status assessment methodology appears to be valid and practical, with a decrease in left heart lateral E (tissue Doppler) as one of the most reliable measures. Increase in the femoral vein cross-sectional areas was consistently observed with Braslet application. Other significant differences and trends within the extensive cardiovascular data were also observed. (Decreased - RV and LV preload indices, Cardiac Output, LV E all maneuvers, LV Stroke

  7. Validation of biomarkers to predict response to immunotherapy in cancer: Volume II - clinical validation and regulatory considerations.

    PubMed

    Dobbin, Kevin K; Cesano, Alessandra; Alvarez, John; Hawtin, Rachael; Janetzki, Sylvia; Kirsch, Ilan; Masucci, Giuseppe V; Robbins, Paul B; Selvan, Senthamil R; Streicher, Howard Z; Zhang, Jenny; Butterfield, Lisa H; Thurin, Magdalena

    2016-01-01

    There is growing recognition that immunotherapy is likely to significantly improve health outcomes for cancer patients in the coming years. Currently, while a subset of patients experience substantial clinical benefit in response to different immunotherapeutic approaches, the majority of patients do not but are still exposed to the significant drug toxicities. Therefore, a growing need for the development and clinical use of predictive biomarkers exists in the field of cancer immunotherapy. Predictive cancer biomarkers can be used to identify the patients who are or who are not likely to derive benefit from specific therapeutic approaches. In order to be applicable in a clinical setting, predictive biomarkers must be carefully shepherded through a step-wise, highly regulated developmental process. Volume I of this two-volume document focused on the pre-analytical and analytical phases of the biomarker development process, by providing background, examples and "good practice" recommendations. In the current Volume II, the focus is on the clinical validation, validation of clinical utility and regulatory considerations for biomarker development. Together, this two volume series is meant to provide guidance on the entire biomarker development process, with a particular focus on the unique aspects of developing immune-based biomarkers. Specifically, knowledge about the challenges to clinical validation of predictive biomarkers, which has been gained from numerous successes and failures in other contexts, will be reviewed together with statistical methodological issues related to bias and overfitting. The different trial designs used for the clinical validation of biomarkers will also be discussed, as the selection of clinical metrics and endpoints becomes critical to establish the clinical utility of the biomarker during the clinical validation phase of the biomarker development. Finally, the regulatory aspects of submission of biomarker assays to the U.S. Food and

  8. A methodology for collecting valid software engineering data

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Weiss, David M.

    1983-01-01

    An effective data collection method for evaluating software development methodologies and for studying the software development process is described. The method uses goal-directed data collection to evaluate methodologies with respect to the claims made for them. Such claims are used as a basis for defining the goals of the data collection, establishing a list of questions of interest to be answered by data analysis, defining a set of data categorization schemes, and designing a data collection form. The data to be collected are based on the changes made to the software during development, and are obtained when the changes are made. To insure accuracy of the data, validation is performed concurrently with software development and data collection. Validation is based on interviews with those people supplying the data. Results from using the methodology show that data validation is a necessary part of change data collection. Without it, as much as 50% of the data may be erroneous. Feasibility of the data collection methodology was demonstrated by applying it to five different projects in two different environments. The application showed that the methodology was both feasible and useful.

  9. Application of validity theory and methodology to patient-reported outcome measures (PROMs): building an argument for validity.

    PubMed

    Hawkins, Melanie; Elsworth, Gerald R; Osborne, Richard H

    2018-07-01

    Data from subjective patient-reported outcome measures (PROMs) are now being used in the health sector to make or support decisions about individuals, groups and populations. Contemporary validity theorists define validity not as a statistical property of the test but as the extent to which empirical evidence supports the interpretation of test scores for an intended use. However, validity testing theory and methodology are rarely evident in the PROM validation literature. Application of this theory and methodology would provide structure for comprehensive validation planning to support improved PROM development and sound arguments for the validity of PROM score interpretation and use in each new context. This paper proposes the application of contemporary validity theory and methodology to PROM validity testing. The validity testing principles will be applied to a hypothetical case study with a focus on the interpretation and use of scores from a translated PROM that measures health literacy (the Health Literacy Questionnaire or HLQ). Although robust psychometric properties of a PROM are a pre-condition to its use, a PROM's validity lies in the sound argument that a network of empirical evidence supports the intended interpretation and use of PROM scores for decision making in a particular context. The health sector is yet to apply contemporary theory and methodology to PROM development and validation. The theoretical and methodological processes in this paper are offered as an advancement of the theory and practice of PROM validity testing in the health sector.

  10. Volume and methodological quality of randomized controlled trials in laparoscopic surgery: assessment over a 10-year period.

    PubMed

    Antoniou, Stavros A; Andreou, Alexandros; Antoniou, George A; Koch, Oliver O; Köhler, Gernot; Luketina, Ruzica-R; Bertsias, Antonios; Pointner, Rudolph; Granderath, Frank-Alexander

    2015-11-01

    Measures have been taken to improve methodological quality of randomized controlled trials (RCTs). This review systematically assessed the trends in volume and methodological quality of RCTs on minimally invasive surgery within a 10-year period. RCTs on minimally invasive surgery were searched in the 10 most cited general surgical journals and the 5 most cited journals of laparoscopic interest for the years 2002 and 2012. Bibliometric and methodological quality components were abstracted using the Scottish Intercollegiate Guidelines Network. The pooled number of RCTs from low-contribution regions demonstrated an increasing proportion of the total published RCTs, compensating for a concomitant decrease of the respective contributions from Europe and North America. International collaborations were more frequent in 2012. Acceptable or high quality RCTs accounted for 37.9% and 54.4% of RCTs published in 2002 and 2012, respectively. Components of external validity were poorly reported. Both the volume and the reporting quality of laparoscopic RCTs have increased from 2002 to 2012, but there seems to be ample room for improvement of methodological quality. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Geostatistical approach for assessing soil volumes requiring remediation: validation using lead-polluted soils underlying a former smelting works.

    PubMed

    Demougeot-Renard, Helene; De Fouquet, Chantal

    2004-10-01

    Assessing the volume of soil requiring remediation and the accuracy of this assessment constitutes an essential step in polluted site management. If this remediation volume is not properly assessed, misclassification may lead both to environmental risks (polluted soils may not be remediated) and financial risks (unexpected discovery of polluted soils may generate additional remediation costs). To minimize such risks, this paper proposes a geostatistical methodology based on stochastic simulations that allows the remediation volume and the uncertainty to be assessed using investigation data. The methodology thoroughly reproduces the conditions in which the soils are classified and extracted at the remediation stage. The validity of the approach is tested by applying it to the data collected during the investigation phase of a former lead smelting works and by comparing the results with the volume that has actually been remediated. This real remediated volume was composed of all the remediation units that were classified as polluted after systematic sampling and analysis during the clean-up stage. The volume estimated from the 75 samples collected during site investigation slightly overestimates (5.3% relative error) the remediated volume deduced from 212 remediation units. Furthermore, the real volume falls within the range of uncertainty predicted using the proposed methodology.
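    In this kind of approach, each stochastic (conditional) simulation yields one plausible map of contamination, so the remediation volume becomes a distribution rather than a single number. A minimal sketch of that summarization step, with purely hypothetical realizations, threshold, and unit volume standing in for the paper's lead data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for conditional simulations of lead concentration
    # (mg/kg) on a remediation-unit grid: shape (n_realizations, n_units).
    realizations = rng.lognormal(mean=5.5, sigma=0.8, size=(200, 212))
    unit_volume_m3 = 25.0   # assumed volume of one remediation unit
    threshold = 300.0       # assumed clean-up criterion for lead

    # Per realization, the remediation volume is the number of units whose
    # simulated concentration exceeds the threshold, times the unit volume.
    volumes = (realizations > threshold).sum(axis=1) * unit_volume_m3

    print(f"expected remediation volume = {volumes.mean():.0f} m3")
    print("90% uncertainty interval:", np.percentile(volumes, [5, 95]).round(0))
    ```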

  12. Risk-based Methodology for Validation of Pharmaceutical Batch Processes.

    PubMed

    Wiles, Frederick

    2013-01-01

    In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that are required to demonstrate that a process is operating in a validated state. Instead, the new guidance now emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control. The new process validation guidance published by the U.S. Food and Drug Administration in January of 2011 indicates that the number of process validation batches
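    One simple way to make the "statistical confidence and coverage" requirement concrete is a zero-failure (success-run) sample-size rule combined with a capability index computed from PPQ data. The sketch below illustrates those generic calculations only; it is not the article's worked example, and the assay values and specification limits are invented:

    ```python
    import math
    import numpy as np

    def success_run_n(confidence: float, coverage: float) -> int:
        """Smallest n of conforming units giving `confidence` that at least
        `coverage` of the population conforms (zero-failure success-run rule)."""
        return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

    def ppk(x, lsl, usl):
        """Overall process performance index from a sample and spec limits."""
        mu, s = np.mean(x), np.std(x, ddof=1)
        return min((usl - mu) / (3 * s), (mu - lsl) / (3 * s))

    print(success_run_n(0.95, 0.99))  # 299 samples for 95% confidence / 99% coverage

    rng = np.random.default_rng(1)
    assay = rng.normal(100.0, 1.2, size=60)  # hypothetical assay results, % label claim
    print(round(ppk(assay, lsl=95.0, usl=105.0), 2))
    ```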

  13. Construct Validity: Advances in Theory and Methodology

    PubMed Central

    Strauss, Milton E.; Smith, Gregory T.

    2008-01-01

    Measures of psychological constructs are validated by testing whether they relate to measures of other constructs as specified by theory. Each test of relations between measures reflects on the validity of both the measures and the theory driving the test. Construct validation concerns the simultaneous process of measure and theory validation. In this chapter, we review the recent history of validation efforts in clinical psychological science that has led to this perspective, and we review five recent advances in validation theory and methodology of importance for clinical researchers. These are: the emergence of nonjustificationist philosophy of science; an increasing appreciation for theory and the need for informative tests of construct validity; valid construct representation in experimental psychopathology; the need to avoid representing multidimensional constructs with a single score; and the emergence of effective new statistical tools for the evaluation of convergent and discriminant validity. PMID:19086835

  14. [Definition of low threshold volumes for quality assurance: conceptual and methodological issues involved in the definition and evaluation of thresholds for volume outcome relations in clinical care].

    PubMed

    Wetzel, Hermann

    2006-01-01

    In a large number of mostly retrospective association studies, a statistical relationship between volume and quality of health care has been reported. However, the relevance of these results is frequently limited by methodological shortcomings. In this article, criteria for the evidence and definition of thresholds for volume-outcome relations are proposed, e.g. the specification of relevant outcomes for quality indicators, analysis of volume as a continuous variable with an adequate case-mix and risk adjustment, accounting for cluster effects and considering mathematical models for the derivation of cut-off values. Moreover, volume thresholds are regarded as surrogate parameters for the indirect classification of the quality of care, whose diagnostic validity and effectiveness in improving health care quality need to be evaluated in prospective studies.

  15. Definition and Demonstration of a Methodology for Validating Aircraft Trajectory Predictors

    NASA Technical Reports Server (NTRS)

    Vivona, Robert A.; Paglione, Mike M.; Cate, Karen T.; Enea, Gabriele

    2010-01-01

    This paper presents a new methodology for validating an aircraft trajectory predictor, inspired by the lessons learned from a number of field trials, flight tests and simulation experiments for the development of trajectory-predictor-based automation. The methodology introduces new techniques and a new multi-staged approach to reduce the effort in identifying and resolving validation failures, avoiding the potentially large costs associated with failures during a single-stage, pass/fail approach. As a case study, the validation effort performed by the Federal Aviation Administration for its En Route Automation Modernization (ERAM) system is analyzed to illustrate the real-world applicability of this methodology. During this validation effort, ERAM initially failed to achieve six of its eight requirements associated with trajectory prediction and conflict probe. The ERAM validation issues have since been addressed, but to illustrate how the methodology could have benefited the FAA effort, additional techniques are presented that could have been used to resolve some of these issues. Using data from the ERAM validation effort, it is demonstrated that these new techniques could have identified trajectory prediction error sources that contributed to several of the unmet ERAM requirements.
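    A validation effort of this kind ultimately reduces to comparing predicted and observed aircraft positions at matched times and checking error statistics against requirements. A minimal sketch of one such horizontal-error metric, with synthetic positions and an illustrative accuracy threshold (the actual ERAM requirements are not reproduced here):

    ```python
    import numpy as np

    def horizontal_errors(pred_xy_nm, actual_xy_nm):
        """Horizontal prediction error (nautical miles) at matched times."""
        d = np.asarray(pred_xy_nm) - np.asarray(actual_xy_nm)
        return np.hypot(d[:, 0], d[:, 1])

    # Hypothetical predicted vs. actual positions at a 10-minute look-ahead.
    rng = np.random.default_rng(2)
    actual = rng.uniform(0, 100, size=(500, 2))
    pred = actual + rng.normal(0, 1.5, size=(500, 2))  # ~1.5 NM per-axis error

    err = horizontal_errors(pred, actual)
    requirement_nm = 5.0  # illustrative accuracy requirement
    print(f"95th-percentile error = {np.percentile(err, 95):.2f} NM")
    print(f"fraction within {requirement_nm} NM = {(err <= requirement_nm).mean():.3f}")
    ```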

  16. Improved Conceptual Models Methodology (ICoMM) for Validation of Non-Observable Systems

    DTIC Science & Technology

    2015-12-01

    Dissertation by Sang M. Sok, December 2015; distribution is unlimited. The improved conceptual model methodology (ICoMM) is developed in support of improving the structure of the conceptual model (CoM) for both face and ...

  17. Validation methodology in publications describing epidemiological registration methods of dental caries: a systematic review.

    PubMed

    Sjögren, P; Ordell, S; Halling, A

    2003-12-01

    The aim was to describe and systematically review the methodology and reporting of validation in publications describing epidemiological registration methods for dental caries. BASIC RESEARCH METHODOLOGY: Literature searches were conducted in six scientific databases. All publications fulfilling the predetermined inclusion criteria were assessed for methodology and reporting of validation using a checklist including items described previously as well as new items. The frequency of endorsement of the assessed items was analysed. Moreover, the type and strength of evidence were evaluated. Reporting of predetermined items relating to methodology of validation and the frequency of endorsement of the assessed items were of primary interest. Initially, 588 publications were located, of which 74 eligible publications were obtained; 23 of these fulfilled the inclusion criteria and remained throughout the analyses. A majority of the studies reported the methodology of validation. The reported methodology of validation was generally inadequate, according to the recommendations of evidence-based medicine. The frequencies of reporting the assessed items (frequencies of endorsement) ranged from 4 to 84 per cent. A majority of the publications contributed to a low strength of evidence. There seems to be a need to improve the methodology and the reporting of validation in publications describing professionally registered caries epidemiology. Four of the items assessed in this study are potentially discriminative for quality assessments of reported validation.

  18. C3I system modification and EMC (electromagnetic compatibility) methodology, volume 1

    NASA Astrophysics Data System (ADS)

    Wilson, J. L.; Jolly, M. B.

    1984-01-01

    A methodology (i.e., consistent set of procedures) for assessing the electromagnetic compatibility (EMC) of RF subsystem modifications on C3I aircraft was generated during this study (Volume 1). An IEMCAP (Intrasystem Electromagnetic Compatibility Analysis Program) database for the E-3A (AWACS) C3I aircraft RF subsystem was extracted to support the design of the EMC assessment methodology (Volume 2). Mock modifications were performed on the E-3A database to assess, using a preliminary form of the methodology, the resulting EMC impact. Application of the preliminary assessment methodology to modifications in the E-3A database served to fine tune the form of a final assessment methodology. The resulting final assessment methodology is documented in this report in conjunction with the overall study goals, procedures, and database. It is recommended that a similar EMC assessment methodology be developed for the power subsystem within C3I aircraft. It is further recommended that future EMC assessment methodologies be developed around expert systems (i.e., computer intelligent agents) to control both the excruciating detail and user requirement for transparency.

  19. The validity of ultrasound estimation of muscle volumes.

    PubMed

    Infantolino, Benjamin W; Gales, Daniel J; Winter, Samantha L; Challis, John H

    2007-08-01

    The purpose of this study was to validate ultrasound muscle volume estimation in vivo. To examine validity, vastus lateralis ultrasound images were collected from cadavers before muscle dissection; after dissection, the volumes were determined by hydrostatic weighing. Seven thighs from cadaver specimens were scanned using a 7.5-MHz ultrasound probe (SSD-1000, Aloka, Japan). The perimeter of the vastus lateralis was identified in the ultrasound images and manually digitized. Volumes were then estimated using the Cavalieri principle, by measuring the image areas of sets of parallel two-dimensional slices through the muscles. The muscles were then dissected from the cadavers, and muscle volume was determined via hydrostatic weighing. There was no statistically significant difference between the ultrasound estimation of muscle volume and that estimated using hydrostatic weighing (p > 0.05). The mean percentage error between the two volume estimates was 0.4 +/- 6.9%. Three operators all performed four digitizations of all images from one randomly selected muscle; there was no statistical difference between operators or trials and the intraclass correlation was high (>0.8). The results of this study indicate that ultrasound is an accurate method for estimating muscle volumes in vivo.
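    The Cavalieri estimate referred to above is simply the sum of the digitized cross-sectional areas multiplied by the spacing between parallel slices. A minimal sketch with invented slice areas (not the study's measurements):

    ```python
    def cavalieri_volume(slice_areas_cm2, spacing_cm):
        """Cavalieri estimate: volume = spacing * sum of parallel slice areas."""
        return spacing_cm * sum(slice_areas_cm2)

    # Hypothetical digitized vastus lateralis cross-sections (cm^2) at 2 cm spacing.
    areas = [8.1, 14.6, 19.3, 22.8, 21.0, 16.4, 9.7]
    print(f"estimated muscle volume = {cavalieri_volume(areas, 2.0):.1f} cm^3")
    ```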

  20. Select Methodology for Validating Advanced Satellite Measurement Systems

    NASA Technical Reports Server (NTRS)

    Larar, Allen M.; Zhou, Daniel K.; Liu, Xi; Smith, William L.

    2008-01-01

    Advanced satellite sensors are tasked with improving global measurements of the Earth's atmosphere, clouds, and surface to enable enhancements in weather prediction, climate monitoring capability, and environmental change detection. Measurement system validation is crucial to achieving this goal and maximizing research and operational utility of resultant data. Field campaigns including satellite under-flights with well calibrated FTS sensors aboard high-altitude aircraft are an essential part of the validation task. This presentation focuses on an overview of validation methodology developed for assessment of high spectral resolution infrared systems, and includes results of preliminary studies performed to investigate the performance of the Infrared Atmospheric Sounding Interferometer (IASI) instrument aboard the MetOp-A satellite.

  1. Collecting and validating experiential expertise is doable but poses methodological challenges.

    PubMed

    Burda, Marika H F; van den Akker, Marjan; van der Horst, Frans; Lemmens, Paul; Knottnerus, J André

    2016-04-01

    To give an overview of important methodological challenges in collecting, validating, and further processing experiential expertise and how to address these challenges. Based on our own experiences in studying the concept, operationalization, and contents of experiential expertise, we have formulated methodological issues regarding the inventory and application of experiential expertise. The methodological challenges can be categorized in six developmental research stages, comprising the conceptualization of experiential expertise, methods to harvest experiential expertise, the validation of experiential expertise, evaluation of the effectiveness, how to translate experiential expertise into acceptable guidelines, and how to implement these. The description of methodological challenges and ways to handle those are illustrated using diabetes mellitus as an example. Experiential expertise can be defined and operationalized in terms of successful illness-related behaviors and translated into recommendations regarding life domains. Pathways have been identified to bridge the gaps between the world of patients' daily lives and the medical world. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    PubMed

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to the high stakes in the clinical setting, it is critical to calculate the effect of these assumptions on the CFD simulation results. However, existing CFD validation approaches do not quantify error in the simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset, and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
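    The study's contribution is to parse the model error out from numerical and experimental error sources rather than lumping them together. As a generic point of reference only (not the authors' formulation), the sketch below computes the simple comparison error E = S - D along a validation line and a combined validation uncertainty in the spirit of ASME V&V 20; all values are placeholders:

    ```python
    import numpy as np

    def comparison_error(sim, exp):
        """Point-wise comparison error E = S - D along a validation line."""
        return np.asarray(sim) - np.asarray(exp)

    def validation_uncertainty(u_num, u_input, u_exp):
        """Combined (root-sum-square) validation uncertainty, ASME V&V 20 style."""
        return np.sqrt(u_num**2 + u_input**2 + u_exp**2)

    # Hypothetical velocity magnitudes (m/s) sampled along the intersection line
    # of the two PIV planes; values are placeholders, not the study's data.
    cfd = np.array([0.42, 0.55, 0.61, 0.58, 0.47])
    piv = np.array([0.40, 0.57, 0.64, 0.55, 0.46])

    E = comparison_error(cfd, piv)
    u_val = validation_uncertainty(u_num=0.01, u_input=0.02, u_exp=0.015)
    print("mean |E| =", round(np.abs(E).mean(), 3), "m/s; u_val =", round(u_val, 3))
    # If |E| is comparable to u_val, the model error cannot be resolved beyond u_val.
    ```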

  3. Ground validation of DPR precipitation rate over Italy using H-SAF validation methodology

    NASA Astrophysics Data System (ADS)

    Puca, Silvia; Petracca, Marco; Sebastianelli, Stefano; Vulpiani, Gianfranco

    2017-04-01

    The H-SAF project (Satellite Application Facility on support to Operational Hydrology and Water Management, funded by EUMETSAT) is aimed at retrieving key hydrological parameters such as precipitation, soil moisture and snow cover. Within the H-SAF consortium, the Product Precipitation Validation Group (PPVG) evaluates the accuracy of instantaneous and accumulated precipitation products with respect to ground radar and rain gauge data adopting the same methodology (using a Unique Common Code) throughout Europe. The adopted validation methodology can be summarized by the following few steps: (1) ground data (radar and rain gauge) quality control; (2) spatial interpolation of rain gauge measurements; (3) up-scaling of radar data to the satellite native grid; (4) temporal comparison of satellite and ground-based precipitation products; and (5) production and evaluation of continuous and multi-categorical statistical scores for long time series and case studies. The statistical scores are evaluated taking into account the satellite product native grid. With the recent advent of the GPM era starting in March 2014, many new global precipitation products are available. The validation methodology developed in H-SAF can be easily applied to different precipitation products. In this work, we have validated instantaneous precipitation data estimated from the DPR (Dual-frequency Precipitation Radar) instrument onboard the GPM-CO (Global Precipitation Measurement Core Observatory) satellite. In particular, we have analyzed the near surface and estimated precipitation fields collected at the 2A level for 3 different scans (NS, MS and HS). The Italian radar mosaic, managed by the National Department of Civil Protection and available operationally every 10 minutes, is used as ground reference data. The results obtained highlight the capability of the DPR to properly identify precipitation areas with higher accuracy in estimating the stratiform precipitation (especially for the HS). An
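    Step (5) of the adopted methodology typically includes multi-categorical scores computed from a rain/no-rain contingency table between the satellite estimate and the ground reference on the satellite grid. A minimal sketch of those scores (POD, FAR, CSI) with synthetic rain rates and an assumed 0.1 mm/h detection threshold:

    ```python
    import numpy as np

    def categorical_scores(sat_rr, ref_rr, threshold=0.1):
        """POD, FAR and CSI for rain/no-rain detection at `threshold` (mm/h)."""
        sat = np.asarray(sat_rr) >= threshold
        ref = np.asarray(ref_rr) >= threshold
        hits = np.sum(sat & ref)
        misses = np.sum(~sat & ref)
        false_alarms = np.sum(sat & ~ref)
        pod = hits / (hits + misses)
        far = false_alarms / (hits + false_alarms)
        csi = hits / (hits + misses + false_alarms)
        return pod, far, csi

    # Hypothetical matched DPR and ground-radar rain rates on the satellite grid.
    rng = np.random.default_rng(3)
    ref = rng.gamma(0.4, 2.0, size=10_000)              # ground reference (mm/h)
    sat = ref * rng.lognormal(0.0, 0.4, size=ref.size)  # noisy satellite estimate

    print([round(float(s), 3) for s in categorical_scores(sat, ref)])
    ```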

  4. Results of Fall 2001 Pilot: Methodology for Validation of Course Prerequisites.

    ERIC Educational Resources Information Center

    Serban, Andreea M.; Fleming, Steve

    The purpose of this study was to test a methodology that will help Santa Barbara City College (SBCC), California, to validate the course prerequisites that fall under the category of highest level of scrutiny--data collection and analysis--as defined by the Chancellor's Office. This study gathered data for the validation of prerequisites for three…

  5. Bayesian cross-entropy methodology for optimal design of validation experiments

    NASA Astrophysics Data System (ADS)

    Jiang, X.; Mahadevan, S.

    2006-07-01

    An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
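    As a toy illustration of the idea, suppose that at each candidate design input the model prediction and the (prior predictive) experimental observation can both be summarized as univariate Gaussians; the cross-entropy then has a closed form and candidate inputs can be scored directly. The sketch below uses a simple grid search in place of the paper's simulated-annealing optimizer, and all densities and bounds are invented:

    ```python
    import numpy as np

    def gaussian_cross_entropy(mu_p, sig_p, mu_q, sig_q):
        """H(p, q) = -E_p[ln q] for two univariate Gaussians."""
        return 0.5 * np.log(2 * np.pi * sig_q**2) + (sig_p**2 + (mu_p - mu_q)**2) / (2 * sig_q**2)

    # Hypothetical stand-ins: at design input x, the model predicts N(mu_model(x), sig_model)
    # and the observation is summarized as N(mu_experiment(x), sig_exp).
    def mu_model(x):      return 2.0 * x + 0.5
    def mu_experiment(x): return 2.3 * x + 0.2
    sig_model, sig_exp = 0.4, 0.6

    xs = np.linspace(0.0, 5.0, 501)  # candidate design points (grid search stand-in)
    h = gaussian_cross_entropy(mu_model(xs), sig_model, mu_experiment(xs), sig_exp)

    # The abstract notes the criterion may be minimized or maximized depending on
    # the formulation; here the maximizing point is taken as the most informative.
    x_star = xs[np.argmax(h)]
    print(f"selected design input x* = {x_star:.2f}")
    ```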

  6. Development of a Valid and Reliable Knee Articular Cartilage Condition-Specific Study Methodological Quality Score.

    PubMed

    Harris, Joshua D; Erickson, Brandon J; Cvetanovich, Gregory L; Abrams, Geoffrey D; McCormick, Frank M; Gupta, Anil K; Verma, Nikhil N; Bach, Bernard R; Cole, Brian J

    2014-02-01

    Condition-specific questionnaires are important components in evaluation of outcomes of surgical interventions. No condition-specific study methodological quality questionnaire exists for evaluation of outcomes of articular cartilage surgery in the knee. To develop a reliable and valid knee articular cartilage-specific study methodological quality questionnaire. Cross-sectional study. A stepwise, a priori-designed framework was created for development of a novel questionnaire. Relevant items to the topic were identified and extracted from a recent systematic review of 194 investigations of knee articular cartilage surgery. In addition, relevant items from existing generic study methodological quality questionnaires were identified. Items for a preliminary questionnaire were generated. Redundant and irrelevant items were eliminated, and acceptable items modified. The instrument was pretested and items weighed. The instrument, the MARK score (Methodological quality of ARticular cartilage studies of the Knee), was tested for validity (criterion validity) and reliability (inter- and intraobserver). A 19-item, 3-domain MARK score was developed. The 100-point scale score demonstrated face validity (focus group of 8 orthopaedic surgeons) and criterion validity (strong correlation to Cochrane Quality Assessment score and Modified Coleman Methodology Score). Interobserver reliability for the overall score was good (intraclass correlation coefficient [ICC], 0.842), and for all individual items of the MARK score, acceptable to perfect (ICC, 0.70-1.000). Intraobserver reliability ICC assessed over a 3-week interval was strong for 2 reviewers (≥0.90). The MARK score is a valid and reliable knee articular cartilage condition-specific study methodological quality instrument. This condition-specific questionnaire may be used to evaluate the quality of studies reporting outcomes of articular cartilage surgery in the knee.

  7. Analytical methodology for safety validation of computer controlled subsystems. Volume 1 : state-of-the-art and assessment of safety verification/validation methodologies

    DOT National Transportation Integrated Search

    1995-09-01

    This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety critical functions in high-speed rail or magnetic levitation ...

  8. Methodological challenges of validating a clinical decision-making tool in the practice environment.

    PubMed

    Brennan, Caitlin W; Daly, Barbara J

    2015-04-01

    Validating a measurement tool intended for use in the practice environment poses challenges that may not be present when validating a tool intended solely for research purposes. The aim of this article is to describe the methodological challenges of validating a clinical decision-making tool, the Oncology Acuity Tool, which nurses use to make nurse assignment and staffing decisions prospectively each shift. Data were derived from a larger validation study, during which several methodological challenges arose. Revisions to the tool, including conducting iterative feedback cycles with end users, were necessary before the validation study was initiated. The "true" value of patient acuity is unknown, and thus, two approaches to inter-rater reliability assessment were used. Discordant perspectives existed between experts and end users. Balancing psychometric rigor with clinical relevance may be achieved through establishing research-practice partnerships, seeking active and continuous feedback with end users, and weighing traditional statistical rules of thumb with practical considerations. © The Author(s) 2014.

  9. LLCEDATA and LLCECALC for Windows version 1.0, Volume 3: Software verification and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McFadden, J.G.

    1998-09-04

    LLCEDATA and LLCECALC for Windows are user-friendly computer software programs that work together to determine the proper waste designation, handling, and disposition requirements for Long Length Contaminated Equipment (LLCE). LLCEDATA reads from a variety of data bases to produce an equipment data file (EDF) that represents a snapshot of both the LLCE and the tank from which it originates. LLCECALC reads the EDF and the gamma assay file (AV2) that is produced by the Flexible Receiver Gamma Energy Analysis System. LLCECALC performs corrections to the AV2 file as it is being read and characterizes the LLCE. Both programs produce a variety of reports, including a characterization report and a status report. The status report documents each action taken by the user, LLCEDATA, and LLCECALC. Documentation for LLCEDATA and LLCECALC for Windows is available in three volumes. Volume 1 is a user's manual, which is intended as a quick reference for both LLCEDATA and LLCECALC. Volume 2 is a technical manual, which discusses system limitations and provides recommendations to the LLCE process. Volume 3 documents LLCEDATA and LLCECALC's verification and validation. Two of the three installation test cases, from Volume 1, are independently confirmed. Data bases used in LLCEDATA are verified and referenced. Both phases of LLCECALC processing, gamma and characterization, are extensively tested to verify that the methodology and algorithms used are correct.

  10. Experimental Validation of an Integrated Controls-Structures Design Methodology

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.

    1996-01-01

    The first experimental validation of an integrated controls-structures design methodology for a class of large order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, the amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Gaussian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.
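    For readers unfamiliar with the controller family mentioned above, the sketch below computes a continuous-time LQR gain (the deterministic core of an LQG design) for a single lightly damped structural mode via the algebraic Riccati equation; the mode frequency, damping, and weighting matrices are invented, and average control power comparisons between designs would then follow from the resulting closed-loop responses:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Toy lightly damped structural mode: state x = [displacement, velocity].
    wn, zeta = 2.0 * np.pi * 0.5, 0.01          # 0.5 Hz, 1% damping (assumed)
    A = np.array([[0.0, 1.0],
                  [-wn**2, -2.0 * zeta * wn]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])                     # pointing-error weighting (assumed)
    R = np.array([[1.0]])                        # control-effort weighting (assumed)

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)              # LQR gain, u = -K x
    closed_loop = A - B @ K
    print("LQR gain:", np.round(K, 3))
    print("closed-loop eigenvalues:", np.round(np.linalg.eigvals(closed_loop), 3))
    ```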

  11. Validating agent oriented methodology (AOM) for netlogo modelling and simulation

    NASA Astrophysics Data System (ADS)

    WaiShiang, Cheah; Nissom, Shane; YeeWai, Sim; Sharbini, Hamizan

    2017-10-01

    AOM (Agent Oriented Modeling) is a comprehensive and unified agent methodology for agent-oriented software development. The AOM methodology was proposed to aid developers by introducing techniques, terminology, notation and guidelines during agent system development. Although the AOM methodology is claimed to be capable of developing complex real-world systems, its potential is yet to be realized and recognized by the mainstream software community, and the adoption of AOM is still in its infancy. Among the reasons is that there are few case studies or success stories of AOM. This paper presents two case studies on the adoption of AOM for individual-based modelling and simulation. They demonstrate how AOM is useful for epidemiological and ecological studies. Hence, they further validate AOM in a qualitative manner.

  12. Working Papers in Dialogue Modeling, Volume 2.

    ERIC Educational Resources Information Center

    Mann, William C.; And Others

    The technical working papers that comprise the two volumes of this document are related to the problem of creating a valid process model of human communication in dialogue. In Volume 2, the first paper concerns study methodology, and raises such issues as the choice between system-building and process-building, and the advantages of studying cases…

  13. Public acceptability of highway safety countermeasures. Volume 1, Background of study and methodology

    DOT National Transportation Integrated Search

    1981-06-01

    This study provides information about public attitudes towards proposed highway safety countermeasures in three program areas: alcohol and drugs, unsafe driving behaviors, and pedestrian safety. This volume describes the three research methodologies ...

  14. External Validity in the Study of Human Development: Theoretical and Methodological Issues

    ERIC Educational Resources Information Center

    Hultsch, David F.; Hickey, Tom

    1978-01-01

    An examination of the concept of external validity from two theoretical perspectives: a traditional mechanistic approach and a dialectical organismic approach. Examines the theoretical and methodological implications of these perspectives. (BD)

  15. Reformulation of Geometric Validations Created by Students, Revealed When Using the ACODESA Methodology

    ERIC Educational Resources Information Center

    Rubilar, Álvaro Sebastián Bustos; Badillo, Gonzalo Zubieta

    2017-01-01

    In this article, we report how a geometric task based on the ACODESA methodology (collaborative learning, scientific debate and self-reflection) promotes the reformulation of the students' validations and allows revealing the students' aims in each of the stages of the methodology. To do so, we present the case of a team and, particularly, one of…

  16. Methodologies for pre-validation of biofilters and wetlands for stormwater treatment.

    PubMed

    Zhang, Kefeng; Randelovic, Anja; Aguiar, Larissa M; Page, Declan; McCarthy, David T; Deletic, Ana

    2015-01-01

    Water Sensitive Urban Design (WSUD) systems are frequently used as part of stormwater harvesting treatment trains (e.g. biofilters (bio-retentions and rain-gardens) and wetlands). However, validation frameworks for such systems do not exist, limiting their adoption for end-uses such as drinking water. The first stage in the validation framework is pre-validation, which prepares information for further validation monitoring. A pre-validation roadmap, consisting of five steps, is suggested in this paper. Detailed methods for investigating target micropollutants in stormwater, and determining challenge conditions for biofilters and wetlands, are provided. A literature review was undertaken to identify and quantify micropollutants in stormwater. MUSIC V5.1 was utilized to simulate the behaviour of the systems based on 30-year rainfall data in three distinct climate zones; outputs were evaluated to identify the threshold of operational variables, including length of dry periods (LDPs) and volume of water treated per event. The paper highlights that a number of micropollutants were found in stormwater at levels above various worldwide drinking water guidelines (eight pesticides, benzene, benzo(a)pyrene, pentachlorophenol, di-(2-ethylhexyl)-phthalate and total polychlorinated biphenyls). The 95th percentile LDP was exponentially related to system design area while the 5th percentile length of dry periods remained within short durations (i.e. 2-8 hours). The 95th percentile volume of water treated per event was exponentially related to system design area as a percentage of an impervious catchment area. The outcomes of this study show that pre-validation could be completed through a roadmap consisting of a series of steps; this will help in the validation of stormwater treatment systems.
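    The challenge conditions described above are essentially percentiles of operational variables extracted from a long continuous simulation. A minimal sketch of that extraction step, with synthetic dry-period lengths and treated volumes standing in for MUSIC output:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic stand-ins for 30-year continuous-simulation output (e.g. MUSIC):
    # dry-period lengths between events (hours) and treated volume per event (m3).
    dry_periods_h = rng.exponential(scale=60.0, size=3000)
    treated_volume_m3 = rng.lognormal(mean=2.0, sigma=0.9, size=3000)

    # Challenge conditions for pre-validation: short-dry-period (5th percentile),
    # long-dry-period (95th percentile) and high-volume (95th percentile) events.
    print("LDP 5th / 95th percentile (h):",
          np.percentile(dry_periods_h, [5, 95]).round(1))
    print("treated volume 95th percentile (m3):",
          round(float(np.percentile(treated_volume_m3, 95)), 1))
    ```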

  17. A Validated Methodology for Genetic Identification of Tuna Species (Genus Thunnus)

    PubMed Central

    Viñas, Jordi; Tudela, Sergi

    2009-01-01

    Background: Tuna species of the genus Thunnus, such as the bluefin tunas, are some of the most important and yet most endangered trade fish in the world. Identification of these species in traded forms, however, may be difficult depending on the presentation of the products, which may hamper conservation efforts on trade control. In this paper, we validated a genetic methodology that can fully distinguish between the eight Thunnus species from any kind of processed tissue. Methodology: After testing several genetic markers, a complete discrimination of the eight tuna species was achieved using Forensically Informative Nucleotide Sequencing based primarily on the sequence variability of the hypervariable genetic marker mitochondrial DNA control region (mtDNA CR), followed, in some specific cases, by a second validation by a nuclear marker rDNA first internal transcribed spacer (ITS1). This methodology was able to distinguish all tuna species, including those belonging to the subgenus Neothunnus that are very closely related, and in consequence cannot be differentiated with other genetic markers of lower variability. This methodology also took into consideration the presence of introgression that has been reported in past studies between T. thynnus, T. orientalis and T. alalunga. Finally, we applied the methodology to cross-check the species identity of 26 processed tuna samples. Conclusions: Using the combination of two genetic markers, one mitochondrial and another nuclear, allows a full discrimination between all eight tuna species. Unexpectedly, the genetic marker traditionally used for DNA barcoding, cytochrome oxidase 1, could not differentiate all species, thus its use as a genetic marker for tuna species identification is questioned. PMID:19898615
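    Conceptually, the forensically informative nucleotide sequencing step assigns a query sequence to the reference species it matches most closely at informative sites. The toy sketch below uses invented, pre-aligned fragments and a plain per-site identity score; real identification would rely on validated reference alignments of the mtDNA CR and ITS1 markers:

    ```python
    # Toy illustration of FINS-style assignment: pick the reference species whose
    # aligned control-region fragment is most similar to the query. All sequences
    # below are invented placeholders, not real Thunnus haplotypes.
    references = {
        "T. thynnus":   "ACGTTACGGATCCA",
        "T. albacares": "ACGATACGGTTCCA",
        "T. obesus":    "ACGTTACGGTTCGA",
    }

    def identity(a: str, b: str) -> float:
        """Fraction of matching sites between two equal-length aligned sequences."""
        return sum(x == y for x, y in zip(a, b)) / len(a)

    query = "ACGTTACGGTTCCA"
    scores = {species: identity(query, seq) for species, seq in references.items()}
    best = max(scores, key=scores.get)
    print(scores, "->", best)
    ```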

  18. CFD methodology and validation for turbomachinery flows

    NASA Astrophysics Data System (ADS)

    Hirsch, Ch.

    1994-05-01

    The essential problem today, in the application of 3D Navier-Stokes simulations to the design and analysis of turbomachinery components, is the validation of the numerical approximation and of the physical models, in particular the turbulence modelling. Although most of the complex 3D flow phenomena occurring in turbomachinery bladings can be captured with relatively coarse meshes, many detailed flow features are dependent on mesh size, on the turbulence and transition models. A brief review of the present state of the art of CFD methodology is given with emphasis on quality and accuracy of numerical approximations related to viscous flow computations. Considerations related to the mesh influence on solution accuracy are stressed. The basic problems of turbulence and transition modelling are discussed next, with a short summary of the main turbulence models and their applications to representative turbomachinery flows. Validations of present turbulence models indicate that none of the available turbulence models is able to predict all the detailed flow behavior in complex flow interactions. In order to identify the phenomena that can be captured on coarser meshes a detailed understanding of the complex 3D flow in compressor and turbines is necessary. Examples of global validations for different flow configurations, representative of compressor and turbine aerodynamics are presented, including secondary and tip clearance flows.

  19. Lessons Learned From Methodological Validation Research in E-Epidemiology.

    PubMed

    Kesse-Guyot, Emmanuelle; Assmann, Karen; Andreeva, Valentina; Castetbon, Katia; Méjean, Caroline; Touvier, Mathilde; Salanave, Benoît; Deschamps, Valérie; Péneau, Sandrine; Fezeu, Léopold; Julia, Chantal; Allès, Benjamin; Galan, Pilar; Hercberg, Serge

    2016-10-18

    Traditional epidemiological research methods exhibit limitations leading to high logistics, human, and financial burden. The continued development of innovative digital tools has the potential to overcome many of the existing methodological issues. Nonetheless, Web-based studies remain relatively uncommon, partly due to persistent concerns about validity and generalizability. The objective of this viewpoint is to summarize findings from methodological studies carried out in the NutriNet-Santé study, a French Web-based cohort study. On the basis of the previous findings from the NutriNet-Santé e-cohort (>150,000 participants are currently included), we synthesized e-epidemiological knowledge on sample representativeness, advantageous recruitment strategies, and data quality. Overall, the reported findings support the usefulness of Web-based studies in overcoming common methodological deficiencies in epidemiological research, in particular with regard to data quality (eg, the concordance for body mass index [BMI] classification was 93%), reduced social desirability bias, and access to a wide range of participant profiles, including the hard-to-reach subgroups such as young (12.30% [15,118/122,912], <25 years) and old people (6.60% [8112/122,912], ≥65 years), unemployed or homemaker (12.60% [15,487/122,912]), and low educated (38.50% [47,312/122,912]) people. However, some selection bias remained (78.00% (95,871/122,912) of the participants were women, and 61.50% (75,590/122,912) had postsecondary education), which is an inherent aspect of cohort study inclusion; other specific types of bias may also have occurred. Given the rapidly growing access to the Internet across social strata, the recruitment of participants with diverse socioeconomic profiles and health risk exposures was highly feasible. Continued efforts concerning the identification of specific biases in e-cohorts and the collection of comprehensive and valid data are still needed. This summary of

  20. Lessons Learned From Methodological Validation Research in E-Epidemiology

    PubMed Central

    Assmann, Karen; Andreeva, Valentina; Castetbon, Katia; Méjean, Caroline; Touvier, Mathilde; Salanave, Benoît; Deschamps, Valérie; Péneau, Sandrine; Fezeu, Léopold; Julia, Chantal; Allès, Benjamin; Galan, Pilar; Hercberg, Serge

    2016-01-01

    Background Traditional epidemiological research methods exhibit limitations leading to high logistics, human, and financial burden. The continued development of innovative digital tools has the potential to overcome many of the existing methodological issues. Nonetheless, Web-based studies remain relatively uncommon, partly due to persistent concerns about validity and generalizability. Objective The objective of this viewpoint is to summarize findings from methodological studies carried out in the NutriNet-Santé study, a French Web-based cohort study. Methods On the basis of the previous findings from the NutriNet-Santé e-cohort (>150,000 participants are currently included), we synthesized e-epidemiological knowledge on sample representativeness, advantageous recruitment strategies, and data quality. Results Overall, the reported findings support the usefulness of Web-based studies in overcoming common methodological deficiencies in epidemiological research, in particular with regard to data quality (eg, the concordance for body mass index [BMI] classification was 93%), reduced social desirability bias, and access to a wide range of participant profiles, including the hard-to-reach subgroups such as young (12.30% [15,118/122,912], <25 years) and old people (6.60% [8112/122,912], ≥65 years), unemployed or homemaker (12.60% [15,487/122,912]), and low educated (38.50% [47,312/122,912]) people. However, some selection bias remained (78.00% (95,871/122,912) of the participants were women, and 61.50% (75,590/122,912) had postsecondary education), which is an inherent aspect of cohort study inclusion; other specific types of bias may also have occurred. Conclusions Given the rapidly growing access to the Internet across social strata, the recruitment of participants with diverse socioeconomic profiles and health risk exposures was highly feasible. Continued efforts concerning the identification of specific biases in e-cohorts and the collection of comprehensive and

  1. Safety of High Speed Ground Transportation Systems : Analytical Methodology for Safety Validation of Computer Controlled Subsystems : Volume 2. Development of a Safety Validation Methodology

    DOT National Transportation Integrated Search

    1995-01-01

    This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety-critical functions in high-speed rail or magnetic levitation ...

  2. Temporal validation for landsat-based volume estimation model

    Treesearch

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  3. Methodologies for Pre-Validation of Biofilters and Wetlands for Stormwater Treatment

    PubMed Central

    Zhang, Kefeng; Randelovic, Anja; Aguiar, Larissa M.; Page, Declan; McCarthy, David T.; Deletic, Ana

    2015-01-01

    Background Water Sensitive Urban Design (WSUD) systems are frequently used as part of stormwater harvesting treatment trains (e.g. biofilters (bio-retention systems and rain gardens) and wetlands). However, validation frameworks for such systems do not exist, limiting their adoption for end-uses such as drinking water. The first stage in the validation framework is pre-validation, which prepares information for further validation monitoring. Objectives A pre-validation roadmap, consisting of five steps, is suggested in this paper. Detailed methods for investigating target micropollutants in stormwater, and determining challenge conditions for biofilters and wetlands, are provided. Methods A literature review was undertaken to identify and quantify micropollutants in stormwater. MUSIC V5.1 was utilized to simulate the behaviour of the systems based on 30-year rainfall data in three distinct climate zones; outputs were evaluated to identify the threshold of operational variables, including length of dry periods (LDPs) and volume of water treated per event. Results The paper highlights that a number of micropollutants were found in stormwater at levels above various worldwide drinking water guidelines (eight pesticides, benzene, benzo(a)pyrene, pentachlorophenol, di-(2-ethylhexyl)-phthalate and total polychlorinated biphenyls). The 95th percentile LDP was exponentially related to system design area, while the 5th percentile LDP remained within short durations (i.e. 2–8 hours). The 95th percentile volume of water treated per event was exponentially related to system design area as a percentage of the impervious catchment area. Conclusions The outcomes of this study show that pre-validation can be completed through a roadmap consisting of a series of steps; this will help in the validation of stormwater treatment systems. PMID:25955688
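
    As a rough illustration of the percentile thresholds described above (not the MUSIC V5.1 workflow itself), the sketch below derives 5th/95th percentile lengths of dry periods and the 95th percentile treated volume per event from a hypothetical simulated event series; all event data are invented, and dry periods are approximated here as inter-event intervals.

```python
# Percentile challenge conditions from a simulated runoff-event series (toy data).
import numpy as np

rng = np.random.default_rng(0)
# hypothetical 30-year record: event start times (hours) and treated volumes (m^3)
event_starts = np.cumsum(rng.exponential(scale=72.0, size=3000))   # events ~3 days apart
treated_volume = rng.lognormal(mean=3.0, sigma=0.8, size=3000)

dry_periods = np.diff(event_starts)                  # hours between successive events
ldp_5th, ldp_95th = np.percentile(dry_periods, [5, 95])
vol_95th = np.percentile(treated_volume, 95)

print(f"5th percentile LDP : {ldp_5th:6.1f} h")
print(f"95th percentile LDP: {ldp_95th:6.1f} h")
print(f"95th percentile treated volume per event: {vol_95th:7.1f} m^3")
```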

  4. A methodology for the validated design space exploration of fuel cell powered unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Moffitt, Blake Almy

    Unmanned Aerial Vehicles (UAVs) are the most dynamic growth sector of the aerospace industry today. The need to provide persistent intelligence, surveillance, and reconnaissance for military operations is driving the planned acquisition of over 5,000 UAVs over the next five years. The most pressing need is for quiet, small UAVs with endurance beyond what is possible with advanced batteries or small internal combustion propulsion systems. Fuel cell systems demonstrate high efficiency, high specific energy, low noise, low temperature operation, modularity, and rapid refuelability, making them a promising enabler of the small, quiet, and persistent UAVs that military planners are seeking. Despite the perceived benefits, the actual near-term performance of fuel cell powered UAVs is unknown. Until the auto industry began spending billions of dollars in research, fuel cell systems were too heavy for useful flight applications. However, the last decade has seen rapid development with fuel cell gravimetric and volumetric power density nearly doubling every 2-3 years. As a result, a few design studies and demonstrator aircraft have appeared, but overall the design methodology and vehicles are still in their infancy. The design of fuel cell aircraft poses many challenges. Fuel cells differ fundamentally from combustion based propulsion in how they generate power and interact with other aircraft subsystems. As a result, traditional multidisciplinary analysis (MDA) codes are inappropriate. Building new MDAs is difficult since fuel cells are rapidly changing in design, and various competitive architectures exist for balance of plant, hydrogen storage, and all electric aircraft subsystems. In addition, fuel cell design and performance data is closely protected which makes validation difficult and uncertainty significant. Finally, low specific power and high volumes compared to traditional combustion based propulsion result in more highly constrained design spaces that are

  5. Impacts of Outer Continental Shelf (OCS) development on recreation and tourism. Volume 3. Detailed methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The final report for the project is presented in five volumes. This volume, Detailed Methodology Review, presents a discussion of the methods considered and used to estimate the impacts of Outer Continental Shelf (OCS) oil and gas development on coastal recreation in California. The purpose is to provide the Minerals Management Service with data and methods to improve their ability to analyze the socio-economic impacts of OCS development. Chapter II provides a review of previous attempts to evaluate the effects of OCS development and of oil spills on coastal recreation. The review also discusses the strengths and weaknesses of different approaches and presents the rationale for the methodology selection made. Chapter III presents a detailed discussion of the methods actually used in the study. The volume contains the bibliography for the entire study.

  6. A validated methodology for genetic identification of tuna species (genus Thunnus).

    PubMed

    Viñas, Jordi; Tudela, Sergi

    2009-10-27

    Tuna species of the genus Thunnus, such as the bluefin tunas, are some of the most important and yet most endangered trade fish in the world. Identification of these species in traded forms, however, may be difficult depending on the presentation of the products, which may hamper conservation efforts on trade control. In this paper, we validated a genetic methodology that can fully distinguish between the eight Thunnus species from any kind of processed tissue. After testing several genetic markers, a complete discrimination of the eight tuna species was achieved using Forensically Informative Nucleotide Sequencing based primarily on the sequence variability of the hypervariable genetic marker mitochondrial DNA control region (mtDNA CR), followed, in some specific cases, by a second validation by a nuclear marker rDNA first internal transcribed spacer (ITS1). This methodology was able to distinguish all tuna species, including those belonging to the subgenus Neothunnus that are very closely related, and in consequence can not be differentiated with other genetic markers of lower variability. This methodology also took into consideration the presence of introgression that has been reported in past studies between T. thynnus, T. orientalis and T. alalunga. Finally, we applied the methodology to cross-check the species identity of 26 processed tuna samples. Using the combination of two genetic markers, one mitochondrial and another nuclear, allows a full discrimination between all eight tuna species. Unexpectedly, the genetic marker traditionally used for DNA barcoding, cytochrome oxidase 1, could not differentiate all species, thus its use as a genetic marker for tuna species identification is questioned.
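
    To illustrate the nearest-reference matching step that underlies this kind of sequence-based identification, the toy sketch below compares a query sequence against reference control-region sequences and reports the closest match. The reference sequences are invented placeholders, not real Thunnus data; a real workflow would use curated references and proper alignment tools.

```python
# Toy nearest-reference species assignment by sequence similarity.
from difflib import SequenceMatcher

references = {
    "T. thynnus":   "ACCTTGCATATAAGCACCCTATGTAATTACACCATA",
    "T. albacares": "ACCTTGCTTATAAGCATCCTATGTATTTACACCATA",
    "T. alalunga":  "ACCTAGCATATAAGCACCTTATGTAATAACACCATA",
}

def best_match(query, refs):
    """Return (species, similarity) for the reference most similar to the query."""
    scored = {sp: SequenceMatcher(None, query, seq).ratio() for sp, seq in refs.items()}
    species = max(scored, key=scored.get)
    return species, scored[species]

query = "ACCTTGCATATAAGCACCCTATGTAATTACACCTTA"
species, score = best_match(query, references)
print(f"closest reference: {species} (similarity {score:.2f})")
```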

  7. A standardised protocol for the validation of banking methodologies for arterial allografts.

    PubMed

    Lomas, R J; Dodd, P D F; Rooney, P; Pegg, D E; Hogg, P A; Eagle, M E; Bennett, K E; Clarkson, A; Kearney, J N

    2013-09-01

    The objective of this study was to design and test a protocol for the validation of banking methodologies for arterial allografts. A series of in vitro biomechanical and biological assessments were derived, and applied to paired fresh and banked femoral arteries. The ultimate tensile stress and strain, suture pullout stress and strain, expansion/rupture under hydrostatic pressure, histological structure and biocompatibility properties of disinfected and cryopreserved femoral arteries were compared to those of fresh controls. No significant differences were detected in any of the test criteria. This validation protocol provides an effective means of testing and validating banking protocols for arterial allografts.
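
    A minimal sketch of the paired fresh-versus-banked comparison for a single test criterion, assuming paired measurements per donor artery; the values are illustrative, and the paired t-test is one reasonable choice rather than necessarily the exact analysis used in the study.

```python
# Paired comparison of one biomechanical criterion (fresh vs banked arteries).
import numpy as np
from scipy import stats

fresh  = np.array([2.10, 1.85, 2.40, 1.95, 2.20, 2.05])   # MPa, fresh arteries
banked = np.array([2.05, 1.90, 2.35, 1.88, 2.25, 2.00])   # MPa, paired banked arteries

t_stat, p_value = stats.ttest_rel(fresh, banked)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
# p >= 0.05 would be consistent with "no significant difference" for this criterion
```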

  8. Automation Applications in an Advanced Air Traffic Management System : Volume 3. Methodology for Man-Machine Task Allocation

    DOT National Transportation Integrated Search

    1974-08-01

    Volume 3 describes the methodology for man-machine task allocation. It contains a description of man and machine performance capabilities and an explanation of the methodology employed to allocate tasks to human or automated resources. It also presen...

  9. Repetitive deliberate fires: Development and validation of a methodology to detect series.

    PubMed

    Bruenisholz, Eva; Delémont, Olivier; Ribaux, Olivier; Wilson-Wilde, Linzi

    2017-08-01

    The detection of repetitive deliberate fire events is challenging and still often ineffective due to a case-by-case approach. A previous study provided a critical review of the situation and analysis of the main challenges. This study suggested that the intelligence process, integrating forensic data, could be a valid framework to provide a follow-up and systematic analysis provided it is adapted to the specificities of repetitive deliberate fires. In this current manuscript, a specific methodology to detect deliberate fires series, i.e. set by the same perpetrators, is presented and validated. It is based on case profiles relying on specific elements previously identified. The method was validated using a dataset of approximately 8000 deliberate fire events collected over 12 years in a Swiss state. Twenty possible series were detected, including 6 of 9 known series. These results are very promising and lead the way to a systematic implementation of this methodology in an intelligence framework, whilst demonstrating the need and benefit of increasing the collection of forensic specific information to strengthen the value of links between cases. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
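
    The sketch below illustrates, in schematic form, one way such case profiles could be linked into candidate series: score pairwise profile similarity and group strongly linked cases. The profile elements, similarity measure (Jaccard), and threshold are assumptions for illustration, not the authors' operational method.

```python
# Schematic series detection: link similar case profiles, group linked cases.
from itertools import combinations

cases = {
    "case_01": {"accelerant:petrol", "target:vehicle", "time:night", "area:north"},
    "case_02": {"accelerant:petrol", "target:vehicle", "time:night", "area:north"},
    "case_03": {"accelerant:none", "target:bin", "time:day", "area:centre"},
    "case_04": {"accelerant:petrol", "target:vehicle", "time:night", "area:east"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

# link cases whose profile similarity exceeds a threshold
links = {(i, j) for i, j in combinations(cases, 2) if jaccard(cases[i], cases[j]) >= 0.6}

# group linked cases into candidate series (connected components)
series, seen = [], set()
for start in cases:
    if start in seen:
        continue
    group, stack = set(), [start]
    while stack:
        c = stack.pop()
        if c in group:
            continue
        group.add(c)
        stack.extend(j for i, j in links if i == c)
        stack.extend(i for i, j in links if j == c)
    seen |= group
    if len(group) > 1:
        series.append(sorted(group))

print("candidate series:", series)
```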

  10. A semi-automatic method for left ventricle volume estimate: an in vivo validation study

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.

    2001-01-01

    This study aims at validating the left ventricular (LV) volume estimates obtained by processing volumetric data utilizing a segmentation model based on the level set technique. The validation has been performed by comparing real-time volumetric echo data (RT3DE) and magnetic resonance (MRI) data. A validation protocol has been defined. The validation protocol was applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects, who underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters such as stroke volume (SV) and ejection fraction (EF). Assuming MRI estimates (x) as a reference, an excellent correlation was found with volume measured by utilizing the segmentation procedure (y) (y=0.89x + 13.78, r=0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrated that the segmentation technique is reliably applicable on human hearts in clinical practice.
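
    A small sketch of the reported validation statistics: regressing segmentation-derived volumes against MRI reference volumes and computing a mean error. The paired volumes below are invented examples, not the study's data.

```python
# Regression of segmentation-based LV volumes against an MRI reference (toy data).
import numpy as np
from scipy import stats

mri_volume   = np.array([ 65.0,  90.0, 120.0, 180.0, 250.0, 320.0, 410.0, 460.0])  # mL
rt3de_volume = np.array([ 70.0,  95.0, 118.0, 175.0, 240.0, 300.0, 380.0, 425.0])  # mL

fit = stats.linregress(mri_volume, rt3de_volume)
print(f"y = {fit.slope:.2f}x + {fit.intercept:.2f}, r = {fit.rvalue:.2f}")
print(f"mean absolute volume error: {np.mean(np.abs(rt3de_volume - mri_volume)):.1f} mL")
```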

  11. RTL validation methodology on high complexity wireless microcontroller using OVM technique for fast time to market

    NASA Astrophysics Data System (ADS)

    Zhafirah Muhammad, Nurul; Harun, A.; Hambali, N. A. M. A.; Murad, S. A. Z.; Mohyar, S. N.; Isa, M. N.; Jambek, AB

    2017-11-01

    Increased demand for Internet of Things (IoT) applications has driven a move towards more complex integrated circuits supporting system-on-chip (SoC) designs. This growth in complexity demands correspondingly sophisticated validation strategies, and has led researchers to develop a range of methodologies, including dynamic verification, formal verification, and hybrid techniques. It is also important to discover bugs early in the SoC verification process in order to reduce effort and shorten time to market. This paper therefore focuses on verification methodology applied at the Register Transfer Level (RTL) of an SoC based on the AMBA bus design. The Open Verification Methodology (OVM) offers an easier route to RTL validation, not as a replacement for traditional methods but as a means of achieving fast time to market. OVM is thus proposed in this paper as the verification method for larger designs, helping to avoid bottlenecks in the validation platform.

  12. Assessing the quality of the volume-outcome relationship in uro-oncology.

    PubMed

    Mayer, Erik K; Purkayastha, Sanjay; Athanasiou, Thanos; Darzi, Ara; Vale, Justin A

    2009-02-01

    To assess systematically the quality of evidence for the volume-outcome relationship in uro-oncology, and thus facilitate the formulating of health policy within this speciality, as 'Implementation of Improving Outcome Guidance' has led to centralization of uro-oncology based on published studies that have supported a 'higher volume-better outcome' relationship, but improved awareness of methodological drawbacks in health service research has questioned the strength of this proposed volume-outcome relationship. We systematically searched previous relevant reports and extracted all articles from 1980 onwards assessing the volume-outcome relationship for cystectomy, prostatectomy and nephrectomy at the institution and/or surgeon level. Studies were assessed for their methodological quality using a previously validated rating system. Where possible, meta-analytical methods were used to calculate overall differences in outcome measures between low and high volume healthcare providers. In all, 22 studies were included in the final analysis; 19 of these were published in the last 5 years. Only four studies appropriately explored the effect of both the institution and surgeon volume on outcome measures. Mortality and length of stay were the most frequently measured outcomes. The median total quality scores within each of the operation types were 8.5, 9 and 8 for cystectomy, prostatectomy and nephrectomy, respectively (possible maximum score 18). Random-effects modelling showed a higher risk of mortality in low-volume institutions than in higher-volume institutions for both cystectomy and nephrectomy (odds ratio 1.88, 95% confidence interval 1.54-2.29, and 1.28, 1.10-1.49, respectively). The methodological quality of volume-outcome research as applied to cystectomy, prostatectomy and nephrectomy is only modest at best. Accepting several limitations, pooled analysis confirms a higher-volume, lower-mortality relationship for cystectomy and nephrectomy. Future research should
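
    For readers unfamiliar with the pooling step, the following sketch shows a DerSimonian-Laird random-effects combination of log odds ratios from 2x2 mortality tables, the kind of calculation that produces the pooled odds ratios quoted above; all study counts are invented.

```python
# Random-effects (DerSimonian-Laird) pooling of log odds ratios (toy data).
import numpy as np

# each row: deaths_low, n_low, deaths_high, n_high for one study
studies = np.array([
    [30, 400, 40, 900],
    [12, 150, 20, 500],
    [55, 700, 60, 1400],
    [ 8, 120, 10, 300],
], dtype=float)

d_lo, n_lo, d_hi, n_hi = studies.T
a, b = d_lo, n_lo - d_lo          # low-volume providers: events / non-events
c, d = d_hi, n_hi - d_hi          # high-volume providers: events / non-events

log_or = np.log((a * d) / (b * c))
var = 1/a + 1/b + 1/c + 1/d        # variance of each log OR

w = 1 / var                        # fixed-effect weights
y_fe = np.sum(w * log_or) / np.sum(w)
q = np.sum(w * (log_or - y_fe) ** 2)
tau2 = max(0.0, (q - (len(w) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (var + tau2)            # random-effects weights
y_re = np.sum(w_re * log_or) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

or_pooled = np.exp(y_re)
ci = np.exp([y_re - 1.96 * se_re, y_re + 1.96 * se_re])
print(f"pooled OR (low vs high volume) = {or_pooled:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```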

  13. A Methodology for the Derivation of Unloaded Abdominal Aortic Aneurysm Geometry With Experimental Validation

    PubMed Central

    Chandra, Santanu; Gnanaruban, Vimalatharmaiyah; Riveros, Fabian; Rodriguez, Jose F.; Finol, Ender A.

    2016-01-01

    In this work, we present a novel method for the derivation of the unloaded geometry of an abdominal aortic aneurysm (AAA) from a pressurized geometry in turn obtained by 3D reconstruction of computed tomography (CT) images. The approach was experimentally validated with an aneurysm phantom loaded with gauge pressures of 80, 120, and 140 mm Hg. The unloaded phantom geometries estimated from these pressurized states were compared to the actual unloaded phantom geometry, resulting in mean nodal surface distances of up to 3.9% of the maximum aneurysm diameter. An in-silico verification was also performed using a patient-specific AAA mesh, resulting in maximum nodal surface distances of 8 μm after running the algorithm for eight iterations. The methodology was then applied to 12 patient-specific AAA for which their corresponding unloaded geometries were generated in 5–8 iterations. The wall mechanics resulting from finite element analysis of the pressurized (CT image-based) and unloaded geometries were compared to quantify the relative importance of using an unloaded geometry for AAA biomechanics. The pressurized AAA models underestimate peak wall stress (quantified by the first principal stress component) on average by 15% compared to the unloaded AAA models. The validation and application of the method, readily compatible with any finite element solver, underscores the importance of generating the unloaded AAA volume mesh prior to using wall stress as a biomechanical marker for rupture risk assessment. PMID:27538124
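
    The unloaded-geometry recovery can be pictured as a fixed-point iteration that corrects the unloaded candidate by the mismatch between the computed loaded shape and the imaged shape. The sketch below uses a toy radial inflation in place of the finite element solve, so it illustrates only the iteration structure, not the authors' solver or material model.

```python
# Schematic backward-displacement iteration for an unloaded geometry (toy forward model).
import numpy as np

def forward_inflate(x_unloaded, pressure=0.15):
    """Toy 'FE solve': inflate points radially by a pressure-dependent factor."""
    return x_unloaded * (1.0 + pressure)

# image-based (pressurized) geometry: points on a circle of radius 11.5 mm
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
x_image = 11.5 * np.column_stack([np.cos(theta), np.sin(theta)])

x_unloaded = x_image.copy()             # start from the imaged geometry
for it in range(8):
    x_loaded = forward_inflate(x_unloaded)
    residual = x_loaded - x_image       # nodal mismatch against the image
    x_unloaded -= residual              # backward-displacement update
    print(f"iteration {it + 1}: max nodal mismatch = {np.abs(residual).max():.2e} mm")
```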

  14. Methodology for turbulence code validation: Quantification of simulation-experiment agreement and application to the TORPEX experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricci, Paolo; Theiler, C.; Fasoli, A.

    A methodology for plasma turbulence code validation is discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The present work extends the analysis carried out in a previous paper [P. Ricci et al., Phys. Plasmas 16, 055703 (2009)] where the validation observables were introduced. Here, it is discussed how to quantify the agreement between experiments and simulations with respect to each observable, how to define a metric to evaluate this agreement globally, and - finally - how to assess the quality of a validation procedure. The methodology is then applied to the simulation of the basic plasma physics experiment TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulation models.
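
    Schematically, a composite validation metric of this kind combines per-observable discrepancies normalized by the combined experimental and simulation uncertainties. The sketch below uses illustrative weights and a simple normalization, not the exact definitions of the cited methodology.

```python
# Schematic composite agreement metric across validation observables (toy values).
import numpy as np

# per observable: experimental value, exp. uncertainty, simulated value, sim. uncertainty, weight
observables = {
    "density profile":     (1.00, 0.10, 0.90, 0.08, 1.0),
    "temperature profile": (5.00, 0.60, 6.10, 0.50, 1.0),
    "fluctuation level":   (0.30, 0.05, 0.42, 0.06, 0.5),
}

num, den = 0.0, 0.0
for name, (e, de, s, ds, w) in observables.items():
    d = abs(e - s) / np.sqrt(de**2 + ds**2)   # discrepancy normalized by combined uncertainty
    print(f"{name:22s}: d = {d:.2f}")
    num += w * d
    den += w

print(f"composite (weighted) discrepancy: {num / den:.2f}  (smaller = better agreement)")
```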

  15. Validation of the ULCEAT methodology by applying it in retrospect to the Roboticbed.

    PubMed

    Nakamura, Mio; Suzurikawa, Jun; Tsukada, Shohei; Kume, Yohei; Kawakami, Hideo; Inoue, Kaoru; Inoue, Takenobu

    2015-01-01

    In answer to the increasing demand for care by the oldest portion of the Japanese population, an extensive programme of life support robots is under development, advocated by the Japanese government. Roboticbed® (RB) was developed to help patients in their daily life make independent transfers from and to the bed. The bed is intended both for elderly persons and persons with a disability. The purpose of this study is to examine the validity of the user and user's life centred clinical evaluation of assistive technology (ULCEAT) methodology. The ULCEAT method was developed to support user-centred development of life support robots. By means of the ULCEAT method, the target users and the use environment were re-established in an earlier study. The validity of the method is tested by re-evaluating the development of RB in retrospect. Six participants used the first prototype of RB (RB1) and eight participants used the second prototype of RB (RB2). The results indicated that the functionality was improved owing to the end-user evaluations. Therefore, we confirmed the content validity of the proposed ULCEAT method. In this study we confirmed the validity of the ULCEAT methodology by applying it in retrospect to the RB development process. This method will be used for the development of life-support robots and prototype assistive technologies.

  16. A Proposed Methodology for the Conceptualization, Operationalization, and Empirical Validation of the Concept of Information Need

    ERIC Educational Resources Information Center

    Afzal, Waseem

    2017-01-01

    Introduction: The purpose of this paper is to propose a methodology to conceptualize, operationalize, and empirically validate the concept of information need. Method: The proposed methodology makes use of both qualitative and quantitative perspectives, and includes a broad array of approaches such as literature reviews, expert opinions, focus…

  17. Validation of biomarkers to predict response to immunotherapy in cancer: Volume I - pre-analytical and analytical validation.

    PubMed

    Masucci, Giuseppe V; Cesano, Alessandra; Hawtin, Rachael; Janetzki, Sylvia; Zhang, Jenny; Kirsch, Ilan; Dobbin, Kevin K; Alvarez, John; Robbins, Paul B; Selvan, Senthamil R; Streicher, Howard Z; Butterfield, Lisa H; Thurin, Magdalena

    2016-01-01

    Immunotherapies have emerged as one of the most promising approaches to treat patients with cancer. Recently, there have been many clinical successes using checkpoint receptor blockade, including T cell inhibitory receptors such as cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) and programmed cell death-1 (PD-1). Despite demonstrated successes in a variety of malignancies, responses only typically occur in a minority of patients in any given histology. Additionally, treatment is associated with inflammatory toxicity and high cost. Therefore, determining which patients would derive clinical benefit from immunotherapy is a compelling clinical question. Although numerous candidate biomarkers have been described, there are currently three FDA-approved assays based on PD-1 ligand expression (PD-L1) that have been clinically validated to identify patients who are more likely to benefit from a single-agent anti-PD-1/PD-L1 therapy. Because of the complexity of the immune response and tumor biology, it is unlikely that a single biomarker will be sufficient to predict clinical outcomes in response to immune-targeted therapy. Rather, the integration of multiple tumor and immune response parameters, such as protein expression, genomics, and transcriptomics, may be necessary for accurate prediction of clinical benefit. Before a candidate biomarker and/or new technology can be used in a clinical setting, several steps are necessary to demonstrate its clinical validity. Although regulatory guidelines provide general roadmaps for the validation process, their applicability to biomarkers in the cancer immunotherapy field is somewhat limited. Thus, Working Group 1 (WG1) of the Society for Immunotherapy of Cancer (SITC) Immune Biomarkers Task Force convened to address this need. In this two volume series, we discuss pre-analytical and analytical (Volume I) as well as clinical and regulatory (Volume II) aspects of the validation process as applied to predictive biomarkers

  18. Validation of equations for pleural effusion volume estimation by ultrasonography.

    PubMed

    Hassan, Maged; Rizk, Rana; Essam, Hatem; Abouelnour, Ahmed

    2017-12-01

    To validate the accuracy of previously published equations that estimate pleural effusion volume using ultrasonography. Only equations using simple measurements were tested. Three measurements were taken at the posterior axillary line for each case with effusion: lateral height of effusion (H), distance between collapsed lung and chest wall (C) and distance between lung and diaphragm (D). Cases whose effusion was aspirated to dryness were included and drained volume was recorded. Intra-class correlation coefficient (ICC) was used to determine the predictive accuracy of five equations against the actual volume of aspirated effusion. 46 cases with effusion were included. The most accurate equation in predicting effusion volume was (H + D) × 70 (ICC 0.83). The simplest and yet accurate equation was H × 100 (ICC 0.79). Pleural effusion height measured by ultrasonography gives a reasonable estimate of effusion volume. Incorporating distance between lung base and diaphragm into estimation improves accuracy from 79% with the first method to 83% with the latter.
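
    A small sketch of the two estimation equations, assuming ultrasound distances in centimetres and volumes in millilitres (the units conventionally used with these formulas; confirm against the original paper). The example measurements are invented.

```python
# Pleural effusion volume estimates from simple ultrasound measurements.
def effusion_volume_simple(h_cm: float) -> float:
    """Simplest estimate reported above: V = H x 100."""
    return h_cm * 100.0

def effusion_volume_hd(h_cm: float, d_cm: float) -> float:
    """Most accurate estimate in the study: V = (H + D) x 70."""
    return (h_cm + d_cm) * 70.0

h, d = 6.0, 2.5   # lateral effusion height and lung-diaphragm distance (example values)
print(f"H x 100      : {effusion_volume_simple(h):6.0f} mL")
print(f"(H + D) x 70 : {effusion_volume_hd(h, d):6.0f} mL")
```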

  19. Validated low-volume aldosterone immunoassay tailored to GCLP-compliant investigations in small sample volumes.

    PubMed

    Schaefer, J; Burckhardt, B B; Tins, J; Bartel, A; Laeer, S

    2017-12-01

    Heart failure is well investigated in adults, but data in children are lacking. To overcome this shortage of reliable data, appropriate bioanalytical assays are required. Development and validation of a bioanalytical assay for the determination of aldosterone concentrations in small sample volumes applicable to clinical studies under Good Clinical Laboratory Practice. An immunoassay was developed based on a commercially available enzyme-linked immunosorbent assay and validated according to current bioanalytical guidelines of the EMA and FDA. The assay (range 31.3-1000 pg/mL [86.9-2775 pmol/L]) is characterized by a between-run accuracy from -3.8% to -0.8% and a between-run imprecision ranging from 4.9% to 8.9% (coefficient of variation). For within-run accuracy, the relative error was between -11.1% and +9.0%, while within-run imprecision ranged from 1.2% to 11.8% (CV). For parallelism and dilutional linearity, the relative error of back-calculated concentrations varied from -14.1% to +8.4% and from -7.4% to +10.5%, respectively. The immunoassay is compliant with the bioanalytical guidelines of the EMA and FDA and allows accurate and precise aldosterone determinations. As the assay can run low-volume samples, it is especially valuable for pediatric investigations.
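
    The between-run accuracy and imprecision figures quoted above correspond to simple statistics over quality-control runs; a minimal sketch with invented replicate values is shown below.

```python
# Between-run accuracy (%RE) and imprecision (%CV) for one QC level (toy data).
import numpy as np

nominal = 250.0                                         # pg/mL, nominal QC concentration
runs = np.array([238.0, 242.5, 255.0, 247.0, 251.5])    # measured concentration per run

mean = runs.mean()
relative_error = (mean - nominal) / nominal * 100.0     # accuracy, % relative error
cv = runs.std(ddof=1) / mean * 100.0                    # imprecision, % coefficient of variation

print(f"between-run accuracy : {relative_error:+.1f}% RE")
print(f"between-run precision: {cv:.1f}% CV")
```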

  20. Broadband Fan Noise Prediction System for Turbofan Engines. Volume 3; Validation and Test Cases

    NASA Technical Reports Server (NTRS)

    Morin, Bruce L.

    2010-01-01

    Pratt & Whitney has developed a Broadband Fan Noise Prediction System (BFaNS) for turbofan engines. This system computes the noise generated by turbulence impinging on the leading edges of the fan and fan exit guide vane, and noise generated by boundary-layer turbulence passing over the fan trailing edge. BFaNS has been validated on three fan rigs that were tested during the NASA Advanced Subsonic Technology Program (AST). The predicted noise spectra agreed well with measured data. The predicted effects of fan speed, vane count, and vane sweep also agreed well with measurements. The noise prediction system consists of two computer programs: Setup_BFaNS and BFaNS. Setup_BFaNS converts user-specified geometry and flow-field information into a BFaNS input file. From this input file, BFaNS computes the inlet and aft broadband sound power spectra generated by the fan and FEGV. The output file from BFaNS contains the inlet, aft and total sound power spectra from each noise source. This report is the third volume of a three-volume set documenting the Broadband Fan Noise Prediction System: Volume 1: Setup_BFaNS User's Manual and Developer's Guide; Volume 2: BFaNS User's Manual and Developer's Guide; and Volume 3: Validation and Test Cases. The present volume begins with an overview of the Broadband Fan Noise Prediction System, followed by validation studies that were done on three fan rigs. It concludes with recommended improvements and additional studies for BFaNS.

  1. Methodology and issues of integral experiments selection for nuclear data validation

    NASA Astrophysics Data System (ADS)

    Ivanova, Tatiana; Ivanov, Evgeny; Hill, Ian

    2017-09-01

    Nuclear data validation involves a large suite of Integral Experiments (IEs) for criticality, reactor physics and dosimetry applications. [1] Often benchmarks are taken from international Handbooks. [2, 3] Depending on the application, IEs have different degrees of usefulness in validation, and usually the use of a single benchmark is not advised; indeed, it may lead to erroneous interpretation and results. [1] This work aims at quantifying the importance of benchmarks used in application dependent cross section validation. The approach is based on well-known General Linear Least Squared Method (GLLSM) extended to establish biases and uncertainties for given cross sections (within a given energy interval). The statistical treatment results in a vector of weighting factors for the integral benchmarks. These factors characterize the value added by a benchmark for nuclear data validation for the given application. The methodology is illustrated by one example, selecting benchmarks for 239Pu cross section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files) established at the Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).
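
    As a schematic of the GLLSM idea, the sketch below scores each benchmark by how much it alone would reduce the prior cross-section uncertainty in a toy three-parameter problem. The sensitivity matrix, covariances, and the specific scoring rule are illustrative assumptions, not the Subgroup 39 definitions.

```python
# Schematic GLLS-style benchmark weighting on a toy problem.
import numpy as np

S = np.array([[0.8, 0.1, 0.0],      # sensitivities: d(response)/d(parameter),
              [0.3, 0.6, 0.1],      # one row per benchmark, one column per
              [0.1, 0.2, 0.7]])     # cross-section parameter (relative units)
M = np.diag([0.04, 0.09, 0.02])     # prior parameter covariance
V = np.diag([0.01, 0.02, 0.015])    # benchmark variances (experiment + modelling)

def posterior_cov(S, M, V):
    """GLLS posterior parameter covariance M' = M - M S^T (S M S^T + V)^-1 S M."""
    G = S @ M @ S.T + V
    return M - M @ S.T @ np.linalg.solve(G, S @ M)

prior_unc = np.trace(M)
for k in range(S.shape[0]):
    Mk = posterior_cov(S[k:k+1, :], M, V[k:k+1, k:k+1])
    weight = 1.0 - np.trace(Mk) / prior_unc        # fractional uncertainty reduction
    print(f"benchmark {k + 1}: uncertainty-reduction weight = {weight:.3f}")
```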

  2. Assessment Methodology for Process Validation Lifecycle Stage 3A.

    PubMed

    Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Chen, Shu; Ingram, Marzena; Spes, Jana

    2017-07-01

    The paper introduces evaluation methodologies and associated statistical approaches for process validation lifecycle Stage 3A. The assessment tools proposed can be applied to newly developed and launched small molecule as well as bio-pharma products, where substantial process and product knowledge has been gathered. The following elements may be included in Stage 3A: number of 3A batch determination; evaluation of critical material attributes, critical process parameters, critical quality attributes; in vivo in vitro correlation; estimation of inherent process variability (IPV) and PaCS index; process capability and quality dashboard (PCQd); and enhanced control strategy. US FDA guidance on Process Validation: General Principles and Practices, January 2011 encourages applying previous credible experience with suitably similar products and processes. A complete Stage 3A evaluation is a valuable resource for product development and future risk mitigation of similar products and processes. Elements of 3A assessment were developed to address industry and regulatory guidance requirements. The conclusions made provide sufficient information to make a scientific and risk-based decision on product robustness.
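
    As one concrete ingredient of a process capability and quality dashboard, the sketch below computes Cp and Cpk for a single critical quality attribute from Stage 3A batch results; the specification limits and batch data are invented.

```python
# Process capability indices for one critical quality attribute (toy batch data).
import numpy as np

batches = np.array([99.1, 100.4, 98.7, 99.8, 100.9, 99.5, 100.2, 99.0])  # assay, % label claim
lsl, usl = 95.0, 105.0                                                    # specification limits

mu, sigma = batches.mean(), batches.std(ddof=1)
cp  = (usl - lsl) / (6 * sigma)                 # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)     # capability accounting for centering
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```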

  3. Development and Validation of a Photonumeric Scale for Evaluation of Volume Deficit of the Hand

    PubMed Central

    Donofrio, Lisa; Hardas, Bhushan; Murphy, Diane K.; Carruthers, Jean; Carruthers, Alastair; Sykes, Jonathan M.; Creutz, Lela; Marx, Ann; Dill, Sara

    2016-01-01

    BACKGROUND A validated scale is needed for objective and reproducible comparisons of hand appearance before and after treatment in practice and clinical studies. OBJECTIVE To describe the development and validation of the 5-point photonumeric Allergan Hand Volume Deficit Scale. METHODS The scale was developed to include an assessment guide, verbal descriptors, morphed images, and real-subject images for each grade. The clinical significance of a 1-point score difference was evaluated in a review of image pairs representing varying differences in severity. Interrater and intrarater reliability was evaluated in a live-subject validation study (N = 296) completed during 2 sessions occurring 3 weeks apart. RESULTS A score difference of ≥1 point was shown to reflect a clinically significant difference (mean [95% confidence interval] absolute score difference, 1.12 [0.99–1.26] for clinically different image pairs and 0.45 [0.33–0.57] for not clinically different pairs). Intrarater agreement between the 2 validation sessions was almost perfect (mean weighted kappa = 0.83). Interrater agreement was almost perfect during the second session (0.82, primary end point). CONCLUSION The Allergan Hand Volume Deficit Scale is a validated and reliable scale for physician rating of hand volume deficit. PMID:27661741
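
    The intrarater statistic reported above is a weighted kappa between two rating sessions; a minimal sketch with invented 5-point scores (grades 0-4) follows.

```python
# Linearly weighted kappa between two rating sessions on a 5-point scale (toy scores).
from sklearn.metrics import cohen_kappa_score

session_1 = [0, 1, 2, 3, 4, 2, 1, 3, 4, 0, 2, 3]   # grades assigned at session 1
session_2 = [0, 1, 2, 3, 4, 2, 2, 3, 4, 1, 2, 3]   # same subjects rated again at session 2

kappa = cohen_kappa_score(session_1, session_2, weights="linear")
print(f"weighted kappa = {kappa:.2f}")
```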

  4. The Self-Concept. Volume 1, A Review of Methodological Considerations and Measuring Instruments. Revised Edition.

    ERIC Educational Resources Information Center

    Wylie, Ruth C.

    This volume of the revised edition describes and evaluates measurement methods, research designs, and procedures which have been or might appropriately be used in self-concept research. Working from the perspective that self-concept or phenomenal personality theories can be scientifically investigated, methodological flaws and questionable…

  5. Monte Carlo-based validation of neutronic methodology for EBR-II analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liaw, J.R.; Finck, P.J.

    1993-01-01

    The continuous-energy Monte Carlo code VIM (Ref. 1) has been validated extensively over the years against fast critical experiments and other neutronic analysis codes. A high degree of confidence in VIM for predicting reactor physics parameters has been firmly established. This paper presents a numerical validation of two conventional multigroup neutronic analysis codes, DIF3D (Ref. 4) and VARIANT (Ref. 5), against VIM for two Experimental Breeder Reactor II (EBR-II) core loadings in detailed three-dimensional hexagonal-z geometry. The DIF3D code is based on nodal diffusion theory, and it is used in calculations for day-to-day reactor operations, whereas the VARIANT code is based on nodal transport theory and is used with increasing frequency for specific applications. Both DIF3D and VARIANT rely on multigroup cross sections generated from ENDF/B-V by the ETOE-2/MC²-II/SDX (Ref. 6) code package. Hence, this study also validates the multigroup cross-section processing methodology against the continuous-energy approach used in VIM.

  6. Methodological Validation of Quality of Life Questionnaire for Coal Mining Groups-Indian Scenario

    ERIC Educational Resources Information Center

    Sen, Sayanti; Sen, Goutam; Tewary, B. K.

    2012-01-01

    Maslow's hierarchy-of-needs theory has been used to predict development of Quality of Life (QOL) in countries over time. In this paper an attempt has been made to derive a methodological validation of a quality of life questionnaire which has been prepared for the study area. The objective of the study is to standardize a questionnaire tool to…

  7. An arbitrary grid CFD algorithm for configuration aerodynamics analysis. Volume 1: Theory and validations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Iannelli, G. S.; Manhardt, Paul D.; Orzechowski, J. A.

    1993-01-01

    This report documents the user input and output data requirements for the FEMNAS finite element Navier-Stokes code for real-gas simulations of external aerodynamics flowfields. This code was developed for the configuration aerodynamics branch of NASA ARC, under SBIR Phase 2 contract NAS2-124568 by Computational Mechanics Corporation (COMCO). This report is in two volumes. Volume 1 contains the theory for the derived finite element algorithm and describes the test cases used to validate the computer program described in the Volume 2 user guide.

  8. ICP-MS/MS-Based Ionomics: A Validated Methodology to Investigate the Biological Variability of the Human Ionome.

    PubMed

    Konz, Tobias; Migliavacca, Eugenia; Dayon, Loïc; Bowman, Gene; Oikonomidi, Aikaterini; Popp, Julius; Rezzi, Serge

    2017-05-05

    We here describe the development, validation and application of a quantitative methodology for the simultaneous determination of 29 elements in human serum using state-of-the-art inductively coupled plasma triple quadrupole mass spectrometry (ICP-MS/MS). This new methodology offers high-throughput elemental profiling using simple dilution of minimal quantity of serum samples. We report the outcomes of the validation procedure including limits of detection/quantification, linearity of calibration curves, precision, recovery and measurement uncertainty. ICP-MS/MS-based ionomics was used to analyze human serum of 120 older adults. Following a metabolomic data mining approach, the generated ionome profiles were subjected to principal component analysis revealing gender and age-specific differences. The ionome of female individuals was marked by higher levels of calcium, phosphorus, copper and copper to zinc ratio, while iron concentration was lower with respect to male subjects. Age was associated with lower concentrations of zinc. These findings were complemented with additional readouts to interpret micronutrient status including ceruloplasmin, ferritin and inorganic phosphate. Our data supports a gender-specific compartmentalization of the ionome that may reflect different bone remodelling in female individuals. Our ICP-MS/MS methodology enriches the panel of validated "Omics" approaches to study molecular relationships between the exposome and the ionome in relation with nutrition and health.

  9. Systematic investigation of gastrointestinal diseases in China (SILC): validation of survey methodology.

    PubMed

    Yan, Xiaoyan; Wang, Rui; Zhao, Yanfang; Ma, Xiuqiang; Fang, Jiqian; Yan, Hong; Kang, Xiaoping; Yin, Ping; Hao, Yuantao; Li, Qiang; Dent, John; Sung, Joseph; Zou, Duowu; Johansson, Saga; Halling, Katarina; Liu, Wenbin; He, Jia

    2009-11-19

    Symptom-based surveys suggest that the prevalence of gastrointestinal diseases is lower in China than in Western countries. The aim of this study was to validate a methodology for the epidemiological investigation of gastrointestinal symptoms and endoscopic findings in China. A randomized, stratified, multi-stage sampling methodology was used to select 18,000 adults aged 18-80 years from Shanghai, Beijing, Xi'an, Wuhan and Guangzhou. Participants from Shanghai were invited to provide blood samples and undergo upper gastrointestinal endoscopy. All participants completed Chinese versions of the Reflux Disease Questionnaire (RDQ) and the modified Rome II questionnaire; 20% were also invited to complete the 36-item Short Form Health Survey (SF-36) and Epworth Sleepiness Scale (ESS). The psychometric properties of the questionnaires were evaluated statistically. The study was completed by 16,091 individuals (response rate: 89.4%), with 3219 (89.4% of those invited) completing the SF-36 and ESS. All 3153 participants in Shanghai provided blood samples and 1030 (32.7%) underwent endoscopy. Cronbach's alpha coefficients were 0.89, 0.89, 0.80 and 0.91, respectively, for the RDQ, modified Rome II questionnaire, ESS and SF-36, supporting internal consistency. Factor analysis supported construct validity of all questionnaire dimensions except SF-36 psychosocial dimensions. This population-based study has great potential to characterize the relationship between gastrointestinal symptoms and endoscopic findings in China.
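
    The internal-consistency figures above are Cronbach's alpha values; the sketch below computes alpha from a simulated items-by-respondents score matrix (placeholder data, not the SILC responses).

```python
# Cronbach's alpha for a questionnaire from an items-by-respondents matrix (simulated data).
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))                       # simulated respondent trait
items = latent + 0.6 * rng.normal(size=(200, 10))        # 10 correlated item scores per respondent

k = items.shape[1]
item_var = items.var(axis=0, ddof=1).sum()               # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)                # variance of total score
alpha = k / (k - 1) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```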

  10. Validation of MIL-F-9490D. General Specification for Flight Control System for Piloted Military Aircraft. Volume III. C-5A Heavy Logistics Transport Validation

    DTIC Science & Technology

    1977-04-01

    AFFDL-TR-77-7, Volume III: Validation of MIL-F-9490D, General Specification for Flight Control System for Piloted Military Aircraft; Volume III covers the C-5A heavy logistics transport validation. The report documents validation against specification MIL-F-9490D (USAF), "Flight Control Systems - Design, Installation and Test of Piloted Aircraft, General Specifications for," dated 6 June 1975.

  11. Validating a new methodology for optical probe design and image registration in fNIRS studies

    PubMed Central

    Wijeakumar, Sobanawartiny; Spencer, John P.; Bohache, Kevin; Boas, David A.; Magnotta, Vincent A.

    2015-01-01

    Functional near-infrared spectroscopy (fNIRS) is an imaging technique that relies on the principle of shining near-infrared light through tissue to detect changes in hemodynamic activation. An important methodological issue encountered is the creation of optimized probe geometry for fNIRS recordings. Here, across three experiments, we describe and validate a processing pipeline designed to create an optimized, yet scalable probe geometry based on selected regions of interest (ROIs) from the functional magnetic resonance imaging (fMRI) literature. In experiment 1, we created a probe geometry optimized to record changes in activation from target ROIs important for visual working memory. Positions of the sources and detectors of the probe geometry on an adult head were digitized using a motion sensor and projected onto a generic adult atlas and a segmented head obtained from the subject's MRI scan. In experiment 2, the same probe geometry was scaled down to fit a child's head and later digitized and projected onto the generic adult atlas and a segmented volume obtained from the child's MRI scan. Using visualization tools and by quantifying the amount of intersection between target ROIs and channels, we show that out of 21 ROIs, 17 and 19 ROIs intersected with fNIRS channels from the adult and child probe geometries, respectively. Further, both the adult atlas and adult subject-specific MRI approaches yielded similar results and can be used interchangeably. However, results suggest that segmented heads obtained from MRI scans be used for registering children's data. Finally, in experiment 3, we further validated our processing pipeline by creating a different probe geometry designed to record from target ROIs involved in language and motor processing. PMID:25705757
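
    The channel-ROI coverage check can be sketched as follows: treat each channel as the midpoint of a source-detector pair and test whether any channel lies within a sensitivity radius of each target ROI. The coordinates, radius, and ROI names below are invented for illustration; the actual pipeline projects digitized positions onto an atlas or subject-specific MRI.

```python
# Counting which target ROIs intersect fNIRS channels (toy geometry, mm coordinates).
import numpy as np

sources   = np.array([[-40.0, 20.0, 50.0], [ 40.0, 20.0, 50.0]])
detectors = np.array([[-55.0, 10.0, 45.0], [ 55.0, 10.0, 45.0], [0.0, 30.0, 60.0]])
rois      = {"left dlPFC":  np.array([-45.0, 18.0, 40.0]),
             "right dlPFC": np.array([ 45.0, 18.0, 40.0]),
             "SMA":         np.array([  0.0, -5.0, 65.0])}

channels = np.array([(s + d) / 2 for s in sources for d in detectors])  # channel midpoints
radius = 15.0                                                           # assumed sensitivity radius

covered = {name: bool(np.any(np.linalg.norm(channels - centre, axis=1) < radius))
           for name, centre in rois.items()}
print(covered)
print(f"{sum(covered.values())}/{len(rois)} target ROIs intersect a channel")
```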

  12. Three-dimensional registration of intravascular optical coherence tomography and cryo-image volumes for microscopic-resolution validation.

    PubMed

    Prabhu, David; Mehanna, Emile; Gargesha, Madhusudhana; Brandt, Eric; Wen, Di; van Ditzhuijzen, Nienke S; Chamie, Daniel; Yamamoto, Hirosada; Fujino, Yusuke; Alian, Ali; Patel, Jaymin; Costa, Marco; Bezerra, Hiram G; Wilson, David L

    2016-04-01

    Evidence suggests high-resolution, high-contrast, [Formula: see text] intravascular optical coherence tomography (IVOCT) can distinguish plaque types, but further validation is needed, especially for automated plaque characterization. We developed experimental and three-dimensional (3-D) registration methods to provide validation of IVOCT pullback volumes using microscopic, color, and fluorescent cryo-image volumes with optional registered cryo-histology. A specialized registration method matched IVOCT pullback images acquired in the catheter reference frame to a true 3-D cryo-image volume. Briefly, an 11-parameter registration model including a polynomial virtual catheter was initialized within the cryo-image volume, and perpendicular images were extracted, mimicking IVOCT image acquisition. Virtual catheter parameters were optimized to maximize cryo and IVOCT lumen overlap. Multiple assessments suggested that the registration error was better than the [Formula: see text] spacing between IVOCT image frames. Tests on a digital synthetic phantom gave a registration error of only [Formula: see text] (signed distance). Visual assessment of randomly presented nearby frames suggested registration accuracy within 1 IVOCT frame interval ([Formula: see text]). This would eliminate potential misinterpretations confronted by the typical histological approaches to validation, with estimated 1-mm errors. The method can be used to create annotated datasets and automated plaque classification methods and can be extended to other intravascular imaging modalities.

  13. The Adequacy of the Q Methodology for Clinical Validation of Nursing Diagnoses Related to Subjective Foci.

    PubMed

    Miguel, Susana; Caldeira, Sílvia; Vieira, Margarida

    2018-04-01

    This article describes the adequacy of the Q methodology as a new option for the validation of nursing diagnoses related to subjective foci. Discussion paper about the characteristics of the Q methodology. This method has been used in nursing research, particularly in relation to subjective concepts, and includes both a quantitative and a qualitative dimension. The Q methodology seems to be an adequate and innovative method for the clinical validation of nursing diagnoses. The validation of nursing diagnoses related to subjective foci using the Q methodology could improve the level of evidence and provide nurses with clinical indicators for clinical reasoning and for the planning of effective interventions. © 2016 NANDA International, Inc.

  14. Ocean Optics Protocols for Satellite Ocean Color Sensor Validation. Volume 6; Special Topics in Ocean Optics Protocols and Appendices; Revised

    NASA Technical Reports Server (NTRS)

    Mueller, J. L. (Editor); Fargion, Giulietta S. (Editor); McClain, Charles R. (Editor)

    2003-01-01

    This document stipulates protocols for measuring bio-optical and radiometric data for the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project activities and algorithm development. The document is organized into 6 separate volumes as Ocean Optics Protocols for Satellite Ocean Color Sensor Validation, Revision 4. Volume I: Introduction, Background and Conventions; Volume II: Instrument Specifications, Characterization and Calibration; Volume III: Radiometric Measurements and Data Analysis Methods; Volume IV: Inherent Optical Properties: Instruments, Characterization, Field Measurements and Data Analysis Protocols; Volume V: Biogeochemical and Bio-Optical Measurements and Data Analysis Methods; Volume VI: Special Topics in Ocean Optics Protocols and Appendices. The earlier version of Ocean Optics Protocols for Satellite Ocean Color Sensor Validation, Revision 3 (Mueller and Fargion 2002, Volumes 1 and 2) is entirely superseded by the six volumes of Revision 4 listed above.

  15. A normative price for a manufactured product: The SAMICS methodology. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1979-01-01

    A summary for the Solar Array Manufacturing Industry Costing Standards report contains a discussion of capabilities and limitations, a non-technical overview of the methodology, and a description of the input data which must be collected. It also describes the activities that were and are being taken to ensure validity of the results and contains an up-to-date bibliography of related documents.

  16. Orbital flight test shuttle external tank aerothermal flight evaluation, volume 1

    NASA Technical Reports Server (NTRS)

    Praharaj, Sarat C.; Engel, Carl D.; Warmbrod, John D.

    1986-01-01

    This 3-volume report discusses the evaluation of aerothermal flight measurements made on the orbital flight test Space Shuttle External Tanks (ETs). Six ETs were instrumented to measure various quantities during flight; including heat transfer, pressure, and structural temperature. The flight data was reduced and analyzed against math models established from an extensive wind tunnel data base and empirical heat-transfer relationships. This analysis has supported the validity of the current aeroheating methodology and existing data base; and, has also identified some problem areas which require methodology modifications. This is Volume 1, an Executive Summary. Volume 2 contains Appendices A (Aerothermal Comparisons) and B (Flight-Derived h sub i/h sub u vs. M sub inf. Plots), and Volume 3 contains Appendix C (Comparison of Interference Factors among OFT Flight, Prediction and 1H-97A Data), Appendix D (Freestream Stanton Number and Reynolds Number Correlation for Flight and Tunnel Data), and Appendix E (Flight-Derived h sub i/h sub u Tables).

  17. Orbital flight test shuttle external tank aerothermal flight evaluation, volume 3

    NASA Technical Reports Server (NTRS)

    Praharaj, Sarat C.; Engel, Carl D.; Warmbrod, John D.

    1986-01-01

    This 3-volume report discusses the evaluation of aerothermal flight measurements made on the orbital flight test Space Shuttle External Tanks (ETs). Six ETs were instrumented to measure various quantities during flight; including heat transfer, pressure, and structural temperature. The flight data was reduced and analyzed against math models established from an extensive wind tunnel data base and empirical heat-transfer relationships. This analysis has supported the validity of the current aeroheating methodology and existing data base; and, has also identified some problem areas which require methodology modifications. Volume 1 is the Executive Summary. Volume 2 contains Appendix A (Aerothermal Comparisons), and Appendix B (Flight-Derived h sub i/h sub u vs. M sub inf. Plots). This is Volume 3, containing Appendix C (Comparison of Interference Factors between OFT Flight, Prediction and 1H-97A Data), Appendix D (Freestream Stanton Number and Reynolds Number Correlation for Flight and Tunnel Data), and Appendix E (Flight-Derived h sub i/h sub u Tables).

  18. Orbital flight test shuttle external tank aerothermal flight evaluation, volume 2

    NASA Technical Reports Server (NTRS)

    Praharaj, Sarat C.; Engel, Carl D.; Warmbrod, John D.

    1986-01-01

    This 3-volume report discusses the evaluation of aerothermal flight measurements made on the orbital flight test Space Shuttle External Tanks (ETs). Six ETs were instrumented to measure various quantities during flight; including heat transfer, pressure, and structural temperature. The flight data was reduced and analyzed against math models established from an extensive wind tunnel data base and empirical heat-transfer relationships. This analysis has supported the validity of the current aeroheating methodology and existing data base; and, has also identified some problem areas which require methodology modifications. Volume 1 is the Executive Summary. This is volume 2, containing Appendix A (Aerothermal Comparisons), and Appendix B (Flight-Derived h sub i/h sub u vs. M sub inf. Plots). Volume 3 contains Appendix C (Comparison of Interference Factors between OFT Flight, Prediction and 1H-97A Data), Appendix D (Freestream Stanton Number and Reynolds Number Correlation for Flight and Tunnel Data), and Appendix E (Flight-Derived h sub i/h sub u Tables).

  19. Validated low-volume immunoassay for the reliable determination of direct renin especially valuable for pediatric investigations.

    PubMed

    Schaefer, J; Burckhardt, B B; Tins, J; Bartel, A; Laeer, S

    2017-01-01

    The pharmacotherapy of pediatric patients suffering from heart failure is extrapolated from adults due to missing data in children. Development and validation of a low-volume immunoassay for the reliable determination of renin. The immunoassay was validated according to international guidelines. The assay allows the reliable determination of renin in 40 μL plasma within a calibration range of 4-128 pg/mL. Between-run accuracy varied from -3.3 to +3.0% (relative error), while between-run precision ranged from 4.9 to 11.3% (coefficient of variation). The low-volume immunoassay facilitates the reliable collection of pharmacodynamic data in children.

  20. Validation of Imaging With Pathology in Laryngeal Cancer: Accuracy of the Registration Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caldas-Magalhaes, Joana, E-mail: J.CaldasMagalhaes@umcutrecht.nl; Kasperts, Nicolien; Kooij, Nina

    2012-02-01

    Purpose: To investigate the feasibility and accuracy of an automated method to validate gross tumor volume (GTV) delineations with pathology in laryngeal and hypopharyngeal cancer. Methods and Materials: High-resolution computed tomography (CT_HR), magnetic resonance imaging (MRI), and positron emission tomography (PET) scans were obtained from 10 patients before total laryngectomy. The GTV was delineated separately in each imaging modality. The laryngectomy specimen was sliced transversely in 3-mm-thick slices, and whole-mount hematoxylin-eosin stained (H and E) sections were obtained. A pathologist delineated tumor tissue in the H and E sections (GTV_PATH). An automatic three-dimensional (3D) reconstruction of the specimen was performed, and the CT_HR, MRI, and PET were semiautomatically and rigidly registered to the 3D specimen. The accuracy of the pathology-imaging registration and the specimen deformation and shrinkage were assessed. The tumor delineation inaccuracies were compared with the registration errors. Results: Good agreement was observed between anatomical landmarks in the 3D specimen and in the in vivo images. Limited deformations and shrinkage (3% ± 1%) were found inside the cartilage skeleton. The root mean squared error of the registration between the 3D specimen and the CT, MRI, and PET was on average 1.5, 3.0, and 3.3 mm, respectively, in the cartilage skeleton. The GTV_PATH volume was 7.2 mL, on average. The GTVs based on CT, MRI, and PET generated a mean volume of 14.9, 18.3, and 9.8 mL and covered the GTV_PATH by 85%, 88%, and 77%, respectively. The tumor delineation inaccuracies exceeded the registration error in all the imaging modalities. Conclusions: Validation of GTV delineations with pathology is feasible with an average overall accuracy below 3.5 mm inside the laryngeal skeleton. The tumor delineation inaccuracies were larger than the registration error. Therefore, an accurate histological
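
    The coverage percentages quoted above amount to mask-overlap statistics on co-registered volumes; a minimal sketch using synthetic spherical masks in place of the real GTV_PATH and imaging-based delineations is shown below.

```python
# Coverage of a pathology-based tumour mask by an imaging-based GTV (synthetic masks).
import numpy as np

shape = (60, 60, 60)
zz, yy, xx = np.indices(shape)

def sphere(centre, radius):
    """Boolean mask of a sphere on the voxel grid."""
    return (zz - centre[0])**2 + (yy - centre[1])**2 + (xx - centre[2])**2 <= radius**2

gtv_path = sphere((30, 30, 30), 10)          # pathology-based tumour mask
gtv_img  = sphere((31, 31, 30), 13)          # imaging-based (e.g. CT) delineation

coverage = np.logical_and(gtv_path, gtv_img).sum() / gtv_path.sum() * 100.0
volume_ratio = gtv_img.sum() / gtv_path.sum()
print(f"GTV_PATH covered by imaging GTV: {coverage:.0f}%")
print(f"imaging GTV volume / GTV_PATH volume: {volume_ratio:.2f}")
```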

  1. Systematic investigation of gastrointestinal diseases in China (SILC): validation of survey methodology

    PubMed Central

    2009-01-01

    Background Symptom-based surveys suggest that the prevalence of gastrointestinal diseases is lower in China than in Western countries. The aim of this study was to validate a methodology for the epidemiological investigation of gastrointestinal symptoms and endoscopic findings in China. Methods A randomized, stratified, multi-stage sampling methodology was used to select 18 000 adults aged 18-80 years from Shanghai, Beijing, Xi'an, Wuhan and Guangzhou. Participants from Shanghai were invited to provide blood samples and undergo upper gastrointestinal endoscopy. All participants completed Chinese versions of the Reflux Disease Questionnaire (RDQ) and the modified Rome II questionnaire; 20% were also invited to complete the 36-item Short Form Health Survey (SF-36) and Epworth Sleepiness Scale (ESS). The psychometric properties of the questionnaires were evaluated statistically. Results The study was completed by 16 091 individuals (response rate: 89.4%), with 3219 (89.4% of those invited) completing the SF-36 and ESS. All 3153 participants in Shanghai provided blood samples and 1030 (32.7%) underwent endoscopy. Cronbach's alpha coefficients were 0.89, 0.89, 0.80 and 0.91, respectively, for the RDQ, modified Rome II questionnaire, ESS and SF-36, supporting internal consistency. Factor analysis supported construct validity of all questionnaire dimensions except SF-36 psychosocial dimensions. Conclusion This population-based study has great potential to characterize the relationship between gastrointestinal symptoms and endoscopic findings in China. PMID:19925662

  2. The PEDro scale had acceptably high convergent validity, construct validity, and interrater reliability in evaluating methodological quality of pharmaceutical trials.

    PubMed

    Yamato, Tie Parma; Maher, Chris; Koes, Bart; Moseley, Anne

    2017-06-01

    The Physiotherapy Evidence Database (PEDro) scale has been widely used to investigate methodological quality in physiotherapy randomized controlled trials; however, its validity has not been tested for pharmaceutical trials. The aim of this study was to investigate the validity and interrater reliability of the PEDro scale for pharmaceutical trials. The reliability was also examined for the Cochrane Back and Neck (CBN) Group risk of bias tool. This is a secondary analysis of data from a previous study. We considered randomized placebo controlled trials evaluating any pain medication for chronic spinal pain or osteoarthritis. Convergent validity was evaluated by correlating the PEDro score with the summary score of the CBN risk of bias tool. The construct validity was tested using a linear regression analysis to determine the degree to which the total PEDro score is associated with treatment effect sizes, journal impact factor, and the summary score for the CBN risk of bias tool. The interrater reliability was estimated using the Prevalence and Bias Adjusted Kappa coefficient and 95% confidence interval (CI) for the PEDro scale and CBN risk of bias tool. Fifty-three trials were included, with 91 treatment effect sizes included in the analyses. The correlation between the PEDro scale and the CBN risk of bias tool was 0.83 (95% CI 0.76-0.88) after adjusting for reliability, indicating strong convergence. The PEDro score was inversely associated with effect sizes, significantly associated with the summary score for the CBN risk of bias tool, and not associated with the journal impact factor. Interrater reliability was at least substantial (>0.60) for most items of the PEDro scale and the CBN risk of bias tool. The intraclass correlation coefficient for the PEDro score was 0.80 (95% CI 0.68-0.88), and for the CBN risk of bias tool it was 0.81 (95% CI 0.69-0.88). There was evidence for the convergent and construct validity for the PEDro scale when used to evaluate

  3. LOX/hydrocarbon rocket engine analytical design methodology development and validation. Volume 1: Executive summary and technical narrative

    NASA Technical Reports Server (NTRS)

    Pieper, Jerry L.; Walker, Richard E.

    1993-01-01

    During the past three decades, an enormous amount of resources was expended in the design and development of Liquid Oxygen/Hydrocarbon and Hydrogen (LOX/HC and LOX/H2) rocket engines. A significant portion of these resources was used to develop and demonstrate the performance and combustion stability for each new engine. During these efforts, many analytical and empirical models were developed that characterize design parameters and combustion processes that influence performance and stability. Many of these models are suitable as design tools, but they have not been assembled into an industry-wide usable analytical design methodology. The objective of this program was to assemble existing performance and combustion stability models into a usable methodology capable of producing high-performing and stable LOX/hydrocarbon and LOX/hydrogen propellant booster engines.

  4. A methodology for finding the optimal iteration number of the SIRT algorithm for quantitative Electron Tomography.

    PubMed

    Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen

    2017-02-01

    The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edge of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used. Copyright © 2016 Elsevier B.V. All rights reserved.
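
    To make the role of the iteration number concrete, the sketch below implements a generic textbook SIRT update, x_{k+1} = x_k + C Aᵀ R (b − A x_k), with R and C the inverse row and column sums of the projection matrix. This is not the TOMOJ or TOMO3D implementation, the fixed iteration count stands in for the edge-profile stopping criterion proposed in the paper, and the toy projection system is invented.

```python
import numpy as np

def sirt(A: np.ndarray, b: np.ndarray, n_iter: int) -> np.ndarray:
    """Basic SIRT update: x_{k+1} = x_k + C A^T R (b - A x_k),
    where R and C hold the inverse row and column sums of A."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = b - A @ x
        x = x + C * (A.T @ (R * residual))
    return x

# Toy projection system with known solution x = [1, 1, 2]; more iterations
# sharpen the reconstruction but, with noisy data, eventually amplify noise,
# which is why an optimal iteration number has to be chosen.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
b = np.array([2.0, 3.0, 3.0])
print(sirt(A, b, n_iter=50))
```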

  5. 3D registration of intravascular optical coherence tomography and cryo-image volumes for microscopic-resolution validation

    NASA Astrophysics Data System (ADS)

    Prabhu, David; Mehanna, Emile; Gargesha, Madhusudhana; Wen, Di; Brandt, Eric; van Ditzhuijzen, Nienke S.; Chamie, Daniel; Yamamoto, Hirosada; Fujino, Yusuke; Farmazilian, Ali; Patel, Jaymin; Costa, Marco; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    High resolution, 100 frames/sec intravascular optical coherence tomography (IVOCT) can distinguish plaque types, but further validation is needed, especially for automated plaque characterization. We developed experimental and 3D registration methods to provide validation of IVOCT pullback volumes using microscopic, brightfield and fluorescent cryo-image volumes, with optional, exactly registered cryo-histology. The innovation was a method to match IVOCT pullback images, acquired in the catheter reference frame, to a true 3D cryo-image volume. Briefly, an 11-parameter, polynomial virtual catheter was initialized within the cryo-image volume, and perpendicular images were extracted, mimicking IVOCT image acquisition. Virtual catheter parameters were optimized to maximize cryo and IVOCT lumen overlap. Local minima were possible, but when we started within reasonable ranges, every one of 24 digital phantom cases converged to a good solution with a registration error of only +1.34 ± 2.65 μm (signed distance). Registration was applied to 10 ex-vivo cadaver coronary arteries (LADs), resulting in 10 registered cryo and IVOCT volumes yielding a total of 421 registered 2D-image pairs. Image overlays demonstrated high continuity between vascular and plaque features. Bland-Altman analysis comparing cryo and IVOCT lumen area showed mean and standard deviation of differences as 0.01 ± 0.43 mm². DICE coefficients were 0.91 ± 0.04. Finally, visual assessment on 20 representative cases with easily identifiable features suggested registration accuracy within one frame of IVOCT (± 200 μm), eliminating significant misinterpretations introduced by 1 mm errors in the literature. The method will provide 3D data for training of IVOCT plaque algorithms and can be used for validation of other intravascular imaging modalities.
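
    The lumen-overlap agreement reported above (DICE coefficients of 0.91 ± 0.04) uses the standard Dice similarity coefficient on binary masks, 2|A∩B| / (|A|+|B|). A minimal sketch is below; the toy masks are invented and merely stand in for registered cryo and IVOCT lumen segmentations.

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Toy 2D lumen masks standing in for registered cryo and IVOCT segmentations
cryo_lumen = np.zeros((8, 8), dtype=bool)
cryo_lumen[2:6, 2:6] = True
ivoct_lumen = np.zeros((8, 8), dtype=bool)
ivoct_lumen[2:6, 3:7] = True
print(round(dice(cryo_lumen, ivoct_lumen), 2))  # 0.75 for this toy overlap
```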

  6. Multiphysics Simulation of Welding-Arc and Nozzle-Arc System: Mathematical-Model, Solution-Methodology and Validation

    NASA Astrophysics Data System (ADS)

    Pawar, Sumedh; Sharma, Atul

    2018-01-01

    This work presents a mathematical model and solution methodology for a multiphysics engineering problem on arc formation during welding and inside a nozzle. A general-purpose commercial CFD solver, ANSYS FLUENT 13.0.0, is used in this work. Arc formation involves strongly coupled gas dynamics and electro-dynamics, simulated by solution of the coupled Navier-Stokes equations, Maxwell's equations and the radiation heat-transfer equation. Validation of the present numerical methodology is demonstrated with an excellent agreement with the published results. The developed mathematical model and the user defined functions (UDFs) are independent of the geometry and are applicable to any system that involves arc formation in a 2D axisymmetric coordinate system. The high-pressure flow of SF6 gas in the nozzle-arc system resembles the arc chamber of an SF6 gas circuit breaker; thus, this methodology can be extended to simulate the arcing phenomenon during current interruption.

  7. AMOVA ["Accumulative Manifold Validation Analysis"]: An Advanced Statistical Methodology Designed to Measure and Test the Validity, Reliability, and Overall Efficacy of Inquiry-Based Psychometric Instruments

    ERIC Educational Resources Information Center

    Osler, James Edward, II

    2015-01-01

    This monograph provides an epistemological rationale for the Accumulative Manifold Validation Analysis [also referred to by the acronym "AMOVA"] statistical methodology designed to test psychometric instruments. This form of inquiry is a type of mathematical optimization in the discipline of linear stochastic modelling. AMOVA is an in-depth…

  8. Validation of the minimal citrate tube fill volume for routine coagulation tests on ACL TOP 500 CTS®.

    PubMed

    Ver Elst, K; Vermeiren, S; Schouwers, S; Callebaut, V; Thomson, W; Weekx, S

    2013-12-01

    CLSI recommends a minimal citrate tube fill volume of 90%. A validation protocol with clinical and analytical components was set up to determine the tube fill threshold for the international normalized ratio of prothrombin time (PT-INR), activated partial thromboplastin time (aPTT) and fibrinogen. Citrated coagulation samples from 16 healthy donors and eight patients receiving vitamin K antagonists (VKA) were evaluated. Eighty-nine tubes were filled to varying volumes of >50%. Coagulation tests were performed on the ACL TOP 500 CTS®. A Receiver Operating Characteristic (ROC) plot, with total error (TE) and critical difference (CD) as possible acceptance criteria, was used to determine the fill threshold. The ROC approach was most accurate with CD for PT-INR and TE for aPTT, resulting in thresholds of 63% for PT and 80% for aPTT. By adapted ROC, with the threshold set at the point of 100% sensitivity at maximum specificity, CD was best for PT and TE for aPTT, resulting in thresholds of 73% for PT and 90% for aPTT. For fibrinogen, the method was only valid with the TE criterion at a 63% fill volume. In our study, we validated minimal citrate tube fill volumes of 73%, 90% and 63% for PT-INR, aPTT and fibrinogen, respectively. © 2013 John Wiley & Sons Ltd.
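
    The "adapted ROC" rule described above (choose the fill volume that gives 100% sensitivity at maximum specificity) can be illustrated with a small sketch. The acceptance rule used here, flagging a tube as unacceptable when its result deviates from the full-tube result by more than the chosen criterion (TE or CD), is a simplification of the protocol, and the fill volumes and outcomes are invented.

```python
import numpy as np

def threshold_at_full_sensitivity(fill_pct, unacceptable):
    """Lowest fill threshold that still catches every unacceptable tube
    (100% sensitivity) while rejecting as few acceptable tubes as possible
    (maximum specificity): the threshold sits at the highest fill volume
    that still produced an unacceptable result."""
    fill_pct = np.asarray(fill_pct, dtype=float)
    unacceptable = np.asarray(unacceptable, dtype=bool)
    return fill_pct[unacceptable].max()

# Hypothetical tubes: fill volume (%) and whether the PT-INR deviated from
# the fully filled reference tube by more than the chosen criterion (CD or TE)
fills = [55, 60, 63, 68, 73, 80, 85, 90, 95]
deviated = [True, True, True, True, False, False, False, False, False]
print(threshold_at_full_sensitivity(fills, deviated))  # tubes filled above this are accepted
```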

  9. A self-contained, automated methodology for optimal flow control validated for transition delay

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Gunzburger, Max D.; Nicolaides, R. A.; Erlebacher, Gordon; Hussaini, M. Yousuff

    1995-01-01

    This paper describes a self-contained, automated methodology for flow control along with a validation of the methodology for the problem of boundary layer instability suppression. The objective of control is to match the stress vector along a portion of the boundary to a given vector; instability suppression is achieved by choosing the given vector to be that of a steady base flow, e.g., Blasius boundary layer. Control is effected through the injection or suction of fluid through a single orifice on the boundary. The present approach couples the time-dependent Navier-Stokes system with an adjoint Navier-Stokes system and optimality conditions from which optimal states, i.e., unsteady flow fields, and control, e.g., actuators, may be determined. The results demonstrate that instability suppression can be achieved without any a priori knowledge of the disturbance, which is significant because other control techniques have required some knowledge of the flow unsteadiness such as frequencies, instability type, etc.

  10. Validity of contents of a paediatric critical comfort scale using mixed methodology.

    PubMed

    Bosch-Alcaraz, A; Jordan-Garcia, I; Alcolea-Monge, S; Fernández-Lorenzo, R; Carrasquer-Feixa, E; Ferrer-Orona, M; Falcó-Pegueroles, A

    Critical illness in paediatric patients includes acute conditions in a healthy child as well as exacerbations of chronic disease, and therefore these situations must be clinically managed in Critical Care Units. The role of the paediatric nurse is to ensure the comfort of these critically ill patients. To that end, instruments are required that correctly assess critical comfort. To describe the process for validating the content of a paediatric critical comfort scale using mixed-method research. Initially, a cross-cultural adaptation of the Comfort Behavior Scale from English to Spanish was made using the translation and back-translation method. After that, its content was evaluated using mixed-method research. This second step was divided into a quantitative stage, in which an ad hoc questionnaire was used to assess each scale item's relevance and wording, and a qualitative stage with two meetings with health professionals, patients and a family member following the Delphi Method recommendations. All scale items obtained a content validity index >0.80, except for the relevance of the physical movement item, which obtained 0.76. Global content scale validity was 0.87 (high). During the qualitative stage, items from each of the scale domains were reformulated or eliminated in order to make the scale more comprehensible and applicable. The use of a mixed-method research methodology during the scale content validity phase allows the design of a richer and more assessment-sensitive instrument. Copyright © 2017 Sociedad Española de Enfermería Intensiva y Unidades Coronarias (SEEIUC). Published by Elsevier España, S.L.U. All rights reserved.
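
    The item and global content validity indices reported above are conventionally computed as the proportion of experts rating an item relevant (3 or 4 on a 4-point scale), averaged across items for the scale-level value. The sketch below assumes that convention; the expert ratings are invented, not the study's data.

```python
import numpy as np

# Hypothetical relevance ratings (1-4) from 7 experts for 5 scale items;
# an item is counted as "relevant" when rated 3 or 4
ratings = np.array([
    [4, 4, 3, 4, 4, 3, 4],   # item 1
    [3, 4, 4, 4, 3, 4, 4],   # item 2
    [4, 3, 4, 4, 4, 4, 3],   # item 3
    [2, 3, 4, 3, 4, 2, 3],   # item 4 (e.g. physical movement)
    [4, 4, 4, 3, 4, 4, 4],   # item 5
])

item_cvi = (ratings >= 3).mean(axis=1)   # I-CVI: share of experts rating 3 or 4
scale_cvi = item_cvi.mean()              # S-CVI/Ave: mean of the item CVIs
print(np.round(item_cvi, 2), round(scale_cvi, 2))
```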

  11. Ocean Optics Protocols for Satellite Ocean Color Sensor Validation. Volume 4; Inherent Optical Properties: Instruments, Characterizations, Field Measurements and Data Analysis Protocols; Revised

    NASA Technical Reports Server (NTRS)

    Mueller, J. L. (Editor); Fargion, Giuletta S. (Editor); McClain, Charles R. (Editor); Pegau, Scott; Zaneveld, J. Ronald V.; Mitchell, B. Gregg; Kahru, Mati; Wieland, John; Stramska, Malgorzat

    2003-01-01

    This document stipulates protocols for measuring bio-optical and radiometric data for the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project activities and algorithm development. The document is organized into 6 separate volumes as Ocean Optics Protocols for Satellite Ocean Color Sensor Validation, Revision 4. Volume I: Introduction, Background and Conventions; Volume II: Instrument Specifications, Characterization and Calibration; Volume III: Radiometric Measurements and Data Analysis Methods; Volume IV: Inherent Optical Properties: Instruments, Characterization, Field Measurements and Data Analysis Protocols; Volume V: Biogeochemical and Bio-Optical Measurements and Data Analysis Methods; Volume VI: Special Topics in Ocean Optics Protocols and Appendices. The earlier version of Ocean Optics Protocols for Satellite Ocean Color Sensor Validation, Revision 3 (Mueller and Fargion 2002, Volumes 1 and 2) is entirely superseded by the six volumes of Revision 4 listed above.

  12. Concurrent measurement of "real-world" stress and arousal in individuals with psychosis: assessing the feasibility and validity of a novel methodology.

    PubMed

    Kimhy, David; Delespaul, Philippe; Ahn, Hongshik; Cai, Shengnan; Shikhman, Marina; Lieberman, Jeffrey A; Malaspina, Dolores; Sloan, Richard P

    2010-11-01

    Psychosis has been repeatedly suggested to be affected by increases in stress and arousal. However, there is a dearth of evidence supporting the temporal link between stress, arousal, and psychosis during "real-world" functioning. This paucity of evidence may stem from limitations of current research methodologies. Our aim was to test the feasibility and validity of a novel methodology designed to measure concurrent stress and arousal in individuals with psychosis during "real-world" daily functioning. Twenty patients with psychosis completed a 36-hour ambulatory assessment of stress and arousal. We used the experience sampling method with palm computers to assess stress (10 times per day, 10 AM to 10 PM) along with concurrent ambulatory measurement of cardiac autonomic regulation using a Holter monitor. The clocks of the palm computer and Holter monitor were synchronized, allowing the temporal linking of the stress and arousal data. We used power spectral analysis to determine the parasympathetic contributions to autonomic regulation and sympathovagal balance during 5 minutes before and after each experience sample. Patients completed 79% of the experience samples (75% with valid concurrent arousal data). Momentary increases in stress were inversely correlated with concurrent parasympathetic activity (ρ = -.27, P < .0001) and positively correlated with sympathovagal balance (ρ = .19, P = .0008). Stress and heart rate were not significantly related (ρ = -.05, P = .3875). The findings support the feasibility and validity of our methodology in individuals with psychosis. The methodology offers a novel way to study in high time resolution the concurrent, "real-world" interactions between stress, arousal, and psychosis. The authors discuss the methodology's potential applications and future research directions.
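
    The arousal measures described above are conventionally derived from the power spectrum of the interbeat-interval series: high-frequency (0.15-0.40 Hz) power indexes parasympathetic activity and the LF/HF ratio indexes sympathovagal balance. The sketch below assumes an evenly resampled interbeat-interval signal and uses a Welch periodogram; it is a generic illustration, not the authors' analysis pipeline, and the signal is synthetic.

```python
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    """Integrated spectral power in the band [lo, hi) Hz."""
    band = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[band], freqs[band])

def hrv_spectral_indices(ibi: np.ndarray, fs: float = 4.0):
    """LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) power of an evenly resampled
    interbeat-interval series, plus the LF/HF (sympathovagal balance) ratio."""
    freqs, psd = welch(ibi - ibi.mean(), fs=fs, nperseg=256)
    lf = band_power(freqs, psd, 0.04, 0.15)
    hf = band_power(freqs, psd, 0.15, 0.40)
    return lf, hf, lf / hf

# Synthetic 5-minute interbeat-interval signal sampled at 4 Hz, with a
# respiratory (HF) component at 0.25 Hz and a slower LF component at 0.1 Hz
t = np.arange(0, 300, 0.25)
ibi = 0.8 + 0.03 * np.sin(2 * np.pi * 0.25 * t) + 0.02 * np.sin(2 * np.pi * 0.10 * t)
lf, hf, ratio = hrv_spectral_indices(ibi)
print(f"LF={lf:.2e} s^2, HF={hf:.2e} s^2, LF/HF={ratio:.2f}")
```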

  13. Assessment of the Validity of the Research Diagnostic Criteria for Temporomandibular Disorders: Overview and Methodology

    PubMed Central

    Schiffman, Eric L.; Truelove, Edmond L.; Ohrbach, Richard; Anderson, Gary C.; John, Mike T.; List, Thomas; Look, John O.

    2011-01-01

    AIMS The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. An overview is presented, including Axis I and II methodology and descriptive statistics for the study participant sample. This paper details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. Validity testing for the Axis II biobehavioral instruments was based on previously validated reference standards. METHODS The Axis I reference standards were based on the consensus of 2 criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion exam reliability was also assessed within study sites. RESULTS Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion exam agreement with reference standards was excellent (k ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). CONCLUSION The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods. PMID:20213028
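
    The agreement figures quoted above (kappa of 0.53 read as moderate, ≥ 0.81 as excellent) follow the usual kappa statistic, which corrects observed agreement for chance agreement. A minimal sketch of Cohen's kappa for two raters is below; the example diagnoses are invented, not project data.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b) -> float:
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    labels = np.union1d(a, b)
    p_observed = np.mean(a == b)
    p_expected = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical presence (1) / absence (0) of a diagnosis from two examiners
examiner_1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0]
examiner_2 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1]
print(round(cohens_kappa(examiner_1, examiner_2), 2))  # 0.67 for this toy data
```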

  14. Implementing DBS methodology for the determination of Compound A in monkey blood: GLP method validation and investigation of the impact of blood spreading on performance.

    PubMed

    Fan, Leimin; Lee, Jacob; Hall, Jeffrey; Tolentino, Edward J; Wu, Huaiqin; El-Shourbagy, Tawakol

    2011-06-01

    This article describes validation work for analysis of an Abbott investigational drug (Compound A) in monkey whole blood with dried blood spots (DBS). The impact of DBS spotting volume on analyte concentration was investigated. The quantitation range was between 30.5 and 10,200 ng/ml. Accuracy and precision of quality controls, linearity of calibration curves, matrix effect, selectivity, dilution, recovery and multiple stabilities were evaluated in the validation, and all demonstrated acceptable results. Incurred sample reanalysis was performed, with 57 out of 58 samples having a percentage difference (versus the mean value) of less than 20%. A linear relationship between the spotting volume and the spot area was established. The influence of spotting volume on concentration was discussed. All validation results met good laboratory practice acceptance requirements. Radial spreading of blood on DBS cards can be a factor in DBS concentrations at smaller spotting volumes.

  15. Ocean Optics Protocols for Satellite Ocean Color Sensor Validation, Revision 4, Volume IV: Inherent Optical Properties: Instruments, Characterizations, Field Measurements and Data Analysis Protocols

    NASA Technical Reports Server (NTRS)

    Mueller, J. L.; Fargion, G. S.; McClain, C. R. (Editor); Pegau, S.; Zanefeld, J. R. V.; Mitchell, B. G.; Kahru, M.; Wieland, J.; Stramska, M.

    2003-01-01

    This document stipulates protocols for measuring bio-optical and radiometric data for the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project activities and algorithm development. The document is organized into 6 separate volumes as Ocean Optics Protocols for Satellite Ocean Color Sensor Validation, Revision 4. Volume I: Introduction, Background, and Conventions; Volume II: Instrument Specifications, Characterization and Calibration; Volume III: Radiometric Measurements and Data Analysis Methods; Volume IV: Inherent Optical Properties: Instruments, Characterization, Field Measurements and Data Analysis Protocols; Volume V: Biogeochemical and Bio-Optical Measurements and Data Analysis Methods; Volume VI: Special Topics in Ocean Optics Protocols and Appendices. The earlier version of Ocean Optics Protocols for Satellite Ocean Color Sensor Validation, Revision 3 is entirely superseded by the six volumes of Revision 4 listed above.

  16. Semiautomatic regional segmentation to measure orbital fat volumes in thyroid-associated ophthalmopathy. A validation study.

    PubMed

    Comerci, M; Elefante, A; Strianese, D; Senese, R; Bonavolontà, P; Alfano, B; Bonavolontà, B; Brunetti, A

    2013-08-01

    This study was designed to validate a novel semi-automated segmentation method to measure regional intra-orbital fat tissue volume in Graves' ophthalmopathy. Twenty-four orbits from 12 patients with Graves' ophthalmopathy, 24 orbits from 12 controls, ten orbits from five MRI study simulations and two orbits from a digital model were used. Following manual region of interest definition of the orbital volumes performed by two operators with different levels of expertise, an automated procedure calculated intra-orbital fat tissue volumes (global and regional, with automated definition of four quadrants). In patients with Graves' disease, clinical activity score and degree of exophthalmos were measured and correlated with intra-orbital fat volumes. Operator performance was evaluated and statistical analysis of the measurements was performed. Accurate intra-orbital fat volume measurements were obtained with coefficients of variation below 5%. The mean operator difference in total fat volume measurements was 0.56%. Patients had significantly higher intra-orbital fat volumes than controls (p<0.001 using Student's t test). Fat volumes and clinical score were significantly correlated (p<0.001). The semi-automated method described here can provide accurate, reproducible intra-orbital fat measurements with low inter-operator variation and good correlation with clinical data.
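
    The reproducibility figures above (coefficients of variation below 5%, and a 0.56% mean operator difference) rest on the standard coefficient of variation of repeated measurements, 100 × SD / mean. A minimal sketch with invented repeat volumes:

```python
import numpy as np

def coefficient_of_variation(measurements) -> float:
    """CV (%) of repeated measurements of the same quantity: 100 * SD / mean."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

# Hypothetical triplicate intra-orbital fat volume measurements (cm^3)
repeats = [12.1, 12.4, 11.9]
print(round(coefficient_of_variation(repeats), 2))
```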

  17. Point-to-Point! Validation of the Small Aircraft Transportation System Higher Volume Operations Concept

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.

    2006-01-01

    Described is the research process that NASA researchers used to validate the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept. The four phase building-block validation and verification process included multiple elements ranging from formal analysis of HVO procedures to flight test, to full-system architecture prototype that was successfully shown to the public at the June 2005 SATS Technical Demonstration in Danville, VA. Presented are significant results of each of the four research phases that extend early results presented at ICAS 2004. HVO study results have been incorporated into the development of the Next Generation Air Transportation System (NGATS) vision and offer a validated concept to provide a significant portion of the 3X capacity improvement sought after in the United States National Airspace System (NAS).

  18. Bridging the gap between neurocognitive processing theory and performance validity assessment among the cognitively impaired: a review and methodological approach.

    PubMed

    Leighton, Angela; Weinborn, Michael; Maybery, Murray

    2014-10-01

    Bigler (2012) and Larrabee (2012) recently addressed the state of the science surrounding performance validity tests (PVTs) in a dialogue highlighting evidence for the valid and increased use of PVTs, but also for unresolved problems. Specifically, Bigler criticized the lack of guidance from neurocognitive processing theory in the PVT literature. For example, individual PVTs have applied the simultaneous forced-choice methodology using a variety of test characteristics (e.g., word vs. picture stimuli) with known neurocognitive processing implications (e.g., the "picture superiority effect"). However, the influence of such variations on classification accuracy has been inadequately evaluated, particularly among cognitively impaired individuals. The current review places the PVT literature in the context of neurocognitive processing theory, and identifies potential methodological factors to account for the significant variability we identified in classification accuracy across current PVTs. We subsequently evaluated the utility of a well-known cognitive manipulation to provide a Clinical Analogue Methodology (CAM), that is, to alter the PVT performance of healthy individuals to be similar to that of a cognitively impaired group. Initial support was found, suggesting the CAM may be useful alongside other approaches (analogue malingering methodology) for the systematic evaluation of PVTs, particularly the influence of specific neurocognitive processing components on performance.

  19. Methodology for the nuclear design validation of an Alternate Emergency Management Centre (CAGE)

    NASA Astrophysics Data System (ADS)

    Hueso, César; Fabbri, Marco; de la Fuente, Cristina; Janés, Albert; Massuet, Joan; Zamora, Imanol; Gasca, Cristina; Hernández, Héctor; Vega, J. Ángel

    2017-09-01

    The methodology is devised by coupling different codes. The study of weather conditions as part of the data of the site will determine the relative concentrations of radionuclides in the air using ARCON96. The activity in the air is characterized, depending on the source and release sequence specified in NUREG-1465, by the RADTRAD code, which provides results of the inner cloud source term contribution. With the activities known, energy spectra are inferred using ORIGEN-S, which are used as input for the models of the outer cloud, filters and containment generated with MCNP5. The sum of the different contributions must meet the conditions of habitability specified by the CSN (Spanish Nuclear Regulatory Body) (TEDE < 50 mSv and equivalent dose to the thyroid < 500 mSv within 30 days following the accident), so that the dose is optimized by varying parameters such as CAGE location, flow filtering, need for recirculation, thicknesses and compositions of the walls, etc. The results for the most penalizing area meet the established criteria, and therefore the CAGE building design based on the methodology presented is radiologically validated.

  20. Intravascular volume in cirrhosis. Reassessment using improved methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rector, W.G. Jr.; Ibarra, F.

    1988-04-01

    Previous studies of blood volume (BV) in cirrhosis have either not adjusted BV properly for body size; determined plasma volume from the dilution of labeled albumin 10-20 min postinjection, when some extravascular redistribution has already occurred; and/or not used the correct whole body-peripheral hematocrit ratio (0.82) in calculating whole BV from plasma volume and the peripheral hematocrit. We measured BV with attention to these considerations in 19 patients with cirrhosis and reexamined the determinants of vascular volume and the relationship between vascular volume and sodium retention. BV was calculated as plasma volume (determined from extrapolated plasma activity of intravenously injected 131I-albumin at time 0) divided by (peripheral hematocrit X 0.82). The result was expressed per kilogram dry body weight, determined by subtracting the mass of ascites (measured by isotope dilution; 1 liter = 1 kg) from the actual body weight of nonedematous patients. Measured and expressed in this way, BV correlated strongly with esophageal variceal size (r = 0.87, P less than 0.05), although not with net portal, right atrial, inferior vena caval, or arterial pressure, and was significantly greater in patients with sodium retention as compared to patients without sodium retention. The principal modifier of vascular volume in cirrhosis is vascular capacity, which is probably mainly determined by the extent of the portasystemic collateral circulation. Increased vascular volume in patients with sodium retention as compared to patients without sodium retention supports the overflow theory of ascites formation.
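
    The abstract defines the key quantities: plasma volume from extrapolated 131I-albumin activity at time 0, the 0.82 whole body-peripheral hematocrit ratio, and dry body weight as actual weight minus the mass of ascites (1 liter taken as 1 kg). The sketch below applies the conventional plasma-dilution relation BV = PV / (1 − 0.82 × Hct), which the parenthetical in the abstract appears to abbreviate; the patient numbers are invented.

```python
def blood_volume_per_kg_dry_weight(plasma_volume_l: float,
                                   peripheral_hct: float,
                                   actual_weight_kg: float,
                                   ascites_l: float) -> float:
    """Whole blood volume per kg dry body weight (L/kg), using the
    plasma-dilution relation BV = PV / (1 - 0.82 * Hct), where 0.82 is the
    whole body-peripheral hematocrit ratio, and dry weight = actual weight
    minus the mass of ascites (1 liter taken as 1 kg)."""
    blood_volume_l = plasma_volume_l / (1.0 - 0.82 * peripheral_hct)
    dry_weight_kg = actual_weight_kg - ascites_l
    return blood_volume_l / dry_weight_kg

# Hypothetical patient: 3.5 L plasma volume, peripheral hematocrit 0.34,
# 78 kg actual weight carrying 6 L of ascites
bv_per_kg = blood_volume_per_kg_dry_weight(3.5, 0.34, 78.0, 6.0)
print(f"{bv_per_kg * 1000:.0f} mL/kg dry body weight")   # roughly 67 mL/kg
```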

  1. Development and Validation of a Collocated Exposure Monitoring Methodology using Portable Air Monitors

    NASA Astrophysics Data System (ADS)

    Li, Z.; Che, W.; Frey, H. C.; Lau, A. K. H.

    2016-12-01

    Portable air monitors are currently being developed and used to enable a move towards exposure monitoring as opposed to fixed-site monitoring. Reliable methods are needed for capturing spatial and temporal variability in exposure concentration, so as to obtain credible data from which to develop efficient exposure mitigation measures. However, there are few studies that quantify the validity and repeatability of the collected data. The objective of this study is to present and evaluate a collocated exposure monitoring (CEM) methodology, including the calibration of portable air monitors against stationary reference equipment, side-by-side comparison of portable air monitors, personal or microenvironmental exposure monitoring, and the processing and interpretation of the collected data. The CEM methodology was evaluated based on application to the portable monitors TSI DustTrak II Aerosol Monitor 8530 for fine particulate matter (PM2.5) and TSI Q-Trak model 7575 with probe model 982 for CO, CO2, temperature and relative humidity. Taking a school sampling campaign in Hong Kong in January and June 2015 as an example, the calibrated, side-by-side 1 Hz PM2.5 measurements showed good consistency between two sets of portable air monitors. Because the side-by-side PM2.5 concentrations agreed within 2 percent most of the time, robust inferences could be made about differences when the monitors measured classroom and pedestrian microenvironments during school hours. The proposed CEM methodology can be widely applied in sampling campaigns with the objective of simultaneously characterizing pollutant concentrations in two or more locations or microenvironments. The further application of the CEM methodology to transportation exposure will be presented and discussed.

  2. The Research Diagnostic Criteria for Temporomandibular Disorders. I: overview and methodology for assessment of validity.

    PubMed

    Schiffman, Eric L; Truelove, Edmond L; Ohrbach, Richard; Anderson, Gary C; John, Mike T; List, Thomas; Look, John O

    2010-01-01

    The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. The aim of this article is to provide an overview of the project's methodology, descriptive statistics, and data for the study participant sample. This article also details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. The Axis I reference standards were based on the consensus of two criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion examination reliability was also assessed within study sites. Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion examiner agreement with reference standards was excellent (k ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods.

  3. South Atlantic Omega Validation. Volume 1. Summary, Analysis, Appendices A-E.

    DTIC Science & Technology

    1983-01-01


  4. Validation of multi-detector computed tomography as a non-invasive method for measuring ovarian volume in macaques (Macaca fascicularis).

    PubMed

    Jones, Jeryl C; Appt, Susan E; Werre, Stephen R; Tan, Joshua C; Kaplan, Jay R

    2010-06-01

    The purpose of this study was to validate low radiation dose, contrast-enhanced, multi-detector computed tomography (MDCT) as a non-invasive method for measuring ovarian volume in macaques. Computed tomography scans of four known-volume phantoms and nine mature female cynomolgus macaques were acquired using a previously described, low radiation dose scanning protocol, intravenous contrast enhancement, and a 32-slice MDCT scanner. Immediately following MDCT, ovaries were surgically removed and the ovarian weights were measured. The ovarian volumes were determined using water displacement. A veterinary radiologist who was unaware of actual volumes measured ovarian CT volumes three times, using a laptop computer, pen display tablet, hand-traced regions of interest, and free image analysis software. A statistician selected and performed all tests comparing the actual and CT data. Ovaries were successfully located in all MDCT scans. The iliac arteries and veins, uterus, fallopian tubes, cervix, ureters, urinary bladder, rectum, and colon were also consistently visualized. Large antral follicles were detected in six ovaries. Phantom mean CT volume was 0.702 ± 0.504 (SD) cc and the mean actual volume was 0.743 ± 0.526 (SD) cc. Ovary mean CT volume was 0.258 ± 0.159 (SD) cc and mean water displacement volume was 0.257 ± 0.145 (SD) cc. For phantoms, the mean coefficient of variation for CT volumes was 2.5%. For ovaries, the least squares mean coefficient of variation for CT volumes was 5.4%. The ovarian CT volume was significantly associated with actual ovarian volume (ICC coefficient 0.79, regression coefficient 0.5, P=0.0006) and the actual ovarian weight (ICC coefficient 0.62, regression coefficient 0.6, P=0.015). There was no association between the CT volume accuracy and mean ovarian CT density (degree of intravenous contrast enhancement), and there was no proportional or fixed bias in the CT volume measurements. Findings from this study indicate that MDCT is a valid non

  5. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.

  6. The methodological quality of three foundational law enforcement Drug Influence Evaluation validation studies.

    PubMed

    Kane, Greg

    2013-11-04

    A Drug Influence Evaluation (DIE) is a formal assessment of an impaired driving suspect, performed by a trained law enforcement officer who uses circumstantial facts, questioning, searching, and a physical exam to form an unstandardized opinion as to whether a suspect's driving was impaired by drugs. This paper first identifies the scientific studies commonly cited in American criminal trials as evidence of DIE accuracy, and second, uses the QUADAS tool to investigate whether the methodologies used by these studies allow them to correctly quantify the diagnostic accuracy of the DIEs currently administered by US law enforcement. Three studies were selected for analysis. For each study, the QUADAS tool identified biases that distorted reported accuracies. The studies were subject to spectrum bias, selection bias, misclassification bias, verification bias, differential verification bias, incorporation bias, and review bias. The studies quantified DIE performance with prevalence-dependent accuracy statistics that are internally but not externally valid. The accuracies reported by these studies do not quantify the accuracy of the DIE process now used by US law enforcement. These studies do not validate current DIE practice.

  7. Early Childhood Longitudinal Study, Birth Cohort (ECLS-B): Methodology Report for the 9-Month Data Collection (2001-02). Volume 2: Sampling. NCES 2005-147

    ERIC Educational Resources Information Center

    Bethel, James; Green, James L.; Nord, Christine; Kalton, Graham; West, Jerry

    2005-01-01

    This report is Volume 2 of the methodology report that provides information about the development, design, and conduct of the 9-month data collection of the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B). This volume begins with a brief overview of the ECLS-B, but focuses on the sample design, calculation of response rates, development…

  8. Validation of a White-light 3D Body Volume Scanner to Assess Body Composition.

    PubMed

    Medina-Inojosa, Jose; Somers, Virend; Jenkins, Sarah; Zundel, Jennifer; Johnson, Lynne; Grimes, Chassidy; Lopez-Jimenez, Francisco

    2017-01-01

    Estimating body fat content has been shown to be a better predictor of adiposity-related cardiovascular risk than the commonly used body mass index (BMI). The white-light 3D body volume index (BVI) scanner is a non-invasive device normally used in the clothing industry to assess body shapes and sizes. We assessed the hypothesis that volume obtained by BVI is comparable to the volume obtained by air displacement plethysmography (Bod-Pod) and thus capable of assessing body fat mass using the bi-compartmental principles of body composition. We compared BVI to Bod-Pod, a validated bicompartmental method to assess body fat percent that uses pressure/volume relationships in isothermal conditions to estimate body volume. Volume is then used to calculate body density (BD) applying the formula density = Body Mass/Volume. Body fat mass percentage is then calculated using the Siri formula (4.95/BD - 4.50) × 100. Subjects were undergoing a wellness evaluation. Measurements from both devices were obtained the same day. A prediction model for total Bod-Pod volume was developed using linear regression based on 80% of the observations (N=971), as follows: Predicted Bod-Pod Volume (L) = 9.498 + 0.805*(BVI volume, L) - 0.0411*(Age, years) - 3.295*(Male=0, Female=1) + 0.0554*(BVI volume, L)*(Male=0, Female=1) + 0.0282*(Age, years)*(Male=0, Female=1). Predictions for Bod-Pod volume based on the estimated model were then calculated for the remaining 20% (N=243) and compared to the volume measured by the Bod-Pod. Mean age among the 971 individuals was 41.5 ± 12.9 years, 39.4% were men, weight was 81.6 ± 20.9 kg, and BMI was 27.8 ± 6.3 kg/m². The average difference between the volume measured by Bod-Pod and the volume predicted by BVI was 0.0 L, median: -0.4 L, IQR: -1.8 L to 1.5 L, R² = 0.9845. The average difference between measured and predicted body fat was -1%, median: -2.7%, IQR: -13.2 to 9.9, R² = 0.9236. Volume and BFM can be estimated by using volume measurements obtained by a white-light 3D body scanner and the prediction
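
    The prediction equation and the Siri conversion are both stated in the abstract, so the pipeline from a BVI volume measurement to an estimated body fat percentage can be sketched end to end. The regression coefficients below are copied from the abstract; the example subject is invented.

```python
def predicted_bodpod_volume(bvi_volume_l: float, age_years: float, is_female: bool) -> float:
    """Bod-Pod volume predicted from BVI volume, age and sex, using the
    regression coefficients reported in the abstract."""
    sex = 1.0 if is_female else 0.0
    return (9.498
            + 0.805 * bvi_volume_l
            - 0.0411 * age_years
            - 3.295 * sex
            + 0.0554 * bvi_volume_l * sex
            + 0.0282 * age_years * sex)

def body_fat_percent(body_mass_kg: float, body_volume_l: float) -> float:
    """Siri two-compartment formula: (4.95 / density - 4.50) * 100."""
    density = body_mass_kg / body_volume_l   # kg/L
    return (4.95 / density - 4.50) * 100.0

# Hypothetical subject: 45-year-old man, 80 kg, BVI scanner volume 85 L
volume = predicted_bodpod_volume(85.0, 45.0, is_female=False)
print(round(volume, 1), "L,", round(body_fat_percent(80.0, volume), 1), "% body fat")
```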

  9. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population

    PubMed Central

    CARVALHO, Suzana Papile Maciel; BRITO, Liz Magalhães; de PAIVA, Luiz Airton Saavedra; BICUDO, Lucilene Arilho Ribeiro; CROSATO, Edgard Michel; de OLIVEIRA, Rogério Nogueira

    2013-01-01

    Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. Objective This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. Material and Methods The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. Results The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. Conclusion It was concluded that methods involving physical anthropology present high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological
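
    The statistical adjustment described above, deriving a new discriminant formula from the bigonial distance and mandibular ramus height for the local population, amounts to fitting a two-predictor classifier on locally measured skulls. The sketch below uses an ordinary logistic regression as a stand-in; the measurements are invented and the fitted coefficients are not those published by Oliveira, et al. (1995) or by this study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical mandibular measurements (mm): [bigonial distance, ramus height]
X = np.array([
    [101, 68], [105, 72], [108, 75], [110, 77], [104, 70],   # male skulls
    [ 92, 58], [ 95, 60], [ 97, 63], [ 93, 61], [ 96, 62],   # female skulls
], dtype=float)
y = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])   # 1 = male, 0 = female

model = LogisticRegression().fit(X, y)

# Classify a new mandible and report apparent accuracy on the training sample
new_case = np.array([[99.0, 66.0]])
print("male" if model.predict(new_case)[0] == 1 else "female")
print("apparent accuracy:", model.score(X, y))
```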

  10. Validation of a physical anthropology methodology using mandibles for gender estimation in a Brazilian population.

    PubMed

    Carvalho, Suzana Papile Maciel; Brito, Liz Magalhães; Paiva, Luiz Airton Saavedra de; Bicudo, Lucilene Arilho Ribeiro; Crosato, Edgard Michel; Oliveira, Rogério Nogueira de

    2013-01-01

    Validation studies of physical anthropology methods in the different population groups are extremely important, especially in cases in which the population variations may cause problems in the identification of a native individual by the application of norms developed for different communities. This study aimed to estimate the gender of skeletons by application of the method of Oliveira, et al. (1995), previously used in a population sample from Northeast Brazil. The accuracy of this method was assessed for a population from Southeast Brazil and validated by statistical tests. The method used two mandibular measurements, namely the bigonial distance and the mandibular ramus height. The sample was composed of 66 skulls and the method was applied by two examiners. The results were statistically analyzed by the paired t test, logistic discriminant analysis and logistic regression. The results demonstrated that the application of the method of Oliveira, et al. (1995) in this population achieved very different outcomes between genders, with 100% for females and only 11% for males, which may be explained by ethnic differences. However, statistical adjustment of measurement data for the population analyzed allowed accuracy of 76.47% for males and 78.13% for females, with the creation of a new discriminant formula. It was concluded that methods involving physical anthropology present high rate of accuracy for human identification, easy application, low cost and simplicity; however, the methodologies must be validated for the different populations due to differences in ethnic patterns, which are directly related to the phenotypic aspects. In this specific case, the method of Oliveira, et al. (1995) presented good accuracy and may be used for gender estimation in Brazil in two geographic regions, namely Northeast and Southeast; however, for other regions of the country (North, Central West and South), previous methodological adjustment is recommended as demonstrated in this

  11. Experimental validation of an integrated controls-structures design methodology for a class of flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliott, Kenny B.; Joshi, Suresh M.; Walz, Joseph E.

    1994-01-01

    This paper describes the first experimental validation of an optimization-based integrated controls-structures design methodology for a class of flexible space structures. The Controls-Structures-Interaction (CSI) Evolutionary Model, a laboratory test bed at Langley, is redesigned based on the integrated design methodology with two different dissipative control strategies. The redesigned structure is fabricated, assembled in the laboratory, and experimentally compared with the original test structure. Design guides are proposed and used in the integrated design process to ensure that the resulting structure can be fabricated. Experimental results indicate that the integrated design requires greater than 60 percent less average control power (by thruster actuators) than the conventional control-optimized design while maintaining the required line-of-sight performance, thereby confirming the analytical findings about the superiority of the integrated design methodology. Amenability of the integrated design structure to other control strategies is considered and evaluated analytically and experimentally. This work also demonstrates the capabilities of the Langley-developed design tool CSI DESIGN which provides a unified environment for structural and control design.

  12. Towards a sharp-interface volume-of-fluid methodology for modeling evaporation

    NASA Astrophysics Data System (ADS)

    Pathak, Ashish; Raessi, Mehdi

    2017-11-01

    In modeling evaporation, the diffuse-interface (one-domain) formulation yields inaccurate results. Recent efforts approaching the problem via a sharp-interface (two-domain) formulation have shown significant improvements. The reasons behind their better performance are discussed in the present work. All available sharp-interface methods, however, exclusively employ the level-set method. In the present work, we develop a sharp-interface evaporation model in a volume-of-fluid (VOF) framework in order to leverage its mass-conserving property as well as its ability to handle large topographical changes. We start with a critical review of the assumptions underlying the mathematical equations governing evaporation. For example, it is shown that the assumption of incompressibility can only be applied in special circumstances. The famous D2 law used for benchmarking applies exclusively to steady-state test problems; transient behavior is present over a significant portion of the lifetime of a micron-size droplet. Therefore, a 1D spherical, fully transient model is developed to provide a benchmark transient solution. Finally, a 3D Cartesian Navier-Stokes evaporation solver is developed. Some preliminary validation test cases are presented for static and moving drop evaporation. This material is based upon work supported by the Department of Energy, Office of Energy Efficiency and Renewable Energy and the Department of Defense, Tank and Automotive Research, Development, and Engineering Center, under Award Number DEEE0007292.
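
    The D2 law referenced above states that, under quasi-steady evaporation, the square of the droplet diameter decreases linearly in time, d(t)² = d0² − K t, so the droplet lifetime is d0²/K. A minimal sketch with an invented evaporation constant:

```python
import numpy as np

def droplet_diameter(d0: float, K: float, t: np.ndarray) -> np.ndarray:
    """Quasi-steady D2 law: d(t)^2 = d0^2 - K * t (clipped at zero)."""
    return np.sqrt(np.maximum(d0**2 - K * t, 0.0))

d0 = 50e-6            # initial diameter: 50 microns (hypothetical)
K = 1.0e-7            # evaporation constant, m^2/s (hypothetical)
lifetime = d0**2 / K  # time at which the diameter reaches zero
t = np.linspace(0.0, lifetime, 5)
print(f"lifetime = {lifetime * 1e3:.1f} ms")
print(droplet_diameter(d0, K, t) * 1e6)   # diameters in microns
```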

  13. The validity of anthropometric leg muscle volume estimation across a wide spectrum: From able-bodied adults to individuals with a spinal cord injury

    PubMed Central

    Venturelli, Massimo; Jeong, Eun-Kee; Richardson, Russell S.

    2014-01-01

    The assessment of muscle volume, and of changes over time, has significant clinical and research-related implications. Methods to assess muscle volume vary from simple and inexpensive to complex and expensive. Therefore this study sought to examine the validity of muscle volume estimated simply by anthropometry compared with the more complex proton magnetic resonance imaging (1H-MRI) across a wide spectrum of individuals, including those with a spinal cord injury (SCI), a group recognized to exhibit significant muscle atrophy. Accordingly, muscle volume of the thigh and lower leg of eight subjects with a SCI and eight able-bodied subjects (controls) was determined by anthropometry and 1H-MRI. With either method, muscle volumes were significantly lower in the SCI compared with the controls (P < 0.05) and, using pooled data from both groups, anthropometric measurements of muscle volume were strongly correlated to the values assessed by 1H-MRI in both the thigh (r2 = 0.89; P < 0.05) and lower leg (r2 = 0.98; P < 0.05). However, the anthropometric approach systematically overestimated muscle volume compared with 1H-MRI in both the thigh (mean bias = 2407 cm³) and the lower leg (mean bias = 170 cm³). Thus, with an appropriate correction for this systematic overestimation, muscle volume estimated from anthropometric measurements is a valid approach and provides acceptable accuracy across a spectrum of adults from normal muscle mass to a SCI and severe muscle atrophy. In practical terms this study provides the formulas that add validity to the already simple and inexpensive anthropometric approach to assess muscle volume in clinical and research settings. PMID:24458749
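
    The practical upshot of the abstract is that anthropometric volumes can be mapped onto 1H-MRI volumes with a simple correction for the systematic overestimation. The sketch below fits and applies a least-squares linear correction to invented paired volumes; it does not reproduce the published correction formulas.

```python
import numpy as np

# Hypothetical paired thigh muscle volumes (cm^3): anthropometry vs 1H-MRI
anthro = np.array([9200, 8100, 7400, 6800, 4100, 3600, 3100, 2800], dtype=float)
mri = np.array([6900, 5800, 5100, 4500, 1900, 1500, 1100, 900], dtype=float)

# Least-squares linear correction: mri ~ a * anthro + b
a, b = np.polyfit(anthro, mri, deg=1)
corrected = a * anthro + b

bias_before = np.mean(anthro - mri)     # systematic overestimation
bias_after = np.mean(corrected - mri)   # approximately zero after correction
print(f"a = {a:.3f}, b = {b:.0f} cm^3")
print(f"mean bias before = {bias_before:.0f} cm^3, after = {bias_after:.0f} cm^3")
```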

  14. Validation of a reaction volume reduction protocol for analysis of Y chromosome haplotypes targeting DNA databases.

    PubMed

    Souza, C A; Oliveira, T C; Crovella, S; Santos, S M; Rabêlo, K C N; Soriano, E P; Carvalho, M V D; Junior, A F Caldas; Porto, G G; Campello, R I C; Antunes, A A; Queiroz, R A; Souza, S M

    2017-04-28

    The use of Y chromosome haplotypes, important for the detection of sexual crimes in forensics, has gained prominence with the use of databases that incorporate these genetic profiles in their system. Here, we optimized and validated an amplification protocol for Y chromosome profile retrieval in reference samples using less material than commercial kits require. FTA® cards (Flinders Technology Associates) were used to support the oral cells of male individuals, which were amplified directly using the SwabSolution reagent (Promega). First, we optimized and validated the process to define the volume and cycling conditions. Three reference samples and nineteen 1.2 mm-diameter perforated discs were used per sample. Amplification of one or two discs (samples) with the PowerPlex® Y23 kit (Promega) was performed using 25, 26, and 27 thermal cycles. Twenty percent, 32%, and 100% reagent volumes, one disc, and 26 cycles were used for the control per sample. Thereafter, all samples (N = 270) were amplified using 27 cycles, one disc, and 32% reagents (optimized conditions). Data were analyzed by studying the balance values between fluorophore colors. In the samples analyzed with 20% volume, an imbalance was observed in peak heights, both inside and in-between each dye. In samples amplified with 32% reagents, the values obtained for the intra-color and inter-color standard balance calculations for verification of the quality of the analyzed peaks were similar to those of samples amplified with 100% of the recommended volume. The quality of the profiles obtained with 32% reagents was suitable for insertion into databases.

  15. The methodological quality of three foundational law enforcement drug influence evaluation validation studies

    PubMed Central

    2013-01-01

    Background A Drug Influence Evaluation (DIE) is a formal assessment of an impaired driving suspect, performed by a trained law enforcement officer who uses circumstantial facts, questioning, searching, and a physical exam to form an unstandardized opinion as to whether a suspect’s driving was impaired by drugs. This paper first identifies the scientific studies commonly cited in American criminal trials as evidence of DIE accuracy, and second, uses the QUADAS tool to investigate whether the methodologies used by these studies allow them to correctly quantify the diagnostic accuracy of the DIEs currently administered by US law enforcement. Results Three studies were selected for analysis. For each study, the QUADAS tool identified biases that distorted reported accuracies. The studies were subject to spectrum bias, selection bias, misclassification bias, verification bias, differential verification bias, incorporation bias, and review bias. The studies quantified DIE performance with prevalence-dependent accuracy statistics that are internally but not externally valid. Conclusion The accuracies reported by these studies do not quantify the accuracy of the DIE process now used by US law enforcement. These studies do not validate current DIE practice. PMID:24188398

  16. Content validation of the operational definitions of the nursing diagnoses of activity intolerance, excess fluid volume, and decreased cardiac output in patients with heart failure.

    PubMed

    de Souza, Vanessa; Zeitoun, Sandra Salloum; Lopes, Camila Takao; de Oliveira, Ana Paula Dias; Lopes, Juliana de Lima; de Barros, Alba Lucia Botura Leite

    2014-06-01

    To consensually validate the operational definitions of the nursing diagnoses activity intolerance, excess fluid volume, and decreased cardiac output in patients with decompensated heart failure. Consensual validation was performed in two stages: analogy by similarity of defining characteristics, and development of operational definitions and validation with experts. A total of 38 defining characteristics were found. Operational definitions were developed and content-validated. One hundred percent agreement was achieved among the seven experts after five rounds. "Ascites" was added to the nursing diagnosis excess fluid volume. The consensual validation improves interpretation of the human response, grounding the selection of nursing interventions and contributing to improved nursing outcomes. These findings support the assessment of patients with decompensated heart failure. © 2013 NANDA International.

  17. Feasibility of single-beat full-volume capture real-time three-dimensional echocardiography for quantification of right ventricular volume: validation by cardiac magnetic resonance imaging.

    PubMed

    Zhang, Quan Bin; Sun, Jing Ping; Gao, Rui Feng; Lee, Alex Pui-Wai; Feng, Yan Lin; Liu, Xiao Rong; Sheng, Wei; Liu, Feng; Yang, Xing Sheng; Fang, Fang; Yu, Cheuk-Man

    2013-10-09

    The lack of an accurate noninvasive method for assessing right ventricular (RV) volume and function has been a major deficiency of two-dimensional (2D) echocardiography. The aim of our study was to test the feasibility of single-beat full-volume capture with a real-time three-dimensional echo (3DE) imaging system for the evaluation of RV volumes and function, validated by cardiac magnetic resonance imaging (CMRI). Sixty-one subjects (16 normal subjects, 20 patients with hypertension, 16 patients with pulmonary heart disease and 9 patients with coronary heart disease) were studied. RV volume and function assessments using 3DE were compared with manual tracing with CMRI as the reference method. Fifty-nine of 61 patients (96.7%; 36 male; mean age, 62 ± 15 years) had adequate three-dimensional echocardiographic data sets for analysis. The mean RV end-diastolic volume (EDV) was 105 ± 38 ml, end-systolic volume (ESV) was 60 ± 30 ml and RV ejection fraction (EF) was 44 ± 11% by CMRI; EDV was 103 ± 38 ml, ESV 60 ± 28 ml and RV EF 41 ± 13% by 3DE. The correlations and agreements between measurements from the two methods were acceptable. RV volumes and function can be analyzed with 3DE software in most subjects with or without heart disease, and single-beat full-volume capture with real-time 3DE provides estimates comparable to CMRI. © 2013.

  18. Validating alternative methodologies to estimate the regime of temporary rivers when flow data are unavailable.

    PubMed

    Gallart, F; Llorens, P; Latron, J; Cid, N; Rieradevall, M; Prat, N

    2016-09-15

    Hydrological data for assessing the regime of temporary rivers are often non-existent or scarce. The scarcity of flow data makes it impossible to characterize the hydrological regime of temporary streams and, in consequence, to select the correct periods and methods to determine their ecological status. This is why the TREHS software is being developed, in the framework of the LIFE Trivers project. It will help managers implement the European Water Framework Directive adequately in this kind of water body. TREHS, using the methodology described in Gallart et al. (2012), defines six transient 'aquatic states', based on hydrological conditions representing different mesohabitats, for a given reach at a particular moment. Because of its qualitative nature, this approach allows alternative methodologies to be used to assess the regime of temporary rivers when there are no observed flow data. These methods, based on interviews and high-resolution aerial photographs, were tested for estimating the aquatic regime of temporary rivers. All the gauging stations (13) belonging to the Catalan Internal Catchments (NE Spain) with recurrent zero-flow periods were selected to validate this methodology. On the one hand, non-structured interviews were conducted with inhabitants of villages near the gauging stations. On the other hand, the historical series of available orthophotographs were examined. Flow records measured at the gauging stations were used to validate the alternative methods. Flow permanence in the reaches was estimated reasonably by the interviews and adequately by aerial photographs, when compared with the values estimated using daily flows. The degree of seasonality was assessed only roughly by the interviews. The recurrence of disconnected pools was not detected by flow records but was estimated with some divergences by the two methods. The combination of the two alternative methods makes it possible to substitute or complement flow records, to be updated in the future through

  19. Methodological Considerations in Designing and Evaluating Animal-Assisted Interventions.

    PubMed

    Stern, Cindy; Chur-Hansen, Anna

    2013-02-27

    This paper presents a discussion of the literature on animal-assisted interventions and describes limitations surrounding current methodological quality. Benefits to human physical, psychological and social health cannot be empirically confirmed due to the methodological limitations of the existing body of research, and comparisons cannot validly be made across different studies. Without a solid research base animal-assisted interventions will not receive recognition and acceptance as a credible alternative health care treatment. The paper draws on the work of four systematic reviews conducted over April-May 2009, with no date restrictions, focusing exclusively on the use of canine-assisted interventions for older people residing in long-term care. The reviews revealed a lack of good quality studies. Although the literature base has grown in volume since its inception, it predominantly consists of anecdotal accounts and reports. Experimental studies undertaken are often flawed in aspects of design, conduct and reporting. There are few qualitative studies available leading to the inability to draw definitive conclusions. It is clear that due to the complexities associated with these interventions not all weaknesses can be eliminated. However, there are basic methodological weaknesses that can be addressed in future studies in the area. Checklists for quantitative and qualitative research designs to guide future research are offered to help address methodological rigour.

  20. Base heating methodology improvements, volume 1

    NASA Technical Reports Server (NTRS)

    Bender, Robert L.; Reardon, John E.; Somers, Richard E.; Fulton, Michael S.; Smith, Sheldon D.; Pergament, Harold

    1992-01-01

    This document is the final report for NASA MSFC Contract NAS8-38141. The contracted effort had the broad objective of improving launch vehicle ascent base heating methodology in order to simplify the determination of that environment for Advanced Launch System (ALS) concepts. It was pursued as an Advanced Development Plan (ADP) for the Joint DoD/NASA ALS program office with project management assigned to NASA/MSFC. The original study was to be completed in 26 months beginning Sep. 1989. Because of several program changes and emphasis on evolving launch vehicle concepts, the period of performance was extended to the current completion date of Nov. 1992. A computer code incorporating the methodology improvements into a quick prediction tool was developed and is operational for basic configuration and propulsion concepts. The code and its user's guide are also provided as part of the contract documentation. Background information describing the specific objectives, limitations, and goals of the contract is summarized. A brief chronology of the ALS/NLS program history is also presented to provide the reader with an overview of the many variables influencing the development of the code over the past three years.

  1. Methodological convergence of program evaluation designs.

    PubMed

    Chacón-Moscoso, Salvador; Anguera, M Teresa; Sanduvete-Chaves, Susana; Sánchez-Martín, Milagrosa

    2014-01-01

    Nowadays, the dichotomous view confronting experimental/quasi-experimental and non-experimental/ethnographic studies still exists but, despite the extensive use of non-experimental/ethnographic studies, the most systematic work on methodological quality has been developed based on experimental and quasi-experimental studies. This hinders evaluators' and planners' practice of empirical program evaluation, a sphere in which the distinction between types of study is changing continually and is less clear. Based on the classical validity framework of experimental/quasi-experimental studies, we carry out a review of the literature in order to analyze the convergence of design elements in methodological quality in primary studies in systematic reviews and ethnographic research. We specify the relevant design elements that should be taken into account in order to improve validity and generalization in program evaluation practice in different methodologies from a practical, methodological and complementary view. We recommend ways to improve design elements so as to enhance validity and generalization in program evaluation practice.

  2. Infrared Light Structured Sensor 3D Approach to Estimate Kidney Volume: A Validation Study.

    PubMed

    Garisto, Juan; Bertolo, Riccardo; Dagenais, Julien; Kaouk, Jihad

    2018-06-26

    To validate a new procedure for the three-dimensional (3D) estimation of total renal parenchyma volume (RPV) using a structured-light infrared laser sensor. To evaluate the accuracy of the sensor for assessing renal volume, we performed three experiments on twenty freshly excised porcine kidneys. In Experiment A, the water displacement method was used to determine the RPV after immersing each kidney in 0.9% saline; a structured sensor (Occipital, San Francisco, CA, USA) was then used to scan the kidney. The kidney surface was initially captured as a mesh and imported into MeshLab (Visual Computing Lab, Pisa, Italy) software to obtain the surface volume. In Experiment B, a partial excision of the kidney was performed, with measurement of the excised volume and the remnant. In Experiment C, renorrhaphy of the remnant kidney was performed and the kidney was measured again. Bias and limits of agreement (LOA) were determined using the Bland-Altman method, and reliability was assessed using the intraclass correlation coefficient (ICC). In Experiment A, the sensor bias was -1.95 mL (LOA: -19.5 to 15.59, R² = 0.410), slightly overestimating the volumes. In Experiment B, the remnant kidney after partial excision and the excised kidney volume showed sensor biases of -0.5 mL (LOA: -5.34 to 4.20, R² = 0.490) and -0.6 mL (LOA: -1.97.08 to 0.77, R² = 0.561), respectively. In Experiment C, the sensor bias was -0.89 mL (LOA: -12.9 to 11.1, R² = 0.888). The ICC was 0.9998. The sensor is a reliable method for assessing total renal volume with high levels of accuracy. Copyright © 2018. Published by Elsevier Inc.
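
    For readers unfamiliar with the agreement statistics cited above, the following is a minimal sketch of a Bland-Altman bias / limits-of-agreement calculation and a Shrout-Fleiss ICC(2,1); the volume values are invented, and the ICC form chosen here is an assumption, since the abstract does not state which ICC variant was used.

```python
# Minimal sketch (illustrative data): Bland-Altman bias / limits of agreement and a
# two-way random, single-measure ICC(2,1) for two volume-measurement methods.
import numpy as np

reference = np.array([152.0, 168.0, 140.5, 175.0, 160.0, 155.5])  # e.g. water displacement (mL)
sensor    = np.array([150.5, 170.0, 142.0, 172.5, 162.0, 153.0])  # e.g. structured-light scan (mL)

diff = sensor - reference
bias = diff.mean()
loa  = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

def icc_2_1(Y):
    """Shrout & Fleiss ICC(2,1) for an n-subjects x k-methods matrix."""
    n, k = Y.shape
    grand = Y.mean()
    msr = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)      # between-subjects mean square
    msc = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)      # between-methods mean square
    sse = ((Y - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                                 # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

print(f"bias {bias:.2f} mL, LOA {loa[0]:.2f} to {loa[1]:.2f} mL")
print(f"ICC(2,1) {icc_2_1(np.column_stack([reference, sensor])):.3f}")
```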

  3. BIGHORN Computational Fluid Dynamics Theory, Methodology, and Code Verification & Validation Benchmark Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong; Andrs, David; Martineau, Richard Charles

    This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next. A Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration is also presented. The multi-fluid formulation is still being developed; although it is not yet complete, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility of the underlying MOOSE framework, BIGHORN is quite extensible and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification & validation benchmark test problems for BIGHORN. The intent of this suite of problems is to provide baseline comparison data that demonstrate the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using BIGHORN.

  4. Payload training methodology study

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The results of the Payload Training Methodology Study (PTMS) are documented. Methods and procedures are defined for the development of payload training programs to be conducted at the Marshall Space Flight Center Payload Training Complex (PTC) for the Space Station Freedom program. The study outlines the overall training program concept as well as the six methodologies associated with the program implementation. The program concept outlines the entire payload training program from initial identification of training requirements to the development of detailed design specifications for simulators and instructional material. The following six methodologies are defined: (1) The Training and Simulation Needs Assessment Methodology; (2) The Simulation Approach Methodology; (3) The Simulation Definition Analysis Methodology; (4) The Simulator Requirements Standardization Methodology; (5) The Simulator Development Verification Methodology; and (6) The Simulator Validation Methodology.

  5. Cuadernos de Autoformacion en Participacion Social: Metodologia. Volumen 2. Primera Edicion (Self-Instructional Notebooks on Social Participation: Methodology. Volume 2. First Edition).

    ERIC Educational Resources Information Center

    Instituto Nacional para la Educacion de los Adultos, Mexico City (Mexico).

    The series "Self-instructional Notes on Social Participation" is a six-volume series intended as teaching aids for adult educators. The theoretical, methodological, informative and practical elements of this series will assist professionals in their work and help them achieve greater success. The specific purpose of each notebook is…

  6. Learning Methodology in the Classroom to Encourage Participation

    ERIC Educational Resources Information Center

    Luna, Esther; Folgueiras, Pilar

    2014-01-01

    Service learning is a methodology that promotes the participation of citizens in their community. This article presents a brief conceptualization of citizen participation, characteristics of service learning methodology, and validation of a programme that promotes service-learning projects. This validation highlights the suitability of this…

  7. Discriminant content validity: a quantitative methodology for assessing content of theory-based measures, with illustrative applications.

    PubMed

    Johnston, Marie; Dixon, Diane; Hart, Jo; Glidewell, Liz; Schröder, Carin; Pollard, Beth

    2014-05-01

    In studies involving theoretical constructs, it is important that measures have good content validity and that there is not contamination of measures by content from other constructs. While reliability and construct validity are routinely reported, to date, there has not been a satisfactory, transparent, and systematic method of assessing and reporting content validity. In this paper, we describe a methodology of discriminant content validity (DCV) and illustrate its application in three studies. Discriminant content validity involves six steps: construct definition, item selection, judge identification, judgement format, single-sample test of content validity, and assessment of discriminant items. In three studies, these steps were applied to a measure of illness perceptions (IPQ-R) and control cognitions. The IPQ-R performed well with most items being purely related to their target construct, although timeline and consequences had small problems. By contrast, the study of control cognitions identified problems in measuring constructs independently. In the final study, direct estimation response formats for theory of planned behaviour constructs were found to have as good DCV as Likert format. The DCV method allowed quantitative assessment of each item and can therefore inform the content validity of the measures assessed. The methods can be applied to assess content validity before or after collecting data to select the appropriate items to measure theoretical constructs. Further, the data reported for each item in Appendix S1 can be used in item or measure selection. Statement of contribution What is already known on this subject? There are agreed methods of assessing and reporting construct validity of measures of theoretical constructs, but not their content validity. Content validity is rarely reported in a systematic and transparent manner. What does this study add? The paper proposes discriminant content validity (DCV), a systematic and transparent method

  8. Development and Validation of a Photonumeric Scale for Evaluation of Volume Deficit of the Temple

    PubMed Central

    Jones, Derek; Hardas, Bhushan; Murphy, Diane K.; Donofrio, Lisa; Sykes, Jonathan M.; Carruthers, Alastair; Creutz, Lela; Marx, Ann; Dill, Sara

    2016-01-01

    BACKGROUND A validated scale is needed for objective and reproducible comparisons of temple appearance before and after aesthetic treatment in practice and clinical studies. OBJECTIVE To describe the development and validation of the 5-point photonumeric Allergan Temple Hollowing Scale. METHODS The scale was developed to include an assessment guide, verbal descriptors, morphed images, and real subject images for each grade. The clinical significance of a 1-point score difference was evaluated in a review of image pairs representing varying differences in severity. Interrater and intrarater reliability was evaluated in a live-subject validation study (N = 298) completed during 2 sessions occurring 3 weeks apart. RESULTS A score difference of ≥1 point was shown to reflect a clinically significant difference (mean [95% confidence interval] absolute score difference, 1.1 [0.94–1.26] for clinically different image pairs and 0.67 [0.51–0.83] for not clinically different pairs). Intrarater agreement between the 2 validation sessions was almost perfect (mean weighted kappa = 0.86). Interrater agreement was almost perfect during the second session (0.81, primary endpoint). CONCLUSION The Allergan Temple Hollowing Scale is a validated and reliable scale for physician rating of temple volume deficit. PMID:27661742
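
    Intra- and interrater agreement for an ordinal photonumeric scale of this kind is typically summarised with a weighted kappa. The snippet below is a minimal sketch using scikit-learn's cohen_kappa_score with linear weights; the ratings are invented and the weighting scheme is an assumption, as the abstract does not specify it.

```python
# Minimal sketch (illustrative ratings): weighted kappa for intra-rater agreement on a
# 5-point photonumeric scale; scikit-learn's cohen_kappa_score supports linear weights.
from sklearn.metrics import cohen_kappa_score

session_1 = [0, 1, 2, 2, 3, 4, 1, 3, 2, 0, 4, 3]   # same rater, same subjects, session 1
session_2 = [0, 1, 2, 3, 3, 4, 1, 2, 2, 0, 4, 3]   # ... and session 2, three weeks later

kappa_w = cohen_kappa_score(session_1, session_2, weights="linear")
print(f"linearly weighted kappa = {kappa_w:.2f}")
```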

  9. A validated methodology for the 3D reconstruction of cochlea geometries using human microCT images

    NASA Astrophysics Data System (ADS)

    Sakellarios, A. I.; Tachos, N. S.; Rigas, G.; Bibas, T.; Ni, G.; Böhnke, F.; Fotiadis, D. I.

    2017-05-01

    Accurate reconstruction of the inner ear is a prerequisite for the modelling and understanding of inner ear mechanics. In this study, we present a semi-automated methodology for accurate reconstruction of the major inner ear structures (scalae, basilar membrane, stapes and semicircular canals). For this purpose, high resolution microCT images of a human specimen were used. The segmentation methodology is based on an iterative level set algorithm which provides the borders of the structures of interest. An enhanced coupled level set method, which allows simultaneous labeling of multiple structures without overlapping regions, has been developed for this purpose. The marching cubes algorithm was applied to extract the surface from the segmented volume. The reconstructed geometries are then post-processed to improve the basilar membrane geometry so that it realistically represents physiologic dimensions. The final reconstructed model is compared to the available data from the literature. The results show that our generated inner ear structures are in good agreement with the published ones, while our approach is the most realistic in terms of the basilar membrane thickness and width reconstruction.

  10. Methodology for testing and validating knowledge bases

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.

  11. Simulation validation and management

    NASA Astrophysics Data System (ADS)

    Illgen, John D.

    1995-06-01

    Illgen Simulation Technologies, Inc. (ISTI), has been working on interactive verification and validation programs for the past six years and, as a result, has evolved a methodology that has been adopted and successfully implemented by a number of different verification and validation programs. This methodology employs a unique application of computer-assisted software engineering (CASE) tools to reverse engineer source code and produce analytical outputs (flow charts and tables) that aid the engineer/analyst in the verification and validation process. We have found that the use of CASE tools saves time, which equates to improvements in both schedule and cost. This paper describes the ISTI-developed methodology and how CASE tools are used to support it. Case studies are also discussed.

  12. Evaluation Methodology. The Evaluation Exchange. Volume 11, Number 2, Summer 2005

    ERIC Educational Resources Information Center

    Coffman, Julia, Ed.

    2005-01-01

    This is the third issue of "The Evaluation Exchange" devoted entirely to the theme of methodology, though every issue tries to identify new methodological choices, the instructive ways in which people have applied or combined different methods, and emerging methodological trends. For example, lately "theories of change" have gained almost…

  13. A Comprehensive Validation Methodology for Sparse Experimental Data

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
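
    The abstract does not reproduce the exact definitions of its two uncertainty metrics, so the sketch below only illustrates the general idea: compute a per-point relative uncertainty between model and experiment, then summarise it once cumulatively and once by its median. All numbers and the relative-difference definition are assumptions for illustration.

```python
# Minimal sketch (hypothetical cross sections): summarising model-vs-experiment differences
# with a cumulative metric and a median-based metric.
import numpy as np

experiment = np.array([112.0, 85.0, 240.0, 33.0, 150.0, 61.0])   # measured cross sections (mb)
model      = np.array([108.0, 92.0, 251.0, 30.0, 139.0, 66.0])   # model predictions (mb)

rel_unc = np.abs(model - experiment) / experiment                  # per-point relative uncertainty

median_unc = np.median(rel_unc)                                    # typical model-data discrepancy
within_10pct = np.mean(rel_unc <= 0.10)                            # cumulative view: share of points
                                                                   # agreeing to 10% or better
print(f"median relative uncertainty: {median_unc:.1%}")
print(f"fraction of points within 10%: {within_10pct:.0%}")
```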

  14. Did we describe what you meant? Findings and methodological discussion of an empirical validation study for a systematic review of reasons

    PubMed Central

    2014-01-01

    Background The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. Methods We emailed a questionnaire to the 64 corresponding authors of those papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations (“reason mentions”) that were identified by the review to represent a reason in a given author’s publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. Results We received 19 responses, from which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct, for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. Conclusions This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed SRR are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved? PMID:25262532

  15. Did we describe what you meant? Findings and methodological discussion of an empirical validation study for a systematic review of reasons.

    PubMed

    Mertz, Marcel; Sofaer, Neema; Strech, Daniel

    2014-09-27

    The systematic review of reasons is a new way to obtain comprehensive information about specific ethical topics. One such review was carried out for the question of why post-trial access to trial drugs should or need not be provided. The objective of this study was to empirically validate this review using an author check method. The article also reports on methodological challenges faced by our study. We emailed a questionnaire to the 64 corresponding authors of those papers that were assessed in the review of reasons on post-trial access. The questionnaire consisted of all quotations ("reason mentions") that were identified by the review to represent a reason in a given author's publication, together with a set of codings for the quotations. The authors were asked to rate the correctness of the codings. We received 19 responses, from which only 13 were completed questionnaires. In total, 98 quotations and their related codes in the 13 questionnaires were checked by the addressees. For 77 quotations (79%), all codings were deemed correct, for 21 quotations (21%), some codings were deemed to need correction. Most corrections were minor and did not imply a complete misunderstanding of the citation. This first attempt to validate a review of reasons leads to four crucial methodological questions relevant to the future conduct of such validation studies: 1) How can a description of a reason be deemed incorrect? 2) Do the limited findings of this author check study enable us to determine whether the core results of the analysed SRR are valid? 3) Why did the majority of surveyed authors refrain from commenting on our understanding of their reasoning? 4) How can the method for validating reviews of reasons be improved?

  16. Epistemological Dialogue of Validity: Building Validity in Educational and Social Research

    ERIC Educational Resources Information Center

    Cakir, Mustafa

    2012-01-01

    The notion of validity in the social sciences is evolving and is influenced by philosophy of science, critiques of objectivity, and epistemological debates. Methodology for validation of knowledge claims is diverse across different philosophies of science. In other words, the definition of validity and the way to establish it have evolved as…

  17. [Measurement of left atrial and ventricular volumes in real-time 3D echocardiography. Validation by nuclear magnetic resonance

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Qin, J. X.; White, R. D.; Thomas, J. D.

    2001-01-01

    The measurement of the left ventricular ejection fraction is important for the evaluation of cardiomyopathy and depends on the measurement of left ventricular volumes. There are no existing conventional echocardiographic means of measuring the true left atrial and ventricular volumes without mathematical approximations. The aim of this study was to test a new real-time 3-dimensional echocardiographic system for calculating left atrial and ventricular volumes in 40 patients after in vitro validation. The volumes of the left atrium and ventricle acquired from real-time 3-D echocardiography in the apical view were calculated in 7 sections parallel to the surface of the probe and compared with atrial (10 patients) and ventricular (30 patients) volumes calculated by nuclear magnetic resonance with the Simpson method, and with volumes of water in balloons placed in a cistern. Linear regression analysis showed an excellent correlation between the real volume of water in the balloons and the volumes given by real-time 3-dimensional echocardiography (y = 0.94x + 5.5, r = 0.99, p < 0.001, D = -10 +/- 4.5 ml). A good correlation was observed between real-time 3-dimensional echocardiography and nuclear magnetic resonance for the measurement of left atrial and ventricular volumes (y = 0.95x - 10, r = 0.91, p < 0.001, D = -14.8 +/- 19.5 ml and y = 0.87x + 10, r = 0.98, P < 0.001, D = -8.3 +/- 18.7 ml, respectively). The authors conclude that real-time three-dimensional echocardiography allows accurate measurement of left heart volumes, underlining the clinical potential of this new 3-D method.

  18. Validation of State Counts of Handicapped Children. Volume II--Estimation of the Number of Handicapped Children in Each State.

    ERIC Educational Resources Information Center

    Kaskowitz, David H.

    The booklet provides detailed estimates on handicapping conditions for school-aged populations. The figures are intended to help the federal government validate state child count data as required by P.L. 94-142, the Education for All Handicapped Children Act. Section I describes the methodology used to arrive at the estimates, and it identifies the…

  19. Understanding Skill in EVA Mass Handling. Volume 4; An Integrated Methodology for Evaluating Space Suit Mobility and Stability

    NASA Technical Reports Server (NTRS)

    McDonald, P. Vernon; Newman, Dava

    1999-01-01

    The empirical investigation of extravehicular activity (EVA) mass handling conducted on NASA's Precision Air-Bearing Floor led to a Phase I SBIR from JSC. The purpose of the SBIR was to design an innovative system for evaluating space suit mobility and stability in conditions that simulate EVA on the surface of the Moon or Mars. The approach we used to satisfy the Phase I objectives was based on a structured methodology for the development of human-systems technology. Accordingly, the project was broken down into a number of tasks and subtasks. In sequence, the major tasks were: 1) Identify missions and tasks that will involve EVA and resulting mobility requirements in the near and long term; 2) Assess possible methods for evaluating mobility of space suits during field-based EVA tests; 3) Identify requirements for behavioral evaluation by interacting with NASA stakeholders; 4) Identify necessary and sufficient technology for implementation of a mobility evaluation system; and 5) Prioritize and select technology solutions. The work conducted in these tasks is described in this final volume of the series on EVA mass handling. While prior volumes in the series focus on novel data-analytic techniques, this volume addresses technology that is necessary for minimally intrusive data collection and near-real-time data analysis and display.

  20. Development, repeatability and validity regarding energy and macronutrient intake of a semi-quantitative food frequency questionnaire: methodological considerations.

    PubMed

    Bountziouka, V; Bathrellou, E; Giotopoulou, A; Katsagoni, C; Bonou, M; Vallianou, N; Barbetseas, J; Avgerinos, P C; Panagiotakos, D B

    2012-08-01

    The aim of this work was to evaluate the repeatability and the validity of a food frequency questionnaire (FFQ), and to discuss the methodological framework of such procedures. The semi-quantitative FFQ included 69 questions regarding the frequency of consumption of all main food groups and beverages usually consumed, and 7 questions regarding eating behaviors. Five hundred individuals (37 ± 15 yrs, 38% males) were recruited for the repeatability process, while another 432 (46 ± 16 yrs, 40% males) also completed 3-Day Diaries (3DD) for the validation process. The repeatability of the FFQ was adequate for all food items tested (Kendall's tau-b: 0.26-0.67, p < 0.05) and for energy and macronutrient intake (energy-adjusted correlation coefficients ranged between 0.56 and 0.69, p < 0.05). Moderate validity of the FFQ was observed for "dairy products", "fruit", "alcohol" and "stimulants" (tau-b: 0.31-0.60, p < 0.05), whereas low agreement was shown for "starchy products", "legumes", "vegetables", "meat", "fish", "sweets", "eggs", and "fats and oils" (tau-b < 0.30, p < 0.05). The FFQ was also valid regarding energy and macronutrient intake. Sensitivity analyses by sex and BMI category (< or ≥25 kg/m(2)) showed similar validity of the FFQ for all food groups (apart from "fats and oils" intake), as well as for energy and nutrient intake. The proposed FFQ has proven repeatable and relatively valid for food intake, and could therefore be used for nutritional assessment purposes. Copyright © 2010 Elsevier B.V. All rights reserved.
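
    A minimal sketch of the repeatability statistic reported above (Kendall's tau-b between two administrations of the FFQ); the serving-frequency data are invented, and scipy.stats.kendalltau returns the tau-b variant by default.

```python
# Minimal sketch (toy data): test-retest repeatability of FFQ food-group frequencies
# with Kendall's tau-b.
from scipy.stats import kendalltau

ffq_round_1 = [3, 5, 2, 4, 1, 5, 3, 2, 4, 1]   # weekly servings category, administration 1
ffq_round_2 = [3, 4, 2, 4, 2, 5, 3, 2, 5, 1]   # same participants, administration 2

tau_b, p_value = kendalltau(ffq_round_1, ffq_round_2)
print(f"Kendall tau-b = {tau_b:.2f} (p = {p_value:.3f})")
```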

  1. Tunnel and Station Cost Methodology Volume II: Stations

    DOT National Transportation Integrated Search

    1981-01-01

    The main objective of this study was to develop a model for estimating the cost of subway station and tunnel construction. This report describes a cost estimating methodology for subway tunnels that can be used by planners, designers, owners, and gov...

  2. Design and validation of a methodology using the International Classification of Diseases, 9th Revision, to identify secondary conditions in people with disabilities.

    PubMed

    Chan, Leighton; Shumway-Cook, Anne; Yorkston, Kathryn M; Ciol, Marcia A; Dudgeon, Brian J; Hoffman, Jeanne M

    2005-05-01

    To design and validate a methodology that identifies secondary conditions using International Classification of Diseases, 9th Revision (ICD-9) codes. Secondary conditions were identified through a literature search and a survey of Washington State physiatrists. These conditions were translated into ICD-9 codes, and this list was then validated against a national sample of Medicare survey respondents with differing levels of mobility and activities of daily living (ADL) disability. National survey. Participants (N=9731) in the 1999 Medicare Current Beneficiary Survey with no, mild, moderate, and severe mobility and ADL disability. Not applicable. Percentage of survey respondents with a secondary condition. The secondary conditions were grouped into 4 categories: medical, psychosocial, musculoskeletal, and dysphagia related (problems associated with difficulty in swallowing). Our literature search and survey of 26 physiatrists identified 64 secondary conditions, including depression, decubitus ulcers, and deconditioning. Overall, 70.4% of all survey respondents were treated for a secondary condition. We found a significant relation between increasing mobility as well as ADL disability and increasing numbers of secondary conditions (chi-square test for trend, P <.001). This relation existed for all categories of secondary conditions: medical (chi-square test for trend, P <.001), psychosocial (chi-square test for trend, P <.001), musculoskeletal (chi-square test for trend, P <.001), and dysphagia related (chi-square test for trend, P <.001). We created a valid ICD-9-based methodology that identified secondary conditions in Medicare survey respondents and discriminated between people with different degrees of disability. This methodology will be useful for health services researchers who study the frequency and impact of secondary conditions.

  3. Are validated outcome measures used in distal radial fractures truly valid?

    PubMed Central

    Nienhuis, R. W.; Bhandari, M.; Goslings, J. C.; Poolman, R. W.; Scholtes, V. A. B.

    2016-01-01

    Objectives Patient-reported outcome measures (PROMs) are often used to evaluate the outcome of treatment in patients with distal radial fractures. Which PROM to select is often based on assessment of measurement properties, such as validity and reliability. Measurement properties are assessed in clinimetric studies, and results are often reviewed without considering the methodological quality of these studies. Our aim was to systematically review the methodological quality of clinimetric studies that evaluated measurement properties of PROMs used in patients with distal radial fractures, and to make recommendations for the selection of PROMs based on the level of evidence of each individual measurement property. Methods A systematic literature search was performed in PubMed, EMbase, CINAHL and PsycINFO databases to identify relevant clinimetric studies. Two reviewers independently assessed the methodological quality of the studies on measurement properties, using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Level of evidence (strong / moderate / limited / lacking) for each measurement property per PROM was determined by combining the methodological quality and the results of the different clinimetric studies. Results In all, 19 out of 1508 identified unique studies were included, in which 12 PROMs were rated. The Patient-rated wrist evaluation (PRWE) and the Disabilities of Arm, Shoulder and Hand questionnaire (DASH) were evaluated on most measurement properties. The evidence for the PRWE is moderate that its reliability, validity (content and hypothesis testing), and responsiveness are good. The evidence is limited that its internal consistency and cross-cultural validity are good, and its measurement error is acceptable. There is no evidence for its structural and criterion validity. The evidence for the DASH is moderate that its responsiveness is good. The evidence is limited that its reliability and the

  4. Eye-Tracking as a Tool in Process-Oriented Reading Test Validation

    ERIC Educational Resources Information Center

    Solheim, Oddny Judith; Uppstad, Per Henning

    2011-01-01

    The present paper addresses the continuous need for methodological reflection on how to validate inferences made on the basis of test scores. Validation is a process that requires many lines of evidence. In this article we discuss the potential of eye tracking methodology in process-oriented reading test validation. Methodological considerations…

  5. Validation of GEOLAND-2 Spot/vgt Albedo Products by Using Ceos Olive Methodology

    NASA Astrophysics Data System (ADS)

    Camacho de Coca, F.; Sanchez, J.; Schaaf, C.; Baret, F.; Weiss, M.; Cescatti, A.; Lacaze, R. N.

    2012-12-01

    This study evaluates the scientific merit of the global surface albedo products developed in the framework of the Geoland-2 project based on SPOT/VEGETATION observations. The methodology follows the OLIVE (On-Line Validation Exercise) approach supported by the CEOS Land Product Validation subgroup (calvalportal.ceos.org/cvp/web/olive). First, the spatial and temporal consistency of SPOT/VGT albedo products was assessed by intercomparison with reference global products (MODIS/Terra+Aqua and POLDER-3/PARASOL) for the period 2006-2007. A bulk statistical analysis over a global network of 420 homogeneous sites (BELMANIP-2) was performed and analyzed per biome type. Additional sites were included to study albedo under snow conditions. Second, the accuracy and realism of temporal variations were evaluated using a number of ground measurements from FLUXNET sites suitable for direct comparison to the co-located satellite data. Our results show that SPOT/VGT albedo products present a reliable spatial and temporal distribution of retrievals. The SPOT/VGT albedo agrees closely with MODIS, with a mean bias and RMSE for the shortwave black-sky albedo over BELMANIP-2 sites lower than 0.006 and 0.03 (13% in relative terms), respectively, and even better for snow-free pixels. Similar results were found for the white-sky albedo quantities. Discrepancies are larger when comparing with POLDER-3 products: for the shortwave black-sky albedo, a mean bias of -0.014 and RMSE of 0.04 (20%) were found. These overall performance figures are, however, land-cover dependent, and larger uncertainties were found over some biomes (or regions) or specific periods (e.g. winter in the northern hemisphere). The comparison of SPOT/VGT blue-sky albedo estimates with ground measurements (mainly over needle-leaf forest sites) shows an RMSE of 0.04 and a bias of 0.003 when only snow-free pixels are considered. Moreover, this work shows that the OLIVE tool is also suitable for validation of global albedo

  6. SANSMIC Validation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, Paula D.; Rudeen, David Keith; Lord, David L.

    2014-08-01

    SANSMIC is solution mining software that was developed and utilized by SNL in its role as geotechnical advisor to the US DOE SPR for planning purposes. Three SANSMIC leach modes - withdrawal, direct, and reverse leach - have been revalidated with multiple test cases for each mode. The withdrawal mode was validated using high quality data from recent leach activity while the direct and reverse modes utilized data from historical cavern completion reports. Withdrawal results compared very well with observed data, including the location and size of shelves due to string breaks, with relative leached volume differences ranging from 6 - 10% and relative radius differences from 1.5 - 3%. Profile comparisons for the direct mode were very good with relative leached volume differences ranging from 6 - 12% and relative radius differences from 5 - 7%. First, second, and third reverse configurations were simulated in order to validate SANSMIC over a range of relative hanging string and OBI locations. The first-reverse was simulated reasonably well with relative leached volume differences ranging from 1 - 9% and relative radius differences from 5 - 12%. The second-reverse mode showed the largest discrepancies in leach profile. Leached volume differences ranged from 8 - 12% and relative radius differences from 1 - 10%. In the third-reverse, relative leached volume differences ranged from 10 - 13% and relative radius differences were ~4%. Comparisons to historical reports were quite good, indicating that SANSMIC is essentially the same as documented and validated in the early 1980s.
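
    The relative differences quoted above are straightforward percentage differences between simulated and observed quantities; a minimal sketch with invented cavern figures:

```python
# Minimal sketch (hypothetical cavern figures): relative leached-volume and radius differences
# between a simulated and an observed cavern profile.
observed_volume, simulated_volume = 1.32e6, 1.43e6     # barrels leached
observed_radius, simulated_radius = 36.0, 37.1         # feet, at a given depth

rel_volume_diff = abs(simulated_volume - observed_volume) / observed_volume
rel_radius_diff = abs(simulated_radius - observed_radius) / observed_radius
print(f"relative leached volume difference: {rel_volume_diff:.1%}")
print(f"relative radius difference: {rel_radius_diff:.1%}")
```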

  7. A methodology for producing reliable software, volume 1

    NASA Technical Reports Server (NTRS)

    Stucki, L. G.; Moranda, P. B.; Foshee, G.; Kirchoff, M.; Omre, R.

    1976-01-01

    An investigation into the areas having an impact on producing reliable software including automated verification tools, software modeling, testing techniques, structured programming, and management techniques is presented. This final report contains the results of this investigation, analysis of each technique, and the definition of a methodology for producing reliable software.

  8. Preliminary Validation of the Small Aircraft Transportation System Higher Volume Operations (SATS HVO) Concept

    NASA Technical Reports Server (NTRS)

    Williams, Daniel; Consiglio, Maria; Murdoch, Jennifer; Adams, Catherine

    2004-01-01

    This document provides a preliminary validation of the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept for normal conditions. Initial results reveal that the concept provides reduced air traffic delays when compared to current operations, without increasing pilot workload. Characteristic of the SATS HVO concept is the establishment of a newly defined area of flight operations called a Self-Controlled Area (SCA), which would be activated by air traffic control (ATC) around designated non-towered, non-radar airports. During periods of poor visibility, SATS pilots would take responsibility for separation assurance between their aircraft and other similarly equipped aircraft in the SCA. Using onboard equipment and simple instrument flight procedures, they would then be better able to approach and land at the airport or depart from it. This concept would also require a new, ground-based automation system, typically located at the airport, that would provide appropriate sequencing information to the arriving aircraft. Further validation of the SATS HVO concept is required and is the subject of ongoing research and subsequent publications.

  9. Fault-tolerant clock synchronization validation methodology. [in computer systems

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
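
    The stochastic parameter discussed above, the probability that the assumed upper bound on clock read error is exceeded, can be estimated empirically from measured read errors. The sketch below illustrates the idea with simulated measurements; the bound, the error distribution, and the sample size are all assumptions.

```python
# Minimal sketch (simulated measurements): estimating the probability that the assumed
# upper bound on clock read error is exceeded, from a sample of measured read errors.
import random

random.seed(0)
assumed_bound_us = 50.0                                            # design assumption, microseconds
read_errors = [abs(random.gauss(0, 18)) for _ in range(10_000)]    # measured read errors (stand-in)

exceedances = sum(e > assumed_bound_us for e in read_errors)
p_exceed = exceedances / len(read_errors)
print(f"estimated P(read error > bound) = {p_exceed:.4f}")         # feeds the reliability analysis
```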

  10. Short-term energy outlook. Volume 2. Methodology

    NASA Astrophysics Data System (ADS)

    1983-05-01

    Recent changes in forecasting methodology for nonutility distillate fuel oil demand and for the near-term petroleum forecasts are discussed. The accuracy of previous short-term forecasts of most of the major energy sources published in the last 13 issues of the Outlook is evaluated. Macroeconomic and weather assumptions are included in this evaluation. Energy forecasts for 1983 are compared. Structural change in US petroleum consumption, the use of appropriate weather data in energy demand modeling, and petroleum inventories, imports, and refinery runs are discussed.

  11. Membranes with artificial free-volume for biofuel production

    NASA Astrophysics Data System (ADS)

    Petzetakis, Nikos; Doherty, Cara M.; Thornton, Aaron W.; Chen, X. Chelsea; Cotanda, Pepa; Hill, Anita J.; Balsara, Nitash P.

    2015-06-01

    Free-volume of polymers governs transport of penetrants through polymeric films. Control over free-volume is thus important for the development of better membranes for a wide variety of applications such as gas separations, pharmaceutical purifications and energy storage. To date, methodologies used to create materials with different amounts of free-volume are based primarily on chemical synthesis of new polymers. Here we report a simple methodology for generating free-volume based on the self-assembly of polyethylene-b-polydimethylsiloxane-b-polyethylene triblock copolymers. We have used this method to fabricate a series of membranes with identical compositions but with different amounts of free-volume. We use the term artificial free-volume to refer to the additional free-volume created by self-assembly. The effect of artificial free-volume on selective transport through the membranes was tested using butanol/water and ethanol/water mixtures due to their importance in biofuel production. We found that the introduction of artificial free-volume improves both alcohol permeability and selectivity.

  12. Validation of Body Volume Acquisition by Using Elliptical Zone Method.

    PubMed

    Chiu, C-Y; Pease, D L; Fawkner, S; Sanders, R H

    2016-12-01

    The elliptical zone method (E-Zone) can be used to obtain reliable body volume data, including total body volume and segmental volumes, with inexpensive and portable equipment. The purpose of this research was to assess the accuracy of body volume data obtained from E-Zone by comparing them with those acquired from the 3D photonic scanning method (3DPS). Seventeen male participants with diverse somatotypes were recruited. Each participant was scanned twice on the same day by a 3D whole-body scanner and photographed twice for the E-Zone analysis. The body volume data acquired from 3DPS were regarded as the reference against which the accuracy of the E-Zone was assessed. The relative technical error of measurement (TEM) of total body volume estimations was around 3% for E-Zone. E-Zone can estimate the segmental volumes of the upper torso, lower torso, thigh, shank, upper arm and lower arm accurately (relative TEM < 10%), but the accuracy for small segments, including the neck, hand and foot, was poor. In summary, E-Zone provides a reliable, inexpensive, portable, and simple method to obtain reasonable estimates of total body volume and to indicate segmental volume distribution. © Georg Thieme Verlag KG Stuttgart · New York.
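
    The relative technical error of measurement (TEM) quoted above is computed from duplicate measurements; a minimal sketch, with invented duplicate body-volume estimates, using the standard TEM formula sqrt(sum(d²)/2n) expressed as a percentage of the grand mean:

```python
# Minimal sketch (toy volumes): technical error of measurement (TEM) and relative TEM for
# duplicate E-Zone estimates of total body volume.
import numpy as np

trial_1 = np.array([72.4, 65.1, 80.2, 58.9, 69.7])   # body volume (L), first digitisation
trial_2 = np.array([73.0, 64.5, 79.1, 59.6, 70.4])   # body volume (L), second digitisation

d = trial_1 - trial_2
tem = np.sqrt((d ** 2).sum() / (2 * len(d)))                      # absolute TEM (L)
relative_tem = 100 * tem / np.concatenate([trial_1, trial_2]).mean()
print(f"TEM = {tem:.2f} L, relative TEM = {relative_tem:.1f} %")
```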

  13. A methodology to derive Synthetic Design Hydrographs for river flood management

    NASA Astrophysics Data System (ADS)

    Tomirotti, Massimo; Mignosa, Paolo

    2017-12-01

    The design of flood protection measures requires in many cases not only the estimation of peak discharges but also of flood volumes and their time distribution. A typical solution to this kind of problem is the formulation of Synthetic Design Hydrographs (SDHs). In this paper a methodology to derive SDHs is proposed on the basis of the estimation of the Flow Duration Frequency (FDF) reduction curve and of a Peak-Duration (PD) relationship, furnishing, respectively, the quantiles of the maximum average discharge and the average peak position for each duration. The methodology is intended to synthesize the main features of the historical floods into a single SDH for each return period. The shape of the SDH is not selected a priori but results from the behaviour of the FDF and PD curves, which makes it possible to account conveniently for the variability of the shapes of the observed hydrographs at the local time scale. The validation of the methodology is performed with reference to flood routing problems in reservoirs, lakes and rivers. The results obtained demonstrate the capability of the SDHs to describe the effects of different hydraulic systems on the statistical regime of floods, even in the presence of strong modifications induced on the probability distribution of peak flows.
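
    A minimal sketch of the kind of construction the abstract describes: discretise the FDF quantiles Q(D) over a set of durations, convert them to volumes, and distribute the incremental volume of each duration on either side of the peak according to a peak-position ratio. The numbers and the constant peak-position ratio are assumptions for illustration, not the paper's calibrated curves.

```python
# Minimal sketch (illustrative quantiles): a stepwise Synthetic Design Hydrograph whose maximum
# average discharge over each duration D matches an assumed FDF quantile Q(D).
import numpy as np

durations = np.array([1, 3, 6, 12, 24], dtype=float)          # hours
q_fdf     = np.array([420, 360, 300, 230, 160], dtype=float)  # max mean discharge (m3/s) per duration
r = 0.35                                                       # fraction of each duration before the peak

volumes = q_fdf * durations                                    # flood volume (m3/s * h) for each duration
ring_width = np.diff(np.concatenate(([0.0], durations)))       # extra duration added at each step (h)
ring_flow  = np.diff(np.concatenate(([0.0], volumes))) / ring_width   # discharge on the newly added slice

# Each "ring" extends the hydrograph by r*width before the peak and (1-r)*width after it, so the
# running mean over the nested window of duration D_k equals Q(D_k) by construction.
for d, w, q in zip(durations, ring_width, ring_flow):
    print(f"D = {d:4.0f} h: add {q:6.1f} m3/s over {r * w:4.1f} h (rising) and {(1 - r) * w:4.1f} h (falling)")
```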

  14. Flight data acquisition methodology for validation of passive ranging algorithms for obstacle avoidance

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.

    1990-01-01

    The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.

  15. Methodology for Software Reliability Prediction. Volume 2.

    DTIC Science & Technology

    1987-11-01

    The overall acquisition program shall include the resources, schedule, management, structure, and controls necessary to ensure that specified ... Independent verification/validation; programming team structure; educational level of team members; experience level of team members; methods used. Prediction or estimation parameter supported: software characteristics. Objectives: structured programming studies and Government procurement.

  16. Methodologies for validating ray-based forward model using finite element method in ultrasonic array data simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Nixon, Andrew; Barber, Tom; Budyn, Nicolas; Bevan, Rhodri; Croxford, Anthony; Wilcox, Paul

    2018-04-01

    In this paper, a methodology for using finite element (FE) models to validate a ray-based model in the simulation of full matrix capture (FMC) ultrasonic array data sets is proposed. The overall aim is to separate signal contributions from different interactions in the FE results so that each can be compared more easily with the corresponding component of the ray-based model results. This is achieved by combining the results from multiple FE models of the system of interest that include progressively more geometrical features while preserving the same mesh structure. It is shown that the proposed techniques allow the interactions from a large number of different ray paths to be isolated in the FE results and compared directly to the results from a ray-based forward model.
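
    One simple way to realise the differencing idea described above is to subtract, sample by sample, the time traces of an FE run without a given geometrical feature from those of a run that includes it, which leaves (to first order) only that feature's contribution. The sketch below uses synthetic traces as stand-ins for FE output; the pulse shape, arrival times, and amplitudes are assumptions.

```python
# Minimal sketch (synthetic traces): isolating one feature's contribution by differencing
# time-domain signals from two FE runs that share the same mesh.
import numpy as np

t = np.linspace(0, 20e-6, 2000)                                    # time base (s)
tone = lambda t0: np.sin(2 * np.pi * 5e6 * (t - t0)) * np.exp(-((t - t0) / 0.5e-6) ** 2)

trace_without_backwall = tone(6e-6)                                # FE model: front-wall echo only
trace_with_backwall    = tone(6e-6) + 0.4 * tone(14e-6)            # FE model: front wall + back wall

backwall_only = trace_with_backwall - trace_without_backwall       # isolated back-wall contribution,
                                                                    # directly comparable to the
                                                                    # ray-based prediction of that path
print("peak of isolated back-wall echo:", float(np.abs(backwall_only).max()))
```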

  17. External validation of change formulae in neuropsychology with neuroimaging biomarkers: a methodological recommendation and preliminary clinical data.

    PubMed

    Duff, Kevin; Suhrie, Kayla R; Dalley, Bonnie C A; Anderson, Jeffrey S; Hoffman, John M

    2018-06-08

    Within neuropsychology, a number of mathematical formulae (e.g. reliable change index, standardized regression based) have been used to determine if change across time has reliably occurred. When these formulae have been compared, they often produce different results, but 'different' results do not necessarily indicate which formulae are 'best.' The current study sought to further our understanding of change formulae by comparing them to clinically relevant external criteria (amyloid deposition and hippocampal volume). In a sample of 25 older adults with varying levels of cognitive intactness, participants were tested twice across one week with a brief cognitive battery. Seven different change scores were calculated for each participant. An amyloid PET scan (to get a composite of amyloid deposition) and an MRI (to get hippocampal volume) were also obtained. Deviation-based change formulae (e.g. simple discrepancy score, reliable change index with or without correction for practice effects) were all identical in their relationship to the two neuroimaging biomarkers, and all were non-significant. Conversely, regression-based change formulae (e.g. simple and complex indices) showed stronger relationships to amyloid deposition and hippocampal volume. These results highlight the need for external validation of the various change formulae used by neuropsychologists in clinical settings and research projects. The findings also preliminarily suggest that regression-based change formulae may be more relevant than deviation-based change formulae in this context.
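
    For context, the two families of change formulae compared above can be sketched as follows: a deviation-based reliable change index (RCI) corrected for practice effects, and a simple standardized regression-based (SRB) change score. The normative constants below are invented for illustration and are not the study's values.

```python
# Minimal sketch (illustrative norms): practice-adjusted RCI and a simple SRB change score
# for a single retested patient.
import math

baseline, retest = 24.0, 22.0                       # patient's scores at time 1 and time 2

# Deviation-based: RCI with correction for the mean practice effect.
control_sd_t1, test_retest_r, mean_practice = 3.0, 0.80, 1.0
sem = control_sd_t1 * math.sqrt(1 - test_retest_r)
se_diff = math.sqrt(2) * sem
rci = ((retest - baseline) - mean_practice) / se_diff

# Regression-based: z-score of the observed retest against the retest predicted from baseline.
slope, intercept, se_estimate = 0.85, 4.5, 2.2      # from a hypothetical normative regression
predicted_retest = slope * baseline + intercept
srb = (retest - predicted_retest) / se_estimate

print(f"RCI = {rci:.2f}, SRB z = {srb:.2f}")        # |z| > 1.645 often flags reliable change
```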

  18. A refined methodology for modeling volume quantification performance in CT

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Wilson, Joshua; Samei, Ehsan

    2014-03-01

    The utility of the CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on the reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the nonlinearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.

  19. Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI

    ERIC Educational Resources Information Center

    Forer, Barry; Zumbo, Bruno D.

    2011-01-01

    The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…

  20. Validating automated kidney stone volumetry in computed tomography and mathematical correlation with estimated stone volume based on diameter.

    PubMed

    Wilhelm, Konrad; Miernik, Arkadiusz; Hein, Simon; Schlager, Daniel; Adams, Fabian; Benndorf, Matthias; Fritz, Benjamin; Langer, Mathias; Hesse, Albrecht; Schoenthaler, Martin; Neubauer, Jakob

    2018-06-02

    To validate AutoMated UroLithiasis Evaluation Tool (AMULET) software for kidney stone volumetry and compare its performance to standard clinical practice. Maximum diameter and volume of 96 urinary stones were measured as the reference standard by three independent urologists. The same stones were positioned in an anthropomorphic phantom and CT scans were acquired in standard settings. Three independent radiologists blinded to the reference values took manual measurements of the maximum diameter and automatic measurements of maximum diameter and volume. An "expected volume" was calculated from the manual diameter measurements using the formula V = 4/3 πr³. 96 stones were analyzed in the study; we had initially aimed to assess 100, but nine were replaced during data acquisition because of crumbling and four had to be excluded because the automated measurement did not work. Mean reference maximum diameter was 13.3 mm (5.2-32.1 mm). Correlation coefficients among all measured outcomes were compared. The correlation of the manual and automatic diameter measurements with the reference was 0.98 and 0.91, respectively (p<0.001). Mean reference volume was 1200 mm³ (10-9000 mm³). The correlation of the "expected volume" and the automatically measured volume with the reference was 0.95 and 0.99, respectively (p<0.001). Patients' kidney stone burden is usually assessed according to maximum diameter. However, as most stones are not spherical, this entails a potential bias. Automated stone volumetry is possible and significantly more accurate than diameter-based volumetric calculations. To avoid bias in clinical trials, size should be measured as volume. However, automated diameter measurements are not as accurate as manual measurements.
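
    A minimal sketch of the "expected volume" calculation quoted above, assuming a perfectly spherical stone of the reported mean maximum diameter; it also illustrates why diameter-based volumes can be biased for non-spherical stones.

      import math

      def expected_volume_mm3(diameter_mm):
          """'Expected volume' assuming a spherical stone: V = 4/3 * pi * r^3."""
          r = diameter_mm / 2.0
          return (4.0 / 3.0) * math.pi * r ** 3

      # Illustrative: a stone with the mean reference maximum diameter of 13.3 mm
      print(round(expected_volume_mm3(13.3)))  # ~1232 mm^3 if the stone were a sphere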

  1. Membranes with artificial free-volume for biofuel production

    PubMed Central

    Petzetakis, Nikos; Doherty, Cara M.; Thornton, Aaron W.; Chen, X. Chelsea; Cotanda, Pepa; Hill, Anita J.; Balsara, Nitash P.

    2015-01-01

    Free-volume of polymers governs transport of penetrants through polymeric films. Control over free-volume is thus important for the development of better membranes for a wide variety of applications such as gas separations, pharmaceutical purifications and energy storage. To date, methodologies used to create materials with different amounts of free-volume are based primarily on chemical synthesis of new polymers. Here we report a simple methodology for generating free-volume based on the self-assembly of polyethylene-b-polydimethylsiloxane-b-polyethylene triblock copolymers. We have used this method to fabricate a series of membranes with identical compositions but with different amounts of free-volume. We use the term artificial free-volume to refer to the additional free-volume created by self-assembly. The effect of artificial free-volume on selective transport through the membranes was tested using butanol/water and ethanol/water mixtures due to their importance in biofuel production. We found that the introduction of artificial free-volume improves both alcohol permeability and selectivity. PMID:26104672

  2. Membranes with artificial free-volume for biofuel production

    DOE PAGES

    Petzetakis, Nikos; Doherty, Cara M.; Thornton, Aaron W.; ...

    2015-06-24

    Free-volume of polymers governs transport of penetrants through polymeric films. Control over free-volume is thus important for the development of better membranes for a wide variety of applications such as gas separations, pharmaceutical purifications and energy storage. To date, methodologies used to create materials with different amounts of free-volume are based primarily on chemical synthesis of new polymers. Here we report a simple methodology for generating free-volume based on the self-assembly of polyethylene-b-polydimethylsiloxane-b-polyethylene triblock copolymers. Here, we have used this method to fabricate a series of membranes with identical compositions but with different amounts of free-volume. We use the term artificial free-volume to refer to the additional free-volume created by self-assembly. The effect of artificial free-volume on selective transport through the membranes was tested using butanol/water and ethanol/water mixtures due to their importance in biofuel production. Moreover, we found that the introduction of artificial free-volume improves both alcohol permeability and selectivity.

  3. A Methodology for Validating Safety Heuristics Using Clinical Simulations: Identifying and Preventing Possible Technology-Induced Errors Related to Using Health Information Systems

    PubMed Central

    Borycki, Elizabeth; Kushniruk, Andre; Carvalho, Christopher

    2013-01-01

    Internationally, health information systems (HIS) safety has emerged as a significant concern for governments. Recently, research has emerged that has documented the ability of HIS to be implicated in the harm and death of patients. Researchers have attempted to develop methods that can be used to prevent or reduce technology-induced errors. Some researchers are developing methods that can be employed prior to systems release. These methods include the development of safety heuristics and clinical simulations. In this paper, we outline our methodology for developing safety heuristics specific to identifying the features or functions of a HIS user interface design that may lead to technology-induced errors. We follow this with a description of a methodological approach to validate these heuristics using clinical simulations. PMID:23606902

  4. Complicating Methodological Transparency

    ERIC Educational Resources Information Center

    Bridges-Rhoads, Sarah; Van Cleave, Jessica; Hughes, Hilary E.

    2016-01-01

    A historical indicator of the quality, validity, and rigor of qualitative research has been the documentation and disclosure of the behind-the-scenes work of the researcher. In this paper, we use what we call "methodological data" as a tool to complicate the possibility and desirability of such transparency. Specifically, we draw on our…

  5. Validating alternative methodologies to estimate the hydrological regime of temporary streams when flow data are unavailable

    NASA Astrophysics Data System (ADS)

    Llorens, Pilar; Gallart, Francesc; Latron, Jérôme; Cid, Núria; Rieradevall, Maria; Prat, Narcís

    2016-04-01

    Aquatic life in temporary streams is strongly conditioned by the temporal variability of the hydrological conditions that control the occurrence and connectivity of diverse mesohabitats. In this context, the software TREHS (Temporary Rivers' Ecological and Hydrological Status) has been developed, in the framework of the LIFE Trivers project, to help managers adequately implement the Water Framework Directive in this type of water body. TREHS, using the methodology described in Gallart et al (2012), defines six temporal 'aquatic states', based on the hydrological conditions representing different mesohabitats, for a given reach at a particular moment. Nevertheless, hydrological data for assessing the regime of temporary streams are often non-existent or scarce. The scarcity of flow data frequently makes it impossible to characterize the hydrological regimes of temporary streams and, as a consequence, to select the correct periods and methods to determine their ecological status. Because of its qualitative nature, the TREHS approach allows the use of alternative methodologies to assess the regime of temporary streams in the absence of observed flow data. However, to adapt TREHS to these qualitative data, both the temporal scheme (from monthly to seasonal) and the number of aquatic states (from 6 to 3) have been modified. Two alternative, complementary methodologies were tested within the TREHS framework to assess the regime of temporary streams: interviews and aerial photographs. All the gauging stations (13) belonging to the Catalan Internal Catchments (NE, Spain) with recurrent zero-flow periods were selected to validate both methodologies. On the one hand, unstructured interviews were carried out with inhabitants of villages and small towns near the gauging stations. Flow permanence metrics for input into TREHS were drawn from the notes taken during the interviews. On the other hand, the historical series of available aerial photographs (typically 10

  6. FIELD VALIDATION OF EXPOSURE ASSESSMENT MODELS. VOLUME 1. DATA

    EPA Science Inventory

    This is the first of two volumes describing work done to evaluate the PAL-DS model, a Gaussian diffusion code modified to account for dry deposition and settling. This first volume describes the experimental techniques employed to dispense, collect, and measure depositing (zinc s...

  7. Second Language Listening Strategy Research: Methodological Challenges and Perspectives

    ERIC Educational Resources Information Center

    Santos, Denise; Graham, Suzanne; Vanderplank, Robert

    2008-01-01

    This paper explores methodological issues related to research into second language listening strategies. We argue that a number of central questions regarding research methodology in this line of enquiry are underexamined, and we engage in the discussion of three key methodological questions: (1) To what extent is a verbal report a valid and…

  8. Hyperbolic reformulation of a 1D viscoelastic blood flow model and ADER finite volume schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montecinos, Gino I.; Müller, Lucas O.; Toro, Eleuterio F.

    2014-06-01

    The applicability of ADER finite volume methods to solve hyperbolic balance laws with stiff source terms in the context of well-balanced and non-conservative schemes is extended to solve a one-dimensional blood flow model for viscoelastic vessels, reformulated as a hyperbolic system, via a relaxation time. A criterion for selecting relaxation times is found and an empirical convergence rate assessment is carried out to support this result. The proposed methodology is validated by applying it to a network of viscoelastic vessels for which experimental and numerical results are available. The agreement between the results obtained in the present paper and those available in the literature is satisfactory. Key features of the present formulation and numerical methodologies, such as accuracy, efficiency and robustness, are fully discussed in the paper.

  9. Scalability and Validation of Big Data Bioinformatics Software.

    PubMed

    Yang, Andrian; Troup, Michael; Ho, Joshua W K

    2017-01-01

    This review examines two important aspects that are central to modern big data bioinformatics analysis - software scalability and validity. We argue that not only are the issues of scalability and validation common to all big data bioinformatics analyses, they can be tackled by conceptually related methodological approaches, namely divide-and-conquer (scalability) and multiple executions (validation). Scalability is defined as the ability for a program to scale based on workload. It has always been an important consideration when developing bioinformatics algorithms and programs. Nonetheless the surge of volume and variety of biological and biomedical data has posed new challenges. We discuss how modern cloud computing and big data programming frameworks such as MapReduce and Spark are being used to effectively implement divide-and-conquer in a distributed computing environment. Validation of software is another important issue in big data bioinformatics that is often ignored. Software validation is the process of determining whether the program under test fulfils the task for which it was designed. Determining the correctness of the computational output of big data bioinformatics software is especially difficult due to the large input space and complex algorithms involved. We discuss how state-of-the-art software testing techniques that are based on the idea of multiple executions, such as metamorphic testing, can be used to implement an effective bioinformatics quality assurance strategy. We hope this review will raise awareness of these critical issues in bioinformatics.
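
    A minimal sketch of a metamorphic test of the kind discussed above, under the assumed metamorphic relation that permuting the order of the input reads must not change the output; the pipeline function is a hypothetical stand-in, not a real tool's API.

      import random

      def count_gc(reads):
          """Stand-in for a bioinformatics pipeline step: total G/C bases across reads."""
          return sum(base in "GC" for read in reads for base in read)

      def metamorphic_permutation_test(pipeline, reads, trials=5, seed=0):
          """Metamorphic relation: shuffling the order of the input reads must not
          change the result. No oracle for the 'correct' value is required."""
          rng = random.Random(seed)
          reference = pipeline(reads)
          for _ in range(trials):
              shuffled = reads[:]
              rng.shuffle(shuffled)
              assert pipeline(shuffled) == reference, "metamorphic relation violated"
          return True

      reads = ["ACGTACGT", "GGGCCC", "ATATAT", "CGCGTA"]
      print(metamorphic_permutation_test(count_gc, reads))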

  10. Plasma volume methodology: Evans blue, hemoglobin-hematocrit, and mass density transformations

    NASA Technical Reports Server (NTRS)

    Greenleaf, J. E.; Hinghofer-Szalkay, H.

    1985-01-01

    Methods for measuring absolute levels and changes in plasma volume are presented along with derivations of pertinent equations. Reduction in variability of the Evans blue dye dilution technique using chromatographic column purification suggests that the day-to-day variability in the plasma volume in humans is less than + or - 20 ml. Mass density determination using the mechanical-oscillator technique provides a method for measuring vascular fluid shifts continuously for assessing the density of the filtrate, and for quantifying movements of protein across microvascular walls. Equations for the calculation of volume and density of shifted fluid are presented.
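
    For the hemoglobin-hematocrit transformation mentioned above, a commonly used form is the Dill and Costill relation; the sketch below uses illustrative pre/post values (hematocrit as a fraction, with no correction factors applied) and is not taken from the report itself.

      def plasma_volume_change_percent(hb_pre, hct_pre, hb_post, hct_post):
          """Dill & Costill-style transformation: percentage change in plasma volume
          estimated from pre/post hemoglobin (g/dL) and hematocrit (fraction)."""
          bv_ratio = hb_pre / hb_post                 # relative change in blood volume
          pv_pre = 1.0 - hct_pre                      # plasma fraction before
          pv_post = bv_ratio * (1.0 - hct_post)       # plasma fraction after, rescaled
          return 100.0 * (pv_post - pv_pre) / pv_pre

      # Illustrative values only
      print(round(plasma_volume_change_percent(14.0, 0.42, 15.0, 0.45), 1))  # about -11.5 %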

  11. Development of Probabilistic Rigid Pavement Design Methodologies for Military Airfields.

    DTIC Science & Technology

    1983-12-01

    Project 4A161102AT22, Task AO, Work Unit 009, "Methodology for Considering Material Variability in Pavement Design." OCE Project Monitor was Mr. S. S. Gillespie. Volume listing (from the report front matter): Volume I: State of the Art - Variability of Airfield Pavement Materials; Volume II: Mathematical Formulation of...; Volume IV: Probabilistic Analysis of Rigid Airfield Design by Elastic Layered Theory.

  12. Simplification and validation of a large volume polyurethane foam sampler for the analysis of persistent hydrophobic compounds in drinking water.

    PubMed

    Choi, J W; Lee, J H; Moon, B S; Kannan, K

    2008-08-01

    The use of a large volume polyurethane foam (PUF) sampler was validated for rapid extraction of persistent organic pollutants (POPs), such as polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs), in raw water and treated water from drinking water plants. To validate the recovery of target compounds in the sampling process, a (37)Cl-labeled standard was spiked into the 1st PUF plug prior to filtration. An accelerated solvent extraction method, as a pressurized liquid extractor (PLE), was optimized to extract the PUF plug. For sample preparation, tandem column chromatography (TCC) clean-up was used for rapid analysis. The recoveries of labeled compounds in the analytical method were 80-110% (n = 9). The optimized PUF-PLE-TCC method was applied in the analysis of raw water and treated potable water from seven drinking water plants in South Korea. The sample volume used was between 18 and 102 L for raw water at a flow rate of 0.4-2 L min(-1), and between 95 and 107 L for treated water at a flow rate of 1.5-2.2 L min(-1). The limit of quantitation (LOQ) was a function of sample volume and decreased with increasing sample volume. The LOQ of PCDD/Fs in raw waters analyzed by this method was 3-11 times lower than that described using a large-size disk-type solid phase extraction (SPE) method. The LOQs of PCDD/F congeners in raw water and treated water were 0.022-3.9 ng L(-1) and 0.018-0.74 ng L(-1), respectively. Octachlorinated dibenzo-p-dioxin (OCDD) was found in some raw water samples, although its concentrations were well below the tentative criterion set by the Japanese Environmental Ministry for drinking water. OCDD was below the LOQ in the treated drinking water.

  13. Advanced piloted aircraft flight control system design methodology. Volume 1: Knowledge base

    NASA Technical Reports Server (NTRS)

    Mcruer, Duane T.; Myers, Thomas T.

    1988-01-01

    The development of a comprehensive and eclectic methodology for conceptual and preliminary design of flight control systems is presented and illustrated. The methodology is focused on the design stages starting with the layout of system requirements and ending when some viable competing system architectures (feedback control structures) are defined. The approach is centered on the human pilot and the aircraft as both the sources of, and the keys to the solution of, many flight control problems. The methodology relies heavily on computational procedures which are highly interactive with the design engineer. To maximize effectiveness, these techniques, as selected and modified to be used together in the methodology, form a cadre of computational tools specifically tailored for integrated flight control system preliminary design purposes. While theory and associated computational means are an important aspect of the design methodology, the lore, knowledge and experience elements, which guide and govern applications, are critical features. This material is presented as summary tables, outlines, recipes, empirical data, lists, etc., which encapsulate a great deal of expert knowledge. Much of this is presented in topical knowledge summaries which are attached as Supplements. The composite of the supplements and the main body elements constitutes a first cut at a Mark 1 Knowledge Base for manned-aircraft flight control.

  14. Multivariate optimization and validation of an analytical methodology by RP-HPLC for the determination of losartan potassium in capsules.

    PubMed

    Bonfilio, Rudy; Tarley, César Ricardo Teixeira; Pereira, Gislaine Ribeiro; Salgado, Hérida Regina Nunes; de Araújo, Magali Benjamim

    2009-11-15

    This paper describes the optimization and validation of an analytical methodology for the determination of losartan potassium in capsules by HPLC using 2^(5-1) fractional factorial and Doehlert designs. This multivariate approach allows a considerable improvement in chromatographic performance using fewer experiments, without additional cost for columns or other equipment. The HPLC method utilized potassium phosphate buffer (pH 6.2; 58 mmol L(-1))-acetonitrile (65:35, v/v) as the mobile phase, pumped at a flow rate of 1.0 mL min(-1). An octylsilane column (100 mm x 4.6 mm i.d., 5 microm) maintained at 35 degrees C was used as the stationary phase. UV detection was performed at 254 nm. The method was validated according to the ICH guidelines, showing accuracy, precision (intra-day relative standard deviation (R.S.D.) and inter-day R.S.D. values <2.0%), selectivity, robustness and linearity (r=0.9998) over a concentration range from 30 to 70 mg L(-1) of losartan potassium. The limits of detection and quantification were 0.114 and 0.420 mg L(-1), respectively. The validated method may be used to quantify losartan potassium in capsules and to determine the stability of this drug.

  15. Methodology update for estimating volume to service flow ratio.

    DOT National Transportation Integrated Search

    2015-12-01

    Volume/service flow ratio (VSF) is calculated by the Highway Performance Monitoring System (HPMS) software as an indicator of peak hour congestion. It is an essential input to the Kentucky Transportation Cabinet's (KYTC) key planning applications, ...

  16. Site characterization methodology for aquifers in support of bioreclamation activities. Volume 2: Borehole flowmeter technique, tracer tests, geostatistics and geology. Final report, August 1987-September 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, S.C.

    1993-08-01

    This report discusses a field demonstration of a methodology for characterizing an aquifer's geohydrology in the detail required to design an optimum network of wells and/or infiltration galleries for bioreclamation systems. The project work was conducted on a 1-hectare test site at Columbus AFB, Mississippi. The technical report is divided into two volumes. Volume I describes the test site and the well network, the assumptions, and the application of equations that define groundwater flow to a well, the results of three large-scale aquifer tests, and the results of 160 single-pump tests. Volume II describes the bore hole flowmeter tests, the tracer tests, the geological investigations, the geostatistical analysis and the guidelines for using groundwater models to design bioreclamation systems. Site characterization, Hydraulic conductivity, Groundwater flow, Geostatistics, Geohydrology, Monitoring wells.

  17. Normalized pulse volume (NPV) derived photo-plethysmographically as a more valid measure of the finger vascular tone.

    PubMed

    Sawada, Y; Tanaka, G; Yamakoshi, K

    2001-05-01

    Normalized pulse volume (NPV) was advocated as a more valid measure for the assessment of finger vascular tone. Based on the optical model in the finger tip expressed by Lambert--Beer's law, NPV is expressed as Delta I(a)/I. Here, Delta I(a) is the intensity of pulsatile component superimposed on the transmitted light (I). Theoretically, NPV seems to be superior to the conventional pulse volume (PV; corresponding to Delta I(a)). Firstly, NPV is in direct proportion to Delta V(a), which is the pulsatile component of the arterial blood volume, in a more exact manner. Relatedly, NPV can be processed as if it is an absolute value. Secondly, the sensitivity of NPV during stressful stimulations is expected to be higher. These expectations were supported experimentally using 13 male students. Firstly, the correlation between cutaneous vascular resistance in the finger tip (CVR) and NPV was higher than that between CVR and PV among all the subjects, although there was not much difference between these correlations within each subject. Secondly, NPV decreased much more than PV during mental stress. Some limitations of the present study were addressed, including the point that certain factors can violate the direct proportional relationship of NPV and PV to Delta V(a).

  18. Estimating the volume of glaciers in the Himalayan-Karakoram region using different methods

    NASA Astrophysics Data System (ADS)

    Frey, H.; Machguth, H.; Huss, M.; Huggel, C.; Bajracharya, S.; Bolch, T.; Kulkarni, A.; Linsbauer, A.; Salzmann, N.; Stoffel, M.

    2014-12-01

    Ice volume estimates are crucial for assessing water reserves stored in glaciers. Due to its large glacier coverage, such estimates are of particular interest for the Himalayan-Karakoram (HK) region. In this study, different existing methodologies are used to estimate the ice reserves: three area-volume relations, one slope-dependent volume estimation method, and two ice-thickness distribution models are applied to a recent, detailed, and complete glacier inventory of the HK region, spanning the period 2000-2010 and revealing an ice coverage of 40 775 km2. An uncertainty and sensitivity assessment is performed to investigate the influence of the observed glacier area and important model parameters on the resulting total ice volume. Results of the two ice-thickness distribution models are validated with local ice-thickness measurements at six glaciers. The resulting ice volumes for the entire HK region range from 2955 to 4737 km3, depending on the approach. This range is lower than most previous estimates. Results from the ice-thickness distribution models and the slope-dependent thickness estimations agree well with measured local ice thicknesses. However, total volume estimates from area-related relations are larger than those from other approaches. The study provides evidence of the significant effect of the selected method on the results and underlines the importance of a careful and critical evaluation.
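
    One family of area-volume relations of the kind referred to above is commonly written as V = c * A^gamma and applied glacier by glacier; the coefficients in the sketch below are typical literature values for valley glaciers and are illustrative, not the calibration used in this study.

      def glacier_volume_km3(area_km2, c=0.034, gamma=1.375):
          """Volume-area scaling V = c * A**gamma, applied per glacier (not to the
          summed area of an inventory). c and gamma here are typical published
          values for valley glaciers and are only illustrative."""
          return c * area_km2 ** gamma

      # Illustrative: one hypothetical 10 km^2 glacier
      print(round(glacier_volume_km3(10.0), 3))   # roughly 0.8 km^3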

  19. Comparison of pre/post-operative CT image volumes to preoperative digitization of partial hepatectomies: a feasibility study in surgical validation

    NASA Astrophysics Data System (ADS)

    Dumpuri, Prashanth; Clements, Logan W.; Li, Rui; Waite, Jonathan M.; Stefansic, James D.; Geller, David A.; Miga, Michael I.; Dawant, Benoit M.

    2009-02-01

    Preoperative planning combined with image-guidance has shown promise towards increasing the accuracy of liver resection procedures. The purpose of this study was to validate one such preoperative planning tool for four patients undergoing hepatic resection. Preoperative computed tomography (CT) images acquired before surgery were used to identify tumor margins and to plan the surgical approach for resection of these tumors. Surgery was then performed with intraoperative digitization data acquired by an FDA-approved image-guided liver surgery system (Pathfinder Therapeutics, Inc., Nashville, TN). Within 5-7 days after surgery, post-operative CT image volumes were acquired. Registration of data within a common coordinate reference was achieved and preoperative plans were compared to the postoperative volumes. Semi-quantitative comparisons are presented in this work, and preliminary results indicate that significant liver regeneration/hypertrophy may be present in the postoperative CT images. This could challenge pre/post-operative CT volume change comparisons as a means to evaluate the accuracy of preoperative surgical plans.

  20. Mathematical modeling of elementary trapping-reduction processes in positron annihilation lifetime spectroscopy: methodology of Ps-to-positron trapping conversion

    NASA Astrophysics Data System (ADS)

    Shpotyuk, Ya; Cebulski, J.; Ingram, A.; Shpotyuk, O.

    2017-12-01

    Methodological possibilities of positron annihilation lifetime (PAL) spectroscopy in application to nanostructurized substances treated within a three-term fitting procedure are reconsidered to parameterize their atomic-deficient structural arrangement. In contrast to conventional three-term fitting analysis of the detected PAL spectra based on admixed positron trapping and positronium (Ps) decaying, the nanostructurization due to guest nanoparticles embedded in a host matrix is considered as producing modified trapping, which involves conversion between these channels. The developed approach, referred to as the x3-x2-coupling decomposition algorithm, allows estimation of the free volumes of interfacial voids responsible for positron trapping and of bulk lifetimes in nanoparticle-embedded substances. This methodology is validated using experimental data of Chakraverty et al. [Phys. Rev. B71 (2005) 024115] on a PAL study of composites formed by guest NiFe2O4 nanocrystals grown in a host SiO2 matrix.

  1. The Study of the Relationship between Probabilistic Design and Axiomatic Design Methodology. Volume 1

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chinyere; Onyebueke, Landon

    1996-01-01

    This program report is the final report covering all the work done on this project. The goal of this project is technology transfer of methodologies to improve design process. The specific objectives are: 1. To learn and understand the Probabilistic design analysis using NESSUS. 2. To assign Design Projects to either undergraduate or graduate students on the application of NESSUS. 3. To integrate the application of NESSUS into some selected senior level courses in Civil and Mechanical Engineering curricula. 4. To develop courseware in Probabilistic Design methodology to be included in a graduate level Design Methodology course. 5. To study the relationship between the Probabilistic design methodology and Axiomatic design methodology.

  2. Derivation and Validation of Supraglacial Lake Volumes on the Greenland Ice Sheet from High-Resolution Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Moussavi, Mahsa S.; Abdalati, Waleed; Pope, Allen; Scambos, Ted; Tedesco, Marco; MacFerrin, Michael; Grigsby, Shane

    2016-01-01

    Supraglacial meltwater lakes on the western Greenland Ice Sheet (GrIS) are critical components of its surface hydrology and surface mass balance, and they also affect its ice dynamics. Estimates of lake volume, however, are limited by the availability of in situ measurements of water depth, which in turn also limits the assessment of remotely sensed lake depths. Given the logistical difficulty of collecting physical bathymetric measurements, methods relying upon in situ data are generally restricted to small areas and thus their application to large-scale studies is difficult to validate. Here, we produce and validate spaceborne estimates of supraglacial lake volumes across a relatively large area (1250 km(exp 2)) of west Greenland's ablation region using data acquired by the WorldView-2 (WV-2) sensor, making use of both its stereo-imaging capability and its meter-scale resolution. We employ spectrally-derived depth retrieval models, which are either based on absolute reflectance (single-channel model) or a ratio of spectral reflectances in two bands (dual-channel model). These models are calibrated by using WV-2 multispectral imagery acquired early in the melt season and depth measurements from a high-resolution WV-2 DEM over the same lake basins when devoid of water. The calibrated models are then validated with different lakes in the area, for which we determined depths. Lake depth estimates based on measurements recorded in WV-2's blue (450-510 nm), green (510-580 nm), and red (630-690 nm) bands and dual-channel modes (blue/green, blue/red, and green/red band combinations) had near-zero bias, an average root-mean-squared deviation of 0.4 m (relative to post-drainage DEMs), and an average volumetric error below 1%. The approach outlined in this study - image-based calibration of depth-retrieval models - significantly improves spaceborne supraglacial bathymetry retrievals, which are completely independent of in situ measurements.
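
    A minimal sketch of how a dual-channel depth-retrieval model of this kind can be calibrated, assuming the commonly used linear dependence of depth on the logarithm of a band ratio; the reflectance ratios and depths below are hypothetical and are not the WV-2 calibration data used in the study.

      import numpy as np

      def fit_dual_channel_model(log_ratio, depth_m):
          """Fit a dual-channel depth-retrieval model of the (commonly assumed) form
          depth = a + b * ln(R_band1 / R_band2), calibrated against known depths
          (here, DEM-derived depths of basins mapped when empty)."""
          b, a = np.polyfit(log_ratio, depth_m, 1)   # slope b, intercept a
          return a, b

      # Hypothetical calibration data: blue/green reflectance ratios and depths (m)
      ratio = np.array([1.05, 1.2, 1.5, 1.9, 2.4])
      depth = np.array([0.3, 1.0, 2.1, 3.4, 4.6])
      a, b = fit_dual_channel_model(np.log(ratio), depth)
      print(a + b * np.log(1.7))                     # predicted depth for a new pixel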

  3. The RAAF Logistics Study. Volume 4,

    DTIC Science & Technology

    1986-10-01

    Contents excerpt: Use of Issue-Based Root Definitions; Application of Soft Systems Methodology to Information Systems Analysis; Conclusion; List of Abbreviations. Reference: 'Management Control Systems', Journal of Applied Systems Analysis, Volume 6, 1979, pages 51 to 67. The soft systems methodology was developed to tackle... Although the soft systems methodology has many advantages which recommend it to this type of study area, it does not model the time evolution of a system

  4. Torso-Tank Validation of High-Resolution Electrogastrography (EGG): Forward Modelling, Methodology and Results.

    PubMed

    Calder, Stefan; O'Grady, Greg; Cheng, Leo K; Du, Peng

    2018-04-27

    Electrogastrography (EGG) is a non-invasive method for measuring gastric electrical activity. Recent simulation studies have attempted to extend the current clinical utility of the EGG, in particular by providing a theoretical framework for distinguishing specific gastric slow wave dysrhythmias. In this paper we implement an experimental setup called a 'torso-tank' with the aim of expanding and experimentally validating these previous simulations. The torso-tank was developed using an adult male torso phantom with 190 electrodes embedded throughout the torso. The gastric slow waves were reproduced using an artificial current source capable of producing 3D electrical fields. Multiple gastric dysrhythmias were reproduced based on high-resolution mapping data from cases of human gastric dysfunction (gastric re-entry, conduction blocks and ectopic pacemakers) in addition to normal test data. Each case was recorded and compared to the previously-presented simulated results. Qualitative and quantitative analyses were performed to define the accuracy showing [Formula: see text] 1.8% difference, [Formula: see text] 0.99 correlation, and [Formula: see text] 0.04 normalised RMS error between experimental and simulated findings. These results reaffirm previous findings and these methods in unison therefore present a promising morphological-based methodology for advancing the understanding and clinical applications of EGG.

  5. Validation of a 4D-PET Maximum Intensity Projection for Delineation of an Internal Target Volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callahan, Jason, E-mail: jason.callahan@petermac.org; Kron, Tomas; Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne

    2013-07-15

    Purpose: The delineation of internal target volumes (ITVs) in radiation therapy of lung tumors is currently performed by use of either free-breathing (FB) {sup 18}F-fluorodeoxyglucose-positron emission tomography-computed tomography (FDG-PET/CT) or 4-dimensional (4D)-CT maximum intensity projection (MIP). In this report we validate the use of 4D-PET-MIP for the delineation of target volumes in both a phantom and in patients. Methods and Materials: A phantom with 3 hollow spheres was prepared surrounded by air then water. The spheres and water background were filled with a mixture of {sup 18}F and radiographic contrast medium. A 4D-PET/CT scan was performed of the phantom while moving in 4 different breathing patterns using a programmable motion device. Nine patients with an FDG-avid lung tumor who underwent FB and 4D-PET/CT and >5 mm of tumor motion were included for analysis. The 3 spheres and patient lesions were contoured by 2 contouring methods (40% of maximum and PET edge) on the FB-PET, FB-CT, 4D-PET, 4D-PET-MIP, and 4D-CT-MIP. The concordance between the different contoured volumes was calculated using a Dice coefficient (DC). The difference in lung tumor volumes between FB-PET and 4D-PET volumes was also measured. Results: The average DC in the phantom using 40% and PET edge, respectively, was lowest for FB-PET/CT (DCAir = 0.72/0.67, DCBackground 0.63/0.62) and highest for 4D-PET/CT-MIP (DCAir = 0.84/0.83, DCBackground = 0.78/0.73). The average DC in the 9 patients using 40% and PET edge, respectively, was also lowest for FB-PET/CT (DC = 0.45/0.44) and highest for 4D-PET/CT-MIP (DC = 0.72/0.73). In the 9 lesions, the target volumes of the FB-PET using 40% and PET edge, respectively, were on average 40% and 45% smaller than the 4D-PET-MIP. Conclusion: A 4D-PET-MIP produces volumes with the highest concordance with 4D-CT-MIP across multiple breathing patterns and lesion sizes in both a phantom and among patients. Free-breathing PET/CT consistently

  6. Enhancement of docosahexaenoic acid production by Schizochytrium SW1 using response surface methodology

    NASA Astrophysics Data System (ADS)

    Nazir, Mohd Yusuf Mohd; Al-Shorgani, Najeeb Kaid Nasser; Kalil, Mohd Sahaid; Hamid, Aidil Abdul

    2015-09-01

    In this study, three factors (fructose concentration, agitation speed and monosodium glutamate (MSG) concentration) were optimized to enhance DHA production by Schizochytrium SW1 using response surface methodology (RSM). Central composite design was applied as the experimental design and analysis of variance (ANOVA) was used to analyze the data. The experiments were conducted using 500 mL flasks with a 100 mL working volume at 30°C for 96 hours. The ANOVA revealed that the process was adequately represented by the quadratic model (p<0.0001) and that two of the factors, namely agitation speed and MSG concentration, significantly affected DHA production (p<0.005). The level of influence of each variable and a quadratic polynomial equation were obtained for DHA production by multiple regression analysis. The estimated optimum conditions for maximizing DHA production by SW1 were 70 g/L fructose, 250 rpm agitation speed and 12 g/L MSG. Consequently, the quadratic model was validated by applying the estimated optimum conditions, which confirmed the model validity, and 52.86% of DHA was produced.

  7. Developments in Sensitivity Methodologies and the Validation of Reactor Physics Calculations

    DOE PAGES

    Palmiotti, Giuseppe; Salvatores, Massimo

    2012-01-01

    The sensitivity methodologies have been a remarkable story when adopted in the reactor physics field. Sensitivity coefficients can be used for different objectives like uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluations of the representativity of an experiment with respect to a reference design configuration. A review of the methods used is provided, and several examples illustrate the success of the methodology in reactor physics. A new application as the improvement of nuclear basic parameters using integral experiments is also described.

  8. How to Assess the External Validity and Model Validity of Therapeutic Trials: A Conceptual Approach to Systematic Review Methodology

    PubMed Central

    2014-01-01

    Background. Evidence rankings do not consider equally internal (IV), external (EV), and model validity (MV) for clinical studies including complementary and alternative medicine/integrative medicine (CAM/IM) research. This paper describes this model and offers an EV assessment tool (EVAT©) for weighing studies according to EV and MV in addition to IV. Methods. An abbreviated systematic review methodology was employed to search, assemble, and evaluate the literature that has been published on EV/MV criteria. Standard databases were searched for keywords relating to EV, MV, and bias-scoring from inception to Jan 2013. Tools identified and concepts described were pooled to assemble a robust tool for evaluating these quality criteria. Results. This study assembled a streamlined, objective tool for evaluating the quality of EV/MV research that is more sensitive to CAM/IM research. Conclusion. Improved reporting on EV can help produce and provide information that will help guide policy makers, public health researchers, and other scientists in the selection, development, and improvement of their research-tested interventions. Overall, clinical studies with high EV have the potential to provide the most useful information about "real-world" consequences of health interventions. It is hoped that this novel tool, which considers IV, EV, and MV on equal footing, will better guide clinical decision making. PMID:24734111

  9. Statistical methodology: II. Reliability and validity assessment in study design, Part B.

    PubMed

    Karras, D J

    1997-02-01

    Validity measures the correspondence between a test and other purported measures of the same or similar qualities. When a reference standard exists, a criterion-based validity coefficient can be calculated. If no such standard is available, the concepts of content and construct validity may be used, but quantitative analysis may not be possible. The Pearson and Spearman tests of correlation are often used to assess the correspondence between tests, but do not account for measurement biases and may yield misleading results. Techniques that measure intertest differences may be more meaningful in validity assessment, and the kappa statistic is useful for analyzing categorical variables. Questionnaires often can be designed to allow quantitative assessment of reliability and validity, although this may be difficult. Inclusion of homogeneous questions is necessary to assess reliability. Analysis is enhanced by using Likert scales or similar techniques that yield ordinal data. Validity assessment of questionnaires requires careful definition of the scope of the test and comparison with previously validated tools.

  10. Validation study of an interpolation method for calculating whole lung volumes and masses from reduced numbers of CT-images in ponies.

    PubMed

    Reich, H; Moens, Y; Braun, C; Kneissl, S; Noreikat, K; Reske, A

    2014-12-01

    Quantitative computed tomographic analysis (qCTA) is an accurate but time intensive method used to quantify volume, mass and aeration of the lungs. The aim of this study was to validate a time efficient interpolation technique for application of qCTA in ponies. Forty-one thoracic computed tomographic (CT) scans obtained from eight anaesthetised ponies positioned in dorsal recumbency were included. Total lung volume and mass and their distribution into four compartments (non-aerated, poorly aerated, normally aerated and hyperaerated; defined based on the attenuation in Hounsfield Units) were determined for the entire lung from all 5 mm thick CT-images, 59 (55-66) per animal. An interpolation technique validated for use in humans was then applied to calculate qCTA results for lung volumes and masses from only 10, 12, and 14 selected CT-images per scan. The time required for both procedures was recorded. Results were compared statistically using the Bland-Altman approach. The bias ± 2 SD for total lung volume calculated from interpolation of 10, 12, and 14 CT-images was -1.2 ± 5.8%, 0.1 ± 3.5%, and 0.0 ± 2.5%, respectively. The corresponding results for total lung mass were -1.1 ± 5.9%, 0.0 ± 3.5%, and 0.0 ± 3.0%. The average time for analysis of one thoracic CT-scan using the interpolation method was 1.5-2 h compared to 8 h for analysis of all images of one complete thoracic CT-scan. The calculation of pulmonary qCTA data by interpolation from 12 CT-images was applicable for equine lung CT-scans and reduced the time required for analysis by 75%. Copyright © 2014 Elsevier Ltd. All rights reserved.
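
    A simplified stand-in for the interpolation idea described above (not the validated algorithm itself): cross-sectional lung areas measured on a reduced set of slices are linearly interpolated along the cranio-caudal axis and integrated to a whole-lung volume. Slice positions and areas below are hypothetical.

      import numpy as np

      def interpolated_lung_volume_ml(slice_pos_mm, lung_area_mm2, spacing_mm=5.0):
          """Interpolate lung cross-sectional area between the analysed slices and
          integrate along the cranio-caudal axis (trapezoidal rule); 1 mL = 1000 mm^3."""
          z = np.arange(slice_pos_mm[0], slice_pos_mm[-1] + spacing_mm, spacing_mm)
          area = np.interp(z, slice_pos_mm, lung_area_mm2)
          return np.trapz(area, z) / 1000.0

      # Hypothetical profile: 12 analysed slices over a 300 mm long thorax
      pos = np.linspace(0.0, 300.0, 12)
      areas = 9000.0 * np.sin(np.pi * pos / 300.0)   # illustrative area profile in mm^2
      print(round(interpolated_lung_volume_ml(pos, areas)))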

  11. Passenger rail vehicle safety assessment methodology. Volume I, Summary of safe performance limits.

    DOT National Transportation Integrated Search

    2000-04-01

    This report presents a methodology based on computer simulation that assesses the safe dynamic performance limits of commuter passenger vehicles. The methodology consists of determining the critical design parameters and characteristic properties of bo...

  12. QESA: Quarantine Extraterrestrial Sample Analysis Methodology

    NASA Astrophysics Data System (ADS)

    Simionovici, A.; Lemelle, L.; Beck, P.; Fihman, F.; Tucoulou, R.; Kiryukhina, K.; Courtade, F.; Viso, M.

    2018-04-01

    Our nondestructive, nm-sized, hyperspectral analysis methodology of combined X-ray/Raman/IR probes in BSL4 quarantine renders our patented mini-sample holder ideal for detecting extraterrestrial life. Our Stardust and Archean results validate it.

  13. Ares I-X Range Safety Simulation Verification and Analysis Independent Validation and Verification

    NASA Technical Reports Server (NTRS)

    Merry, Carl M.; Tarpley, Ashley F.; Craig, A. Scott; Tartabini, Paul V.; Brewer, Joan D.; Davis, Jerel G.; Dulski, Matthew B.; Gimenez, Adrian; Barron, M. Kyle

    2011-01-01

    NASA's Ares I-X vehicle launched on a suborbital test flight from the Eastern Range in Florida on October 28, 2009. To obtain approval for launch, a range safety final flight data package was generated to meet the data requirements defined in the Air Force Space Command Manual 91-710 Volume 2. The delivery included products such as a nominal trajectory, trajectory envelopes, stage disposal data and footprints, and a malfunction turn analysis. The Air Force's 45th Space Wing uses these products to ensure public and launch area safety. Due to the criticality of these data, an independent validation and verification effort was undertaken to ensure data quality and adherence to requirements. As a result, the product package was delivered with the confidence that independent organizations using separate simulation software generated data to meet the range requirements and yielded consistent results. This document captures the Ares I-X final flight data package verification and validation analysis, including the methodology used to validate and verify simulation inputs, execution, and results, and presents lessons learned during the process.

  14. Integrated payload and mission planning, phase 3. Volume 2: Logic/Methodology for preliminary grouping of spacelab and mixed cargo payloads

    NASA Technical Reports Server (NTRS)

    Rodgers, T. E.; Johnson, J. F.

    1977-01-01

    The logic and methodology for a preliminary grouping of Spacelab and mixed-cargo payloads are proposed in a form that can be readily coded into a computer program by NASA. The logic developed for this preliminary cargo grouping analysis is summarized. Principal input data include the NASA Payload Model, payload descriptive data, Orbiter and Spacelab capabilities, and NASA guidelines and constraints. The first step in the process is a launch interval selection in which the time interval for payload grouping is identified. Logic flow steps are then taken to group payloads and define flight configurations based on criteria that include dedication, volume, area, orbital parameters, pointing, g-level, mass, center of gravity, energy, power, and crew time.

  15. 1998 motor vehicle occupant safety survey. Volume 1, methodology report

    DOT National Transportation Integrated Search

    2000-03-01

    This is the Methodology Report for the 1998 Motor Vehicle Occupant Safety Survey. The survey is conducted on a biennial basis (initiated in 1994), and is administered by telephone to a randomly selected national sample. Two questionnaires are used, e...

  16. Current Concerns in Validity Theory.

    ERIC Educational Resources Information Center

    Kane, Michael

    Validity is concerned with the clarification and justification of the intended interpretations and uses of observed scores. It has not been easy to formulate a general methodology or set of principles for validation, but progress has been made, especially as the field has moved from relatively limited criterion-related models to sophisticated…

  17. Evaluation of methodologies for interpolation of data for hydrological modeling in glacierized basins with limited information

    NASA Astrophysics Data System (ADS)

    Muñoz, Randy; Paredes, Javier; Huggel, Christian; Drenkhan, Fabian; García, Javier

    2017-04-01

    The availability and consistency of data is a determining factor for the reliability of any hydrological model and its simulated results. Unfortunately, there are many regions worldwide where data are not available in the desired quantity and quality. The Santa River basin (SRB), located within a complex topographic and climatic setting in the tropical Andes of Peru, is a clear example of this challenging situation. A monitoring network of in-situ stations in the SRB recorded series of hydro-meteorological variables which finally ceased to operate in 1999. In the following years, several researchers evaluated and completed many of these series. This database was used by multiple research and policy-oriented projects in the SRB. However, hydroclimatic information remains limited, making it difficult to perform research, especially when dealing with the assessment of current and future water resources. In this context, this study evaluates different methodologies to interpolate temperature and precipitation data at a monthly time step, as well as ice volume data, in glacierized basins with limited data. The methodologies were evaluated for the Quillcay River, a tributary of the SRB, where hydro-meteorological data have been available from nearby monitoring stations since 1983. The study period was 1983-1999, with a validation period of 1993-1999. For the temperature series, the aim was to extend the observed data and interpolate them. NCEP reanalysis data were used to extend the observed series, either 1) using a simple correlation with multiple field stations, or 2) applying the altitudinal correction proposed in previous studies. The interpolation was then applied as a function of altitude. Both methodologies provide very similar results; by parsimony, the simple correlation is a viable choice. For the precipitation series, the aim was to interpolate observed data. Two methodologies were evaluated: 1) Inverse Distance Weighting, whose results underestimate the amount
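
    Sketches of two of the interpolation ingredients discussed above, under simple assumptions (a fixed power for the inverse-distance weights and a constant lapse rate for the altitudinal correction); the station coordinates, values and elevations are hypothetical, not the Quillcay data.

      import numpy as np

      def idw(x, y, values, xi, yi, power=2.0):
          """Inverse Distance Weighting: interpolate a value at (xi, yi) from station
          observations, weighting each station by 1/distance**power."""
          d = np.hypot(np.asarray(x) - xi, np.asarray(y) - yi)
          if np.any(d == 0):
              return float(np.asarray(values)[np.argmin(d)])
          w = 1.0 / d ** power
          return float(np.sum(w * np.asarray(values)) / np.sum(w))

      def temperature_with_lapse(t_station, z_station, z_target, lapse=-0.0065):
          """Altitudinal correction of temperature with a constant lapse rate
          (illustrative value of -6.5 degC per km)."""
          return t_station + lapse * (z_target - z_station)

      # Hypothetical monthly precipitation stations (km coordinates, mm/month)
      px = [0.0, 10.0, 25.0]; py = [0.0, 12.0, 5.0]; p = [80.0, 120.0, 95.0]
      print(round(idw(px, py, p, xi=12.0, yi=6.0), 1))
      print(round(temperature_with_lapse(t_station=8.0, z_station=3800, z_target=4500), 2))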

  18. New methodology to reconstruct in 2-D the cuspal enamel of modern human lower molars.

    PubMed

    Modesto-Mata, Mario; García-Campos, Cecilia; Martín-Francés, Laura; Martínez de Pinillos, Marina; García-González, Rebeca; Quintino, Yuliet; Canals, Antoni; Lozano, Marina; Dean, M Christopher; Martinón-Torres, María; Bermúdez de Castro, José María

    2017-08-01

    In recent years, different methodologies have been developed to reconstruct worn teeth. In this article, we propose a new 2-D methodology to reconstruct the worn enamel of lower molars. Our main goals are to reconstruct molars with a high level of accuracy when measuring relevant histological variables and to validate the methodology by calculating the errors associated with the measurements. This methodology is based on polynomial regression equations, and has been validated using two different dental variables: cuspal enamel thickness and crown height of the protoconid. In order to perform the validation process, simulated worn modern human molars were employed. The associated errors of the measurements were also estimated applying methodologies previously proposed by other authors. The mean percentage error estimated in reconstructed molars for these two variables, in comparison with their real values, is -2.17% for the cuspal enamel thickness of the protoconid and -3.18% for the crown height of the protoconid. This error significantly improves on the results of other methodologies, both in the interobserver error and in the accuracy of the measurements. The new methodology based on polynomial regressions can be confidently applied to the reconstruction of cuspal enamel of lower molars, as it improves the accuracy of the measurements and reduces the interobserver error. The present study shows that it is important to validate all methodologies in order to know the associated errors. This new methodology can be easily exported to other modern human populations, the human fossil record and forensic sciences. © 2017 Wiley Periodicals, Inc.

  19. Use of Machine Learning Algorithms to Propose a New Methodology to Conduct, Critique and Validate Urban Scale Building Energy Modeling

    NASA Astrophysics Data System (ADS)

    Pathak, Maharshi

    City administrators and real-estate developers have been setting rather aggressive energy efficiency targets. This, in turn, has led building science research groups across the globe to focus on urban scale building performance studies and the level of abstraction associated with the simulations of the same. The increasing maturity of stakeholders towards energy efficiency and creating comfortable working environments has led researchers to develop methodologies and tools for addressing policy-driven interventions, whether urban-level energy systems, buildings' operational optimization, or retrofit guidelines. Typically, these large-scale simulations are carried out by grouping buildings based on their design similarities, i.e. standardization of the buildings. Such an approach does not necessarily lead to potential working inputs which can make decision-making effective. To address this, a novel approach is proposed in the present study. The principal objective of this study is to propose, define and evaluate a methodology to utilize machine learning algorithms in defining representative building archetypes for Stock-level Building Energy Modeling (SBEM) which are based on an operational parameter database. The study uses Phoenix-climate-based CBECS-2012 survey microdata for analysis and validation. Using the database, parameter correlations are studied to understand the relation between input parameters and energy performance. Contrary to precedent, the study establishes that the energy performance is better explained by non-linear models. The non-linear behavior is explained by advanced learning algorithms. Based on these algorithms, the buildings at study are grouped into meaningful clusters. The cluster "medoid" (statistically the centroid, meaning the building that can be represented as the centroid of the cluster) is established statistically to identify the level of abstraction that is acceptable for the whole building energy
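
    A rough sketch of the cluster-plus-medoid idea described above (scikit-learn is assumed to be available; the feature matrix and cluster count are hypothetical): buildings are clustered on operational features, and for each cluster the real building nearest the cluster centre is kept as the representative archetype.

      import numpy as np
      from sklearn.cluster import KMeans

      def building_archetypes(features, n_clusters=4, seed=0):
          """Group buildings by operational features and return, for each cluster,
          the index of the real building closest to the cluster centre (its
          'medoid'), to serve as a representative archetype."""
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(features)
          medoids = []
          for k in range(n_clusters):
              members = np.where(km.labels_ == k)[0]
              d = np.linalg.norm(features[members] - km.cluster_centers_[k], axis=1)
              medoids.append(int(members[np.argmin(d)]))
          return km.labels_, medoids

      # Hypothetical feature matrix (e.g. floor area, EUI, operating hours, occupancy)
      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 4))
      labels, archetype_ids = building_archetypes(X)
      print(archetype_ids)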

  20. Methodology, Methods, and Metrics for Testing and Evaluating Augmented Cognition Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greitzer, Frank L.

    The augmented cognition research community seeks cognitive neuroscience-based solutions to improve warfighter performance by applying and managing mitigation strategies to reduce workload and improve the throughput and quality of decisions. The focus of augmented cognition mitigation research is to define, demonstrate, and exploit neuroscience and behavioral measures that support inferences about the warfighter’s cognitive state that prescribe the nature and timing of mitigation. A research challenge is to develop valid evaluation methodologies, metrics and measures to assess the impact of augmented cognition mitigations. Two considerations are external validity, which is the extent to which the results apply to operational contexts; and internal validity, which reflects the reliability of performance measures and the conclusions based on analysis of results. The scientific rigor of the research methodology employed in conducting empirical investigations largely affects the validity of the findings. External validity requirements also compel us to demonstrate operational significance of mitigations. Thus it is important to demonstrate effectiveness of mitigations under specific conditions. This chapter reviews some cognitive science and methodological considerations in designing augmented cognition research studies and associated human performance metrics and analysis methods to assess the impact of augmented cognition mitigations.

  1. Railroad Classification Yard Technology Manual: Volume II : Yard Computer Systems

    DOT National Transportation Integrated Search

    1981-08-01

    This volume (Volume II) of the Railroad Classification Yard Technology Manual documents the railroad classification yard computer systems methodology. The subjects covered are: functional description of process control and inventory computer systems,...

  2. Research methodology in recurrent pregnancy loss.

    PubMed

    Christiansen, Ole B

    2014-03-01

    The aim of this article is to highlight pitfalls in research methodology that may explain why studies in recurrent pregnancy loss (RPL) often provide very divergent results. It is hoped that insight into this issue may help clinicians decide which published studies are the most valid. It may help researchers to eliminate methodological flaws in future studies, which may hopefully come to some kind of agreement about the usefulness of diagnostic tests and treatments in RPL. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Atlas based brain volumetry: How to distinguish regional volume changes due to biological or physiological effects from inherent noise of the methodology.

    PubMed

    Opfer, Roland; Suppa, Per; Kepp, Timo; Spies, Lothar; Schippling, Sven; Huppertz, Hans-Jürgen

    2016-05-01

    Fully-automated regional brain volumetry based on structural magnetic resonance imaging (MRI) plays an important role in quantitative neuroimaging. In clinical trials as well as in clinical routine multiple MRIs of individual patients at different time points need to be assessed longitudinally. Measures of inter- and intrascanner variability are crucial to understand the intrinsic variability of the method and to distinguish volume changes due to biological or physiological effects from inherent noise of the methodology. To measure regional brain volumes an atlas based volumetry (ABV) approach was deployed using a highly elastic registration framework and an anatomical atlas in a well-defined template space. We assessed inter- and intrascanner variability of the method in 51 cognitively normal subjects and 27 Alzheimer dementia (AD) patients from the Alzheimer's Disease Neuroimaging Initiative by studying volumetric results of repeated scans for 17 compartments and brain regions. Median percentage volume differences of scan-rescans from the same scanner ranged from 0.24% (whole brain parenchyma in healthy subjects) to 1.73% (occipital lobe white matter in AD), with generally higher differences in AD patients as compared to normal subjects (e.g., 1.01% vs. 0.78% for the hippocampus). Minimum percentage volume differences detectable with an error probability of 5% were in the one-digit percentage range for almost all structures investigated, with most of them being below 5%. Intrascanner variability was independent of magnetic field strength. The median interscanner variability was up to ten times higher than the intrascanner variability. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Validation Process Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John E.; English, Christine M.; Gesick, Joshua C.

    This report documents the validation process as applied to projects awarded through Funding Opportunity Announcements (FOAs) within the U.S. Department of Energy Bioenergy Technologies Office (DOE-BETO). It describes the procedures used to protect and verify project data, as well as the systematic framework used to evaluate and track performance metrics throughout the life of the project. This report also describes the procedures used to validate the proposed process design, cost data, analysis methodologies, and supporting documentation provided by the recipients.

  5. Dosimetric accuracy of a treatment planning system for actively scanned proton beams and small target volumes: Monte Carlo and experimental validation

    NASA Astrophysics Data System (ADS)

    Magro, G.; Molinelli, S.; Mairani, A.; Mirandola, A.; Panizza, D.; Russo, S.; Ferrari, A.; Valvo, F.; Fossati, P.; Ciocca, M.

    2015-09-01

    This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo® TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus® chamber. An EBT3® film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS-calculated and MC-simulated values were mainly due to the different lateral spread modeling and were found to be related to the field-to-spot size ratio. The accuracy of the TPS proved to be clinically acceptable in all cases except for very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.

  6. Dosimetric accuracy of a treatment planning system for actively scanned proton beams and small target volumes: Monte Carlo and experimental validation.

    PubMed

    Magro, G; Molinelli, S; Mairani, A; Mirandola, A; Panizza, D; Russo, S; Ferrari, A; Valvo, F; Fossati, P; Ciocca, M

    2015-09-07

    This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo® TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus® chamber. An EBT3® film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS-calculated and MC-simulated values were mainly due to the different lateral spread modeling and were found to be related to the field-to-spot size ratio. The accuracy of the TPS proved to be clinically acceptable in all cases except for very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.

  7. Use of response surface methodology for development of new microwell-based spectrophotometric method for determination of atorvastatin calcium in tablets

    PubMed Central

    2012-01-01

    Background Response surface methodology by Box–Behnken design employing the multivariate approach enables substantial improvement in method development using fewer experiments, without wastage of large volumes of organic solvents, which leads to high analysis cost. This methodology has not been employed for development of a method for analysis of atorvastatin calcium (ATR-Ca). Results The present research study describes the use of response surface methodology in the optimization and validation of a new microwell-based UV-Visible spectrophotometric method for the determination of ATR-Ca in its tablets. By the use of quadratic regression analysis, equations were developed to describe the behavior of the response as simultaneous functions of the selected independent variables. Accordingly, the optimum conditions were determined, which included the concentration of 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ), the time of reaction and the temperature. The absorbance of the colored CT complex was measured at 460 nm by a microwell-plate absorbance reader. The method was validated, in accordance with ICH guidelines, for accuracy, precision, selectivity and linearity (r² = 0.9993) over the concentration range of 20–200 μg/ml. The assay was successfully applied to the analysis of ATR-Ca in its pharmaceutical dosage forms with good accuracy and precision. Conclusion The assay described herein has great practical value in the routine analysis of ATR-Ca in quality control laboratories, as it has high throughput and consumes a minimum volume of organic solvent (thus reducing the exposure of analysts to the toxic effects of organic solvents, an environmentally friendly "green" approach) and reduces the analysis cost by 50-fold. PMID:23146143
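
    For readers unfamiliar with the quadratic regression step of a Box–Behnken design, the sketch below fits a full second-order response-surface model to a three-factor design; it is a generic illustration, not the published method, and the factor levels and absorbance values are invented.

        # Illustrative sketch: quadratic response-surface fit for a 3-factor
        # Box-Behnken design (coded levels -1/0/+1). Responses are made up.
        import numpy as np
        from itertools import combinations

        def quadratic_design_matrix(X):
            """Columns: intercept, x_i, x_i^2, and x_i*x_j interactions."""
            n, k = X.shape
            cols = [np.ones(n)]
            cols += [X[:, i] for i in range(k)]
            cols += [X[:, i] ** 2 for i in range(k)]
            cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
            return np.column_stack(cols)

        # 12 edge points + 3 centre points (standard Box-Behnken layout for 3 factors)
        X = np.array([[-1,-1,0],[1,-1,0],[-1,1,0],[1,1,0],
                      [-1,0,-1],[1,0,-1],[-1,0,1],[1,0,1],
                      [0,-1,-1],[0,1,-1],[0,-1,1],[0,1,1],
                      [0,0,0],[0,0,0],[0,0,0]], float)
        y = np.array([0.41,0.55,0.47,0.62,0.38,0.52,0.49,0.60,
                      0.40,0.46,0.51,0.58,0.57,0.58,0.56])   # absorbance at 460 nm

        beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
        print("fitted quadratic coefficients:", np.round(beta, 3))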

  8. Development and Validation of a Combined Methodology for Assessing the Total Quality Control of Herbal Medicinal Products – Application to Oleuropein Preparations

    PubMed Central

    Lemonakis, Nikolaos; Gikas, Evagelos; Halabalaki, Maria; Skaltsounis, Alexios-Leandros

    2013-01-01

    Oleuropein (OE) is a secoiridoid glycoside, which occurs mostly in the Oleaceae family, presenting several pharmacological properties, including antioxidant, cardio-protective and anti-atherogenic effects. Based on these findings, OE is commercially available as a Herbal Medicinal Product (HMP) claimed for its antioxidant effects. As there are general provisions of the medicines regulatory bodies, e.g. the European Medicines Agency, the quality of HMPs must always be demonstrated. Therefore, a novel LC-MS methodology was developed and validated for the simultaneous quantification of OE and its main degradation product, hydroxytyrosol (HT), in the relevant OE-claimed HMPs. The internal standard (IS) methodology was employed and separation of OE, HT and IS was achieved on a C18 Fused Core column with a 3.1 min overall run time, employing the SIM method for analytical signal acquisition. The method was validated according to the International Conference on Harmonisation requirements and the results show adequate linearity (r² > 0.99) over a wide concentration range [0.1–15 μg/mL (n=12)] and an LLOQ value of 0.1 μg/mL for both OE and HT. Furthermore, as it would be beneficial to control the quality taking into account all the substances of the OE-claimed HMPs, a metabolomics-like approach has been developed and applied for the total quality control of the different preparations, employing UHPLC-HRMS-multivariate analysis (MVA). Four OE-claimed commercial HMPs were randomly selected and MVA similarity-based measurements were performed. The results showed that the examined samples could be differentiated according to their scores plot. Batch-to-batch reproducibility between samples of the same brand was also determined and found to be acceptable. Overall, the developed combined methodology has been found to be an efficient tool for monitoring the total quality of HMPs. Only one OE HMP has been found to be consistent to

  9. Quality in the Basic Grant Delivery System: Volume 3, Methodology.

    ERIC Educational Resources Information Center

    Advanced Technology, Inc., McLean, VA.

    The research methodology of a study to assess 1980-1981 award accuracy of the Basic Educational Opportunity Grants (BEOG), or Pell grants, is described. The study is the first stage of a three-stage quality control project. During the spring of 1981 a nationally representative sample of 305 public, private, and proprietary institutions was…

  10. Direct Measurement of Proximal Isovelocity Surface Area by Real-Time Three-Dimensional Color Doppler for Quantitation of Aortic Regurgitant Volume: An In Vitro Validation

    PubMed Central

    Pirat, Bahar; Little, Stephen H.; Igo, Stephen R.; McCulloch, Marti; Nosé, Yukihiko; Hartley, Craig J.; Zoghbi, William A.

    2012-01-01

    Objective The proximal isovelocity surface area (PISA) method is useful in the quantitation of aortic regurgitation (AR). We hypothesized that actual measurement of PISA provided with real-time 3-dimensional (3D) color Doppler yields more accurate regurgitant volumes than those estimated by 2-dimensional (2D) color Doppler PISA. Methods We developed a pulsatile flow model for AR with an imaging chamber in which interchangeable regurgitant orifices with defined shapes and areas were incorporated. An ultrasonic flow meter was used to calculate the reference regurgitant volumes. A total of 29 different flow conditions for 5 orifices with different shapes were tested at a rate of 72 beats/min. 2D PISA was calculated as 2πr², and 3D PISA was measured from 8 equidistant radial planes of the 3D PISA. Regurgitant volume was derived as PISA × aliasing velocity × time velocity integral of AR/peak AR velocity. Results Regurgitant volumes by flow meter ranged between 12.6 and 30.6 mL/beat (mean 21.4 ± 5.5 mL/beat). Regurgitant volumes estimated by 2D PISA correlated well with volumes measured by flow meter (r = 0.69); however, a significant underestimation was observed (y = 0.5x + 0.6). Correlation with flow meter volumes was stronger for 3D PISA-derived regurgitant volumes (r = 0.83); significantly less underestimation of regurgitant volumes was seen, with a regression line close to identity (y = 0.9x + 3.9). Conclusion Direct measurement of PISA is feasible, without geometric assumptions, using real-time 3D color Doppler. Calculation of aortic regurgitant volumes with 3D color Doppler using this methodology is more accurate than conventional 2D method with hemispheric PISA assumption. PMID:19168322
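
    The volumetric arithmetic quoted in the abstract is easy to reproduce; the sketch below evaluates the hemispheric 2D PISA formula and the regurgitant-volume expression with hypothetical Doppler values, purely to illustrate the calculation rather than the authors' workflow.

        # Illustrative sketch: RVol = PISA x aliasing velocity x VTI(AR) / peak AR velocity,
        # with the 2D hemispheric assumption PISA = 2*pi*r^2. All inputs are hypothetical.
        import math

        def pisa_2d(radius_cm):
            return 2.0 * math.pi * radius_cm ** 2              # cm^2, hemispheric shell

        def regurgitant_volume(pisa_cm2, aliasing_v_cm_s, vti_cm, peak_v_cm_s):
            flow_rate = pisa_cm2 * aliasing_v_cm_s             # cm^3/s at the aliasing shell
            eroa = flow_rate / peak_v_cm_s                     # effective orifice area, cm^2
            return eroa * vti_cm                               # regurgitant volume, mL/beat

        if __name__ == "__main__":
            pisa = pisa_2d(radius_cm=0.7)
            rvol = regurgitant_volume(pisa, aliasing_v_cm_s=38, vti_cm=80, peak_v_cm_s=420)
            print(f"2D PISA = {pisa:.2f} cm^2, regurgitant volume = {rvol:.1f} mL/beat")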

  11. Direct measurement of proximal isovelocity surface area by real-time three-dimensional color Doppler for quantitation of aortic regurgitant volume: an in vitro validation.

    PubMed

    Pirat, Bahar; Little, Stephen H; Igo, Stephen R; McCulloch, Marti; Nosé, Yukihiko; Hartley, Craig J; Zoghbi, William A

    2009-03-01

    The proximal isovelocity surface area (PISA) method is useful in the quantitation of aortic regurgitation (AR). We hypothesized that actual measurement of PISA provided with real-time 3-dimensional (3D) color Doppler yields more accurate regurgitant volumes than those estimated by 2-dimensional (2D) color Doppler PISA. We developed a pulsatile flow model for AR with an imaging chamber in which interchangeable regurgitant orifices with defined shapes and areas were incorporated. An ultrasonic flow meter was used to calculate the reference regurgitant volumes. A total of 29 different flow conditions for 5 orifices with different shapes were tested at a rate of 72 beats/min. 2D PISA was calculated as 2πr², and 3D PISA was measured from 8 equidistant radial planes of the 3D PISA. Regurgitant volume was derived as PISA × aliasing velocity × time velocity integral of AR/peak AR velocity. Regurgitant volumes by flow meter ranged between 12.6 and 30.6 mL/beat (mean 21.4 ± 5.5 mL/beat). Regurgitant volumes estimated by 2D PISA correlated well with volumes measured by flow meter (r = 0.69); however, a significant underestimation was observed (y = 0.5x + 0.6). Correlation with flow meter volumes was stronger for 3D PISA-derived regurgitant volumes (r = 0.83); significantly less underestimation of regurgitant volumes was seen, with a regression line close to identity (y = 0.9x + 3.9). Direct measurement of PISA is feasible, without geometric assumptions, using real-time 3D color Doppler. Calculation of aortic regurgitant volumes with 3D color Doppler using this methodology is more accurate than conventional 2D method with hemispheric PISA assumption.

  12. Nuclear Dynamics Consequence Analysis (NDCA) for the Disposal of Spent Nuclear Fuel in an Underground Geologic Repository--Volume 2: Methodology and Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, L.L.; Wilson, J.R.; Sanchez, L.C.

    1998-10-01

    The US Department of Energy Office of Environmental Management's (DOE/EM's) National Spent Nuclear Fuel Program (NSNFP), through a collaboration between Sandia National Laboratories (SNL) and Idaho National Engineering and Environmental Laboratory (INEEL), is conducting a systematic Nuclear Dynamics Consequence Analysis (NDCA) of the disposal of SNFs in an underground geologic repository sited in unsaturated tuff. This analysis is intended to provide interim guidance to the DOE for the management of the SNF while they prepare for final compliance evaluation. This report presents results from a Nuclear Dynamics Consequence Analysis (NDCA) that examined the potential consequences and risks of criticality during the long-term disposal of spent nuclear fuel owned by DOE-EM. This analysis investigated the potential of post-closure criticality, the consequences of a criticality excursion, and the probability frequency for post-closure criticality. The results of the NDCA are intended to provide the DOE-EM with a technical basis for measuring risk which can be used for screening arguments to eliminate post-closure criticality FEPs (features, events and processes) from consideration in the compliance assessment because of either low probability or low consequences. This report is composed of an executive summary (Volume 1), the methodology and results of the NDCA (Volume 2), and the applicable appendices (Volume 3).

  13. Validation of SMAP surface soil moisture products with core validation sites

    USDA-ARS?s Scientific Manuscript database

    The NASA Soil Moisture Active Passive (SMAP) mission has utilized a set of core validation sites as the primary methodology in assessing the soil moisture retrieval algorithm performance. Those sites provide well-calibrated in situ soil moisture measurements within SMAP product grid pixels for diver...

  14. A parallel multi-domain solution methodology applied to nonlinear thermal transport problems in nuclear fuel pins

    DOE PAGES

    Philip, Bobby; Berrill, Mark A.; Allu, Srikanth; ...

    2015-01-26

    We describe an efficient and nonlinearly consistent parallel solution methodology for solving coupled nonlinear thermal transport problems that occur in nuclear reactor applications over hundreds of individual 3D physical subdomains. Efficiency is obtained by leveraging knowledge of the physical domains, the physics on individual domains, and the couplings between them for preconditioning within a Jacobian Free Newton Krylov method. Details of the computational infrastructure that enabled this work, namely the open source Advanced Multi-Physics (AMP) package developed by the authors are described. The details of verification and validation experiments, and parallel performance analysis in weak and strong scaling studies demonstrating the achieved efficiency of the algorithm are presented. Moreover, numerical experiments demonstrate that the preconditioner developed is independent of the number of fuel subdomains in a fuel rod, which is particularly important when simulating different types of fuel rods. Finally, we demonstrate the power of the coupling methodology by considering problems with couplings between surface and volume physics and coupling of nonlinear thermal transport in fuel rods to an external radiation transport code.

  15. Validation of bending tests by nanoindentation for micro-contact analysis of MEMS switches

    NASA Astrophysics Data System (ADS)

    Broue, Adrien; Fourcade, Thibaut; Dhennin, Jérémie; Courtade, Frédéric; Charvet, Pierre–Louis; Pons, Patrick; Lafontan, Xavier; Plana, Robert

    2010-08-01

    Research on contact characterization for microelectromechanical system (MEMS) switches has been driven by the necessity to reach a high-reliability level for micro-switch applications. One of the main failures observed during cycling of the devices is the increase of the electrical contact resistance. The key issue is the electromechanical behaviour of the materials used at the contact interface where the current flows through. Metal contact switches have a large and complex set of failure mechanisms according to the current level. This paper demonstrates the validity of a new methodology using a commercial nanoindenter coupled with electrical measurements on test vehicles specially designed to investigate the micro-scale contact physics. Dedicated validation tests and modelling are performed to assess the introduced methodology by analyzing the gold contact interface with 5 µm2 square bumps at various current levels. Contact temperature rise is measured, which affects the mechanical properties of the contact materials and modifies the contact topology. In addition, the data provide a better understanding of micro-contact behaviour related to the impact of current at low- to medium-power levels. This article was originally submitted for the special section 'Selected papers from the 20th Micromechanics Europe Workshop (MME 09) (Toulouse, France, 20-22 September 2009)', Journal of Micromechanics and Microengineering, volume 20, issue 6.

  16. Validation Methodology to Allow Simulated Peak Reduction and Energy Performance Analysis of Residential Building Envelope with Phase Change Materials: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.

    2012-08-01

    Phase change materials (PCM) represent a potential technology to reduce peak loads and HVAC energy consumption in residential buildings. This paper summarizes NREL efforts to obtain accurate energy simulations when PCMs are modeled in residential buildings: the overall methodology to verify and validate Conduction Finite Difference (CondFD) and PCM algorithms in EnergyPlus is presented in this study. It also shows preliminary results of three residential building enclosure technologies containing PCM: PCM-enhanced insulation, PCM impregnated drywall and thin PCM layers. The results are compared based on predicted peak reduction and energy savings using two algorithms in EnergyPlus: the PCM and Conduction Finite Difference (CondFD) algorithms.

  17. Benefit-Cost Analysis of Integrated Paratransit Systems : Volume 6. Technical Appendices.

    DOT National Transportation Integrated Search

    1979-09-01

    This last volume, includes five technical appendices which document the methodologies used in the benefit-cost analysis. They are the following: Scenario analysis methodology; Impact estimation; Example of impact estimation; Sensitivity analysis; Agg...

  18. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews.

    PubMed

    Shea, Beverley J; Grimshaw, Jeremy M; Wells, George A; Boers, Maarten; Andersson, Neil; Hamel, Candyce; Porter, Ashley C; Tugwell, Peter; Moher, David; Bouter, Lex M

    2007-02-15

    Our objective was to develop an instrument to assess the methodological quality of systematic reviews, building upon previous tools, empirical evidence and expert consensus. A 37-item assessment tool was formed by combining 1) the enhanced Overview Quality Assessment Questionnaire (OQAQ), 2) a checklist created by Sacks, and 3) three additional items recently judged to be of methodological importance. This tool was applied to 99 paper-based and 52 electronic systematic reviews. Exploratory factor analysis was used to identify underlying components. The results were considered by methodological experts using a nominal group technique aimed at item reduction and design of an assessment tool with face and content validity. The factor analysis identified 11 components. From each component, one item was selected by the nominal group. The resulting instrument was judged to have face and content validity. A measurement tool for the 'assessment of multiple systematic reviews' (AMSTAR) was developed. The tool consists of 11 items and has good face and content validity for measuring the methodological quality of systematic reviews. Additional studies are needed with a focus on the reproducibility and construct validity of AMSTAR, before strong recommendations can be made on its use.

  19. Noninvasive evaluation of left ventricular elastance according to pressure-volume curves modeling in arterial hypertension.

    PubMed

    Bonnet, Benjamin; Jourdan, Franck; du Cailar, Guilhem; Fesler, Pierre

    2017-08-01

    End-systolic left ventricular (LV) elastance (Ees) has been previously calculated and validated invasively using LV pressure-volume (P-V) loops. Noninvasive methods have been proposed, but clinical application remains complex. The aims of the present study were to 1) estimate Ees according to modeling of the LV P-V curve during ejection ("ejection P-V curve" method) and validate our method with existing published LV P-V loop data and 2) test the clinical applicability of noninvasively detecting a difference in Ees between normotensive and hypertensive subjects. On the basis of the ejection P-V curve and a linear relationship between elastance and time during ejection, we used a nonlinear least-squares method to fit the pressure waveform. We then computed the slope and intercept of time-varying elastance as well as the volume intercept (V0). As a validation, 22 P-V loops obtained from previous invasive studies were digitized and analyzed using the ejection P-V curve method. To test clinical applicability, ejection P-V curves were obtained from 33 hypertensive subjects and 32 normotensive subjects with carotid tonometry and real-time three-dimensional echocardiography during the same procedure. A good univariate relationship (r² = 0.92, P < 0.005) and good limits of agreement were found between the invasive calculation of Ees and our new proposed ejection P-V curve method. In hypertensive patients, an increase in arterial elastance (Ea) was compensated by a parallel increase in Ees without change in Ea/Ees. In addition, the clinical reproducibility of our method was similar to that of another noninvasive method. In conclusion, Ees and V0 can be estimated noninvasively from modeling of the P-V curve during ejection. This approach was found to be reproducible and sensitive enough to detect an expected increase in LV contractility in hypertensive patients. Because of its noninvasive nature, this methodology may have clinical implications in
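
    The central fitting step can be illustrated in a few lines: assuming elastance varies linearly with time during ejection, the ejection-phase pressure can be written as P(t) = (a + b t)(V(t) - V0) and the three parameters fitted by nonlinear least squares. The sketch below uses synthetic waveforms and is not the authors' implementation.

        # Illustrative sketch: fit a linear time-varying elastance model to
        # ejection-phase pressure-volume samples. Waveforms are synthetic.
        import numpy as np
        from scipy.optimize import curve_fit

        def model(X, a, b, v0):
            t, v = X
            return (a + b * t) * (v - v0)

        t = np.linspace(0.0, 0.30, 30)                 # ejection time, s
        v = 120.0 - 200.0 * t                          # LV volume falling 120 -> 60 mL
        p = (1.0 + 6.0 * t) * (v - 15.0)               # "true" a=1, b=6, V0=15 mL
        p = p + np.random.default_rng(0).normal(0, 2, t.size)   # measurement noise

        (a, b, v0), _ = curve_fit(model, (t, v), p, p0=(1.0, 5.0, 10.0))
        ees = a + b * t[-1]                            # elastance at end ejection ~ Ees
        print(f"a={a:.2f}, b={b:.2f}, V0={v0:.1f} mL, Ees~{ees:.2f} mmHg/mL")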

  20. An innovative iterative thresholding algorithm for tumour segmentation and volumetric quantification on SPECT images: Monte Carlo-based methodology and validation.

    PubMed

    Pacilio, M; Basile, C; Shcherbinin, S; Caselli, F; Ventroni, G; Aragno, D; Mango, L; Santini, E

    2011-06-01

    Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging play an important role in the segmentation of functioning parts of organs or tumours, but an accurate and reproducible delineation is still a challenging task. In this work, an innovative iterative thresholding method for tumour segmentation has been proposed and implemented for a SPECT system. This method, which is based on experimental threshold-volume calibrations, implements also the recovery coefficients (RC) of the imaging system, so it has been called recovering iterative thresholding method (RIThM). The possibility to employ Monte Carlo (MC) simulations for system calibration was also investigated. The RIThM is an iterative algorithm coded using MATLAB: after an initial rough estimate of the volume of interest, the following calculations are repeated: (i) the corresponding source-to-background ratio (SBR) is measured and corrected by means of the RC curve; (ii) the threshold corresponding to the amended SBR value and the volume estimate is then found using threshold-volume data; (iii) new volume estimate is obtained by image thresholding. The process goes on until convergence. The RIThM was implemented for an Infinia Hawkeye 4 (GE Healthcare) SPECT/CT system, using a Jaszczak phantom and several test objects. Two MC codes were tested to simulate the calibration images: SIMIND and SimSet. For validation, test images consisting of hot spheres and some anatomical structures of the Zubal head phantom were simulated with SIMIND code. Additional test objects (flasks and vials) were also imaged experimentally. Finally, the RIThM was applied to evaluate three cases of brain metastases and two cases of high grade gliomas. Comparing experimental thresholds and those obtained by MC simulations, a maximum difference of about 4% was found, within the errors (±2% and ±5%, for volumes ≥ 5 ml or < 5 ml, respectively). Also for the RC data, the comparison showed
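
    The iterative cycle described above is straightforward to express in code. The sketch below is a schematic of a recovering iterative thresholding loop, with placeholder calibration curves standing in for the measured (or Monte Carlo derived) recovery-coefficient and threshold-volume data; it is not the authors' MATLAB implementation.

        # Schematic sketch of a RIThM-like loop. The calibration functions below
        # are placeholders; a real system would use measured/MC-derived curves.
        import numpy as np

        def recovery_coefficient(volume_ml):                 # placeholder RC curve
            return volume_ml / (volume_ml + 2.0)

        def threshold_fraction(sbr, volume_ml):              # placeholder threshold-volume curve
            return 0.35 + 0.2 / sbr + 0.05 / max(volume_ml, 0.5)

        def rithm(image, background, voxel_ml, v0=5.0, tol=0.02, max_iter=50):
            volume = v0
            for _ in range(max_iter):
                sbr = image.max() / background               # source-to-background ratio
                sbr_corrected = sbr / recovery_coefficient(volume)   # amend SBR with RC
                thr = threshold_fraction(sbr_corrected, volume) * image.max()
                new_volume = np.count_nonzero(image >= thr) * voxel_ml
                if abs(new_volume - volume) / max(volume, 1e-6) < tol:
                    return new_volume                        # converged
                volume = new_volume
            return volume

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            img = rng.normal(10, 1, (32, 32, 32))            # background counts
            img[12:18, 12:18, 12:18] += 60                   # hot "lesion"
            print("segmented volume (ml):", round(rithm(img, background=10.0, voxel_ml=0.1), 2))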

  1. Complexity, Representation and Practice: Case Study as Method and Methodology

    ERIC Educational Resources Information Center

    Miles, Rebecca

    2015-01-01

    While case study is considered a common approach to examining specific and particular examples in research disciplines such as law, medicine and psychology, in the social sciences case study is often treated as a lesser, flawed or undemanding methodology which is less valid, reliable or theoretically rigorous than other methodologies. Building on…

  2. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation. Volume 2, Part 2: Appendixes B, C, D and E

    NASA Technical Reports Server (NTRS)

    Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.

    1982-01-01

    The derivation of the equations is presented, the rate control algorithm described, and simulation methodologies summarized. A set of dynamics equations that can be used recursively to calculate forces and torques acting at the joints of an n link manipulator, given the manipulator joint rates, is derived. The equations are valid for any n link manipulator system with any kind of joints connected in any sequence. The equations of motion for the class of manipulators consisting of n rigid links interconnected by rotary joints are derived. A technique is outlined for reducing the system of equations to eliminate constraint torques. The linearized dynamics equations for an n link manipulator system are derived. The general n link linearized equations are then applied to a two link configuration. The coordinated rate control algorithm used to compute individual joint rates when given end effector rates is described. A short discussion of simulation methodologies is presented.
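
    As a minimal illustration of the coordinated rate control idea summarized here, the sketch below computes joint rates from commanded end-effector rates through the pseudo-inverse of the Jacobian for a planar two-link arm; the arm geometry and rates are arbitrary examples, not the configuration studied in the report.

        # Illustrative sketch: resolved-rate control step, qdot = pinv(J(q)) @ xdot,
        # for a planar two-link manipulator. Link lengths and rates are arbitrary.
        import numpy as np

        def jacobian_2link(q, l1=1.0, l2=0.8):
            q1, q2 = q
            return np.array([
                [-l1*np.sin(q1) - l2*np.sin(q1+q2), -l2*np.sin(q1+q2)],
                [ l1*np.cos(q1) + l2*np.cos(q1+q2),  l2*np.cos(q1+q2)],
            ])

        def joint_rates(q, xdot):
            return np.linalg.pinv(jacobian_2link(q)) @ np.asarray(xdot, float)

        if __name__ == "__main__":
            q = np.deg2rad([30.0, 45.0])                     # joint angles
            qdot = joint_rates(q, xdot=[0.1, 0.0])           # 0.1 m/s along x
            print("joint rates (rad/s):", np.round(qdot, 3))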

  3. Validation of a Three-Dimensional Method for Counting and Sizing Podocytes in Whole Glomeruli

    PubMed Central

    van der Wolde, James W.; Schulze, Keith E.; Short, Kieran M.; Wong, Milagros N.; Bensley, Jonathan G.; Cullen-McEwen, Luise A.; Caruana, Georgina; Hokke, Stacey N.; Li, Jinhua; Firth, Stephen D.; Harper, Ian S.; Nikolic-Paterson, David J.; Bertram, John F.

    2016-01-01

    Podocyte depletion is sufficient for the development of numerous glomerular diseases and can be absolute (loss of podocytes) or relative (reduced number of podocytes per volume of glomerulus). Commonly used methods to quantify podocyte depletion introduce bias, whereas gold standard stereologic methodologies are time consuming and impractical. We developed a novel approach for assessing podocyte depletion in whole glomeruli that combines immunofluorescence, optical clearing, confocal microscopy, and three-dimensional analysis. We validated this method in a transgenic mouse model of selective podocyte depletion, in which we determined dose-dependent alterations in several quantitative indices of podocyte depletion. This new approach provides a quantitative tool for the comprehensive and time-efficient analysis of podocyte depletion in whole glomeruli. PMID:26975438

  4. Validation of sterilizing grade filtration.

    PubMed

    Jornitz, M W; Meltzer, T H

    2003-01-01

    Validation considerations for sterilizing-grade filters, namely 0.2 micron, changed when the FDA voiced concerns about the validity of Bacterial Challenge tests performed in the past. Such validation exercises are nowadays considered to be filter qualification. Filter validation requires more thorough analysis, especially Bacterial Challenge testing with the actual drug product under process conditions. To do so, viability testing is a necessity to determine the Bacterial Challenge test methodology. In addition to these two compulsory tests, other evaluations such as extractables, adsorption and chemical compatibility tests should be considered. PDA Technical Report # 26, Sterilizing Filtration of Liquids, describes all parameters and aspects required for the comprehensive validation of filters. The report is a most helpful tool for validation of liquid filters used in the biopharmaceutical industry. It sets the cornerstones of validation requirements and other filtration considerations.

  5. Methodologies for Verification and Validation of Space Launch System (SLS) Structural Dynamic Models

    NASA Technical Reports Server (NTRS)

    Coppolino, Robert N.

    2018-01-01

    Responses to challenges associated with verification and validation (V&V) of Space Launch System (SLS) structural dynamics models are presented in this paper. Four methodologies addressing specific requirements for V&V are discussed. (1) Residual Mode Augmentation (RMA), which has gained acceptance by various principals in the NASA community, defines efficient and accurate FEM modal sensitivity models that are useful in test-analysis correlation and reconciliation and parametric uncertainty studies. (2) Modified Guyan Reduction (MGR) and Harmonic Reduction (HR, introduced in 1976), developed to remedy difficulties encountered with the widely used Classical Guyan Reduction (CGR) method, are presented. MGR and HR are particularly relevant for estimation of "body dominant" target modes of shell-type SLS assemblies that have numerous "body", "breathing" and local component constituents. Realities associated with configuration features and "imperfections" cause "body" and "breathing" mode characteristics to mix resulting in a lack of clarity in the understanding and correlation of FEM- and test-derived modal data. (3) Mode Consolidation (MC) is a newly introduced procedure designed to effectively "de-feature" FEM and experimental modes of detailed structural shell assemblies for unambiguous estimation of "body" dominant target modes. Finally, (4) Experimental Mode Verification (EMV) is a procedure that addresses ambiguities associated with experimental modal analysis of complex structural systems. Specifically, EMV directly separates well-defined modal data from spurious and poorly excited modal data employing newly introduced graphical and coherence metrics.

  6. A methodology to estimate representativeness of LAI station observation for validation: a case study with Chinese Ecosystem Research Network (CERN) in situ data

    NASA Astrophysics Data System (ADS)

    Xu, Baodong; Li, Jing; Liu, Qinhuo; Zeng, Yelu; Yin, Gaofei

    2014-11-01

    Leaf Area Index (LAI) is known as a key vegetation biophysical variable. To effectively use remote sensing LAI products in various disciplines, it is critical to understand their accuracy. The common method for the validation of LAI products is first to establish an empirical relationship between field data and high-resolution imagery in order to derive LAI maps, and then to aggregate the high-resolution LAI maps to match the moderate-resolution LAI products. This method is suited only to small regions, and its measurement frequency is limited. Therefore, continuous LAI observation datasets from ground station networks are important for the validation of multi-temporal LAI products. However, because of the scale mismatch between the point observation at a ground station and the pixel observation of a product, direct comparison introduces scale error. It is therefore necessary to evaluate the representativeness of a ground station measurement within the pixel scale of the product for a reasonable validation. In this paper, a case study with Chinese Ecosystem Research Network (CERN) in situ data was used to introduce a methodology for estimating the representativeness of LAI station observations for validating LAI products. We first analyzed the indicators used to evaluate observation representativeness, and then graded the station measurement data. Finally, the LAI measurement data that can represent the pixel scale were used to validate the MODIS, GLASS and GEOV1 LAI products. The results show that the best agreement is reached between GLASS and GEOV1, while the lowest uncertainty is achieved by GEOV1, followed by GLASS and MODIS. We conclude that ground station measurement data can validate multi-temporal LAI products objectively based on the evaluation indicators of station observation representativeness, which can also improve the reliability of the validation of remote sensing products.

  7. Cerebrospinal fluid volume measurements in hydrocephalic rats.

    PubMed

    Basati, Sukhraaj; Desai, Bhargav; Alaraj, Ali; Charbel, Fady; Linninger, Andreas

    2012-10-01

    Object: Experimental data about the evolution of intracranial volume and pressure in cases of hydrocephalus are limited due to the lack of available monitoring techniques. In this study, the authors validate intracranial CSF volume measurements within the lateral ventricle, while simultaneously using impedance sensors and pressure transducers in hydrocephalic animals. Methods: A volume sensor was fabricated and connected to a catheter that was used as a shunt to withdraw CSF. In vitro bench-top calibration experiments were created to provide data for the animal experiments and to validate the sensors. To validate the measurement technique in a physiological system, hydrocephalus was induced in weanling rats by kaolin injection into the cisterna magna. At 28 days after induction, the sensor was implanted into the lateral ventricles. After sealing the skull using dental cement, an acute CSF drainage/infusion protocol consisting of 4 sequential phases was performed with a pump. Implant location was confirmed via radiography using intraventricular iohexol contrast administration. Results: Controlled CSF shunting in vivo with hydrocephalic rats resulted in precise and accurate sensor measurements (r = 0.98). Shunting resulted in a 17.3% maximum measurement error between measured volume and actual volume as assessed by a Bland-Altman plot. A secondary outcome confirmed that both ventricular volume and intracranial pressure decreased during CSF shunting and increased during infusion. Ventricular enlargement consistent with successful hydrocephalus induction was confirmed using imaging, as well as postmortem. These results indicate that volume monitoring is feasible for clinical cases of hydrocephalus. Conclusions: This work marks a departure from traditional shunting systems currently used to treat hydrocephalus. The overall clinical application is to provide alternative monitoring and treatment options for patients. Future work includes development and testing of a chronic
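
    The agreement analysis mentioned in the results can be reproduced in a few lines; the sketch below computes the bias, 95% limits of agreement and maximum percentage error between sensor-measured and pump-controlled volumes, using hypothetical numbers rather than the study's data.

        # Illustrative sketch: Bland-Altman style agreement between measured and
        # actual (pump-controlled) volumes. All values are hypothetical.
        import numpy as np

        def bland_altman(measured, actual):
            measured, actual = np.asarray(measured, float), np.asarray(actual, float)
            diff = measured - actual
            bias = diff.mean()
            loa = 1.96 * diff.std(ddof=1)                    # half-width of 95% limits
            max_pct_error = np.max(np.abs(diff) / actual) * 100.0
            return bias, (bias - loa, bias + loa), max_pct_error

        if __name__ == "__main__":
            actual = [0.10, 0.20, 0.30, 0.40, 0.50]          # mL withdrawn/infused
            measured = [0.11, 0.19, 0.33, 0.42, 0.47]
            bias, limits, max_err = bland_altman(measured, actual)
            print(f"bias={bias:+.3f} mL, 95% LoA=({limits[0]:.3f}, {limits[1]:.3f}) mL, "
                  f"max error={max_err:.1f}%")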

  8. A Finite-Volume "Shaving" Method for Interfacing NASA/DAO's Physical Space Statistical Analysis System to the Finite-Volume GCM with a Lagrangian Control-Volume Vertical Coordinate

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; DaSilva, Arlindo; Atlas, Robert (Technical Monitor)

    2001-01-01

    Toward the development of a finite-volume Data Assimilation System (fvDAS), a consistent finite-volume methodology is developed for interfacing the NASA/DAO's Physical Space Statistical Analysis System (PSAS) to the joint NASA/NCAR finite volume CCM3 (fvCCM3). To take advantage of the Lagrangian control-volume vertical coordinate of the fvCCM3, a novel "shaving" method is applied to the lowest few model layers to reflect the surface pressure changes as implied by the final analysis. Analysis increments (from PSAS) to the upper air variables are then consistently put onto the Lagrangian layers as adjustments to the volume-mean quantities during the analysis cycle. This approach is demonstrated to be superior to the conventional method of using independently computed "tendency terms" for surface pressure and upper air prognostic variables.

  9. VOLUMNECT: measuring volumes with Kinect

    NASA Astrophysics Data System (ADS)

    Quintino Ferreira, Beatriz; Griné, Miguel; Gameiro, Duarte; Costeira, João Paulo; Sousa Santos, Beatriz

    2014-03-01

    This article presents a solution to volume measurement for object packing using 3D cameras (such as the Microsoft Kinect™). We target application scenarios, such as warehouses or distribution and logistics companies, where it is important to promptly compute package volumes, yet high accuracy is not pivotal. Our application automatically detects cuboid objects using the depth camera data and computes their volumes, sorting them to allow space optimization. The proposed methodology applies simple computer vision and image processing methods to a point cloud, such as connected components, morphological operations and the Harris corner detector, producing encouraging results, namely an accuracy in volume measurement of 8 mm. Aspects that can be further improved are identified; nevertheless, the current solution is already promising, turning out to be cost effective for the envisaged scenarios.
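
    A stripped-down version of the idea (segment the object above a known supporting surface in a depth map, take its footprint and median height, and multiply the three dimensions) can be sketched as follows; the synthetic depth map, the fixed metres-per-pixel scale and the omission of the corner-detection step are simplifying assumptions, so this is not the authors' pipeline.

        # Simplified sketch: estimate a box volume from a depth map of a box on a
        # flat surface. Depth map, pixel scale and box size are synthetic.
        import numpy as np

        def box_volume_from_depth(depth_m, table_depth_m, metres_per_px, min_height=0.01):
            height_map = table_depth_m - depth_m             # positive above the table
            mask = height_map > min_height
            if not mask.any():
                return 0.0
            rows = np.where(mask.any(axis=1))[0]
            cols = np.where(mask.any(axis=0))[0]
            length = (rows[-1] - rows[0] + 1) * metres_per_px
            width = (cols[-1] - cols[0] + 1) * metres_per_px
            height = float(np.median(height_map[mask]))
            return length * width * height                   # m^3

        if __name__ == "__main__":
            depth = np.full((240, 320), 1.20)                # table plane at 1.20 m
            depth[80:160, 100:220] = 1.05                    # 15 cm tall box on the table
            vol = box_volume_from_depth(depth, table_depth_m=1.20, metres_per_px=0.002)
            print(f"estimated volume = {vol * 1000:.1f} litres")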

  10. Validation of intracranial area as a surrogate measure of intracranial volume when using clinical MRI.

    PubMed

    Nandigam, R N Kaveer; Chen, Yu-Wei; Gurol, Mahmut E; Rosand, Jonathan; Greenberg, Steven M; Smith, Eric E

    2007-01-01

    We sought to determine whether mid-sagittal intracranial area (ICA) is a valid surrogate of intracranial volume (ICV) when using retrospective data with relatively thick (6-7 mm) sagittal slices. Data were retrospectively analyzed from 47 subjects who had two MRI scans taken at least nine months apart. Twenty-three subjects had manual segmentation of ICV on the T2-weighted sequence for comparison. Intraclass correlation coefficient (ICC) for intraobserver, interobserver, and intraobserver scan-rescan comparisons were 0.96, 0.97 and 0.95. Pearson correlation coefficients between ICV and ICA, averaging the cumulative 1, 2, 3, and 4 most midline slices, were 0.89, 0.94, 0.93, and 0.95. There was a significant marginal increase in explained variance of ICV by measuring two, rather than one, slices (P= 0.001). These data suggest that ICA, even measured without high-resolution imaging, is a reasonable substitute for ICV.

  11. In vitro validation of a Pitot-based flow meter for the measurement of respiratory volume and flow in large animal anaesthesia.

    PubMed

    Moens, Yves P S; Gootjes, Peter; Ionita, Jean-Claude; Heinonen, Erkki; Schatzmann, Urs

    2009-05-01

    To remodel and validate commercially available monitors and their Pitot tube-based flow sensors for use in large animals, using in vitro techniques. Prospective, in vitro experiment. Both the original and the remodelled sensor were studied with a reference flow generator. Measurements were taken of the static flow-pressure relationship and linearity of the flow signal. Sensor airway resistance was calculated. Following recalibration of the host monitor, volumes ranging from 1 to 7 L were generated by a calibration syringe, and bias and precision of spirometric volume was determined. Where manual recalibration was not available, a conversion factor for volume measurement was determined. The influence of gas composition mixture and peak flow on the conversion factor was studied. Both the original and the remodelled sensor showed similar static flow-pressure relationships and linearity of the flow signal. Mean bias (%) of displayed values compared with the reference volume of 3, 5 and 7 L varied between -0.4% and +2.4%, and this was significantly smaller than that for 1 L (4.8% to +5.0%). Conversion factors for 3, 5 and 7 L were very similar (mean 6.00 ± 0.2, range 5.91-6.06) and were not significantly influenced by the gas mixture used. Increasing peak flow caused a small decrease in the conversion factor. Volume measurement error and conversion factors for inspiration and expiration were close to identity. The combination of the host monitor with the remodelled flow sensor allowed accurate in vitro measurement of flows and volumes in a range expected during large animal anaesthesia. This combination has potential as a reliable spirometric monitor for use during large animal anaesthesia.
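
    The bench arithmetic reported here (percentage bias against calibration-syringe volumes, and a multiplicative conversion factor where recalibration is unavailable) is illustrated below with invented readings; the numbers are chosen only so that the factor comes out near the ~6.0 reported in the abstract.

        # Illustrative sketch: percentage bias of displayed volumes and a volume
        # conversion factor from calibration-syringe references. Data are invented.
        import numpy as np

        def percent_bias(displayed_ml, reference_ml):
            displayed_ml = np.asarray(displayed_ml, float)
            reference_ml = np.asarray(reference_ml, float)
            return 100.0 * (displayed_ml - reference_ml) / reference_ml

        def conversion_factor(raw_readings, reference_ml):
            """Mean factor that maps raw monitor readings to reference volumes (mL)."""
            return float(np.mean(np.asarray(reference_ml, float) /
                                 np.asarray(raw_readings, float)))

        if __name__ == "__main__":
            reference = [3000.0, 5000.0, 7000.0]             # syringe volumes, mL
            displayed = [3010.0, 5090.0, 7120.0]             # monitor display, mL
            raw = [505.0, 835.0, 1165.0]                     # uncalibrated readings
            print("bias %:", np.round(percent_bias(displayed, reference), 2))
            print("conversion factor:", round(conversion_factor(raw, reference), 2))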

  12. High-Order Moving Overlapping Grid Methodology in a Spectral Element Method

    NASA Astrophysics Data System (ADS)

    Merrill, Brandon E.

    A moving overlapping mesh methodology that achieves spectral accuracy in space and up to second-order accuracy in time is developed for solution of unsteady incompressible flow equations in three-dimensional domains. The targeted applications are in aerospace and mechanical engineering domains and involve problems in turbomachinery, rotary aircrafts, wind turbines and others. The methodology is built within the dual-session communication framework initially developed for stationary overlapping meshes. The methodology employs semi-implicit spectral element discretization of equations in each subdomain and explicit treatment of subdomain interfaces with spectrally-accurate spatial interpolation and high-order accurate temporal extrapolation, and requires few, if any, iterations, yet maintains the global accuracy and stability of the underlying flow solver. Mesh movement is enabled through the Arbitrary Lagrangian-Eulerian formulation of the governing equations, which allows for prescription of arbitrary velocity values at discrete mesh points. The stationary and moving overlapping mesh methodologies are thoroughly validated using two- and three-dimensional benchmark problems in laminar and turbulent flows. The spatial and temporal global convergence, for both methods, is documented and is in agreement with the nominal order of accuracy of the underlying solver. Stationary overlapping mesh methodology was validated to assess the influence of long integration times and inflow-outflow global boundary conditions on the performance. In a turbulent benchmark of fully-developed turbulent pipe flow, the turbulent statistics are validated against the available data. Moving overlapping mesh simulations are validated on the problems of two-dimensional oscillating cylinder and a three-dimensional rotating sphere. The aerodynamic forces acting on these moving rigid bodies are determined, and all results are compared with published data. Scaling tests, with both methodologies

  13. Airport Landside. Volume I. Planning Guide.

    DOT National Transportation Integrated Search

    1982-01-01

    This volume describes a methodology for performing airport landside planning by applying the Airport Landside Simulation Model (ALSIM) developed by TSC. For this analysis, the airport landside is defined as extending from the airport boundary to the ...

  14. Evaluation of the Effect of the Volume Throughput and Maximum Flux of Low-Surface-Tension Fluids on Bacterial Penetration of 0.2 Micron-Rated Filters during Process-Specific Filter Validation Testing.

    PubMed

    Folmsbee, Martha

    2015-01-01

    Approximately 97% of filter validation tests result in the demonstration of absolute retention of the test bacteria, and thus sterile filter validation failure is rare. However, while Brevundimonas diminuta (B. diminuta) penetration of sterilizing-grade filters is rarely detected, the observation that some fluids (such as vaccines and liposomal fluids) may lead to an increased incidence of bacterial penetration of sterilizing-grade filters by B. diminuta has been reported. The goal of the following analysis was to identify important drivers of filter validation failure in these rare cases. The identification of these drivers will hopefully serve the purpose of assisting in the design of commercial sterile filtration processes with a low risk of filter validation failure for vaccine, liposomal, and related fluids. Filter validation data for low-surface-tension fluids was collected and evaluated with regard to the effect of bacterial load (CFU/cm(2)), bacterial load rate (CFU/min/cm(2)), volume throughput (mL/cm(2)), and maximum filter flux (mL/min/cm(2)) on bacterial penetration. The data set (∼1162 individual filtrations) included all instances of process-specific filter validation failures performed at Pall Corporation, including those using other filter media, but did not include all successful retentive filter validation bacterial challenges. It was neither practical nor necessary to include all filter validation successes worldwide (Pall Corporation) to achieve the goals of this analysis. The percentage of failed filtration events for the selected total master data set was 27% (310/1162). Because it is heavily weighted with penetration events, this percentage is considerably higher than the actual rate of failed filter validations, but, as such, facilitated a close examination of the conditions that lead to filter validation failure. In agreement with our previous reports, two of the significant drivers of bacterial penetration identified were the total

  15. Validation of On-Orbit Methodology for the Assessment of Cardiac Function and Changes in the Circulating Volume Using "Braslet-M" Occlusion Cuffs

    NASA Technical Reports Server (NTRS)

    Hamilton, D. R.; Sargsyan, A. E.; Garcia, K. M.; Ebert, D.; Feiveson, A. H.; Alferova, I. V.; Dulchavsky, S. A.; Matveev, V. P.; Bogomolov, V. V.; Duncan, J. M.

    2011-01-01

    BACKGROUND: The transition to microgravity eliminates the hydrostatic gradients in the vascular system. The resulting fluid redistribution commonly manifests as facial edema, engorgement of the external neck veins, nasal congestion, and headache. This experiment examined the responses to modified Valsalva and Mueller maneuvers as measured by cardiac and vascular ultrasound in a baseline microgravity steady state, and under the influence of thigh occlusion cuffs (Braslet cuffs). METHODS: Nine International Space Station crewmember subjects (Expeditions 16 - 20) were examined in 15 experiment sessions 101 ± 46 days after launch (mean ± SD; 33 - 185). 27 cardiac and vascular parameters were obtained under three respiratory conditions (baseline, Valsalva, and Mueller) before and after tightening of the Braslet cuffs for a total of 162 data points per session. The quality of cardiac and vascular ultrasound examinations was assured through remote monitoring and guidance by Investigators from the NASA Telescience Center in Houston, TX, USA. RESULTS: Fourteen of the 81 measured conditions were significantly different with Braslet application and were apparently related to cardiac preload reduction or increase in the venous volume sequestered in the lower extremity. These changes represented 10 of the 27 parameters measured. In secondary analysis, 7 of the 27 parameters were found to respond differently to respiratory maneuvers depending on the presence or absence of thigh compression, with a total of 11 differences. CONCLUSIONS: Acute application of Braslet occlusion cuffs causes lower extremity fluid sequestration and exerts proportionate measurable effects on cardiac performance in microgravity. Ultrasound techniques measuring the hemodynamic effects of thigh cuffs in combination with respiratory maneuvers may serve as an effective tool in determining the volume status of a cardiac or hemodynamically compromised patient in microgravity.

  16. Validated Alzheimer's Disease Risk Index (ANU-ADRI) is associated with smaller volumes in the default mode network in the early 60s.

    PubMed

    Cherbuin, Nicolas; Shaw, Marnie E; Walsh, Erin; Sachdev, Perminder; Anstey, Kaarin J

    2017-12-14

    Strong evidence is available suggesting that effective reduction of exposure to demonstrated modifiable risk factors in mid-life or before could significantly decrease the incidence of Alzheimer's disease (AD) and delay its onset. A key ingredient to achieving this goal is the reliable identification of individuals at risk well before they develop clinical symptoms. The aim of this study was to provide further neuroimaging evidence of the effectiveness of a validated tool, the ANU Alzheimer's Disease Risk Index (ANU-ADRI), for the assessment of future risk of cognitive decline. Participants were 461 (60-64 years, 48% female) community-living individuals free of dementia at baseline. Associations were tested between risk estimates obtained with the ANU-ADRI, total and regional brain volumes (including in the default mode network, DMN) measured at the same assessment, and diagnosis of MCI/dementia over a 12-year follow-up. Higher risk estimates on the ANU-ADRI were associated with lower cortical gray matter volume, particularly in the DMN. Importantly, the difference between participants with high and low risk scores explained 7-9% of the observed difference in gray matter volume. In this sample, every one additional risk point on the ANU-ADRI was associated with an 8% increased risk of developing MCI/dementia over a 12-year follow-up, and this association was partly mediated by a sub-region of the DMN. Risk of cognitive decline assessed with a validated instrument is associated with gray matter volume, particularly in the DMN, a region known to be implicated in the pathological process of the disease.

  17. When is good, good enough? Methodological pragmatism for sustainable guideline development.

    PubMed

    Browman, George P; Somerfield, Mark R; Lyman, Gary H; Brouwers, Melissa C

    2015-03-06

    Continuous escalation in methodological and procedural rigor for evidence-based processes in guideline development is associated with increasing costs and production delays that threaten sustainability. While health research methodologists are appropriately responsible for promoting increasing rigor in guideline development, guideline sponsors are responsible for funding such processes. This paper acknowledges that other stakeholders in addition to methodologists should be more involved in negotiating trade-offs between methodological procedures and efficiency in guideline production to produce guidelines that are 'good enough' to be trustworthy and affordable under specific circumstances. The argument for reasonable methodological compromise to meet practical circumstances is consistent with current implicit methodological practice. This paper proposes a conceptual tool as a framework to be used by different stakeholders in negotiating, and explicitly reporting, reasonable compromises for trustworthy as well as cost-worthy guidelines. The framework helps fill a transparency gap in how methodological choices in guideline development are made. The principle, 'when good is good enough' can serve as a basis for this approach. The conceptual tool 'Efficiency-Validity Methodological Continuum' acknowledges trade-offs between validity and efficiency in evidence-based guideline development and allows for negotiation, guided by methodologists, of reasonable methodological compromises among stakeholders. Collaboration among guideline stakeholders in the development process is necessary if evidence-based guideline development is to be sustainable.

  18. A simple randomisation procedure for validating discriminant analysis: a methodological note.

    PubMed

    Wastell, D G

    1987-04-01

    Because the goal of discriminant analysis (DA) is to optimise classification, it designedly exaggerates between-group differences. This bias complicates validation of DA. Jack-knifing has been used for validation but is inappropriate when stepwise selection (SWDA) is employed. A simple randomisation test is presented which is shown to give correct decisions for SWDA. The general superiority of randomisation tests over orthodox significance tests is discussed. Current work on non-parametric methods of estimating the error rates of prediction rules is briefly reviewed.
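
    The principle of the randomisation check can be illustrated generically: refit the discriminant rule after repeatedly permuting the group labels and compare the observed apparent accuracy with the permutation distribution. The sketch below uses ordinary linear discriminant analysis rather than the stepwise procedure discussed in the paper, so it demonstrates the idea rather than the specific method.

        # Generic sketch of a permutation (randomisation) test for a discriminant rule.
        # Uses plain LDA, not stepwise selection; data are simulated.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def permutation_p_value(X, y, n_perm=199, seed=0):
            rng = np.random.default_rng(seed)
            observed = LinearDiscriminantAnalysis().fit(X, y).score(X, y)
            count = 0
            for _ in range(n_perm):
                y_perm = rng.permutation(y)                  # break the group structure
                acc = LinearDiscriminantAnalysis().fit(X, y_perm).score(X, y_perm)
                count += acc >= observed
            return observed, (count + 1) / (n_perm + 1)

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            X = np.vstack([rng.normal(0.0, 1.0, (30, 4)), rng.normal(0.8, 1.0, (30, 4))])
            y = np.array([0] * 30 + [1] * 30)
            obs, p = permutation_p_value(X, y)
            print(f"observed accuracy = {obs:.2f}, permutation p = {p:.3f}")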

  19. Preliminary Validation of Composite Material Constitutive Characterization

    Treesearch

    John G. Michopoulos; Athanasios lliopoulos; John C. Hermanson; Adrian C. Orifici; Rodney S. Thomson

    2012-01-01

    This paper describes the preliminary results of an effort to validate a methodology developed for composite material constitutive characterization. This methodology involves using massive amounts of data produced from multiaxially tested coupons via a 6-DoF robotic system called NRL66.3, developed at the Naval Research Laboratory. The testing is followed by...

  20. Note: Methodology for the analysis of Bluetooth gateways in an implemented scatternet.

    PubMed

    Etxaniz, J; Monje, P M; Aranguren, G

    2014-03-01

    This Note introduces a novel methodology to analyze the time performance of Bluetooth gateways in multi-hop networks, known as scatternets. The methodology is focused on distinguishing between the processing time and the time that each communication between nodes takes along an implemented scatternet. This technique is not only valid for Bluetooth networks but also for other wireless networks that offer access to their middleware in order to include beacons in the operation of the nodes. We show in this Note the results of the tests carried out on a Bluetooth scatternet in order to highlight the reliability and effectiveness of the methodology. The results also validate this technique showing convergence in the results when subtracting the time for the beacons from the delay measurements.
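
    The decomposition itself is simple bookkeeping once beacon timestamps are available at each node: per-node processing time is the difference between a message's exit and entry timestamps, and per-link time is the gap between one node's exit and the next node's entry. The sketch below uses hypothetical timestamps and node names, not measurements from the implemented scatternet.

        # Illustrative sketch: split an end-to-end multi-hop delay into per-node
        # processing times and per-link transfer times from beacon timestamps.
        def decompose_delays(beacons):
            """beacons: list of (node, t_in_ms, t_out_ms) along the path."""
            processing = [(node, t_out - t_in) for node, t_in, t_out in beacons]
            links = [(beacons[i][0] + "->" + beacons[i + 1][0],
                      beacons[i + 1][1] - beacons[i][2]) for i in range(len(beacons) - 1)]
            total = beacons[-1][2] - beacons[0][1]
            return processing, links, total

        if __name__ == "__main__":
            path = [("master", 0.0, 1.8), ("gateway", 9.5, 12.1), ("slave", 20.3, 21.0)]
            proc, links, total = decompose_delays(path)
            print("processing (ms):", proc)
            print("link delays (ms):", links)
            print("end-to-end (ms):", total)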

  1. Quantification of right ventricular volumes and function by real time three-dimensional echocardiographic longitudinal axial plane method: validation in the clinical setting.

    PubMed

    Endo, Yuka; Maddukuri, Prasad V; Vieira, Marcelo L C; Pandian, Natesa G; Patel, Ayan R

    2006-11-01

    Measurement of right ventricular (RV) volumes and right ventricular ejection fraction (RVEF) by three-dimensional echocardiographic (3DE) short-axis disc summation method has been validated in multiple studies. However, in some patients, short-axis images are of insufficient quality for accurate tracing of the RV endocardial border. This study examined the accuracy of long-axis analysis in multiple planes (longitudinal axial plane method) for assessment of RV volumes and RVEF. 3DE images were analyzed in 40 subjects with a broad range of RV function. RV end-diastolic (RVEDV) and end-systolic volumes (RVESV) and RVEF were calculated by both short-axis disc summation method and longitudinal axial plane method. Excellent correlation was obtained between the two methods for RVEDV, RVESV, and RVEF (r = 0.99, 0.99, 0.94, respectively; P < 0.0001 for all comparisons). 3DE longitudinal-axis analysis is a promising technique for the evaluation of RV function, and may provide an alternative method of assessment in patients with suboptimal short-axis images.
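
    For reference, the disc (Simpson) summation step on which both analyses build is just the sum of traced cross-sectional areas multiplied by the slice thickness; the sketch below applies it to hypothetical end-diastolic and end-systolic tracings and derives an ejection fraction, without attempting the longitudinal axial plane reconstruction itself.

        # Illustrative sketch: disc-summation volume and ejection fraction from
        # traced slice areas. Areas and slice thickness are hypothetical.
        import numpy as np

        def disc_summation_volume(areas_cm2, slice_thickness_cm):
            return float(np.sum(areas_cm2) * slice_thickness_cm)   # cm^3 == mL

        if __name__ == "__main__":
            edv_areas = [2.1, 6.5, 9.8, 11.2, 10.4, 7.9, 3.6]      # cm^2 per slice
            esv_areas = [1.2, 4.0, 6.1, 7.0, 6.3, 4.4, 1.8]
            edv = disc_summation_volume(edv_areas, slice_thickness_cm=0.8)
            esv = disc_summation_volume(esv_areas, slice_thickness_cm=0.8)
            print(f"RVEDV~{edv:.1f} mL, RVESV~{esv:.1f} mL, RVEF~{100*(edv-esv)/edv:.0f}%")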

  2. Methodological Issues in Curriculum-Based Reading Assessment.

    ERIC Educational Resources Information Center

    Fuchs, Lynn S.; And Others

    1984-01-01

    Three studies involving elementary students examined methodological issues in curriculum-based reading assessment. Results indicated that (1) whereas sample duration did not affect concurrent validity, increasing duration reduced performance instability and increased performance slopes and (2) domain size was related inversely to performance slope…

  3. Validation Methods Research for Fault-Tolerant Avionics and Control Systems: Working Group Meeting, 2

    NASA Technical Reports Server (NTRS)

    Gault, J. W. (Editor); Trivedi, K. S. (Editor); Clary, J. B. (Editor)

    1980-01-01

    The validation process comprises the activities required to ensure the agreement of system realization with system specification. A preliminary validation methodology for fault-tolerant systems is documented. A general framework for a validation methodology is presented, along with a set of specific tasks intended for the validation of two specimen systems, SIFT and FTMP. Two major areas of research are identified: first, the activities required to support the ongoing development of the validation process itself, and second, the activities required to support the design, development, and understanding of fault-tolerant systems.

  4. A three-dimensional electrostatic particle-in-cell methodology on unstructured Delaunay-Voronoi grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gatsonis, Nikolaos A.; Spirkin, Anton

    2009-06-01

    The mathematical formulation and computational implementation of a three-dimensional particle-in-cell methodology on unstructured Delaunay-Voronoi tetrahedral grids is presented. The method allows simulation of plasmas in complex domains and incorporates the duality of the Delaunay-Voronoi in all aspects of the particle-in-cell cycle. Charge assignment and field interpolation weighting schemes of zero- and first-order are formulated based on the theory of long-range constraints. Electric potential and fields are derived from a finite-volume formulation of Gauss' law using the Voronoi-Delaunay dual. Boundary conditions and the algorithms for injection, particle loading, particle motion, and particle tracking are implemented for unstructured Delaunay grids. Error and sensitivity analysis examines the effects of particles/cell, grid scaling, and timestep on the numerical heating, the slowing-down time, and the deflection times. The problem of current collection by cylindrical Langmuir probes in collisionless plasmas is used for validation. Numerical results compare favorably with previous numerical and analytical solutions for a wide range of probe radius to Debye length ratios, probe potentials, and electron to ion temperature ratios. The versatility of the methodology is demonstrated with the simulation of a complex plasma microsensor, a directional micro-retarding potential analyzer that includes a low transparency micro-grid.
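
    To make the weighting schemes concrete, the sketch below shows zero-order (nearest-grid-point) and first-order (cloud-in-cell) charge assignment on a uniform one-dimensional grid. This is only the structured-grid analogue of the step described above, not the unstructured Delaunay-Voronoi formulation of the paper.

    ```python
    import numpy as np

    # Zero- and first-order charge assignment on a uniform 1-D grid
    # (illustrative analogue of the weighting schemes discussed above).
    def assign_charge(positions, charges, nodes, dx, order=1):
        rho = np.zeros_like(nodes)
        for xi, qi in zip(positions, charges):
            j = int(xi // dx)                      # node to the left of the particle
            if order == 0:                         # nearest grid point
                k = j if (xi - nodes[j]) < dx / 2 else j + 1
                rho[k] += qi / dx
            else:                                  # linear (cloud-in-cell) weights
                w = (xi - nodes[j]) / dx
                rho[j] += qi * (1.0 - w) / dx
                rho[j + 1] += qi * w / dx
        return rho

    nodes = np.linspace(0.0, 1.0, 11)
    dx = nodes[1] - nodes[0]
    rho = assign_charge([0.23, 0.77], [1.0, -1.0], nodes, dx, order=1)
    ```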

  5. Use of lean and six sigma methodology to improve operating room efficiency in a high-volume tertiary-care academic medical center.

    PubMed

    Cima, Robert R; Brown, Michael J; Hebl, James R; Moore, Robin; Rogers, James C; Kollengode, Anantha; Amstutz, Gwendolyn J; Weisbrod, Cheryl A; Narr, Bradly J; Deschamps, Claude

    2011-07-01

    Operating rooms (ORs) are resource-intense and costly hospital units. Maximizing OR efficiency is essential to maintaining an economically viable institution. OR efficiency projects often focus on a limited number of ORs or cases. Efforts across an entire OR suite have not been reported. Lean and Six Sigma methodologies were developed in the manufacturing industry to increase efficiency by eliminating non-value-added steps. We applied Lean and Six Sigma methodologies across an entire surgical suite to improve efficiency. A multidisciplinary surgical process improvement team constructed a value stream map of the entire surgical process from the decision for surgery to discharge. Each process step was analyzed in 3 domains, ie, personnel, information processed, and time. Multidisciplinary teams addressed 5 work streams to increase value at each step: minimizing volume variation; streamlining the preoperative process; reducing nonoperative time; eliminating redundant information; and promoting employee engagement. Process improvements were implemented sequentially in surgical specialties. Key performance metrics were collected before and after implementation. Across 3 surgical specialties, process redesign resulted in substantial improvements in on-time starts and reduction in number of cases past 5 pm. Substantial gains were achieved in nonoperative time, staff overtime, and ORs saved. These changes resulted in substantial increases in margin/OR/day. Use of Lean and Six Sigma methodologies increased OR efficiency and financial performance across an entire operating suite. Process mapping, leadership support, staff engagement, and sharing performance metrics are keys to enhancing OR efficiency. The performance gains were substantial, sustainable, positive financially, and transferrable to other specialties. Copyright © 2011 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  6. Removal of BCG artifacts from EEG recordings inside the MR scanner: a comparison of methodological and validation-related aspects.

    PubMed

    Vanderperren, Katrien; De Vos, Maarten; Ramautar, Jennifer R; Novitskiy, Nikolay; Mennes, Maarten; Assecondi, Sara; Vanrumste, Bart; Stiers, Peter; Van den Bergh, Bea R H; Wagemans, Johan; Lagae, Lieven; Sunaert, Stefan; Van Huffel, Sabine

    2010-04-15

    Multimodal approaches are of growing interest in the study of neural processes. To this end much attention has been paid to the integration of electroencephalographic (EEG) and functional magnetic resonance imaging (fMRI) data because of their complementary properties. However, the simultaneous acquisition of both types of data causes serious artifacts in the EEG, with amplitudes that may be much larger than those of EEG signals themselves. The most challenging of these artifacts is the ballistocardiogram (BCG) artifact, caused by pulse-related electrode movements inside the magnetic field. Despite numerous efforts to find a suitable approach to remove this artifact, still a considerable discrepancy exists between current EEG-fMRI studies. This paper attempts to clarify several methodological issues regarding the different approaches with an extensive validation based on event-related potentials (ERPs). More specifically, Optimal Basis Set (OBS) and Independent Component Analysis (ICA) based methods were investigated. Their validation was not only performed with measures known from previous studies on the average ERPs, but most attention was focused on task-related measures, including their use on trial-to-trial information. These more detailed validation criteria enabled us to find a clearer distinction between the most widely used cleaning methods. Both OBS and ICA proved to be able to yield equally good results. However, ICA methods needed more parameter tuning, thereby making OBS more robust and easy to use. Moreover, applying OBS prior to ICA can optimize the data quality even more, but caution is recommended since the effect of the additional ICA step may be strongly subject-dependent. Copyright 2010 Elsevier Inc. All rights reserved.
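
    A minimal ICA-based cleaning sketch is given below, assuming a simultaneously recorded cardiac reference channel and using scikit-learn's FastICA; it is not the specific OBS or ICA pipelines compared in the study, and the correlation threshold is an arbitrary illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Decompose EEG into independent components, zero out those that correlate
    # strongly with a cardiac reference, and project back to channel space.
    # eeg: (n_samples, n_channels); ecg: cardiac reference of length n_samples.
    def remove_bcg_ica(eeg, ecg, corr_threshold=0.3, n_components=None):
        ica = FastICA(n_components=n_components, random_state=0)
        sources = ica.fit_transform(eeg)               # (n_samples, n_components)
        corrs = [abs(np.corrcoef(sources[:, k], ecg)[0, 1])
                 for k in range(sources.shape[1])]
        artifact = [k for k, c in enumerate(corrs) if c > corr_threshold]
        sources[:, artifact] = 0.0                     # suppress BCG-related sources
        return ica.inverse_transform(sources)

    # cleaned = remove_bcg_ica(eeg, ecg)
    ```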

  7. Training effectiveness assessment: Methodological problems and issues

    NASA Technical Reports Server (NTRS)

    Cross, Kenneth D.

    1992-01-01

    The U.S. military uses a large number of simulators to train and sustain the flying skills of helicopter pilots. Despite the enormous resources required to purchase, maintain, and use those simulators, little effort has been expended in assessing their training effectiveness. One reason for this is the lack of an evaluation methodology that yields comprehensive and valid data at a practical cost. Some of the methodological problems and issues that arise in assessing simulator training effectiveness, as well as problems with the classical transfer-of-learning paradigm, are discussed.

  8. Validation of urban freeway models.

    DOT National Transportation Integrated Search

    2015-01-01

    This report describes the methodology, data, conclusions, and enhanced models regarding the validation of two sets of models developed in the Strategic Highway Research Program 2 (SHRP 2) Reliability Project L03, Analytical Procedures for Determining...

  9. Locomotive crashworthiness research : volume 1 : model development and validation

    DOT National Transportation Integrated Search

    1995-07-01

    This report is the first of four volumes concerning a study to investigate the costs and benefits of equipping locomotives with various crashworthiness features beyond those currently specified by the Association of American Railroads S-580 specifica...

  10. Two-Nucleon Systems in a Finite Volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briceno, Raul

    2014-11-01

    I present the formalism and methodology for determining the nucleon-nucleon scattering parameters from the finite volume spectra obtained from lattice quantum chromodynamics calculations. Using the recently derived energy quantization conditions and the experimentally determined scattering parameters, the bound state spectra for finite volume systems with overlap with the 3S1-3D1 channel are predicted for a range of volumes. It is shown that the extractions of the infinite-volume deuteron binding energy and the low-energy scattering parameters, including the S-D mixing angle, are possible from lattice QCD calculations of two-nucleon systems with boosts of |P| <= 2pi sqrt{3}/L in volumes with spatial extents L satisfying fm <~ L <~ 14 fm.

  11. A normative price for a manufactured product: The SAMICS methodology. Volume 2: Analysis

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1979-01-01

    The Solar Array Manufacturing Industry Costing Standards provide standard formats, data, assumptions, and procedures for determining the price a hypothetical solar array manufacturer would have to be able to obtain in the market to realize a specified after-tax rate of return on equity for a specified level of production. The methodology and its theoretical background are presented. The model is sufficiently general to be used in any production-line manufacturing environment. Implementation of this methodology by the Solar Array Manufacturing Industry Simulation computer program is discussed.

  12. Appendix B: Methodology. [2014 Teacher Prep Review

    ERIC Educational Resources Information Center

    Greenberg, Julie; Walsh, Kate; McKee, Arthur

    2014-01-01

    The "NCTQ Teacher Prep Review" evaluates the quality of programs that provide preservice preparation of public school teachers. This appendix describes the scope, methodology, timeline, staff, and standards involved in the production of "Teacher Prep Review 2014." Data collection, validation, and analysis for the report are…

  13. Developing and validating a novel metabolic tumor volume risk stratification system for supplementing non-small cell lung cancer staging.

    PubMed

    Pu, Yonglin; Zhang, James X; Liu, Haiyan; Appelbaum, Daniel; Meng, Jianfeng; Penney, Bill C

    2018-06-07

    We hypothesized that whole-body metabolic tumor volume (MTVwb) could be used to supplement non-small cell lung cancer (NSCLC) staging due to its independent prognostic value. The goal of this study was to develop and validate a novel MTVwb risk stratification system to supplement NSCLC staging. We performed an IRB-approved retrospective review of 935 patients with NSCLC and FDG-avid tumor divided into modeling and validation cohorts based on the type of PET/CT scanner used for imaging. In addition, sensitivity analysis was conducted by dividing the patient population into two randomized cohorts. Cox regression and Kaplan-Meier survival analyses were performed to determine the prognostic value of the MTVwb risk stratification system. The cut-off values (10.0, 53.4 and 155.0 mL) between the MTVwb quartiles of the modeling cohort were applied to both the modeling and validation cohorts to determine each patient's MTVwb risk stratum. The survival analyses showed that a lower MTVwb risk stratum was associated with better overall survival (all p < 0.01), independent of TNM stage together with other clinical prognostic factors, and the discriminatory power of the MTVwb risk stratification system, as measured by Gönen and Heller's concordance index, was not significantly different from that of TNM stage in both cohorts. Also, the prognostic value of the MTVwb risk stratum was robust in the two randomized cohorts. The discordance rate between the MTVwb risk stratum and TNM stage or substage was 45.1% in the modeling cohort and 50.3% in the validation cohort. This study developed and validated a novel MTVwb risk stratification system, which has prognostic value independent of the TNM stage and other clinical prognostic factors in NSCLC, suggesting that it could be used for further NSCLC pretreatment assessment and for refining treatment decisions in individual patients.
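
    The quartile-based stratification can be sketched as below, reusing the reported cut-offs (10.0, 53.4 and 155.0 mL); the column names and the use of the lifelines package for Kaplan-Meier curves are assumptions for illustration, not the study's software.

    ```python
    import numpy as np
    from lifelines import KaplanMeierFitter

    CUTOFFS_ML = [10.0, 53.4, 155.0]   # between MTVwb quartiles of the modeling cohort

    def assign_stratum(mtvwb_ml):
        """Return MTVwb risk stratum 1-4 (1 = smallest tumor burden)."""
        return int(np.digitize(mtvwb_ml, CUTOFFS_ML)) + 1

    def km_by_stratum(df):
        """df columns assumed: mtvwb_ml, os_months, death_observed (0/1)."""
        kmf = KaplanMeierFitter()
        strata = df["mtvwb_ml"].map(assign_stratum)
        for stratum, grp in df.groupby(strata):
            kmf.fit(grp["os_months"], event_observed=grp["death_observed"],
                    label=f"MTVwb stratum {stratum}")
            kmf.plot_survival_function()
    ```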

  14. Assessing cross-cultural validity of scales: a methodological review and illustrative example.

    PubMed

    Beckstead, Jason W; Yang, Chiu-Yueh; Lengacher, Cecile A

    2008-01-01

    In this article, we assessed the cross-cultural validity of the Women's Role Strain Inventory (WRSI), a multi-item instrument that assesses the degree of strain experienced by women who juggle the roles of working professional, student, wife and mother. Cross-cultural validity is evinced by demonstrating the measurement invariance of the WRSI. Measurement invariance is the extent to which items of multi-item scales function in the same way across different samples of respondents. We assessed measurement invariance by comparing a sample of working women in Taiwan with a similar sample from the United States. Structural equation models (SEMs) were employed to determine the invariance of the WRSI and to estimate the unique validity variance of its items. This article also provides nurse-researchers with the necessary underlying measurement theory and illustrates how SEMs may be applied to assess cross-cultural validity of instruments used in nursing research. Overall performance of the WRSI was acceptable but our analysis showed that some items did not display invariance properties across samples. Item analysis is presented and recommendations for improving the instrument are discussed.

  15. Construction concepts and validation of the 3D printed UST_2 modular stellarator

    NASA Astrophysics Data System (ADS)

    Queral, V.

    2015-03-01

    High accuracy, geometric complexity and thus high cost of stellarators tend to hinder the advance of stellarator research. Nowadays, new manufacturing methods might be developed for the production of small and middle-size stellarators. The methods should demonstrate advantages with respect to common fabrication methods, like casting, cutting, forging and welding, for the construction of advanced highly convoluted modular stellarators. UST_2 is a small modular three-period quasi-isodynamic stellarator of major radius 0.26 m and plasma volume 10 litres, currently being built to validate additive manufacturing (3D printing) for stellarator construction. The modular coils are wound in grooves defined on six 3D printed half period frames designed as light truss structures filled by a strong filler. A geometrically simple assembling configuration has been conceived for UST_2 so as to lower the cost of the device while keeping the positioning accuracy of the different elements. The paper summarizes the construction and assembling concepts developed, the devised positioning methodology, the design of the coil frames and positioning elements, and an initial validation of the assembling of the components.

  16. Human Rehabilitation Techniques. Project Papers. Volume IV, Part B.

    ERIC Educational Resources Information Center

    Dudek, R. A.; And Others

    Volume IV, Part B of a six-volume final report (which covers the findings of a research project on policy and technology related to rehabilitation of disabled individuals) presents a continuation of papers (Part A) giving an overview of project methodology, much of the data used in projecting consequences and policymaking impacts in project…

  17. Situating Standard Setting within Argument-Based Validity

    ERIC Educational Resources Information Center

    Papageorgiou, Spiros; Tannenbaum, Richard J.

    2016-01-01

    Although there has been substantial work on argument-based approaches to validation as well as standard-setting methodologies, it might not always be clear how standard setting fits into argument-based validity. The purpose of this article is to address this lack in the literature, with a specific focus on topics related to argument-based…

  18. SeaWiFS Postlaunch Calibration and Validation Analyses

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); McClain, Charles R.; Ainsworth, Ewa J.; Barnes, Robert A.; Eplee, Robert E., Jr.; Patt, Frederick S.; Robinson, Wayne D.; Wang, Menghua; Bailey, Sean W.

    2000-01-01

    The effort to resolve data quality issues and improve on the initial data evaluation methodologies of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Project was an extensive one. These evaluations have resulted, to date, in three major reprocessings of the entire data set where each reprocessing addressed the data quality issues that could be identified up to the time of each reprocessing. The number of chapters (21) needed to document this extensive work in the SeaWiFS Postlaunch Technical Report Series requires three volumes. The chapters in Volumes 9, 10, and 11 are in a logical order sequencing through sensor calibration, atmospheric correction, masks and flags, product evaluations, and bio-optical algorithms. The first chapter of Volume 9 is an overview of the calibration and validation program, including a table of activities from the inception of the SeaWiFS Project. Chapter 2 describes the fine adjustments of sensor detector knee radiances, i.e., radiance levels where three of the four detectors in each SeaWiFS band saturate. Chapters 3 and 4 describe the analyses of the lunar and solar calibration time series, respectively, which are used to track the temporal changes in radiometric sensitivity in each band. Chapter 5 outlines the procedure used to adjust band 7 relative to band 8 to derive reasonable aerosol radiances in band 7 as compared to those in band 8 in the vicinity of Lanai, Hawaii, the vicarious calibration site. Chapter 6 presents the procedure used to estimate the vicarious calibration gain adjustment factors for bands 1-6 using the waterleaving radiances from the Marine Optical Buoy (MOBY) offshore of Lanai. Chapter 7 provides the adjustments to the coccolithophore flag algorithm which were required for improved performance over the prelaunch version. Chapter 8 is an overview of the numerous modifications to the atmospheric correction algorithm that have been implemented. Chapter 9 describes the methodology used to remove artifacts of

  19. A Review of Traditional Cloze Testing Methodology.

    ERIC Educational Resources Information Center

    Heerman, Charles E.

    To analyze the validity of W. L. Taylor's cloze testing methodology, this paper first examines three areas contributing to Taylor's thinking: communications theory, the psychology of speech and communication, and the theory of dispositional mechanisms--or nonessential words--in speech. It then evaluates Taylor's research to determine how he…

  20. Effects of obesity on lung volume and capacity in children and adolescents: a systematic review

    PubMed Central

    Winck, Aline Dill; Heinzmann-Filho, João Paulo; Soares, Rafaela Borges; da Silva, Juliana Severo; Woszezenki, Cristhiele Taís; Zanatta, Letiane Bueno

    2016-01-01

    Abstract Objective: To assess the effects of obesity on lung volume and capacity in children and adolescents. Data source: This is a systematic review, carried out in Pubmed, Lilacs, Scielo and PEDro databases, using the following Keywords: Plethysmography; Whole Body OR Lung Volume Measurements OR Total Lung Capacity OR Functional Residual Capacity OR Residual Volume AND Obesity. Observational studies or clinical trials that assessed the effects of obesity on lung volume and capacity in children and adolescents (0-18 years) without any other associated disease; in English; Portuguese and Spanish languages were selected. Methodological quality was assessed by the Agency for Healthcare Research and Quality. Data synthesis: Of the 1,030 articles, only four were included in the review. The studies amounted to 548 participants, predominantly males, with sample size ranging from 45 to 327 individuals. 100% of the studies evaluated nutritional status through BMI (z-score) and 50.0% reported the data on abdominal circumference. All demonstrated that obesity causes negative effects on lung volume and capacity, causing a reduction mainly in functional residual capacity in 75.0% of the studies; in the expiratory reserve volume in 50.0% and in the residual volume in 25.0%. The methodological quality ranged from moderate to high, with 75.0% of the studies classified as having high methodological quality. Conclusions: Obesity causes deleterious effects on lung volume and capacity in children and adolescents, mainly by reducing functional residual capacity, expiratory reserve volume and residual volume. PMID:27130483

  1. The Study of the Relationship between Probabilistic Design and Axiomatic Design Methodology. Volume 2

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chin-Yere; Onyebueke, Landon

    1996-01-01

    The structural design, or the design of machine elements, has been traditionally based on deterministic design methodology. The deterministic method considers all design parameters to be known with certainty. This methodology is, therefore, inadequate to design complex structures that are subjected to a variety of complex, severe loading conditions. A nonlinear behavior that is dependent on stress, stress rate, temperature, number of load cycles, and time is observed in all components subjected to complex conditions. These complex conditions introduce uncertainties; hence, the actual factor of safety margin remains unknown. In the deterministic methodology, the contingency of failure is discounted; hence, a high factor of safety is used. It may be most useful in situations where the design structures are simple. The probabilistic method is concerned with the probability of non-failure performance of structures or machine elements. It is much more useful in situations where the design is characterized by complex geometry, possibility of catastrophic failure, or sensitive loads and material properties. Also included: Comparative Study of the use of AGMA Geometry Factors and Probabilistic Design Methodology in the Design of Compact Spur Gear Set.

  2. Evaluation of validity and reliability of a methodology for measuring human postural attitude and its relation to temporomandibular joint disorders.

    PubMed

    Fuentes Fernández, Ramón; Carter, Pablo; Muñoz, Sergio; Silva, Héctor; Oporto Venegas, Gonzalo Hernán; Cantin, Mario; Ottone, Nicolás Ernesto

    2016-04-01

    Temporomandibular joint disorders (TMJDs) are caused by several factors such as anatomical, neuromuscular and psychological alterations. A relationship has been established between TMJDs and postural alterations, a type of anatomical alteration. An anterior position of the head requires hyperactivity of the posterior neck region and shoulder muscles to prevent the head from falling forward. This compensatory muscular function may cause fatigue, discomfort and trigger point activation. To our knowledge, a method for assessing human postural attitude in more than one plane has not been reported. Thus, the aim of this study was to design a methodology to measure the external human postural attitude in frontal and sagittal planes, with proper validity and reliability analyses. The variable postures of 78 subjects (36 men, 42 women; age 18-24 years) were evaluated. The postural attitudes of the subjects were measured in the frontal and sagittal planes, using an acromiopelvimeter, grid panel and Fox plane. The method we designed for measuring postural attitudes had adequate reliability and validity, both qualitatively and quantitatively, based on Cohen's Kappa coefficient (> 0.87) and Pearson's correlation coefficient (r = 0.824, > 80%). This method exhibits adequate metrical properties and can therefore be used in further research on the association of human body posture with skeletal types and TMJDs. Copyright © Singapore Medical Association.

  3. A new methodological approach to assess cardiac work by pressure-volume and stress-length relations in patients with aortic valve stenosis and dilated cardiomyopathy.

    PubMed

    Alter, P; Rupp, H; Rominger, M B; Klose, K J; Maisch, B

    2008-01-01

    In experimental animals, cardiac work is derived from pressure-volume area and analyzed further using stress-length relations. The lack of methods for accurately determining myocardial mass has until now prevented the use of stress-length relations in patients. We hypothesized, therefore, that not only pressure-volume loops but also stress-length diagrams can be derived from cardiac volume and cardiac mass as assessed by cardiac magnetic resonance imaging (CMR) and invasively measured pressure. Left ventricular (LV) volume and myocardial mass were assessed in seven patients with aortic valve stenosis (AS), eight with dilated cardiomyopathy (DCM), and eight controls using electrocardiogram (ECG)-gated CMR. LV pressure was measured invasively. Pressure-volume curves were calculated based on ECG triggering. Stroke work was assessed as the area within the pressure-volume loop. LV wall stress was calculated using a thick-wall sphere model. Similarly, stress-length loops were calculated to quantify stress-length-based work. Taking the LV geometry into account, normalization with regard to ventricular circumference resulted in "myocardial work." Patients with AS (valve area 0.73+/-0.18 cm(2)) exhibited an increased LV myocardial mass when compared with controls (P<0.05). LV wall stress was increased in DCM but not in AS. Stroke work of AS was unchanged when compared with controls (0.539+/-0.272 vs 0.621+/-0.138 Nm, not significant), whereas DCM exhibited a significant depression (0.367+/-0.157 Nm, P<0.05). Myocardial work was significantly reduced in both AS and DCM when compared with controls (129.8+/-69.6, 200.6+/-80.1, 332.2+/-89.6 Nm/m(2), P<0.05), also after normalization (7.40+/-5.07, 6.27+/-3.20, 14.6+/-4.07 Nm/m(2), P<0.001). It is feasible to obtain LV pressure-volume and stress-length diagrams in patients based on the present novel methodological approach of using CMR and invasive pressure measurement. Myocardial work was reduced in patients with DCM and noteworthy
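
    As a small worked example of the loop-area step, the sketch below computes the area enclosed by a sampled pressure-volume loop with the shoelace formula; pressures and volumes are synthetic, and the result is in N·m when pressure is in Pa and volume in m^3.

    ```python
    import numpy as np

    # Stroke work as the area enclosed by a pressure-volume loop (shoelace formula
    # over a closed polygon; the last sample connects back to the first).
    def pv_loop_area(pressure_pa, volume_m3):
        p = np.asarray(pressure_pa, float)
        v = np.asarray(volume_m3, float)
        return 0.5 * abs(np.dot(v, np.roll(p, -1)) - np.dot(p, np.roll(v, -1)))

    # Synthetic, roughly rectangular loop: ~16 kPa pressure swing, ~70 mL stroke volume.
    v = np.array([120e-6, 50e-6, 50e-6, 120e-6])     # m^3
    p = np.array([1.0e3, 1.0e3, 17.0e3, 17.0e3])     # Pa
    print(pv_loop_area(p, v))                         # ~1.12 N*m
    ```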

  4. External gear pumps operating with non-Newtonian fluids: Modelling and experimental validation

    NASA Astrophysics Data System (ADS)

    Rituraj, Fnu; Vacca, Andrea

    2018-06-01

    External gear pumps are used in various industries to pump non-Newtonian viscoelastic fluids like plastics, paints, inks, etc. For both design and analysis purposes, it is often of interest to understand the features of the displacing action realized by the meshing of the gears and to describe the behavior of the leakages for this kind of pump. However, very limited work can be found in the literature about methodologies suitable to model such phenomena. This article describes a technique for modelling external gear pumps that operate with non-Newtonian fluids. In particular, it explains how the displacing action of the unit can be modelled using a lumped parameter approach, which involves dividing the fluid domain into several control volumes and internal flow connections. This work is built upon the HYGESim simulation tool, conceived by the authors' research team in the last decade, which is here for the first time extended to the simulation of non-Newtonian fluids. The article also describes several comparisons between simulation results and experimental data obtained from numerous experiments performed for validation of the presented methodology. Finally, the operation of external gear pumps with fluids having different viscosity characteristics is discussed.
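
    The general structure of such a lumped-parameter model can be sketched with a single control volume and the standard pressure build-up equation dp/dt = (beta/V)(net inflow - dV/dt). The sketch below is only this generic structure with illustrative numbers; it is not the HYGESim formulation and ignores the non-Newtonian viscosity treatment discussed in the article.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    beta = 1.5e9             # effective bulk modulus, Pa
    V0, dV = 2.0e-6, 0.5e-6  # mean chamber volume and variation amplitude, m^3
    omega = 2 * np.pi * 25   # angular frequency of the imposed volume variation, rad/s
    C_leak = 1.0e-12         # laminar leakage coefficient, m^3/(s*Pa)
    p_out = 10e6             # delivery-side pressure, Pa

    def chamber(t, y):
        p = y[0]
        V = V0 + dV * np.sin(omega * t)          # imposed chamber volume
        dVdt = dV * omega * np.cos(omega * t)
        Q_leak = C_leak * (p - p_out)            # leakage toward the delivery volume
        # dp/dt = (beta / V) * (net inflow - dV/dt); here net inflow = -Q_leak
        return [(beta / V) * (-Q_leak - dVdt)]

    sol = solve_ivp(chamber, (0.0, 0.2), [p_out], max_step=1e-4)
    ```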

  5. A cost evaluation methodology for surgical technologies.

    PubMed

    Ismail, Imad; Wolff, Sandrine; Gronfier, Agnes; Mutter, Didier; Swanström, Lee L

    2015-08-01

    To create and validate a micro-costing methodology that surgeons and hospital administrators can use to evaluate the cost of implementing innovative surgical technologies. Our analysis is broken down into several elements of fixed and variable costs which are used to effectively and easily calculate the cost of surgical operations. As an example of application, we use data from 86 robot assisted gastric bypass operations made in our hospital. To validate our methodology, we discuss the cost reporting approaches used in 16 surgical publications with respect to 7 predefined criteria. Four formulas are created which allow users to import data from their health system or particular situation and derive the total cost. We have established that the robotic surgical system represents 97.53 % of our operating room's medical device costs which amounts to $4320.11. With a mean surgery time of 303 min, personnel cost per operation amounts to $1244.73, whereas reusable instruments and disposable costs are, respectively, $1539.69 and $3629.55 per case. The literature survey demonstrates that the cost of surgery is rarely reported or emphasized, and authors who do cover this concept do so with variable methodologies which make their findings difficult to interpret. Using a micro-costing methodology, it is possible to identify the cost of any new surgical procedure/technology using formulas that can be adapted to a variety of operations and healthcare systems. We hope that this paper will provide guidance for decision makers and a means for surgeons to harmonise cost reporting in the literature.
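
    A toy aggregation over the cost categories named above is sketched below, reusing the reported per-case figures purely as inputs; the formula is illustrative and is not one of the paper's four formulas.

    ```python
    # Per-operation cost as the sum of device amortization, personnel time,
    # reusable instruments and disposables (illustrative micro-costing structure).
    def cost_per_operation(device_amortization, personnel_rate_per_min,
                           surgery_min, reusables, disposables):
        return (device_amortization
                + personnel_rate_per_min * surgery_min
                + reusables
                + disposables)

    total = cost_per_operation(device_amortization=4320.11,
                               personnel_rate_per_min=1244.73 / 303,
                               surgery_min=303,
                               reusables=1539.69,
                               disposables=3629.55)
    print(f"total cost per case: ${total:,.2f}")   # about $10,734
    ```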

  6. Quantification of Posterior Globe Flattening: Methodology Development and Validation

    NASA Technical Reports Server (NTRS)

    Lumpkins, Sarah B.; Garcia, Kathleen M.; Sargsyan, Ashot E.; Hamilton, Douglas R.; Berggren, Michael D.; Ebert, Douglas

    2012-01-01

    Microgravity exposure affects visual acuity in a subset of astronauts and mechanisms may include structural changes in the posterior globe and orbit. In particular, posterior globe flattening has been implicated in the eyes of several astronauts. This phenomenon is known to affect some terrestrial patient populations and has been shown to be associated with intracranial hypertension. It is commonly assessed by magnetic resonance imaging (MRI), computed tomography (CT) or B-mode ultrasound (US), without consistent objective criteria. NASA uses a semiquantitative scale of 0-3 as part of eye/orbit MRI and US analysis for occupational monitoring purposes. The goal of this study was to initiate development of an objective quantification methodology to monitor small changes in posterior globe flattening.

  7. Compulsory Education: Statistics, Methodology, Reforms and New Tendencies. Conference Papers for the 8th Session of the International Standing Conference for the History of Education (Parma, Italy, September 3-6, 1986). Volume IV.

    ERIC Educational Resources Information Center

    Genovesi, Giovanni, Ed.

    This collection, the last of four volumes on the history of compulsory education among the nations of Europe and the western hemisphere, analyzes statistics, methodology, reforms, and new tendencies. Twelve of the document's 18 articles are written in English, 3 are written in French and 3 are in Italian. Summaries accompany most articles; three…

  8. Educational Validity of Business Gaming Simulation: A Research Methodology Framework

    ERIC Educational Resources Information Center

    Stainton, Andrew J.; Johnson, Johnnie E.; Borodzicz, Edward P.

    2010-01-01

    Many past educational validity studies of business gaming simulation, and more specifically total enterprise simulation, have been inconclusive. Studies have focused on the weaknesses of business gaming simulation; which is often regarded as an educational medium that has limitations regarding learning effectiveness. However, no attempts have been…

  9. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating the failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
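
    In the spirit of the approach described above, the sketch below propagates parameter uncertainty in a simple fatigue-life model into a failure-probability estimate by Monte Carlo sampling. The Basquin-type S-N law and the distributions are illustrative assumptions, not the documented PFA models or software.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    mission_cycles = 5.0e4

    # Uncertain model parameters (illustrative lognormal/normal assumptions).
    A = rng.lognormal(mean=np.log(1.0e13), sigma=0.3, size=n)   # S-N coefficient
    m = rng.normal(loc=3.0, scale=0.1, size=n)                  # S-N exponent
    stress = rng.normal(loc=300.0, scale=20.0, size=n)          # MPa, load scatter

    life_cycles = A / stress**m          # Basquin-type life estimate, N = A * S**-m
    p_fail = np.mean(life_cycles < mission_cycles)
    print(f"estimated failure probability: {p_fail:.2e}")
    ```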

  10. Somatic Sensitivity and Reflexivity as Validity Tools in Qualitative Research

    ERIC Educational Resources Information Center

    Green, Jill

    2015-01-01

    Validity is a key concept in qualitative educational research. Yet, it is often not addressed in methodological writing about dance. This essay explores validity in a postmodern world of diverse approaches to scholarship, by looking at the changing face of validity in educational qualitative research and at how new understandings of the concept…

  11. Determining blood and plasma volumes using bioelectrical response spectroscopy

    NASA Technical Reports Server (NTRS)

    Siconolfi, S. F.; Nusynowitz, M. L.; Suire, S. S.; Moore, A. D. Jr; Leig, J.

    1996-01-01

    We hypothesized that an electric field (inductance) produced by charged blood components passing through the many branches of arteries and veins could assess total blood volume (TBV) or plasma volume (PV). Individual (N = 29) electrical circuits (inductors, two resistors, and a capacitor) were determined from bioelectrical response spectroscopy (BERS) using a Hewlett Packard 4284A Precision LCR Meter. Inductance, capacitance, and resistance from the circuits of 19 subjects modeled TBV (sum of PV and computed red cell volume) and PV (based on 125I-albumin). Each model (N = 10, cross validation group) had good validity based on 1) mean differences (-2.3 to 1.5%) between the methods that were not significant and less than the propagated errors (+/- 5.2% for TBV and PV), 2) high correlations (r > 0.92) with low SEE (< 7.7%) between dilution and BERS assessments, and 3) Bland-Altman pairwise comparisons that indicated "clinical equivalency" between the methods. Given the limitation of this study (10 validity subjects), we concluded that BERS models accurately assessed TBV and PV. Further evaluations of the models' validities are needed before they are used in clinical or research settings.
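
    The Bland-Altman comparison mentioned above can be sketched as follows with synthetic volumes; the plotting layout and the 1.96-SD limits of agreement are the usual convention, not the study's exact analysis.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Bland-Altman plot: difference between methods against their mean, with bias
    # and 95% limits of agreement.
    def bland_altman(reference, estimate, ax=None):
        reference, estimate = np.asarray(reference, float), np.asarray(estimate, float)
        mean = (reference + estimate) / 2.0
        diff = estimate - reference
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        ax = ax or plt.gca()
        ax.scatter(mean, diff)
        for y in (bias, bias - loa, bias + loa):
            ax.axhline(y, linestyle="--")
        ax.set_xlabel("mean of methods (L)")
        ax.set_ylabel("estimate - reference (L)")
        return bias, (bias - loa, bias + loa)

    rng = np.random.default_rng(1)
    ref = rng.normal(5.0, 0.6, 10)              # dilution-based TBV, litres
    est = ref + rng.normal(0.0, 0.25, 10)       # model-based estimate with noise
    print(bland_altman(ref, est))
    ```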

  12. Does Metformin Reduce Cancer Risks? Methodologic Considerations.

    PubMed

    Golozar, Asieh; Liu, Shuiqing; Lin, Joeseph A; Peairs, Kimberly; Yeh, Hsin-Chieh

    2016-01-01

    The substantial burden of cancer and diabetes and the association between the two conditions has been a motivation for researchers to look for targeted strategies that can simultaneously affect both diseases and reduce their overlapping burden. In the absence of randomized clinical trials, researchers have taken advantage of the availability and richness of administrative databases and electronic medical records to investigate the effects of drugs on cancer risk among diabetic individuals. The majority of these studies suggest that metformin could potentially reduce cancer risk. However, the validity of this purported reduction in cancer risk is limited by several methodological flaws either in the study design or in the analysis. Whether metformin use decreases cancer risk relies heavily on the availability of valid data sources with complete information on confounders, accurate assessment of drug use, appropriate study design, and robust analytical techniques. The majority of the observational studies assessing the association between metformin and cancer risk suffer from methodological shortcomings and efforts to address these issues have been incomplete. Future investigations on the association between metformin and cancer risk should clearly address the methodological issues due to confounding by indication, prevalent user bias, and time-related biases. Although the proposed strategies do not guarantee a bias-free estimate for the association between metformin and cancer, they will reduce synthesis of and reporting of erroneous results.

  13. METHANE EMISSIONS FROM THE NATURAL GAS INDUSTRY VOLUME 3: GENERAL METHODOLOGY

    EPA Science Inventory

    The 15-volume report summarizes the results of a comprehensive program to quantify methane (CH4) emissions from the U.S. natural gas industry for the base year. The objective was to determine CH4 emissions from the wellhead and ending downstream at the customer's meter. The accur...

  14. METHANE EMISSIONS FROM THE NATURAL GAS INDUSTRY VOLUME 4: STATISTICAL METHODOLOGY

    EPA Science Inventory

    The 15-volume report summarizes the results of a comprehensive program to quantify methane (CH4) emissions from the U.S. natural gas industry for the base year. The objective was to determine CH4 emissions from the wellhead and ending downstream at the customer's meter. The accur...

  15. Effects of obesity on lung volume and capacity in children and adolescents: a systematic review.

    PubMed

    Winck, Aline Dill; Heinzmann-Filho, João Paulo; Soares, Rafaela Borges; da Silva, Juliana Severo; Woszezenki, Cristhiele Taís; Zanatta, Letiane Bueno

    2016-12-01

    To assess the effects of obesity on lung volume and capacity in children and adolescents. This is a systematic review, carried out in Pubmed, Lilacs, Scielo and PEDro databases, using the following Keywords: Plethysmography; Whole Body OR Lung Volume Measurements OR Total Lung Capacity OR Functional Residual Capacity OR Residual Volume AND Obesity. Observational studies or clinical trials that assessed the effects of obesity on lung volume and capacity in children and adolescents (0-18 years) without any other associated disease; in English; Portuguese and Spanish languages were selected. Methodological quality was assessed by the Agency for Healthcare Research and Quality. Of the 1,030 articles, only four were included in the review. The studies amounted to 548 participants, predominantly males, with sample size ranging from 45 to 327 individuals. 100% of the studies evaluated nutritional status through BMI (z-score) and 50.0% reported the data on abdominal circumference. All demonstrated that obesity causes negative effects on lung volume and capacity, causing a reduction mainly in functional residual capacity in 75.0% of the studies; in the expiratory reserve volume in 50.0% and in the residual volume in 25.0%. The methodological quality ranged from moderate to high, with 75.0% of the studies classified as having high methodological quality. Obesity causes deleterious effects on lung volume and capacity in children and adolescents, mainly by reducing functional residual capacity, expiratory reserve volume and residual volume. Copyright © 2016 Sociedade de Pediatria de São Paulo. Publicado por Elsevier Editora Ltda. All rights reserved.

  16. Development, Verification and Validation of Parallel, Scalable Volume of Fluid CFD Program for Propulsion Applications

    NASA Technical Reports Server (NTRS)

    West, Jeff; Yang, H. Q.

    2014-01-01

    There are many instances involving liquid/gas interfaces and their dynamics in the design of liquid engine powered rockets such as the Space Launch System (SLS). Some examples of these applications are: propellant tank draining and slosh, subcritical condition injector analysis for gas generators, preburners and thrust chambers, water deluge mitigation for launch induced environments, and even solid rocket motor liquid slag dynamics. Commercially available CFD programs simulating gas/liquid interfaces using the Volume of Fluid approach are currently limited in their parallel scalability. In 2010, for instance, an internal NASA/MSFC review of three commercial tools revealed that parallel scalability was seriously compromised at 8 cpus and no additional speedup was possible after 32 cpus. Other non-interface CFD applications at the time were demonstrating useful parallel scalability up to 4,096 processors or more. Based on this review, NASA/MSFC initiated an effort to implement a Volume of Fluid implementation within the unstructured mesh, pressure-based algorithm CFD program, Loci-STREAM. After verification was achieved by comparing results to the commercial CFD program CFD-Ace+, and validation by direct comparison with data, Loci-STREAM-VoF is now the production CFD tool for propellant slosh force and slosh damping rate simulations at NASA/MSFC. In these applications, good parallel scalability has been demonstrated for problem sizes of tens of millions of cells and thousands of cpu cores. Ongoing efforts are focused on the application of Loci-STREAM-VoF to predict the transient flow patterns of water on the SLS Mobile Launch Platform, in order to support the phasing of water for launch environment mitigation so that detrimental effects on the vehicle are not realized.

  17. Foundations for Measuring Volume Rendering Quality

    NASA Technical Reports Server (NTRS)

    Williams, Peter L.; Uselton, Samuel P.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The goal of this paper is to provide a foundation for objectively comparing volume rendered images. The key elements of the foundation are: (1) a rigorous specification of all the parameters that need to be specified to define the conditions under which a volume rendered image is generated; (2) a methodology for difference classification, including a suite of functions or metrics to quantify and classify the difference between two volume rendered images that will support an analysis of the relative importance of particular differences. The results of this method can be used to study the changes caused by modifying particular parameter values, to compare and quantify changes between images of similar data sets rendered in the same way, and even to detect errors in the design, implementation or modification of a volume rendering system. If one has a benchmark image, for example one created by a high accuracy volume rendering system, the method can be used to evaluate the accuracy of a given image.
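
    A few simple difference measures of the kind such a suite might include are sketched below; they are generic image metrics, not the paper's specific functions.

    ```python
    import numpy as np

    # Quantify the difference between two volume-rendered images given as float
    # arrays in [0, 1] with identical shape.
    def difference_metrics(img_a, img_b, threshold=0.05):
        a, b = np.asarray(img_a, float), np.asarray(img_b, float)
        diff = np.abs(a - b)
        return {
            "mse": float(np.mean(diff ** 2)),
            "max_abs_diff": float(diff.max()),
            "frac_pixels_above_thr": float(np.mean(diff > threshold)),
        }

    rng = np.random.default_rng(2)
    base = rng.random((64, 64))
    perturbed = np.clip(base + rng.normal(0, 0.02, base.shape), 0, 1)
    print(difference_metrics(base, perturbed))
    ```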

  18. The RAAF Logistics Study. Volume 3,

    DTIC Science & Technology

    human activity system which has been analysed by the Central Studies Establishment on behalf of the Defence Logistics Organisation. This work, now reported on, stems from a conviction that improved decision making can flow from an enhanced and integrated understanding of the activities necessary to fulfil the objectives of the system, by those involved in it or affected by it. This particular Volume deals with the description, using the Soft Systems Methodology described in Volume 1, of the RAAF Technical System which constitutes a major component of the overall RAAF

  19. A validated methodology for determination of laboratory instrument computer interface efficacy

    NASA Astrophysics Data System (ADS)

    1984-12-01

    This report is intended to provide a methodology for determining when, and for which instruments, direct interfacing of laboratory instruments and laboratory computers is beneficial. This methodology has been developed to assist the Tri-Service Medical Information Systems Program Office in making future decisions regarding laboratory instrument interfaces. We have calculated the time savings required to reach a break-even point for a range of instrument interface prices and corresponding average annual costs. The break-even analyses used empirical data to estimate the number of data points run per day that are required to meet the break-even point. The results indicate, for example, that at a purchase price of $3,000, an instrument interface will be cost-effective if the instrument is utilized for at least 154 data points per day when operated in the continuous mode, or 216 points per day when operated in the discrete mode. Although this model can help to ensure that instrument interfaces are cost effective, additional information should be considered in making interface decisions. A reduction in results transcription errors may be a major benefit of instrument interfacing.
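
    The break-even arithmetic has the structure sketched below: the annualized interface cost divided by the labor saving per data point gives the yearly workload, and hence the daily workload, needed to break even. The per-point time saving and labor rate are hypothetical placeholders; the report's 154 and 216 points-per-day figures come from its own empirical data.

    ```python
    # Daily data points needed for an interface to pay for itself.
    def breakeven_points_per_day(annual_interface_cost, seconds_saved_per_point,
                                 labor_rate_per_hour, working_days=250):
        saving_per_point = labor_rate_per_hour * seconds_saved_per_point / 3600.0
        points_per_year = annual_interface_cost / saving_per_point
        return points_per_year / working_days

    print(breakeven_points_per_day(annual_interface_cost=1000.0,
                                   seconds_saved_per_point=30.0,
                                   labor_rate_per_hour=12.0))   # 40 points/day
    ```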

  20. Validation of Test Weighing Protocol to Estimate Enteral Feeding Volumes in Preterm Infants.

    PubMed

    Rankin, Michael W; Jimenez, Elizabeth Yakes; Caraco, Marina; Collinson, Marie; Lostetter, Lisa; DuPont, Tara L

    2016-11-01

    To evaluate the accuracy of pre- and postfeeding weights to estimate enteral feeding volumes in preterm infants. Single-center prospective cohort study of infants 28-36 weeks' corrected age receiving gavage feedings. For each test weight, 3 pre- and 3 postgavage feeding weights were obtained by study personnel, blinded to feeding volume, via a specific protocol. The correlation between test weight difference and actual volume ingested was assessed by the use of summary statistics, Spearman rho, and graphical analyses. The relationship between categorical predictive variables and a predefined acceptable difference (±5 mL) was assessed with the χ 2 or Fisher exact test. A total of 101 test weights were performed in 68 infants. Estimated and actual feeding volumes were highly correlated (r = 0.94, P < .001), with a mean absolute difference of 2.95 mL (SD: 2.70; range: 0, 12.3 mL; 5th, 95th percentile: 0, 9.3); 85% of test weights were within ±5 mL of actual feeding volume and did not vary significantly by corrected age, feeding tube or respiratory support type, feeding duration or volume, formula vs breast milk, or caloric density. With adherence to study protocol, 89% of test weights (66/74) were within ±5 mL of actual volume, compared with 71% (19/27, P = .04) when concerns about protocol adherence were noted (eg, difficulty securing oxygen tubing). Via the use of a standard protocol, feeding volumes can be estimated accurately by pre- and postfeeding weights. Test weighing could be a valuable tool to support direct breastfeeding in the neonatal intensive care unit. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Focus control enhancement and on-product focus response analysis methodology

    NASA Astrophysics Data System (ADS)

    Kim, Young Ki; Chen, Yen-Jen; Hao, Xueli; Samudrala, Pavan; Gomez, Juan-Manuel; Mahoney, Mark O.; Kamalizadeh, Ferhad; Hanson, Justin K.; Lee, Shawn; Tian, Ye

    2016-03-01

    With decreasing CDOF (Critical Depth Of Focus) for 20/14nm technology and beyond, focus errors are becoming increasingly critical for on-product performance. Current on-product focus control techniques in high volume manufacturing are limited; it is difficult to define measurable focus error and to optimize focus response on product with existing methods, owing to the lack of credible focus measurement methodologies. In addition to developments in the imaging and focus control capability of scanners and general tool stability maintenance, on-product focus control improvements are also required to meet on-product imaging specifications. In this paper, we discuss focus monitoring, wafer (edge) fingerprint correction and on-product focus budget analysis through a diffraction based focus (DBF) measurement methodology. Several examples will be presented showing better focus response and control on product wafers. Also, a method will be discussed for a focus interlock automation system on product for a high volume manufacturing (HVM) environment.

  2. Scientific Research in Homeopathic Medicine: Validation, Methodology and Perspectives

    PubMed Central

    2007-01-01

    Verona's School of Homeopathic Medicine (www.omeopatia.org) organized a day of full immersion in the field of homeopathy, focusing on the validity of this much-debated discipline. There is widespread consensus in the medical community that evidence-based medicine is the best standard for assessing efficacy and safety of healthcare practices, and systematic reviews with strict protocols are essential to establish proof for various therapies. Students, homeopathic practitioners, academic and business representatives, who are interested in or curious about homeopathic practices attended the conference.

  3. Methodological issues in microdialysis sampling for pharmacokinetic studies.

    PubMed

    de Lange, E C; de Boer, A G; Breimer, D D

    2000-12-15

    Microdialysis is an in vivo technique that permits monitoring of local concentrations of drugs and metabolites at specific sites in the body. Microdialysis has several characteristics that make it an attractive tool for pharmacokinetic research. About a decade ago the microdialysis technique entered the field of pharmacokinetic research, first in the brain and later also in peripheral tissues and blood. Within this period much has been learned about the proper use of this technique. Today it has outgrown its teething problems, and its potential and limitations have become more or less well defined. As microdialysis is a delicate technique for which experimental factors appear to be critical with respect to the validity of the experimental outcomes, several factors should be considered. These include the probe; the perfusion solution; the post-surgery interval in relation to surgical trauma, tissue integrity and repeated experiments; the analysis of microdialysate samples; and the quantification of microdialysate data. Provided that experimental conditions are optimized to give valid and quantitative results, microdialysis can provide numerous data points from a relatively small number of individual animals to determine detailed pharmacokinetic information. An example of one of the added values of this technique compared with other in vivo pharmacokinetic techniques is that microdialysis reflects free concentrations in tissues and plasma. This gives the opportunity to assess information on drug transport equilibration across membranes such as the blood-brain barrier, which has already provided new insights. With the progress of analytical methodology, especially with respect to low volume/low concentration measurements and simultaneous measurement of multiple compounds, the applications and importance of the microdialysis technique in pharmacokinetic research will continue to increase.

  4. Validation of post-operative residual contrast enhancing tumor volume as an independent prognostic factor for overall survival in newly diagnosed glioblastoma.

    PubMed

    Ellingson, Benjamin M; Abrey, Lauren E; Nelson, Sarah J; Kaufmann, Timothy J; Garcia, Josep; Chinot, Olivier; Saran, Frank; Nishikawa, Ryo; Henriksson, Roger; Mason, Warren P; Wick, Wolfgang; Butowski, Nicholas; Ligon, Keith L; Gerstner, Elizabeth R; Colman, Howard; de Groot, John; Chang, Susan; Mellinghoff, Ingo; Young, Robert J; Alexander, Brian M; Colen, Rivka; Taylor, Jennie W; Arrillaga-Romany, Isabel; Mehta, Arnav; Huang, Raymond Y; Pope, Whitney B; Reardon, David; Batchelor, Tracy; Prados, Michael; Galanis, Evanthia; Wen, Patrick Y; Cloughesy, Timothy F

    2018-04-05

    In the current study, we pooled imaging data in newly diagnosed GBM patients from international multicenter clinical trials, single institution databases, and multicenter clinical trial consortiums to identify the relationship between post-operative residual enhancing tumor volume and overall survival (OS). Data from 1,511 newly diagnosed GBM patients from 5 data sources were included in the current study: 1) a single institution database from UCLA (N=398; Discovery); 2) patients from the Ben and Cathy Ivy Foundation for Early Phase Clinical Trials Network Radiogenomics Database (N=262 from 8 centers; Confirmation); 3) the chemoradiation placebo arm from an international phase III trial (AVAglio; N=394 from 120 locations in 23 countries; Validation); 4) the experimental arm from AVAglio examining chemoradiation plus bevacizumab (N=404 from 120 locations in 23 countries; Exploratory Set 1); and 5) an Alliance (N0874) Phase I/II trial of vorinostat plus chemoradiation (N=53; Exploratory Set 2). Post-surgical, residual enhancing disease was quantified using T1 subtraction maps. Multivariate Cox regression models were used to determine influence of clinical variables, MGMT status, and residual tumor volume on OS. A log-linear relationship was observed between post-operative, residual enhancing tumor volume and OS in newly diagnosed GBM treated with standard chemoradiation. Post-operative tumor volume is a prognostic factor for OS (P<0.01), regardless of therapy, age, and MGMT promoter methylation status. Post-surgical, residual contrast-enhancing disease significantly negatively influences survival in patients with newly diagnosed glioblastoma treated with chemoradiation with or without concomitant experimental therapy.

  5. Emerging Concepts and Methodologies in Cancer Biomarker Discovery.

    PubMed

    Lu, Meixia; Zhang, Jinxiang; Zhang, Lanjing

    2017-01-01

    Cancer biomarker discovery is a critical part of cancer prevention and treatment. Despite decades of effort, only a small number of cancer biomarkers have been identified and validated for clinical settings. Conceptual and methodological breakthroughs may help accelerate the discovery of additional cancer biomarkers, particularly for diagnostic use. In this review, we have attempted to review the emerging concepts in cancer biomarker discovery, including real-world evidence, open access data, and data paucity in rare or uncommon cancers. We have also summarized the recent methodological progress in cancer biomarker discovery, such as high-throughput sequencing, liquid biopsy, big data, artificial intelligence (AI), and deep learning and neural networks. Much attention has been given to the methodological details and comparison of the methodologies. Notably, these concepts and methodologies interact with each other and will likely lead to synergistic effects when carefully combined. Newer, more innovative concepts and methodologies continue to emerge as the current ones become mainstream and widely applied in the field. Some future challenges are also discussed. This review contributes to the development of future theoretical frameworks and technologies in cancer biomarker discovery and will contribute to the discovery of more useful cancer biomarkers.

  6. Evaluation of validity and reliability of a methodology for measuring human postural attitude and its relation to temporomandibular joint disorders

    PubMed Central

    Fernández, Ramón Fuentes; Carter, Pablo; Muñoz, Sergio; Silva, Héctor; Venegas, Gonzalo Hernán Oporto; Cantin, Mario; Ottone, Nicolás Ernesto

    2016-01-01

    INTRODUCTION Temporomandibular joint disorders (TMJDs) are caused by several factors such as anatomical, neuromuscular and psychological alterations. A relationship has been established between TMJDs and postural alterations, a type of anatomical alteration. An anterior position of the head requires hyperactivity of the posterior neck region and shoulder muscles to prevent the head from falling forward. This compensatory muscular function may cause fatigue, discomfort and trigger point activation. To our knowledge, a method for assessing human postural attitude in more than one plane has not been reported. Thus, the aim of this study was to design a methodology to measure the external human postural attitude in frontal and sagittal planes, with proper validity and reliability analyses. METHODS The variable postures of 78 subjects (36 men, 42 women; age 18–24 years) were evaluated. The postural attitudes of the subjects were measured in the frontal and sagittal planes, using an acromiopelvimeter, grid panel and Fox plane. RESULTS The method we designed for measuring postural attitudes had adequate reliability and validity, both qualitatively and quantitatively, based on Cohen’s Kappa coefficient (> 0.87) and Pearson’s correlation coefficient (r = 0.824, > 80%). CONCLUSION This method exhibits adequate metrical properties and can therefore be used in further research on the association of human body posture with skeletal types and TMJDs. PMID:26768173

  7. Validating a new methodology for strain estimation from cardiac cine MRI

    NASA Astrophysics Data System (ADS)

    Elnakib, Ahmed; Beache, Garth M.; Gimel'farb, Georgy; Inanc, Tamer; El-Baz, Ayman

    2013-10-01

    This paper focuses on validating a novel framework for estimating the functional strain from cine cardiac magnetic resonance imaging (CMRI). The framework consists of three processing steps. First, the left ventricle (LV) wall borders are segmented using a level-set based deformable model. Second, the points on the wall borders are tracked during the cardiac cycle based on solving the Laplace equation between the LV edges. Finally, the circumferential and radial strains are estimated at the inner, mid-wall, and outer borders of the LV wall. The proposed framework is validated using synthetic phantoms of the material strains that account for the physiological features and the LV response during the cardiac cycle. Experimental results on simulated phantom images confirm the accuracy and robustness of our method.
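
    As a rough illustration of the strain definition used in the final step, the sketch below computes circumferential (Lagrangian) strain as the relative change in contour length of tracked border points, with frame 0 as the end-diastolic reference. The function names and the toy contracting-circle data are hypothetical, not the authors' implementation.

```python
# Sketch: circumferential (Lagrangian) strain from tracked LV border points.
# `contours[t]` is an (N, 2) array of (x, y) points for frame t, ordered
# around the border; frame 0 is taken as the end-diastolic reference.
import numpy as np

def contour_length(points: np.ndarray) -> float:
    """Perimeter of a closed contour given ordered boundary points."""
    closed = np.vstack([points, points[:1]])          # close the loop
    return float(np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1)))

def circumferential_strain(contours):
    """Strain per frame: (L_t - L_0) / L_0, with frame 0 as reference."""
    l0 = contour_length(contours[0])
    return [(contour_length(c) - l0) / l0 for c in contours]

# Toy example: a circle that contracts by 15% (mimicking systole).
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ed = np.column_stack([30 * np.cos(theta), 30 * np.sin(theta)])
es = 0.85 * ed
print(circumferential_strain([ed, es]))  # approx [0.0, -0.15]
```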

  8. Alternative occupied volume integrity (OVI) tests and analyses.

    DOT National Transportation Integrated Search

    2013-10-01

    FRA, supported by the Volpe Center, conducted research on alternative methods of evaluating occupied volume integrity (OVI) in passenger railcars. Guided by this research, an alternative methodology for evaluating OVI that ensures an equivalent or gr...

  9. Volume, conservation and instruction: A classroom based solomon four group study of conflict

    NASA Astrophysics Data System (ADS)

    Rowell, J. A.; Dawson, C. J.

    The research reported is an attempt to widen the applicability of Piagetian theory-based conflict methodology from individual situations to whole classes. A Solomon four group experimental design, augmented by a delayed posttest, was used to provide a controlled framework for studying the effects of conflict instruction on Grade 8 students' ability to conserve volume of noncompressible matter, and to apply that knowledge to gas volume. The results, reported for individuals and groups, show the methodology can be effective, particularly when instruction is preceded by a pretest. Immediate posttest differences in knowledge of gas volume between spontaneous (pretest) conservers and instructed conservers of volume of noncompressible matter were no longer in evidence on the delayed posttest. This observation, together with the effects of pretesting and of the instructional sequence, is shown to have a consistent Piagetian interpretation. Practical implications are discussed.

  10. A hierarchical clustering methodology for the estimation of toxicity.

    PubMed

    Martin, Todd M; Harten, Paul; Venkatapathy, Raghuraman; Das, Shashikala; Young, Douglas M

    2008-01-01

    A quantitative structure-activity relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural similarity is defined in terms of 2-D physicochemical descriptors (such as connectivity and E-state indices). A genetic algorithm-based technique is used to generate statistically valid QSAR models for each cluster (using the pool of descriptors described above). The toxicity for a given query compound is estimated using the weighted average of the predictions from the closest cluster at each step in the hierarchical clustering, assuming that the compound is within the domain of applicability of the cluster. The hierarchical clustering methodology was tested using a Tetrahymena pyriformis acute toxicity data set containing 644 chemicals in the training set and with two prediction sets containing 339 and 110 chemicals. The results from the hierarchical clustering methodology were compared to the results from several different QSAR methodologies.
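
    A minimal sketch of the clustering-plus-local-models idea follows: Ward clustering on a descriptor matrix, one linear QSAR per cluster, and prediction of a query compound from the nearest cluster. It omits the genetic-algorithm descriptor selection, the multi-level weighting across the hierarchy, and the applicability-domain check, and all data are synthetic.

```python
# Sketch of the clustering-plus-local-model idea: Ward clustering on 2-D
# descriptors, one linear QSAR per cluster, query predicted from the
# nearest cluster. Descriptor selection by genetic algorithm is omitted.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 6))                                 # hypothetical descriptors
y = X @ rng.normal(size=6) + rng.normal(scale=0.3, size=120)  # "toxicity" endpoint

labels = fcluster(linkage(X, method="ward"), t=4, criterion="maxclust")

models, centroids = {}, {}
for c in np.unique(labels):
    idx = labels == c
    models[c] = LinearRegression().fit(X[idx], y[idx])
    centroids[c] = X[idx].mean(axis=0)

def predict(query):
    # assign the query compound to the cluster with the closest centroid
    c = min(centroids, key=lambda k: np.linalg.norm(query - centroids[k]))
    return models[c].predict(query.reshape(1, -1))[0]

print(predict(rng.normal(size=6)))
```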

  11. Assessing Construct Validity Using Multidimensional Item Response Theory.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    The concept of a user-specified validity sector is discussed. The idea of the validity sector combines the work of M. D. Reckase (1986) and R. Shealy and W. Stout (1991). Reckase developed a methodology to represent an item in a multidimensional latent space as a vector. Item vectors are computed using multidimensional item response theory item…
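
    For context, Reckase's vector representation mentioned above can be computed directly from an item's discrimination parameters and intercept: the multidimensional discrimination (MDISC) is the norm of the discrimination vector, the direction angles follow from its direction cosines, and the multidimensional difficulty (MDIFF) is the negative intercept divided by MDISC. The sketch below uses made-up item parameters.

```python
# Sketch of Reckase-style item vectors in a 2-dimensional latent space:
# MDISC, the angles of the item vector with each axis, and MDIFF.
# Item parameters below are made up for illustration.
import numpy as np

def item_vector(a: np.ndarray, d: float):
    mdisc = np.linalg.norm(a)                  # multidimensional discrimination
    angles = np.degrees(np.arccos(a / mdisc))  # direction angles with each axis
    mdiff = -d / mdisc                         # multidimensional difficulty
    return mdisc, angles, mdiff

a = np.array([1.2, 0.5])   # hypothetical discrimination parameters
d = -0.8                   # hypothetical intercept
print(item_vector(a, d))
```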

  12. Validation of a New Methodology to Determine 3-Dimensional Endograft Apposition, Position, and Expansion in the Aortic Neck After Endovascular Aneurysm Repair.

    PubMed

    Schuurmann, Richte C L; Overeem, Simon P; van Noort, Kim; de Vries, Bastiaan A; Slump, Cornelis H; de Vries, Jean-Paul P M

    2018-04-01

    To validate a novel methodology employing regular postoperative computed tomography angiography (CTA) scans to assess essential factors contributing to durable endovascular aneurysm repair (EVAR), including endograft deployment accuracy, neck adaptation to radial forces, and effective apposition of the fabric within the aortic neck. Semiautomatic calculation of the apposition surface between the endograft and the infrarenal aortic neck was validated in vitro by comparing the calculated surfaces over a cylindrical silicon model with known dimensions on CTA reconstructions with various slice thicknesses. Interobserver variabilities were assessed for calculating endograft position, apposition, and expansion in a retrospective series of 24 elective EVAR patients using the repeatability coefficient (RC) and the intraclass correlation coefficient (ICC). The variability of these calculations was compared with variability of neck length and diameter measurements on centerline reconstructions of the preoperative and first postoperative CTA scans. In vitro validation showed accurate calculation of apposition, with deviation of 2.8% from the true surface for scans with 1-mm slice thickness. Excellent agreement was achieved for calculation of the endograft dimensions (ICC 0.909 to 0.996). Variability was low for calculation of endograft diameter (RC 2.3 mm), fabric distances (RC 5.2 to 5.7 mm), and shortest apposition length (RC 4.1 mm), which was the same as variability of regular neck diameter (RC 0.9 to 1.1 mm) and length (RC 4.0 to 8.0 mm) measurements. This retrospective validation study showed that apposition surfaces between an endograft and the infrarenal neck can be calculated accurately and with low variability. Determination of the (ap)position of the endograft in the aortic neck and detection of subtle changes during follow-up are crucial to determining eventual failure after EVAR.
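
    The agreement statistics reported above can be illustrated with a small sketch: a repeatability coefficient computed as 1.96 times the standard deviation of paired interobserver differences (one common convention, not necessarily the exact one used in the paper) and a one-way random-effects ICC. The measurement values are hypothetical.

```python
# Sketch: repeatability coefficient (RC) and a one-way ICC from two
# observers' measurements of the same quantity (hypothetical data).
import numpy as np

obs1 = np.array([24.1, 26.3, 23.8, 25.0, 27.2, 24.6])  # e.g. apposition length, mm
obs2 = np.array([23.5, 26.9, 24.2, 25.4, 26.8, 25.1])

# RC as 1.96 * SD of the paired differences (one common convention).
diff = obs1 - obs2
rc = 1.96 * diff.std(ddof=1)

# One-way random-effects ICC(1,1) from a subjects-by-raters table.
data = np.column_stack([obs1, obs2])
n, k = data.shape
grand = data.mean()
msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)                   # between subjects
msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))   # within subjects
icc = (msb - msw) / (msb + (k - 1) * msw)

print(round(rc, 2), round(icc, 3))
```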

  13. Validation of Volume and Taper Equations For Loblolly Shortleaf and Slash Pine

    Treesearch

    Allan E. Tiarks; V. Clark Baldwin

    1999-01-01

    Inside-bark diameter measurements at 6.64 intervals of 137 loblolly, 52 shortleaf, and 64 slash pines were used to calculate the actual volume and taper of each species for comparison with volumes and tapers predicted from published equations. The loblolly pine were cut in Texas (TX) and Louisiana (LA) while the shortleaf was sampled only in TX. The slash pine were...

  14. Methodological quality of meta-analyses on treatments for chronic obstructive pulmonary disease: a cross-sectional study using the AMSTAR (Assessing the Methodological Quality of Systematic Reviews) tool.

    PubMed

    Ho, Robin S T; Wu, Xinyin; Yuan, Jinqiu; Liu, Siya; Lai, Xin; Wong, Samuel Y S; Chung, Vincent C H

    2015-01-08

    Meta-analysis (MA) of randomised trials is considered to be one of the best approaches for summarising high-quality evidence on the efficacy and safety of treatments. However, methodological flaws in MAs can reduce the validity of conclusions, subsequently impairing the quality of decision making. To assess the methodological quality of MAs on COPD treatments. A cross-sectional study on MAs of COPD trials. MAs published during 2000-2013 were sampled from the Cochrane Database of Systematic Reviews and Database of Abstracts of Reviews of Effect. Methodological quality was assessed using the validated AMSTAR (Assessing the Methodological Quality of Systematic Reviews) tool. Seventy-nine MAs were sampled. Only 18% considered the scientific quality of primary studies when formulating conclusions and 49% used appropriate meta-analytic methods to combine findings. The problems were particularly acute among MAs on pharmacological treatments. In 48% of MAs the authors did not report conflict of interest. Fifty-eight percent reported harmful effects of treatment. Publication bias was not assessed in 65% of MAs, and only 10% had searched non-English databases. The methodological quality of the included MAs was disappointing. Consideration of scientific quality when formulating conclusions should be made explicit. Future MAs should improve on reporting conflict of interest and harm, assessment of publication bias, prevention of language bias and use of appropriate meta-analytic methods.

  15. Methodological quality of meta-analyses on treatments for chronic obstructive pulmonary disease: a cross-sectional study using the AMSTAR (Assessing the Methodological Quality of Systematic Reviews) tool

    PubMed Central

    Ho, Robin ST; Wu, Xinyin; Yuan, Jinqiu; Liu, Siya; Lai, Xin; Wong, Samuel YS; Chung, Vincent CH

    2015-01-01

    Background: Meta-analysis (MA) of randomised trials is considered to be one of the best approaches for summarising high-quality evidence on the efficacy and safety of treatments. However, methodological flaws in MAs can reduce the validity of conclusions, subsequently impairing the quality of decision making. Aims: To assess the methodological quality of MAs on COPD treatments. Methods: A cross-sectional study on MAs of COPD trials. MAs published during 2000–2013 were sampled from the Cochrane Database of Systematic Reviews and Database of Abstracts of Reviews of Effect. Methodological quality was assessed using the validated AMSTAR (Assessing the Methodological Quality of Systematic Reviews) tool. Results: Seventy-nine MAs were sampled. Only 18% considered the scientific quality of primary studies when formulating conclusions and 49% used appropriate meta-analytic methods to combine findings. The problems were particularly acute among MAs on pharmacological treatments. In 48% of MAs the authors did not report conflict of interest. Fifty-eight percent reported harmful effects of treatment. Publication bias was not assessed in 65% of MAs, and only 10% had searched non-English databases. Conclusions: The methodological quality of the included MAs was disappointing. Consideration of scientific quality when formulating conclusions should be made explicit. Future MAs should improve on reporting conflict of interest and harm, assessment of publication bias, prevention of language bias and use of appropriate meta-analytic methods. PMID:25569783

  16. A prototype software methodology for the rapid evaluation of biomanufacturing process options.

    PubMed

    Chhatre, Sunil; Francis, Richard; O'Donovan, Kieran; Titchener-Hooker, Nigel J; Newcombe, Anthony R; Keshavarz-Moore, Eli

    2007-10-01

    A three-layered simulation methodology is described that rapidly evaluates biomanufacturing process options. In each layer, inferior options are screened out, while more promising candidates are evaluated further in the subsequent, more refined layer, which uses more rigorous models that require more data from time-consuming experimentation. Screening ensures laboratory studies are focused only on options showing the greatest potential. To simplify the screening, outputs of production level, cost and time are combined into a single value using multi-attribute-decision-making techniques. The methodology was illustrated by evaluating alternatives to an FDA (U.S. Food and Drug Administration)-approved process manufacturing rattlesnake antivenom. Currently, antivenom antibodies are recovered from ovine serum by precipitation/centrifugation and proteolyzed before chromatographic purification. Alternatives included increasing the feed volume, replacing centrifugation with microfiltration and replacing precipitation/centrifugation with a Protein G column. The best alternative used a higher feed volume and a Protein G step. By rapidly evaluating the attractiveness of options, the methodology facilitates efficient and cost-effective process development.
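
    The single-value screening score described above is a multi-attribute decision-making construct; a simple weighted-sum version after min-max normalization is sketched below. The options, attribute values, and weights are invented for illustration and do not reflect the antivenom case study's actual numbers.

```python
# Sketch of weighted-sum multi-attribute scoring to rank process options.
# Options, attribute values, and weights are hypothetical.
import numpy as np

options = ["baseline", "larger feed volume", "microfiltration", "Protein G"]
# columns: product output (higher better), cost (lower better), time (lower better)
values = np.array([
    [1.0, 1.00, 1.00],
    [1.6, 1.20, 1.05],
    [1.1, 0.95, 0.90],
    [1.8, 1.30, 0.85],
])
benefit = np.array([True, False, False])   # direction of preference per attribute
weights = np.array([0.5, 0.3, 0.2])

# Min-max normalise so that 1 is always "best" for each attribute.
norm = (values - values.min(axis=0)) / (values.max(axis=0) - values.min(axis=0))
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

scores = norm @ weights
for name, s in sorted(zip(options, scores), key=lambda p: -p[1]):
    print(f"{name}: {s:.2f}")
```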

  17. Developing a Validity Argument through Abductive Reasoning with an Empirical Demonstration of the Latent Class Analysis

    ERIC Educational Resources Information Center

    Wu, Amery D.; Stone, Jake E.; Liu, Yan

    2016-01-01

    This article proposes and demonstrates a methodology for test score validation through abductive reasoning. It describes how abductive reasoning can be utilized in support of the claims made about test score validity. This methodology is demonstrated with a real data example of the Canadian English Language Proficiency Index Program…

  18. Development and Validation of a New Methodology to Assess the Vineyard Water Status by On-the-Go Near Infrared Spectroscopy

    PubMed Central

    Diago, Maria P.; Fernández-Novales, Juan; Gutiérrez, Salvador; Marañón, Miguel; Tardaguila, Javier

    2018-01-01

    Assessing water status and optimizing irrigation is of utmost importance in most winegrowing countries, as the grapevine vegetative growth, yield, and grape quality can be impaired under certain water stress situations. Conventional plant-based methods for water status monitoring are either destructive or time- and labor-demanding, and therefore unsuited to detecting the spatial variation of moisture content within a vineyard plot. In this context, this work aims at the development and comprehensive validation of a novel, non-destructive methodology to assess the vineyard water status distribution using on-the-go, contactless, near infrared (NIR) spectroscopy. Likewise, plant water status prediction models were built and intensely validated using the stem water potential (ψs) as the gold standard. Predictive models were developed making use of a vast number of measurements, acquired on 15 dates with diverse environmental conditions, at two different spatial scales, on both sides of vertical shoot positioned canopies, over two consecutive seasons. Different cross-validation strategies were also tested and compared. Predictive models built from east-acquired spectra yielded the best performance indicators in both seasons, with determination coefficient of prediction (RP2) ranging from 0.68 to 0.85, and sensitivity (expressed as prediction root mean square error) between 0.131 and 0.190 MPa, regardless of the spatial scale. These predictive models were implemented to map the spatial variability of the vineyard water status at two different dates, and provided useful, practical information to help delineate specific irrigation schedules. The performance and the large amount of data that this on-the-go spectral solution provides facilitate the exploitation of this non-destructive technology to monitor and map the vineyard water status variability with high spatial and temporal resolution, in the context of precision and sustainable viticulture. PMID:29441086

  19. Development and Validation of a New Methodology to Assess the Vineyard Water Status by On-the-Go Near Infrared Spectroscopy.

    PubMed

    Diago, Maria P; Fernández-Novales, Juan; Gutiérrez, Salvador; Marañón, Miguel; Tardaguila, Javier

    2018-01-01

    Assessing water status and optimizing irrigation is of utmost importance in most winegrowing countries, as the grapevine vegetative growth, yield, and grape quality can be impaired under certain water stress situations. Conventional plant-based methods for water status monitoring are either destructive or time- and labor-demanding, and therefore unsuited to detecting the spatial variation of moisture content within a vineyard plot. In this context, this work aims at the development and comprehensive validation of a novel, non-destructive methodology to assess the vineyard water status distribution using on-the-go, contactless, near infrared (NIR) spectroscopy. Likewise, plant water status prediction models were built and intensely validated using the stem water potential (ψs) as the gold standard. Predictive models were developed making use of a vast number of measurements, acquired on 15 dates with diverse environmental conditions, at two different spatial scales, on both sides of vertical shoot positioned canopies, over two consecutive seasons. Different cross-validation strategies were also tested and compared. Predictive models built from east-acquired spectra yielded the best performance indicators in both seasons, with determination coefficient of prediction (RP2) ranging from 0.68 to 0.85, and sensitivity (expressed as prediction root mean square error) between 0.131 and 0.190 MPa, regardless of the spatial scale. These predictive models were implemented to map the spatial variability of the vineyard water status at two different dates, and provided useful, practical information to help delineate specific irrigation schedules. The performance and the large amount of data that this on-the-go spectral solution provides facilitate the exploitation of this non-destructive technology to monitor and map the vineyard water status variability with high spatial and temporal resolution, in the context of precision and sustainable viticulture.
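
    Models of this kind (regressing stem water potential on NIR spectra) are commonly built with partial least squares regression and assessed by cross-validation; the sketch below shows that generic workflow with scikit-learn on synthetic spectra. It is not the authors' modelling pipeline, and the component count and data are assumptions.

```python
# Sketch: PLS regression of stem water potential on NIR spectra with
# cross-validated R^2 and RMSE. Spectra and targets are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(3)
n_samples, n_wavelengths = 300, 200
X = rng.normal(size=(n_samples, n_wavelengths))   # "spectra"
psi_stem = X[:, :5].sum(axis=1) * 0.05 - 1.0 + rng.normal(scale=0.1, size=n_samples)

pls = PLSRegression(n_components=8)
pred = cross_val_predict(pls, X, psi_stem,
                         cv=KFold(5, shuffle=True, random_state=0)).ravel()

print("R2:", round(r2_score(psi_stem, pred), 2))
print("RMSE (MPa):", round(mean_squared_error(psi_stem, pred) ** 0.5, 3))
```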

  20. Market projections of cellulose nanomaterial-enabled products-- Part 2: Volume estimates

    Treesearch

    John Cowie; E.M. (Ted) Bilek; Theodore H. Wegner; Jo Anne Shatkin

    2014-01-01

    Nanocellulose has enormous potential to provide an important materials platform in numerous product sectors. This study builds on previous work by the same authors in which likely high-volume, low-volume, and novel applications for cellulosic nanomaterials were identified. In particular, this study creates a transparent methodology and estimates the potential annual...

  1. Initial Development and Validation of the Global Citizenship Scale

    ERIC Educational Resources Information Center

    Morais, Duarte B.; Ogden, Anthony C.

    2011-01-01

    The purpose of this article is to report on the initial development of a theoretically grounded and empirically validated scale to measure global citizenship. The methodology employed is multi-faceted, including two expert face validity trials, extensive exploratory and confirmatory factor analyses with multiple datasets, and a series of three…

  2. The development and validation of the Bronchiectasis Health Questionnaire.

    PubMed

    Spinou, Arietta; Siegert, Richard J; Guan, Wei-Jie; Patel, Amit S; Gosker, Harry R; Lee, Kai K; Elston, Caroline; Loebinger, Michael R; Wilson, Robert; Garrod, Rachel; Birring, Surinder S

    2017-05-01

    Health-related quality of life or health status is significantly impaired in bronchiectasis. There is a paucity of brief, simple-to-use, disease-specific health status measures. The aim of this study was to develop and validate the Bronchiectasis Health Questionnaire (BHQ), a new health status measure that is brief and generates a single overall score. Patients with bronchiectasis were recruited from two outpatient clinics, during a clinically stable stage. The development of the questionnaire followed three phases: item generation and item reduction using Rasch analysis, validation, and repeatability testing. The BHQ was translated into 11 languages using standardised methodology. 206 patients with bronchiectasis completed a preliminary 65-item questionnaire. 55 items were removed due to redundancy or poor fit to the Rasch model. The final version of the BHQ consisted of 10 items. Internal consistency was good (Cronbach's α=0.85). Convergent validity of the BHQ with the St George's Respiratory Questionnaire was high (r= -0.82; p<0.001) and moderate with lung function (forced expiratory volume in 1 s % predicted r= -0.27; p=0.001). There was a significant association between BHQ scores and number of exacerbations of bronchiectasis in the last 12 months (p<0.001), hospital admissions (p=0.001) and computed tomography scan bronchiectasis pulmonary lobe counts (p<0.001). BHQ scores were significantly worse in patients with sputum bacterial colonisation versus no colonisation (p=0.048). The BHQ was highly repeatable after 2 weeks (intraclass correlation coefficient 0.89). The BHQ is a brief, valid and repeatable, self-completed health status questionnaire for bronchiectasis that generates a single total score. It can be used in the clinic to assess bronchiectasis from the patient's perspective. Copyright ©ERS 2017.
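
    The internal-consistency figure quoted above is Cronbach's alpha; a minimal sketch of its computation for a 10-item questionnaire is shown below, using randomly generated item responses rather than BHQ data.

```python
# Sketch: Cronbach's alpha for a k-item questionnaire (rows = respondents,
# columns = items). Responses here are random and purely illustrative.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(size=(206, 1))                           # "true" health status
responses = latent + rng.normal(scale=0.8, size=(206, 10))   # 10 correlated items
print(round(cronbach_alpha(responses), 2))
```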

  3. Validating the Octave Allegro Information Systems Risk Assessment Methodology: A Case Study

    ERIC Educational Resources Information Center

    Keating, Corland G.

    2014-01-01

    An information system (IS) risk assessment is an important part of any successful security management strategy. Risk assessments help organizations to identify mission-critical IS assets and prioritize risk mitigation efforts. Many risk assessment methodologies, however, are complex and can only be completed successfully by highly qualified and…

  4. Methodological Issues in Measuring the Development of Character

    ERIC Educational Resources Information Center

    Card, Noel A.

    2017-01-01

    In this article I provide an overview of the methodological issues involved in measuring constructs relevant to character development and education. I begin with a nontechnical overview of the 3 fundamental psychometric properties of measurement: reliability, validity, and equivalence. Developing and evaluating measures to ensure evidence of all 3…

  5. [Methodological quality of an article on the treatment of gastric cancer adopted as protocol by some Chilean hospitals].

    PubMed

    Manterola, Carlos; Torres, Rodrigo; Burgos, Luis; Vial, Manuel; Pineda, Viviana

    2006-07-01

    Surgery is a curative treatment for gastric cancer (GC). As relapse is frequent, adjuvant therapies such as postoperative chemoradiotherapy have been tried. In Chile, some hospitals adopted Macdonald's study as a protocol for the treatment of GC. The aim was to determine the methodological quality and the internal and external validity of the Macdonald study. Three instruments were applied that assess methodological quality. A critical appraisal was done and the internal and external validity of the methodological quality was analyzed with two scales: MINCIR (Methodology and Research in Surgery), valid for therapy studies, and CONSORT (Consolidated Standards of Reporting Trials), valid for randomized controlled trials (RCT). Guides and scales were applied by 5 researchers with training in clinical epidemiology. The reader's guide verified that the Macdonald study was not directed to answer a clearly defined question. There was random assignment, but the method used is not described and the patients were not considered until the end of the study (36% of the group with surgery plus chemoradiotherapy did not complete treatment). The MINCIR scale confirmed a multicentric RCT, not blinded, with an unclear randomized sequence, erroneous sample size estimation, vague objectives and no exclusion criteria. The CONSORT system revealed the lack of a working hypothesis and specific objectives as well as an absence of exclusion criteria and identification of the primary variable, an imprecise estimation of sample size, ambiguities in the randomization process, no blinding, an absence of statistical adjustment and the omission of a subgroup analysis. The instruments applied demonstrated methodological shortcomings that compromise the internal and external validity of the study.

  6. Validity and reliability of total body volume and relative body fat mass from a 3-dimensional photonic body surface scanner

    PubMed Central

    Mähler, Anja; Boschmann, Michael; Jeran, Stephanie

    2017-01-01

    Objective Three-dimensional photonic body surface scanners (3DPS) feature a tool to estimate total body volume (BV) from 3D images of the human body, from which the relative body fat mass (%BF) can be calculated. However, information on validity and reliability of these measurements for application in epidemiological studies is limited. Methods Validity was assessed among 32 participants (men, 50%) aged 20–58 years. BV and %BF were assessed using a 3DPS (VitusSmart XXL) and air displacement plethysmography (ADP) with a BOD POD® device using equations by Siri and Brozek. Three scans were obtained per participant (standard, relaxed, exhaled scan). Validity was evaluated based on the agreement of 3DPS with ADP using Bland Altman plots, correlation analysis and Wilcoxon signed ranks test for paired samples. Reliability was investigated in a separate sample of 18 participants (men, 67%) aged 25–66 years using intraclass correlation coefficients (ICC) based on two repeated 3DPS measurements four weeks apart. Results Mean BV and %BF were higher using 3DPS compared to ADP, (3DPS-ADP BV difference 1.1 ± 0.9 L, p<0.01; %BF difference 7.0 ± 5.6, p<0.01), yet the disagreement was not associated with gender, age or body mass index (BMI). Reliability was excellent for 3DPS BV (ICC, 0.998) and good for 3DPS %BF (ICC, 0.982). Results were similar for the standard scan and the relaxed scan but somewhat weaker for the exhaled scan. Conclusions Although BV and %BF are higher than ADP measurements, our data indicate good validity and reliability for an application of 3DPS in epidemiological studies. PMID:28672039

  7. Validity and reliability of total body volume and relative body fat mass from a 3-dimensional photonic body surface scanner.

    PubMed

    Adler, Carolin; Steinbrecher, Astrid; Jaeschke, Lina; Mähler, Anja; Boschmann, Michael; Jeran, Stephanie; Pischon, Tobias

    2017-01-01

    Three-dimensional photonic body surface scanners (3DPS) feature a tool to estimate total body volume (BV) from 3D images of the human body, from which the relative body fat mass (%BF) can be calculated. However, information on validity and reliability of these measurements for application in epidemiological studies is limited. Validity was assessed among 32 participants (men, 50%) aged 20-58 years. BV and %BF were assessed using a 3DPS (VitusSmart XXL) and air displacement plethysmography (ADP) with a BOD POD® device using equations by Siri and Brozek. Three scans were obtained per participant (standard, relaxed, exhaled scan). Validity was evaluated based on the agreement of 3DPS with ADP using Bland Altman plots, correlation analysis and Wilcoxon signed ranks test for paired samples. Reliability was investigated in a separate sample of 18 participants (men, 67%) aged 25-66 years using intraclass correlation coefficients (ICC) based on two repeated 3DPS measurements four weeks apart. Mean BV and %BF were higher using 3DPS compared to ADP, (3DPS-ADP BV difference 1.1 ± 0.9 L, p<0.01; %BF difference 7.0 ± 5.6, p<0.01), yet the disagreement was not associated with gender, age or body mass index (BMI). Reliability was excellent for 3DPS BV (ICC, 0.998) and good for 3DPS %BF (ICC, 0.982). Results were similar for the standard scan and the relaxed scan but somewhat weaker for the exhaled scan. Although BV and %BF are higher than ADP measurements, our data indicate good validity and reliability for an application of 3DPS in epidemiological studies.
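
    Agreement analyses like the Bland-Altman comparison used in these two records reduce to a bias (mean difference) and 95% limits of agreement; a minimal sketch with hypothetical body-volume values follows.

```python
# Sketch: Bland-Altman bias and 95% limits of agreement between two
# body-volume methods (values in litres, hypothetical).
import numpy as np

bv_3dps = np.array([68.2, 75.4, 59.9, 82.1, 71.3, 66.0])
bv_adp  = np.array([67.0, 74.6, 58.7, 80.9, 70.4, 65.1])

diff = bv_3dps - bv_adp
bias = diff.mean()                          # mean difference (3DPS - ADP)
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias = {bias:.2f} L, limits of agreement = "
      f"({loa[0]:.2f}, {loa[1]:.2f}) L")
```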

  8. Some Findings Concerning Requirements in Agile Methodologies

    NASA Astrophysics Data System (ADS)

    Rodríguez, Pilar; Yagüe, Agustín; Alarcón, Pedro P.; Garbajosa, Juan

    Agile methods have appeared as an attractive alternative to conventional methodologies. These methods try to reduce the time to market and, indirectly, the cost of the product through flexible development and deep customer involvement. The processes related to requirements have been extensively studied in the literature, in most cases within the frame of conventional methods. However, conclusions drawn for conventional methodologies are not necessarily valid for Agile; on some issues, conventional and Agile processes are radically different. As recent surveys report, inadequate project requirements are one of the most problematic issues in agile approaches, and a better understanding of this is needed. This paper describes some findings concerning requirements activities in a project developed under an agile methodology. The project was intended to evolve an existing product and, therefore, some background information was available. The major difficulties encountered were related to non-functional needs and the management of requirements dependencies.

  9. Prediction of resource volumes at untested locations using simple local prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
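
    A much-simplified sketch of the combined idea, leave-one-out (jackknife-style) prediction errors for a local predictor followed by bootstrap resampling of those errors to bound the regional total, is given below. The nearest-neighbour predictor, the synthetic cell data, and the error-resampling scheme are illustrative assumptions, not the USGS implementation.

```python
# Much-simplified sketch of the idea: leave-one-out (jackknife-style)
# prediction errors for cell gas volumes, then bootstrap resampling of
# those errors to put confidence bounds on the predicted regional total.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(11)
coords = rng.uniform(0, 10, size=(80, 2))                  # drilled-cell locations
vols = 5 + coords[:, 0] + rng.normal(scale=1.0, size=80)   # observed volumes

# Leave-one-out errors of a local (nearest-neighbour) predictor.
errors = []
for i in range(len(vols)):
    mask = np.arange(len(vols)) != i
    model = KNeighborsRegressor(n_neighbors=5).fit(coords[mask], vols[mask])
    errors.append(vols[i] - model.predict(coords[i:i + 1])[0])
errors = np.array(errors)

# Predict undrilled target cells and bootstrap the total volume.
targets = rng.uniform(0, 10, size=(40, 2))
model = KNeighborsRegressor(n_neighbors=5).fit(coords, vols)
point_total = model.predict(targets).sum()

boot_totals = [point_total + rng.choice(errors, size=len(targets), replace=True).sum()
               for _ in range(2000)]
lo, hi = np.percentile(boot_totals, [2.5, 97.5])
print(f"total: {point_total:.1f}, 95% bounds: ({lo:.1f}, {hi:.1f})")
```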

  10. Vending machine assessment methodology. A systematic review.

    PubMed

    Matthews, Melissa A; Horacek, Tanya M

    2015-07-01

    The nutritional quality of food and beverage products sold in vending machines has been implicated as a contributing factor to the development of an obesogenic food environment. How comprehensive, reliable, and valid are the current assessment tools for vending machines to support or refute these claims? A systematic review was conducted to summarize, compare, and evaluate the current methodologies and available tools for vending machine assessment. A total of 24 relevant research studies published between 1981 and 2013 met inclusion criteria for this review. The methodological variables reviewed in this study include assessment tool type, study location, machine accessibility, product availability, healthfulness criteria, portion size, price, product promotion, and quality of scientific practice. There were wide variations in the depth of the assessment methodologies and product healthfulness criteria utilized among the reviewed studies. Of the reviewed studies, 39% evaluated machine accessibility, 91% evaluated product availability, 96% established healthfulness criteria, 70% evaluated portion size, 48% evaluated price, 52% evaluated product promotion, and 22% evaluated the quality of scientific practice. Of all reviewed articles, 87% reached conclusions that provided insight into the healthfulness of vended products and/or vending environment. Product healthfulness criteria and complexity for snack and beverage products was also found to be variable between the reviewed studies. These findings make it difficult to compare results between studies. A universal, valid, and reliable vending machine assessment tool that is comprehensive yet user-friendly is recommended. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Definition and applications of a versatile chemical pollution footprint methodology.

    PubMed

    Zijp, Michiel C; Posthuma, Leo; van de Meent, Dik

    2014-09-16

    Because of the great variety in behavior and modes of action of chemicals, impact assessment of multiple substances is complex, as is the communication of its results. Given calls for cumulative impact assessments, we developed a methodology aimed at expressing the expected cumulative impacts of mixtures of chemicals on aquatic ecosystems for a region and at presenting these results as a chemical pollution footprint, or, in short, a chemical footprint. Setting and using a boundary for chemical pollution is part of the methodology. Two case studies were executed to test and illustrate the methodology. The first case illustrates that the production and use of organic substances in Europe, judged against the European water volume, stays within the currently set policy boundaries for chemical pollution. The second case shows that the use of pesticides in Northwestern Europe, judged against the regional water volume, has exceeded the set boundaries, while showing a declining trend over time. The impact of mixtures of substances in the environment could be expressed as a chemical footprint, and the relative contribution of substances to that footprint could be evaluated. These features are a novel type of information to support risk management, by helping to prioritize management among chemicals and environmental compartments.

  12. Coyote Papers: The University of Arizona Working Papers in Linguistics, Volume 11. Special Volume on Native American Languages.

    ERIC Educational Resources Information Center

    Weinberg, Jessica P., Ed.; O'Bryan, Erin L., Ed.; Moll, Laura A., Ed.; Haugan, Jason D., Ed.

    The five papers included in this volume approach the study of American Indian languages from a diverse array of methodological and theoretical approaches to linguistics. Two papers focus on approaches that come from the applied linguistics tradition, emphasizing ethnolinguistics and discourse analysis: Sonya Bird's paper "A Cross Cultural…

  13. Validated analytical methodology for the simultaneous determination of a wide range of pesticides in human blood using GC-MS/MS and LC-ESI/MS/MS and its application in two poisoning cases.

    PubMed

    Luzardo, Octavio P; Almeida-González, Maira; Ruiz-Suárez, Norberto; Zumbado, Manuel; Henríquez-Hernández, Luis A; Meilán, María José; Camacho, María; Boada, Luis D

    2015-09-01

    Pesticides are frequently responsible for human poisoning, and often the information on the substance involved is lacking. The great variety of pesticides that could be responsible for intoxication requires powerful and versatile analytical methodologies that allow the identification of the unknown toxic substance. Here we developed a methodology for the simultaneous identification and quantification of 109 highly toxic pesticides in human blood. The application of this analytical scheme would help to minimize the cost of this type of chemical identification while maximizing the chances of identifying the pesticide involved. In the methodology presented here, we use liquid-liquid extraction, followed by a single purification step, and quantitation of analytes by a combination of liquid and gas chromatography, both coupled to triple quadrupole mass spectrometry operated in multiple reaction monitoring mode. The methodology has been fully validated, and its applicability has been demonstrated in two recent cases involving one self-poisoning fatality and one non-fatal homicidal attempt. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.

  14. Is Ultrasound a Valid and Reliable Imaging Modality for Airway Evaluation?: An Observational Computed Tomographic Validation Study Using Submandibular Scanning of the Mouth and Oropharynx.

    PubMed

    Abdallah, Faraj W; Yu, Eugene; Cholvisudhi, Phantila; Niazi, Ahtsham U; Chin, Ki J; Abbas, Sherif; Chan, Vincent W

    2017-01-01

    Ultrasound (US) imaging of the airway may be useful in predicting difficulty of airway management (DAM); but its use is limited by lack of proof of its validity and reliability. We sought to validate US imaging of the airway by comparison to CT-scan, and to assess its inter- and intra-observer reliability. We used submandibular sonographic imaging of the mouth and oropharynx to examine how well the ratio of tongue thickness to oral cavity height correlates with the ratio of tongue volume to oral cavity volume, an established tomographic measure of DAM. A cohort of 34 patients undergoing CT-scan was recruited. Study standardized assessments included CT-measured ratios of tongue volume to oropharyngeal cavity volume; tongue thickness to oral cavity height; and US-measured ratio of tongue thickness to oral cavity height. Two sonographers independently performed US imaging of the airway before and after CT-scan. Our findings indicate that the US-measured ratio of tongue thickness to oral cavity height highly correlates with the CT-measured ratio of tongue volume to oral cavity volume. US measurements also demonstrated strong inter- and intra-observer reliability. This study suggests that US is a valid and reliable tool for imaging the oral and oropharyngeal parts of the airway, as well as for measuring the volumetric relationship between the tongue and oral cavity, and may therefore be a useful predictor of DAM. © 2016 by the American Institute of Ultrasound in Medicine.

  15. Methodological challenges when doing research that includes ethnic minorities: a scoping review.

    PubMed

    Morville, Anne-Le; Erlandsson, Lena-Karin

    2016-11-01

    There are challenging methodological issues in obtaining valid and reliable results on which to base occupational therapy interventions for ethnic minorities. The aim of this scoping review is to describe the methodological problems in occupational therapy research when ethnic minorities are included. A thorough literature search yielded 21 articles obtained from the scientific databases PubMed, Cinahl, Web of Science and PsychInfo. Analysis followed Arksey and O'Malley's framework for scoping reviews, applying content analysis. The results showed methodological issues across the entire research process, from defining and recruiting samples, conceptual understanding, and the lack of appropriate instruments, to data collection using interpreters and data analysis. To avoid excluding ethnic minorities from adequate occupational therapy research and interventions, methods need to be developed for the entire research process. This is a costly and time-consuming process, but the results will be valid and reliable, and therefore more applicable in clinical practice.

  16. Validation of real-time three-dimensional echocardiography for quantifying left ventricular volumes in the presence of a left ventricular aneurysm: in vitro and in vivo studies

    NASA Technical Reports Server (NTRS)

    Qin, J. X.; Jones, M.; Shiota, T.; Greenberg, N. L.; Tsujino, H.; Firstenberg, M. S.; Gupta, P. C.; Zetts, A. D.; Xu, Y.; Ping Sun, J.

    2000-01-01

    OBJECTIVES: To validate the accuracy of real-time three-dimensional echocardiography (RT3DE) for quantifying aneurysmal left ventricular (LV) volumes. BACKGROUND: Conventional two-dimensional echocardiography (2DE) has limitations when applied for quantification of LV volumes in patients with LV aneurysms. METHODS: Seven aneurysmal balloons, 15 sheep (5 with chronic LV aneurysms and 10 without LV aneurysms) during 60 different hemodynamic conditions and 29 patients (13 with chronic LV aneurysms and 16 with normal LV) underwent RT3DE and 2DE. Electromagnetic flow meters and magnetic resonance imaging (MRI) served as reference standards in the animals and in the patients, respectively. Rotated apical six-plane method with multiplanar Simpson's rule and apical biplane Simpson's rule were used to determine LV volumes by RT3DE and 2DE, respectively. RESULTS: Both RT3DE and 2DE correlated well with actual volumes for aneurysmal balloons. However, a significantly smaller mean difference (MD) was found between RT3DE and actual volumes (-7 ml for RT3DE vs. 22 ml for 2DE, p = 0.0002). Excellent correlation and agreement between RT3DE and electromagnetic flow meters for LV stroke volumes for animals with aneurysms were observed, while 2DE showed lesser correlation and agreement (r = 0.97, MD = -1.0 ml vs. r = 0.76, MD = 4.4 ml). In patients with LV aneurysms, better correlation and agreement between RT3DE and MRI for LV volumes were obtained (r = 0.99, MD = -28 ml) than between 2DE and MRI (r = 0.91, MD = -49 ml). CONCLUSIONS: For geometrically asymmetric LVs associated with ventricular aneurysms, RT3DE can accurately quantify LV volumes.
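
    Both volume methods above rest on Simpson's rule (the method of discs): the cavity is sliced perpendicular to the long axis and slice volumes are summed. A minimal sketch with hypothetical slice areas follows.

```python
# Sketch: LV volume by the method of discs (Simpson's rule): the cavity is
# cut into parallel slices of thickness h and the slice volumes are summed.
# Slice areas below are hypothetical, in cm^2.
import numpy as np

slice_areas_cm2 = np.array([2.1, 5.8, 9.4, 11.6, 12.3, 11.0, 8.2, 4.5, 1.3])
slice_thickness_cm = 0.8  # long-axis length divided by number of slices

lv_volume_ml = float(np.sum(slice_areas_cm2) * slice_thickness_cm)  # 1 cm^3 = 1 mL
print(f"LV volume = {lv_volume_ml:.1f} mL")
```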

  17. A methodology for identification and control of electro-mechanical actuators

    PubMed Central

    Tutunji, Tarek A.; Saleem, Ashraf

    2015-01-01

    Mechatronic systems are fully-integrated engineering systems that are composed of mechanical, electronic, and computer control sub-systems. These integrated systems use electro-mechanical actuators to cause the required motion. Therefore, the design of appropriate controllers for these actuators is an essential step in mechatronic system design. In this paper, a three-stage methodology for real-time identification and control of electro-mechanical actuator plants is presented, tested, and validated. First, identification models are constructed from experimental data to approximate the plants’ response. Second, the identified model is used in a simulation environment for the purpose of designing a suitable controller. Finally, the designed controller is applied and tested on the real plant through a Hardware-in-the-Loop (HIL) environment. The described three-stage methodology provides the following practical contributions: • Establishes an easy-to-follow methodology for controller design of electro-mechanical actuators. • Combines off-line and on-line controller design for practical performance. • Modifies the HIL concept by using physical plants with computer control (rather than virtual plants with physical controllers). Simulated and experimental results for two case studies, induction motor and vehicle drive system, are presented in order to validate the proposed methodology. These results showed that electromechanical actuators can be identified and controlled using an easy-to-duplicate and flexible procedure. PMID:26150992

  18. A methodology for identification and control of electro-mechanical actuators.

    PubMed

    Tutunji, Tarek A; Saleem, Ashraf

    2015-01-01

    Mechatronic systems are fully-integrated engineering systems that are composed of mechanical, electronic, and computer control sub-systems. These integrated systems use electro-mechanical actuators to cause the required motion. Therefore, the design of appropriate controllers for these actuators is an essential step in mechatronic system design. In this paper, a three-stage methodology for real-time identification and control of electro-mechanical actuator plants is presented, tested, and validated. First, identification models are constructed from experimental data to approximate the plants' response. Second, the identified model is used in a simulation environment for the purpose of designing a suitable controller. Finally, the designed controller is applied and tested on the real plant through a Hardware-in-the-Loop (HIL) environment. The described three-stage methodology provides the following practical contributions: • Establishes an easy-to-follow methodology for controller design of electro-mechanical actuators. • Combines off-line and on-line controller design for practical performance. • Modifies the HIL concept by using physical plants with computer control (rather than virtual plants with physical controllers). Simulated and experimental results for two case studies, induction motor and vehicle drive system, are presented in order to validate the proposed methodology. These results showed that electromechanical actuators can be identified and controlled using an easy-to-duplicate and flexible procedure.
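
    The identification stage of the methodology fits a model to measured input/output data; one common concrete choice is a discrete-time ARX model estimated by least squares, sketched below on a synthetic second-order plant. The model order and data are assumptions for illustration, not the authors' exact identification scheme.

```python
# Sketch of stage 1 (identification): fit a 2nd-order ARX model
#   y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
# to input/output data by least squares. Data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
N = 500
u = rng.normal(size=N)                      # excitation signal
y = np.zeros(N)
for k in range(2, N):                       # "true" plant generating the data
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.2 * u[k-2] \
           + 0.02 * rng.normal()

# Build the least-squares regression problem Phi * theta = Y.
Phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1], u[:-2]])
Y = y[2:]
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
a1, a2, b1, b2 = theta
print(f"a1={a1:.3f} a2={a2:.3f} b1={b1:.3f} b2={b2:.3f}")  # approx -1.5, 0.7, 0.5, 0.2
```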

  19. Passenger rail vehicle safety assessment methodology. Volume II, Detailed analyses and simulation results.

    DOT National Transportation Integrated Search

    2000-04-01

    This report presents detailed analytic tools and results on dynamic response which are used to develop the safe dynamic performance limits of commuter passenger vehicles. The methodology consists of determining the critical parameters and characteris...

  20. Exploring the Benefits of Respite Services to Family Caregivers: Methodological Issues and Current Findings

    PubMed Central

    Zarit, Steven H.; Liu, Yin; Bangerter, Lauren R.; Rovine, Michael J.

    2017-01-01

    Objectives There is growing emphasis on empirical validation of the efficacy of community-based services for older people and their families, but research on services such as respite care faces methodological challenges that have limited the growth of outcome studies. We identify problems associated with the usual research approaches for studying respite care, with the goal of stimulating use of novel and more appropriate research designs that can lead to improved studies of community-based services. Method Using the concept of research validity, we evaluate the methodological approaches in the current literature on respite services, including adult day services, in-home respite and overnight respite. Results Although randomized control trials (RCTs) are possible in community settings, validity is compromised by practical limitations of randomization and other problems. Quasi-experimental and interrupted time series designs offer comparable validity to RCTs and can be implemented effectively in community settings. Conclusion An emphasis on RCTs by funders and researchers is not supported by scientific evidence. Alternative designs can lead to development of a valid body of research on community services such as respite. PMID:26729467

  1. Exploring the benefits of respite services to family caregivers: methodological issues and current findings.

    PubMed

    Zarit, Steven H; Bangerter, Lauren R; Liu, Yin; Rovine, Michael J

    2017-03-01

    There is growing emphasis on empirical validation of the efficacy of community-based services for older people and their families, but research on services such as respite care faces methodological challenges that have limited the growth of outcome studies. We identify problems associated with the usual research approaches for studying respite care, with the goal of stimulating use of novel and more appropriate research designs that can lead to improved studies of community-based services. Using the concept of research validity, we evaluate the methodological approaches in the current literature on respite services, including adult day services, in-home respite and overnight respite. Although randomized control trials (RCTs) are possible in community settings, validity is compromised by practical limitations of randomization and other problems. Quasi-experimental and interrupted time series designs offer comparable validity to RCTs and can be implemented effectively in community settings. An emphasis on RCTs by funders and researchers is not supported by scientific evidence. Alternative designs can lead to development of a valid body of research on community services such as respite.

  2. Quantifying biopsychosocial aspects in everyday contexts: an integrative methodological approach from the behavioral sciences

    PubMed Central

    Portell, Mariona; Anguera, M Teresa; Hernández-Mendo, Antonio; Jonsson, Gudberg K

    2015-01-01

    Contextual factors are crucial for evaluative research in psychology, as they provide insights into what works, for whom, in what circumstances, in what respects, and why. Studying behavior in context, however, poses numerous methodological challenges. Although a comprehensive framework for classifying methods seeking to quantify biopsychosocial aspects in everyday contexts was recently proposed, this framework does not contemplate contributions from observational methodology. The aim of this paper is to justify and propose a more general framework that includes observational methodology approaches. Our analysis is rooted in two general concepts: ecological validity and methodological complementarity. We performed a narrative review of the literature on research methods and techniques for studying daily life and describe their shared properties and requirements (collection of data in real time, on repeated occasions, and in natural settings) and classification criteria (eg, variables of interest and level of participant involvement in the data collection process). We provide several examples that illustrate why, despite their higher costs, studies of behavior and experience in everyday contexts offer insights that complement findings provided by other methodological approaches. We urge that observational methodology be included in classifications of research methods and techniques for studying everyday behavior and advocate a renewed commitment to prioritizing ecological validity in behavioral research seeking to quantify biopsychosocial aspects. PMID:26089708

  3. Validation of source approval of HMA surface mix aggregate : final report.

    DOT National Transportation Integrated Search

    2016-04-01

    The main focus of this research project was to develop methodologies for the validation of source approval of hot : mix asphalt surface mix aggregate. In order to further enhance the validation process, a secondary focus was also to : create a spectr...

  4. Validity of VO(2 max) in predicting blood volume: implications for the effect of fitness on aging

    NASA Technical Reports Server (NTRS)

    Convertino, V. A.; Ludwig, D. A.

    2000-01-01

    A multiple regression model was constructed to investigate the premise that blood volume (BV) could be predicted using several anthropometric variables, age, and maximal oxygen uptake (VO(2 max)). To test this hypothesis, age, calculated body surface area (height/weight composite), percent body fat (hydrostatic weight), and VO(2 max) were regressed on to BV using data obtained from 66 normal healthy men. Results from the evaluation of the full model indicated that the most parsimonious result was obtained when age and VO(2 max) were regressed on BV expressed per kilogram body weight. The full model accounted for 52% of the total variance in BV per kilogram body weight. Both age and VO(2 max) were related to BV in the positive direction. Percent body fat contributed <1% to the explained variance in BV when expressed in absolute BV (ml) or as BV per kilogram body weight. When the model was cross validated on 41 new subjects and BV per kilogram body weight was reexpressed as raw BV, the results indicated that the statistical model would be stable under cross validation (e.g., predictive applications) with an accuracy of +/- 1,200 ml at 95% confidence. Our results support the hypothesis that BV is an increasing function of aerobic fitness and to a lesser extent the age of the subject. The results may have implication as to a mechanism by which aerobic fitness and activity may be protective against reduced BV associated with aging.
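
    The regression-and-cross-validation procedure described above can be mimicked with a small sketch: fit blood volume per kilogram on age and VO(2 max) in a development sample, then check prediction error on a held-out sample. All data and coefficients below are synthetic and purely illustrative.

```python
# Sketch: regress blood volume per kg body weight on age and VO2max, then
# check predictive accuracy on a held-out "cross-validation" sample.
# All data are synthetic; coefficients are not those of the study.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

def make_sample(n):
    age = rng.uniform(20, 60, n)
    vo2max = rng.uniform(30, 65, n)               # ml O2/kg/min
    bv_per_kg = 55 + 0.1 * age + 0.35 * vo2max + rng.normal(scale=4, size=n)
    return np.column_stack([age, vo2max]), bv_per_kg

X_train, y_train = make_sample(66)    # development sample
X_test, y_test = make_sample(41)      # cross-validation sample

model = LinearRegression().fit(X_train, y_train)
resid = y_test - model.predict(X_test)
print("R2 (train):", round(model.score(X_train, y_train), 2))
print("95% prediction error band (ml/kg): +/-", round(1.96 * resid.std(ddof=1), 1))
```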

  5. Reevaluation of tephra volumes for the 1982 eruption of El Chichón volcano, Mexico

    NASA Astrophysics Data System (ADS)

    Nathenson, M.; Fierstein, J.

    2012-12-01

    In a recent numerical simulation of tephra transport and deposition for the 1982 eruption, Bonasia et al. (2012) used masses for the tephra layers (A-1, B, and C) based on the volume data of Carey and Sigurdsson (1986) calculated by the methodology of Rose et al. (1973). For reasons not clear, using the same methodology we obtained volumes for layers A-1 and B much less than those previously reported. For example, for layer A-1, Carey and Sigurdsson (1986) reported a volume of 0.60 km3, whereas we obtain a volume of 0.23 km3. Moreover, applying the more recent methodology of tephra-volume calculation (Pyle, 1989; Fierstein and Nathenson, 1992) and using the isopachs maps in Carey and Sigurdsson (1986), we calculate a total tephra volume of 0.52 km3 (A-1, 0.135; B, 0.125; and C, 0.26 km3). In contrast, Carey and Sigurdsson (1986) report a much larger total volume of 2.19 km3. Such disagreement not only reflects the differing methodologies, but we propose that the volumes calculated with the methodology of Pyle and of Fierstein and Nathenson—involving the use of straight lines on a log thickness versus square root of area plot—better represent the actual fall deposits. After measuring the areas for the isomass contours for the HAZMAPP and FALL3D simulations in Bonasia et al. (2012), we applied the Pyle-Fierstein and Nathenson methodology to calculate the tephra masses deposited on the ground. These masses from five of the simulations range from 70% to 110% of those reported by Carey and Sigurdsson (1986), whereas that for layer B in the HAZMAP calculation is 160%. In the Bonasia et al. (2012) study, the mass erupted by the volcano is a critical input used in the simulation to produce an ash cloud that deposits tephra on the ground. Masses on the ground (as calculated by us
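
    The Pyle / Fierstein-and-Nathenson approach referred to above fits straight-line segments on a plot of log thickness versus the square root of isopach area; for a single exponential segment T = T0*exp(-k*sqrt(A)), the integrated volume is 2*T0/k^2. The sketch below fits that line and evaluates the volume for a set of hypothetical isopachs (not El Chichón data).

```python
# Sketch of the exponential-thinning volume estimate used in the
# Pyle / Fierstein-and-Nathenson methodology: fit ln(T) against sqrt(A)
# for the isopachs, then V = 2*T0/k^2 for a single straight-line segment.
# The isopach thicknesses/areas below are hypothetical.
import numpy as np

thickness_m = np.array([1.0, 0.50, 0.20, 0.10, 0.05])    # isopach thickness
area_km2 = np.array([12.0, 55.0, 210.0, 520.0, 1200.0])  # area enclosed by isopach

sqrt_area = np.sqrt(area_km2)
slope, intercept = np.polyfit(sqrt_area, np.log(thickness_m), 1)
k = -slope                         # thinning rate (1/km)
t0_km = np.exp(intercept) / 1000   # extrapolated thickness at sqrt(A)=0, in km

volume_km3 = 2.0 * t0_km / k**2
print(f"T0 = {t0_km*1000:.2f} m, k = {k:.4f} /km, volume = {volume_km3:.3f} km^3")
```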

  6. A Methodology for Anatomic Ultrasound Image Diagnostic Quality Assessment.

    PubMed

    Hemmsen, Martin Christian; Lange, Theis; Brandt, Andreas Hjelm; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt

    2017-01-01

    This paper discusses the methods for the assessment of ultrasound image quality based on our experiences with evaluating new methods for anatomic imaging. It presents a methodology to ensure a fair assessment between competing imaging methods using clinically relevant evaluations. The methodology is valuable in the continuing process of method optimization and guided development of new imaging methods. It includes a three-phase study plan covering initial prototype development through clinical assessment. Recommendations for the clinical assessment protocol, software, and statistical analysis are presented. Earlier uses of the methodology have shown that it ensures validity of the assessment, as it separates the influences of developer, investigator, and assessor once a research protocol has been established. This separation reduces confounding influence from the developer on the result, so that the clinical value is properly revealed. This paper exemplifies the methodology using recent studies of synthetic aperture sequential beamforming tissue harmonic imaging.

  7. [A Methodological Quality Assessment of South Korean Nursing Research using Structural Equation Modeling in South Korea].

    PubMed

    Kim, Jung-Hee; Shin, Sujin; Park, Jin-Hwa

    2015-04-01

    The purpose of this study was to evaluate the methodological quality of nursing studies using structural equation modeling in Korea. The KISS, DBPIA, and National Assembly Library databases were searched up to March 2014 using the MeSH terms 'nursing', 'structure', and 'model'. A total of 152 studies were screened. After removal of duplicates and non-relevant titles, 61 papers were read in full. Of the sixty-one articles retrieved, 14 were published between 1992 and 2000, 27 between 2001 and 2010, and 20 between 2011 and March 2014. The methodological quality of the reviewed studies varied considerably. The findings of this study suggest that more rigorous attention is needed to theoretical identification, the two-indicator rule, sample distribution, treatment of missing values, mediator effects, discriminant and convergent validity, post hoc model modification, equivalent models, and alternative models. Further research with robust, consistent methodological designs, from model identification to model respecification, is needed to improve the validity of the research.

  8. A combined QC methodology in Ebro Delta HF radar system: real time web monitoring of diagnostic parameters and offline validation of current data

    NASA Astrophysics Data System (ADS)

    Lorente, Pablo; Piedracoba, Silvia; Soto-Navarro, Javier; Ruiz, Maria Isabel; Alvarez Fanjul, Enrique

    2015-04-01

    Over recent years, special attention has been focused on the development of protocols for near real-time quality control (QC) of HF radar derived current measurements. However, no standardized QC methodology has been agreed worldwide to date, although a number of valuable international initiatives have been launched. In this context, Puertos del Estado (PdE) aims to implement a fully operational HF radar network with four different Codar SeaSonde HF radar systems by means of: - The development of a robust best-practices protocol for data processing and QC procedures to routinely monitor site performance under a wide variety of ocean conditions. - The execution of validation work with in-situ observations to assess the accuracy of HF radar-derived current measurements. The main goal of the present work is to show this combined methodology for the specific case of the Ebro HF radar (although it is easily expandable to the rest of the PdE radar systems), deployed to manage the Ebro River deltaic area and promote the conservation of an important aquatic ecosystem exposed to severe erosion and reshaping. To this aim, a web interface has been developed to efficiently monitor in real time the evolution of several diagnostic parameters provided by the manufacturer (CODAR) and used as indicators of HF radar system health. This interface, updated automatically every hour, examines site performance on different time bases in terms of: - Hardware parameters: power and temperature. - Radial parameters, among others: Signal-to-Noise Ratio (SNR), number of radial vectors provided per time step, maximum radial range, and bearing. - Total uncertainty metrics provided by CODAR: zonal and meridional standard deviations and the covariance between both components. - Additionally, a widget embedded in the web interface executes queries against the PdE database, providing the chance to compare current time series observed by the Tarragona buoy (located within the Ebro HF radar spatial domain) and

  9. Challenges in Rotorcraft Acoustic Flight Prediction and Validation

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.

    2003-01-01

    Challenges associated with rotorcraft acoustic flight prediction and validation are examined. First, an outline of a state-of-the-art rotorcraft aeroacoustic prediction methodology is presented. Components including rotorcraft aeromechanics, high resolution reconstruction, and rotorcraft acoustic prediction are discussed. Next, to illustrate challenges and issues involved, a case study is presented in which an analysis of flight data from a specific XV-15 tiltrotor acoustic flight test is discussed in detail. Issues related to validation of methodologies using flight test data are discussed. Primary flight parameters such as velocity, altitude, and attitude are discussed and compared for repeated flight conditions. Other measured steady state flight conditions are examined for consistency and steadiness. A representative example prediction is presented and suggestions are made for future research.

  10. Methodological Issues in Research on Web-Based Behavioral Interventions

    PubMed Central

    Danaher, Brian G; Seeley, John R

    2013-01-01

    Background Web-based behavioral intervention research is rapidly growing. Purpose We review methodological issues shared across Web-based intervention research to help inform future research in this area. Methods We examine measures and their interpretation using exemplar studies and our research. Results We report on research designs used to evaluate Web-based interventions and recommend newer, blended designs. We review and critique methodological issues associated with recruitment, engagement, and social validity. Conclusions We suggest that there is value in viewing this burgeoning realm of research from the broader context of behavior change research. We conclude that many studies use blended research designs, that innovative designs such as the Multiphase Optimization Strategy and Sequential Multiple Assignment Randomized Trial methods hold considerable promise and should be used more widely, and that Web-based controls should be used instead of usual care or no-treatment controls in public health research. We recommend topics for future research that address participant recruitment, engagement, and social validity. PMID:19806416

  11. Evidence for the Continuous Latent Structure of Mania in the Epidemiologic Catchment Area from Multiple Latent Structure and Construct Validation Methodologies

    PubMed Central

    Prisciandaro, James J.; Roberts, John E.

    2011-01-01

    Background Although psychiatric diagnostic systems have conceptualized mania as a discrete phenomenon, appropriate latent structure investigations testing this conceptualization are lacking. In contrast to these diagnostic systems, several influential theories of mania have suggested a continuous conceptualization. The present study examined whether mania has a continuous or discrete latent structure using a comprehensive approach including taxometric, information-theoretic latent distribution modeling (ITLDM), and predictive validity methodologies in the Epidemiologic Catchment Area (ECA) study. Methods Eight dichotomous manic symptom items were submitted to a variety of latent structural analyses (factor analyses, taxometric procedures, and ITLDM) in 10,105 ECA community participants. Additionally, a variety of continuous and discrete models of mania were compared in terms of their relative abilities to predict outcomes (i.e., health service utilization, internalizing and externalizing disorders, and suicidal behavior). Results Taxometric and ITLDM analyses consistently supported a continuous conceptualization of mania. In ITLDM analyses, a continuous model of mania demonstrated 6.52:1 odds over the best-fitting latent class model of mania. Factor analyses suggested that the continuous structure of mania was best represented by a single latent factor. Predictive validity analyses demonstrated a consistently superior ability of continuous models of mania relative to discrete models. Conclusions The present study provided three independent lines of support for a continuous conceptualization of mania. The implications of a continuous model of mania are discussed. PMID:20507671

  12. Sample Preparation and Extraction in Small Sample Volumes Suitable for Pediatric Clinical Studies: Challenges, Advances, and Experiences of a Bioanalytical HPLC-MS/MS Method Validation Using Enalapril and Enalaprilat

    PubMed Central

    Burckhardt, Bjoern B.; Laeer, Stephanie

    2015-01-01

    In the USA and Europe, medicines agencies are driving the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods that can deal with small sample volumes, as the trial-related blood loss permitted in children is very restricted. The broadly used HPLC-MS/MS, while able to cope with small volumes, is susceptible to matrix effects. These restrain precise drug quantification by, for example, causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction were applied to reduce and control matrix effects. A scale-up from a vacuum manifold to a positive-pressure manifold was conducted to meet the demands of high throughput within a clinical setting. The challenges faced, advances, and experiences in solid-phase extraction are presented using the example of the bioanalytical method development and validation for low-volume samples (50 μL serum). Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effects to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method comprising sample extraction by solid-phase extraction was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers. PMID:25873972

  13. Reading Out Single-Molecule Digital RNA and DNA Isothermal Amplification in Nanoliter Volumes with Unmodified Camera Phones

    PubMed Central

    2016-01-01

    Digital single-molecule technologies are expanding diagnostic capabilities, enabling the ultrasensitive quantification of targets, such as viral load in HIV and hepatitis C infections, by directly counting single molecules. Replacing fluorescent readout with a robust visual readout that can be captured by any unmodified cell phone camera will facilitate the global distribution of diagnostic tests, including in limited-resource settings where the need is greatest. This paper describes a methodology for developing a visual readout system for digital single-molecule amplification of RNA and DNA by (i) selecting colorimetric amplification-indicator dyes that are compatible with the spectral sensitivity of standard mobile phones, and (ii) identifying an optimal ratiometric image-processing approach for a selected dye to achieve a readout that is robust to lighting conditions and camera hardware and provides unambiguous quantitative results, even for colorblind users. We also include an analysis of the limitations of this methodology, and provide a microfluidic approach that can be applied to expand dynamic range and improve reaction performance, allowing ultrasensitive, quantitative measurements at volumes as low as 5 nL. We validate this methodology using SlipChip-based digital single-molecule isothermal amplification with λDNA as a model and hepatitis C viral RNA as a clinically relevant target. The innovative combination of isothermal amplification chemistry in the presence of a judiciously chosen indicator dye and ratiometric image processing with SlipChip technology allowed the sequence-specific visual readout of single nucleic acid molecules in nanoliter volumes with an unmodified cell phone camera. When paired with devices that integrate sample preparation and nucleic acid amplification, this hardware-agnostic approach will increase the affordability and the distribution of quantitative diagnostic and environmental tests. PMID:26900709
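
    The ratiometric idea described above can be sketched as follows: averaging two color channels over a well region and taking their ratio reduces sensitivity to illumination and camera gain. This is a minimal illustration, not the published image-processing pipeline; the channel indices, threshold, and synthetic image are hypothetical.

```python
import numpy as np

def well_ratio(rgb_image, mask, num_channel=1, den_channel=0):
    """Ratiometric readout for one reaction well.

    Averages the pixel intensities of two color channels over the well region
    and returns their ratio; using a ratio rather than an absolute intensity is
    what makes the call less sensitive to illumination and camera gain.
    Channel indices are hypothetical (1 = green, 0 = red for a typical RGB array).
    """
    pixels = rgb_image[mask]                      # (n_pixels, 3) array of RGB values
    return pixels[:, num_channel].mean() / pixels[:, den_channel].mean()

def classify_wells(rgb_image, masks, threshold=1.2):
    """Count a well as positive when its channel ratio exceeds an assumed threshold."""
    return [well_ratio(rgb_image, m) > threshold for m in masks]

# Hypothetical example: a synthetic 100x100 image with one bright-green well.
img = np.full((100, 100, 3), 80, dtype=float)
img[20:40, 20:40, 1] = 160                        # boost the green channel in the well
mask = np.zeros((100, 100), dtype=bool)
mask[20:40, 20:40] = True
print(classify_wells(img, [mask]))                # -> [True]
```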

  14. Validation of a defibrillation lead ventricular volume measurement compared to three-dimensional echocardiography.

    PubMed

    Haines, David E; Wong, Wilson; Canby, Robert; Jewell, Coty; Houmsse, Mahmoud; Pederson, David; Sugeng, Lissa; Porterfield, John; Kottam, Anil; Pearce, John; Valvano, Jon; Michalek, Joel; Trevino, Aron; Sagar, Sandeep; Feldman, Marc D

    2017-10-01

    There is increasing evidence that using frequent invasive measures of pressure in patients with heart failure results in improved outcomes compared to traditional measures. Admittance, a measure of volume derived from preexisting defibrillation leads, is proposed as a new technique to monitor cardiac hemodynamics in patients with an implantable defibrillator. The purpose of this study was to evaluate the accuracy of a new ventricular volume sensor (VVS, CardioVol) compared with 3-dimensional echocardiography (echo) in patients with an implantable defibrillator. Twenty-two patients referred for generator replacement had their defibrillation lead attached to VVS to determine the level of agreement with a volume measurement standard (echo). Two opposite hemodynamic challenges were sequentially applied to the heart (overdrive pacing and dobutamine administration) to determine whether real changes in hemodynamics could be reliably and repeatedly assessed with VVS. Equivalence of end-diastolic volume (EDV) and stroke volume (SV) determined by both methods was also assessed. EDV and SV were compared using VVS and echo. VVS tracked expected physiologic trends. EDV was modulated -10% by overdrive pacing (14 mL). SV was modulated -13.7% during overdrive pacing (-6 mL) and increased over baseline +14.6% (+8 mL) with dobutamine. VVS and echo mean EDVs were found statistically equivalent, with margin of equivalence 13.8 mL (P <.05). Likewise, mean SVs were found statistically equivalent with margin of equivalence 15.8 mL (P <.05). VVS provides an accurate method for ventricular volume assessment using chronically implanted defibrillator leads and is statistically equivalent to echo determination of mean EDV and SV. Copyright © 2017 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  15. Scope of ACE in Australia. Volume 1: Implications for Improved Data Collection and Reporting [and] Volume 2: Analysis of Existing Information in National Education and Training Data Collection.

    ERIC Educational Resources Information Center

    Borthwick, J.; Knight, B.; Bender, A.; Loveder, P.

    These two volumes provide information on the scope of adult and community education (ACE) in Australia and implications for improved data collection and reporting. Volume 1 begins with a glossary. Chapter 1 addresses project objectives, processes, and methodology. Chapter 2 analyzes the scope and diversity of ACE in terms of what is currently…

  16. Methodology for calculating the volume of condensate droplets on topographically modified, microgrooved surfaces.

    PubMed

    Sommers, A D

    2011-05-03

    Liquid droplets on micropatterned surfaces consisting of parallel grooves tens of micrometers in width and depth are considered, and a method for calculating the droplet volume on these surfaces is presented. This model, which utilizes the elongated and parallel-sided nature of droplets condensed on these microgrooved surfaces, requires inputs from two droplet images at ϕ = 0° and ϕ = 90°--namely, the droplet major axis, minor axis, height, and two contact angles. In this method, a circular cross-sectional area is extruded along the length of the droplet, where the chord of the extruded circle is fixed by the width of the droplet. The maximum apparent contact angle is assumed to occur along the side of the droplet because of the surface energy barrier to wetting imposed by the grooves--a behavior that was observed experimentally. When applied to water droplets condensed onto a microgrooved aluminum surface, this method was shown to calculate the actual droplet volume to within 10% for 88% of the droplets analyzed. This method is useful for estimating the volume of retained droplets on topographically modified, anisotropic surfaces where both heat and mass transfer occur and the surface microchannels are aligned parallel to gravity to assist in condensate drainage.
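
    A minimal sketch of the geometric construction described above, assuming a circular-segment cross-section (chord equal to the droplet width, height equal to the droplet height) extruded over the droplet length; it ignores the end corrections and contact-angle inputs of the full method, and the droplet dimensions are hypothetical.

```python
import math

def droplet_volume(width_um, height_um, length_um):
    """Approximate volume of an elongated droplet on a microgrooved surface.

    The cross-section is modeled as a circular segment whose chord equals the
    droplet width and whose height equals the droplet height; that section is
    extruded over the droplet length (a simplification of the method described
    in the abstract, ignoring contact-angle corrections at the droplet ends).
    """
    c, h, L = width_um, height_um, length_um
    R = c**2 / (8.0 * h) + h / 2.0                      # radius of the circular arc
    segment_area = R**2 * math.acos((R - h) / R) - (R - h) * math.sqrt(2*R*h - h**2)
    return segment_area * L                             # volume in cubic micrometers

# Hypothetical droplet: 400 um wide, 120 um high, 900 um long.
print(f"{droplet_volume(400, 120, 900):.3e} um^3")
```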

  17. From field data to volumes: constraining uncertainties in pyroclastic eruption parameters

    NASA Astrophysics Data System (ADS)

    Klawonn, Malin; Houghton, Bruce F.; Swanson, Donald A.; Fagents, Sarah A.; Wessel, Paul; Wolfe, Cecily J.

    2014-07-01

    In this study, we aim to understand the variability in eruption volume estimates derived from field studies of pyroclastic deposits. We distributed paper maps of the 1959 Kīlauea Iki tephra to 101 volcanologists worldwide, who produced hand-drawn isopachs. Across the returned maps, uncertainty in isopach areas is 7 % across the well-sampled deposit but increases to over 30 % for isopachs that are governed by the largest and smallest thickness measurements. We fit the exponential, power-law, and Weibull functions through the isopach thickness versus area^1/2 values and find volume estimate variations up to a factor of 4.9 for a single map. Across all maps and methodologies, we find an average standard deviation for a total volume of s = 29 %. The volume uncertainties are largest for the most proximal (s = 62 %) and distal field (s = 53 %) and small for the densely sampled intermediate deposit (s = 8 %). For the Kīlauea Iki 1959 eruption, we find that the deposit beyond the 5-cm isopach contains only 2 % of the total erupted volume, whereas the near-source deposit contains 48 % and the intermediate deposit 50 % of the total volume. Thus, the relative uncertainty within each zone impacts the total volume estimates differently. The observed uncertainties for the different deposit regions in this study illustrate a fundamental problem of estimating eruption volumes: while some methodologies may provide better fits to the isopach data or rely on fewer free parameters, the main issue remains the predictive capabilities of the empirical functions for the regions where measurements are missing.

  18. Evaluation and Validation (E&V) Team Public Report. Volume 1.

    DTIC Science & Technology

    1984-11-30

    ...provide a detailed and organized approach to the development of technology which will be used as a basis for the E&V of APSEs. The E&V Plan which is...

  19. Methodological triangulation: an approach to understanding data.

    PubMed

    Bekhet, Abir K; Zauszniewski, Jaclene A

    2012-01-01

    To describe the use of methodological triangulation in a study of how people who had moved to retirement communities were adjusting. Methodological triangulation involves using more than one kind of method to study a phenomenon. It has been found to be beneficial in providing confirmation of findings, more comprehensive data, increased validity and enhanced understanding of studied phenomena. While many researchers have used this well-established technique, there are few published examples of its use. The authors used methodological triangulation in their study of people who had moved to retirement communities in Ohio, US. A blended qualitative and quantitative approach was used. The collected qualitative data complemented and clarified the quantitative findings by helping to identify common themes. Qualitative data also helped in understanding interventions for promoting 'pulling' factors and for overcoming 'pushing' factors of participants. The authors used focused research questions to reflect the research's purpose and four evaluative criteria--'truth value', 'applicability', 'consistency' and 'neutrality'--to ensure rigour. This paper provides an example of how methodological triangulation can be used in nursing research. It identifies challenges associated with methodological triangulation, recommends strategies for overcoming them, provides a rationale for using triangulation and explains how to maintain rigour. Methodological triangulation can be used to enhance the analysis and the interpretation of findings. As data are drawn from multiple sources, it broadens the researcher's insight into the different issues underlying the phenomena being studied.

  20. A novel adaptive scoring system for segmentation validation with multiple reference masks

    NASA Astrophysics Data System (ADS)

    Moltz, Jan H.; Rühaak, Jan; Hahn, Horst K.; Peitgen, Heinz-Otto

    2011-03-01

    The development of segmentation algorithms for different anatomical structures and imaging protocols is an important task in medical image processing. The validation of these methods, however, is often treated as a subordinate task. Since manual delineations, which are widely used as a surrogate for the ground truth, exhibit an inherent uncertainty, it is preferable to use multiple reference segmentations for an objective validation. This requires a consistent framework that should fulfill three criteria: 1) it should treat all reference masks equally a priori and not demand consensus between the experts; 2) it should evaluate the algorithmic performance in relation to the inter-reference variability, i.e., be more tolerant where the experts disagree about the true segmentation; 3) it should produce results that are comparable for different test data. We show why current state-of-the-art frameworks such as the one used at several MICCAI segmentation challenges do not fulfill these criteria and propose a new validation methodology. A score is computed in an adaptive way for each individual segmentation problem, using a combination of volume- and surface-based comparison metrics. These are transformed into the score by relating them to the variability between the reference masks, which can be measured by comparing the masks with each other or with an estimated ground truth. We present examples from a study on liver tumor segmentation in CT scans where our score shows a more adequate assessment of the segmentation results than the MICCAI framework.
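
    A simplified sketch of the idea of scoring an algorithm relative to inter-reference variability: it uses only a Dice overlap, whereas the proposed framework combines volume- and surface-based metrics, and the toy masks below are hypothetical.

```python
import numpy as np
from itertools import combinations

def dice(a, b):
    """Dice overlap between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def adaptive_score(algorithm_mask, reference_masks):
    """Relate algorithm-vs-reference agreement to inter-reference agreement.

    Returns a ratio near or above 1 when the algorithm agrees with the
    references roughly as well as the references agree with each other (a
    simplified stand-in for the combined volume/surface score in the abstract).
    """
    algo_vs_refs = np.mean([dice(algorithm_mask, r) for r in reference_masks])
    ref_vs_ref = np.mean([dice(r1, r2) for r1, r2 in combinations(reference_masks, 2)])
    return algo_vs_refs / ref_vs_ref

# Hypothetical 1D "masks" standing in for 3D segmentations.
refs = [np.array([0, 1, 1, 1, 0, 0]), np.array([0, 1, 1, 1, 1, 0]), np.array([0, 0, 1, 1, 1, 0])]
algo = np.array([0, 1, 1, 1, 1, 0])
print(f"adaptive score: {adaptive_score(algo, refs):.2f}")
```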

  1. Analysis of volatile compounds in gluten-free bread crusts with an optimised and validated SPME-GC/QTOF methodology.

    PubMed

    Pico, Joana; Antolín, Beatriz; Román, Laura; Gómez, Manuel; Bernal, José

    2018-04-01

    The aroma of bread crust, as one of the first characteristics perceived, is essential for bread acceptance. However, gluten-free bread crusts exhibit weak aroma. A SPME-GC/QTOF methodology was optimised with PCA and RSM and validated for the quantification of 44 volatile compounds in bread crust, extracting 0.75 g of crust at 60 °C for 51 min. LODs ranged between 3.60 and 1760 μg kg⁻¹, all R² values were higher than 0.99, and %RSD for precision and %Er for accuracy were lower than 9% and 12%, respectively. A commercial wheat bread crust was quantified, and furfural was the most abundant compound. Bread crusts of wheat starch and of japonica rice, basmati rice and teff flours were also quantified. Teff flour and wheat starch crusts were very suitable for improving gluten-free bread crust aroma, due to their similar content in 2-acetyl-1-pyrroline and 4-hydroxy-2,5-dimethyl-3(2H)-furanone compared to wheat flour crust and also for their high content in pyrazines. Copyright © 2018 Elsevier Ltd. All rights reserved.
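
    The validation statistics quoted above (precision as %RSD, accuracy as %Er) can be computed as in the following sketch; the replicate concentrations and the nominal spike level are hypothetical.

```python
import numpy as np

def precision_rsd(replicate_concs):
    """Within-run precision as percent relative standard deviation (%RSD)."""
    replicate_concs = np.asarray(replicate_concs, dtype=float)
    return 100.0 * replicate_concs.std(ddof=1) / replicate_concs.mean()

def accuracy_er(replicate_concs, nominal_conc):
    """Accuracy as percent relative error (%Er) against the spiked nominal level."""
    return 100.0 * (np.mean(replicate_concs) - nominal_conc) / nominal_conc

# Hypothetical replicate measurements (ug/kg) of a 100 ug/kg furfural spike.
reps = [96.1, 102.4, 99.8, 97.5, 101.2]
print(f"%RSD = {precision_rsd(reps):.1f}, %Er = {accuracy_er(reps, 100.0):+.1f}")
```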

  2. Sex Assessment from the Volume of the First Metatarsal Bone: A Comparison of Linear and Volume Measurements.

    PubMed

    Gibelli, Daniele; Poppa, Pasquale; Cummaudo, Marco; Mattia, Mirko; Cappella, Annalisa; Mazzarelli, Debora; Zago, Matteo; Sforza, Chiarella; Cattaneo, Cristina

    2017-11-01

    Sexual dimorphism is a crucial characteristic of the skeleton. In recent years, volumetric and surface 3D acquisition systems have enabled anthropologists to assess surfaces and volumes, whose potential still needs to be verified. This article aimed at assessing volume and linear parameters of the first metatarsal bone through 3D acquisition by laser scanning. Sixty-eight skeletons underwent 3D scanning with a laser scanner; seven linear measurements and the volume of each bone were assessed. A cutoff value of 13,370 mm³ was found, with an accuracy of 80.8%. Linear measurements outperformed volume: metatarsal length and mediolateral width of the base showed higher cross-validated accuracies (respectively, 82.1% and 79.1%, rising to 83.6% when both were included). Further studies are needed to verify the real advantage for sex assessment provided by volume measurements. © 2017 American Academy of Forensic Sciences.

  3. Low validity of Google Trends for behavioral forecasting of national suicide rates

    PubMed Central

    Niederkrotenthaler, Thomas; Till, Benedikt; Ajdacic-Gross, Vladeta; Voracek, Martin

    2017-01-01

    Recent research suggests that search volumes of the most popular search engine worldwide, Google, provided via Google Trends, could be associated with national suicide rates in the USA, UK, and some Asian countries. However, search volumes have mostly been studied in an ad hoc fashion, without controls for spurious associations. This study evaluated the validity and utility of Google Trends search volumes for behavioral forecasting of suicide rates in the USA, Germany, Austria, and Switzerland. Suicide-related search terms were systematically collected and respective Google Trends search volumes evaluated for availability. Time spans covered 2004 to 2010 (USA, Switzerland) and 2004 to 2012 (Germany, Austria). Temporal associations of search volumes and suicide rates were investigated with time-series analyses that rigorously controlled for spurious associations. The number and reliability of analyzable search volume data increased with country size. Search volumes showed various temporal associations with suicide rates. However, associations differed both across and within countries and mostly followed no discernable patterns. The total number of significant associations roughly matched the number of expected Type I errors. These results suggest that the validity of Google Trends search volumes for behavioral forecasting of national suicide rates is low. The utility and validity of search volumes for the forecasting of suicide rates depend on two key assumptions (“the population that conducts searches consists mostly of individuals with suicidal ideation”, “suicide-related search behavior is strongly linked with suicidal behavior”). We discuss strands of evidence that these two assumptions are likely not met. Implications for future research with Google Trends in the context of suicide research are also discussed. PMID:28813490

  4. Low validity of Google Trends for behavioral forecasting of national suicide rates.

    PubMed

    Tran, Ulrich S; Andel, Rita; Niederkrotenthaler, Thomas; Till, Benedikt; Ajdacic-Gross, Vladeta; Voracek, Martin

    2017-01-01

    Recent research suggests that search volumes of the most popular search engine worldwide, Google, provided via Google Trends, could be associated with national suicide rates in the USA, UK, and some Asian countries. However, search volumes have mostly been studied in an ad hoc fashion, without controls for spurious associations. This study evaluated the validity and utility of Google Trends search volumes for behavioral forecasting of suicide rates in the USA, Germany, Austria, and Switzerland. Suicide-related search terms were systematically collected and respective Google Trends search volumes evaluated for availability. Time spans covered 2004 to 2010 (USA, Switzerland) and 2004 to 2012 (Germany, Austria). Temporal associations of search volumes and suicide rates were investigated with time-series analyses that rigorously controlled for spurious associations. The number and reliability of analyzable search volume data increased with country size. Search volumes showed various temporal associations with suicide rates. However, associations differed both across and within countries and mostly followed no discernable patterns. The total number of significant associations roughly matched the number of expected Type I errors. These results suggest that the validity of Google Trends search volumes for behavioral forecasting of national suicide rates is low. The utility and validity of search volumes for the forecasting of suicide rates depend on two key assumptions ("the population that conducts searches consists mostly of individuals with suicidal ideation", "suicide-related search behavior is strongly linked with suicidal behavior"). We discuss strands of evidence that these two assumptions are likely not met. Implications for future research with Google Trends in the context of suicide research are also discussed.

  5. A methodology for the rigorous verification of plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: the verification, which is a mathematical issue targeted at assessing that the physical model is correctly solved, and the validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on the verification, which in turn is composed of the code verification, targeted at assessing that a physical model is correctly implemented in a simulation code, and the solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
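
    A minimal sketch of solution verification by Richardson extrapolation as named above: the observed order of accuracy is estimated from solutions on three uniformly refined grids and used to extrapolate toward the grid-converged value. The scalar outputs and refinement ratio are hypothetical, and the formula assumes the solutions lie in the asymptotic convergence range.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, refinement_ratio):
    """Observed order of accuracy from three solutions on uniformly refined grids."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(refinement_ratio)

def richardson_extrapolate(f_medium, f_fine, refinement_ratio, order):
    """Estimate the grid-converged value and the numerical error of the fine solution."""
    f_exact = f_fine + (f_fine - f_medium) / (refinement_ratio**order - 1.0)
    return f_exact, abs(f_exact - f_fine)

# Hypothetical scalar outputs of a simulation on grids with spacing h, h/2, h/4.
f3, f2, f1 = 1.120, 1.032, 1.008   # coarse, medium, fine
p = observed_order(f3, f2, f1, refinement_ratio=2.0)
f_star, err = richardson_extrapolate(f2, f1, 2.0, p)
print(f"observed order = {p:.2f}, extrapolated value = {f_star:.4f}, fine-grid error ~ {err:.4f}")
```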

  6. Evaluation of the HARDMAN comparability methodology for manpower, personnel and training

    NASA Technical Reports Server (NTRS)

    Zimmerman, W.; Butler, R.; Gray, V.; Rosenberg, L.

    1984-01-01

    The methodology evaluation and recommendation are part of an effort to improve the Hardware versus Manpower (HARDMAN) methodology for projecting manpower, personnel, and training (MPT) to support new acquisitions. Several different validity tests are employed to evaluate the methodology. The methodology conforms fairly well with both the MPT user needs and other accepted manpower modeling techniques. Audits of three completed HARDMAN applications reveal only a small number of potential problem areas compared to the total number of issues investigated. The reliability study results conform well with the problem areas uncovered through the audits. The results of the accuracy studies suggest that the manpower life-cycle cost component is only marginally sensitive to changes in other related cost variables. Even with some minor problems, the methodology seems sound and has good near-term utility to the Army. Recommendations are provided to firm up the problem areas revealed through the evaluation.

  7. Design Science Methodology Applied to a Chemical Surveillance Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhuanyi; Han, Kyungsik; Charles-Smith, Lauren E.

    Public health surveillance systems gain significant benefits from integrating existing early incident detection systems, supported by closed data sources, with open source data. However, identifying potential alerting incidents relies on finding accurate, reliable sources and presenting the high volume of data in a way that increases analysts' work efficiency; a challenge for any system that leverages open source data. In this paper, we present the design concept and the applied design science research methodology of ChemVeillance, a chemical analyst surveillance system. Our work portrays a system design and approach that translates theoretical methodology into practice, creating a powerful surveillance system built for specific use cases. Researchers, designers, developers, and related professionals in the health surveillance community can build upon the principles and methodology described here to enhance and broaden current surveillance systems, leading to improved situational awareness based on a robust integrated early warning system.

  8. Volume and Asymmetry Abnormalities of Insula in Antipsychotic-Naive Schizophrenia: A 3-Tesla Magnetic Resonance Imaging Study

    PubMed Central

    Virupaksha, Harve Shanmugam; Kalmady, Sunil V.; Shivakumar, Venkataram; Arasappa, Rashmi; Venkatasubramanian, Ganesan; Gangadhar, Bangalore N.

    2012-01-01

    Context: Insula, which is a vital brain region for self-awareness, empathy, and sensory stimuli processing, is critically implicated in schizophrenia pathogenesis. Existing studies on insula volume abnormalities report inconsistent findings potentially due to the evaluation of ‘antipsychotic-treated’ schizophrenia patients as well as suboptimal methodology. Aim: To understand the role of insula in schizophrenia. Materials and Methods: In this first-time 3-T magnetic resonance imaging study, we examined antipsychotic-naive schizophrenic patients (N=30) and age-, sex-, handedness- and education-matched healthy controls (N=28). Positive and negative symptoms were scored with good interrater reliability (intraclass correlation coefficient (ICC)>0.9) by using the scales for negative and positive symptoms. Gray matter volume of insula and its anterior/posterior subregions were measured by using a three-dimensional, interactive, semiautomated software based on the valid method with good interrater reliability (ICC>0.85). Intracranial volume was automatically measured by using the FreeSurfer software. Results: Patients had significantly deficient gray matter volumes of left (F=33.4; P<0.00001) and right (F=11.9; P=0.001) insula after controlling for the effects of age, sex, and intracranial volume. Patients with predominantly negative symptoms had significantly smaller right posterior insula volumes than those with predominantly positive symptoms (F=6.3; P=0.02). Asymmetry index analysis revealed anterior insular asymmetry to be significantly reversed (right>left) in male patients in comparison with male controls (left>right) (t=2.7; P=0.01). Conclusions: Robust insular volume deficits in antipsychotic-naive schizophrenia support an intrinsic role for the insula in the pathogenesis of this disorder. The first-time demonstration of a relationship between right posterior insular deficit and negative symptoms is in tune with the background neurobiological literature. Another

  9. Volume and asymmetry abnormalities of insula in antipsychotic-naive schizophrenia: a 3-tesla magnetic resonance imaging study.

    PubMed

    Virupaksha, Harve Shanmugam; Kalmady, Sunil V; Shivakumar, Venkataram; Arasappa, Rashmi; Venkatasubramanian, Ganesan; Gangadhar, Bangalore N

    2012-04-01

    Insula, which is a vital brain region for self-awareness, empathy, and sensory stimuli processing, is critically implicated in schizophrenia pathogenesis. Existing studies on insula volume abnormalities report inconsistent findings potentially due to the evaluation of 'antipsychotic-treated' schizophrenia patients as well as suboptimal methodology. To understand the role of insula in schizophrenia. In this first-time 3-T magnetic resonance imaging study, we examined antipsychotic-naive schizophrenic patients (N=30) and age-, sex-, handedness- and education-matched healthy controls (N=28). Positive and negative symptoms were scored with good interrater reliability (intraclass correlation coefficient (ICC)>0.9) by using the scales for negative and positive symptoms. Gray matter volume of insula and its anterior/posterior subregions were measured by using a three-dimensional, interactive, semiautomated software based on the valid method with good interrater reliability (ICC>0.85). Intracranial volume was automatically measured by using the FreeSurfer software. Patients had significantly deficient gray matter volumes of left (F=33.4; P<0.00001) and right (F=11.9; P=0.001) insula after controlling for the effects of age, sex, and intracranial volume. Patients with predominantly negative symptoms had significantly smaller right posterior insula volumes than those with predominantly positive symptoms (F=6.3; P=0.02). Asymmetry index analysis revealed anterior insular asymmetry to be significantly reversed (right>left) in male patients in comparison with male controls (left>right) (t=2.7; P=0.01). Robust insular volume deficits in antipsychotic-naive schizophrenia support an intrinsic role for the insula in the pathogenesis of this disorder. The first-time demonstration of a relationship between right posterior insular deficit and negative symptoms is in tune with the background neurobiological literature. Another novel observation of sex-specific anterior insular asymmetry

  10. Dry Volume Fracturing Simulation of Shale Gas Reservoir

    NASA Astrophysics Data System (ADS)

    Xu, Guixi; Wang, Shuzhong; Luo, Xiangrong; Jing, Zefeng

    2017-11-01

    Application of CO2 dry fracturing technology to shale gas reservoir development in China has the advantages of no water consumption, little reservoir damage, and promotion of CH4 desorption. This paper uses Meyer simulation to study complex fracture network extension and the distribution characteristics of shale gas reservoirs in the CO2 dry volume fracturing process. The simulation results prove the validity of the modified CO2 dry fracturing fluid used in shale volume fracturing and provide a theoretical basis for the following study on interval optimization of shale reservoir dry volume fracturing.

  11. Spacecraft Habitable Volume: Results of an Interdisciplinary Workshop

    NASA Technical Reports Server (NTRS)

    Fitts, David J.; Connolly, Janis; Howard, Robert

    2011-01-01

    NASA's Human Exploration Framework Team posed the question: "Is 80 cubic meters per person of habitable volume acceptable for a proposed Deep Space Habitat?" The goal of the workshop was to address the "net habitable volume" necessary for long-duration human spaceflight missions and identify design and psychological issues and mitigations. The objectives were: (1) Identify psychological factors -- i.e., "stressors" -- that impact volume and layout specifications for long duration missions (2) Identify mitigation strategies for stressors, especially those that can be written as volume design specifications (3) Identify a forward research roadmap -- i.e., what future work is needed to define and validate objective design metrics? (4) Provide advisories on the human factors consequences of poor net habitable volume allocation and layout design.

  12. The space station assembly phase: Flight telerobotic servicer feasibility. Volume 2: Methodology and case study

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey H.; Gyamfi, Max A.; Volkmer, Kent; Zimmerman, Wayne F.

    1987-01-01

    A methodology is described for examining the feasibility of a Flight Telerobotic Servicer (FTS) using two assembly scenarios, defined at the EVA task level, for the 30 shuttle flights (beginning with MB-1) over a four-year period. Performing all EVA tasks by crew only is compared to a scenario in which crew EVA is augmented by FTS. A reference FTS concept is used as a technology baseline and life-cycle cost analysis is performed to highlight cost tradeoffs. The methodology, procedure, and data used to complete the analysis are documented in detail.

  13. Using Concentration Curves to Assess Organization-Specific Relationships between Surgeon Volumes and Outcomes.

    PubMed

    Kanter, Michael H; Huang, Yii-Chieh; Kally, Zina; Gordon, Margo A; Meltzer, Charles

    2018-06-01

    A well-documented association exists between higher surgeon volumes and better outcomes for many procedures, but surgeons may be reluctant to change practice patterns without objective, credible, and near real-time data on their performance. In addition, published thresholds for procedure volumes may be biased or perceived as arbitrary; typical reports compare surgeons grouped into discrete procedure volume categories, even though the volume-outcomes relationship is likely continuous. The concentration curves methodology, which has been used to analyze whether health outcomes vary with socioeconomic status, was adapted to explore the association between procedure volume and outcomes as a continuous relationship so that data for all surgeons within a health care organization could be included. Using widely available software and requiring minimal analytic expertise, this approach plots cumulative percentages of two variables of interest against each other and assesses the characteristics of the resulting curve. Organization-specific relationships between surgeon volumes and outcomes were examined for three example types of procedures: uncomplicated hysterectomies, infant circumcisions, and total thyroidectomies. The concentration index was used to assess whether outcomes were equally distributed unrelated to volumes. For all three procedures, the concentration curve methodology identified associations between surgeon procedure volumes and selected outcomes that were specific to the organization. The concentration indices confirmed the higher prevalence of examined outcomes among low-volume surgeons. The curves supported organizational discussions about surgical quality. Concentration curves require minimal resources to identify organization- and procedure-specific relationships between surgeon procedure volumes and outcomes and can support quality improvement. Copyright © 2018 The Joint Commission. Published by Elsevier Inc. All rights reserved.
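
    A minimal sketch of a concentration curve and concentration index computed over surgeons ranked by procedure volume, broadly following the approach described above; the sign convention and the example counts are assumptions for illustration.

```python
import numpy as np

def concentration_index(volumes, adverse_events):
    """Concentration index for adverse outcomes across surgeons ranked by volume.

    Surgeons are sorted by procedure volume (ascending); the concentration curve
    plots the cumulative share of adverse events against the cumulative share of
    procedures. The index is twice the area between the curve and the diagonal;
    negative values indicate outcomes concentrated among low-volume surgeons
    (sign convention assumed here; conventions vary across the literature).
    """
    order = np.argsort(volumes)
    vols = np.asarray(volumes, dtype=float)[order]
    events = np.asarray(adverse_events, dtype=float)[order]

    cum_procs = np.insert(np.cumsum(vols), 0, 0.0) / vols.sum()
    cum_events = np.insert(np.cumsum(events), 0, 0.0) / events.sum()

    area_under_curve = np.trapz(cum_events, cum_procs)
    return 1.0 - 2.0 * area_under_curve

# Hypothetical data: low-volume surgeons carry a disproportionate share of events.
volumes = [10, 20, 40, 80, 150]
events = [4, 5, 6, 5, 5]
print(f"concentration index = {concentration_index(volumes, events):+.3f}")
```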

  14. [Strengthening the methodology of study designs in scientific researches].

    PubMed

    Ren, Ze-qin

    2010-06-01

    Many problems in study design have seriously affected the validity of scientific research. We must understand research methodology, especially clinical epidemiology and biostatistics, and recognize the urgency of selecting and implementing the right study design. Only then can we strengthen research capability and improve the overall quality of scientific research.

  15. Assessing validity of observational intervention studies - the Benchmarking Controlled Trials.

    PubMed

    Malmivaara, Antti

    2016-09-01

    Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. To create and pilot test a checklist for appraising the methodological validity of a BCT. The checklist was created by extracting the most essential elements from the comprehensive set of criteria in the previous paper on BCTs. Checklists and scientific papers on observational studies and respective systematic reviews were also utilized. Ten BCTs published in the Lancet and in the New England Journal of Medicine were used to assess feasibility of the created checklist. The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies. However, the piloted checklist should be validated in further studies. Key messages Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. This paper presents a checklist for appraising the methodological validity of BCTs and pilot-tests the checklist with ten BCTs published in leading medical journals. The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies.

  16. Rural Schools Prototype Analysis. Volume II: Methodology. An Example Process of Identifying Determinants, Selecting Options, & Developing Schematic Designs.

    ERIC Educational Resources Information Center

    Construction Systems Management, Inc., Anchorage, AK.

    Volume II of a 3-volume report demonstrates the use of Design Determinants and Options (presented in Volume I) in the planning and design of small rural Alaskan secondary schools. Section I, a checklist for gathering site-specific information to be used as a data base for facility design, is organized in the same format as Volume I, which can be…

  17. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
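
    A generic Monte Carlo sketch of the underlying idea (propagating parameter uncertainty through an engineering limit-state model to estimate a failure probability); it is not the PFA statistical structure itself, and the stress and strength distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fatigue_failure_probability(n_samples=100_000):
    """Monte Carlo estimate of a failure probability from an engineering model
    with uncertain inputs (a generic stress-versus-strength illustration, not
    the PFA statistical structure itself)."""
    # Uncertain inputs: applied stress amplitude and material strength (MPa).
    stress = rng.normal(loc=300.0, scale=30.0, size=n_samples)
    strength = rng.lognormal(mean=np.log(420.0), sigma=0.10, size=n_samples)
    # Simple limit-state model: failure when stress exceeds strength.
    p_fail = np.mean(stress > strength)
    # Binomial standard error of the estimate.
    std_err = np.sqrt(p_fail * (1.0 - p_fail) / n_samples)
    return p_fail, std_err

p, se = fatigue_failure_probability()
print(f"estimated failure probability = {p:.4f} +/- {se:.4f}")
```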

  18. Safety assessment methodology in management of spent sealed sources.

    PubMed

    Mahmoud, Narmine Salah

    2005-02-14

    Environmental hazards can be caused by radioactive waste after its disposal. It was therefore important that safety assessment methodologies be developed and established to study and estimate the possible hazards and to institute safety measures that prevent the evolution of these hazards. Spent sealed sources are a specific type of radioactive waste. According to the IAEA definition, spent sealed sources are unused sources because of activity decay, damage, misuse, loss, or theft. Accidental exposure of humans to spent sealed sources can occur between the moment they become spent and their disposal. For that reason, safety assessment methodologies were tailored to suit the management of spent sealed sources. To provide understanding of and confidence in this study, a validation analysis was undertaken by considering the scenario of an accident that occurred in Egypt in June 2000 (the Meet-Halfa accident involving an iridium-192 source). The text of this work covers considerations related to the safety assessment of spent sealed sources, comprising the assessment context, the processes that lead an active source to become spent, accident scenarios, mathematical models for dose calculations, radiological consequences, and regulatory criteria. The text also includes a validation study, carried out by evaluating a theoretical scenario against the real scenario of the Meet-Halfa accident, based on the clinical assessment of affected individuals.
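
    One of the simplest dose models such an assessment can draw on is the unshielded point-source, inverse-square relation; the sketch below is illustrative only, with an approximate gamma constant for iridium-192 and a hypothetical exposure scenario, and is not the assessment methodology described in the paper.

```python
def gamma_dose_rate_mSv_per_h(activity_GBq, distance_m, gamma_constant=0.13):
    """Unshielded point-source gamma dose rate (mSv/h).

    Uses the standard inverse-square relation D = Gamma * A / d^2. The default
    gamma constant (~0.13 mSv*m^2/(GBq*h)) is an approximate literature value
    for Ir-192 and should be replaced by an authoritative one for real work.
    """
    return gamma_constant * activity_GBq / distance_m**2

def dose_mSv(activity_GBq, distance_m, exposure_h):
    """Accumulated dose for a stationary exposure of the given duration."""
    return gamma_dose_rate_mSv_per_h(activity_GBq, distance_m) * exposure_h

# Hypothetical scenario: a 1000 GBq Ir-192 source handled at 1 m for 6 minutes.
print(f"{dose_mSv(1000.0, 1.0, 0.1):.1f} mSv")
```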

  19. Mapping soil deformation around plant roots using in vivo 4D X-ray Computed Tomography and Digital Volume Correlation.

    PubMed

    Keyes, S D; Gillard, F; Soper, N; Mavrogordato, M N; Sinclair, I; Roose, T

    2016-06-14

    The mechanical impedance of soils inhibits the growth of plant roots, often being the most significant physical limitation to root system development. Non-invasive imaging techniques have recently been used to investigate the development of root system architecture over time, but the relationship with soil deformation is usually neglected. Correlative mapping approaches parameterised using 2D and 3D image data have recently gained prominence for quantifying physical deformation in composite materials including fibre-reinforced polymers and trabecular bone. Digital Image Correlation (DIC) and Digital Volume Correlation (DVC) are computational techniques which use the inherent material texture of surfaces and volumes, captured using imaging techniques, to map full-field deformation components in samples during physical loading. Here we develop an experimental assay and methodology for four-dimensional, in vivo X-ray Computed Tomography (XCT) and apply a Digital Volume Correlation (DVC) approach to the data to quantify deformation. The method is validated for a field-derived soil under conditions of uniaxial compression, and a calibration study is used to quantify thresholds of displacement and strain measurement. The validated and calibrated approach is then demonstrated for an in vivo test case in which an extending maize root in field-derived soil was imaged hourly using XCT over a growth period of 19h. This allowed full-field soil deformation data and 3D root tip dynamics to be quantified in parallel for the first time. This fusion of methods paves the way for comparative studies of contrasting soils and plant genotypes, improving our understanding of the fundamental mechanical processes which influence root system development. Copyright © 2016 Elsevier Ltd. All rights reserved.
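
    A bare-bones illustration of the correlation step at the heart of DVC: locating a reference subvolume in the deformed image by the peak of an FFT-based cross-correlation. It recovers only integer-voxel displacements on synthetic data and omits the subvoxel refinement and strain mapping of a full DVC pipeline; the volume size, subvolume corner, and imposed shift are hypothetical.

```python
import numpy as np

def subvolume_displacement(ref_vol, def_vol, corner, size):
    """Integer-voxel displacement of one subvolume between two 3D images.

    The subvolume at `corner` in the reference image is located in the deformed
    image by finding the peak of an FFT-based cross-correlation (no subvoxel
    refinement or strain calculation, unlike a full DVC pipeline).
    """
    z, y, x = corner
    template = ref_vol[z:z+size, y:y+size, x:x+size]
    # Correlate the zero-mean template (zero-padded to full size) with the
    # zero-mean deformed volume via the cross-correlation theorem.
    corr = np.fft.ifftn(np.fft.fftn(def_vol - def_vol.mean()) *
                        np.conj(np.fft.fftn(template - template.mean(), s=def_vol.shape))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(int(p) - c for p, c in zip(peak, corner))

# Hypothetical test: a random texture shifted by (2, -1, 3) voxels.
rng = np.random.default_rng(1)
ref = rng.random((40, 40, 40))
deformed = np.roll(ref, shift=(2, -1, 3), axis=(0, 1, 2))
print(subvolume_displacement(ref, deformed, corner=(10, 10, 10), size=12))
```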

  20. Volume, Conservation and Instruction: A Classroom Based Solomon Four Group Study of Conflict.

    ERIC Educational Resources Information Center

    Rowell, J. A.; Dawson, C. J.

    1981-01-01

    Summarizes a study to widen the applicability of Piagetian theory-based conflict methodology from individual situations to entire classes. A Solomon four group design was used to study effects of conflict instruction on students' (N=127) ability to conserve volume of noncompressible matter and to apply that knowledge to gas volume. (Author/JN)

  1. Current psychometric and methodological issues in the measurement of overgeneral autobiographical memory.

    PubMed

    Griffith, James W; Sumner, Jennifer A; Raes, Filip; Barnhofer, Thorsten; Debeer, Elise; Hermans, Dirk

    2012-12-01

    Autobiographical memory is a multifaceted construct that is related to psychopathology and other difficulties in functioning. Across many studies, a variety of methods have been used to study autobiographical memory. The relationship between overgeneral autobiographical memory (OGM) and psychopathology has been of particular interest, and many studies of this cognitive phenomenon rely on the Autobiographical Memory Test (AMT) to assess it. In this paper, we examine several methodological approaches to studying autobiographical memory, and focus primarily on methodological and psychometric considerations in OGM research. We pay particular attention to what is known about the reliability, validity, and methodological variations of the AMT. The AMT has adequate psychometric properties, but there is great variability in methodology across studies that use it. Methodological recommendations and suggestions for future studies are presented. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. From field data to volumes: constraining uncertainties in pyroclastic eruption parameters

    USGS Publications Warehouse

    Klawonn, Malin; Houghton, Bruce F.; Swanson, Don; Fagents, Sarah A.; Wessel, Paul; Wolfe, Cecily J.

    2014-01-01

    In this study, we aim to understand the variability in eruption volume estimates derived from field studies of pyroclastic deposits. We distributed paper maps of the 1959 Kīlauea Iki tephra to 101 volcanologists worldwide, who produced hand-drawn isopachs. Across the returned maps, uncertainty in isopach areas is 7 % across the well-sampled deposit but increases to over 30 % for isopachs that are governed by the largest and smallest thickness measurements. We fit the exponential, power-law, and Weibull functions through the isopach thickness versus area^1/2 values and find volume estimate variations up to a factor of 4.9 for a single map. Across all maps and methodologies, we find an average standard deviation for a total volume of s = 29 %. The volume uncertainties are largest for the most proximal (s = 62 %) and distal field (s = 53 %) and small for the densely sampled intermediate deposit (s = 8 %). For the Kīlauea Iki 1959 eruption, we find that the deposit beyond the 5-cm isopach contains only 2 % of the total erupted volume, whereas the near-source deposit contains 48 % and the intermediate deposit 50 % of the total volume. Thus, the relative uncertainty within each zone impacts the total volume estimates differently. The observed uncertainties for the different deposit regions in this study illustrate a fundamental problem of estimating eruption volumes: while some methodologies may provide better fits to the isopach data or rely on fewer free parameters, the main issue remains the predictive capabilities of the empirical functions for the regions where measurements are missing.

  3. Impacts of Outer Continental Shelf (OCS) development on recreation and tourism. Volume 2. Final report and case studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The final report for the project is comprised of five volumes. The volume presents the study conclusions, summarizes the methodology used (more detail is found in Volume 3), discusses four case study applications of the model, and contains profiles of coastal communities in an Appendix.

  4. Exact finite volume expectation values of local operators in excited states

    NASA Astrophysics Data System (ADS)

    Pozsgay, B.; Szécsényi, I. M.; Takács, G.

    2015-04-01

    We present a conjecture for the exact expression of finite volume expectation values in excited states in integrable quantum field theories, which is an extension of an earlier conjecture to the case of general diagonal factorized scattering with bound states and a nontrivial bootstrap structure. The conjectured expression is a spectral expansion which uses the exact form factors and the excited state thermodynamic Bethe Ansatz as building blocks. The conjecture is proven for the case of the trace of the energy-momentum tensor. Concerning its validity for more general operators, we provide numerical evidence using the truncated conformal space approach. It is found that the expansion fails to be well-defined for small values of the volume in cases when the singularity structure of the TBA equations undergoes a non-trivial rearrangement below some critical value of the volume. Despite these shortcomings, the conjectured expression is expected to be valid for all volumes for most of the excited states, and as an expansion above the critical volume for the rest.

  5. LQTA-QSAR: a new 4D-QSAR methodology.

    PubMed

    Martins, João Paulo A; Barbosa, Euzébio G; Pasqualoto, Kerly F M; Ferreira, Márcia M C

    2009-06-01

    A novel 4D-QSAR approach which makes use of the molecular dynamics (MD) trajectories and topology information retrieved from the GROMACS package is presented in this study. This new methodology, named LQTA-QSAR (LQTA, Laboratório de Quimiometria Teórica e Aplicada), has a module (LQTAgrid) that calculates intermolecular interaction energies at each grid point considering probes and all aligned conformations resulting from MD simulations. These interaction energies are the independent variables or descriptors employed in a QSAR analysis. The comparison of the proposed methodology to other 4D-QSAR and CoMFA formalisms was performed using a set of forty-seven glycogen phosphorylase b inhibitors (data set 1) and a set of forty-four MAP p38 kinase inhibitors (data set 2). The QSAR models for both data sets were built using the ordered predictor selection (OPS) algorithm for variable selection. Model validation was carried out applying y-randomization and leave-N-out cross-validation in addition to the external validation. PLS models for data set 1 and 2 provided the following statistics: q² = 0.72, r² = 0.81 for 12 variables selected and 2 latent variables and q² = 0.82, r² = 0.90 for 10 variables selected and 5 latent variables, respectively. Visualization of the descriptors in 3D space was successfully interpreted from the chemical point of view, supporting the applicability of this new approach in rational drug design.
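
    A generic sketch of the cross-validated q² statistic reported above for a PLS model (q² = 1 - PRESS/TSS over left-out blocks); the descriptor matrix and activities are synthetic, and the OPS variable-selection step is not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

def q2_leave_n_out(X, y, n_components=2, n_splits=5, seed=0):
    """Leave-N-out cross-validated q^2 for a PLS model.

    q^2 = 1 - PRESS / TSS, where PRESS sums the squared prediction errors of
    the left-out blocks and TSS is the total sum of squares about the mean.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    press = 0.0
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(X):
        model = PLSRegression(n_components=n_components).fit(X[train], y[train])
        press += np.sum((y[test] - model.predict(X[test]).ravel()) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / tss

# Hypothetical grid-descriptor matrix (40 compounds x 12 descriptors) and activities.
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 12))
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=40)
print(f"q2 = {q2_leave_n_out(X, y):.2f}")
```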

  6. Peer Review of a Formal Verification/Design Proof Methodology

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The role of formal verification techniques in system validation was examined. The value and the state of the art of performance proving for fault-tolerant computers were assessed. The investigation, development, and evaluation of performance-proving tools were reviewed. The technical issues related to proof methodologies are examined and summarized.

  7. Ethical and Legal Implications of the Methodological Crisis in Neuroimaging.

    PubMed

    Kellmeyer, Philipp

    2017-10-01

    Currently, many scientific fields such as psychology or biomedicine face a methodological crisis concerning the reproducibility, replicability, and validity of their research. In neuroimaging, similar methodological concerns have taken hold of the field, and researchers are working frantically toward finding solutions for the methodological problems specific to neuroimaging. This article examines some ethical and legal implications of this methodological crisis in neuroimaging. With respect to ethical challenges, the article discusses the impact of flawed methods in neuroimaging research in cognitive and clinical neuroscience, particularly with respect to faulty brain-based models of human cognition, behavior, and personality. Specifically examined is whether such faulty models, when they are applied to neurological or psychiatric diseases, could put patients at risk, and whether this places special obligations on researchers using neuroimaging. In the legal domain, the actual use of neuroimaging as evidence in United States courtrooms is surveyed, followed by an examination of ways that the methodological problems may create challenges for the criminal justice system. Finally, the article reviews and promotes some promising ideas and initiatives from within the neuroimaging community for addressing the methodological problems.

  8. Measuring Standards in Primary English: The Validity of PIRLS--A Response to Mary Hilton

    ERIC Educational Resources Information Center

    Whetton, Chris; Twist, Liz; Sainsbury, Marian

    2007-01-01

    Hilton (2006) criticises the PIRLS (Progress in International Reading Literacy Study) tests and the survey conduct, raising questions about the validity of international surveys of reading. Her criticisms fall into four broad areas: cultural validity, methodological issues, construct validity and the survey in England. However, her criticisms are…

  9. A model for the influences of soluble and insoluble solids, and treated volume on the ultraviolet-C resistance of heat-stressed Salmonella enterica in simulated fruit juices.

    PubMed

    Estilo, Emil Emmanuel C; Gabriel, Alonzo A

    2018-02-01

    This study was conducted to determine the effects of the intrinsic juice characteristics insoluble solids (IS, 0-3 % w/v) and soluble solids (SS, 0-70 °Brix), and the extrinsic process parameter treated volume (250-1000 mL), on the UV-C inactivation rates of heat-stressed Salmonella enterica in simulated fruit juices (SFJs). A Rotatable Central Composite Design of Experiment (CCRD) was used to determine combinations of the test variables, while Response Surface Methodology (RSM) was used to characterize and quantify the influences of the test variables on microbial inactivation. The heat-stressed cells exhibited log-linear UV-C inactivation behavior (R² = 0.952 to 0.999) in all CCRD combinations, with D_UV-C values ranging from 10.0 to 80.2 mJ/cm². The D_UV-C values obtained from the CCRD significantly fitted into a quadratic model (P < 0.0001). RSM results showed that individual linear (IS, SS, volume), individual quadratic (IS² and volume²), and factor interaction (IS × volume and SS × volume) terms significantly influenced UV-C inactivation. Validation of the model in SFJs with combinations not included in the CCRD showed that the predictions were within acceptable error margins. Copyright © 2017. Published by Elsevier Ltd.
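    The sketch below illustrates the quadratic response-surface fitting step described above: D-values are regressed on linear, quadratic, and interaction terms of three coded factors. The design points and responses are synthetic, not the study's data.

```python
# Hedged sketch: quadratic response-surface model (linear + quadratic + interactions).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Coded factor settings (e.g. from a central composite design): IS, SS, volume.
X = rng.uniform(-1.68, 1.68, size=(20, 3))
# Synthetic D-values with linear, quadratic and interaction structure plus noise.
D = (40 + 8*X[:, 0] + 5*X[:, 1] + 6*X[:, 2] + 4*X[:, 0]**2
     + 3*X[:, 0]*X[:, 2] + rng.normal(0, 2, size=20))

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), D)
print("R^2 of quadratic model:", round(model.score(quad.transform(X), D), 3))
```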

  10. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    The fault injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey is presented of validation methodologies. The need for fault insertion based on validation methodologies is demonstrated. The origins and models of faults, and motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology which builds confidence in the system through first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults or the manifestation of faults to be inserted by either seeding faults into memory or triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving insertion of faults. There is a common system interface which allows ease of use to decrease experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.
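    The core idea of software-implemented fault insertion (corrupt a stored value, then check whether an error-detection mechanism notices) can be illustrated with a toy checksummed buffer, as in the sketch below. This is only an illustration of the concept on invented data structures, not the FIAT tooling itself.

```python
# Hedged sketch: seeding a bit-flip fault in memory and checking error detection.
import random

def checksum(data: bytearray) -> int:
    """Simple 8-bit additive checksum used as the error-detection mechanism."""
    return sum(data) & 0xFF

def inject_bit_flip(data: bytearray) -> None:
    """Seed a fault by flipping one random bit in the buffer (memory-corruption model)."""
    byte_index = random.randrange(len(data))
    bit_index = random.randrange(8)
    data[byte_index] ^= (1 << bit_index)

memory = bytearray(b"baseline fault-free payload")
reference = checksum(memory)

inject_bit_flip(memory)                        # fault-insertion step
detected = checksum(memory) != reference       # does the detection mechanism fire?
print("fault detected by checksum:", detected)
```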

  11. Translation and linguistic validation of the Pediatric Patient-Reported Outcomes Measurement Information System measures into simplified Chinese using cognitive interviewing methodology.

    PubMed

    Liu, Yanyan; Hinds, Pamela S; Wang, Jichuan; Correia, Helena; Du, Shizheng; Ding, Jian; Gao, Wen Jun; Yuan, Changrong

    2013-01-01

    The Pediatric Patient-Reported Outcomes Measurement Information System (PROMIS) measures were developed using modern measurement theory and tested in a variety of settings to assess the quality of life, function, and symptoms of children and adolescents experiencing a chronic illness and its treatment. Developed in English, this set of measures had not been translated into Chinese. The objective of this study was to develop the Chinese version of the Pediatric PROMIS measures (C-Ped-PROMIS), specifically 8 short forms, and to pretest the translated measures in children and adolescents through cognitive interviewing methodology. The C-Ped-PROMIS was developed following the standard Functional Assessment of Chronic Illness Therapy Translation Methodology. Bilingual teams from the United States and China reviewed the translation to develop a provisional version, which was then pretested through cognitive interviews with 10 native Chinese-speaking children aged 8 to 17 years in China. The translation was finalized by the bilingual teams. Most items, response options, and instructions were well understood by the children, and some revisions were made to address the children's comments during the cognitive interviews. The results indicated that the C-Ped-PROMIS items were semantically and conceptually equivalent to the original. Children aged 8 to 17 years in China were able to comprehend these measures and express their experience and feelings about illness or their life. The C-Ped-PROMIS is available for psychometric validation. Future work will be directed at translating the rest of the item banks, calibrating them, and creating a final Chinese version of the short forms.

  12. Assessing the validity of discourse analysis: transdisciplinary convergence

    NASA Astrophysics Data System (ADS)

    Jaipal-Jamani, Kamini

    2014-12-01

    Research studies using discourse analysis approaches make claims about phenomena or issues based on interpretation of written or spoken text, which includes images and gestures. How are findings/interpretations from discourse analysis validated? This paper proposes transdisciplinary convergence as a way to validate discourse analysis approaches to research. The argument is made that discourse analysis explicitly grounded in semiotics, systemic functional linguistics, and critical theory offers a credible research methodology. The underlying assumptions, constructs, and techniques of analysis of these three theoretical disciplines can be drawn on to show convergence of data at multiple levels, validating interpretations from text analysis.

  13. Validation and evaluation of an HPLC methodology for the quantification of the potent antimitotic compound (+)-discodermolide in the Caribbean marine sponge Discodermia dissoluta.

    PubMed

    Valderrama, Katherine; Castellanos, Leonardo; Zea, Sven

    2010-08-01

    The sponge Discodermia dissoluta is the source of the potent antimitotic compound (+)-discodermolide. The relatively abundant and shallow populations of this sponge in Santa Marta, Colombia, allow for studies to evaluate the natural and biotechnological supply options of (+)-discodermolide. In this work, an RP-HPLC-UV methodology for the quantification of (+)-discodermolide from sponge samples was tested and validated. Our protocol for extracting this compound from the sponge included lyophilization, exhaustive methanol extraction, partitioning using water and dichloromethane, purification of the organic fraction on RP-18 cartridges, and finally retrieving the (+)-discodermolide in the methanol-water (80:20 v/v) fraction. This fraction was injected into an HPLC system with an Xterra RP-18 column and a detection wavelength of 235 nm. The calibration curve was linear, making it possible to calculate the limits of detection and quantification for these experiments. The intra-day and inter-day precision showed relative standard deviations lower than 5%. The accuracy, determined as the percentage recovery, was 99.4%. Nine samples of the sponge from the Bahamas, Bonaire, Curaçao and Santa Marta had concentrations of (+)-discodermolide ranging from 5.3 to 29.3 µg/g of wet sponge. This methodology is quick and simple, allowing for quantification of (+)-discodermolide in sponges from natural environments, in situ cultures or dissociated cells.

  14. Validation of metabolomics analysis of human perilymph fluid using liquid chromatography-mass spectroscopy.

    PubMed

    Mavel, Sylvie; Lefèvre, Antoine; Bakhos, David; Dufour-Rainfray, Diane; Blasco, Hélène; Emond, Patrick

    2018-05-22

    Although there is some data from animal studies, the metabolome of inner ear fluid in humans remains unknown. Characterization of the metabolome of the perilymph would allow for better understanding of its role in auditory function and for identification of biomarkers that might allow prediction of response to therapeutics. There is a major technical challenge due to the small sample of perilymph fluid available for analysis (sub-microliter). The objectives of this study were to develop and validate a methodology for analysis of the perilymph metabolome using liquid chromatography-high resolution mass spectrometry (LC-HRMS). Due to the low availability of perilymph fluid, a methodological study was first performed using low volumes (0.8 μL) of cerebrospinal fluid (CSF) to optimize the LC-HRMS parameters using targeted and non-targeted metabolomics approaches. We obtained excellent parameters of reproducibility for about 100 metabolites. This methodology was then used to analyze perilymph fluid using two complementary chromatographic supports: reverse phase (RP-C18) and hydrophilic interaction liquid chromatography (HILIC). Both methods were highly robust and showed their complementarity, thus reinforcing the interest of combining these chromatographic supports. A fingerprint was obtained from 98 robust metabolites (analytical variability <30%), where amino acids (e.g., asparagine, valine, glutamine, alanine, etc.) and carboxylic acids and derivatives (e.g., lactate, carnitine, trigonelline, creatinine, etc.) were observed as first-order signals. This work lays the foundations of a robust analytical workflow for the exploration of the perilymph metabolome dedicated to the search for biomarkers for the diagnosis/prognosis of auditory pathologies. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Theoretical and methodological approaches in discourse analysis.

    PubMed

    Stevenson, Chris

    2004-01-01

    Discourse analysis (DA) embodies two main approaches: Foucauldian DA and radical social constructionist DA. Both are underpinned by social constructionism to a lesser or greater extent. Social constructionism has contested areas in relation to power, embodiment, and materialism, although Foucauldian DA does focus on the issue of power. Embodiment and materialism may be especially relevant for researchers of nursing where the physical body is prominent. However, the contested nature of social constructionism allows a fusion of theoretical and methodological approaches tailored to a specific research interest. In this paper, Chris Stevenson suggests a framework for working out and declaring the DA approach to be taken in relation to a research area, as well as to aid anticipating methodological critique. Method, validity, reliability and scholarship are discussed from within a discourse analytic frame of reference.

  16. Theoretical and methodological approaches in discourse analysis.

    PubMed

    Stevenson, Chris

    2004-10-01

    Discourse analysis (DA) embodies two main approaches: Foucauldian DA and radical social constructionist DA. Both are underpinned by social constructionism to a lesser or greater extent. Social constructionism has contested areas in relation to power, embodiment, and materialism, although Foucauldian DA does focus on the issue of power. Embodiment and materialism may be especially relevant for researchers of nursing where the physical body is prominent. However, the contested nature of social constructionism allows a fusion of theoretical and methodological approaches tailored to a specific research interest. In this paper, Chris Stevenson suggests a framework for working out and declaring the DA approach to be taken in relation to a research area, as well as to aid anticipating methodological critique. Method, validity, reliability and scholarship are discussed from within a discourse analytic frame of reference.

  17. Architecture, Design, and System; Performance Assessment and Development Methodology for Computer-Based Systems. Volume 1. Methodology Description, Discussion, and Assessment,

    DTIC Science & Technology

    1983-12-30

    Report NSWC TR 83-324 (AD-A146 577), Naval Surface Weapons Center, Silver Spring, MD.

  18. Three-dimensional viscous design methodology for advanced technology aircraft supersonic inlet systems

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.

    1983-01-01

    A broad program to develop advanced, reliable, and user-oriented three-dimensional viscous design techniques for supersonic inlet systems, and to encourage their transfer into the general user community, is discussed. Features of the program include: (1) develop effective methods of computing three-dimensional flows within a zonal modeling methodology; (2) ensure reasonable agreement between the analysis and selected sets of benchmark validation data; (3) build user orientation into the analysis; and (4) explore and develop advanced numerical methodology.

  19. Stakeholder analysis methodologies resource book

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babiuch, W.M.; Farhar, B.C.

    1994-03-01

    Stakeholder analysis allows analysts to identify how parties might be affected by government projects. This process involves identifying the likely impacts of a proposed action and stakeholder groups affected by that action. Additionally, the process involves assessing how these groups might be affected and suggesting measures to mitigate any adverse effects. Evidence suggests that the efficiency and effectiveness of government actions can be increased and adverse social impacts mitigated when officials understand how a proposed action might affect stakeholders. This report discusses how to conduct useful stakeholder analyses for government officials making decisions on energy-efficiency and renewable-energy technologies and their commercialization. It discusses methodological issues that may affect the validity and reliability of findings, including sampling, generalizability, validity, "uncooperative" stakeholder groups, using social indicators, and the effect of government regulations. The Appendix contains resource directories and a list of specialists in stakeholder analysis and involvement.

  20. Validation of source approval of HMA surface mix aggregate using spectrometer : final report.

    DOT National Transportation Integrated Search

    2016-04-01

    The main focus of this research project was to develop methodologies for the validation of source approval of hot : mix asphalt surface mix aggregate. In order to further enhance the validation process, a secondary focus was also to : create a spectr...

  1. Correcting Fallacies in Validity, Reliability, and Classification

    ERIC Educational Resources Information Center

    Sijtsma, Klaas

    2009-01-01

    This article reviews three topics from test theory that continue to raise discussion and controversy and capture test theorists' and constructors' interest. The first topic concerns the discussion of the methodology of investigating and establishing construct validity; the second topic concerns reliability and its misuse, alternative definitions…

  2. National survey of drinking and driving attitudes and behaviors : 2008. Volume 3, methodology report

    DOT National Transportation Integrated Search

    2010-08-01

    This report presents the details of the methodology used for the 2008 National Survey of Drinking and Driving Attitudes and Behaviors conducted by Gallup, Inc. for : the National Highway Traffic Safety Administration (NHTSA). This survey represents t...

  3. Prospective validation of pathologic complete response models in rectal cancer: Transferability and reproducibility.

    PubMed

    van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre

    2017-09-01

    Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences on prediction model performance. In this work, we perform a prospective validation of three pCR models, including information on whether this validation targets transferability or reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it would suggest a large difference in cohort characteristics, meaning we would validate the transferability of the model rather than reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between training and validation cohort for one of the three tested models [Area under the Receiver Operating Curve (AUC) of the cohort differences model: 0.85], signaling that the validation leans towards transferability. Two out of three models had a lower AUC for validation (0.66 and 0.58), and one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate if a validation targeted transferability (large differences between training/validation cohort) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
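    The sketch below illustrates the spirit of such a cohort-differences model: a classifier is trained to distinguish training-cohort from validation-cohort patients, and its AUC indicates whether a validation exercise is probing transferability (high AUC, large cohort differences) or reproducibility (AUC near 0.5). The patient features here are random stand-ins, and the exact modeling choices of the study are not reproduced.

```python
# Hedged sketch: classifier-based "cohort differences" AUC on synthetic cohorts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(42)
X_train_cohort = rng.normal(0.0, 1.0, size=(200, 5))    # training-cohort characteristics
X_valid_cohort = rng.normal(0.5, 1.0, size=(154, 5))    # validation cohort, shifted

X = np.vstack([X_train_cohort, X_valid_cohort])
cohort = np.r_[np.zeros(200), np.ones(154)]             # 0 = training, 1 = validation

clf = LogisticRegression(max_iter=1000)
scores = cross_val_predict(clf, X, cohort, cv=5, method="predict_proba")[:, 1]
print("cohort-differences AUC:", round(roc_auc_score(cohort, scores), 2))
```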

  4. Knowledge-based system verification and validation

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1990-01-01

    The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worcester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS for the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBS's cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBS's, as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBS's will be developed, and modification of the SSF to support these tools and methods will be addressed.

  5. Ground and Space Radar Volume Matching and Comparison Software

    NASA Technical Reports Server (NTRS)

    Morris, Kenneth; Schwaller, Mathew

    2010-01-01

    This software enables easy comparison of ground- and space-based radar observations. The software was initially designed to compare ground radar reflectivity from operational, ground-based S- and C-band meteorological radars with comparable measurements from the Tropical Rainfall Measuring Mission (TRMM) satellite's Precipitation Radar (PR) instrument. The software is also applicable to other ground-based and space-based radars. The ground and space radar volume matching and comparison software was developed in response to requirements defined by the Ground Validation System (GVS) of Goddard's Global Precipitation Measurement (GPM) project. This software innovation is specifically concerned with simplifying the comparison of ground- and space-based radar measurements for the purpose of GPM algorithm and data product validation. This software is unique in that it provides an operational environment to routinely create comparison products, and uses a direct geometric approach to derive common volumes of space- and ground-based radar data. In this approach, spatially coincident volumes are defined by the intersection of individual space-based Precipitation Radar rays with each of the conical elevation sweeps of the ground radar. Thus, the resampled volume elements of the space and ground radar reflectivity can be directly compared to one another.
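    The sketch below illustrates a heavily simplified version of this geometric matching idea: at one PR footprint, ground-radar beam-center heights are computed for each elevation sweep with the standard 4/3-earth-radius model, and PR range gates falling inside each beam's vertical extent are averaged. All numbers (ranges, beamwidth, reflectivities) are illustrative, and the operational software's full ray/sweep intersection is not reproduced.

```python
# Hedged sketch: averaging PR gates over ground-radar sweep depths at one footprint.
import numpy as np

RE_43 = 4.0 / 3.0 * 6371.0          # effective earth radius (km), 4/3-earth model

def beam_height_km(slant_range_km, elev_deg):
    """Height of the ground-radar beam centre above the radar (4/3-earth model)."""
    th = np.radians(elev_deg)
    return np.sqrt(slant_range_km**2 + RE_43**2 +
                   2.0 * slant_range_km * RE_43 * np.sin(th)) - RE_43

ground_range_km = 60.0                         # PR footprint distance from ground radar
sweeps_deg = np.array([0.5, 1.5, 2.4, 3.4])    # ground-radar elevation angles
beamwidth_deg = 1.0

# PR profile at the footprint: gate heights (km) and reflectivities (dBZ), synthetic.
pr_height_km = np.arange(0.25, 10.0, 0.25)
pr_dbz = 40.0 - 2.5 * pr_height_km

for elev in sweeps_deg:
    slant = ground_range_km / np.cos(np.radians(elev))      # small-angle approximation
    h_mid = beam_height_km(slant, elev)
    half_depth = slant * np.radians(beamwidth_deg) / 2.0     # half the beam's vertical extent
    in_beam = np.abs(pr_height_km - h_mid) <= half_depth
    if in_beam.any():
        # Average in linear reflectivity units, then convert back to dBZ.
        mean_dbz = 10 * np.log10(np.mean(10 ** (pr_dbz[in_beam] / 10)))
        print(f"elev {elev:.1f} deg: beam centre {h_mid:.2f} km, PR mean {mean_dbz:.1f} dBZ")
```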

  6. An Analysis of Measured Pressure Signatures From Two Theory-Validation Low-Boom Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    2003-01-01

    Two wing/fuselage/nacelle/fin concepts were designed to check the validity and the applicability of sonic-boom minimization theory, sonic-boom analysis methods, and low-boom design methodology in use at the end of the 1980s. Models of these concepts were built, and the pressure signatures they generated were measured in the wind tunnel. The results of these measurements led to three conclusions: (1) the existing methods could adequately predict sonic-boom characteristics of wing/fuselage/fin(s) configurations if the equivalent area distributions of each component were smooth and continuous; (2) these methods needed revision so the engine-nacelle volume and the nacelle-wing interference lift disturbances could be accurately predicted; and (3) current nacelle-configuration integration methods had to be updated. With these changes in place, the existing sonic-boom analysis and minimization methods could be effectively applied to supersonic-cruise concepts for acceptable/tolerable sonic-boom overpressures during cruise.

  7. Validation of the thermal challenge problem using Bayesian Belief Networks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McFarland, John; Swiler, Laura Painton

    The thermal challenge problem has been developed at Sandia National Laboratories as a testbed for demonstrating various types of validation approaches and prediction methods. This report discusses one particular methodology to assess the validity of a computational model given experimental data. This methodology is based on Bayesian Belief Networks (BBNs) and can incorporate uncertainty in experimental measurements and physical quantities as well as model uncertainties. The approach uses the prior and posterior distributions of model output to compute a validation metric based on Bayesian hypothesis testing (a Bayes' factor). This report discusses various aspects of the BBN, specifically in the context of the thermal challenge problem. A BBN is developed for a given set of experimental data in a particular experimental configuration. The development of the BBN, and the method for "solving" the BBN to develop the posterior distribution of model output through Markov chain Monte Carlo sampling, is discussed in detail. The use of the BBN to compute a Bayes' factor is demonstrated.
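    As a rough illustration of a Bayes-factor-style validation metric (not necessarily the exact formulation in the report), the sketch below compares the density of an experimental observation under a toy model's posterior-predictive output distribution against its density under the prior-predictive distribution, both estimated by Monte Carlo with kernel densities. The toy model, parameter prior, and measurement value are all invented.

```python
# Hedged sketch: Monte Carlo Bayes-factor-style metric from prior/posterior predictives.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

def model_output(theta):
    """Toy computational model: predicted response as a function of parameter theta."""
    return 10.0 * theta + rng.normal(0.0, 0.5, size=np.shape(theta))

theta_prior = rng.normal(1.0, 0.3, size=5000)          # prior samples of the parameter
y_exp = 11.8                                           # experimental measurement (toy)

# Crude posterior via importance weighting against a Gaussian measurement likelihood.
y_prior = model_output(theta_prior)
weights = np.exp(-0.5 * ((y_prior - y_exp) / 1.0) ** 2)
theta_post = rng.choice(theta_prior, size=5000, p=weights / weights.sum())
y_post = model_output(theta_post)

bayes_factor = gaussian_kde(y_post)(y_exp)[0] / gaussian_kde(y_prior)(y_exp)[0]
print("Bayes-factor-style validation metric:", round(bayes_factor, 2))
```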

  8. Dried Blood Spot Methodology in Combination With Liquid Chromatography/Tandem Mass Spectrometry Facilitates the Monitoring of Teriflunomide

    PubMed Central

    Lunven, Catherine; Turpault, Sandrine; Beyer, Yann-Joel; O'Brien, Amy; Delfolie, Astrid; Boyanova, Neli; Sanderink, Ger-Jan; Baldinetti, Francesca

    2016-01-01

    Background: Teriflunomide, a once-daily oral immunomodulator approved for treatment of relapsing-remitting multiple sclerosis, is eliminated slowly from plasma. If it is necessary to rapidly lower plasma concentrations of teriflunomide, an accelerated elimination procedure using cholestyramine or activated charcoal may be used. The current bioanalytical assay for determination of plasma teriflunomide concentration requires laboratory facilities for blood centrifugation and plasma storage. An alternative method, with potential for greater convenience, is dried blood spot (DBS) methodology. Analytical and clinical validations are required to switch from plasma to DBS (finger-prick sampling) methodology. Methods: Using blood samples from healthy subjects, an LC-MS/MS assay method for quantification of teriflunomide in DBS over a range of 0.01–10 mcg/mL was developed and validated for specificity, selectivity, accuracy, precision, reproducibility, and stability. Results were compared with those from the current plasma assay for determination of plasma teriflunomide concentration. Results: The method was specific and selective relative to endogenous compounds, with process efficiency ∼88%, and no matrix effect. Inaccuracy and imprecision for intraday and interday analyses were <15% at all concentrations tested. Quantification of teriflunomide in the DBS assay was not affected by blood deposit volume or punch position within the spot, and hematocrit level had a limited but acceptable effect on measurement accuracy. Teriflunomide was stable for at least 4 months at room temperature, and for at least 24 hours at 37°C with and without 95% relative humidity, to cover sampling, drying, and shipment conditions in the field. The correlation between DBS and plasma concentrations (R² = 0.97), with an average blood to plasma ratio of 0.59, was concentration independent and constant over time. Conclusions: DBS sampling is a simple and practical method for monitoring teriflunomide

  9. Handbook of the Economics of Education. Volume 4

    ERIC Educational Resources Information Center

    Hanushek, Erik A., Ed.; Machin, Stephen J., Ed.; Woessmann, Ludger, Ed.

    2011-01-01

    What is the value of an education? Volume 4 of the Handbooks in the Economics of Education combines recent data with new methodologies to examine this and related questions from diverse perspectives. School choice and school competition, educator incentives, the college premium, and other considerations help make sense of the investments and…

  10. A Methodological Critique of the ProPublica Surgeon Scorecard

    PubMed Central

    Friedberg, Mark W.; Pronovost, Peter J.; Shahian, David M.; Safran, Dana Gelb; Bilimoria, Karl Y.; Elliott, Marc N.; Damberg, Cheryl L.; Dimick, Justin B.; Zaslavsky, Alan M.

    2016-01-01

    Abstract On July 14, 2015, ProPublica published its Surgeon Scorecard, which displays “Adjusted Complication Rates” for individual, named surgeons for eight surgical procedures performed in hospitals. Public reports of provider performance have the potential to improve the quality of health care that patients receive. A valid performance report can drive quality improvement and usefully inform patients' choices of providers. However, performance reports with poor validity and reliability are potentially damaging to all involved. This article critiques the methods underlying the Scorecard and identifies opportunities for improvement. Until these opportunities are addressed, the authors advise users of the Scorecard—most notably, patients who might be choosing their surgeons—not to consider the Scorecard a valid or reliable predictor of the health outcomes any individual surgeon is likely to provide. The authors hope that this methodological critique will contribute to the development of more-valid and more-reliable performance reports in the future. PMID:28083411

  11. Application of Control Volume Analysis to Cerebrospinal Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Wei, Timothy; Cohen, Benjamin; Anor, Tomer; Madsen, Joseph

    2011-11-01

    Hydrocephalus is among the most common birth defects and can currently be neither prevented nor cured. Afflicted individuals face serious issues, which at present are too complicated and not well enough understood to treat via systematic therapies. This talk outlines the framework and application of a control volume methodology to clinical Phase Contrast MRI data. Specifically, integral control volume analysis utilizes a fundamental fluid dynamics methodology to quantify intracranial dynamics within a precise, direct, and physically meaningful framework. A chronically shunted, hydrocephalic patient in need of a revision procedure was used as an in vivo case study. Magnetic resonance velocity measurements within the patient's aqueduct were obtained in four biomedical states and were analyzed using the methods presented in this work. Pressure force estimates were obtained, showing distinct differences in amplitude, phase, and waveform shape for different intracranial states within the same individual. Thoughts on the physiological and diagnostic research and development implications/opportunities will be presented.

  12. Methodological Quality Assessment of Meta-Analyses of Hyperthyroidism Treatment.

    PubMed

    Qin, Yahong; Yao, Liang; Shao, Feifei; Yang, Kehu; Tian, Limin

    2018-01-01

    Hyperthyroidism is a common condition that is associated with increased morbidity and mortality. A number of meta-analyses (MAs) have assessed the therapeutic measures for hyperthyroidism, including antithyroid drugs, surgery, and radioiodine; however, their methodological quality has not been evaluated. This study evaluated the methodological quality and summarized the evidence obtained from MAs of hyperthyroidism treatments (radioiodine, antithyroid drugs, and surgery). We searched the PubMed, EMBASE, Cochrane Library, Web of Science, and Chinese Biomedical Literature Database databases. Two investigators independently assessed the meta-analysis titles and abstracts for inclusion. Methodological quality was assessed using the validated AMSTAR (Assessing the Methodological Quality of Systematic Reviews) tool. A total of 26 MAs fulfilled the inclusion criteria. Based on the AMSTAR scores, the average methodological quality was 8.31, with large variability ranging from 4 to 11. The methodological quality of English meta-analyses was better than that of Chinese meta-analyses. Cochrane reviews had better methodological quality than non-Cochrane reviews due to better study selection and data extraction, the inclusion of unpublished studies, and better reporting of study characteristics. The authors did not report conflicts of interest in 53.8% of meta-analyses, and 19.2% did not report the harmful effects of treatment. Publication bias was not assessed in 38.5% of meta-analyses, and 19.2% did not report the follow-up time. Large-scale assessment of the methodological quality of meta-analyses of hyperthyroidism treatment highlighted methodological strengths and weaknesses. Consideration of scientific quality when formulating conclusions should be made explicit. Future meta-analyses should improve the reporting of conflicts of interest. © Georg Thieme Verlag KG Stuttgart · New York.

  13. Evaluation and validation of social and psychological markers in randomised trials of complex interventions in mental health: a methodological research programme.

    PubMed

    Dunn, Graham; Emsley, Richard; Liu, Hanhua; Landau, Sabine; Green, Jonathan; White, Ian; Pickles, Andrew

    2015-11-01

    The development of the capability and capacity to evaluate the outcomes of trials of complex interventions is a key priority of the National Institute for Health Research (NIHR) and the Medical Research Council (MRC). The evaluation of complex treatment programmes for mental illness (e.g. cognitive-behavioural therapy for depression or psychosis) not only is a vital component of this research in its own right but also provides a well-established model for the evaluation of complex interventions in other clinical areas. In the context of efficacy and mechanism evaluation (EME) there is a particular need for robust methods for making valid causal inference in explanatory analyses of the mechanisms of treatment-induced change in clinical outcomes in randomised clinical trials. The key objective was to produce statistical methods to enable trial investigators to make valid causal inferences about the mechanisms of treatment-induced change in these clinical outcomes. The primary objective of this report is to disseminate this methodology, aiming specifically at trial practitioners. The three components of the research were (1) the extension of instrumental variable (IV) methods to latent growth curve models and growth mixture models for repeated-measures data; (2) the development of designs and regression methods for parallel trials; and (3) the evaluation of the sensitivity/robustness of findings to the assumptions necessary for model identifiability. We illustrate our methods with applications from psychological and psychosocial intervention trials, keeping the technical details to a minimum, leaving the reporting of the more theoretical and mathematically demanding results for publication in appropriate specialist journals. We show how to estimate treatment effects and introduce methods for EME. We explain the use of IV methods and principal stratification to evaluate the role of putative treatment effect mediators and therapeutic process measures. These results are

  14. Relationship of Temporal Lobe Volumes to Neuropsychological Test Performance in Healthy Children

    PubMed Central

    Wells, Carolyn T.; Matson, Melissa A.; Kates, Wendy R.; Hay, Trisha; Horska, Alena

    2008-01-01

    Ecological validity of neuropsychological assessment includes the ability of tests to predict real-world functioning and/or covary with brain structures. Studies have examined the relationship between adaptive skills and test performance, with less focus on the association between regional brain volumes and neurobehavioral function in healthy children. The present study examined the relationship between temporal lobe gray matter volumes and performance on two neuropsychological tests hypothesized to measure temporal lobe functioning (Visual Perception-VP; Peabody Picture Vocabulary Test, Third Edition-PPVT-III) in 48 healthy children ages 5-18 years. After controlling for age and gender, left and right temporal and left occipital volumes were significant predictors of VP. Left and right frontal and temporal volumes were significant predictors of PPVT-III. Temporal volume emerged as the strongest lobar correlate with both tests. These results provide convergent and discriminant validity supporting VP as a measure of the “what” system; but suggest the PPVT-III as a complex measure of receptive vocabulary, potentially involving executive function demands. PMID:18513844

  15. Methodology for the systems engineering process. Volume 3: Operational availability

    NASA Technical Reports Server (NTRS)

    Nelson, J. H.

    1972-01-01

    A detailed description and explanation of the operational availability parameter is presented. The fundamental mathematical basis for operational availability is developed, and its relationship to a system's overall performance effectiveness is illustrated within the context of identifying specific availability requirements. Thus, in attempting to provide a general methodology for treating both hypothetical and existing availability requirements, the concept of an availability state, in conjunction with the more conventional probability-time capability, is investigated. In this respect, emphasis is focused upon a balanced analytical and pragmatic treatment of operational availability within the system design process. For example, several applications of operational availability to typical aerospace systems are presented, encompassing the techniques of Monte Carlo simulation, system performance availability trade-off studies, analytical modeling of specific scenarios, as well as the determination of launch-on-time probabilities. Finally, an extensive bibliography is provided to indicate further levels of depth and detail of the operational availability parameter.
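    Since the record above names Monte Carlo simulation as one technique for treating operational availability, the sketch below shows the standard steady-state form A_o = uptime / (uptime + downtime) estimated by simulation. The failure and repair time distributions and their means are illustrative assumptions, not values from the report.

```python
# Hedged sketch: Monte Carlo estimate of steady-state operational availability.
import numpy as np

rng = np.random.default_rng(7)
mtbf, mttr = 500.0, 20.0            # mean time between failures / to repair (hours)
n_cycles = 100_000

up = rng.exponential(mtbf, n_cycles)     # operating intervals
down = rng.exponential(mttr, n_cycles)   # repair/downtime intervals

availability = up.sum() / (up.sum() + down.sum())
print(f"simulated A_o = {availability:.4f}  "
      f"(analytic MTBF/(MTBF+MTTR) = {mtbf/(mtbf+mttr):.4f})")
```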

  16. Development and application of an analysis methodology for interpreting ambiguous historical pressure data in the WIPP gas-generation experiments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felicione, F. S.

    2006-01-23

    variation in the headspace volume caused by thermal expansion and contraction within the brine and waste. A further effort was directed at recovering useful results from the voluminous archived pressure data. An analytic methodology to do this was developed. This methodology was applied to each archived pressure measurement to nullify temperature and other effects to yield an adjusted pressure, from which gas-generation rates could be calculated. A review of the adjusted-pressure data indicated that generated-gas concentrations among these containers after approximately 3.25 years of test operation ranged from zero to over 17,000 ppm by volume. Four test containers experienced significant gas generation. All test containers that showed evidence of significant gas generation contained carbon steel in the waste, indicating that corrosion was the predominant source of gas generation.
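    The sketch below illustrates the kind of pressure adjustment described above: each measured headspace pressure is normalized to a reference temperature with the ideal-gas law so that thermally driven swings are not mistaken for gas generation, and a rate is then fitted to the adjusted series. All numbers are illustrative, and headspace-volume variation is ignored for simplicity.

```python
# Hedged sketch: temperature-adjusted pressures and a gas-generation rate estimate.
import numpy as np

T_ref = 300.0                                   # reference temperature (K)
headspace_L = 2.0                               # container headspace volume (liters)
R = 0.082057                                    # gas constant, L*atm/(mol*K)

t_days = np.array([0, 100, 200, 300, 400])
P_meas = np.array([1.000, 1.010, 1.006, 1.022, 1.030])   # measured pressure (atm)
T_meas = np.array([300.0, 303.0, 298.0, 302.0, 301.0])   # measured temperature (K)

P_adj = P_meas * T_ref / T_meas                 # temperature-nullified ("adjusted") pressure
gas_mol = (P_adj - P_adj[0]) * headspace_L / (R * T_ref)  # cumulative generated gas (mol)
rate = np.polyfit(t_days, gas_mol, 1)[0]        # mol/day from a linear fit
print("adjusted pressures (atm):", np.round(P_adj, 4))
print(f"gas-generation rate ~ {rate:.2e} mol/day")
```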

  17. Performance evaluation in full-mission simulation - Methodological advances and research challenges. [in air transport operations

    NASA Technical Reports Server (NTRS)

    Chidester, Thomas R.; Kanki, Barbara G.; Helmreich, Robert L.

    1989-01-01

    The crew-factors research program at NASA Ames has developed a methodology for studying the impact of a variety of variables on the effectiveness of crews flying realistic but high workload simulated trips. The validity of investigations using the methodology is enhanced by careful design of full-mission scenarios, performance assessment using converging sources of data, and recruitment of representative subjects. Recently, portions of this methodology have been adapted for use in assessing the effectiveness of crew coordination among participants in line-oriented flight training.

  18. Downward longwave surface radiation from sun-synchronous satellite data - Validation of methodology

    NASA Technical Reports Server (NTRS)

    Darnell, W. L.; Gupta, S. K.; Staylor, W. F.

    1986-01-01

    An extensive study has been carried out to validate a satellite technique for estimating downward longwave radiation at the surface. The technique, mostly developed earlier, uses operational sun-synchronous satellite data and a radiative transfer model to provide the surface flux estimates. The satellite-derived fluxes were compared directly with corresponding ground-measured fluxes at four different sites in the United States for a common one-year period. This provided a study of seasonal variations as well as a diversity of meteorological conditions. Dome heating errors in the ground-measured fluxes were also investigated and were corrected prior to the comparisons. Comparison of the monthly averaged fluxes from the satellite and ground sources for all four sites for the entire year showed a correlation coefficient of 0.98 and a standard error of estimate of 10 W/sq m. A brief description of the technique is provided, and the results validating the technique are presented.
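    The comparison statistics quoted above (a correlation coefficient and a standard error of estimate between satellite-derived and ground-measured fluxes) are straightforward to compute; the sketch below does so for twelve illustrative monthly flux values, not the study's data.

```python
# Hedged sketch: correlation and standard error of estimate for flux comparisons.
import numpy as np

ground = np.array([310., 325., 355., 390., 420., 445., 450., 440., 405., 370., 335., 315.])
satellite = ground + np.array([-8., 5., -12., 9., -6., 11., -4., 7., -10., 6., -9., 4.])

r = np.corrcoef(ground, satellite)[0, 1]
# Standard error of estimate of satellite flux regressed on ground flux.
slope, intercept = np.polyfit(ground, satellite, 1)
residuals = satellite - (slope * ground + intercept)
see = np.sqrt(np.sum(residuals**2) / (len(ground) - 2))
print(f"correlation r = {r:.3f}, standard error of estimate = {see:.1f} W/m^2")
```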

  19. Handbook of the Economics of Education. Volume 3

    ERIC Educational Resources Information Center

    Hanushek, Eric A., Ed.; Machin, Stephen J., Ed.; Woessmann, Ludger, Ed.

    2011-01-01

    How does education affect economic and social outcomes, and how can it inform public policy? Volume 3 of the Handbooks in the Economics of Education uses newly available high quality data from around the world to address these and other core questions. With the help of new methodological approaches, contributors cover econometric methods and…

  20. Persian Basic Course: Volume I, Lesson 1-18.

    ERIC Educational Resources Information Center

    Defense Language Inst., Monterey, CA.

    The first of 10 volumes of a basic course in Persian, designed for use in the Defense Language Institute's intensive programs, is presented. The course, employing the audiolingual methodology, is designed to train native English speakers to level three proficiency in comprehension and speaking and level two proficiency in reading and writing…

  1. Cleaning verification: A five parameter study of a Total Organic Carbon method development and validation for the cleaning assessment of residual detergents in manufacturing equipment.

    PubMed

    Li, Xue; Ahmad, Imad A Haidar; Tam, James; Wang, Yan; Dao, Gina; Blasko, Andrei

    2018-02-05

    A Total Organic Carbon (TOC) based analytical method to quantitate trace residues of clean-in-place (CIP) detergents CIP100® and CIP200® on the surfaces of pharmaceutical manufacturing equipment was developed and validated. Five factors affecting the development and validation of the method were identified: diluent composition, diluent volume, extraction method, location for TOC sample preparation, and oxidant flow rate. Key experimental parameters were optimized to minimize contamination and to improve the sensitivity, recovery, and reliability of the method. The optimized concentration of the phosphoric acid in the swabbing solution was 0.05 M, and the optimal volume of the sample solution was 30 mL. The swab extraction method was 1 min sonication. The use of a clean room, as compared to an isolated lab environment, was not required for method validation. The method was demonstrated to be linear with a correlation coefficient (R) of 0.9999. The average recoveries from stainless steel surfaces at multiple spike levels were >90%. The repeatability and intermediate precision results were ≤5% across the 2.2-6.6 ppm range (50-150% of the target maximum carryover (MACO) limit). The method was also shown to be sensitive with a detection limit (DL) of 38 ppb and a quantitation limit (QL) of 114 ppb. The method validation demonstrated that the developed method is suitable for its intended use. The methodology developed in this study is generally applicable to the cleaning verification of any organic detergents used for the cleaning of pharmaceutical manufacturing equipment made of electropolished stainless steel material. Copyright © 2017 Elsevier B.V. All rights reserved.
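    The validation figures of merit reported above (percent recovery, RSD, and detection/quantitation limits) follow routine calculations; the sketch below applies the common 3.3·σ/S and 10·σ/S (ICH-style) DL/QL formulas to illustrative spike-recovery and calibration data, not the study's measurements.

```python
# Hedged sketch: recovery, RSD, and DL/QL from illustrative validation data.
import numpy as np

# Spike-recovery data at one level (ppm TOC): spiked vs. measured amounts.
spiked = np.array([4.4, 4.4, 4.4, 4.4, 4.4, 4.4])
measured = np.array([4.31, 4.25, 4.40, 4.18, 4.36, 4.29])

recovery_pct = 100.0 * measured.mean() / spiked.mean()
rsd_pct = 100.0 * measured.std(ddof=1) / measured.mean()

# Calibration curve (ppm vs. detector response) for DL/QL estimation.
conc = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])
resp = np.array([0.052, 0.101, 0.198, 0.405, 0.597, 0.801])
slope, intercept = np.polyfit(conc, resp, 1)
sigma = np.std(resp - (slope * conc + intercept), ddof=2)   # residual std. dev.
dl_ppm, ql_ppm = 3.3 * sigma / slope, 10.0 * sigma / slope

print(f"recovery = {recovery_pct:.1f}%, RSD = {rsd_pct:.1f}%")
print(f"DL ~ {dl_ppm*1000:.0f} ppb, QL ~ {ql_ppm*1000:.0f} ppb")
```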

  2. SeaWiFS Postlaunch Calibration and Validation Analyses

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine (Editor); McClain, Charles R.; Barnes, Robert A.; Eplee, Robert E., Jr.; Franz, Bryan A.; Hsu, N. Christina; Patt, Frederick S.; Pietras, Christophe M.; Robinson, Wayne D.

    2000-01-01

    The effort to resolve data quality issues and improve on the initial data evaluation methodologies of the SeaWiFS Project was an extensive one. These evaluations have resulted, to date, in three major reprocessings of the entire data set where each reprocessing addressed the data quality issues that could be identified up to the time of the reprocessing. Three volumes of the SeaWiFS Postlaunch Technical Report Series (Volumes 9, 10, and 11) are needed to document the improvements implemented since launch. Volume 10 continues the sequential presentation of postlaunch data analysis and algorithm descriptions begun in Volume 9. Chapter 1 of Volume 10 describes an absorbing aerosol index, similar to that produced by the Total Ozone Mapping Spectrometer (TOMS) Project, which is used to flag pixels contaminated by absorbing aerosols, such as, dust and smoke. Chapter 2 discusses the algorithm being used to remove SeaWiFS out-of-band radiance from the water-leaving radiances. Chapter 3 provides an itemization of all significant changes in the processing algorithms for each of the first three reprocessings. Chapter 4 shows the time series of global clear water and deep-water (depths greater than 1,000m) bio-optical and atmospheric properties (normalized water-leaving radiances, chlorophyll, atmospheric optical depth, etc.) based on the eight-day composites as a check on the sensor calibration stability. Chapter 5 examines the variation in the derived products with scan angle using high resolution data around Hawaii to test for residual scan modulation effects and atmospheric correction biases. Chapter 6 provides a methodology for evaluating the atmospheric correction algorithm and atmospheric derived products using ground-based observations. Similarly, Chapter 7 presents match-up comparisons of coincident satellite and in situ data to determine the accuracy of the water-leaving radiances, chlorophyll a, and K(490) products.

  3. Methodological quality of meta-analyses of single-case experimental studies.

    PubMed

    Jamshidi, Laleh; Heyvaert, Mieke; Declercq, Lies; Fernández-Castilla, Belén; Ferron, John M; Moeyaert, Mariola; Beretvas, S Natasha; Onghena, Patrick; Van den Noortgate, Wim

    2017-12-28

    Methodological rigor is a fundamental factor in the validity and credibility of the results of a meta-analysis. Following an increasing interest in single-case experimental design (SCED) meta-analyses, the current study investigates the methodological quality of SCED meta-analyses. We assessed the methodological quality of 178 SCED meta-analyses published between 1985 and 2015 through the modified Revised-Assessment of Multiple Systematic Reviews (R-AMSTAR) checklist. The main finding of the current review is that the methodological quality of the SCED meta-analyses has increased over time, but is still low according to the R-AMSTAR checklist. A remarkable percentage of the studies (93.80% of the included SCED meta-analyses) did not even reach the midpoint score (22, on a scale of 0-44). The mean and median methodological quality scores were 15.57 and 16, respectively. Relatively high scores were observed for "providing the characteristics of the included studies" and "doing comprehensive literature search". The key areas of deficiency were "reporting an assessment of the likelihood of publication bias" and "using the methods appropriately to combine the findings of studies". Although the results of the current review reveal that the methodological quality of the SCED meta-analyses has increased over time, still more efforts are needed to improve their methodological quality. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Simulation and optimization of volume holographic imaging systems in Zemax.

    PubMed

    Wissmann, Patrick; Oh, Se Baek; Barbastathis, George

    2008-05-12

    We present a new methodology for ray-tracing analysis of volume holographic imaging (VHI) systems. Using the k-sphere formulation, we apply geometrical relationships to describe the volumetric diffraction effects imposed on rays passing through a volume hologram. We explain the k-sphere formulation in conjunction with the ray-tracing process and describe its implementation in a Zemax UDS (User Defined Surface). We conclude with examples of simulation and optimization results and demonstrate the consistency and usefulness of the proposed model.

  5. Development of analytical methodologies to assess recalcitrant pesticide bioremediation in biobeds at laboratory scale.

    PubMed

    Rivero, Anisleidy; Niell, Silvina; Cerdeiras, M Pía; Heinzen, Horacio; Cesio, María Verónica

    2016-06-01

    To assess recalcitrant pesticide bioremediation it is necessary to gradually increase the complexity of the biological system used in order to design an effective biobed assembly. Each step towards this effective biobed design needs a suitable, validated analytical methodology that allows a correct evaluation of the dissipation and bioconversion. Methods with low recoveries could give a false idea of a successful biodegradation process. To address this situation, different methods were developed and validated for the simultaneous determination of endosulfan, its three main metabolites, and chlorpyrifos in increasingly complex matrices where the bioconvertor basidiomycete Abortiporus biennis could grow. The matrices were culture media, bran, and finally a laboratory biomix composed of bran, peat and soil. The methodology for the analysis of the first evaluated matrix has already been reported. The methodologies developed for the other two systems are presented in this work. The targeted analytes were extracted from fungi growing over bran in semisolid YNB (Yeast Nitrogen Base) media with acetonitrile using shaker-assisted extraction. The salting-out step was performed with MgSO4 and NaCl, and the extracts were analyzed by GC-ECD. The best methodology was fully validated for all the evaluated analytes at 1 and 25 mg kg⁻¹, yielding recoveries between 72% and 109% and RSDs <11% in all cases. The application of this methodology proved that A. biennis is able to dissipate 94% of endosulfan and 87% of chlorpyrifos after 90 days. Having assessed that A. biennis growing over bran can metabolize the studied pesticides, the next step faced was the development and validation of an analytical procedure to evaluate the analytes in a laboratory scale biobed composed of 50% of bran, 25% of peat and 25% of soil together with fungal mycelium. From the different procedures assayed, only ultrasound-assisted extraction with ethyl acetate allowed recoveries between 80% and 110% with RSDs

  6. Reverse Engineering Validation using a Benchmark Synthetic Gene Circuit in Human Cells

    PubMed Central

    Kang, Taek; White, Jacob T.; Xie, Zhen; Benenson, Yaakov; Sontag, Eduardo; Bleris, Leonidas

    2013-01-01

    Multi-component biological networks are often understood incompletely, in large part due to the lack of reliable and robust methodologies for network reverse engineering and characterization. As a consequence, developing automated and rigorously validated methodologies for unraveling the complexity of biomolecular networks in human cells remains a central challenge to life scientists and engineers. Today, when it comes to experimental and analytical requirements, there exists a great deal of diversity in reverse engineering methods, which renders the independent validation and comparison of their predictive capabilities difficult. In this work we introduce an experimental platform customized for the development and verification of reverse engineering and pathway characterization algorithms in mammalian cells. Specifically, we stably integrate a synthetic gene network in human kidney cells and use it as a benchmark for validating reverse engineering methodologies. The network, which is orthogonal to endogenous cellular signaling, contains a small set of regulatory interactions that can be used to quantify the reconstruction performance. By performing successive perturbations to each modular component of the network and comparing protein and RNA measurements, we study the conditions under which we can reliably reconstruct the causal relationships of the integrated synthetic network. PMID:23654266

  7. Reverse engineering validation using a benchmark synthetic gene circuit in human cells.

    PubMed

    Kang, Taek; White, Jacob T; Xie, Zhen; Benenson, Yaakov; Sontag, Eduardo; Bleris, Leonidas

    2013-05-17

    Multicomponent biological networks are often understood incompletely, in large part due to the lack of reliable and robust methodologies for network reverse engineering and characterization. As a consequence, developing automated and rigorously validated methodologies for unraveling the complexity of biomolecular networks in human cells remains a central challenge to life scientists and engineers. Today, when it comes to experimental and analytical requirements, there exists a great deal of diversity in reverse engineering methods, which renders the independent validation and comparison of their predictive capabilities difficult. In this work we introduce an experimental platform customized for the development and verification of reverse engineering and pathway characterization algorithms in mammalian cells. Specifically, we stably integrate a synthetic gene network in human kidney cells and use it as a benchmark for validating reverse engineering methodologies. The network, which is orthogonal to endogenous cellular signaling, contains a small set of regulatory interactions that can be used to quantify the reconstruction performance. By performing successive perturbations to each modular component of the network and comparing protein and RNA measurements, we study the conditions under which we can reliably reconstruct the causal relationships of the integrated synthetic network.
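    The perturbation logic described above (perturb each module in turn, compare readouts to the unperturbed baseline, and keep significant responses as inferred causal edges) can be illustrated with a toy linear network, as in the sketch below. The network, dynamics, noise level, and threshold are all invented; the benchmark gene circuit itself is not modeled.

```python
# Hedged sketch: inferring edges of a toy network from single-node perturbations.
import numpy as np

rng = np.random.default_rng(5)
true_edges = np.array([[0, 1, 0],      # node 0 -> node 1
                       [0, 0, 1],      # node 1 -> node 2
                       [0, 0, 0]])
n = true_edges.shape[0]

def readout(knockdown=None):
    """One-step response readout with measurement noise (toy linear model)."""
    x = np.ones(n)
    if knockdown is not None:
        x[knockdown] = 0.1                          # perturb one module
    x = x + 0.5 * true_edges.T @ (x - 1.0)          # direct targets respond
    return x + rng.normal(0.0, 0.02, n)

baseline = readout()
inferred = np.zeros_like(true_edges)
for src in range(n):
    delta = readout(knockdown=src) - baseline
    inferred[src] = (np.abs(delta) > 0.1).astype(int)
    inferred[src, src] = 0                          # ignore the perturbed node itself
print("inferred adjacency:\n", inferred)
```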

  8. MUSIC-Expected maximization gaussian mixture methodology for clustering and detection of task-related neuronal firing rates.

    PubMed

    Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A

    2017-01-15

    Researchers often rely on simple methods to identify involvement of neurons in a particular motor task. The historical approach has been to inspect large groups of neurons and subjectively separate neurons into groups based on the expertise of the investigator. In cases where neuron populations are small it is reasonable to inspect these neuronal recordings and their firing rates carefully to avoid data omissions. In this paper, a new methodology is presented for automatic objective classification of neurons recorded in association with behavioral tasks into groups. By identifying characteristics of neurons in a particular group, the investigator can then identify functional classes of neurons based on their relationship to the task. The methodology is based on integration of a multiple signal classification (MUSIC) algorithm to extract relevant features from the firing rate and an expectation-maximization Gaussian mixture algorithm (EM-GMM) to cluster the extracted features. The methodology is capable of identifying and clustering similar firing rate profiles automatically based on specific signal features. An empirical wavelet transform (EWT) was used to validate the features found in the MUSIC pseudospectrum and the resulting signal features captured by the methodology. Additionally, this methodology was used to inspect behavioral elements of neurons to physiologically validate the model. This methodology was tested using a set of data collected from awake behaving non-human primates. Copyright © 2016 Elsevier B.V. All rights reserved.
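    The clustering half of the pipeline described above can be sketched with an EM-fitted Gaussian mixture grouping neurons by features of their task-aligned firing rates, as below. For simplicity the features are mean rate and modulation depth rather than a MUSIC pseudospectrum, and the firing rates are simulated rather than recorded.

```python
# Hedged sketch: EM Gaussian-mixture clustering of simulated firing-rate features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)
t = np.linspace(0, 2, 200)                       # 2 s trial, 200 bins

# Simulate 30 neurons: half task-modulated (rate bump mid-trial), half flat.
rates = []
for i in range(30):
    base = rng.uniform(5, 20)
    bump = (rng.uniform(10, 30) if i < 15 else 0.0) * np.exp(-((t - 1.0) / 0.2) ** 2)
    rates.append(base + bump + rng.normal(0, 1.0, t.size))
rates = np.array(rates)

# Feature extraction: mean rate and depth of task-related modulation per neuron.
features = np.column_stack([rates.mean(axis=1),
                            rates.max(axis=1) - np.median(rates, axis=1)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
labels = gmm.predict(features)
print("cluster sizes:", np.bincount(labels))
```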

  9. Social validity in single-case research: A systematic literature review of prevalence and application.

    PubMed

    Snodgrass, Melinda R; Chung, Moon Y; Meadan, Hedda; Halle, James W

    2018-03-01

    Single-case research (SCR) has been a valuable methodology in special education research. Montrose Wolf (1978), an early pioneer in single-case methodology, coined the term "social validity" to refer to the social importance of the goals selected, the acceptability of procedures employed, and the effectiveness of the outcomes produced in applied investigations. Since 1978, many contributors to SCR have included social validity as a feature of their articles and several authors have examined the prevalence and role of social validity in SCR. We systematically reviewed all SCR published in six highly-ranked special education journals from 2005 to 2016 to establish the prevalence of social validity assessments and to evaluate their scientific rigor. We found relatively low, but stable prevalence with only 28 publications addressing all three factors of the social validity construct (i.e., goals, procedures, outcomes). We conducted an in-depth analysis of the scientific rigor of these 28 publications. Social validity remains an understudied construct in SCR, and the scientific rigor of social validity assessments is often lacking. Implications and future directions are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. [Optimization of Polysaccharide Extraction from Spirodela polyrrhiza by Plackett-Burman Design Combined with Box-Behnken Response Surface Methodology].

    PubMed

    Jiang, Zheng; Wang, Hong; Wu, Qi-nan

    2015-06-01

    To optimize the process of polysaccharide extraction from Spirodela polyrrhiza. Five factors related to the polysaccharide extraction rate were screened with a Plackett-Burman design. Based on this screening, three factors (alcohol volume fraction, extraction temperature, and ratio of material to liquid) were carried forward into a Box-Behnken response surface design. The order of influence of the three factors on the extraction rate of polysaccharide from Spirodela polyrrhiza was: extraction temperature, alcohol volume fraction, ratio of material to liquid. According to the Box-Behnken response surface, the best extraction conditions were an alcohol volume fraction of 81%, a ratio of material to liquid of 1:42, an extraction temperature of 100 degrees C, and an extraction time of 60 min repeated four times. The combination of Plackett-Burman design and Box-Behnken response surface methodology used to optimize the polysaccharide extraction process in this study is effective and stable.
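
    For readers unfamiliar with the design, a minimal sketch of building a three-factor Box-Behnken design matrix in coded units is shown below; the factor mapping mentioned in the comments is illustrative and does not reproduce the study's actual levels.

      import itertools
      import numpy as np

      def box_behnken_3(center_points=3):
          """Box-Behnken design for 3 coded factors: edge midpoints plus center runs."""
          runs = []
          for i, j in itertools.combinations(range(3), 2):
              for a in (-1, 1):
                  for b in (-1, 1):
                      row = [0, 0, 0]
                      row[i], row[j] = a, b
                      runs.append(row)
          runs += [[0, 0, 0]] * center_points
          return np.array(runs, dtype=float)

      # Coded levels (-1, 0, +1) would be mapped to the three factors above
      # (alcohol volume fraction, extraction temperature, material-to-liquid
      # ratio); the mapping ranges are an assumption, not the paper's settings.
      design = box_behnken_3()
      print(design.shape)   # (15, 3): 12 edge-midpoint runs + 3 center replicates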

  11. Validation of high-throughput single cell analysis methodology.

    PubMed

    Devonshire, Alison S; Baradez, Marc-Olivier; Morley, Gary; Marshall, Damian; Foy, Carole A

    2014-05-01

    High-throughput quantitative polymerase chain reaction (qPCR) approaches enable profiling of multiple genes in single cells, bringing new insights to complex biological processes and offering opportunities for single cell-based monitoring of cancer cells and stem cell-based therapies. However, workflows with well-defined sources of variation are required for clinical diagnostics and testing of tissue-engineered products. In a study of neural stem cell lines, we investigated the performance of lysis, reverse transcription (RT), preamplification (PA), and nanofluidic qPCR steps at the single cell level in terms of efficiency, precision, and limit of detection. We compared protocols using a separate lysis buffer with cell capture directly in RT-PA reagent. The two methods were found to have similar lysis efficiencies, whereas the direct RT-PA approach showed improved precision. Digital PCR was used to relate preamplified template copy numbers to Cq values and reveal where low-quality signals may affect the analysis. We investigated the impact of calibration and data normalization strategies as a means of minimizing the impact of inter-experimental variation on gene expression values and found that both approaches can improve data comparability. This study provides validation and guidance for the application of high-throughput qPCR workflows for gene expression profiling of single cells. Copyright © 2014 Elsevier Inc. All rights reserved.
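
    A minimal sketch of one quantitative step described above, relating Cq values to template copy numbers through a standard curve (using NumPy); the dilution-series numbers are invented for illustration and are not the study's data.

      import numpy as np

      # Hypothetical dilution series: template copies (e.g., calibrated by digital
      # PCR) and the Cq measured for each dilution.
      copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5])
      cq     = np.array([31.8, 28.4, 25.1, 21.7, 18.3])

      slope, intercept = np.polyfit(np.log10(copies), cq, 1)
      efficiency = 10 ** (-1.0 / slope) - 1.0   # 1.0 corresponds to 100% per cycle

      def copies_from_cq(cq_value):
          """Invert the standard curve to estimate template copies for a new Cq."""
          return 10 ** ((cq_value - intercept) / slope)

      print(f"efficiency = {efficiency:.2f}; Cq 24.0 -> {copies_from_cq(24.0):.0f} copies")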

  12. Validation of virtual learning object to support the teaching of nursing care systematization.

    PubMed

    Salvador, Pétala Tuani Candido de Oliveira; Mariz, Camila Maria Dos Santos; Vítor, Allyne Fortes; Ferreira Júnior, Marcos Antônio; Fernandes, Maria Isabel Domingues; Martins, José Carlos Amado; Santos, Viviane Euzébia Pereira

    2018-01-01

    To describe the content validation process of a Virtual Learning Object to support the teaching of nursing care systematization to nursing professionals. This was a methodological study with a quantitative approach, developed according to the methodological framework of Pasquali's psychometrics and conducted from March to July 2016 using a two-stage Delphi procedure. In the Delphi 1 stage, eight judges evaluated the Virtual Object; in the Delphi 2 stage, seven judges evaluated it. The seven screens of the Virtual Object were analyzed for the suitability of their contents. The Virtual Learning Object to support the teaching of nursing care systematization was considered valid in its content, with a Total Content Validity Coefficient of 0.96. It is expected that the Virtual Object can support the teaching of nursing care systematization in light of appropriate and effective pedagogical approaches.

  13. A new hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is based on application of transform techniques in conjunction with classical Galerkin schemes and is a hybrid approach. The purpose of this paper is to provide a viable hybrid computational methodology for applicability to general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems. The proposed transfinite element methodology successfully provides a viable computational approach and numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.

  14. A new gas dilution method for measuring body volume.

    PubMed Central

    Nagao, N; Tamaki, K; Kuchiki, T; Nagao, M

    1995-01-01

    This study was designed to examine the validity of a new gas dilution method (GD) for measuring human body volume and to compare its accuracy with the results obtained by the underwater weighing method (UW). We measured the volume of plastic bottles and 16 subjects (including two females), aged 18-42 years with each method. For the bottles, the volume measured by hydrostatic weighing was correlated highly (r = 1.000) with that measured by the new gas dilution method. For the subjects, the body volume determined by the two methods was significantly correlated (r = 0.998). However, the subject's volume measured by the gas dilution method was significantly larger than that by underwater weighing method. There was significant correlation (r = 0.806) between GD volume-UW volume and the body mass index (BMI), so that UW volume could be predicted from GD volume and BMI. It can be concluded that the new gas dilution method offers promising possibilities for future research in the population who cannot submerge underwater. PMID:7551760

  15. Failure mode and effects analysis outputs: are they valid?

    PubMed Central

    2012-01-01

    Background Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Methods Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: · Face validity: by comparing the FMEA participants’ mapped processes with observational work. · Content validity: by presenting the FMEA findings to other healthcare professionals. · Criterion validity: by comparing the FMEA findings with data reported on the trust’s incident report database. · Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Results Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust’s incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. Conclusion There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA’s methodology for scoring failures, there were discrepancies between the teams’ estimates

  16. Failure mode and effects analysis outputs: are they valid?

    PubMed

    Shebl, Nada Atef; Franklin, Bryony Dean; Barber, Nick

    2012-06-10

    Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: Face validity: by comparing the FMEA participants' mapped processes with observational work. Content validity: by presenting the FMEA findings to other healthcare professionals. Criterion validity: by comparing the FMEA findings with data reported on the trust's incident report database. Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident
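
    To make the scoring step concrete, the sketch below computes the conventional risk priority number (RPN) whose validity the study questions; the failure modes and scores are invented examples, not the study's data.

      from dataclasses import dataclass

      @dataclass
      class FailureMode:
          description: str
          severity: int       # ordinal 1-10
          probability: int    # ordinal 1-10
          detectability: int  # ordinal 1-10 (10 = hardest to detect)

          @property
          def rpn(self) -> int:
              # Conventional FMEA product; the study argues that multiplying
              # ordinal scores like this is mathematically questionable.
              return self.severity * self.probability * self.detectability

      modes = [
          FailureMode("Dose omitted", severity=7, probability=6, detectability=5),
          FailureMode("Wrong infusion rate", severity=8, probability=3, detectability=4),
      ]
      for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
          print(m.description, m.rpn)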

  17. Somatization in survivors of catastrophic trauma: a methodological review.

    PubMed Central

    North, Carol S

    2002-01-01

    The literature on mental health effects of catastrophic trauma such as community disasters focuses on posttraumatic stress disorder. Somatization disorder is not listed among the classic responses to disaster, nor have other somatoform disorders been described in this literature. Nondiagnostic "somatization," "somatization symptoms," and "somatic symptoms" form the basis of most information about somatization in the literature. However, these concepts have not been validated, and therefore this work suffers from multiple methodological problems of ascertainment and interpretation. Future research is encouraged to consider many methodological issues in obtaining adequate data to address questions about the association of somatization with traumatic events, including a) appropriate comparison groups, b) satisfactory definition and measurement of somatization, c) exclusion of medical explanations for the symptoms, d) recognition of somatizers' spurious attribution of symptoms to medical causes, e) collection of data from additional sources beyond single-subject interviews, f) validation of diagnosis-unrelated symptom reporting or reconsideration of symptoms within diagnostic frameworks, g) separation of somatization after an event into new (incident) and preexisting categories, h) development of research models that include sufficient variables to examine the broader scope of potential relationships, and i) novel consideration of alternative causal directionalities. PMID:12194899

  18. Interrelationships Between Receiver/Relative Operating Characteristics Display, Binomial, Logit, and Bayes' Rule Probability of Detection Methodologies

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2014-01-01

    Unknown risks are introduced into failure critical systems when probability of detection (POD) capabilities are accepted without a complete understanding of the statistical method applied and the interpretation of the statistical results. The presence of this risk in the nondestructive evaluation (NDE) community is revealed in common statements about POD. These statements are often interpreted in a variety of ways and therefore, the very existence of the statements identifies the need for a more comprehensive understanding of POD methodologies. Statistical methodologies have data requirements to be met, procedures to be followed, and requirements for validation or demonstration of adequacy of the POD estimates. Risks are further enhanced due to the wide range of statistical methodologies used for determining the POD capability. Receiver/Relative Operating Characteristics (ROC) Display, simple binomial, logistic regression, and Bayes' rule POD methodologies are widely used in determining POD capability. This work focuses on Hit-Miss data to reveal the framework of the interrelationships between Receiver/Relative Operating Characteristics Display, simple binomial, logistic regression, and Bayes' Rule methodologies as they are applied to POD. Knowledge of these interrelationships leads to an intuitive and global understanding of the statistical data, procedural and validation requirements for establishing credible POD estimates.
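
    As a sketch of the logistic-regression branch of the hit/miss POD family mentioned above (using scikit-learn): the flaw sizes and outcomes are invented, and the usual censoring and confidence-bound machinery of a full POD analysis is omitted.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical hit/miss data: flaw size in mm, 1 = detected.
      size = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5, 3.0, 3.5]).reshape(-1, 1)
      hit  = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1])

      # Log flaw size is a common covariate choice for POD(a) curves.
      model = LogisticRegression().fit(np.log(size), hit)

      def pod(a_mm):
          """Estimated probability of detection for a flaw of size a_mm."""
          return model.predict_proba(np.log([[a_mm]]))[0, 1]

      print(f"POD(1.0 mm) = {pod(1.0):.2f}, POD(2.5 mm) = {pod(2.5):.2f}")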

  19. Designing, Evaluating, and Deploying Automated Scoring Systems with Validity in Mind: Methodological Design Decisions

    ERIC Educational Resources Information Center

    Rupp, André A.

    2018-01-01

    This article discusses critical methodological design decisions for collecting, interpreting, and synthesizing empirical evidence during the design, deployment, and operational quality-control phases for automated scoring systems. The discussion is inspired by work on operational large-scale systems for automated essay scoring but many of the…

  20. A methodology to address mixed AGN and starlight contributions in emission line galaxies found in the RESOLVE survey and ECO catalog

    NASA Astrophysics Data System (ADS)

    Richardson, Chris T.; Kannappan, Sheila; Bittner, Ashley; Isaac, Rohan; RESOLVE

    2017-01-01

    We present a novel methodology for modeling emission line galaxy samples that span the entire BPT diagram. Our methodology has several advantages over current modeling schemes: the free variables in the model are identical for both AGN and SF galaxies; these free variables are more closely linked to observable galaxy properties; and the ionizing spectra including an AGN and starlight are handled self-consistently rather than empirically. We show that our methodology is capable of fitting the vast majority of SDSS galaxies that fall within the traditional regions of galaxy classification on the BPT diagram. We also present current results for relaxing classification boundaries and extending our galaxies into the dwarf regime, using the REsolved Spectroscopy of a Local VolumE (RESOLVE) survey and the Environmental COntext (ECO) catalog, with special attention to compact blue E/S0s. We compare this methodology to PCA decomposition of the spectra. This work is supported by National Science Foundation awards AST-0955368 and CISE/ACI-1156614.

  1. An assessment of data and methodology of online surgeon scorecards.

    PubMed

    Xu, Linda W; Li, Amy; Swinney, Christian; Babu, Maya; Veeravagu, Anand; Wolfe, Stacey Quintero; Nahed, Brian V; Ratliff, John K

    2017-02-01

    OBJECTIVE Recently, 2 surgeon rating websites (Consumers' Checkbook and ProPublica) were published to allow the public to compare surgeons through identifying surgeon volume and complication rates. Among neurosurgeons and orthopedic surgeons, only cervical and lumbar spine, hip, and knee procedures were included in this assessment. METHODS The authors examined the methodology of each website to assess potential sources of inaccuracy. Each online tool was queried for reports on neurosurgeons specializing in spine surgery and orthopedic surgeons specializing in spine, hip, or knee surgery. Surgeons were chosen from top-ranked hospitals in the US, as recorded by a national consumer publication ranking system, within the fields of neurosurgery and orthopedic surgery. The results were compared for accuracy and surgeon representation, and the results of the 2 websites were also compared. RESULTS The methodology of each site was found to have opportunities for bias and limited risk adjustment. The end points assessed by each site were actually not complications, but proxies of complication occurrence. A search of 510 surgeons (401 orthopedic surgeons [79%] and 109 neurosurgeons [21%]) showed that only 28% and 56% of surgeons had data represented on Consumers' Checkbook and ProPublica, respectively. There was a significantly higher chance of finding surgeon data on ProPublica (p < 0.001). Of the surgeons from top-ranked programs with data available, 17% were quoted to have high complication rates, 13% with lower volume than other surgeons, and 79% had a 3-star out of 5-star rating. There was no significant correlation found between the number of stars a surgeon received on Consumers' Checkbook and his or her adjusted complication rate on ProPublica. CONCLUSIONS Both the Consumers' Checkbook and ProPublica websites have significant methodological issues. Neither site assessed complication occurrence, but rather readmissions or prolonged length of stay. Risk adjustment was

  2. Guidance for updating clinical practice guidelines: a systematic review of methodological handbooks.

    PubMed

    Vernooij, Robin W M; Sanabria, Andrea Juliana; Solà, Ivan; Alonso-Coello, Pablo; Martínez García, Laura

    2014-01-02

    Updating clinical practice guidelines (CPGs) is a crucial process for maintaining the validity of recommendations. Methodological handbooks should provide guidance on both developing and updating CPGs. However, little is known about the updating guidance provided by these handbooks. We conducted a systematic review to identify and describe the updating guidance provided by CPG methodological handbooks and included handbooks that provide updating guidance for CPGs. We searched in the Guidelines International Network library, US National Guidelines Clearinghouse and MEDLINE (PubMed) from 1966 to September 2013. Two authors independently selected the handbooks and extracted the data. We used descriptive statistics to analyze the extracted data and conducted a narrative synthesis. We included 35 handbooks. Most handbooks (97.1%) focus mainly on developing CPGs, including variable degrees of information about updating. Guidance on identifying new evidence and the methodology of assessing the need for an update is described in 11 (31.4%) and eight handbooks (22.8%), respectively. The period of time between two updates is described in 25 handbooks (71.4%), two to three years being the most frequent (40.0%). The majority of handbooks do not provide guidance for the literature search, evidence selection, assessment, synthesis, and external review of the updating process. Guidance for updating CPGs is poorly described in methodological handbooks. This guidance should be more rigorous and explicit. This could lead to a more optimal updating process, and, ultimately to valid trustworthy guidelines.

  3. Validity in Mixed Methods Research in Education: The Application of Habermas' Critical Theory

    ERIC Educational Resources Information Center

    Long, Haiying

    2017-01-01

    Mixed methods approach has developed into the third methodological movement in educational research. Validity in mixed methods research as an important issue, however, has not been examined as extensively as that of quantitative and qualitative research. Additionally, the previous discussions of validity in mixed methods research focus on research…

  4. Upward Mobility Programs in the Service Sector for Disadvantaged and Dislocated Workers. Volume II: Technical Appendices.

    ERIC Educational Resources Information Center

    Tao, Fumiyo; And Others

    This volume contains technical and supporting materials that supplement Volume I, which describes upward mobility programs for disadvantaged and dislocated workers in the service sector. Appendix A is a detailed description of the project methodology, including data collection methods and information on data compilation, processing, and analysis.…

  5. Validation the use of refractometer and mathematic equations to measure dietary formula contents for clinical application.

    PubMed

    Chang, W-K; Chao, Y-C; Mcclave, S-A; Yeh, M-K

    2005-10-01

    Gastric residual volumes are widely used to evaluate gastric emptying in patients receiving enteral feeding, but controversy exists about what constitutes gastric residual volume. We previously developed a method that uses a refractometer and derived mathematical equations to calculate the formula concentration, total residual volume (TRV), and formula volume. In this study, we aimed to validate these mathematical equations before they are implemented for clinical patient care. Four dietary formulas were evaluated in two consecutive validation experiments. First, dietary formula volumes of 50, 100, 200, and 400 ml were diluted with 50 ml of water, and the Brix value (BV) was measured with the refractometer. Second, 50 ml of water and then 100 ml of dietary formula were infused into a beaker, followed by a BV measurement. After this, 50 ml of water was infused, followed by a second BV measurement. The entire procedure of infusing dietary formula (100 ml) and water (50 ml) was repeated twice, each time followed by a BV measurement. The formula contents (formula concentration, TRV, and formula volume) were then calculated from the mathematical equations. The calculated formula concentrations, TRVs, and formula volumes agreed closely with the true values in both validation experiments (R² > 0.98, P < 0.001). A refractometer and the derived mathematical equations may be used to measure formula concentration, TRV, and formula volume accurately, and may serve as a tool for monitoring gastric emptying in patients receiving enteral feeding.
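
    The underlying dilution principle can be sketched as a simple solute mass balance; this is an illustration of the idea, not the paper's exact equations, and the Brix values used are invented.

      def total_residual_volume(brix_before, brix_after, water_added_ml):
          """Solute mass balance: brix_before * TRV = brix_after * (TRV + water added)."""
          return brix_after * water_added_ml / (brix_before - brix_after)

      def formula_volume(trv_ml, brix_residual, brix_full_strength):
          """Portion of the residual that is undiluted formula, by the same balance."""
          return trv_ml * brix_residual / brix_full_strength

      trv = total_residual_volume(brix_before=12.0, brix_after=8.0, water_added_ml=50.0)
      print(round(trv, 1), "ml residual;", round(formula_volume(trv, 8.0, 16.0), 1), "ml formula")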

  6. Approaching the investigation of plasma turbulence through a rigorous verification and validation procedure: A practical example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricci, P., E-mail: paolo.ricci@epfl.ch; Riva, F.; Theiler, C.

    In the present work, a Verification and Validation procedure is presented and applied showing, through a practical example, how it can contribute to advancing our physics understanding of plasma turbulence. Bridging the gap between plasma physics and other scientific domains, in particular, the computational fluid dynamics community, a rigorous methodology for the verification of a plasma simulation code is presented, based on the method of manufactured solutions. This methodology assesses that the model equations are correctly solved, within the order of accuracy of the numerical scheme. The technique to carry out a solution verification is described to provide a rigorous estimate of the uncertainty affecting the numerical results. A methodology for plasma turbulence code validation is also discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The Verification and Validation methodology is then applied to the study of plasma turbulence in the basic plasma physics experiment TORPEX [Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulations carried out with the GBS code [Ricci et al., Plasma Phys. Controlled Fusion 54, 124047 (2012)]. The validation procedure allows progress in the understanding of the turbulent dynamics in TORPEX, by pinpointing the presence of a turbulent regime transition, due to the competition between the resistive and ideal interchange instabilities.
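
    A minimal sketch of the quantity such a code verification checks, the observed order of accuracy recovered from errors against a manufactured solution on two grid resolutions; the error values below are invented.

      import math

      def observed_order(error_coarse, error_fine, refinement_ratio=2.0):
          """Observed order of accuracy from discretization errors on two grids."""
          return math.log(error_coarse / error_fine) / math.log(refinement_ratio)

      # Hypothetical L2-norm errors against the manufactured solution on grids h and h/2.
      print(round(observed_order(3.2e-3, 8.1e-4), 2))   # ~1.98, consistent with a 2nd-order scheme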

  7. Assessing validity of observational intervention studies – the Benchmarking Controlled Trials

    PubMed Central

    Malmivaara, Antti

    2016-01-01

    Background: Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. Aims: To create and pilot test a checklist for appraising the methodological validity of a BCT. Methods: The checklist was created by extracting the most essential elements from the comprehensive set of criteria in the previous paper on BCTs. Checklists and scientific papers on observational studies and respective systematic reviews were also utilized. Ten BCTs published in the Lancet and in the New England Journal of Medicine were used to assess feasibility of the created checklist. Results: The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. Conclusions: The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies. However, the piloted checklist should be validated in further studies. Key messages: Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. This paper presents a checklist for appraising the methodological validity of BCTs and pilot-tests the checklist with ten BCTs published in leading medical journals. The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies. PMID:27238631

  8. [Systemic validation of clinical practice guidelines: the AGREE network].

    PubMed

    Hannes, K; Van Royen, P; Aertgeerts, B; Buntinx, F; Ramaekers, D; Chevalier, P

    2005-12-01

    Over recent decades, the number of available clinical practice guidelines has grown enormously. Guidelines should meet specific quality criteria to ensure good quality. There is a growing need for the development of a set of criteria to ensure that potential biases inherent in guideline development have been properly addressed and that the recommendations for practice are valid and reliable. The AGREE collaboration is an international network that developed an instrument to critically appraise the methodological quality of guidelines. AGREE promotes a clear strategy to produce, disseminate and evaluate guidelines of high quality. In the first phase of the international project the AGREE instrument was tested in 11 different countries. Based on this experience the instrument was refined and optimised. In the second phase it was disseminated, promoted and evaluated in 18 participating countries; Belgium was one of them. The Belgian partner in the AGREE project developed 3 workshops and established 13 validation committees to validate guidelines from Belgian developer groups. We collected 33 questionnaires from participants of the workshops and the validation committees, in which we asked about primary experiences and the usefulness and applicability of the instrument. We were also interested in the shortcomings of the instrument and potential strategies to bridge them. More efforts should be made to train methodological experts to gain the skills needed for a critical appraisal of clinical practice guidelines. Promoting the AGREE instrument will lead to a broader knowledge and use of quality criteria in guideline development and appraisal. The development and dissemination of an international list of criteria to appraise the quality of guidelines will stimulate the development of methodologically sound guidelines. International comparisons between existing guidelines will lead to better collaboration between guideline developers throughout the world.

  9. A Review of Meta-Analyses in Education: Methodological Strengths and Weaknesses

    ERIC Educational Resources Information Center

    Ahn, Soyeon; Ames, Allison J.; Myers, Nicholas D.

    2012-01-01

    The current review addresses the validity of published meta-analyses in education that determines the credibility and generalizability of study findings using a total of 56 meta-analyses published in education in the 2000s. Our objectives were to evaluate the current meta-analytic practices in education, identify methodological strengths and…

  10. A Design Methodology for Medical Processes.

    PubMed

    Ferrante, Simona; Bonacina, Stefano; Pozzi, Giuseppe; Pinciroli, Francesco; Marceglia, Sara

    2016-01-01

    Healthcare processes, especially those belonging to the clinical domain, are acknowledged as complex and characterized by the dynamic nature of the diagnosis, the variability of the decisions made by experts driven by their experiences, the local constraints, the patient's needs, the uncertainty of the patient's response, and the indeterminacy of patient's compliance to treatment. Also, the multiple actors involved in patient's care need clear and transparent communication to ensure care coordination. In this paper, we propose a methodology to model healthcare processes in order to break out complexity and provide transparency. The model is grounded on a set of requirements that make the healthcare domain unique with respect to other knowledge domains. The modeling methodology is based on three main phases: the study of the environmental context, the conceptual modeling, and the logical modeling. The proposed methodology was validated by applying it to the case study of the rehabilitation process of stroke patients in the specific setting of a specialized rehabilitation center. The resulting model was used to define the specifications of a software artifact for the digital administration and collection of assessment tests that was also implemented. Despite being only an example, our case study showed the ability of process modeling to answer the actual needs in healthcare practices. Independently from the medical domain in which the modeling effort is done, the proposed methodology is useful to create high-quality models, and to detect and take into account relevant and tricky situations that can occur during process execution.

  11. A Design Methodology for Medical Processes

    PubMed Central

    Bonacina, Stefano; Pozzi, Giuseppe; Pinciroli, Francesco; Marceglia, Sara

    2016-01-01

    Summary Background Healthcare processes, especially those belonging to the clinical domain, are acknowledged as complex and characterized by the dynamic nature of the diagnosis, the variability of the decisions made by experts driven by their experiences, the local constraints, the patient’s needs, the uncertainty of the patient’s response, and the indeterminacy of patient’s compliance to treatment. Also, the multiple actors involved in patient’s care need clear and transparent communication to ensure care coordination. Objectives In this paper, we propose a methodology to model healthcare processes in order to break out complexity and provide transparency. Methods The model is grounded on a set of requirements that make the healthcare domain unique with respect to other knowledge domains. The modeling methodology is based on three main phases: the study of the environmental context, the conceptual modeling, and the logical modeling. Results The proposed methodology was validated by applying it to the case study of the rehabilitation process of stroke patients in the specific setting of a specialized rehabilitation center. The resulting model was used to define the specifications of a software artifact for the digital administration and collection of assessment tests that was also implemented. Conclusions Despite being only an example, our case study showed the ability of process modeling to answer the actual needs in healthcare practices. Independently from the medical domain in which the modeling effort is done, the proposed methodology is useful to create high-quality models, and to detect and take into account relevant and tricky situations that can occur during process execution. PMID:27081415

  12. The use of zeolites to generate PET phantoms for the validation of quantification strategies in oncology.

    PubMed

    Zito, Felicia; De Bernardi, Elisabetta; Soffientini, Chiara; Canzi, Cristina; Casati, Rosangela; Gerundini, Paolo; Baselli, Giuseppe

    2012-09-01

    the concentration does not influence the distribution uniformity of both solution and solute, at least at the trace concentrations used for zeolite activation. An additional proof of uniformity of zeolite saturation was obtained observing a correspondence between uptake and adsorbed volume of solution, corresponding to about 27.8% of zeolite volume. As to the ground truth for zeolites positioned inside the phantom, the segmentation of finely aligned CT images provided reliable borders, as demonstrated by a mean absolute volume error of 2.8% with respect to the PET threshold segmentation corresponding to the maximum Dice. The proposed methodology allowed obtaining an experimental phantom data set that can be used as a feasible tool to test and validate quantification and segmentation algorithms for PET in oncology. The phantom is currently under consideration for being included in a benchmark designed by AAPM TG211, which will be available to the community to evaluate PET automatic segmentation methods.
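
    For reference, the overlap statistic mentioned above (the Dice coefficient between a ground-truth mask and a PET segmentation) can be computed as follows; the toy 2-D masks stand in for the study's 3-D CT- and PET-derived volumes.

      import numpy as np

      def dice(mask_a, mask_b):
          """Dice similarity coefficient between two binary segmentation masks."""
          a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      ct_mask = np.zeros((10, 10), bool);  ct_mask[2:7, 2:7] = True
      pet_mask = np.zeros((10, 10), bool); pet_mask[3:8, 3:8] = True
      print(round(dice(ct_mask, pet_mask), 2))   # 0.64 for these toy masks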

  13. Surrogate Plant Data Base : Volume 2. Appendix C : Facilities Planning Baseline Data

    DOT National Transportation Integrated Search

    1983-05-01

    This four volume report consists of a data base describing "surrogate" automobile and truck manufacturing plants developed as part of a methodology for evaluating capital investment requirements in new manufacturing facilities to build new fleets of ...

  14. A Review of Validation Research on Psychological Variables Used in Hiring Police Officers.

    ERIC Educational Resources Information Center

    Malouff, John M.; Schutte, Nicola S.

    This paper reviews the methods and findings of published research on the validity of police selection procedures. As a preface to the review, the typical police officer selection process is briefly described. Several common methodological deficiencies of the validation research are identified and discussed in detail: (1) use of past-selection…

  15. Conventionalism and Methodological Standards in Contending with Skepticism about Uncertainty

    NASA Astrophysics Data System (ADS)

    Brumble, K. C.

    2012-12-01

    What it means to measure and interpret confidence and uncertainty in a result is often particular to a specific scientific community and its methodology of verification. Additionally, methodology in the sciences varies greatly across disciplines and scientific communities. Understanding the accuracy of predictions of a particular science thus depends largely upon having an intimate working knowledge of the methods, standards, and conventions utilized and underpinning discoveries in that scientific field. Thus, valid criticism of scientific predictions and discoveries must be conducted by those who are literate in the field in question: they must have intimate working knowledge of the methods of the particular community and of the particular research under question. The interpretation and acceptance of uncertainty is one such shared, community-based convention. In the philosophy of science, this methodological and community-based way of understanding scientific work is referred to as conventionalism. By applying the conventionalism of historian and philosopher of science Thomas Kuhn to recent attacks upon methods of multi-proxy mean temperature reconstructions, I hope to illuminate how climate skeptics and their adherents fail to appreciate the need for community-based fluency in the methodological standards for understanding uncertainty shared by the wider climate science community. Further, I will flesh out a picture of climate science community standards of evidence and statistical argument following the work of philosopher of science Helen Longino. I will describe how failure to appreciate the conventions of professionalism and standards of evidence accepted in the climate science community results in the application of naïve falsification criteria. Appeal to naïve falsification in turn has allowed scientists outside the standards and conventions of the mainstream climate science community to consider themselves and to be judged by climate skeptics as valid

  16. Guiding principles of USGS methodology for assessment of undiscovered conventional oil and gas resources

    USGS Publications Warehouse

    Charpentier, R.R.; Klett, T.R.

    2005-01-01

    During the last 30 years, the methodology for assessment of undiscovered conventional oil and gas resources used by the Geological Survey has undergone considerable change. This evolution has been based on five major principles. First, the U.S. Geological Survey has responsibility for a wide range of U.S. and world assessments and requires a robust methodology suitable for immaturely explored as well as maturely explored areas. Second, the assessments should be based on as comprehensive a set of geological and exploration history data as possible. Third, the perils of methods that solely use statistical methods without geological analysis are recognized. Fourth, the methodology and course of the assessment should be documented as transparently as possible, within the limits imposed by the inevitable use of subjective judgement. Fifth, the multiple uses of the assessments require a continuing effort to provide the documentation in such ways as to increase utility to the many types of users. Undiscovered conventional oil and gas resources are those recoverable volumes in undiscovered, discrete, conventional structural or stratigraphic traps. The USGS 2000 methodology for these resources is based on a framework of assessing numbers and sizes of undiscovered oil and gas accumulations and the associated risks. The input is standardized on a form termed the Seventh Approximation Data Form for Conventional Assessment Units. Volumes of resource are then calculated using a Monte Carlo program named Emc2, but an alternative analytic (non-Monte Carlo) program named ASSESS also can be used. The resource assessment methodology continues to change. Accumulation-size distributions are being examined to determine how sensitive the results are to size-distribution assumptions. The resource assessment output is changing to provide better applicability for economic analysis. The separate methodology for assessing continuous (unconventional) resources also has been evolving. Further
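
    A toy sketch of the numbers-and-sizes Monte Carlo framework described above (using NumPy); every parameter value is illustrative and none reflects an actual USGS assessment input.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_assessment_unit(n_trials=20_000, p_geologic=0.8,
                                   n_accum=(1, 12), size_median=5.0, size_sigma=1.0):
          """Each trial: apply geologic risk, draw a number of undiscovered
          accumulations, and sum lognormally distributed sizes (MMBO)."""
          totals = np.zeros(n_trials)
          success = rng.random(n_trials) < p_geologic
          counts = rng.integers(n_accum[0], n_accum[1] + 1, size=n_trials)
          for i in np.nonzero(success)[0]:
              sizes = rng.lognormal(np.log(size_median), size_sigma, size=counts[i])
              totals[i] = sizes.sum()
          return totals

      totals = simulate_assessment_unit()
      print(np.percentile(totals, [5, 50, 95]).round(1), round(totals.mean(), 1))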

  17. Differences in regional grey matter volumes in currently ill patients with anorexia nervosa.

    PubMed

    Phillipou, Andrea; Rossell, Susan Lee; Gurvich, Caroline; Castle, David Jonathan; Abel, Larry Allen; Nibbs, Richard Grant; Hughes, Matthew Edward

    2018-01-01

    Neurobiological findings in anorexia nervosa (AN) are inconsistent, including differences in regional grey matter volumes. Methodological limitations often contribute to the inconsistencies reported. The aim of this study was to improve on these methodologies by utilising voxel-based morphometry (VBM) analysis with the use of diffeomorphic anatomic registration through an exponentiated lie algebra algorithm (DARTEL), in a relatively large group of individuals with AN. Twenty-six individuals with AN and 27 healthy controls underwent a T1-weighted magnetic resonance imaging (MRI) scan. AN participants were found to have reduced grey matter volumes in a number of areas including regions of the basal ganglia (including the ventral striatum), and parietal and temporal cortices. Body mass index (BMI) and global scores on the Eating Disorder Examination Questionnaire (EDE-Q) were also found to correlate with grey matter volumes in a region of the brainstem (including the substantia nigra and ventral tegmental area) in AN, and predicted 56% of the variance in grey matter volumes in this area. The brain regions associated with grey matter reductions in AN are consistent with regions responsible for cognitive deficits associated with the illness including anhedonia, deficits in affect perception and saccadic eye movement abnormalities. Overall, the findings suggest reduced grey matter volumes in AN that are associated with eating disorder symptomatology. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. Validation of the 4P's Plus screen for substance use in pregnancy validation of the 4P's Plus.

    PubMed

    Chasnoff, I J; Wells, A M; McGourty, R F; Bailey, L K

    2007-12-01

    The purpose of this study is to validate the 4P's Plus screen for substance use in pregnancy. A total of 228 pregnant women enrolled in prenatal care underwent screening with the 4P's Plus and received a follow-up clinical assessment for substance use. Statistical analyses regarding reliability, sensitivity, specificity, and positive and negative predictive validity of the 4P's Plus were conducted. The overall reliability for the five-item measure was 0.62. Seventy-four (32.5%) of the women had a positive screen. Sensitivity and specificity were very good, at 87 and 76%, respectively. Positive predictive validity was low (36%), but negative predictive validity was quite high (97%). Of the 31 women who had a positive clinical assessment, 45% were using less than 1 day per week. The 4P's Plus reliably and effectively screens pregnant women for risk of substance use, including those women typically missed by other perinatal screening methodologies.
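
    The reported statistics follow from an ordinary 2x2 screening table; the cell counts below are reconstructed to be roughly consistent with the reported totals (228 women, 74 positive screens, 31 positive assessments) and should be treated as illustrative rather than the study's exact data.

      def screen_metrics(tp, fp, fn, tn):
          """Sensitivity, specificity, and positive/negative predictive values."""
          return (tp / (tp + fn), tn / (tn + fp), tp / (tp + fp), tn / (tn + fn))

      # Illustrative reconstruction: 74 screen-positive, 31 assessment-positive, n = 228.
      sens, spec, ppv, npv = screen_metrics(tp=27, fp=47, fn=4, tn=150)
      print(f"sens={sens:.2f} spec={spec:.2f} ppv={ppv:.2f} npv={npv:.2f}")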

  19. Methodology. Volume 3

    DTIC Science & Technology

    1997-02-01

    • Weapon System Life Cycle Management: modification, test, and production; system operation, support, and maturation; weapon systems attainment ... information management systems
    • Weapon systems electronics context
      – Focuses on many types of interactions » Information » Jamming » Support
      – Deals with ... concepts
    • C4I context
      – Focuses on C4I information management systems
      – Defines the C4I systems and their information interchange requirements

  20. Calculating regional tissue volume for hyperthermic isolated limb perfusion: Four methods compared.

    PubMed

    Cecchin, D; Negri, A; Frigo, A C; Bui, F; Zucchetta, P; Bodanza, V; Gregianin, M; Campana, L G; Rossi, C R; Rastrelli, M

    2016-12-01

    Hyperthermic isolated limb perfusion (HILP) can be performed as an alternative to amputation for soft tissue sarcomas and melanomas of the extremities. Melphalan and tumor necrosis factor-alpha are used at a dosage that depends on the volume of the limb. Regional tissue volume is traditionally measured for the purposes of HILP using water displacement volumetry (WDV). Although this technique is considered the gold standard, it is time-consuming and complicated to implement, especially in obese and elderly patients. The aim of the present study was to compare the different methods described in the literature for calculating regional tissue volume in the HILP setting, and to validate an open-source software tool. We reviewed the charts of 22 patients (11 males and 11 females) who had non-disseminated melanoma with in-transit metastases or sarcoma of the lower limb. We calculated the volume of the limb using four different methods: WDV, tape measurements and segmentation of computed tomography images using the Osirix and Oncentra Masterplan software packages. The overall comparison provided a concordance correlation coefficient (CCC) of 0.92 for the calculations of whole limb volume. In particular, when Osirix was compared with Oncentra (validated for volume measures and used in radiotherapy), the concordance was near-perfect for the calculation of the whole limb volume (CCC = 0.99). With methods based on CT the user can choose a reliable plane for segmentation purposes. CT-based methods also provide the opportunity to separate the whole limb volume into defined tissue volumes (cortical bone, fat and water). Copyright © 2016 Elsevier Ltd. All rights reserved.
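
    A minimal sketch of the agreement statistic used above, Lin's concordance correlation coefficient, applied to two hypothetical sets of whole-limb volumes; the numbers are invented, not the study's measurements.

      import numpy as np

      def lins_ccc(x, y):
          """Lin's concordance correlation coefficient between two measurement methods."""
          x, y = np.asarray(x, float), np.asarray(y, float)
          sxy = np.cov(x, y, bias=True)[0, 1]
          return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

      water_displacement = [9.8, 11.2, 8.7, 12.5, 10.1, 9.3]   # litres, invented
      ct_segmentation    = [9.6, 11.5, 8.9, 12.2, 10.4, 9.1]
      print(round(lins_ccc(water_displacement, ct_segmentation), 3))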

  1. Validation of Storm Water Management Model Storm Control Measures Modules

    NASA Astrophysics Data System (ADS)

    Simon, M. A.; Platz, M. C.

    2017-12-01

    EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970's, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units and green roofs, and these were performed and reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.
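
    The abstract does not name the specific goodness-of-fit statistics used, but a typical hydrograph comparison of this kind can be sketched as below (Nash-Sutcliffe efficiency plus a total-volume error); the flow series are invented.

      import numpy as np

      def nash_sutcliffe(observed, simulated):
          """Nash-Sutcliffe efficiency between observed and simulated hydrographs."""
          o, s = np.asarray(observed, float), np.asarray(simulated, float)
          return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

      def volume_error_pct(observed, simulated, dt_s=60.0):
          """Percent error in total outflow volume for flows sampled every dt_s seconds."""
          vo, vs = np.trapz(observed, dx=dt_s), np.trapz(simulated, dx=dt_s)
          return 100.0 * (vs - vo) / vo

      obs = [0, 2, 8, 15, 12, 7, 3, 1, 0]   # L/s, invented 1-minute series
      sim = [0, 3, 7, 13, 13, 8, 3, 1, 0]
      print(round(nash_sutcliffe(obs, sim), 3), round(volume_error_pct(obs, sim), 1))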

  2. Validity for the simplified water displacement instrument to measure arm lymphedema as a result of breast cancer surgery.

    PubMed

    Sagen, Ase; Kåresen, Rolf; Skaane, Per; Risberg, May Arna

    2009-05-01

    To evaluate concurrent and construct validity for the Simplified Water Displacement Instrument (SWDI), an instrument for measuring arm volumes and arm lymphedema as a result of breast cancer surgery. Validity design. Hospital setting. Women (N=23; mean age, 64+/-11y) were examined 6 years after breast cancer surgery with axillary node dissection. Not applicable. The SWDI was included for measuring arm volumes to estimate arm lymphedema as a result of breast cancer surgery. A computed tomography (CT) scan was included to examine the cross-sectional areas (CSAs) in square millimeters for the subcutaneous tissue, for the muscle tissue, and for measuring tissue density in Hounsfield units. Magnetic resonance imaging (MRI) with T2-weighted sequences was included to show increased signal intensity in subcutaneous and muscle tissue areas. The affected arm volume measured by the SWDI was significantly correlated to the total CSA of the affected upper limb (R=.904) and also to the CSA of the subcutaneous tissue and muscle tissue (R=.867 and R=.725), respectively (P<.001). The CSA of the subcutaneous tissue for the upper limb was significantly larger compared with the control limb (11%). Tissue density measured in Hounsfield units did not correlate significantly with arm volume (P>.05). The affected arm volume was significantly larger (5%) than the control arm volume (P<.05). Five (22%) women had arm lymphedema defined as a 10% increase in the affected arm volume compared with the control arm volume, and an increased signal intensity was identified in all 5 women on MRI (T2-weighted, kappa=.777, P<.001). The SWDI showed high concurrent and construct validity as shown with significant correlations between the CSA (CT) of the subcutaneous and muscle areas of the affected limb and the affected arm volume (P<.001). There was a high agreement between those subjects who were diagnosed with arm lymphedema by using the SWDI and the increased signal intensity on MRI, with a kappa

  3. Best method for right atrial volume assessment by two-dimensional echocardiography: validation with magnetic resonance imaging.

    PubMed

    Ebtia, Mahasti; Murphy, Darra; Gin, Kenneth; Lee, Pui K; Jue, John; Nair, Parvathy; Mayo, John; Barnes, Marion E; Thompson, Darby J S; Tsang, Teresa S M

    2015-05-01

    Echocardiographic methods for estimating right atrial (RA) volume have not been standardized. Our aim was to evaluate two-dimensional (2D) echocardiographic methods of RA volume assessment, using RA volume by magnetic resonance imaging (MRI) as the reference. Right atrial volume was assessed in 51 patients (mean age 63 ± 14 years, 33 female) who underwent comprehensive 2D echocardiography and cardiac MRI for clinically indicated reasons. Echocardiographic RA volume methods included (1) biplane area length, using four-chamber view twice (biplane 4C-4C); (2) biplane area length, using four-chamber and subcostal views (biplane 4C-subcostal); and (3) single plane Simpson's method of disks (Simpson's). Echocardiographic RA volumes as well as linear RA major and minor dimensions were compared to RA volume by MRI using correlation and Bland-Altman methods, and evaluated for inter-observer reproducibility and accuracy in discriminating RA enlargement. All echocardiography volumetric methods performed well compared to MRI, with Pearson's correlation of 0.98 and concordance correlation ≥0.91 for each. For bias and limits of agreement, biplane 4C-4C (bias -4.81 mL/m², limits of agreement ±9.8 mL/m²) and Simpson's (bias -5.15 mL/m², limits of agreement ±10.1 mL/m²) outperformed biplane 4C-subcostal (bias -8.36 mL/m², limits of agreement ±12.5 mL/m²). Accuracy for discriminating RA enlargement was higher for all volumetric methods than for linear measurements. Inter-observer variability was satisfactory across all methods. Compared to MRI, biplane 4C-4C and single plane Simpson's are highly accurate and reproducible 2D echocardiography methods for estimating RA volume. Linear dimensions are inaccurate and should be abandoned. © 2014, Wiley Periodicals, Inc.
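
    The two volumetric approaches compared above reduce to standard geometric formulas; a sketch with invented tracing measurements is shown below (8/(3*pi) is the usual area-length constant, and the disk profile is hypothetical).

      import math

      def biplane_area_length(a1_cm2, a2_cm2, length_cm):
          """Biplane area-length volume: V = (8 / (3*pi)) * A1 * A2 / L, in mL."""
          return (8.0 / (3.0 * math.pi)) * a1_cm2 * a2_cm2 / length_cm

      def method_of_disks(diameters_cm, length_cm):
          """Single-plane method of disks: a stack of equal-height circular disks."""
          h = length_cm / len(diameters_cm)
          return sum(math.pi * (d / 2.0) ** 2 * h for d in diameters_cm)

      # Invented atrial tracings: two planimetered areas, one long axis, one disk profile.
      print(round(biplane_area_length(18.0, 17.0, 5.0), 1), "mL (biplane area-length)")
      diams = [1.0, 2.2, 3.0, 3.6, 4.0, 4.2, 4.3, 4.2, 4.0, 3.6, 3.0, 2.2, 1.2, 0.5]
      print(round(method_of_disks(diams, 5.0), 1), "mL (method of disks)")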

  4. Volume equations for the Northern Research Station's Forest Inventory and Analysis Program as of 2010

    Treesearch

    Patrick D. Miles; Andrew D. Hill

    2010-01-01

    The U.S. Forest Service's Forest Inventory and Analysis (FIA) program collects sample plot data on all forest ownerships across the United States. This report documents the methodology used to estimate live-tree gross, net, and sound volume for the 24 States inventoried by the Northern Research Station's (NRS) FIA unit. Sound volume is of particular interest...

  5. New methodology for fast prediction of wheel wear evolution

    NASA Astrophysics Data System (ADS)

    Apezetxea, I. S.; Perez, X.; Casanueva, C.; Alonso, A.

    2017-07-01

    In railway applications, wear prediction at the wheel-rail interface is a fundamental matter for studying problems such as wheel lifespan and the evolution of vehicle dynamic characteristics with time. However, one of the principal drawbacks of the existing methodologies for calculating the wear evolution is the computational cost. This paper proposes a new wear prediction methodology with a reduced computational cost. The methodology is based on two main steps: the first is the substitution of calculations over the whole network by the calculation of the contact conditions at a certain characteristic point, from whose result the wheel wear evolution can be inferred. The second is the substitution of the dynamic calculation (time-integration calculations) by a quasi-static calculation (the solution of the quasi-static state of the vehicle at a certain point, which is equivalent to neglecting the acceleration terms in the dynamic equations). These simplifications allow a significant reduction in computational cost while maintaining an acceptable level of accuracy (errors of the order of 5-10%). Several case studies are analysed in the paper with the objective of assessing the proposed methodology. The results obtained in the case studies support the conclusion that the proposed methodology is valid for an arbitrary vehicle running on an arbitrary track layout.

  6. Advanced piloted aircraft flight control system design methodology. Volume 2: The FCX flight control design expert system

    NASA Technical Reports Server (NTRS)

    Myers, Thomas T.; Mcruer, Duane T.

    1988-01-01

    The development of a comprehensive and eclectic methodology for conceptual and preliminary design of flight control systems is presented and illustrated. The methodology is focused on the design stages starting with the layout of system requirements and ending when some viable competing system architectures (feedback control structures) are defined. The approach is centered on the human pilot and the aircraft as both the sources of, and the keys to the solution of, many flight control problems. The methodology relies heavily on computational procedures which are highly interactive with the design engineer. To maximize effectiveness, these techniques, as selected and modified to be used together in the methodology, form a cadre of computational tools specifically tailored for integrated flight control system preliminary design purposes. The FCX expert system as presently developed is only a limited prototype capable of supporting basic lateral-directional FCS design activities related to the design example used. FCX presently supports design of only one FCS architecture (yaw damper plus roll damper) and the rules are largely focused on Class IV (highly maneuverable) aircraft. Despite this limited scope, the major elements which appear necessary for application of knowledge-based software concepts to flight control design were assembled and thus FCX represents a prototype which can be tested, critiqued and evolved in an ongoing process of development.

  7. Traffic analysis toolbox volume XIII : integrated corridor management analysis, modeling, and simulation guide

    DOT National Transportation Integrated Search

    2017-02-01

    As part of the Federal Highway Administration (FHWA) Traffic Analysis Toolbox (Volume XIII), this guide was designed to help corridor stakeholders implement the Integrated Corridor Management (ICM) Analysis, Modeling, and Simulation (AMS) methodology...

  9. Verification and Validation Methodology of Real-Time Adaptive Neural Networks for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Gupta, Pramod; Loparo, Kenneth; Mackall, Dale; Schumann, Johann; Soares, Fola

    2004-01-01

    Recent research has shown that adaptive neural-network-based control systems are very effective in restoring stability and control of an aircraft in the presence of damage or failures. The application of an adaptive neural network within a flight-critical control system requires a thorough and proven process to ensure safe and proper flight operation. Unique testing tools have been developed as part of a process to perform verification and validation (V&V) of the real-time adaptive neural networks used in recent adaptive flight control systems and to evaluate the performance of the online-trained neural networks. These tools will support certification by the FAA and the successful deployment of neural-network-based adaptive controllers in safety-critical applications. The V&V process is evaluated against a typical neural adaptive controller and the results are discussed.

  10. A framework for assessing the adequacy and effectiveness of software development methodologies

    NASA Technical Reports Server (NTRS)

    Arthur, James D.; Nance, Richard E.

    1990-01-01

    Tools, techniques, environments, and methodologies dominate the software engineering literature, but relatively little research in the evaluation of methodologies is evident. This work reports an initial attempt to develop a procedural approach to evaluating software development methodologies. Prominent in this approach are: (1) an explication of the role of a methodology in the software development process; (2) the development of a procedure based on linkages among objectives, principles, and attributes; and (3) the establishment of a basis for reduction of the subjective nature of the evaluation through the introduction of properties. An application of the evaluation procedure to two Navy methodologies has provided consistent results that demonstrate the utility and versatility of the evaluation procedure. Current research efforts focus on the continued refinement of the evaluation procedure through the identification and integration of product quality indicators reflective of attribute presence, and the validation of metrics supporting the measure of those indicators. The consequent refinement of the evaluation procedure offers promise of a flexible approach that admits to change as the field of knowledge matures. In conclusion, the procedural approach presented in this paper represents a promising path toward the end goal of objectively evaluating software engineering methodologies.

  11. Validity and Reliability of Turkish Male Breast Self-Examination Instrument.

    PubMed

    Erkin, Özüm; Göl, İlknur

    2018-04-01

    This study aims to measure the validity and reliability of the Turkish male breast self-examination (MBSE) instrument. This methodological study was performed in 2016 at Ege University, Faculty of Nursing, İzmir, Turkey. The MBSE includes ten steps. For the validity studies, face validity, content validity, and construct validity (exploratory factor analysis) were assessed. For the reliability study, the Kuder-Richardson coefficient was calculated. The content validity index was found to be 0.94. The Kendall W coefficient was 0.80 (p=0.551). The total variance explained by the two factors was 63.24%. Kuder-Richardson 21 was computed for the reliability study and found to be 0.97 for the instrument. The final instrument included 10 steps and two stages. The Turkish version of the MBSE is a valid and reliable instrument for early diagnosis. The MBSE can be used in Turkish-speaking countries and cultures with two stages and 10 steps.
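
    The reliability figure quoted above can be reproduced mechanically from dichotomous (done / not done) step scores. The sketch below applies the standard Kuder-Richardson 21 formula to simulated respondents; the study's raw data are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical 0/1 scores: rows = respondents, columns = the 10 MBSE steps.
        ability = rng.uniform(0.2, 0.95, size=60)
        X = (rng.uniform(size=(60, 10)) < ability[:, None]).astype(int)

        k = X.shape[1]                      # number of items (steps)
        totals = X.sum(axis=1)              # total score per respondent
        M, s2 = totals.mean(), totals.var(ddof=1)

        # Kuder-Richardson formula 21 (assumes items of roughly equal difficulty)
        kr21 = (k / (k - 1)) * (1 - (M * (k - M)) / (k * s2))
        print(f"KR-21 reliability: {kr21:.2f}")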

  12. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 1: Methodology and applications

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating the failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of the statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
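
    A minimal sketch of the statistical structure described above, with a textbook Paris-law crack-growth model standing in for the report's analytical models and lognormal distributions standing in for the parameter uncertainties; none of the numbers are taken from the report.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 20_000                                        # Monte Carlo samples

        # Assumed uncertain inputs (illustrative lognormal distributions)
        C      = rng.lognormal(np.log(3e-12), 0.3, n)     # Paris coefficient [m/cycle, dK in MPa*sqrt(m)]
        m      = 3.0                                      # Paris exponent (fixed here)
        dsigma = rng.lognormal(np.log(120.0), 0.1, n)     # stress range [MPa]
        a0     = rng.lognormal(np.log(2e-4), 0.2, n)      # initial flaw size [m]
        ac, Y  = 5e-3, 1.12                               # critical flaw size [m], geometry factor

        def cycles_to_failure(C, m, dsigma, a0, ac, Y):
            # Closed-form integration of da/dN = C*(Y*dsigma*sqrt(pi*a))**m for m != 2
            k = C * (Y * dsigma * np.sqrt(np.pi)) ** m * (1 - m / 2)
            return (ac ** (1 - m / 2) - a0 ** (1 - m / 2)) / k

        Nf = cycles_to_failure(C, m, dsigma, a0, ac, Y)
        mission_cycles = 1.0e6
        print("estimated failure probability:", np.mean(Nf < mission_cycles))

    In the actual PFA methodology the resulting failure-probability distribution would then be updated with test and flight experience; that statistical updating step is omitted here.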

  13. No evidence for intervention-dependent influence of methodological features on treatment effect.

    PubMed

    Jacobs, Wilco C H; Kruyt, Moyo C; Moojen, Wouter A; Verbout, Ab J; Oner, F Cumhur

    2013-12-01

    The goal of this systematic review was to evaluate whether the influence of methodological features on treatment effect differs between types of intervention. MEDLINE, Embase, Web of Science, the Cochrane methodology register, and reference lists were searched for meta-epidemiologic studies on the influence of methodological features on treatment effect. Studies analyzing the influence of methodological features related to internal validity were included. We distinguished surgical, pharmaceutical, and therapeutic interventions as separate types. Heterogeneity was calculated to identify differences among these types. Fourteen meta-epidemiologic studies were found with 51 estimates of the influence of methodological features on treatment effect. Heterogeneity was observed among the intervention types for randomization. Surgical intervention studies showed a larger treatment effect when randomized; this was in contrast to pharmaceutical studies, which found the opposite. For allocation concealment and double blinding, the influence of methodological features on the treatment effect was comparable across different types of intervention. For the remaining methodological features, there were insufficient observations. The influence of allocation concealment and double blinding on the treatment effect is consistent across studies of different interventional types. The influence of randomization, however, may differ between surgical and nonsurgical studies. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Selected Bibliographies for Pharmaceutical Supply Systems. Volume 5: Pharmaceutical Supply Systems Bibliographies. International Health Planning Reference Series.

    ERIC Educational Resources Information Center

    Schaumann, Leif

    Intended as a companion piece to volume 7 in the Method Series, Pharmaceutical Supply System Planning (CE 024 234), this fifth of six volumes in the International Health Planning Reference Series is a combined literature review and annotated bibliography dealing with alternative methodologies for planning and analyzing pharmaceutical supply…

  15. Ecological Validity in Eye-Tracking: An Empirical Study

    ERIC Educational Resources Information Center

    Spinner, Patti; Gass, Susan M.; Behney, Jennifer

    2013-01-01

    Eye-trackers are becoming increasingly widespread as a tool to investigate second language (L2) acquisition. Unfortunately, clear standards for methodology--including font size, font type, and placement of interest areas--are not yet available. Although many researchers stress the need for ecological validity--that is, the simulation of natural…

  16. Personal Accountability in Education: Measure Development and Validation

    ERIC Educational Resources Information Center

    Rosenblatt, Zehava

    2017-01-01

    Purpose: The purpose of this paper, a three-study research project, is to establish and validate a two-dimensional scale to measure teachers' and school administrators' accountability disposition. Design/methodology/approach: The scale items were developed in focus groups, and the final measure was tested on various samples of Israeli teachers and…

  17. Pain assessment in children: theoretical and empirical validity.

    PubMed

    Villarruel, A M; Denyes, M J

    1991-12-01

    Valid assessment of pain in children is foundational for both the nursing practice and research domains, yet few validated methods of pain measurement are currently available for young children. This article describes an innovative research approach used in the development of photographic instruments to measure pain intensity in young African-American and Hispanic children. The instruments were designed to enable children to participate actively in their own care and to do so in ways that are congruent with their developmental and cultural heritage. Conceptualization of the instruments, methodological development, and validation processes grounded in Orem's Self-Care Deficit Theory of Nursing are described. The authors discuss the ways in which the gaps between nursing theory, research, and practice are narrowed when the development of instruments to measure clinical nursing phenomena is grounded in nursing theory, validated through research, and utilized in practice settings.

  18. Environmental Validation of Legionella Control in a VHA Facility Water System.

    PubMed

    Jinadatha, Chetan; Stock, Eileen M; Miller, Steve E; McCoy, William F

    2018-03-01

    OBJECTIVES We conducted this study to determine what sample volume, concentration, and limit of detection (LOD) are adequate for environmental validation of Legionella control. We also sought to determine whether time required to obtain culture results can be reduced compared to spread-plate culture method. We also assessed whether polymerase chain reaction (PCR) and in-field total heterotrophic aerobic bacteria (THAB) counts are reliable indicators of Legionella in water samples from buildings. DESIGN Comparative Legionella screening and diagnostics study for environmental validation of a healthcare building water system. SETTING Veterans Health Administration (VHA) facility water system in central Texas. METHODS We analyzed 50 water samples (26 hot, 24 cold) from 40 sinks and 10 showers using spread-plate cultures (International Standards Organization [ISO] 11731) on samples shipped overnight to the analytical lab. In-field, on-site cultures were obtained using the PVT (Phigenics Validation Test) culture dipslide-format sampler. A PCR assay for genus-level Legionella was performed on every sample. RESULTS No practical differences regardless of sample volume filtered were observed. Larger sample volumes yielded more detections of Legionella. No statistically significant differences at the 1 colony-forming unit (CFU)/mL or 10 CFU/mL LOD were observed. Approximately 75% less time was required when cultures were started in the field. The PCR results provided an early warning, which was confirmed by spread-plate cultures. The THAB results did not correlate with Legionella status. CONCLUSIONS For environmental validation at this facility, we confirmed that (1) 100 mL sample volumes were adequate, (2) 10× concentrations were adequate, (3) 10 CFU/mL LOD was adequate, (4) in-field cultures reliably reduced time to get results by 75%, (5) PCR provided a reliable early warning, and (6) THAB was not predictive of Legionella results. Infect Control Hosp Epidemiol 2018;39:259-266.

  19. Post-Qualitative Line of Flight and the Confabulative Conversation: A Methodological Ethnography

    ERIC Educational Resources Information Center

    Johansson, Lotta

    2016-01-01

    This paper is a methodological ethnography aiming to highlight the difficulties in using conventional methods in connection with an explorative philosophy: Deleuze and Guattari's. Taking an empirical point of departure in conversations about the future with students in upper secondary school, the struggle to find a scientifically valid label…

  20. Multidisciplinary design and optimization (MDO) methodology for the aircraft conceptual design

    NASA Astrophysics Data System (ADS)

    Iqbal, Liaquat Ullah

    An integrated design and optimization methodology has been developed for the conceptual design of an aircraft. The methodology brings higher fidelity Computer Aided Design, Engineering and Manufacturing (CAD, CAE and CAM) Tools such as CATIA, FLUENT, ANSYS and SURFCAM into the conceptual design by utilizing Excel as the integrator and controller. The approach is demonstrated to integrate with many of the existing low to medium fidelity codes such as the aerodynamic panel code called CMARC and sizing and constraint analysis codes, thus providing the multi-fidelity capabilities to the aircraft designer. The higher fidelity design information from the CAD and CAE tools for the geometry, aerodynamics, structural and environmental performance is provided for the application of the structured design methods such as the Quality Function Deployment (QFD) and the Pugh's Method. The higher fidelity tools bring the quantitative aspects of a design such as precise measurements of weight, volume, surface areas, center of gravity (CG) location, lift over drag ratio, and structural weight, as well as the qualitative aspects such as external geometry definition, internal layout, and coloring scheme early in the design process. The performance and safety risks involved with the new technologies can be reduced by modeling and assessing their impact more accurately on the performance of the aircraft. The methodology also enables the design and evaluation of the novel concepts such as the blended (BWB) and the hybrid wing body (HWB) concepts. Higher fidelity computational fluid dynamics (CFD) and finite element analysis (FEA) allow verification of the claims for the performance gains in aerodynamics and ascertain risks of structural failure due to different pressure distribution in the fuselage as compared with the tube and wing design. The higher fidelity aerodynamics and structural models can lead to better cost estimates that help reduce the financial risks as well. This helps in

  1. An integrated methodology to forecast the efficiency of nourishment strategies in eroding deltas.

    PubMed

    Bergillos, Rafael J; López-Ruiz, Alejandro; Principal-Gómez, Daniel; Ortega-Sánchez, Miguel

    2018-02-01

    Many deltas across the globe are retreating, and nearby beaches are undergoing strong erosion as a result. Among soft and prompt solutions, nourishments are the most heavily used. This paper presents an integrated methodology to forecast the efficiency of nourishment strategies by means of wave climate simulations, wave propagations with downscaling techniques, computation of longshore sediment transport rates and application of the one-line model. It was applied to an eroding deltaic beach (Guadalfeo, southern Spain), where different scenarios as a function of the nourished coastline morphology, input volume and grain size were tested. For that, the evolution of six scenarios of coastline geometry over a two-year period (lifetime of nourishment projects at the study site) was modelled and the uncertainty of the predictions was also quantified through Monte Carlo techniques. For the most efficient coastline shape in terms of gained dry beach area, eight sub-scenarios with different nourished volumes were defined and modelled. The results indicate that an input volume around 460,000 m3 is the best strategy since nourished morphologies with higher volumes are more exposed to the prevailing storm directions, inducing less efficient responses. After setting the optimum coastline morphology and input sediment volume, eleven different nourished grain sizes were modelled; the most efficient coastline responses were obtained for sediment sizes greater than 0.01 m. The availability of these sizes in the sediment accumulated upstream of a dam in the Guadalfeo River basin allows for the conclusion that this alternative would not only mitigate coastal erosion problems but also sedimentation issues in the reservoir. The methodology proposed in this work is extensible to other coastal areas across the world and can be helpful to support the decision-making process of artificial nourishment projects and other environmental management strategies. Copyright © 2017 Elsevier B.V. All
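
    The one-line model mentioned above can be sketched as an explicit finite-difference update of the shoreline position, dy/dt = -(1/D) dQ/dx, with a simple CERC-type transport closure. The grid, coefficients, wave forcing and nourishment shape below are illustrative assumptions and do not represent the Guadalfeo configuration.

        import numpy as np

        nx, dx, dt, n_steps = 200, 25.0, 3600.0, 24 * 730    # ~2-year horizon, hourly steps
        D = 8.0              # active profile height (closure depth + berm) [m] (assumed)
        K = 0.05             # transport coefficient [m^3/s per radian] (assumed)
        alpha_wave = np.deg2rad(10.0)                        # breaking-wave angle to shore-normal

        y = np.zeros(nx)
        y[80:120] += 40.0 * np.hanning(40)                   # hypothetical nourished salient [m]

        for _ in range(n_steps):
            # Local shoreline orientation and relative wave angle at cell faces
            dydx = np.diff(y) / dx
            alpha_rel = alpha_wave - np.arctan(dydx)
            Q = K * np.sin(2.0 * alpha_rel)                  # longshore transport at faces [m^3/s]
            dQdx = np.diff(Q, prepend=Q[0], append=Q[-1]) / dx
            y -= dt / D * dQdx                               # dy/dt = -(1/D) dQ/dx

        print("remaining peak shoreline advance after 2 years: %.1f m" % y.max())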

  2. High-Volume Repeaters of Self-Harm.

    PubMed

    Ness, Jennifer; Hawton, Keith; Bergen, Helen; Waters, Keith; Kapur, Navneet; Cooper, Jayne; Steeg, Sarah; Clarke, Martin

    2016-11-01

    Repetition of self-harm is common and is strongly associated with suicide. Despite this, there is limited research on high-volume repetition. To investigate individuals with high-volume repeat self-harm attendances to the emergency department (ED), including their patterns of attendance and mortality. Data from the Multicentre Study of Self-Harm in England were used. High-volume repetition was defined as ⩾15 attendances within 4 years. An attendance timeline was constructed for each high-volume repeater (HVR) and the different patterns of attendance were explored using an executive sorting task and hierarchical cluster analysis. A small proportion of self-harm patients are HVRs (0.6%) but they account for a large percentage of self-harm attendances (10%). In this study, the new methodological approach resulted in three types of attendance patterns. All of the HVRs had clusters of attendance and a greater proportion died from external causes compared with non-HVRs. The approach used in this study offers a new method for investigating this problem that could have both clinical and research benefits. The need for early intervention is highlighted by the large number of self-harm episodes per patient, the clustered nature of attendances, and the higher prevalence of death from external causes.

  3. In-vivo detectability index: development and validation of an automated methodology

    NASA Astrophysics Data System (ADS)

    Smith, Taylor Brunton; Solomon, Justin; Samei, Ehsan

    2017-03-01

    The purpose of this study was to develop and validate a method to estimate patient-specific detectability indices directly from patients' CT images (i.e., "in vivo"). The method works by automatically extracting noise (NPS) and resolution (MTF) properties from each patient's CT series based on previously validated techniques. Patient images are thresholded at skin-air interfaces to form edge-spread functions, which are further binned, differentiated, and Fourier transformed to form the MTF. The NPS is likewise estimated from uniform areas of the image. These are combined with assumed task functions (reference function: 10 mm disk lesion with contrast of -15 HU) to compute detectability indices for a non-prewhitening matched-filter model observer that predicts observer performance. The results were compared to those from a previous human detection study on 105 subtle, hypo-attenuating liver lesions, using a two-alternative forced-choice (2AFC) method, over 6 dose levels using 16 readers. The in vivo detectability indices estimated for all patient images were compared to binary 2AFC outcomes with a generalized linear mixed-effects statistical model (Probit link function, linear terms only, no interactions, random term for readers). The model showed that the in vivo detectability indices were strongly predictive of 2AFC outcomes (P < 0.05). A linear comparison between the human detection accuracy and model-predicted detection accuracy (for like conditions) resulted in Pearson and Spearman correlation coefficients of 0.86 and 0.87, respectively. These data provide evidence that the in vivo detectability index could potentially be used to automatically estimate and track image quality in a clinical operation.
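
    A minimal sketch of the non-prewhitening matched-filter index named above, d'^2 = (sum |W|^2 MTF^2)^2 / sum |W|^2 MTF^2 NPS, evaluated on a discrete frequency grid. The Gaussian MTF and the NPS shape below are assumed examples, not the quantities the authors extract automatically from patient images.

        import numpy as np

        n, pix = 256, 0.5                                  # grid size, pixel size [mm]
        x = (np.arange(n) - n / 2) * pix
        X, Y = np.meshgrid(x, x)

        contrast, diameter = -15.0, 10.0                   # reference task: 10 mm disk, -15 HU
        task = contrast * (X**2 + Y**2 <= (diameter / 2) ** 2)

        # Task function W: Fourier transform of the task (discrete approximation)
        W = np.abs(np.fft.fftshift(np.fft.fft2(task))) * pix**2

        f = np.fft.fftshift(np.fft.fftfreq(n, d=pix))      # spatial frequency [cycles/mm]
        FX, FY = np.meshgrid(f, f)
        fr = np.hypot(FX, FY)

        MTF = np.exp(-(fr / 0.35) ** 2)                    # assumed system MTF
        NPS = 80.0 * fr * np.exp(-(fr / 0.25) ** 2) + 1.0  # assumed noise power spectrum

        df = f[1] - f[0]
        num = (np.sum(W**2 * MTF**2) * df**2) ** 2
        den = np.sum(W**2 * MTF**2 * NPS) * df**2
        print("NPW detectability index d':", np.sqrt(num / den))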

  4. Validity and reliability of Internet-based physiotherapy assessment for musculoskeletal disorders: a systematic review.

    PubMed

    Mani, Suresh; Sharma, Shobha; Omar, Baharudin; Paungmali, Aatit; Joseph, Leonard

    2017-04-01

    Purpose The purpose of this review is to systematically explore and summarise the validity and reliability of telerehabilitation (TR)-based physiotherapy assessment for musculoskeletal disorders. Method A comprehensive systematic literature review was conducted using a number of electronic databases: PubMed, EMBASE, PsycINFO, Cochrane Library and CINAHL, published between January 2000 and May 2015. Studies examining the validity and the inter- and intra-rater reliability of TR-based physiotherapy assessment for musculoskeletal conditions were included. Two independent reviewers used the Quality Appraisal Tool for studies of diagnostic Reliability (QAREL) and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool to assess the methodological quality of the reliability and validity studies, respectively. Results A total of 898 hits were retrieved, of which 11 articles meeting the inclusion criteria were reviewed. Nine studies explored concurrent validity and inter- and intra-rater reliability, while two studies examined only concurrent validity. The reviewed studies were moderate to good in methodological quality. Physiotherapy assessments such as pain, swelling, range of motion, muscle strength, balance, gait and functional assessment demonstrated good concurrent validity. However, the reported concurrent validity of lumbar spine posture, special orthopaedic tests, neurodynamic tests and scar assessments ranged from low to moderate. Conclusion TR-based physiotherapy assessment was technically feasible with overall good concurrent validity and excellent reliability, except for lumbar spine posture, orthopaedic special tests, neurodynamic tests and scar assessment.

  5. Machine Learning. Part 1. A Historical and Methodological Analysis.

    DTIC Science & Technology

    1983-05-31

    Machine learning has always been an integral part of artificial intelligence, and its methodology has evolved in concert with the major concerns of the field. In response to the difficulties of encoding ever-increasing volumes of knowledge in modern Al systems, many researchers have recently turned their attention to machine learning as a means to overcome the knowledge acquisition bottleneck. Part 1 of this paper presents a taxonomic analysis of machine learning organized primarily by learning strategies and secondarily by

  6. A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.

    2012-01-01

    The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including the use of multidimensional numerical models. Validation test hardware has also been used to provide a direct comparison of numerical results and validate the multi-dimensional numerical models used to predict convertor net heat input and efficiency. These validation tests were designed to simulate the temperature profile of an operating Stirling convertor and resulted in a measured net heat input of 244.4 W. The methodology was applied to the multi-dimensional numerical model which resulted in a net heat input of 240.3 W. The computational methodology resulted in a value of net heat input that was 1.7 percent less than that measured during laboratory testing. The resulting computational methodology and results are discussed.

  7. Method validation for methanol quantification present in working places

    NASA Astrophysics Data System (ADS)

    Muna, E. D. M.; Bizarri, C. H. B.; Maciel, J. R. M.; da Rocha, G. P.; de Araújo, I. O.

    2015-01-01

    Given the widespread use of methanol by different industry sectors and the high toxicity associated with this substance, an analytical method is needed that can determine methanol levels in the air of working environments with adequate sensitivity, precision and accuracy. Based on the methodology established by the National Institute for Occupational Safety and Health (NIOSH), a methodology was validated for the determination of methanol collected in silica gel tubes; its effectiveness was demonstrated through participation in the international collaborative program sponsored by the American Industrial Hygiene Association (AIHA).

  8. Validity of automated measurement of left ventricular ejection fraction and volume using the Philips EPIQ system.

    PubMed

    Hovnanians, Ninel; Win, Theresa; Makkiya, Mohammed; Zheng, Qi; Taub, Cynthia

    2017-11-01

    To assess the efficiency and reproducibility of automated measurements of left ventricular (LV) volumes and LV ejection fraction (LVEF) in comparison to the manually traced biplane Simpson's method. This is a single-center prospective study. Apical four- and two-chamber views were acquired in patients in sinus rhythm. Two operators independently measured LV volumes and LVEF using the biplane Simpson's method. In addition, the image analysis software a2DQ on the Philips EPIQ system was applied to automatically assess the LV volumes and LVEF. Time spent on each analysis, using both methods, was documented. Concordance of echocardiographic measures was evaluated using intraclass correlation (ICC) and Bland-Altman analysis. Manual tracing and automated measurement of LV volumes and LVEF were performed in 184 patients with a mean age of 67.3 ± 17.3 years and BMI of 28.0 ± 6.8 kg/m2. ICC and Bland-Altman analysis showed good agreement between manual and automated methods measuring LVEF, end-systolic, and end-diastolic volumes. The average analysis time was significantly less using the automated method than manual tracing (116 vs 217 seconds/patient, P < .0001). Automated measurement using the novel image analysis software a2DQ on the Philips EPIQ system produced accurate, efficient, and reproducible assessment of LV volumes and LVEF compared with manual measurement. © 2017, Wiley Periodicals, Inc.
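
    The agreement statistics named above are straightforward to reproduce on paired measurements. The sketch below computes the Bland-Altman bias and limits of agreement and a Pearson correlation on simulated manual/automated ejection fractions; the study's data are not reproduced here, and a full ICC would normally come from a dedicated statistics package.

        import numpy as np

        rng = np.random.default_rng(2)
        # Hypothetical paired EF measurements (%) from manual biplane Simpson's
        # tracing and the automated algorithm; not the study's data.
        true_ef = rng.normal(55, 10, 80)
        manual = true_ef + rng.normal(0, 3, 80)
        auto = true_ef + rng.normal(1, 3, 80)       # small systematic offset assumed

        diff = auto - manual
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)               # Bland-Altman limits of agreement
        print(f"bias: {bias:.1f}%, limits of agreement: [{bias - loa:.1f}, {bias + loa:.1f}]%")
        print("Pearson r:", np.corrcoef(manual, auto)[0, 1].round(2))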

  9. Culture-sensitive adaptation and validation of the community-oriented program for the control of rheumatic diseases methodology for rheumatic disease in Latin American indigenous populations.

    PubMed

    Peláez-Ballestas, Ingris; Granados, Ysabel; Silvestre, Adriana; Alvarez-Nemegyei, José; Valls, Evart; Quintana, Rosana; Figuera, Yemina; Santiago, Flor Julian; Goñi, Mario; González, Rosa; Santana, Natalia; Nieto, Romina; Brito, Irais; García, Imelda; Barrios, Maria Cecilia; Marcano, Manuel; Loyola-Sánchez, Adalberto; Stekman, Ivan; Jorfen, Marisa; Goycochea-Robles, Maria Victoria; Midauar, Fadua; Chacón, Rosa; Martin, Maria Celeste; Pons-Estel, Bernardo A

    2014-09-01

    The purpose of the study is to validate a culturally sensitive adaptation of the community-oriented program for the control of rheumatic diseases (COPCORD) methodology in several Latin American indigenous populations. The COPCORD Spanish questionnaire was translated and back-translated into seven indigenous languages: Warao, Kariña and Chaima (Venezuela), Mixteco, Maya-Yucateco and Raramuri (Mexico) and Qom (Argentina). The questionnaire was administered to almost 100 subjects in each community with the assistance of bilingual translators. Individuals with pain, stiffness or swelling in any part of the body in the previous 7 days and/or at any point in life were evaluated by physicians to confirm a diagnosis according to criteria for rheumatic diseases. Overall, individuals did not understand the use of a 0-10 visual analog scale for pain intensity and severity grading and preferred a Likert scale comprising four items for pain intensity (no pain, minimal pain, strong pain, and intense pain). They were unable to discriminate between pain intensity and pain severity, so only pain intensity was included. For validation, 702 subjects (286 male, 416 female, mean age 42.7 ± 18.3 years) were interviewed in their own language. In the last 7 days, 198 (28.2 %) subjects reported having musculoskeletal pain, and 90 (45.4 %) of these had intense pain. Compared with the physician-confirmed diagnosis, the COPCORD questionnaire had 73.8 % sensitivity, 72.9 % specificity, a positive likelihood ratio of 2.7 and area under the receiver operating characteristic curve of 0.73. The COPCORD questionnaire is a valid screening tool for rheumatic diseases in indigenous Latin American populations.
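
    The screening-validity figures above follow directly from a 2x2 table of questionnaire result versus physician-confirmed diagnosis. The counts below are illustrative values chosen only to roughly reproduce the reported sensitivity, specificity and positive likelihood ratio; they are not the study's actual table.

        # Hypothetical 2x2 counts (total 702 subjects)
        tp, fn = 90, 32       # questionnaire positive / negative among confirmed cases
        fp, tn = 157, 423     # questionnaire positive / negative among non-cases

        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        lr_pos = sens / (1 - spec)    # positive likelihood ratio
        print(f"sensitivity={sens:.3f}, specificity={spec:.3f}, LR+={lr_pos:.2f}")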

  10. Lean methodology improves efficiency in outpatient academic uro-oncology clinics.

    PubMed

    Skeldon, Sean C; Simmons, Andrea; Hersey, Karen; Finelli, Antonio; Jewett, Michael A; Zlotta, Alexandre R; Fleshner, Neil E

    2014-05-01

    To determine if lean methodology, an industrial engineering tool developed to optimize manufacturing efficiency, can successfully be applied to improve efficiencies and quality of care in a hospital-based high-volume uro-oncology clinic. Before the lean initiative, baseline data were collected on patient volumes, wait times, cycle times (patient arrival to discharge), nursing assessment time, patient teaching, and physician ergonomics (via spaghetti diagram). Value stream analysis and a rapid improvement event were carried out, and significant changes were made to patient check-in, work areas, and nursing face time. Follow-up data were obtained at 30, 60, and 90 days. The Student t test was used for analysis to compare performance metrics with baseline. The median cycle time before the lean initiative was 46 minutes. This remained stable at 46 minutes at 30 days but improved to 35 minutes at 60 days and 41 minutes at 90 days. Shorter wait times allowed for increased nursing and physician face time. The average length of the physician assessment increased from 7.5 minutes at baseline to 10.6 minutes at 90 days. The average proportion of value-added time compared with the entire clinic visit increased from 30.6% at baseline to 66.3% at 90 days. Using lean methodology, we were able to shorten the patient cycle time and the time to initial assessment as well as integrate both an initial registered nurse assessment and registered nurse teaching to each visit. Lean methodology can effectively be applied to improve efficiency and patient care in an academic outpatient uro-oncology clinic setting. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Validated School Business Practices That Work. Volume III: Sharing Business Success.

    ERIC Educational Resources Information Center

    Association of School Business Officials of the United States and Canada, Park Ridge, IL. Research Corp.

    Seventeen validated school business practices are described in this document. The practices were selected through the Sharing Business Success (SBS) program, in which the Federal Department of Education, 41 state education agencies, and State Associations of School Business Officials cooperate to identify successful school district practices,…

  12. Application of response surface methodology to maximize the productivity of scalable automated human embryonic stem cell manufacture.

    PubMed

    Ratcliffe, Elizabeth; Hourd, Paul; Guijarro-Leach, Juan; Rayment, Erin; Williams, David J; Thomas, Robert J

    2013-01-01

    Commercial regenerative medicine will require large quantities of clinical-specification human cells. The cost and quality of manufacture is notoriously difficult to control due to highly complex processes with poorly defined tolerances. As a step to overcome this, we aimed to demonstrate the use of 'quality-by-design' tools to define the operating space for economic passage of a scalable human embryonic stem cell production method with minimal cell loss. Design of experiments response surface methodology was applied to generate empirical models to predict optimal operating conditions for a unit of manufacture of a previously developed automatable and scalable human embryonic stem cell production method. Two models were defined to predict cell yield and cell recovery rate postpassage, in terms of the predictor variables of media volume, cell seeding density, media exchange and length of passage. Predicted operating conditions for maximized productivity were successfully validated. Such 'quality-by-design' type approaches to process design and optimization will be essential to reduce the risk of product failure and patient harm, and to build regulatory confidence in cell therapy manufacturing processes.
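
    A minimal sketch of the response-surface step described above: fit a second-order polynomial for a response (e.g. cell yield) in two coded factors and read off the predicted optimum. The design points and responses are simulated; the paper's actual factors, models and data are not reproduced.

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical DoE data: coded factors x1 = media volume, x2 = seeding density (-1..+1)
        x1, x2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
        x1, x2 = x1.ravel(), x2.ravel()
        yield_obs = 100 - 15 * (x1 - 0.3) ** 2 - 20 * (x2 + 0.2) ** 2 + rng.normal(0, 2, x1.size)

        # Second-order response surface: y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
        A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
        coef, *_ = np.linalg.lstsq(A, yield_obs, rcond=None)

        # Locate the predicted optimum on a fine grid of the coded design space
        g1, g2 = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
        G = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                             g1.ravel()**2, g2.ravel()**2, (g1 * g2).ravel()])
        best = np.argmax(G @ coef)
        print("predicted optimum (coded x1, x2):", g1.ravel()[best], g2.ravel()[best])

    In practice the predicted optimum would then be confirmed experimentally, as the authors report doing for their operating conditions.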

  13. PRA (Probabilistic Risk Assessments) Participation versus Validation

    NASA Technical Reports Server (NTRS)

    DeMott, Diana; Banke, Richard

    2013-01-01

    Probabilistic Risk Assessments (PRAs) are performed for projects or programs where the consequences of failure are highly undesirable. PRAs primarily address the level of risk those projects or programs pose during operations. PRAs are often developed after the design has been completed. Design and operational details used to develop models include approved and accepted design information regarding equipment, components, systems, and failure data. This methodology essentially validates the risk parameters of the project or system design. For high-risk or high-dollar projects, using PRA methodologies during the design process provides new opportunities to influence the design early in the project life cycle to identify, eliminate, or mitigate potential risks. Identifying risk drivers before the design has been set allows the design engineers to understand the inherent risk of their current design and consider potential risk mitigation changes. This can become an iterative process in which the PRA model is used to determine whether a mitigation technique is effective in reducing risk, which can result in more efficient and cost-effective design changes. PRA methodology can be used to assess the risk of design alternatives and can demonstrate how major design changes or program modifications impact the overall program or project risk. PRA has been used for the last two decades to validate risk predictions and acceptability. By providing risk information that can positively influence final system and equipment design, the PRA tool can also participate in design development, supporting a safe and cost-effective product.

  14. Russkij jazyk za rubezom. Jahrgang 1974 ("The Russian Language Abroad." Volume 1974)

    ERIC Educational Resources Information Center

    Huebner, Wolfgang

    1975-01-01

    Articles in the 1974 volume of this periodical are briefly reviewed, preponderantly under the headings of teaching materials, methodology, linguistics, scientific reports, and chronicle. Reviews and supplements, tapes and other materials are also included. (Text is in German.) (IFS/WGA)

  15. Methodological quality and reporting of systematic reviews in hand and wrist pathology.

    PubMed

    Wasiak, J; Shen, A Y; Ware, R; O'Donohoe, T J; Faggion, C M

    2017-10-01

    The objective of this study was to assess the methodological and reporting quality of systematic reviews in hand and wrist pathology. MEDLINE, EMBASE and the Cochrane Library were searched from inception to November 2016 for relevant studies. Reporting quality was evaluated using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and methodological quality using the Assessment of Multiple Systematic Reviews (AMSTAR) measurement tool. Descriptive statistics and linear regression were used to identify features associated with improved methodological quality. A total of 91 studies were included in the analysis. Most reviews inadequately reported PRISMA items regarding study protocol, search strategy and bias, and AMSTAR items regarding protocol, publication bias and funding. Systematic reviews published in a plastics journal, or which included more authors, were associated with higher AMSTAR scores. A large proportion of systematic reviews within the hand and wrist pathology literature score poorly with validated methodological assessment tools, which may affect the reliability of their conclusions.

  16. Methodological review of the quality of reach out and read: does it "work"?

    PubMed

    Yeager Pelatti, Christina; Pentimonti, Jill M; Justice, Laura M

    2014-04-01

    A considerable percentage of American children and adults fail to learn adequate literacy skills and read below a third grade level. Shared book reading is perhaps the single most important activity to prepare young children for success in reading. The primary objective of this manuscript was to critically review the methodological quality of Reach Out and Read (ROR), a clinically based literacy program/intervention that teaches parents strategies to incorporate while sharing books with children as a method of preventing reading difficulties and academic struggles. A PubMed search was conducted. Articles that met three criteria were considered. First, the study must be clinically based and include parent contact with a pediatrician. Second, parental counseling ("anticipatory guidance") about the importance of parent-child book reading must be included. Third, only experimental or quasi-experimental studies were included; no additional criteria were used. Published articles from any year and peer-reviewed journal were considered. Study quality was determined using a modified version of the Downs and Black (1998) checklist assessing four categories: (1) Reporting, (2) External Validity, (3) Internal Validity-Bias, and (4) Internal Validity-Confounding. We were also interested in whether quality differed based on study design, children's age, sample size, and study outcome. Eleven studies met the inclusion criteria. The overall quality of evidence was variable across all studies; Reporting and External Validity categories were relatively strong while methodological concerns were found in the area of internal validity. Quality scores differed on the four study characteristics. Implications related to clinical practice and future studies are discussed.

  17. Measurement of lung volumes from supine portable chest radiographs.

    PubMed

    Ries, A L; Clausen, J L; Friedman, P J

    1979-12-01

    Lung volumes in supine nonambulatory patients are physiological parameters often difficult to measure with current techniques (plethysmograph, gas dilution). Existing radiographic methods for measuring lung volumes require standard upright chest radiographs. Accordingly, in 31 normal supine adults, we determined helium-dilution functional residual and total lung capacities and measured planimetric lung field areas (LFA) from corresponding portable anteroposterior and lateral radiographs. Low radiation dose methods, which delivered less than 10% of that from standard portable X-ray technique, were utilized. Correlation between lung volume and radiographic LFA was highly significant (r = 0.96, SEE = 10.6%). Multiple-step regressions using height and chest diameter correction factors reduced variance, but weight and radiographic magnification factors did not. In 17 additional subjects studied for validation, the regression equations accurately predicted radiographic lung volume. Thus, this technique can provide accurate and rapid measurement of lung volume in studies involving supine patients.
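
    The radiographic method amounts to calibrating a regression of gas-dilution lung volume on planimetric lung-field area (with a height correction) and then applying it to new supine patients. The sketch below uses synthetic calibration data, not the study's subjects or regression coefficients.

        import numpy as np

        rng = np.random.default_rng(5)
        # Synthetic calibration set: helium-dilution lung volume [L], planimetric
        # lung-field area from AP + lateral portable films [cm^2], height [cm].
        area = rng.uniform(500, 1100, 31)
        height = rng.uniform(150, 195, 31)
        volume = 0.006 * area + 0.02 * height - 2.5 + rng.normal(0, 0.4, 31)

        # Fit volume ~ a*area + b*height + c by least squares
        A = np.column_stack([area, height, np.ones_like(area)])
        coef, *_ = np.linalg.lstsq(A, volume, rcond=None)

        # Apply to a new supine patient (hypothetical area and height)
        new = np.array([820.0, 172.0, 1.0])
        print("predicted radiographic lung volume: %.2f L" % (new @ coef))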

  18. Methodological Quality of Consensus Guidelines in Implant Dentistry.

    PubMed

    Faggion, Clovis Mariano; Apaza, Karol; Ariza-Fritas, Tania; Málaga, Lilian; Giannakopoulos, Nikolaos Nikitas; Alarcón, Marco Antonio

    2017-01-01

    Consensus guidelines are useful to improve clinical decision making. Therefore, the methodological evaluation of these guidelines is of paramount importance. Low quality information may guide to inadequate or harmful clinical decisions. To evaluate the methodological quality of consensus guidelines published in implant dentistry using a validated methodological instrument. The six implant dentistry journals with impact factors were scrutinised for consensus guidelines related to implant dentistry. Two assessors independently selected consensus guidelines, and four assessors independently evaluated their methodological quality using the Appraisal of Guidelines for Research & Evaluation (AGREE) II instrument. Disagreements in the selection and evaluation of guidelines were resolved by consensus. First, the consensus guidelines were analysed alone. Then, systematic reviews conducted to support the guidelines were included in the analysis. Non-parametric statistics for dependent variables (Wilcoxon signed rank test) was used to compare both groups. Of 258 initially retrieved articles, 27 consensus guidelines were selected. Median scores in four domains (applicability, rigour of development, stakeholder involvement, and editorial independence), expressed as percentages of maximum possible domain scores, were below 50% (median, 26%, 30.70%, 41.70%, and 41.70%, respectively). The consensus guidelines and consensus guidelines + systematic reviews data sets could be compared for 19 guidelines, and the results showed significant improvements in all domain scores (p < 0.05). Methodological improvement of consensus guidelines published in major implant dentistry journals is needed. The findings of the present study may help researchers to better develop consensus guidelines in implant dentistry, which will improve the quality and trust of information needed to make proper clinical decisions.

  19. Towards a Trans-Disciplinary Methodology for a Game-Based Intervention Development Process

    ERIC Educational Resources Information Center

    Arnab, Sylvester; Clarke, Samantha

    2017-01-01

    The application of game-based learning adds play into educational and instructional contexts. Even though there is a lack of standard methodologies or formulaic frameworks to better inform game-based intervention development, there exist scientific and empirical studies that can serve as benchmarks for establishing scientific validity in terms of…

  20. A methodology to enhance electromagnetic compatibility in joint military operations

    NASA Astrophysics Data System (ADS)

    Buckellew, William R.

    The development and validation of an improved methodology to identify, characterize, and prioritize potential joint EMI (electromagnetic interference) interactions and identify and develop solutions to reduce the effects of the interference are discussed. The methodology identifies potential EMI problems using results from field operations, historical data bases, and analytical modeling. Operational expertise, engineering analysis, and testing are used to characterize and prioritize the potential EMI problems. Results can be used to resolve potential EMI during the development and acquisition of new systems and to develop engineering fixes and operational workarounds for systems already employed. The analytic modeling portion of the methodology is a predictive process that uses progressive refinement of the analysis and the operational electronic environment to eliminate noninterfering equipment pairs, defer further analysis on pairs lacking operational significance, and resolve the remaining EMI problems. Tests are conducted on equipment pairs to ensure that the analytical models provide a realistic description of the predicted interference.

  1. Surrogate Plant Data Base : Volume 4. Appendix E : Medium and Heavy Truck Manufacturing

    DOT National Transportation Integrated Search

    1983-05-01

    This four volume report consists of a data base describing "surrogate" automobile and truck manufacturing plants developed as part of a methodology for evaluating capital investment requirements in new manufacturing facilities to build new fleets of ...

  2. A novel porous Ffowcs-Williams and Hawkings acoustic methodology for complex geometries

    NASA Astrophysics Data System (ADS)

    Nitzkorski, Zane Lloyd

    flows are investigated at Re = 3900, 10000 and 89000 in order to evaluate the method and investigate the physical sources of noise production. The Re = 3900 case was chosen due to its highly validated flow-field and to serve as a basis of comparison. The Re = 10000 cylinder is used to validate the noise production at turbulent Reynolds numbers against other simulations. Finally the Re = 89000 simulations are used to compare to experiment serving as a rigorous test of the methods predictive ability. The proposed approach demonstrates better performance than other commonly used approaches with the added benefit of computational efficiency and the ability to query independent volumes. This gives the added benefit of discovering how much noise production is directly associated with volumetric noise contributions. These capabilities allow for a thorough investigation of the sources of noise production and a means to evaluate proposed theories. A physical description of the source of sound for subcritical Reynolds number cylinders is established. A 45° beveled trailing edge configuration is investigated due to its relevance to hydrofoil and propeller noise. This configuration also allows for the evaluation of the assumption associated with the free-space Green's function since the half-plane Green's function can be used to represent the solution to the wave equation for this geometry. Similar results for directivity and amplitudes of the two formulations confirm the flexibility of the porous surface implementation. Good agreement with experiment is obtained. The effect of boundary layer thickness is investigated. The noise produced in the upper half plane is significantly decreased for the thinner boundary layer while the noise production in the lower half plane is only slightly decreased.

  3. Graduated Frequencies alcohol measures for monitoring consumption patterns: Results from an Australian national survey and a US diary validity study

    PubMed Central

    Greenfield, Thomas K.; Kerr, William C.; Bond, Jason; Ye, Yu; Stockwell, Tim

    2009-01-01

    We investigate several types of graduated frequency (GF) instruments for monitoring drinking patterns. Two studies with 12-month GF measures and daily data were used: (i) the Australian 2004 National Drug Strategy Household Survey (n = 24,109 aged 12+; 22,546 with GF and over 8000 with yesterday data) and (ii) a US methodological study involving a 28-day daily diary plus GF summary measures drawn from the National Alcohol Survey (n = 3,025 screened, 119 eligible study completers). The NDSHS involved (i) “drop and collect” self-completed forms with random sampling methods; the Measurement study (ii) screened 3+ drinkers by telephone and collected 28-day drinking diaries and pre- and post-diary 28-day GFs. We compared mean values for the GF quantity ranges from yesterday’s drinks (study i) and 28-day diaries (study ii), also examining volume influence. Using Yesterday’s drinking, Australian results showed GF quantity range means close to arithmetic midpoints and volume effects only for the lowest two levels (1–2, and 3–4 drinks; p < .001). U.S. calibration results on the GF using 28-day diaries were similar, with a volume effect only at these low quantity levels (p < .001). Means for the highest quantity thresholds were 23.5 drinks for the 20+ (10 gram) drink level (Australia) and 15.5 drinks for the 12+ (14 g) drink level (US). In the US study, summary GF frequency and volume were highly consistent with diary-based counterparts. A conclusion is that algorithms for computing volume may be refined using validation data. We suggest measurement methods may be improved by taking better account of empirical drink ethanol content. PMID:21197381
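
    A graduated-frequency volume estimate is simply the sum over quantity levels of (drinks per occasion) x (days per year at that level). The sketch below uses one hypothetical respondent; the per-level drink values are assumptions, with the lowest levels set slightly below their arithmetic midpoints to reflect the calibration finding reported above.

        # Hypothetical GF response: quantity range -> (assumed drinks per occasion, days/year)
        gf_levels = {
            "20+":   (23.5, 2),
            "12-19": (15.0, 4),
            "8-11":  (9.5, 10),
            "5-7":   (6.0, 24),
            "3-4":   (3.4, 60),
            "1-2":   (1.4, 120),
        }

        annual_volume = sum(q * days for q, days in gf_levels.values())
        print(f"estimated annual volume: {annual_volume:.0f} standard drinks")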

  4. Hit-Validation Methodologies for Ligands Isolated from DNA-Encoded Chemical Libraries.

    PubMed

    Zimmermann, Gunther; Li, Yizhou; Rieder, Ulrike; Mattarella, Martin; Neri, Dario; Scheuermann, Jörg

    2017-05-04

    DNA-encoded chemical libraries (DECLs) are large collections of compounds linked to DNA fragments, serving as amplifiable barcodes, which can be screened on target proteins of interest. In typical DECL selections, preferential binders are identified by high-throughput DNA sequencing, by comparing their frequency before and after the affinity capture step. Hits identified in this procedure need to be confirmed, by resynthesis and by performing affinity measurements. In this article we present new methods based on hybridization of oligonucleotide conjugates with fluorescently labeled complementary oligonucleotides; these facilitate the determination of affinity constants and kinetic dissociation constants. The experimental procedures were demonstrated with acetazolamide, a binder to carbonic anhydrase IX with a dissociation constant in the nanomolar range. The detection of binding events was compatible not only with fluorescence polarization methodologies, but also with Alphascreen technology and with microscale thermophoresis. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Self-Contained Automated Methodology for Optimal Flow Control

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Gunzburger, Max D.; Nicolaides, Roy A.; Erlebacher, Gordon; Hussaini, M. Yousuff

    1997-01-01

    This paper describes a self-contained, automated methodology for active flow control which couples the time-dependent Navier-Stokes system with an adjoint Navier-Stokes system and optimality conditions from which optimal states, i.e., unsteady flow fields and controls (e.g., actuators), may be determined. The problem of boundary layer instability suppression through wave cancellation is used as the initial validation case to test the methodology. Here, the objective of control is to match the stress vector along a portion of the boundary to a given vector; instability suppression is achieved by choosing the given vector to be that of a steady base flow. Control is effected through the injection or suction of fluid through a single orifice on the boundary. The results demonstrate that instability suppression can be achieved without any a priori knowledge of the disturbance, which is significant because other control techniques have required some knowledge of the flow unsteadiness such as frequencies, instability type, etc. The present methodology has been extended to three dimensions and may potentially be applied to separation control, re-laminarization, and turbulence control applications using one to many sensors and actuators.

  6. Feasibility of single-beat full-volume capture real-time three-dimensional echocardiography and auto-contouring algorithm for quantification of left ventricular volume: validation with cardiac magnetic resonance imaging.

    PubMed

    Chang, Sung-A; Lee, Sang-Chol; Kim, Eun-Young; Hahm, Seung-Hee; Jang, Shin Yi; Park, Sung-Ji; Choi, Jin-Oh; Park, Seung Woo; Choe, Yeon Hyeon; Oh, Jae K

    2011-08-01

    With recent developments in echocardiographic technology, a new system using real-time three-dimensional echocardiography (RT3DE) that allows single-beat acquisition of the entire volume of the left ventricle and incorporates algorithms for automated border detection has been introduced. Provided that these techniques are acceptably reliable, three-dimensional echocardiography may be much more useful for clinical practice. The aim of this study was to evaluate the feasibility and accuracy of left ventricular (LV) volume measurements by RT3DE using the single-beat full-volume capture technique. One hundred nine consecutive patients scheduled for cardiac magnetic resonance imaging and RT3DE using the single-beat full-volume capture technique on the same day were recruited. LV end-systolic volume, end-diastolic volume, and ejection fraction were measured using an auto-contouring algorithm from data acquired on RT3DE. The data were compared with the same measurements obtained using cardiac magnetic resonance imaging. Volume measurements on RT3DE with single-beat full-volume capture were feasible in 84% of patients. Both interobserver and intraobserver variability of three-dimensional measurements of end-systolic and end-diastolic volumes showed excellent agreement. Pearson's correlation analysis showed a close correlation of end-systolic and end-diastolic volumes between RT3DE and cardiac magnetic resonance imaging (r = 0.94 and r = 0.91, respectively, P < .0001 for both). Bland-Altman analysis showed reasonable limits of agreement. After application of the auto-contouring algorithm, the rate of successful auto-contouring (cases requiring minimal manual corrections) was <50%. RT3DE using single-beat full-volume capture is an easy and reliable technique to assess LV volume and systolic function in clinical practice. However, the image quality and low frame rate still limit its application for dilated left ventricles, and the automated volume analysis program needs

  7. Crossing trend analysis methodology and application for Turkish rainfall records

    NASA Astrophysics Data System (ADS)

    Şen, Zekâi

    2018-01-01

    Trend analyses are the necessary tools for depicting possible general increase or decrease in a given time series. There are many versions of trend identification methodologies such as the Mann-Kendall trend test, Spearman's tau, Sen's slope, regression line, and Şen's innovative trend analysis. The literature has many papers about the use, cons and pros, and comparisons of these methodologies. In this paper, a completely new approach is proposed based on the crossing properties of a time series. It is suggested that the suitable trend from the centroid of the given time series should have the maximum number of crossings (total number of up-crossings or down-crossings). This approach is applicable whether the time series has dependent or independent structure and also without any dependence on the type of the probability distribution function. The validity of this method is presented through extensive Monte Carlo simulation technique and its comparison with other existing trend identification methodologies. The application of the methodology is presented for a set of annual daily extreme rainfall time series from different parts of Turkey and they have physically independent structure.
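
    A minimal sketch of the crossing criterion proposed above: anchor candidate trend lines at the centroid of the series, count how many times the series crosses each line, and keep the slope with the most crossings. The synthetic series, slope grid and tie-breaking choice are assumptions, not taken from the paper.

        import numpy as np

        def crossing_trend_slope(y, slopes=np.linspace(-2, 2, 801)):
            """Pick the trend line through the series centroid that maximizes the
            number of crossings (sign changes of the residuals)."""
            t = np.arange(len(y))
            tc, yc = t.mean(), y.mean()                 # centroid of the time series
            best_slope, best_crossings = 0.0, -1
            for s in slopes:
                resid = y - (yc + s * (t - tc))         # series minus candidate trend
                crossings = np.count_nonzero(np.diff(np.sign(resid)) != 0)
                if crossings > best_crossings:
                    best_slope, best_crossings = s, crossings
                # ties could be broken by, e.g., least-squares distance (not shown)
            return best_slope, best_crossings

        rng = np.random.default_rng(4)
        y = 0.5 * np.arange(100) + rng.normal(0, 5, 100)   # synthetic annual rainfall series
        print(crossing_trend_slope(y))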

  8. Structured Uncertainty Bound Determination From Data for Control and Performance Validation

    NASA Technical Reports Server (NTRS)

    Lim, Kyong B.

    2003-01-01

    This report attempts to document the broad scope of issues that must be satisfactorily resolved before one can expect to methodically obtain, with a reasonable confidence, a near-optimal robust closed loop performance in physical applications. These include elements of signal processing, noise identification, system identification, model validation, and uncertainty modeling. Based on a recently developed methodology involving a parameterization of all model validating uncertainty sets for a given linear fractional transformation (LFT) structure and noise allowance, a new software, Uncertainty Bound Identification (UBID) toolbox, which conveniently executes model validation tests and determine uncertainty bounds from data, has been designed and is currently available. This toolbox also serves to benchmark the current state-of-the-art in uncertainty bound determination and in turn facilitate benchmarking of robust control technology. To help clarify the methodology and use of the new software, two tutorial examples are provided. The first involves the uncertainty characterization of a flexible structure dynamics, and the second example involves a closed loop performance validation of a ducted fan based on an uncertainty bound from data. These examples, along with other simulation and experimental results, also help describe the many factors and assumptions that determine the degree of success in applying robust control theory to practical problems.

  9. Radar Image Simulation: Validation of the Point Scattering Method. Volume 2

    DTIC Science & Technology

    1977-09-01

    the Engineer Topographic Laboratory (ETL), Fort Belvoir, Virginia. This Radar Simulation Study was performed to validate the point scattering radar... For radar, the number of independent samples in a given resolution cell is given by N = 2wL/(λ cos θ) (Eq. 16), where θ = radar incidence angle; w

  10. A Comprehensive Validation Approach Using The RAVEN Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J

    2015-06-01

    The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework for performing parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. State-of-the-art validation approaches use neither exploration of the input space through sampling strategies nor the comprehensive variety of metrics needed to interpret the code responses with respect to experimental data; the RAVEN code addresses both of these gaps. In the following sections, the employed methodology and its application to the newly developed thermal-hydraulic code RELAP-7 are reported. The validation approach has been applied to an integral effect experiment representing natural circulation, based on the activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.
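
    The sketch below is a generic illustration of the sampling-plus-metric idea described in this record (it is not RAVEN or RELAP-7 code): uncertain inputs are sampled, propagated through a toy model, and the predicted output distribution is compared with hypothetical experimental observations using a simple CDF area metric.

      # Minimal illustration (assumed toy model and data, NOT RAVEN/RELAP-7):
      # propagate sampled inputs and compare the predicted distribution with
      # "experimental" observations via a CDF area metric.
      import numpy as np

      def toy_code(k, q):
          """Stand-in for a system-code response, e.g. a circulation flow rate."""
          return np.sqrt(q / k)

      rng = np.random.default_rng(42)
      k = rng.normal(2.0, 0.1, 5000)           # sampled loss coefficient
      q = rng.normal(50.0, 2.0, 5000)          # sampled heat input
      predicted = toy_code(k, q)

      experiment = rng.normal(5.05, 0.15, 30)  # hypothetical measured values

      # Area metric: integral of |difference| between the two empirical CDFs
      grid = np.linspace(min(predicted.min(), experiment.min()),
                         max(predicted.max(), experiment.max()), 1000)
      cdf_pred = np.searchsorted(np.sort(predicted), grid, side="right") / predicted.size
      cdf_exp  = np.searchsorted(np.sort(experiment), grid, side="right") / experiment.size
      area_metric = np.trapz(np.abs(cdf_pred - cdf_exp), grid)
      print(f"CDF area metric = {area_metric:.4f}")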

  11. Methodological Issues in Questionnaire Design.

    PubMed

    Song, Youngshin; Son, Youn Jung; Oh, Doonam

    2015-06-01

    The process of designing a questionnaire is complicated. Many questionnaires on nursing phenomena have been developed and used by nursing researchers. The purpose of this paper was to discuss questionnaire design and factors that should be considered when using existing scales. Methodological issues were discussed, such as factors in the design of questions, steps in developing questionnaires, wording and formatting methods for items, and administration methods. How to use existing scales, how to facilitate cultural adaptation, and how to prevent socially desirable responding were discussed. Moreover, the triangulation method in questionnaire development was introduced. Recommended steps for designing questions include appropriately operationalizing key concepts for the target population, clearly formatting response options, generating items and confirming final items through face or content validity, sufficiently piloting the questionnaire using item analysis, demonstrating reliability and validity, finalizing the scale, and training the administrator. Psychometric properties and cultural equivalence should be evaluated prior to administration when using an existing questionnaire or performing cultural adaptation. In the context of well-defined nursing phenomena, logical and systematic methods will contribute to the development of simple and precise questionnaires.
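
    As a small illustration of the item-analysis and reliability step mentioned in this record, the sketch below computes Cronbach's alpha for a pilot questionnaire from a hypothetical respondents-by-items score matrix.

      # Minimal sketch of an internal-consistency check (illustrative only):
      # Cronbach's alpha from a respondents-by-items score matrix.
      import numpy as np

      def cronbach_alpha(scores):
          """scores: 2-D array, rows = respondents, columns = items."""
          k = scores.shape[1]
          item_vars = scores.var(axis=0, ddof=1).sum()
          total_var = scores.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1 - item_vars / total_var)

      rng = np.random.default_rng(0)
      latent = rng.normal(3.0, 1.0, (200, 1))                  # respondents' true level
      items = np.clip(np.round(latent + rng.normal(0, 0.7, (200, 8))), 1, 5)  # 8 Likert items
      print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")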

  12. The BREAST-V: a unifying predictive formula for volume assessment in small, medium, and large breasts.

    PubMed

    Longo, Benedetto; Farcomeni, Alessio; Ferri, Germano; Campanale, Antonella; Sorotos, Micheal; Santanelli, Fabio

    2013-07-01

    Breast volume assessment enhances preoperative planning of both aesthetic and reconstructive procedures, helping the surgeon in the decision-making process of shaping the breast. Numerous methods of breast size determination are currently reported but are limited by methodologic flaws and variable estimations. The authors aimed to develop a unifying predictive formula for volume assessment in small to large breasts based on anthropomorphic values. Ten anthropomorphic breast measurements and direct volumes of 108 mastectomy specimens from 88 women were collected prospectively. The authors performed a multivariate regression to build the optimal model for development of the predictive formula. The final model was then internally validated. A previously published formula was used as a reference. Mean (±SD) breast weight was 527.9 ± 227.6 g (range, 150 to 1250 g). After model selection, sternal notch-to-nipple, inframammary fold-to-nipple, and inframammary fold-to-fold projection distances emerged as the most important predictors. The resulting formula (the BREAST-V) showed an adjusted R² of 0.73. The estimated expected absolute error on new breasts is 89.7 g (95 percent CI, 62.4 to 119.1 g) and the expected relative error is 18.4 percent (95 percent CI, 12.9 to 24.3 percent). Application of the reference formula to the sample yielded worse predictions than those derived from the BREAST-V, with an R² of 0.55. The BREAST-V is a reliable tool for predicting small to large breast volumes accurately, for use as a complementary tool in the surgeon's evaluation. An app entitled BREAST-V for both iOS and Android devices is currently available for free download in the Apple App Store and Google Play Store. Level of Evidence: Diagnostic, II.
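
    The sketch below is not the published BREAST-V formula; it only shows how a volume-prediction regression on the three anthropometric predictors named in the abstract could be fitted, and how adjusted R² is computed. All data are simulated placeholders.

      # Minimal sketch (simulated data, NOT the BREAST-V coefficients):
      # least-squares regression of volume on three distances, with adjusted R².
      import numpy as np

      rng = np.random.default_rng(7)
      n = 108
      sn_nipple  = rng.normal(25, 4, n)    # sternal notch-to-nipple distance (cm)
      imf_nipple = rng.normal(9, 2, n)     # inframammary fold-to-nipple distance (cm)
      imf_proj   = rng.normal(20, 3, n)    # fold-to-fold projection distance (cm)
      volume = 40 * sn_nipple + 30 * imf_nipple + 15 * imf_proj - 900 + rng.normal(0, 90, n)

      X = np.column_stack([np.ones(n), sn_nipple, imf_nipple, imf_proj])
      beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
      residuals = volume - X @ beta
      ss_res = (residuals ** 2).sum()
      ss_tot = ((volume - volume.mean()) ** 2).sum()
      r2 = 1 - ss_res / ss_tot
      p = X.shape[1] - 1                                 # number of predictors
      adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
      print(f"R^2 = {r2:.2f}, adjusted R^2 = {adj_r2:.2f}")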

  13. Validation of simulated earthquake ground motions based on evolution of intensity and frequency content

    USGS Publications Warehouse

    Rezaeian, Sanaz; Zhong, Peng; Hartzell, Stephen; Zareian, Farzin

    2015-01-01

    Simulated earthquake ground motions can be used in many recent engineering applications that require time series as input excitations. However, the applicability and validation of simulations are subjects of debate in the seismological and engineering communities. We propose a validation methodology at the waveform level, based directly on characteristics that are expected to influence most structural and geotechnical response parameters. In particular, three time-dependent validation metrics are used to evaluate the evolving intensity, frequency, and bandwidth of a waveform. These validation metrics capture nonstationarities in the intensity and frequency content of waveforms, making them well suited to addressing the nonlinear response of structural systems. A two-component error vector is proposed to quantify the average and shape differences between these validation metrics for a simulated and recorded ground-motion pair. Because these metrics are directly related to the waveform characteristics, they provide easily interpretable feedback to seismologists for modifying their ground-motion simulation models. To further simplify the use and interpretation of these metrics for engineers, it is shown how six scalar key parameters, including duration, intensity, and predominant frequency, can be extracted from the validation metrics. The proposed validation methodology is a step toward enabling the use of simulated ground motions in engineering practice and is demonstrated using examples of recorded and simulated ground motions from the 1994 Northridge, California, earthquake.
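
    The sketch below assumes a particular form for the evolving-intensity metric (a normalized cumulative squared-acceleration, Husid-type curve) and a simple average/shape error split; it illustrates the kind of comparison described in the record, not the authors' exact implementation.

      # Minimal sketch (assumed metric form, placeholder waveforms): an
      # evolving-intensity curve and a two-component error between a recorded
      # and a simulated motion.
      import numpy as np

      def evolving_intensity(acc, dt):
          """Normalized cumulative squared acceleration as a function of time."""
          cum = np.cumsum(acc ** 2) * dt
          return cum / cum[-1]

      dt = 0.01
      t = np.arange(0, 20, dt)
      recorded  = np.exp(-0.2 * t) * np.sin(2 * np.pi * 1.5 * t)   # placeholder
      simulated = np.exp(-0.25 * t) * np.sin(2 * np.pi * 1.4 * t)  # placeholder

      diff = evolving_intensity(simulated, dt) - evolving_intensity(recorded, dt)
      avg_error   = diff.mean()                  # average-difference component
      shape_error = (diff - diff.mean()).std()   # shape (variation) component
      print(f"average error = {avg_error:.3f}, shape error = {shape_error:.3f}")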

  14. Validation of Rehabilitation Counseling Accreditation and Certification Knowledge Areas: Methodology and Initial Results.

    ERIC Educational Resources Information Center

    Szymanski, Edna Mora; And Others

    1993-01-01

    Conducted ongoing study to validate and update knowledge standards for rehabilitation counseling accreditation and certification, using descriptive, ex post facto, and time-series designs and three sampling frames. Findings from 1,025 counselors who renewed their certification in 1991 revealed that 52 of 55 knowledge standards were rated as at…

  15. Research MethodologyOverview of Qualitative Research

    PubMed Central

    GROSSOEHME, DANIEL H.

    2015-01-01

    Qualitative research methods are a robust tool for chaplaincy research questions. Similar to much of chaplaincy clinical care, qualitative research generally works with written texts, often transcriptions of individual interviews or focus group conversations, and seeks to understand the meaning of experience in a study sample. This article describes three common methodologies: ethnography, grounded theory, and phenomenology. Issues to consider relating to the study sample, design, and analysis are discussed. Ways to enhance the validity of the data, as well as reliability and ethical issues in qualitative research, are described. Qualitative research is an accessible way for chaplains to contribute new knowledge about the sacred dimension of people's lived experience. PMID:24926897

  16. Constructing a question bank based on script concordance approach as a novel assessment methodology in surgical education.

    PubMed

    Aldekhayel, Salah A; Alselaim, Nahar A; Magzoub, Mohi Eldin; Al-Qattan, Mohammad M; Al-Namlah, Abdullah M; Tamim, Hani; Al-Khayal, Abdullah; Al-Habdan, Sultan I; Zamakhshary, Mohammed F

    2012-10-24

    Script Concordance Test (SCT) is a new assessment tool that reliably assesses clinical reasoning skills. Previous descriptions of developing SCT question banks have been merely subjective. This study addresses two gaps in the literature: 1) conducting the first phase of a multistep validation process of SCT in Plastic Surgery, and 2) providing an objective methodology for constructing a question bank based on SCT. After developing a test blueprint, 52 test items were written. Five validation questions were developed and a validation survey was established online. Seven reviewers were asked to answer this survey. They were recruited from two countries, Saudi Arabia and Canada, to improve the test's external validity. Their ratings were transformed into percentages. Analysis was performed to compare reviewers' ratings by examining correlations, ranges, means, medians, and overall scores. Scores of reviewers' ratings were between 76% and 95% (mean 86% ± 5%). We found poor correlations between reviewers (Pearson's: +0.38 to -0.22). The ranges of ratings on individual validation questions were between 0 and 4 (on a scale of 1-5). Means and medians of these ranges were computed for each test item (mean: 0.8 to 2.4; median: 1 to 3). A subset of 27 test items was generated based on a set of inclusion and exclusion criteria. This study proposes an objective methodology for validation of an SCT question bank. The validation survey is analyzed from all angles, i.e., reviewers, validation questions, and test items. Finally, a subset of test items is generated based on a set of criteria.
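
    As an illustration of the reviewer-agreement analysis described in this record, the sketch below computes pairwise Pearson correlations between reviewers and per-item rating ranges from a hypothetical rating matrix, then applies an example inclusion rule; neither the data nor the cutoff are from the study.

      # Minimal sketch (hypothetical ratings, assumed cutoff): reviewer
      # agreement and per-item rating ranges for an SCT-style item pool.
      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(3)
      ratings = rng.integers(1, 6, size=(7, 52)).astype(float)  # 7 reviewers x 52 items

      pairwise_r = [np.corrcoef(ratings[i], ratings[j])[0, 1]
                    for i, j in combinations(range(ratings.shape[0]), 2)]
      item_ranges = ratings.max(axis=0) - ratings.min(axis=0)

      print(f"reviewer correlations: {min(pairwise_r):+.2f} to {max(pairwise_r):+.2f}")
      print(f"item rating ranges: mean {item_ranges.mean():.1f}, median {np.median(item_ranges):.1f}")

      # Example inclusion rule (hypothetical): keep items with the least disagreement
      keep = np.where(item_ranges <= 2)[0]
      print(f"{keep.size} of {ratings.shape[1]} items retained")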

  17. How To Measure Survey Reliability and Validity. The Survey Kit, Volume 7.

    ERIC Educational Resources Information Center

    Litwin, Mark S.

    The nine-volume Survey Kit is designed to help readers prepare and conduct surveys and become better users of survey results. All the books in the series contain instructional objectives, exercises and answers, examples of surveys in use, illustrations of survey questions, guidelines for action, checklists of "dos and don'ts," and…

  18. Water-vapor pressure control in a volume

    NASA Technical Reports Server (NTRS)

    Scialdone, J. J.

    1978-01-01

    The variation with time of the partial pressure of water in a volume that has openings to the outside environment and includes vapor sources was evaluated as a function of the purging flow and its vapor content. Experimental tests to estimate the diffusion of ambient humidity through openings and to validate calculated results were included. The purging flows required to produce and maintain a certain humidity in shipping containers, storage rooms, and clean rooms can be estimated with the relationship developed here. These purging flows are necessary to prevent the contamination, degradation, and other effects of water vapor on the systems inside these volumes.
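
    The sketch below is an illustrative well-mixed balance under assumed numbers, not the report's derivation: the water-vapor concentration in a purged volume decays exponentially toward a steady state set by the purge flow, the purge-gas vapor content, and the internal source.

      # Minimal sketch (assumed values, well-mixed volume): V dc/dt = Q(c_in - c) + S
      import numpy as np

      V    = 2.0     # container volume, m^3 (assumed)
      Q    = 0.05    # purge flow, m^3/min (assumed)
      c_in = 0.5     # vapor concentration of purge gas, g/m^3 (assumed)
      S    = 0.02    # internal vapor source, g/min (assumed)
      c0   = 9.0     # initial ambient concentration, g/m^3 (assumed)

      t = np.linspace(0, 300, 301)                      # minutes
      c_ss = c_in + S / Q                               # steady-state concentration
      c = c_ss + (c0 - c_ss) * np.exp(-(Q / V) * t)     # exponential approach

      print(f"steady-state concentration = {c_ss:.2f} g/m^3")
      print(f"time to fall below 2 g/m^3 ≈ {t[np.argmax(c < 2.0)]:.0f} min")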

  19. Methodological standards and patient-centeredness in comparative effectiveness research: the PCORI perspective.

    PubMed

    2012-04-18

    Rigorous methodological standards help to ensure that medical research produces information that is valid and generalizable, and are essential in patient-centered outcomes research (PCOR). Patient-centeredness refers to the extent to which the preferences, decision-making needs, and characteristics of patients are addressed, and is the key characteristic differentiating PCOR from comparative effectiveness research. The Patient Protection and Affordable Care Act signed into law in 2010 created the Patient-Centered Outcomes Research Institute (PCORI), which includes an independent, federally appointed Methodology Committee. The Methodology Committee is charged to develop methodological standards for PCOR. The 4 general areas identified by the committee in which standards will be developed are (1) prioritizing research questions, (2) using appropriate study designs and analyses, (3) incorporating patient perspectives throughout the research continuum, and (4) fostering efficient dissemination and implementation of results. A Congressionally mandated PCORI methodology report (to be issued in its first iteration in May 2012) will begin to provide standards in each of these areas, and will inform future PCORI funding announcements and review criteria. The work of the Methodology Committee is intended to enable generation of information that is relevant and trustworthy for patients, and to enable decisions that improve patient-centered outcomes.

  20. A neural network based methodology to predict site-specific spectral acceleration values

    NASA Astrophysics Data System (ADS)

    Kamatchi, P.; Rajasankar, J.; Ramana, G. V.; Nagpal, A. K.

    2010-12-01

    A general neural network based methodology that has the potential to replace the computationally intensive site-specific seismic analysis of structures is proposed in this paper. The basic framework of the methodology consists of a feed-forward back-propagation neural network algorithm with one hidden layer to represent the seismic potential of a region and soil amplification effects. The methodology is implemented and verified with parameters corresponding to Delhi city in India. For this purpose, strong ground motions are generated at bedrock level for a chosen site in Delhi, due to earthquakes considered to originate from the central seismic gap of the Himalayan belt, using the necessary geological and geotechnical data. Surface-level ground motions and corresponding site-specific response spectra are obtained by using a one-dimensional equivalent linear wave propagation model. Spectral acceleration values are considered as the target parameter to verify the performance of the methodology. Numerical studies carried out to validate the proposed methodology show that the errors in predicted spectral acceleration values are within acceptable limits for design purposes. The methodology is general in the sense that it can be applied to other seismically vulnerable regions and can also be updated by including more parameters, depending on the state of the art in the subject.
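
    The sketch below only illustrates the kind of mapping described in this record, using a one-hidden-layer feed-forward regressor on made-up source/site parameters and a synthetic attenuation-like target; it is not the paper's trained network or data.

      # Minimal sketch (synthetic data, assumed inputs): a one-hidden-layer
      # feed-forward network regressing log spectral acceleration on simple
      # source/site parameters.
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n = 500
      magnitude = rng.uniform(5.0, 8.0, n)
      distance  = rng.uniform(10.0, 300.0, n)       # km
      vs30      = rng.uniform(180.0, 760.0, n)      # m/s, site stiffness proxy
      period    = rng.uniform(0.1, 2.0, n)          # s

      # Synthetic target: a rough attenuation-like trend plus noise (placeholder)
      ln_sa = (1.5 * magnitude - 1.2 * np.log(distance)
               - 0.3 * np.log(vs30) - 0.5 * period + rng.normal(0, 0.3, n))

      X = np.column_stack([magnitude, distance, vs30, period])
      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                         random_state=0))
      model.fit(X, ln_sa)
      print(f"training R^2 = {model.score(X, ln_sa):.2f}")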