Sample records for verification studies comparing

  1. MARATHON Verification (MARV)

    DTIC Science & Technology

    2017-08-01

    comparable with MARATHON 1 in terms of output. Rather, the MARATHON 2 verification cases were designed to ensure correct implementation of the new algorithms...DISCLAIMER The findings of this report are not to be construed as an official Department of the Army position, policy, or decision unless so designated by...for employment against demands. This study is a comparative verification of the functionality of MARATHON 4 (our newest implementation of MARATHON

  2. Fingerprint changes and verification failure among patients with hand dermatitis.

    PubMed

    Lee, Chew Kek; Chang, Choong Chor; Johar, Asmah; Puwira, Othman; Roshidah, Baba

    2013-03-01

    To determine the prevalence of fingerprint verification failure and to define and quantify the fingerprint changes associated with fingerprint verification failure. Case-control study. Referral public dermatology center. The study included 100 consecutive patients with clinical hand dermatitis involving the palmar distal phalanx of either thumb and 100 age-, sex-, and ethnicity-matched controls. Patients with an altered thumb print due to other causes and palmar hyperhidrosis were excluded. Fingerprint verification (pass/fail) and hand eczema severity index score. Twenty-seven percent of patients failed fingerprint verification compared with 2% of controls. Fingerprint verification failure was associated with a higher hand eczema severity index score (P.001). The main fingerprint abnormalities were fingerprint dystrophy (42.0%) and abnormal white lines (79.5%). The number of abnormal white lines was significantly higher among the patients with hand dermatitis compared with controls (P=.001). Among the patients with hand dermatitis, the odds of failing fingerprint verification with fingerprint dystrophy was 4.01. The presence of broad lines and long lines was associated with greater odds of fingerprint verification failure (odds ratio [OR], 8.04; 95% CI, 3.56-18.17 and OR, 2.37; 95% CI, 1.31-4.27, respectively), while the presence of thin lines was protective against verification failure (OR, 0.45; 95% CI, 0.23-0.89). Fingerprint verification failure is a significant problem among patients with more severe hand dermatitis. It is mainly due to fingerprint dystrophy and abnormal white lines. Malaysian National Medical Research Register Identifier: NMRR-11-30-8226

  3. INF and IAEA: A comparative analysis of verification strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheinman, L.; Kratzer, M.

    1992-07-01

    This is the final report of a study on the relevance and possible lessons of Intermediate Range Nuclear Force (INF) verification to the International Atomic Energy Agency (IAEA) international safeguards activities.

  4. A new method to address verification bias in studies of clinical screening tests: cervical cancer screening assays as an example.

    PubMed

    Xue, Xiaonan; Kim, Mimi Y; Castle, Philip E; Strickler, Howard D

    2014-03-01

    Studies to evaluate clinical screening tests often face the problem that the "gold standard" diagnostic approach is costly and/or invasive. It is therefore common to verify only a subset of negative screening tests using the gold standard method. However, undersampling the screen negatives can lead to substantial overestimation of the sensitivity and underestimation of the specificity of the diagnostic test. Our objective was to develop a simple and accurate statistical method to address this "verification bias." We developed a weighted generalized estimating equation approach to estimate, in a single model, the accuracy (eg, sensitivity/specificity) of multiple assays and simultaneously compare results between assays while addressing verification bias. This approach can be implemented using standard statistical software. Simulations were conducted to assess the proposed method. An example is provided using a cervical cancer screening trial that compared the accuracy of human papillomavirus and Pap tests, with histologic data as the gold standard. The proposed approach performed well in estimating and comparing the accuracy of multiple assays in the presence of verification bias. The proposed approach is an easy to apply and accurate method for addressing verification bias in studies of multiple screening methods. Copyright © 2014 Elsevier Inc. All rights reserved.
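    A minimal sketch of the inverse-probability-weighting idea behind such corrections is given below. It is illustrative only: the function name and data layout are hypothetical, and the paper's actual method is a weighted generalized estimating equation fitted with standard statistical software, not this simplified estimator.

    # Sketch: inverse-probability weighting for verification bias (hypothetical
    # helper, not the authors' weighted GEE implementation).
    import numpy as np

    def ipw_sens_spec(screen_pos, disease, verified):
        """Estimate sensitivity/specificity when only a subset of subjects is
        verified with the gold standard. Each verified subject is weighted by
        the inverse of the empirical verification rate in its screening stratum."""
        screen_pos = np.asarray(screen_pos, dtype=bool)
        disease = np.asarray(disease, dtype=float)      # NaN where unverified
        verified = np.asarray(verified, dtype=bool)

        weights = np.zeros(len(screen_pos))
        for stratum in (True, False):                   # screen-positive / screen-negative
            in_stratum = screen_pos == stratum
            p_verify = verified[in_stratum].mean()      # empirical verification rate
            weights[in_stratum & verified] = 1.0 / p_verify

        diseased = disease == 1                         # NaN compares False, so only verified
        healthy = disease == 0
        sensitivity = weights[screen_pos & diseased].sum() / weights[diseased].sum()
        specificity = weights[~screen_pos & healthy].sum() / weights[healthy].sum()
        return sensitivity, specificity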

  5. SU-F-T-494: A Multi-Institutional Study of Independent Dose Verification Using Golden Beam Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Itano, M; Yamazaki, T; Tachibana, R

    Purpose: In general, beam data of an individual linac are measured for the independent dose verification software program, and the verification is performed as a secondary check. In this study, independent dose verification using golden beam data was compared to that using the individual linac's beam data. Methods: Six institutions participated and three different beam data sets were prepared. The first was the individually measured data (Original Beam Data, OBD). The others were generated from all measurements of the same linac model (Model-GBD) and of all linac models (All-GBD). The three different beam data sets were registered to the independent verification software program for each institute. Subsequently, patients' plans in eight sites (brain, head and neck, lung, esophagus, breast, abdomen, pelvis and bone) were analyzed using the verification program to compare doses calculated using the three different beam data sets. Results: 1116 plans were collected from the six institutes. Compared to the OBD, the variation of the Model-GBD-based and All-GBD-based calculations was 0.0 ± 0.3% and 0.0 ± 0.6%, respectively. The maximum variations were 1.2% and 2.3%, respectively. In plans with variation over 1%, the reference points were located away from the central axis, with or without a physical wedge. Conclusion: The confidence limit (2SD) using the Model-GBD and the All-GBD was within 0.6% and 1.2%, respectively. Thus, the use of golden beam data may be feasible for independent verification. In addition, verification using golden beam data provides quality assurance of planning from an audit perspective. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).
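    The per-plan statistics quoted above (mean ± SD of the percentage difference and a 2SD confidence limit) can be reproduced with a short calculation such as the following sketch; the dose arrays are hypothetical placeholders, and the exact confidence-limit definition used by the authors is not spelled out in the abstract.

    # Sketch: per-plan dose variation between golden-beam-data (GBD) and
    # original-beam-data (OBD) calculations, summarized as mean +/- SD and a
    # 2SD confidence limit. Array values are hypothetical.
    import numpy as np

    dose_obd = np.array([2.01, 1.98, 2.03, 2.00])   # independent dose, OBD (Gy)
    dose_gbd = np.array([2.00, 1.99, 2.05, 2.01])   # independent dose, GBD (Gy)

    variation_pct = 100.0 * (dose_gbd - dose_obd) / dose_obd
    mean, sd = variation_pct.mean(), variation_pct.std(ddof=1)
    print(f"variation: {mean:+.1f} +/- {sd:.1f}%  (2SD confidence limit: {2 * sd:.1f}%)")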

  6. Adjusting for partial verification or workup bias in meta-analyses of diagnostic accuracy studies.

    PubMed

    de Groot, Joris A H; Dendukuri, Nandini; Janssen, Kristel J M; Reitsma, Johannes B; Brophy, James; Joseph, Lawrence; Bossuyt, Patrick M M; Moons, Karel G M

    2012-04-15

    A key requirement in the design of diagnostic accuracy studies is that all study participants receive both the test under evaluation and the reference standard test. For a variety of practical and ethical reasons, sometimes only a proportion of patients receive the reference standard, which can bias the accuracy estimates. Numerous methods have been described for correcting this partial verification bias or workup bias in individual studies. In this article, the authors describe a Bayesian method for obtaining adjusted results from a diagnostic meta-analysis when partial verification or workup bias is present in a subset of the primary studies. The method corrects for verification bias without having to exclude primary studies with verification bias, thus preserving the main advantages of a meta-analysis: increased precision and better generalizability. The results of this method are compared with the existing methods for dealing with verification bias in diagnostic meta-analyses. For illustration, the authors use empirical data from a systematic review of studies of the accuracy of the immunohistochemistry test for diagnosis of human epidermal growth factor receptor 2 status in breast cancer patients.

  7. INF and IAEA: A comparative analysis of verification strategy. [Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheinman, L.; Kratzer, M.

    1992-07-01

    This is the final report of a study on the relevance and possible lessons of Intermediate Range Nuclear Force (INF) verification to the International Atomic Energy Agency (IAEA) international safeguards activities.

  8. Verification of performance specifications for a US Food and Drug Administration-approved molecular microbiology test: Clostridium difficile cytotoxin B using the Becton, Dickinson and Company GeneOhm Cdiff assay.

    PubMed

    Schlaberg, Robert; Mitchell, Michael J; Taggart, Edward W; She, Rosemary C

    2012-01-01

    US Food and Drug Administration (FDA)-approved diagnostic tests based on molecular genetic technologies are becoming available for an increasing number of microbial pathogens. Advances in technology and lower costs have moved molecular diagnostic tests formerly performed for research purposes only into much wider use in clinical microbiology laboratories. To provide an example of laboratory studies performed to verify the performance of an FDA-approved assay for the detection of Clostridium difficile cytotoxin B compared with the manufacturer's performance standards. We describe the process and protocols used by a laboratory for verification of an FDA-approved assay, assess data from the verification studies, and implement the assay after verification. Performance data from the verification studies conducted by the laboratory were consistent with the manufacturer's performance standards and the assay was implemented into the laboratory's test menu. Verification studies are required for FDA-approved diagnostic assays prior to use in patient care. Laboratories should develop a standardized approach to verification studies that can be adapted and applied to different types of assays. We describe the verification of an FDA-approved real-time polymerase chain reaction assay for the detection of a toxin gene in a bacterial pathogen.

  9. A Comparative Study of Two Azimuth Based Non Standard Location Methods

    DTIC Science & Technology

    2017-03-23

    Rongsong JIH, U.S. Department of State / Arms Control, Verification, and Compliance Bureau, 2201 C Street, NW, Washington...cable. The so-called "Yin Zhong Xian" ("引中线" in Chinese) algorithm, hereafter the YZX method, is an Oriental version of an IPB-based procedure. It

  10. Comparative study of the swabbing properties of seven commercially available swab materials for cleaning verification.

    PubMed

    Corrigan, Damion K; Piletsky, Sergey; McCrossen, Sean

    2009-01-01

    This article compares the technical performance of several different commercially available swabbing materials for the purpose of cleaning verification. A steel surface was soiled with solutions of acetaminophen, nicotinic acid, diclofenac, and benzamidine and wiped with each swabbing material. The compounds were extracted with water or ethanol (depending on the polarity of the analyte) and their concentration in the extract was quantified spectrophotometrically. The study also investigated swab debris on the wiped surface. The swab performances were compared and the best swab material was identified.

  11. Systematic study of source mask optimization and verification flows

    NASA Astrophysics Data System (ADS)

    Ben, Yu; Latypov, Azat; Chua, Gek Soon; Zou, Yi

    2012-06-01

    Source mask optimization (SMO) has emerged as a powerful resolution enhancement technique (RET) for advanced technology nodes. However, there is a plethora of flow and verification metrics in the field, confounding the end user of the technique. A systematic study of the different flows and their possible unification is missing. This contribution is intended to reveal the pros and cons of different SMO approaches and verification metrics, understand the commonalities and differences, and provide a generic guideline for RET selection via SMO. The paper discusses three different types of variation that commonly arise in SMO, namely pattern preparation and selection, availability of a relevant OPC recipe for a freeform source, and finally the metrics used in source verification. Several pattern selection algorithms are compared and the advantages of systematic pattern selection algorithms are discussed. In the absence of a full resist model for SMO, an alternative SMO flow without a full resist model is reviewed. A preferred verification flow with quality metrics of DOF and MEEF is examined.

  12. QPF verification using different radar-based analyses: a case study

    NASA Astrophysics Data System (ADS)

    Moré, J.; Sairouni, A.; Rigo, T.; Bravo, M.; Mercader, J.

    2009-09-01

    Verification of QPF in NWP models has always been challenging, not only in knowing which scores best quantify a particular skill of a model but also in choosing the most appropriate methodology when comparing forecasts with observations. On the one hand, an objective verification technique can provide conclusions that are not in agreement with those obtained by the "eyeball" method. Consequently, QPF can provide valuable information to forecasters in spite of having poor scores. On the other hand, there are difficulties in knowing the "truth", so different results can be achieved depending on the procedures used to obtain the precipitation analysis. The aim of this study is to show the importance of combining different precipitation analyses and verification methodologies to obtain a better knowledge of the skills of a forecasting system. In particular, a short-range precipitation forecasting system based on MM5 at 12 km coupled with LAPS is studied for a local convective precipitation event that took place in the NE Iberian Peninsula on 3 October 2008. For this purpose, a variety of verification methods (dichotomous, recalibration and object-oriented methods) are used to verify this case study. At the same time, different precipitation analyses, obtained by interpolating radar data with different techniques, are used in the verification process.

  13. Development of independent MU/treatment time verification algorithm for non-IMRT treatment planning: A clinical experience

    NASA Astrophysics Data System (ADS)

    Tatli, Hamza; Yucel, Derya; Yilmaz, Sercan; Fayda, Merdan

    2018-02-01

    The aim of this study is to develop an algorithm for independent MU/treatment time (TT) verification for non-IMRT treatment plans, as part of a QA program to ensure treatment delivery accuracy. Two radiotherapy delivery units and their treatment planning systems (TPS) were commissioned at the Liv Hospital Radiation Medicine Center, Tbilisi, Georgia. Beam data were collected according to the vendors' collection guidelines and AAPM report recommendations, and processed in Microsoft Excel during in-house algorithm development. The algorithm is designed and optimized to calculate SSD and SAD treatment plans, based on the AAPM TG-114 dose calculation recommendations, and is coded and embedded in an MS Excel spreadsheet as a preliminary verification algorithm (VA). Treatment verification plans were created by the TPSs based on IAEA TRS-430 recommendations and also calculated by the VA; point measurements were collected with a solid water phantom and compared. The study showed that the in-house VA can be used for MU/TT verification of non-IMRT plans.
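    A minimal sketch of a TG-114-style monitor-unit check for a simple SAD photon field is shown below. The factor names follow common hand-calculation conventions and the numeric values are placeholders, not the commissioned beam data or spreadsheet of this study.

    # Sketch: TG-114-style independent MU check for a simple SAD field.
    # All factor values below are placeholders, not commissioned beam data.
    def monitor_units_sad(dose_cgy, ref_dose_rate=1.0, sc=1.0, sp=1.0,
                          tpr=1.0, wedge=1.0, off_axis=1.0, isf=1.0):
        """MU = D / (D_ref per MU * Sc * Sp * TPR * WF * OAR * ISF)."""
        return dose_cgy / (ref_dose_rate * sc * sp * tpr * wedge * off_axis * isf)

    # Example: 200 cGy at isocentre, 10 cm depth, 10x10 cm open field (placeholder factors)
    mu = monitor_units_sad(dose_cgy=200.0, ref_dose_rate=1.0, sc=1.00, sp=1.00, tpr=0.74)
    print(f"independent MU estimate: {mu:.0f} MU")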

  14. The effect of mystery shopper reports on age verification for tobacco purchases.

    PubMed

    Krevor, Brad S; Ponicki, William R; Grube, Joel W; DeJong, William

    2011-09-01

    Mystery shops involving attempted tobacco purchases by young buyers have been implemented in order to monitor retail stores' performance in refusing underage sales. Anecdotal evidence suggests that mystery shop visits with immediate feedback to store personnel can improve age verification. This study investigated the effect of monthly and twice-monthly mystery shop reports on age verification. Mystery shoppers visited 45 Walgreens stores 20 times. The stores were randomly assigned to 1 of 3 conditions. Control group stores received no feedback, whereas 2 treatment groups received feedback communications on every visit (twice monthly) or on every second visit (monthly) after baseline. Logit regression models tested whether each treatment group improved verification rates relative to the control group. Postbaseline verification rates were higher in both treatment groups than in the control group, but only the stores receiving monthly communications had a significantly greater improvement compared with the control group stores. Verification rates increased significantly during the study period for all 3 groups, with delayed improvement among control group stores. Communication between managers regarding the mystery shop program may account for the delayed age-verification improvements observed in the control group stores. Encouraging interstore communication might extend the benefits of mystery shop programs beyond those stores that receive this intervention. Copyright © Taylor & Francis Group, LLC

  15. RF model of the distribution system as a communication channel, phase 2. Volume 2: Task reports

    NASA Technical Reports Server (NTRS)

    Rustay, R. C.; Gajjar, J. T.; Rankin, R. W.; Wentz, R. C.; Wooding, R.

    1982-01-01

    Based on the established feasibility of predicting, via a model, the propagation of Power Line Frequency on radial type distribution feeders, verification studies comparing model predictions against measurements were undertaken using more complicated feeder circuits and situations. Detailed accounts of the major tasks are presented. These include: (1) verification of model; (2) extension, implementation, and verification of perturbation theory; (3) parameter sensitivity; (4) transformer modeling; and (5) compensation of power distribution systems for enhancement of power line carrier communication reliability.

  16. Computational-hydrodynamic studies of the Noh compressible flow problem using non-ideal equations of state

    NASA Astrophysics Data System (ADS)

    Honnell, Kevin; Burnett, Sarah; Yorke, Chloe'; Howard, April; Ramsey, Scott

    2017-06-01

    The Noh problem is a classic verification problem in the field of compressible flows. Simple to conceptualize, it is nonetheless difficult for numerical codes to predict correctly, making it an ideal code-verification test bed. In its original incarnation, the fluid is a simple ideal gas; once validated, however, these codes are often used to study highly non-ideal fluids and solids. In this work the classic Noh problem is extended beyond the commonly studied polytropic ideal gas to more realistic equations of state (EOS), including the stiff gas, the Noble-Abel gas, and the Carnahan-Starling hard-sphere fluid, thus enabling verification studies to be performed on more physically realistic fluids. Exact solutions are compared with numerical results obtained from the Lagrangian hydrocode FLAG, developed at Los Alamos. For these more realistic EOSs, the simulation errors decreased in magnitude both at the origin and at the shock, but also spread more broadly about these points compared to the ideal EOS. The overall spatial convergence rate remained first order.
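    For reference, the classic ideal-gas form of the problem has a closed-form solution; the sketch below gives the standard spherical case (gamma = 5/3, unit density, unit inward speed) that the non-ideal extensions generalize. It is not the authors' non-ideal EOS solution.

    # Sketch: exact solution of the classic spherical Noh problem for an
    # ideal gas (gamma = 5/3, rho0 = 1, inward speed u0 = 1).
    def noh_exact_spherical(r, t, gamma=5.0 / 3.0, rho0=1.0, u0=1.0):
        """Return (density, velocity, pressure) at radius r > 0 and time t > 0."""
        shock_radius = 0.5 * (gamma - 1.0) * u0 * t            # D = (gamma - 1)|u0| / 2
        if r < shock_radius:                                    # stagnant post-shock region
            rho = rho0 * ((gamma + 1.0) / (gamma - 1.0)) ** 3
            return rho, 0.0, 0.5 * (gamma - 1.0) * rho * u0 ** 2
        rho = rho0 * (1.0 + u0 * t / r) ** 2                    # unshocked converging gas
        return rho, -u0, 0.0

    print(noh_exact_spherical(r=0.1, t=0.6))   # post-shock density 64 for gamma = 5/3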

  17. Online 3D EPID-based dose verification: Proof of concept.

    PubMed

    Spreeuw, Hanno; Rozendaal, Roel; Olaciregui-Ruiz, Igor; González, Patrick; Mans, Anton; Mijnheer, Ben; van Herk, Marcel

    2016-07-01

    Delivery errors during radiotherapy may lead to medical harm and reduced life expectancy for patients. Such serious incidents can be avoided by performing dose verification online, i.e., while the patient is being irradiated, creating the possibility of halting the linac in case of a large overdosage or underdosage. The offline EPID-based 3D in vivo dosimetry system clinically employed at our institute is in principle suited for online treatment verification, provided the system is able to complete 3D dose reconstruction and verification within 420 ms, the present acquisition time of a single EPID frame. It is the aim of this study to show that our EPID-based dosimetry system can be made fast enough to achieve online 3D in vivo dose verification. The current dose verification system was sped up in two ways. First, a new software package was developed to perform all computations that are not dependent on portal image acquisition separately, thus removing the need for doing these calculations in real time. Second, the 3D dose reconstruction algorithm was sped up via a new, multithreaded implementation. Dose verification was implemented by comparing planned with reconstructed 3D dose distributions delivered to two regions in a patient: the target volume and the nontarget volume receiving at least 10 cGy. In both volumes, the mean dose is compared, while in the nontarget volume, the near-maximum dose (D2) is compared as well. The real-time dosimetry system was tested by irradiating an anthropomorphic phantom with three VMAT plans: a 6 MV head-and-neck treatment plan, a 10 MV rectum treatment plan, and a 10 MV prostate treatment plan. In all plans, two types of serious delivery errors were introduced. The functionality of automatically halting the linac was also implemented and tested. The precomputation time per treatment was ∼180 s/treatment arc, depending on gantry angle resolution. The complete processing of a single portal frame, including dose verification, took 266 ± 11 ms on a dual octocore Intel Xeon E5-2630 CPU running at 2.40 GHz. The introduced delivery errors were detected after 5-10 s irradiation time. A prototype online 3D dose verification tool using portal imaging has been developed and successfully tested for two different kinds of gross delivery errors. Thus, online 3D dose verification has been technologically achieved.
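    The per-volume checks described above reduce to simple statistics on the planned and reconstructed dose grids; the sketch below shows one way to compute them, with D2 taken as the 98th-percentile voxel dose. Dose arrays and masks are hypothetical placeholders.

    # Sketch: mean-dose and near-maximum-dose (D2) comparison between planned
    # and reconstructed dose grids inside two volumes of interest.
    import numpy as np

    def near_max_d2(dose_voxels):
        """Near-maximum dose, here the 98th percentile of the voxel doses."""
        return np.percentile(dose_voxels, 98.0)

    def pct_diff(reconstructed, planned):
        return 100.0 * (reconstructed - planned) / planned

    def compare_volumes(planned, reconstructed, target_mask, nontarget_mask):
        return {
            "target mean diff (%)": pct_diff(reconstructed[target_mask].mean(),
                                             planned[target_mask].mean()),
            "non-target mean diff (%)": pct_diff(reconstructed[nontarget_mask].mean(),
                                                 planned[nontarget_mask].mean()),
            "non-target D2 diff (%)": pct_diff(near_max_d2(reconstructed[nontarget_mask]),
                                               near_max_d2(planned[nontarget_mask])),
        }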

  18. Quantitative assessment of the physical potential of proton beam range verification with PET/CT.

    PubMed

    Knopf, A; Parodi, K; Paganetti, H; Cascio, E; Bonab, A; Bortfeld, T

    2008-08-07

    A recent clinical pilot study demonstrated the feasibility of offline PET/CT range verification for proton therapy treatments. In vivo PET measurements are challenged by blood perfusion, variations of tissue compositions, patient motion and image co-registration uncertainties. Besides these biological and treatment specific factors, the accuracy of the method is constrained by the underlying physical processes. This phantom study distinguishes physical factors from other factors, assessing the reproducibility, consistency and sensitivity of the PET/CT range verification method. A spread-out Bragg-peak (SOBP) proton field was delivered to a phantom consisting of poly-methyl methacrylate (PMMA), lung and bone equivalent material slabs. PET data were acquired in listmode at a commercial PET/CT scanner available within 10 min walking distance from the proton therapy unit. The measured PET activity distributions were compared to simulations of the PET signal based on Geant4 and FLUKA Monte Carlo (MC) codes. To test the reproducibility of the measured PET signal, data from two independent measurements at the same geometrical position in the phantom were compared. Furthermore, activation depth profiles within identical material arrangements but at different positions within the irradiation field were compared to test the consistency of the measured PET signal. Finally, activation depth profiles through air/lung, air/bone and lung/bone interfaces parallel as well as at 6 degrees to the beam direction were studied to investigate the sensitivity of the PET/CT range verification method. The reproducibility and the consistency of the measured PET signal were found to be of the same order of magnitude. They determine the physical accuracy of the PET measurement to be about 1 mm. However, range discrepancies up to 2.6 mm between two measurements and range variations up to 2.6 mm within one measurement were found at the beam edge and at the edge of the field of view (FOV) of the PET scanner. PET/CT range verification was found to be able to detect small range modifications in the presence of complex tissue inhomogeneities. This study indicates the physical potential of the PET/CT verification method to detect the full-range characteristic of the delivered dose in the patient.
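    One common way to quantify "range" in such comparisons is the depth of the distal 50% falloff of the activity depth profile; the sketch below extracts it by linear interpolation and reports the shift between two profiles. The 50% level and the synthetic profiles are illustrative assumptions, not the exact analysis of this study.

    # Sketch: range shift between two activation depth profiles, defined here
    # as the difference in their distal 50% falloff depths.
    import numpy as np

    def distal_falloff_depth(depth_mm, activity, level=0.5):
        a = activity / activity.max()
        for i in range(int(np.argmax(a)), len(a) - 1):     # search beyond the peak
            if a[i] >= level > a[i + 1]:                    # bracket the falloff level
                frac = (a[i] - level) / (a[i] - a[i + 1])
                return depth_mm[i] + frac * (depth_mm[i + 1] - depth_mm[i])
        return float("nan")

    depth = np.arange(0.0, 120.0, 1.0)
    profile_a = np.exp(-((depth - 80.0) / 25.0) ** 2)       # synthetic profiles
    profile_b = np.exp(-((depth - 82.5) / 25.0) ** 2)
    shift = distal_falloff_depth(depth, profile_b) - distal_falloff_depth(depth, profile_a)
    print(f"range shift: {shift:.1f} mm")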

  19. Quantitative assessment of the physical potential of proton beam range verification with PET/CT

    NASA Astrophysics Data System (ADS)

    Knopf, A.; Parodi, K.; Paganetti, H.; Cascio, E.; Bonab, A.; Bortfeld, T.

    2008-08-01

    A recent clinical pilot study demonstrated the feasibility of offline PET/CT range verification for proton therapy treatments. In vivo PET measurements are challenged by blood perfusion, variations of tissue compositions, patient motion and image co-registration uncertainties. Besides these biological and treatment specific factors, the accuracy of the method is constrained by the underlying physical processes. This phantom study distinguishes physical factors from other factors, assessing the reproducibility, consistency and sensitivity of the PET/CT range verification method. A spread-out Bragg-peak (SOBP) proton field was delivered to a phantom consisting of poly-methyl methacrylate (PMMA), lung and bone equivalent material slabs. PET data were acquired in listmode at a commercial PET/CT scanner available within 10 min walking distance from the proton therapy unit. The measured PET activity distributions were compared to simulations of the PET signal based on Geant4 and FLUKA Monte Carlo (MC) codes. To test the reproducibility of the measured PET signal, data from two independent measurements at the same geometrical position in the phantom were compared. Furthermore, activation depth profiles within identical material arrangements but at different positions within the irradiation field were compared to test the consistency of the measured PET signal. Finally, activation depth profiles through air/lung, air/bone and lung/bone interfaces parallel as well as at 6° to the beam direction were studied to investigate the sensitivity of the PET/CT range verification method. The reproducibility and the consistency of the measured PET signal were found to be of the same order of magnitude. They determine the physical accuracy of the PET measurement to be about 1 mm. However, range discrepancies up to 2.6 mm between two measurements and range variations up to 2.6 mm within one measurement were found at the beam edge and at the edge of the field of view (FOV) of the PET scanner. PET/CT range verification was found to be able to detect small range modifications in the presence of complex tissue inhomogeneities. This study indicates the physical potential of the PET/CT verification method to detect the full-range characteristic of the delivered dose in the patient.

  20. Verification of intensity modulated radiation therapy beams using a tissue equivalent plastic scintillator dosimetry system

    NASA Astrophysics Data System (ADS)

    Petric, Martin Peter

    This thesis describes the development and implementation of a novel method for the dosimetric verification of intensity modulated radiation therapy (IMRT) fields with several advantages over current techniques. Through the use of a tissue equivalent plastic scintillator sheet viewed by a charge-coupled device (CCD) camera, this method provides a truly tissue equivalent dosimetry system capable of efficiently and accurately performing field-by-field verification of IMRT plans. This work was motivated by an initial study comparing two IMRT treatment planning systems. The clinical functionality of BrainLAB's BrainSCAN and Varian's Helios IMRT treatment planning systems were compared in terms of implementation and commissioning, dose optimization, and plan assessment. Implementation and commissioning revealed differences in the beam data required to characterize the beam prior to use with the BrainSCAN system requiring higher resolution data compared to Helios. This difference was found to impact on the ability of the systems to accurately calculate dose for highly modulated fields, with BrainSCAN being more successful than Helios. The dose optimization and plan assessment comparisons revealed that while both systems use considerably different optimization algorithms and user-control interfaces, they are both capable of producing substantially equivalent dose plans. The extensive use of dosimetric verification techniques in the IMRT treatment planning comparison study motivated the development and implementation of a novel IMRT dosimetric verification system. The system consists of a water-filled phantom with a tissue equivalent plastic scintillator sheet built into the top surface. Scintillation light is reflected by a plastic mirror within the phantom towards a viewing window where it is captured using a CCD camera. Optical photon spread is removed using a micro-louvre optical collimator and by deconvolving a glare kernel from the raw images. Characterization of this new dosimetric verification system indicates excellent dose response and spatial linearity, high spatial resolution, and good signal uniformity and reproducibility. Dosimetric results from square fields, dynamic wedged fields, and a 7-field head and neck IMRT treatment plan indicate good agreement with film dosimetry distributions. Efficiency analysis of the system reveals a 50% reduction in time requirements for field-by-field verification of a 7-field IMRT treatment plan compared to film dosimetry.

  1. The Maximal Oxygen Uptake Verification Phase: a Light at the End of the Tunnel?

    PubMed

    Schaun, Gustavo Z

    2017-12-08

    Commonly performed during an incremental test to exhaustion, maximal oxygen uptake (V̇O2max) assessment has become a recurring practice in clinical and experimental settings. To validate the test, several criteria were proposed. In this context, the plateau in oxygen uptake (V̇O2) is inconsistent in its frequency, reducing its usefulness as a robust method to determine "true" V̇O2max. Moreover, secondary criteria previously suggested, such as expiratory exchange ratios or percentages of maximal heart rate, are highly dependent on protocol design and often are achieved at V̇O2 percentages well below V̇O2max. Thus, an alternative method termed the verification phase was proposed. Currently, it is clear that the verification phase can be a practical and sensitive method to confirm V̇O2max; however, procedures to conduct it are not standardized across the literature and no previous research has tried to summarize how it has been employed. Therefore, in this review the knowledge on the verification phase was updated, while suggestions on how it can be performed (e.g. intensity, duration, recovery) were provided according to population and protocol design. Future studies should focus on identifying a verification protocol feasible for different populations and on comparing square-wave and multistage verification phases. Additionally, studies assessing verification phases in different patient populations are still warranted.
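    The decision rule a verification phase supports can be stated compactly: accept the incremental-test value as V̇O2max only if the verification-phase peak does not exceed it by more than a chosen tolerance. The 3% tolerance in the sketch below is an assumed placeholder, not a value taken from this review.

    # Sketch: accept the incremental-test VO2max if the verification-phase
    # peak does not exceed it by more than a tolerance (placeholder 3%).
    def vo2max_confirmed(vo2_incremental, vo2_verification, tolerance_pct=3.0):
        excess_pct = 100.0 * (vo2_verification - vo2_incremental) / vo2_incremental
        return excess_pct <= tolerance_pct

    print(vo2max_confirmed(vo2_incremental=3.50, vo2_verification=3.55))   # True: within 3%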

  2. Verification of operational solar flare forecast: Case of Regional Warning Center Japan

    NASA Astrophysics Data System (ADS)

    Kubo, Yûki; Den, Mitsue; Ishii, Mamoru

    2017-08-01

    In this article, we discuss a verification study of an operational solar flare forecast in the Regional Warning Center (RWC) Japan. The RWC Japan has been issuing four-categorical deterministic solar flare forecasts for a long time. In this forecast verification study, we used solar flare forecast data accumulated over 16 years (from 2000 to 2015). We compiled the forecast data together with solar flare data obtained with the Geostationary Operational Environmental Satellites (GOES). Using the compiled data sets, we estimated some conventional scalar verification measures with 95% confidence intervals. We also estimated a multi-categorical scalar verification measure. These scalar verification measures were compared with those obtained by the persistence method and recurrence method. As solar activity varied during the 16 years, we also applied verification analyses to four subsets of forecast-observation pair data with different solar activity levels. We cannot conclude definitely that there are significant performance differences between the forecasts of RWC Japan and the persistence method, although a slightly significant difference is found for some event definitions. We propose to use a scalar verification measure to assess the judgment skill of the operational solar flare forecast. Finally, we propose a verification strategy for deterministic operational solar flare forecasting. For dichotomous forecast, a set of proposed verification measures is a frequency bias for bias, proportion correct and critical success index for accuracy, probability of detection for discrimination, false alarm ratio for reliability, Peirce skill score for forecast skill, and symmetric extremal dependence index for association. For multi-categorical forecast, we propose a set of verification measures as marginal distributions of forecast and observation for bias, proportion correct for accuracy, correlation coefficient and joint probability distribution for association, the likelihood distribution for discrimination, the calibration distribution for reliability and resolution, and the Gandin-Murphy-Gerrity score and judgment skill score for skill.
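    The dichotomous measures proposed above all derive from a 2x2 contingency table of forecasts and observed flare events; the sketch below computes the main ones from hypothetical counts (the symmetric extremal dependence index is omitted for brevity).

    # Sketch: dichotomous verification measures from a 2x2 contingency table
    # with hits a, false alarms b, misses c, correct rejections d (hypothetical counts).
    def dichotomous_scores(a, b, c, d):
        n = a + b + c + d
        pod = a / (a + c)                    # probability of detection (discrimination)
        far = b / (a + b)                    # false alarm ratio (reliability)
        csi = a / (a + b + c)                # critical success index (accuracy)
        bias = (a + b) / (a + c)             # frequency bias
        pc = (a + d) / n                     # proportion correct (accuracy)
        peirce = pod - b / (b + d)           # Peirce skill score (forecast skill)
        return {"POD": pod, "FAR": far, "CSI": csi, "bias": bias, "PC": pc, "PSS": peirce}

    print(dichotomous_scores(a=30, b=20, c=10, d=940))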

  3. SU-E-T-602: Patient-Specific Online Dose Verification Based On Transmission Detector Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thoelking, J; Yuvaraj, S; Jens, F

    Purpose: Intensity modulated radiotherapy requires a comprehensive quality assurance program in general and ideally independent verification of dose delivery. Since conventional 2D detector arrays allow only pre-treatment verification, there is a debate concerning the need for online dose verification. This study presents the clinical performance, including dosimetric plan verification in 2D as well as in 3D, and the error detection abilities of a new transmission detector (TD) for online dose verification of a 6 MV photon beam. Methods: To validate the dosimetric performance of the new device, dose reconstructions based on TD measurements were compared to a conventional pre-treatment verification method (reference) and the treatment planning system (TPS) for 18 IMRT and VMAT treatment plans. Furthermore, dose reconstruction inside the patient based on the TD read-out was evaluated by comparing various dose volume indices and 3D gamma evaluations against independent dose computation and the TPS. To investigate the sensitivity of the new device, different types of systematic and random errors in leaf positions and linac output were introduced into IMRT treatment sequences. Results: The 2D gamma index evaluation of the transmission-detector-based dose reconstruction showed excellent agreement for all IMRT and VMAT plans compared to reference measurements (99.3 ± 1.2)% and the TPS (99.1 ± 0.7)%. Good agreement was also obtained for 3D dose reconstruction based on the TD read-out compared to dose computation (mean gamma value of PTV = 0.27 ± 0.04). Only a minimal dose underestimation within the target volume was observed when analyzing DVH indices (<1%). Positional errors in leaf banks larger than 1 mm and errors in linac output larger than 2% could be clearly identified with the TD. Conclusion: Since the 2D and 3D evaluations for all IMRT and VMAT treatment plans were in excellent agreement with reference measurements and dose computation, the new TD is suitable for routine treatment plan verification. Funding Support, Disclosures, and Conflict of Interest: COIs: Frank Lohr: Elekta: research grant, travel grants, teaching honoraria; IBA: research grant, travel grants, teaching honoraria, advisory board; C-Rad: board honoraria, travel grants. Frederik Wenz: Elekta: research grant, teaching honoraria, consultant, advisory board; Zeiss: research grant, teaching honoraria, patent. Hansjoerg Wertz: Elekta: research grant, teaching honoraria; IBA: research grant.

  4. Verification of endotracheal intubation in obese patients - temporal comparison of ultrasound vs. auscultation and capnography.

    PubMed

    Pfeiffer, P; Bache, S; Isbye, D L; Rudolph, S S; Rovsing, L; Børglum, J

    2012-05-01

    Ultrasound (US) may have an emerging role as an adjunct in verification of endotracheal intubation. Obtaining optimal US images in obese patients is generally regarded as more difficult than for other patients. This study compared the time consumption of bilateral lung US with auscultation and capnography for verifying endotracheal intubation in obese patients. A prospective, paired and investigator-blinded study performed in the operating theatre. Twenty-four adult patients requiring endotracheal intubation for bariatric surgery were included. During post-intubation bag ventilation, bilateral lung US was performed for detection of lung sliding indicating lung ventilation, simultaneously with capnography and auscultation of the epigastrium and chest. The primary outcome measure was the time difference to confirmed endotracheal intubation between US and auscultation alone. The secondary outcome measure was the time difference between US and auscultation combined with capnography. Both methods verified endotracheal tube placement in all patients. No significant difference was found between US and auscultation alone. Median time for verification by auscultation alone was 47.5 s [interquartile range (IQR), 40-51 s], with a mean difference of -0.3 s in favor of US (95% confidence interval -3.5-2.9 s), P = 0.87. Comparing US with the combination of auscultation and capnography, there was a significant difference between the two methods. Median time for verification by US was 43 s (IQR 40-51 s) vs. 55 s (IQR 46-65 s), P < 0.0001. In obese patients, verification of endotracheal tube placement with US is as fast as auscultation alone and faster than the standard method of auscultation and capnography. © 2012 The Authors. Acta Anaesthesiologica Scandinavica © 2012 The Acta Anaesthesiologica Scandinavica Foundation.

  5. Verification of Advective Bar Elements Implemented in the Aria Thermal Response Code.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Brantley

    2016-01-01

    A verification effort was undertaken to evaluate the implementation of the new advective bar capability in the Aria thermal response code. Several approaches to the verification process were taken: a mesh refinement study to demonstrate solution convergence in the fluid and the solid, visually examining the mapping of the advective bar element nodes to the surrounding surfaces, and a comparison of solutions produced using the advective bars for simple geometries with solutions from commercial CFD software. The mesh refinement study has shown solution convergence for simple pipe flow in both temperature and velocity. Guidelines were provided to achieve appropriate meshes between the advective bar elements and the surrounding volume. Simulations of pipe flow using advective bar elements in Aria have been compared to simulations using the commercial CFD software ANSYS Fluent and provided comparable solutions in temperature and velocity, supporting proper implementation of the new capability. Acknowledgements: A special thanks goes to Dean Dobranich for his guidance and expertise through all stages of this effort. His advice and feedback were instrumental to its completion. Thanks also goes to Sam Subia and Tolu Okusanya for helping to plan many of the verification activities performed in this document. Thank you to Sam, Justin Lamb and Victor Brunini for their assistance in resolving issues encountered with running the advective bar element model. Finally, thanks goes to Dean, Sam, and Adam Hetzler for reviewing the document and providing very valuable comments.
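    Mesh refinement studies of this kind are often summarized by an observed order of accuracy computed from the same quantity on three systematically refined grids; a sketch of that calculation follows (the refinement ratio and sample values are hypothetical, and this is not necessarily the exact procedure of the report).

    # Sketch: observed order of accuracy from three grids with constant
    # refinement ratio r (values below are hypothetical).
    import math

    def observed_order(f_coarse, f_medium, f_fine, r=2.0):
        return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

    print(f"observed order p = {observed_order(351.2, 350.3, 349.86):.2f}")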

  6. A study of applications scribe frame data verifications using design rule check

    NASA Astrophysics Data System (ADS)

    Saito, Shoko; Miyazaki, Masaru; Sakurai, Mitsuo; Itoh, Takahisa; Doi, Kazumasa; Sakurai, Norioko; Okada, Tomoyuki

    2013-06-01

    In semiconductor manufacturing, scribe frame data is generally generated for each LSI product according to its specific process design. Scribe frame data is designed based on definition tables for scanner alignment, wafer inspection and customer-specified marks. At the end, we check that the scribe frame design conforms to the specifications of the alignment and inspection marks. Recently, in the COT (customer-owned tooling) business or in new technology development, there has been no effective verification method for scribe frame data, and verification takes a lot of time. Therefore, we tried to establish a new verification method for scribe frame data by applying pattern matching and DRC (Design Rule Check), which is used in device verification. We show the scheme of scribe frame data verification using DRC that we applied. First, verification rules are created based on the specifications of the scanner, inspection and others, and a mark library is also created for pattern matching. Next, DRC verification, which includes pattern matching against the mark library, is performed on the scribe frame data. As a result, our experiments demonstrated that, by use of pattern matching and DRC verification, our new method can yield speed improvements of more than 12 percent compared to conventional mark checks by visual inspection, and the inspection time can be reduced to less than 5 percent if multi-CPU processing is used. Our method delivers both short processing time and excellent accuracy when checking many marks. It is easy to maintain and provides an easy way for COT customers to use original marks. We believe that our new DRC verification method for scribe frame data is indispensable and mutually beneficial.

  7. Monte Carlo simulations to replace film dosimetry in IMRT verification.

    PubMed

    Goetzfried, Thomas; Rickhey, Mark; Treutwein, Marius; Koelbl, Oliver; Bogner, Ludwig

    2011-01-01

    Patient-specific verification of intensity-modulated radiation therapy (IMRT) plans can be done by dosimetric measurements or by independent dose or monitor unit calculations. The aim of this study was the clinical evaluation of IMRT verification based on a fast Monte Carlo (MC) program with regard to possible benefits compared to commonly used film dosimetry. 25 head-and-neck IMRT plans were recalculated by a pencil-beam-based treatment planning system (TPS) using an appropriate quality assurance (QA) phantom. All plans were verified both by film and diode dosimetry and compared to MC simulations. The irradiated films, the results of diode measurements and the computed dose distributions were evaluated, and the data were compared on the basis of gamma maps and dose-difference histograms. Average deviations in the high-dose region between diode measurements and point dose calculations performed with the TPS and MC program were 0.7 ± 2.7% and 1.2 ± 3.1%, respectively. For film measurements, the mean gamma values with 3% dose difference and 3 mm distance-to-agreement were 0.74 ± 0.28 (TPS as reference) with dose deviations up to 10%. Corresponding values were significantly reduced to 0.34 ± 0.09 for MC dose calculation. The total time needed for the two verification procedures is comparable; however, the MC approach is far less labor-intensive. The presented study showed that independent dose calculation verification of IMRT plans with a fast MC program has the potential to eclipse film dosimetry more and more in the near future. Thus, the linac-specific QA part will necessarily become more important. In combination with MC simulations and due to the simple set-up, point-dose measurements for dosimetric plausibility checks are recommended at least in the IMRT introduction phase. Copyright © 2010. Published by Elsevier GmbH.
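    The 3%/3 mm gamma criterion used above combines a dose-difference and a distance-to-agreement test into a single index; the simplified 1D, globally normalized sketch below illustrates the calculation (clinical tools perform full 2D/3D searches, and the profiles here are placeholders).

    # Sketch: simplified 1D global gamma index with 3% / 3 mm criteria.
    import numpy as np

    def gamma_1d(x_mm, dose_ref, dose_eval, dd_pct=3.0, dta_mm=3.0):
        d_norm = dd_pct / 100.0 * dose_ref.max()            # global dose normalization
        gammas = np.empty_like(dose_ref)
        for i, (xr, dr) in enumerate(zip(x_mm, dose_ref)):
            dist2 = ((x_mm - xr) / dta_mm) ** 2
            dose2 = ((dose_eval - dr) / d_norm) ** 2
            gammas[i] = np.sqrt((dist2 + dose2).min())      # minimum over evaluated points
        return gammas

    x = np.linspace(-50.0, 50.0, 201)
    ref = np.exp(-(x / 30.0) ** 2)                          # placeholder dose profiles
    ev = 1.01 * np.exp(-((x - 1.0) / 30.0) ** 2)
    g = gamma_1d(x, ref, ev)
    print(f"mean gamma = {g.mean():.2f}, pass rate = {(g <= 1.0).mean():.1%}")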

  8. Online 3D EPID-based dose verification: Proof of concept

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spreeuw, Hanno; Rozendaal, Roel, E-mail: r.rozenda

    Purpose: Delivery errors during radiotherapy may lead to medical harm and reduced life expectancy for patients. Such serious incidents can be avoided by performing dose verification online, i.e., while the patient is being irradiated, creating the possibility of halting the linac in case of a large overdosage or underdosage. The offline EPID-based 3D in vivo dosimetry system clinically employed at our institute is in principle suited for online treatment verification, provided the system is able to complete 3D dose reconstruction and verification within 420 ms, the present acquisition time of a single EPID frame. It is the aim of this study to show that our EPID-based dosimetry system can be made fast enough to achieve online 3D in vivo dose verification. Methods: The current dose verification system was sped up in two ways. First, a new software package was developed to perform all computations that are not dependent on portal image acquisition separately, thus removing the need for doing these calculations in real time. Second, the 3D dose reconstruction algorithm was sped up via a new, multithreaded implementation. Dose verification was implemented by comparing planned with reconstructed 3D dose distributions delivered to two regions in a patient: the target volume and the nontarget volume receiving at least 10 cGy. In both volumes, the mean dose is compared, while in the nontarget volume, the near-maximum dose (D2) is compared as well. The real-time dosimetry system was tested by irradiating an anthropomorphic phantom with three VMAT plans: a 6 MV head-and-neck treatment plan, a 10 MV rectum treatment plan, and a 10 MV prostate treatment plan. In all plans, two types of serious delivery errors were introduced. The functionality of automatically halting the linac was also implemented and tested. Results: The precomputation time per treatment was ∼180 s/treatment arc, depending on gantry angle resolution. The complete processing of a single portal frame, including dose verification, took 266 ± 11 ms on a dual octocore Intel Xeon E5-2630 CPU running at 2.40 GHz. The introduced delivery errors were detected after 5–10 s irradiation time. Conclusions: A prototype online 3D dose verification tool using portal imaging has been developed and successfully tested for two different kinds of gross delivery errors. Thus, online 3D dose verification has been technologically achieved.

  9. Validation of mesoscale models

    NASA Technical Reports Server (NTRS)

    Kuo, Bill; Warner, Tom; Benjamin, Stan; Koch, Steve; Staniforth, Andrew

    1993-01-01

    The topics discussed include the following: verification of cloud prediction from the PSU/NCAR mesoscale model; results from MAPS/NGM verification comparisons and MAPS observation sensitivity tests to ACARS and profiler data; systematic errors and mesoscale verification for a mesoscale model; and the COMPARE Project and the CME.

  10. Verification of NWP Cloud Properties using A-Train Satellite Observations

    NASA Astrophysics Data System (ADS)

    Kucera, P. A.; Weeks, C.; Wolff, C.; Bullock, R.; Brown, B.

    2011-12-01

    Recently, the NCAR Model Evaluation Tools (MET) has been enhanced to incorporate satellite observations for the verification of Numerical Weather Prediction (NWP) cloud products. We have developed tools that match fields spatially (both in the vertical and horizontal dimensions) to compare NWP products with satellite observations. These matched fields provide diagnostic evaluation of cloud macro attributes such as vertical distribution of clouds, cloud top height, and the spatial and seasonal distribution of cloud fields. For this research study, we have focused on using CloudSat, CALIPSO, and MODIS observations to evaluate cloud fields for a variety of NWP fields and derived products. We have selected cases ranging from large, mid-latitude synoptic systems to well-organized tropical cyclones. For each case, we matched the observed cloud field with gridded model and/or derived product fields. CloudSat and CALIPSO observations and model fields were matched and compared in the vertical along the orbit track. MODIS data and model fields were matched and compared in the horizontal. We then use MET to compute the verification statistics to quantify the performance of the models in representing the cloud fields. In this presentation we will give a summary of our comparison and show verification results for both synoptic and tropical cyclone cases.

  11. The verification of LANDSAT data in the geographical analysis of wetlands in west Tennessee

    NASA Technical Reports Server (NTRS)

    Rehder, J.; Quattrochi, D. A.

    1978-01-01

    The reliability of LANDSAT imagery as a medium for identifying, delimiting, monitoring, measuring, and mapping wetlands in west Tennessee was assessed to verify LANDSAT as an accurate, efficient cartographic tool that could be employed by a wide range of users to study wetland dynamics. The verification procedure was based on the visual interpretation and measurement of multispectral imagery. The accuracy testing procedure was predicated on surrogate ground truth data gleaned from medium altitude imagery of the wetlands. Fourteen sites or case study areas were selected from individual 9 x 9 inch photo frames on the aerial photography. These sites were then used as data control calibration parameters for assessing the cartographic accuracy of the LANDSAT imagery. An analysis of results obtained from the verification tests indicated that 1:250,000 scale LANDSAT data were the most reliable scale of imagery for visually mapping and measuring wetlands using the area grid technique. The mean areal percentage of accuracy was 93.54 percent (real) and 96.93 percent (absolute). As a test of accuracy, the LANDSAT 1:250,000 scale overall wetland measurements were compared with an area cell mensuration of the swamplands from 1:130,000 scale color infrared U-2 aircraft imagery. The comparative totals substantiated the results from the LANDSAT verification procedure.

  12. Runtime Verification of Pacemaker Functionality Using Hierarchical Fuzzy Colored Petri-nets.

    PubMed

    Majma, Negar; Babamir, Seyed Morteza; Monadjemi, Amirhassan

    2017-02-01

    Today, implanted medical devices are increasingly used for many patients with diverse health problems. However, several runtime problems and errors are reported by the relevant organizations, some even resulting in patient death. One of these devices is the pacemaker. The pacemaker is a device that helps the patient regulate the heartbeat by connecting to the cardiac vessels. The device is directed by its software, so any failure in this software causes a serious malfunction. Therefore, this study aims at a better way to monitor the device's software behavior to decrease the failure risk. Accordingly, we supervise the runtime function and status of the software. Software verification means examining the limitations and needs of the system's users against the running software. In this paper, a method to verify the pacemaker software, based on the fuzzy function of the device, is presented. The functional limitations of the device are identified and expressed as fuzzy rules, and the device is then verified with a hierarchical Fuzzy Colored Petri-net (FCPN) formed from the software limits. Building on our experience from previous studies using (1) Fuzzy Petri-nets (FPN) to verify insulin pumps, (2) Colored Petri-nets (CPN) to verify the pacemaker, and (3) a software agent with Petri-net-based knowledge to verify the pacemaker, in this paper the runtime behavior of the pacemaker software is examined with an HFCPN. This is a step forward compared with the earlier work. The HFCPN used here reduces complexity compared with the FPN and CPN of our previous studies. By presenting the Petri-net (PN) in hierarchical form, the verification runtime decreased by 90.61% compared with the verification runtime in the earlier work. Since an inference engine is needed in runtime verification, the HFCPN is used to enhance the performance of the inference engine.

  13. Thermal Pollution Mathematical Model. Volume 4: Verification of Three-Dimensional Rigid-Lid Model at Lake Keowee. [environmental impact of thermal discharges from power plants]

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Nwadike, E. V.; Sinha, S. K.

    1980-01-01

    The rigid lid model was developed to predict three-dimensional temperature and velocity distributions in lakes. This model was verified at various sites (Lake Belews, Biscayne Bay, etc.), and the verification at Lake Keowee was the last of this series of verification runs. The verification at Lake Keowee included the following: (1) selecting the domain of interest, grid systems, and comparing the preliminary results with archival data; (2) obtaining actual ground truth and infrared scanner data both for summer and winter; and (3) using the model to predict the measured data for the above periods and comparing the predicted results with the actual data. The model results compared well with measured data. Thus, the model can be used as an effective predictive tool for future sites.

  14. Verification of clinical samples, positive in AMPLICOR Neisseria gonorrhoeae polymerase chain reaction, by 16S rRNA and gyrA compared with culture.

    PubMed

    Airell, Asa; Lindbäck, Emma; Ataker, Ferda; Pörnull, Kirsti Jalakas; Wretlind, Bengt

    2005-06-01

    We compared 956 samples tested with the AMPLICOR Neisseria gonorrhoeae polymerase chain reaction (PCR) (Roche), with species verification using the 16S rRNA gene versus verification using the gyrA gene. The culture method served as control. The gyrA verification uses pyrosequencing of the quinolone resistance-determining region of gyrA. Of 52 samples with an optical density ≥0.2 in PCR, 27 were negative in culture, two samples from the pharynx were false negative in culture and four samples from the pharynx were false positive in verification with 16S rRNA. Twenty-five samples showed growth of gonococci; 18 of the corresponding PCR samples were verified by both methods, three urine samples were positive only in gyrA, and one pharynx specimen was positive only in 16S rRNA. Three samples were lost. We conclude that AMPLICOR N. gonorrhoeae PCR with verification of the gyrA gene can be considered as a diagnostic tool in populations with a low prevalence of gonorrhoea and that pharynx specimens should not be analysed by PCR.

  15. Impact of radiation attenuation by a carbon fiber couch on patient dose verification

    NASA Astrophysics Data System (ADS)

    Yu, Chun-Yen; Chou, Wen-Tsae; Liao, Yi-Jen; Lee, Jeng-Hung; Liang, Ji-An; Hsu, Shih-Ming

    2017-02-01

    The aim of this study was to understand the difference between the measured and calculated irradiation attenuations obtained using two algorithms and to identify the influence of couch attenuation on patient dose verification. We performed eight tests of couch attenuation with two photon energies, two longitudinal couch positions, and two rail positions. The couch attenuation was determined using a radiation treatment planning system. The measured and calculated attenuations were compared. We also performed 12 verifications of head-and-neck and rectum cases by using a Delta phantom. The dose deviation (DD), distance to agreement (DTA), and gamma index of pencil-beam convolution (PBC) verifications were nearly the same. The agreement was least consistent for the anisotropic analytical algorithm (AAA) without the couch for the head-and-neck case, in which the DD, DTA, and gamma index were 74.4%, 99.3%, and 89%, respectively; for the rectum case, the corresponding values were 56.2%, 95.1%, and 92.4%. We suggest that dose verification should be performed using the following three metrics simultaneously: DD, DTA, and the gamma index.

  16. Comparison of OpenFOAM and EllipSys3D actuator line methods with (NEW) MEXICO results

    NASA Astrophysics Data System (ADS)

    Nathan, J.; Meyer Forsting, A. R.; Troldborg, N.; Masson, C.

    2017-05-01

    The Actuator Line Method has existed for more than a decade and has become a well-established choice for simulating wind rotors in computational fluid dynamics. Numerous implementations exist and are used in the wind energy research community. These codes have been verified against experimental data such as the MEXICO experiment, but verification against other codes has often been made only on a very broad scale. Therefore, this study first attempts a validation by comparing two different implementations, namely an adapted version of SOWFA/OpenFOAM and EllipSys3D, and then a verification by comparing against experimental results from the MEXICO and NEW MEXICO experiments.

  17. Experimental verification of a Monte Carlo-based MLC simulation model for IMRT dose calculations in heterogeneous media

    NASA Astrophysics Data System (ADS)

    Tyagi, N.; Curran, B. H.; Roberson, P. L.; Moran, J. M.; Acosta, E.; Fraass, B. A.

    2008-02-01

    IMRT often requires delivering small fields, which may suffer from electronic disequilibrium effects. The presence of heterogeneities, particularly low-density tissues, in patients complicates such situations. In this study, we report on verification of the DPM MC code for IMRT treatment planning in heterogeneous media, using a previously developed model of the Varian 120-leaf MLC. The purpose of this study is twofold: (a) to design a comprehensive set of experiments in heterogeneous media for verification of any dose calculation algorithm and (b) to verify our MLC model in heterogeneous geometries that mimic an actual patient geometry for IMRT treatment. The measurements were made using an IMRT head-and-neck phantom (CIRS phantom) and slab phantom geometries. Verification of the MLC model was carried out using point doses measured with an A14 slim line (SL) ion chamber inside tissue-equivalent and bone-equivalent materials in the CIRS phantom. Planar doses using lung- and bone-equivalent slabs were measured and compared using EDR films (Kodak, Rochester, NY).

  18. A new verification film system for routine quality control of radiation fields: Kodak EC-L.

    PubMed

    Hermann, A; Bratengeier, K; Priske, A; Flentje, M

    2000-06-01

    The use of modern irradiation techniques requires better verification films for determining set-up deviations and patient movements during the course of radiation treatment. This is an investigation of the image quality and time requirement of a new verification film system compared with a conventional portal film system. For conventional verifications we used Agfa Curix HT 1000 films, which were compared with the new Kodak EC-L film system. 344 Agfa Curix HT 1000 and 381 Kodak EC-L portal films of different tumor sites (prostate, rectum, head and neck) were visually judged on a light box by two experienced physicians. Subjective judgement of image quality, masking of films, and time requirement were checked. In this investigation, 68% of 175 Kodak EC-L ap/pa films were judged "good", 18% "moderate", and 14% "poor", whereas only 22% of 173 conventional ap/pa verification films (Agfa Curix HT 1000) were judged "good". The image quality, detail perception, and time required for film inspection of the new Kodak EC-L film system were significantly improved compared with standard portal films. The films could be read more accurately, and the detection of set-up deviations was facilitated.

  19. Self-verification motives at the collective level of self-definition.

    PubMed

    Chen, Serena; Chen, Karen Y; Shaw, Lindsay

    2004-01-01

    Three studies examined self-verification motives in relation to collective aspects of the self. Several moderators of collective self-verification were also examined--namely, the certainty with which collective self-views are held, the nature of one's ties to a source of self-verification, the salience of the collective self, and the importance of group identification. Evidence for collective self-verification emerged across all studies, particularly when collective self-views were held with high certainty (Studies 1 and 2), perceivers were somehow tied to the source of self-verification (Study 1), the collective self was salient (Study 2), and group identification was important (Study 3). To the authors' knowledge, these studies are the first to examine self-verification at the collective level of self-definition. The parallel and distinct ways in which self-verification processes may operate at different levels of self-definition are discussed.

  20. A Hybrid On-line Verification Method of Relay Setting

    NASA Astrophysics Data System (ADS)

    Gao, Wangyuan; Chen, Qing; Si, Ji; Huang, Xin

    2017-05-01

    Along with the rapid development of the power industry, grid structures are becoming more sophisticated. The validity and rationality of protective relaying are vital to the security of power systems. To increase the security of power systems, it is essential to verify the setting values of relays online. Traditional verification methods mainly include comparison of the protection range and comparison of the calculated setting value. For on-line verification, verification speed is the key. Comparing the protection range gives accurate results, but the computational burden is heavy and the verification is slow; comparing the calculated setting value is much faster, but the result is conservative and inaccurate. Taking overcurrent protection as an example, this paper analyses the advantages and disadvantages of the two traditional methods and proposes a hybrid method of on-line verification that combines the advantages of both. This hybrid method can meet the requirements of accurate on-line verification.

  1. SU-D-BRC-03: Development and Validation of an Online 2D Dose Verification System for Daily Patient Plan Delivery Accuracy Check

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, J; Hu, W; Xing, Y

    Purpose: All plan verification systems for particle therapy are designed to perform plan verification before treatment. However, the actual dose distributions delivered during patient treatment are not known. This study develops an online 2D dose verification tool to check daily dose delivery accuracy. Methods: A Siemens particle treatment system with a modulated scanning spot beam is used in our center. To perform online dose verification, we wrote a program that reconstructs the delivered 2D dose distributions from the daily treatment log files and depth dose distributions. From the log files we obtain the focus size, position, and particle number for each spot. A gamma analysis is used to compare the reconstructed dose distributions with the dose distributions from the TPS to assess daily dose delivery accuracy. To verify the dose reconstruction algorithm, we compared the reconstructed dose distributions to dose distributions measured with a PTW 729XDR ion chamber matrix for 13 real patient plans. We then analyzed 100 treatment beams (58 carbon and 42 proton) for prostate, lung, ACC, NPC, and chordoma patients. Results: For algorithm verification, the gamma passing rate was 97.95% for the 3%/3mm and 92.36% for the 2%/2mm criteria. For the patient treatment analysis, the results were 97.7%±1.1% and 91.7%±2.5% for carbon and 89.9%±4.8% and 79.7%±7.7% for proton beams using the 3%/3mm and 2%/2mm criteria, respectively. The reason for the lower passing rate for the proton beam is that the focus size deviations were larger than for the carbon beam. The average focus size deviations were −14.27% and −6.73% for proton and −5.26% and −0.93% for carbon in the x and y directions, respectively. Conclusion: The verification software meets our requirements for checking daily dose delivery discrepancies. Such tools can enhance the current treatment plan and delivery verification processes and improve the safety of clinical treatments.
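    As an illustration of the gamma analysis described above, the following minimal Python sketch computes a global gamma passing rate between a planned and a reconstructed 2D dose grid. It is a brute-force stand-in for illustration only, not the software used in the abstract; the grid spacing, criteria, and low-dose cutoff are assumptions.

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm, dose_crit=0.03, dta_mm=3.0, cutoff=0.10):
    """Global gamma passing rate (%) between two 2D dose grids of identical shape.

    ref, evl   : planned and reconstructed dose grids (same shape, same units)
    spacing_mm : isotropic pixel spacing in mm (assumed)
    dose_crit  : dose-difference criterion as a fraction of the reference maximum
    dta_mm     : distance-to-agreement criterion in mm
    cutoff     : pixels below this fraction of the reference maximum are ignored
    """
    dmax = ref.max()
    dd = dose_crit * dmax                            # absolute global dose tolerance
    search = int(np.ceil(2 * dta_mm / spacing_mm))   # limit the DTA search radius
    ny, nx = ref.shape
    gammas = []
    for j in range(ny):
        for i in range(nx):
            if ref[j, i] < cutoff * dmax:
                continue
            best = np.inf
            for dj in range(-search, search + 1):
                for di in range(-search, search + 1):
                    jj, ii = j + dj, i + di
                    if 0 <= jj < ny and 0 <= ii < nx:
                        r2 = ((dj * spacing_mm) ** 2 + (di * spacing_mm) ** 2) / dta_mm ** 2
                        d2 = ((evl[jj, ii] - ref[j, i]) / dd) ** 2
                        best = min(best, r2 + d2)
            gammas.append(np.sqrt(best))
    gammas = np.asarray(gammas)
    return 100.0 * np.mean(gammas <= 1.0)
```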

  2. PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Frederick, J. M.

    2016-12-01

    In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and code verification checks whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems that cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process, with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent, and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describe four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of the benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References: Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
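    A minimal sketch of the kind of accuracy assessment such a QA suite performs: compare a simulated field against a closed-form solution using a relative L2 error norm and estimate the observed order of convergence from two grid levels. This is a generic illustration, not PFLOTRAN's actual test harness; the example fields are placeholders.

```python
import numpy as np

def relative_l2_error(numerical, analytical):
    """Relative L2 error norm between a simulated field and its closed-form solution."""
    num, ana = np.asarray(numerical, float), np.asarray(analytical, float)
    return np.linalg.norm(num - ana) / np.linalg.norm(ana)

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed order of accuracy from errors on two grids refined by 'refinement'."""
    return np.log(err_coarse / err_fine) / np.log(refinement)

# Placeholder example: analytical profile T(x) = x with a slightly perturbed "simulation".
x = np.linspace(0.0, 1.0, 101)
analytic = x
simulated = x + 1e-4 * np.sin(np.pi * x)
print(f"relative L2 error: {relative_l2_error(simulated, analytic):.2e}")
```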

  3. Verification of a Viscous Computational Aeroacoustics Code using External Verification Analysis

    NASA Technical Reports Server (NTRS)

    Ingraham, Daniel; Hixon, Ray

    2015-01-01

    The External Verification Analysis approach to code verification is extended to solve the three-dimensional Navier-Stokes equations with constant properties, and is used to verify a high-order computational aeroacoustics (CAA) code. After a brief review of the relevant literature, the details of the EVA approach are presented and compared to the similar Method of Manufactured Solutions (MMS). Pseudocode representations of EVA's algorithms are included, along with the recurrence relations needed to construct the EVA solution. The code verification results show that EVA was able to convincingly verify a high-order, viscous CAA code without the addition of MMS-style source terms, or any other modifications to the code.

  4. Verification of a Viscous Computational Aeroacoustics Code Using External Verification Analysis

    NASA Technical Reports Server (NTRS)

    Ingraham, Daniel; Hixon, Ray

    2015-01-01

    The External Verification Analysis approach to code verification is extended to solve the three-dimensional Navier-Stokes equations with constant properties, and is used to verify a high-order computational aeroacoustics (CAA) code. After a brief review of the relevant literature, the details of the EVA approach are presented and compared to the similar Method of Manufactured Solutions (MMS). Pseudocode representations of EVA's algorithms are included, along with the recurrence relations needed to construct the EVA solution. The code verification results show that EVA was able to convincingly verify a high-order, viscous CAA code without the addition of MMS-style source terms, or any other modifications to the code.
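    For contrast with the EVA approach, the sketch below illustrates the Method of Manufactured Solutions mentioned in the abstract, applied to a 1D Poisson problem: a solution is chosen, the matching source term is derived symbolically, and the discretization error is checked under grid refinement. This is a generic MMS illustration under those assumptions, not the CAA code or the EVA procedure itself.

```python
import numpy as np
import sympy as sp

# Manufactured solution for the 1D Poisson problem -u''(x) = f(x), u(0) = u(1) = 0.
x = sp.symbols("x")
u_m = sp.sin(sp.pi * x)                   # chosen (manufactured) solution
f_m = sp.simplify(-sp.diff(u_m, x, 2))    # source term that makes u_m the exact solution
f = sp.lambdify(x, f_m, "numpy")
u_exact = sp.lambdify(x, u_m, "numpy")

def solve_poisson(n):
    """Second-order finite-difference solve of -u'' = f on n interior points."""
    h = 1.0 / (n + 1)
    xs = np.linspace(h, 1.0 - h, n)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return xs, np.linalg.solve(A, f(xs))

for n in (20, 40, 80):
    xs, u = solve_poisson(n)
    err = np.max(np.abs(u - u_exact(xs)))
    print(f"n={n:3d}  max error = {err:.3e}")   # should shrink roughly 4x per refinement
```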

  5. Geometrical verification system using Adobe Photoshop in radiotherapy.

    PubMed

    Ishiyama, Hiromichi; Suzuki, Koji; Niino, Keiji; Hosoya, Takaaki; Hayakawa, Kazushige

    2005-02-01

    Adobe Photoshop is used worldwide and is useful for comparing portal films with simulation films. It is possible to scan images and then view them simultaneously with this software. The purpose of this study was to assess the accuracy of a geometrical verification system using Adobe Photoshop. We prepared the following two conditions for verification. Under one condition, films were hung on light boxes, and examiners measured the distances between the isocenter on simulation films and that on portal films by aligning the bony structures. Under the other condition, films were scanned into a computer and displayed using Adobe Photoshop, and examiners measured the distances between the isocenter on simulation films and that on portal films by aligning the bony structures. To obtain control data, lead balls were used as fiducial points for matching the films accurately. The errors, defined as the differences between the control data and the measurement data, were assessed. Errors of the data obtained using Adobe Photoshop were significantly smaller than those of the data obtained from films on light boxes (p < 0.007). The geometrical verification system using Adobe Photoshop is available on any PC with this software and is useful for improving the accuracy of verification.

  6. Formal verification of an oral messages algorithm for interactive consistency

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1992-01-01

    The formal specification and verification of an algorithm for Interactive Consistency based on the Oral Messages algorithm for Byzantine Agreement is described. We compare our treatment with that of Bevier and Young, who presented a formal specification and verification for a very similar algorithm. Unlike Bevier and Young, who observed that 'the invariant maintained in the recursive subcases of the algorithm is significantly more complicated than is suggested by the published proof' and who found its formal verification 'a fairly difficult exercise in mechanical theorem proving,' our treatment is very close to the previously published analysis of the algorithm, and our formal specification and verification are straightforward. This example illustrates how delicate choices in the formulation of the problem can have significant impact on the readability of its formal specification and on the tractability of its formal verification.
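    For readers unfamiliar with the underlying algorithm, the following Python sketch implements the recursive Oral Messages algorithm OM(m) of Lamport, Shostak, and Pease with a simple value-flipping fault model. It is an illustration of the classic algorithm only, not Rushby's formal specification nor the Interactive Consistency variant verified in the paper, and the fault model and node names are assumptions.

```python
from collections import Counter

def majority(values):
    """Majority value among received values; ties fall back to a fixed default."""
    if not values:
        return "RETREAT"
    value, count = Counter(values).most_common(1)[0]
    return value if count > len(values) / 2 else "RETREAT"

def om(m, commander, lieutenants, value, traitors):
    """Oral Messages algorithm OM(m); returns each lieutenant's decided value.

    'traitors' is a set of faulty node ids; in this simple fault model a traitor
    flips any value it relays (a real traitor may behave arbitrarily).
    """
    def sent(sender, v):
        return ("RETREAT" if v == "ATTACK" else "ATTACK") if sender in traitors else v

    received = {lt: sent(commander, value) for lt in lieutenants}
    if m == 0:
        return received

    # Each lieutenant acts as commander in OM(m-1) with the value it received.
    relayed = {lt: om(m - 1, lt, [o for o in lieutenants if o != lt],
                      received[lt], traitors)
               for lt in lieutenants}
    return {lt: majority([received[lt]] +
                         [relayed[other][lt] for other in lieutenants if other != lt])
            for lt in lieutenants}

# 4 generals (1 commander + 3 lieutenants) tolerate 1 traitor, so OM(1) suffices.
print(om(1, "C", ["L1", "L2", "L3"], "ATTACK", traitors={"L2"}))
```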

  7. Verification of Ganoderma (lingzhi) commercial products by Fourier Transform infrared spectroscopy and two-dimensional IR correlation spectroscopy

    NASA Astrophysics Data System (ADS)

    Choong, Yew-Keong; Sun, Su-Qin; Zhou, Qun; Lan, Jin; Lee, Han-Lim; Chen, Xiang-Dong

    2014-07-01

    Ganoderma commercial products are typically based on two sources, raw material (powder form and/or spores) and extract (water and/or solvent). This study compared three types of Ganoderma commercial products using one-dimensional Fourier transform infrared (FTIR) and second-derivative spectroscopy. The analyzed spectra of the Ganoderma raw-material products were compared with spectra of cultivated Ganoderma raw-material powder from different mushroom farms in Malaysia. The Ganoderma extract product was also compared with three types of cultivated Ganoderma extracts. Other medicinal contents of the commercial extract product, including glucan and triterpenoid, were analyzed using FTIR and 2DIR. The results showed that the water extract of cultivated Ganoderma possessed spectra comparable with those of the Ganoderma product water extract. By comparing the contents of Ganoderma commercial products using FTIR and 2DIR, product content profiles could be detected. In addition, the geographical origin of the Ganoderma products could be verified by comparing their spectra with those of Ganoderma products from known areas. This study demonstrated the possibility of developing a verification tool to validate the purity of commercial medicinal herbal and mushroom products.

  8. Extremely accurate sequential verification of RELAP5-3D

    DOE PAGES

    Mesina, George L.; Aumiller, David L.; Buschman, Francis X.

    2015-11-19

    Large computer programs like RELAP5-3D solve complex systems of governing, closure, and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations, and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, running multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
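    A minimal sketch of the sequential-verification idea described above: record double-precision sums of key solution variables in a "verification file" and compare them between consecutive code versions. The variable names, JSON format, and tolerance handling here are hypothetical; the actual RELAP5-3D verification file format is not described in the abstract.

```python
import json

# Hypothetical variable names; stand-ins for the solution arrays a real run would sum.
KEY_VARIABLES = ["pressure", "void_fraction", "liquid_temp", "vapor_temp"]

def write_verification_file(state, path):
    """Record double-precision sums of key solution arrays at the end of a run."""
    sums = {name: float(sum(state[name])) for name in KEY_VARIABLES}
    with open(path, "w") as f:
        json.dump(sums, f, indent=2)

def compare_versions(path_old, path_new, tol=0.0):
    """Sequential verification: flag any change between consecutive code versions.

    tol=0.0 demands identical sums; a small relative tolerance would allow
    benign differences such as summation reordering.
    """
    with open(path_old) as f_old, open(path_new) as f_new:
        old, new = json.load(f_old), json.load(f_new)
    return {k: (old[k], new[k]) for k in old
            if abs(old[k] - new[k]) > tol * max(abs(old[k]), 1.0)}
    # An empty dict means no unintended change was detected.
```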

  9. Measurement of a True V̇O2max during a Ramp Incremental Test Is Not Confirmed by a Verification Phase.

    PubMed

    Murias, Juan M; Pogliaghi, Silvia; Paterson, Donald H

    2018-01-01

    The accuracy of an exhaustive ramp incremental (RI) test to determine maximal oxygen uptake (V̇O2max) was recently questioned and the utilization of a verification phase proposed as a gold standard. This study compared the oxygen uptake (V̇O2) during a RI test to that obtained during a verification phase aimed to confirm attainment of V̇O2max. Sixty-one healthy males [31 older (O), 65 ± 5 yrs; 30 younger (Y), 25 ± 4 yrs] performed a RI test (15-20 W/min for O and 25 W/min for Y). At the end of the RI test, a 5-min recovery period was followed by a verification phase of constant-load cycling to fatigue at either 85% (n = 16) or 105% (n = 45) of the peak power output obtained from the RI test. The highest V̇O2 after the RI test (39.8 ± 11.5 mL·kg⁻¹·min⁻¹) and the verification phase (40.1 ± 11.2 mL·kg⁻¹·min⁻¹) were not different (p = 0.33) and they were highly correlated (r = 0.99; p < 0.01). This response was not affected by age or intensity of the verification phase. The Bland-Altman analysis revealed a very small absolute bias (−0.25 mL·kg⁻¹·min⁻¹, not different from 0) and a precision of ±1.56 mL·kg⁻¹·min⁻¹ between measures. This study indicated that a verification phase does not highlight an under-estimation of V̇O2max derived from a RI test, in a large and heterogeneous group of healthy younger and older men naïve to laboratory testing procedures. Moreover, only minor within-individual differences were observed between the maximal V̇O2 elicited during the RI test and the verification phase. Thus a verification phase does not add any validation of the determination of V̇O2max. Therefore, the recommendation that a verification phase should become a gold-standard procedure, although initially appealing, is not supported by the experimental data.
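    The Bland-Altman comparison used in this study can be reproduced in a few lines of Python; the sketch below computes the bias and 95% limits of agreement between ramp-test and verification-phase V̇O2max values. It assumes the inputs are paired per-subject measurements in the same units.

```python
import numpy as np

def bland_altman(ramp_vo2, verification_vo2):
    """Bland-Altman bias and 95% limits of agreement between two paired measures
    (e.g., ramp-incremental vs. verification-phase V̇O2max, mL·kg^-1·min^-1)."""
    ramp = np.asarray(ramp_vo2, dtype=float)
    verif = np.asarray(verification_vo2, dtype=float)
    diff = verif - ramp
    bias = diff.mean()                    # mean difference between the two measures
    sd = diff.std(ddof=1)                 # SD of the differences (precision)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return bias, sd, loa

# usage: bias, sd, (lower, upper) = bland_altman(ri_values, verification_values)
```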

  10. A comparative verification of high resolution precipitation forecasts using model output statistics

    NASA Astrophysics Data System (ADS)

    van der Plas, Emiel; Schmeits, Maurice; Hooijman, Nicolien; Kok, Kees

    2017-04-01

    Verification of localized events such as precipitation has become even more challenging with the advent of high-resolution meso-scale numerical weather prediction (NWP). The realism of a forecast suggests that it should compare well against precipitation radar imagery with similar resolution, both spatially and temporally. Spatial verification methods solve some of the representativity issues that point verification gives rise to. In this study a verification strategy based on model output statistics is applied that aims to address both double penalty and resolution effects that are inherent to comparisons of NWP models with different resolutions. Using predictors based on spatial precipitation patterns around a set of stations, an extended logistic regression (ELR) equation is deduced, leading to a probability forecast distribution of precipitation for each NWP model, analysis and lead time. The ELR equations are derived for predictands based on areal calibrated radar precipitation and SYNOP observations. The aim is to extract maximum information from a series of precipitation forecasts, like a trained forecaster would. The method is applied to the non-hydrostatic model Harmonie (2.5 km resolution), Hirlam (11 km resolution) and the ECMWF model (16 km resolution), overall yielding similar Brier skill scores for the 3 post-processed models, but larger differences for individual lead times. Besides, the Fractions Skill Score is computed using the 3 deterministic forecasts, showing somewhat better skill for the Harmonie model. In other words, despite the realism of Harmonie precipitation forecasts, they only perform similarly or somewhat better than precipitation forecasts from the 2 lower resolution models, at least in the Netherlands.
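    A minimal sketch of the Brier skill score used to compare the post-processed models, assuming binary event observations (e.g., precipitation exceeding a threshold) and the climatological sample frequency as the reference forecast; it illustrates the score only, not the ELR post-processing itself.

```python
import numpy as np

def brier_score(prob_forecast, observed):
    """Brier score for probability forecasts of a binary event."""
    p = np.asarray(prob_forecast, dtype=float)
    o = np.asarray(observed, dtype=float)   # 1 if the event occurred, else 0
    return np.mean((p - o) ** 2)

def brier_skill_score(prob_forecast, observed):
    """Skill relative to the climatological (sample-frequency) reference forecast."""
    o = np.asarray(observed, dtype=float)
    bs = brier_score(prob_forecast, o)
    bs_ref = brier_score(np.full_like(o, o.mean()), o)
    return 1.0 - bs / bs_ref   # 1 = perfect, 0 = no skill over climatology
```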

  11. Formal methods for dependable real-time systems

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1993-01-01

    The motivation for using formal methods to specify and reason about real-time properties is outlined, and approaches that have been proposed and used are sketched. The formal verifications of clock synchronization algorithms are taken as showing that mechanically supported reasoning about complex real-time behavior is feasible. However, there has been a significant increase in the effectiveness of verification systems since those verifications were performed, and it is to be expected that verifications of comparable difficulty will become fairly routine. The current challenge lies in developing perspicuous and economical approaches to the formalization and specification of real-time properties.

  12. Verification and quality control of routine hematology analyzers.

    PubMed

    Vis, J Y; Huisman, A

    2016-05-01

    Verification of hematology analyzers (automated blood cell counters) is mandatory before new hematology analyzers may be used in routine clinical care. The verification process consists of several items, comprising, among others, precision, accuracy, comparability, carryover, background, and linearity throughout the expected range of results. Yet which standard should be met, or which verification limit should be used, is at the discretion of the laboratory specialist. This paper offers practical guidance on verification and quality control of automated hematology analyzers and provides an expert opinion on the performance standard that should be met by the contemporary generation of hematology analyzers. Therefore, (i) the state-of-the-art performance of hematology analyzers for complete blood count parameters is summarized, (ii) considerations, challenges, and pitfalls concerning the development of a verification plan are discussed, (iii) guidance is given regarding the establishment of reference intervals, and (iv) different methods of quality control of hematology analyzers are reviewed. © 2016 John Wiley & Sons Ltd.

  13. Statistical Design for Biospecimen Cohort Size in Proteomics-based Biomarker Discovery and Verification Studies

    PubMed Central

    Skates, Steven J.; Gillette, Michael A.; LaBaer, Joshua; Carr, Steven A.; Anderson, N. Leigh; Liebler, Daniel C.; Ransohoff, David; Rifai, Nader; Kondratovich, Marina; Težak, Živana; Mansfield, Elizabeth; Oberg, Ann L.; Wright, Ian; Barnes, Grady; Gail, Mitchell; Mesri, Mehdi; Kinsinger, Christopher R.; Rodriguez, Henry; Boja, Emily S.

    2014-01-01

    Protein biomarkers are needed to deepen our understanding of cancer biology and to improve our ability to diagnose, monitor and treat cancers. Important analytical and clinical hurdles must be overcome to allow the most promising protein biomarker candidates to advance into clinical validation studies. Although contemporary proteomics technologies support the measurement of large numbers of proteins in individual clinical specimens, sample throughput remains comparatively low. This problem is amplified in typical clinical proteomics research studies, which routinely suffer from a lack of proper experimental design, resulting in analysis of too few biospecimens to achieve adequate statistical power at each stage of a biomarker pipeline. To address this critical shortcoming, a joint workshop was held by the National Cancer Institute (NCI), National Heart, Lung and Blood Institute (NHLBI), and American Association for Clinical Chemistry (AACC), with participation from the U.S. Food and Drug Administration (FDA). An important output from the workshop was a statistical framework for the design of biomarker discovery and verification studies. Herein, we describe the use of quantitative clinical judgments to set statistical criteria for clinical relevance, and the development of an approach to calculate biospecimen sample size for proteomic studies in discovery and verification stages prior to clinical validation stage. This represents a first step towards building a consensus on quantitative criteria for statistical design of proteomics biomarker discovery and verification research. PMID:24063748

  14. Statistical design for biospecimen cohort size in proteomics-based biomarker discovery and verification studies.

    PubMed

    Skates, Steven J; Gillette, Michael A; LaBaer, Joshua; Carr, Steven A; Anderson, Leigh; Liebler, Daniel C; Ransohoff, David; Rifai, Nader; Kondratovich, Marina; Težak, Živana; Mansfield, Elizabeth; Oberg, Ann L; Wright, Ian; Barnes, Grady; Gail, Mitchell; Mesri, Mehdi; Kinsinger, Christopher R; Rodriguez, Henry; Boja, Emily S

    2013-12-06

    Protein biomarkers are needed to deepen our understanding of cancer biology and to improve our ability to diagnose, monitor, and treat cancers. Important analytical and clinical hurdles must be overcome to allow the most promising protein biomarker candidates to advance into clinical validation studies. Although contemporary proteomics technologies support the measurement of large numbers of proteins in individual clinical specimens, sample throughput remains comparatively low. This problem is amplified in typical clinical proteomics research studies, which routinely suffer from a lack of proper experimental design, resulting in analysis of too few biospecimens to achieve adequate statistical power at each stage of a biomarker pipeline. To address this critical shortcoming, a joint workshop was held by the National Cancer Institute (NCI), National Heart, Lung, and Blood Institute (NHLBI), and American Association for Clinical Chemistry (AACC) with participation from the U.S. Food and Drug Administration (FDA). An important output from the workshop was a statistical framework for the design of biomarker discovery and verification studies. Herein, we describe the use of quantitative clinical judgments to set statistical criteria for clinical relevance and the development of an approach to calculate biospecimen sample size for proteomic studies in discovery and verification stages prior to clinical validation stage. This represents a first step toward building a consensus on quantitative criteria for statistical design of proteomics biomarker discovery and verification research.
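    The abstract does not give the workshop's formulas; as a generic illustration of the kind of calculation involved, the sketch below uses a textbook two-sample z-approximation to estimate the per-group number of biospecimens needed to detect a clinically relevant standardized difference in a biomarker. It is not the framework developed at the workshop, and the effect size, alpha, and power are assumptions the investigator must justify.

```python
import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.90, two_sided=True):
    """Generic per-group sample size for comparing biomarker means between cases
    and controls (two-sample z approximation, equal group sizes).

    effect_size : standardized mean difference (Cohen's d) judged clinically relevant
    """
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# e.g. detecting a half-standard-deviation shift at alpha = 0.05 with 90% power
print(n_per_group(0.5))   # ~85 biospecimens per group
```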

  15. Feasibility Study on Applying Radiophotoluminescent Glass Dosimeters for CyberKnife SRS Dose Verification

    PubMed Central

    Hsu, Shih-Ming; Hung, Chao-Hsiung; Liao, Yi-Jen; Fu, Hsiao-Mei; Tsai, Jo-Ting

    2017-01-01

    CyberKnife is one of multiple modalities for stereotactic radiosurgery (SRS). Due to the nature of CyberKnife and the characteristics of SRS, dose evaluation of the CyberKnife procedure is critical. A radiophotoluminescent glass dosimeter was used to verify the dose accuracy of the CyberKnife procedure and to validate a viable dose verification system for CyberKnife treatment. A radiophotoluminescent glass dosimeter, a thermoluminescent dosimeter, and Kodak EDR2 film were used to measure the lateral dose profile and percent depth dose of CyberKnife. A Monte Carlo simulation for dose verification was performed using BEAMnrc to verify the measured results. This study also used a radiophotoluminescent glass dosimeter coupled with an anthropomorphic phantom to evaluate the accuracy of the dose delivered by CyberKnife. Measurements from the radiophotoluminescent glass dosimeter were compared with the results of the thermoluminescent dosimeter and EDR2 film, and the differences found were less than 5%. The radiophotoluminescent glass dosimeter has some advantages for CyberKnife dose measurements, such as repeatability, stability, and small effective size. These advantages make radiophotoluminescent glass dosimeters a potential candidate dosimeter for the CyberKnife procedure. This study concludes that radiophotoluminescent glass dosimeters are a promising and reliable dosimeter for CyberKnife dose verification, with clinically acceptable accuracy within 5%. PMID:28046056

  16. Polymer gel dosimeters for pretreatment radiotherapy verification using the three-dimensional gamma evaluation and pass rate maps.

    PubMed

    Hsieh, Ling-Ling; Shieh, Jiunn-I; Wei, Li-Ju; Wang, Yi-Chun; Cheng, Kai-Yuan; Shih, Cheng-Ting

    2017-05-01

    Polymer gel dosimeters (PGDs) have been widely studied for use in the pretreatment verification of clinical radiation therapy. However, the readability of PGDs in three-dimensional (3D) dosimetry remains unclear. In this study, the pretreatment verification of clinical radiation therapy was performed using an N-isopropyl-acrylamide (NIPAM) PGD, and the results were used to evaluate the performance of the NIPAM PGD for 3D dose measurement. A gel phantom was used to measure the dose distribution of a clinical case of intensity-modulated radiation therapy. Magnetic resonance imaging scans were performed for dose readouts. The measured dose volumes were compared with the planned dose volume. The relative volume histograms showed that relative volumes with a negative percent dose difference decreased as time elapsed. Furthermore, the histograms revealed few changes after 24 h post-irradiation. For the 3%/3mm and 2%/2mm criteria, the pass rates of the 12- and 24-h dose volumes were higher than 95%, respectively. This study thus concludes that the pass rate map can be used to evaluate the dose-temporal readability of PGDs and that the NIPAM PGD can be used for clinical pretreatment verifications. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  17. Dosimetric changes with computed tomography automatic tube-current modulation techniques.

    PubMed

    Spampinato, Sofia; Gueli, Anna Maria; Milone, Pietro; Raffaele, Luigi Angelo

    2018-04-06

    The study is aimed at a verification of dose changes for a computed tomography automatic tube-current modulation (ATCM) technique. For this purpose, anthropomorphic phantom and Gafchromic ® XR-QA2 films were used. Radiochromic films were cut according to the shape of two thorax regions. The ATCM algorithm is based on noise index (NI) and three exam protocols with different NI were chosen, of which one was a reference. Results were compared with dose values displayed by the console and with Poisson statistics. The information obtained with radiochromic films has been normalized with respect to the NI reference value to compare dose percentage variations. Results showed that, on average, the information reported by the CT console and calculated values coincide with measurements. The study allowed verification of the dose information reported by the CT console for an ATCM technique. Although this evaluation represents an estimate, the method can be a starting point for further studies.

  18. Simulation verification techniques study

    NASA Technical Reports Server (NTRS)

    Schoonmaker, P. B.; Wenglinski, T. H.

    1975-01-01

    Results are summarized of the simulation verification techniques study which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.

  19. Study of techniques for redundancy verification without disrupting systems, phases 1-3

    NASA Technical Reports Server (NTRS)

    1970-01-01

    The problem of verifying the operational integrity of redundant equipment and the impact of a requirement for verification on such equipment are considered. Redundant circuits are examined and the characteristics which determine adaptability to verification are identified. Mutually exclusive and exhaustive categories for verification approaches are established. The range of applicability of these techniques is defined in terms of signal characteristics and redundancy features. Verification approaches are discussed and a methodology for the design of redundancy verification is developed. A case study is presented which involves the design of a verification system for a hypothetical communications system. Design criteria for redundant equipment are presented. Recommendations for the development of technological areas pertinent to the goal of increased verification capabilities are given.

  20. An Efficient Location Verification Scheme for Static Wireless Sensor Networks.

    PubMed

    Kim, In-Hwan; Kim, Bo-Sung; Song, JooSeok

    2017-01-24

    In wireless sensor networks (WSNs), the accuracy of location information is vital to support many interesting applications. Unfortunately, sensors have difficulty in estimating their location when malicious sensors attack the location estimation process. Even though secure localization schemes have been proposed to protect the location estimation process from attacks, they are not enough to eliminate wrong location estimations in some situations. Location verification can be the solution to these situations or serve as a second line of defense. The problem with most location verification schemes is the explicit involvement of many sensors in the verification process and requirements such as special hardware, a dedicated verifier, and a trusted third party, which cause more communication and computation overhead. In this paper, we propose an efficient location verification scheme for static WSNs called mutually-shared region-based location verification (MSRLV), which reduces those overheads by utilizing the implicit involvement of sensors and eliminating several requirements. In order to achieve this, we use the mutually-shared region between the location claimant and the verifier for location verification. The analysis shows that MSRLV reduces communication overhead by 77% and computation overhead by 92% on average, compared with other location verification schemes, in single-sensor verification. In addition, simulation results for verification of the whole network show that MSRLV can detect malicious sensors over 90% of the time when sensors in the network have five or more neighbors.

  1. An Efficient Location Verification Scheme for Static Wireless Sensor Networks

    PubMed Central

    Kim, In-hwan; Kim, Bo-sung; Song, JooSeok

    2017-01-01

    In wireless sensor networks (WSNs), the accuracy of location information is vital to support many interesting applications. Unfortunately, sensors have difficulty in estimating their location when malicious sensors attack the location estimation process. Even though secure localization schemes have been proposed to protect the location estimation process from attacks, they are not enough to eliminate wrong location estimations in some situations. Location verification can be the solution to these situations or serve as a second line of defense. The problem with most location verification schemes is the explicit involvement of many sensors in the verification process and requirements such as special hardware, a dedicated verifier, and a trusted third party, which cause more communication and computation overhead. In this paper, we propose an efficient location verification scheme for static WSNs called mutually-shared region-based location verification (MSRLV), which reduces those overheads by utilizing the implicit involvement of sensors and eliminating several requirements. In order to achieve this, we use the mutually-shared region between the location claimant and the verifier for location verification. The analysis shows that MSRLV reduces communication overhead by 77% and computation overhead by 92% on average, compared with other location verification schemes, in single-sensor verification. In addition, simulation results for verification of the whole network show that MSRLV can detect malicious sensors over 90% of the time when sensors in the network have five or more neighbors. PMID:28125007

  2. Audio-visual imposture

    NASA Astrophysics Data System (ADS)

    Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard

    2006-05-01

    A GMM-based audio-visual speaker verification system is described, and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing are performed on DCT-based features extracted from the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM-based classifier. Fusion of the audio and video modalities for audio-visual speaker verification is compared with the face verification and speaker verification systems alone. To improve the robustness of the multimodal biometric identity verification system, an audio-visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM-based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with a prospect of experimenting on the newly developed PDAtabase created within the scope of the SecurePhone project.
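    A minimal sketch of GMM-based verification scoring in the spirit of the system described above: a client model and a universal background model (UBM) are trained on feature frames, and a claim is scored by the average log-likelihood ratio. scikit-learn's GaussianMixture stands in for BECARS, and the features, model sizes, and decision threshold are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmm(features, n_components=16, seed=0):
    """features: (n_frames, n_dims) array of e.g. DCT or cepstral coefficients."""
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag",
                           random_state=seed).fit(features)

def verification_score(test_features, client_gmm, ubm):
    """Average per-frame log-likelihood ratio; accept the claim if above a tuned threshold."""
    return client_gmm.score(test_features) - ubm.score(test_features)

# Usage sketch with random stand-in features (real systems use enrollment/test recordings).
rng = np.random.default_rng(0)
client = train_gmm(rng.normal(0.0, 1.0, (500, 20)))
ubm = train_gmm(rng.normal(0.5, 1.5, (2000, 20)))
test = rng.normal(0.0, 1.0, (200, 20))
accept = verification_score(test, client, ubm) > 0.0   # threshold value is an assumption
print(accept)
```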

  3. Shop floor compliance with age restrictions for tobacco sales: remote versus in-store age verification.

    PubMed

    van Hoof, Joris J; Gosselt, Jordy F; de Jong, Menno D T

    2010-02-01

    To compare traditional in-store age verification with a newly developed remote age verification system, 100 cigarette purchase attempts were made by 15-year-old "mystery shoppers." The remote system led to a strong increase in compliance (96% vs. 12%), reflecting more identification requests and more sale refusals when adolescents showed their identification cards. Copyright 2010 Society for Adolescent Medicine. Published by Elsevier Inc. All rights reserved.

  4. Full-chip level MEEF analysis using model based lithography verification

    NASA Astrophysics Data System (ADS)

    Kim, Juhwan; Wang, Lantian; Zhang, Daniel; Tang, Zongwu

    2005-11-01

    MEEF (Mask Error Enhancement Factor) has become a critical factor in CD uniformity control since the optical lithography process moved into the sub-resolution era. Many studies have quantified the impact of mask CD (Critical Dimension) errors on wafer CD errors [1-2]. However, the benefits from those studies were restricted to small pattern areas of the full-chip data due to long simulation times. As fast turnaround time can be achieved for complicated verifications on very large data sets by linearly scalable distributed processing technology, model-based lithography verification becomes feasible for various types of applications, such as post-mask-synthesis data sign-off for mask tape-out in production and lithography process development with full-chip data [3-5]. In this study, we introduce two useful methodologies for full-chip-level verification of the mask error impact on the wafer lithography patterning process. One methodology is to check the MEEF distribution in addition to the CD distribution through the process window, which can be used for RET/OPC optimization at the R&D stage. The other is to check mask error sensitivity on potential pinch and bridge hotspots through lithography process variation, where the outputs can be passed on to mask CD metrology to add CD measurements at those hotspot locations. Two different OPC data sets were compared using the two methodologies in this study.
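    For reference, MEEF is commonly computed as the change in wafer CD per unit change in mask CD, with the mask error expressed at wafer scale; a small sketch follows, assuming a 4x reduction system (not taken from the paper's methodology).

```python
def meef(delta_cd_wafer_nm, delta_cd_mask_nm, reduction=4.0):
    """Mask Error Enhancement Factor: wafer CD change per unit mask CD change,
    with the mask CD change expressed at wafer scale (mask dimension / reduction)."""
    return delta_cd_wafer_nm / (delta_cd_mask_nm / reduction)

# Example: a 4 nm mask CD error (mask scale, 4x system) producing a 2.5 nm wafer CD
# change corresponds to a MEEF of 2.5.
print(meef(2.5, 4.0))   # -> 2.5
```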

  5. Verification and intercomparison of mesoscale ensemble prediction systems in the Beijing 2008 Olympics Research and Development Project

    NASA Astrophysics Data System (ADS)

    Kunii, Masaru; Saito, Kazuo; Seko, Hiromu; Hara, Masahiro; Hara, Tabito; Yamaguchi, Munehiko; Gong, Jiandong; Charron, Martin; Du, Jun; Wang, Yong; Chen, Dehui

    2011-05-01

    During the period around the Beijing 2008 Olympic Games, the Beijing 2008 Olympics Research and Development Project (B08RDP) was conducted as part of the World Weather Research Program short-range weather forecasting research project. Mesoscale ensemble prediction (MEP) experiments were carried out by six organizations in near-real time, in order to share their experiences in the development of MEP systems. The purpose of this study is to objectively verify these experiments and to clarify the problems associated with the current MEP systems through these shared experiences. Verification was performed using the MEP outputs interpolated onto a common verification domain with a horizontal resolution of 15 km. For all systems, the ensemble spreads grew as the forecast time increased, and the ensemble mean reduced the forecast errors compared with the individual control forecasts in the verification against the analysis fields. However, each system exhibited individual characteristics according to the MEP method. Some participants used physical perturbation methods, and the significance of these methods was confirmed by the verification. However, the mean error (ME) of the ensemble forecast in some systems was worse than that of the individual control forecast. This result suggests that it is necessary to pay careful attention to physical perturbations.

  6. Verification of forecast ensembles in complex terrain including observation uncertainty

    NASA Astrophysics Data System (ADS)

    Dorninger, Manfred; Kloiber, Simon

    2017-04-01

    Traditionally, verification means comparing a forecast (ensemble) against the truth as represented by observations. The observation errors are quite often neglected, on the argument that they are small compared with the forecast error. In this study, part of the MesoVICT (Mesoscale Verification Inter-comparison over Complex Terrain) project, it will be shown that observation errors have to be taken into account for verification purposes. The observation uncertainty is estimated from the VERA (Vienna Enhanced Resolution Analysis) and represented via two analysis ensembles, which are compared to the forecast ensemble. Throughout the study, results from COSMO-LEPS provided by Arpae-SIMC Emilia-Romagna are used as the forecast ensemble. The time period covers the MesoVICT core case from 20-22 June 2007. In a first step, all ensembles are investigated concerning their distribution. Several tests have been executed (Kolmogorov-Smirnov test, Finkelstein-Schafer test, chi-square test, etc.), none of which identifies an exact mathematical distribution. The main focus is therefore on non-parametric statistics (e.g., kernel density estimation, boxplots) and on the deviation between "forced" normally distributed data and the kernel density estimates. In a next step, the observational deviations due to the analysis ensembles are analysed. In a first approach, scores are calculated multiple times, with each member of the analysis ensemble in turn regarded as the "true" observation. The results are presented as boxplots for the different scores and parameters. Additionally, the bootstrapping method is applied to the ensembles. These possible approaches to incorporating observational uncertainty into the computation of statistics will be discussed in the talk.

  7. A method for verification of treatment delivery in HDR prostate brachytherapy using a flat panel detector for both imaging and source tracking.

    PubMed

    Smith, Ryan L; Haworth, Annette; Panettieri, Vanessa; Millar, Jeremy L; Franich, Rick D

    2016-05-01

    Verification of high dose rate (HDR) brachytherapy treatment delivery is an important step, but is generally difficult to achieve. A technique is required to monitor the treatment as it is delivered, allowing comparison with the treatment plan and error detection. In this work, we demonstrate a method for monitoring the treatment as it is delivered and directly comparing the delivered treatment with the treatment plan in the clinical workspace. This treatment verification system is based on a flat panel detector (FPD) used for both pre-treatment imaging and source tracking. A phantom study was conducted to establish the resolution and precision of the system. A pretreatment radiograph of a phantom containing brachytherapy catheters is acquired and registration between the measurement and treatment planning system (TPS) is performed using implanted fiducial markers. The measured catheter paths immediately prior to treatment were then compared with the plan. During treatment delivery, the position of the (192)Ir source is determined at each dwell position by measuring the exit radiation with the FPD and directly compared to the planned source dwell positions. The registration between the two corresponding sets of fiducial markers in the TPS and radiograph yielded a registration error (residual) of 1.0 mm. The measured catheter paths agreed with the planned catheter paths on average to within 0.5 mm. The source positions measured with the FPD matched the planned source positions for all dwells on average within 0.6 mm (s.d. 0.3, min. 0.1, max. 1.4 mm). We have demonstrated a method for directly comparing the treatment plan with the delivered treatment that can be easily implemented in the clinical workspace. Pretreatment imaging was performed, enabling visualization of the implant before treatment delivery and identification of possible catheter displacement. Treatment delivery verification was performed by measuring the source position as each dwell was delivered. This approach using a FPD for imaging and source tracking provides a noninvasive method of acquiring extensive information for verification in HDR prostate brachytherapy.

  8. A method for verification of treatment delivery in HDR prostate brachytherapy using a flat panel detector for both imaging and source tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Ryan L., E-mail: ryan.smith@wbrc.org.au; Millar, Jeremy L.; Franich, Rick D.

    Purpose: Verification of high dose rate (HDR) brachytherapy treatment delivery is an important step, but is generally difficult to achieve. A technique is required to monitor the treatment as it is delivered, allowing comparison with the treatment plan and error detection. In this work, we demonstrate a method for monitoring the treatment as it is delivered and directly comparing the delivered treatment with the treatment plan in the clinical workspace. This treatment verification system is based on a flat panel detector (FPD) used for both pre-treatment imaging and source tracking. Methods: A phantom study was conducted to establish the resolution and precision of the system. A pretreatment radiograph of a phantom containing brachytherapy catheters is acquired and registration between the measurement and treatment planning system (TPS) is performed using implanted fiducial markers. The measured catheter paths immediately prior to treatment were then compared with the plan. During treatment delivery, the position of the ¹⁹²Ir source is determined at each dwell position by measuring the exit radiation with the FPD and directly compared to the planned source dwell positions. Results: The registration between the two corresponding sets of fiducial markers in the TPS and radiograph yielded a registration error (residual) of 1.0 mm. The measured catheter paths agreed with the planned catheter paths on average to within 0.5 mm. The source positions measured with the FPD matched the planned source positions for all dwells on average within 0.6 mm (s.d. 0.3, min. 0.1, max. 1.4 mm). Conclusions: We have demonstrated a method for directly comparing the treatment plan with the delivered treatment that can be easily implemented in the clinical workspace. Pretreatment imaging was performed, enabling visualization of the implant before treatment delivery and identification of possible catheter displacement. Treatment delivery verification was performed by measuring the source position as each dwell was delivered. This approach using a FPD for imaging and source tracking provides a noninvasive method of acquiring extensive information for verification in HDR prostate brachytherapy.

  9. Monitoring proton radiation therapy with in-room PET imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Xuping; España, Samuel; Daartz, Juliane; Liebsch, Norbert; Ouyang, Jinsong; Paganetti, Harald; Bortfeld, Thomas R.; El Fakhri, Georges

    2011-07-01

    We used a mobile positron emission tomography (PET) scanner positioned within the proton therapy treatment room to study the feasibility of proton range verification with an in-room, stand-alone PET system, and compared the results with off-line equivalent studies. Two subjects with adenoid cystic carcinoma were enrolled in a pilot study in which in-room PET scans were acquired in list mode after a routine fractionated treatment session. The list-mode PET data were reconstructed with different time schemes to generate in-room short, in-room long, and off-line equivalent (by skipping coincidences from the first 15 min during the list-mode reconstruction) PET images for comparison of activity distribution patterns. A phantom study followed to evaluate the accuracy of range verification for the different reconstruction time schemes quantitatively. The in-room PET has a higher sensitivity than the off-line modality, so the PET acquisition time can be greatly reduced from 30 to <5 min. Features in deep-seated, soft-tissue regions were better retained with in-room short PET acquisitions because of the collection of the 15O component and lower biological washout. For soft-tissue-equivalent material, the distal fall-off edge of an in-room short acquisition is deeper compared with an off-line equivalent scan, indicating better coverage of the high-dose end of the beam. In-room PET is a promising low-cost, high-sensitivity modality for the in vivo verification of proton therapy. Better accuracy in Monte Carlo predictions, especially for biological decay modeling, is necessary.

  10. SU-F-T-269: Preliminary Experience of Kuwait Cancer Control Center (KCCC) On IMRT Treatment Planning and Pre-Treatment Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sethuraman, TKR; Sherif, M; Subramanian, N

    Purpose: The complexity of IMRT delivery requires pre-treatment quality assurance and plan verification. KCCC has implemented IMRT clinically for a few treatment sites and will extend it to all sites. Recently, our Varian linear accelerator and Eclipse planning system were upgraded from the Millennium 80 to the 120-leaf Multileaf Collimator (MLC) and from v8.6 to v11.0, respectively. Our preliminary experience with pre-treatment quality assurance verification is discussed. Methods: Eight breast, three prostate, and one hypopharynx cancer patients were planned with step-and-shoot IMRT. All breast cases were planned before the upgrade, with 60% of the cases treated. The ICRU 83 recommendations were followed for the dose prescription and constraints to OARs for all cases. Point dose measurement was done with a CIRS cylindrical phantom and a PTW 0.125 cc ionization chamber. The measured dose was compared with the calculated dose at the point of measurement. A MapCHECK diode array phantom was used for plan verification. Planned and measured doses were compared by applying a gamma index of 3% (dose difference) / 3 mm DTA (distance to agreement). For all cases, a plan is considered successful if more than 95% of the tested diodes pass the gamma test. A prostate case was chosen to compare plan verification before and after the upgrade. Results: Point dose measurement results were in agreement with the calculated doses. The maximum deviation observed was 2.3%. The average gamma index passing rate was higher than 97% for plan verification of all cases. A similar result was observed for plan verification of the chosen prostate case before and after the upgrade. Conclusion: Our preliminary experience with the obtained results validates the accuracy of our QA process and provides confidence to extend IMRT to all sites in Kuwait.

  11. Quantification of the effectiveness of handheld equipment for ground verification of detected rail internal defects.

    DOT National Transportation Integrated Search

    2014-04-01

    The objective of this project was to quantify the effectiveness of the rail inspection ground verification process. More specifically, the project focused on comparing the effectiveness of conventional versus phased-array probes for manually detecting ...

  12. Commissioning and quality assurance of an integrated system for patient positioning and setup verification in particle therapy.

    PubMed

    Pella, A; Riboldi, M; Tagaste, B; Bianculli, D; Desplanques, M; Fontana, G; Cerveri, P; Seregni, M; Fattori, G; Orecchia, R; Baroni, G

    2014-08-01

    In an increasing number of clinical indications, radiotherapy with accelerated particles shows relevant advantages when compared with high-energy X-ray irradiation. However, due to the finite range of ions, particle therapy can be severely compromised by setup errors and geometric uncertainties. The purpose of this work is to describe the commissioning and the design of the quality assurance procedures for the patient positioning and setup verification systems at the Italian National Center for Oncological Hadrontherapy (CNAO). The accuracy of the systems installed at CNAO and devoted to patient positioning and setup verification has been assessed using a laser tracking device. The accuracy of calibration and image-based setup verification relying on the in-room X-ray imaging system was also quantified. Quality assurance tests to check the integration among all patient setup systems were designed, and records of daily QA tests since the start of clinical operation (2011) are presented. The overall accuracy of the patient positioning system and the patient verification system motion proved to be below 0.5 mm under all the examined conditions, with median values below the 0.3 mm threshold. Image-based registration in phantom studies exhibited sub-millimetric accuracy in setup verification at both cranial and extra-cranial sites. The calibration residuals of the optical tracking system (OTS) were found to be consistent with expectations, with peak values below 0.3 mm. Quality assurance tests, performed daily before clinical operation, confirm adequate integration and sub-millimetric setup accuracy. Robotic patient positioning was successfully integrated with optical tracking and stereoscopic X-ray verification for patient setup in particle therapy. Sub-millimetric setup accuracy was achieved and consistently verified in daily clinical operation.

  13. SU-F-T-268: A Feasibility Study of Independent Dose Verification for Vero4DRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamashita, M; Kokubo, M; Institute of Biomedical Research and Innovation, Kobe, Hyogo

    2016-06-15

    Purpose: Vero4DRT (Mitsubishi Heavy Industries Ltd.) has been released for a few years. The treatment planning system (TPS) of Vero4DRT is dedicated, so measurement has been the only method of dose verification. There have been no reports of independent dose verification using a Clarkson-based algorithm for Vero4DRT. An independent dose verification software program for general-purpose linacs using a modified Clarkson-based algorithm was adapted for Vero4DRT. In this study, we evaluated the accuracy of the independent dose verification program and the feasibility of the secondary check for Vero4DRT. Methods: iPlan (Brainlab AG) was used as the TPS. Pencil Beam Convolution was used as the dose calculation algorithm for IMRT and X-ray Voxel Monte Carlo was used for the others. Simple MU Analysis (SMU, Triangle Products, Japan) was used as the independent dose verification software program, in which CT-based dose calculation was performed using a modified Clarkson-based algorithm. In this study, 120 patients’ treatment plans were collected in our institute. The treatments were performed using conventional irradiation for lung and prostate, SBRT for lung, and step-and-shoot IMRT for prostate. Dose was compared between the TPS and the SMU, and confidence limits (CLs, Mean ± 2SD %) were compared to those from the general-purpose linac. Results: The CLs for conventional irradiation (lung, prostate), SBRT (lung) and IMRT (prostate) were 2.2 ± 3.5% (CL of the general-purpose linac: 2.4 ± 5.3%), 1.1 ± 1.7% (−0.3 ± 2.0%), 4.8 ± 3.7% (5.4 ± 5.3%) and −0.5 ± 2.5% (−0.1 ± 3.6%), respectively. The CLs for Vero4DRT show similar results to those for the general-purpose linac. Conclusion: Independent dose verification for the new linac is clinically available as a secondary check, and we performed the check with a tolerance level similar to that of the general-purpose linac. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).
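
    The confidence limits quoted above (Mean ± 2SD of the dose differences between the TPS and the independent calculation) can be reproduced with a short script. The sketch below uses hypothetical per-plan differences, not the study's data.

```python
import numpy as np

# Hypothetical per-plan dose differences (%) between the TPS and the
# independent Clarkson-based calculation; values are illustrative only.
differences = np.array([1.8, 2.5, -0.4, 3.1, 2.0, 1.2, 2.9, 0.7])

mean = differences.mean()
sd = differences.std(ddof=1)  # sample standard deviation
print(f"Confidence limit (Mean ± 2SD): {mean:.1f}% ± {2 * sd:.1f}%")
print(f"Interval: [{mean - 2 * sd:.1f}%, {mean + 2 * sd:.1f}%]")
```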

  14. RELAP5-3D Resolution of Known Restart/Backup Issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mesina, George L.; Anderson, Nolan A.

    2014-12-01

    The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a “verification file” that records double-precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
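
    The regression part of sequential verification described above amounts to comparing the recorded sums of key variables between consecutive code versions for every test case. The sketch below is a minimal illustration in Python; the "name value" file format, file names, and tolerance are assumptions, not the RELAP5-3D implementation.

```python
# Minimal sketch of a regression check over "verification files" that store
# double-precision sums of key variables, one "name value" pair per line.
def read_sums(path):
    sums = {}
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            name, value = line.split()
            sums[name] = float(value)
    return sums

def compare_versions(baseline_path, candidate_path, rel_tol=1e-12):
    """Return the variables whose sums changed beyond a relative tolerance."""
    base = read_sums(baseline_path)
    cand = read_sums(candidate_path)
    failures = []
    for name, ref in base.items():
        new = cand.get(name)
        if new is None or abs(new - ref) > rel_tol * max(abs(ref), 1.0):
            failures.append((name, ref, new))
    return failures

# Usage: an empty failure list means the update did not change the results.
# failures = compare_versions("base_sums.txt", "update_sums.txt")
```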

  15. A study of compositional verification based IMA integration method

    NASA Astrophysics Data System (ADS)

    Huang, Hui; Zhang, Guoquan; Xu, Wanmeng

    2018-03-01

    The rapid development of avionics systems is driving the application of integrated modular avionics (IMA) systems. While IMA improves avionics system integration, it also increases the complexity of system test, so the method of IMA system testing needs to be simplified. An IMA system provides a module platform that runs multiple applications and shares processing resources. Compared with a federated avionics system, it is difficult to isolate failures in an IMA system. Therefore, the critical problem IMA system verification faces is how to test resources shared by multiple applications. For a simple avionics system, traditional test methods can readily cover the whole system, but for a complex system it is hard to completely test a huge, integrated avionics system. This paper therefore proposes using compositional-verification theory in IMA system testing, reducing the number of test processes and improving efficiency, and consequently economizing on the cost of IMA system integration.

  16. Is it Code Imperfection or 'garbage in Garbage Out'? Outline of Experiences from a Comprehensive Adr Code Verification

    NASA Astrophysics Data System (ADS)

    Zamani, K.; Bombardelli, F. A.

    2013-12-01

    The ADR equation describes many physical phenomena of interest in the field of water quality in natural streams and groundwater. In many cases, such as density-driven flow, multiphase reactive transport, and sediment transport, one or more terms in the ADR equation may become nonlinear. For that reason, numerical tools are the only practical choice to solve these PDEs. All numerical solvers developed for the transport equation need to undergo a code verification procedure before they are put into practice. Code verification is a mathematical activity to uncover failures and check for rigorous discretization of PDEs and implementation of initial/boundary conditions. In the context of computational PDEs, verification is not a well-defined procedure on a clear path. Thus, verification tests should be designed and implemented with in-depth knowledge of the numerical algorithms and the physics of the phenomena, as well as the mathematical behavior of the solution. Even test results need to be mathematically analyzed to distinguish between an inherent limitation of an algorithm and a coding error. Therefore, it is well known that code verification is a state-of-the-art activity in which innovative methods and case-based tricks are very common. This study presents full verification of a general transport code. To that end, a complete test suite is designed to probe the ADR solver comprehensively and discover all possible imperfections. In this study we convey our experiences in finding several errors which were not detectable with routine verification techniques. We developed a test suite including hundreds of unit tests and system tests. The test package increases gradually in complexity, such that tests start simple and build up to the most sophisticated level. Appropriate verification metrics are defined for the required capabilities of the solver as follows: mass conservation, convergence order, capability in handling stiff problems, nonnegative concentration, shape preservation, and spurious wiggles. Thereby, we provide objective, quantitative values as opposed to subjective qualitative descriptions such as 'weak' or 'satisfactory' agreement with those metrics. We start testing from a simple case of unidirectional advection, then bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity and reactions. For all of the mentioned cases we conduct mesh convergence tests. These tests compare the results' observed order of accuracy against the formal order of accuracy of the discretization. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh convergence study and appropriate remedies are also discussed. For the cases in which appropriate benchmarks for a mesh convergence study are not available, we utilize symmetry, complete Richardson extrapolation, and the method of false injection to uncover bugs. Detailed discussions of the capabilities of the mentioned code verification techniques are given. Auxiliary subroutines for automation of the test suite and report generation were designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the ADR solver's capabilities. Such information is essential for any rigorous computational modeling of the ADR equation for surface/subsurface pollution transport.
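
    The mesh-convergence metric mentioned above, the observed order of accuracy compared against the formal order of the discretization, follows directly from the errors on successively refined grids. The sketch below assumes a known benchmark solution and a constant refinement ratio; the error values are hypothetical.

```python
import numpy as np

def observed_order(error_coarse, error_fine, refinement_ratio=2.0):
    """Observed order of accuracy p from errors on two grids:
    p = ln(e_coarse / e_fine) / ln(r)."""
    return np.log(error_coarse / error_fine) / np.log(refinement_ratio)

# Hypothetical L2 errors from a mesh-convergence study of an ADR solver,
# with the grid refined by a factor of 2 each time
errors = [4.0e-2, 1.1e-2, 2.8e-3]
for e_coarse, e_fine in zip(errors, errors[1:]):
    # expect values near 2 for a formally second-order scheme
    print(f"observed order ≈ {observed_order(e_coarse, e_fine):.2f}")
```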

  17. 40 CFR 1065.307 - Linearity verification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... measurement (such as a scale, balance, or mass comparator) at the inlet to the fuel-measurement system. Use a... nitrogen. Select gas divisions that you typically use. Use a selected gas division as the measured value.... (9) Mass. For linearity verification for gravimetric PM balances, use external calibration weights...

  18. 40 CFR 1065.307 - Linearity verification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... different flow rates. Use a gravimetric reference measurement (such as a scale, balance, or mass comparator... the gas-division system to divide the span gas with purified air or nitrogen. Select gas divisions... verification for gravimetric PM balances, use external calibration weights that meet the requirements in...

  19. Microsatellite Imputation for parental verification from SNP across multiple Bos taurus and indicus breeds

    USDA-ARS?s Scientific Manuscript database

    Microsatellite markers (MS) have traditionally been used for parental verification and are still the international standard in spite of their higher cost, error rate, and turnaround time compared with Single Nucleotide Polymorphisms (SNP)-based assays. Despite domestic and international demands fro...

  20. TETAM Model Verification Study. Volume I. Representation of Intervisibility, Initial Comparisons

    DTIC Science & Technology

    1976-02-01

    simulation models in terms of firings, engagements, and losses between tank and antitank as compared with the field data collected during the free play battles of Field Experiment 11.8 are found in Volume III. (Author)

  1. Verification of Space Weather Forecasts using Terrestrial Weather Approaches

    NASA Astrophysics Data System (ADS)

    Henley, E.; Murray, S.; Pope, E.; Stephenson, D.; Sharpe, M.; Bingham, S.; Jackson, D.

    2015-12-01

    The Met Office Space Weather Operations Centre (MOSWOC) provides a range of 24/7 operational space weather forecasts, alerts, and warnings, which provide valuable information on space weather that can degrade electricity grids, radio communications, and satellite electronics. Forecasts issued include arrival times of coronal mass ejections (CMEs), and probabilistic forecasts for flares, geomagnetic storm indices, and energetic particle fluxes and fluences. These forecasts are produced twice daily using a combination of output from models such as Enlil, near-real-time observations, and forecaster experience. Verification of forecasts is crucial for users, researchers, and forecasters to understand the strengths and limitations of forecasters, and to assess forecaster added value. To this end, the Met Office (in collaboration with Exeter University) has been adapting verification techniques from terrestrial weather, and has been working closely with the International Space Environment Service (ISES) to standardise verification procedures. We will present the results of part of this work, analysing forecast and observed CME arrival times, assessing skill using 2x2 contingency tables. These MOSWOC forecasts can be objectively compared to those produced by the NASA Community Coordinated Modelling Center - a useful benchmark. This approach cannot be taken for the other forecasts, as they are probabilistic and categorical (e.g., geomagnetic storm forecasts give probabilities of exceeding levels from minor to extreme). We will present appropriate verification techniques being developed to address these forecasts, such as rank probability skill score, and comparing forecasts against climatology and persistence benchmarks. As part of this, we will outline the use of discrete time Markov chains to assess and improve the performance of our geomagnetic storm forecasts. We will also discuss work to adapt a terrestrial verification visualisation system to space weather, to help MOSWOC forecasters view verification results in near real-time; plans to objectively assess flare forecasts under the EU Horizon 2020 FLARECAST project; and summarise ISES efforts to achieve consensus on verification.
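
    The 2x2 contingency-table assessment of CME arrival forecasts mentioned above reduces to counts of hits, misses, false alarms, and correct negatives, from which standard categorical scores follow. The sketch below uses common definitions (probability of detection, false alarm ratio, Heidke skill score); the counts are purely illustrative and are not MOSWOC results.

```python
def contingency_scores(hits, misses, false_alarms, correct_negatives):
    """Categorical verification scores from a 2x2 contingency table."""
    pod = hits / (hits + misses)                # probability of detection
    far = false_alarms / (hits + false_alarms)  # false alarm ratio
    total = hits + misses + false_alarms + correct_negatives
    # Heidke skill score: correct forecasts relative to those expected by chance
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses) * (correct_negatives + false_alarms)) / total
    hss = (hits + correct_negatives - expected) / (total - expected)
    return pod, far, hss

# Illustrative counts: a "hit" is a CME arrival predicted and observed
# within the chosen time window
pod, far, hss = contingency_scores(hits=18, misses=7, false_alarms=9, correct_negatives=30)
print(f"POD = {pod:.2f}, FAR = {far:.2f}, HSS = {hss:.2f}")
```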

  2. 40 CFR 1065.307 - Linearity verification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... different flow rates. Use a gravimetric reference measurement (such as a scale, balance, or mass comparator... the gas-division system to divide the span gas with purified air or nitrogen. Select gas divisions... PM balance, m max refers to the typical mass of a PM filter. (ii) For linearity verification of...

  3. 34 CFR 668.131 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... confirmation: A process by which the Secretary, by means of a matching program conducted with the INS, compares... records of that status maintained by the INS in its Alien Status Verification Index (ASVI) system for the... the INS, in response to the submission of INS Document Verification Form G-845 by an institution...

  4. Automated Installation Verification of COMSOL via LiveLink for MATLAB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowell, Michael W

    Verifying that a local software installation performs as the developer intends is a potentially time-consuming but necessary step for nuclear safety-related codes. Automating this process not only saves time, but can increase reliability and scope of verification compared to ‘hand’ comparisons. While COMSOL does not include automatic installation verification as many commercial codes do, it does provide tools such as LiveLink™ for MATLAB® and the COMSOL API for use with Java® through which the user can automate the process. Here we present a successful automated verification example of a local COMSOL 5.0 installation for nuclear safety-related calculations at the Oak Ridge National Laboratory’s High Flux Isotope Reactor (HFIR).

  5. Isocenter verification for linac‐based stereotactic radiation therapy: review of principles and techniques

    PubMed Central

    Sabet, Mahsheed; O'Connor, Daryl J.; Greer, Peter B.

    2011-01-01

    There have been several manual, semi‐automatic and fully‐automatic methods proposed for verification of the position of mechanical isocenter as part of comprehensive quality assurance programs required for linear accelerator‐based stereotactic radiosurgery/radiotherapy (SRS/SRT) treatments. In this paper, a systematic review has been carried out to discuss the present methods for isocenter verification and compare their characteristics, to help physicists in making a decision on selection of their quality assurance routine. PACS numbers: 87.53.Ly, 87.56.Fc, 87.56.‐v PMID:22089022

  6. Computer simulated building energy consumption for verification of energy conservation measures in network facilities

    NASA Technical Reports Server (NTRS)

    Plankey, B.

    1981-01-01

    A computer program called ECPVER (Energy Consumption Program - Verification) was developed to simulate all energy loads for any number of buildings. The program computes simulated daily, monthly, and yearly energy consumption which can be compared with actual meter readings for the same time period. Such comparison can lead to validation of the model under a variety of conditions, which allows it to be used to predict future energy saving due to energy conservation measures. Predicted energy saving can then be compared with actual saving to verify the effectiveness of those energy conservation changes. This verification procedure is planned to be an important advancement in the Deep Space Network Energy Project, which seeks to reduce energy cost and consumption at all DSN Deep Space Stations.

  7. Verification and Improvement of ERS-1/2 Altimeter Geophysical Data Records for Global Change Studies

    NASA Technical Reports Server (NTRS)

    Shum, C. K.

    2000-01-01

    This Final Technical Report summarizes the research work conducted under NASA's Physical Oceanography Program entitled, Verification And Improvement Of ERS-1/2 Altimeter Geophysical Data Records For Global Change Studies, for the time period from January 1, 2000 through June 30, 2000. This report also provides a summary of the investigation from July 1, 1997 - June 30, 2000. The primary objectives of this investigation include verification and improvement of the ERS-1 and ERS-2 radar altimeter geophysical data records for distribution of the data to the ESA-approved U.S. ERS-1/-2 investigators for global climate change studies. Specifically, the investigation is to verify and improve the ERS geophysical data record products by calibrating the instrument and assessing accuracy for the ERS-1/-2 orbital, geophysical, media, and instrument corrections. The purpose is to ensure the consistency of constants, standards and algorithms with the TOPEX/POSEIDON radar altimeter for global climate change studies such as the monitoring and interpretation of long-term sea level change. This investigation has provided the current best precise orbits, with the radial orbit accuracy for ERS-1 (Phases C-G) and ERS-2 estimated at the 3-5 cm rms level, a 30-fold improvement compared to the 1993 accuracy. We have finalized the production and verification of the value-added ERS-1 mission data (Phases A, B, C, D, E, F, and G), in collaboration with JPL PODAAC and the University of Texas. Orbit and data verification and improvement of algorithms led to the best data product available to date. ERS-2 altimeter data have been improved and we have been active in the Envisat (2001 launch) GDR algorithm review and improvement. The data improvement of ERS-1 and ERS-2 led to improvement in the global mean sea surface, marine gravity anomaly and bathymetry models, and a study of Antarctica mass balance, which was published in Science in 1998.

  8. Imputation of microsatellite alleles from dense SNP genotypes for parentage verification across multiple Bos taurus and Bos indicus breeds

    USDA-ARS?s Scientific Manuscript database

    Microsatellite markers (MS) have traditionally been used for parental verification and are still the international standard in spite of their higher cost, error rate, and turnaround time compared with Single Nucleotide Polymorphisms (SNP) -based assays. Despite domestic and international demands fr...

  9. Working Memory Mechanism in Proportional Quantifier Verification

    ERIC Educational Resources Information Center

    Zajenkowski, Marcin; Szymanik, Jakub; Garraffa, Maria

    2014-01-01

    The paper explores the cognitive mechanisms involved in the verification of sentences with proportional quantifiers (e.g. "More than half of the dots are blue"). The first study shows that the verification of proportional sentences is more demanding than the verification of sentences such as: "There are seven blue and eight yellow…

  10. Modeling interfacial fracture in Sierra.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Arthur A.; Ohashi, Yuki; Lu, Wei-Yang

    2013-09-01

    This report summarizes computational efforts to model interfacial fracture using cohesive zone models in the SIERRA/SolidMechanics (SIERRA/SM) finite element code. Cohesive surface elements were used to model crack initiation and propagation along predefined paths. Mesh convergence was observed with SIERRA/SM for numerous geometries. As the funding for this project came from the Advanced Simulation and Computing Verification and Validation (ASC V&V) focus area, considerable effort was spent performing verification and validation. Code verification was performed to compare code predictions to analytical solutions for simple three-element simulations as well as a higher-fidelity simulation of a double-cantilever beam. Parameter identification was conducted with Dakota using experimental results on asymmetric double-cantilever beam (ADCB) and end-notched-flexure (ENF) experiments conducted under Campaign-6 funding. Discretization convergence studies were also performed with respect to mesh size and time step and an optimization study was completed for mode II delamination using the ENF geometry. Throughout this verification process, numerous SIERRA/SM bugs were found and reported, all of which have been fixed, leading to over a 10-fold increase in convergence rates. Finally, mixed-mode flexure experiments were performed for validation. One of the unexplained issues encountered was material property variability for ostensibly the same composite material. Since the variability is not fully understood, it is difficult to accurately assess uncertainty when performing predictions.

  11. Assessment of test methods for evaluating effectiveness of cleaning flexible endoscopes.

    PubMed

    Washburn, Rebecca E; Pietsch, Jennifer J

    2018-06-01

    Strict adherence to each step of reprocessing is imperative to removing potentially infectious agents. Multiple methods for verifying proper reprocessing exist; however, each presents challenges and limitations, and best practice within the industry has not been established. Our goal was to evaluate endoscope cleaning verification tests with particular interest in the evaluation of the manual cleaning step. The results of the cleaning verification tests were compared with microbial culturing to see if a positive cleaning verification test would be predictive of microbial growth. This study was conducted at 2 high-volume endoscopy units within a multisite health care system. Each of the 90 endoscopes was tested for adenosine triphosphate, protein, microbial growth via agar plate, and rapid gram-negative culture via assay. The endoscopes were tested in 3 locations: the instrument channel, control knob, and elevator mechanism. This analysis showed a substantial level of agreement between protein detection postmanual cleaning and protein detection post-high-level disinfection at the control head for scopes sampled sequentially. This study suggests that if protein is detected postmanual cleaning, there is a significant likelihood that protein will also be detected post-high-level disinfection. It also indicates that a cleaning verification test is not predictive of microbial growth. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
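
    The abstract does not state which agreement statistic was used; Cohen's kappa is one common way to quantify the "level of agreement" between two binary tests such as protein detection post-manual cleaning and post-high-level disinfection. The sketch below is a generic kappa calculation with hypothetical counts, not the study's data.

```python
def cohens_kappa(both_pos, first_only, second_only, both_neg):
    """Cohen's kappa for agreement between two binary tests."""
    n = both_pos + first_only + second_only + both_neg
    observed = (both_pos + both_neg) / n                 # observed agreement
    p_first = (both_pos + first_only) / n                # positives, test 1
    p_second = (both_pos + second_only) / n              # positives, test 2
    expected = p_first * p_second + (1 - p_first) * (1 - p_second)
    return (observed - expected) / (1 - expected)

# Hypothetical counts for 90 endoscopes (illustrative only); kappa between
# 0.61 and 0.80 is conventionally described as "substantial" agreement
print(f"kappa = {cohens_kappa(both_pos=12, first_only=6, second_only=4, both_neg=68):.2f}")
```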

  12. Monitoring tobacco brand websites to understand marketing strategies aimed at tobacco product users and potential users.

    PubMed

    Escobedo, Patricia; Cruz, Tess Boley; Tsai, Kai-Ya; Allem, Jon-Patrick; Soto, Daniel W; Kirkpatrick, Matthew G; Pattarroyo, Monica; Unger, Jennifer B

    2017-09-11

    Limited information exists about strategies and methods used on brand marketing websites to transmit pro-tobacco messages to tobacco users and potential users. This study compared age verification methods, themes, interactive activities and links to social media across tobacco brand websites. This study examined 12 tobacco brand websites representing four tobacco product categories: cigarettes, cigar/cigarillos, smokeless tobacco, and e-cigarettes. Website content was analyzed by tobacco product category, and data from all website visits (n = 699) were analyzed. Adult smokers (n = 32) coded websites during a one-year period, indicating whether or not they observed any of 53 marketing themes, seven interactive activities, or five external links to social media sites. Most (58%) websites required online registration before entering; however, e-cigarette websites used click-through age verification. Compared to cigarette sites, cigar/cigarillo sites were more likely to feature themes related to a "party" lifestyle, and e-cigarette websites were much more likely to feature themes related to harm reduction. Cigarette sites featured greater levels of interactive content compared to other tobacco products. Compared to cigarette sites, cigar/cigarillo sites were more likely to feature activities related to events and music. Compared to cigarette sites, both cigar and e-cigarette sites were more likely to direct visitors to external social media sites. How marketing methods and strategies normalize tobacco use by providing website visitors with positive themes combined with interactive content is an area for future research. Moreover, all tobacco products under federal regulatory authority should be required to use more stringent age verification gates. Findings indicate the Food and Drug Administration (FDA) should require that brand websites of all tobacco products under its regulatory authority use more stringent age verification gates by requiring all visitors to be at least 18 years of age and to register online prior to entry. This is important given that marketing strategies may encourage experimentation with tobacco or deter quit attempts among website visitors. Future research should examine the use of interactive activities and social media on a wide variety of tobacco brand websites, as interactive content is associated with more active information processing. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. Ultrasound functional imaging in an ex vivo beating porcine heart platform

    NASA Astrophysics Data System (ADS)

    Petterson, Niels J.; Fixsen, Louis S.; Rutten, Marcel C. M.; Pijls, Nico H. J.; van de Vosse, Frans N.; Lopata, Richard G. P.

    2017-12-01

    In recent years, novel ultrasound functional imaging (UFI) techniques have been introduced to assess cardiac function by measuring, e.g. cardiac output (CO) and/or myocardial strain. Verification and reproducibility assessment in a realistic setting remain major issues. Simulations and phantoms are often unrealistic, whereas in vivo measurements often lack crucial hemodynamic parameters or ground truth data, or suffer from the large physiological and clinical variation between patients when attempting clinical validation. Controlled validation in certain pathologies is cumbersome and often requires the use of lab animals. In this study, an isolated beating pig heart setup was adapted and used for performance assessment of UFI techniques such as volume assessment and ultrasound strain imaging. The potential of performing verification and reproducibility studies was demonstrated. For proof-of-principle, validation of UFI in pathological hearts was examined. Ex vivo porcine hearts (n = 6, slaughterhouse waste) were resuscitated and attached to a mock circulatory system. Radio frequency ultrasound data of the left ventricle were acquired in five short axis views and one long axis view. Based on these slices, the CO was measured, where verification was performed using flow sensor measurements in the aorta. Strain imaging was performed providing radial, circumferential and longitudinal strain to assess reproducibility and inter-subject variability under steady conditions. Finally, strains in healthy hearts were compared to a heart with an implanted left ventricular assist device, simulating a failing, supported heart. Good agreement between ultrasound and flow sensor based CO measurements was found. Strains were highly reproducible (intraclass correlation coefficients > 0.8). Differences were found due to biological variation and condition of the hearts. Strain magnitude and patterns in the assisted heart were available for different pump action, revealing large changes compared to the normal condition. The setup provides a valuable benchmarking platform for UFI techniques. Future studies will include work on different pathologies and other means of measurement verification.

  14. The politics of verification and the control of nuclear tests, 1945-1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallagher, N.W.

    1990-01-01

    This dissertation addresses two questions: (1) why has agreement been reached on verification regimes to support some arms control accords but not others; and (2) what determines the extent to which verification arrangements promote stable cooperation. This study develops an alternative framework for analysis by examining the politics of verification at two levels. The logical politics of verification are shaped by the structure of the problem of evaluating cooperation under semi-anarchical conditions. The practical politics of verification are driven by players' attempts to use verification arguments to promote their desired security outcome. The historical material shows that agreements on verification regimes are reached when key domestic and international players desire an arms control accord and believe that workable verification will not have intolerable costs. Clearer understanding of how verification is itself a political problem, and how players manipulate it to promote other goals is necessary if the politics of verification are to support rather than undermine the development of stable cooperation.

  15. SU-E-T-762: Toward Volume-Based Independent Dose Verification as Secondary Check

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tachibana, H; Tachibana, R

    2015-06-15

    Purpose: Lung SBRT planning has shifted to volume prescription techniques. However, point dose agreement is still verified using independent dose verification at the secondary check. Volume dose verification is more affected by inhomogeneity correction than the point dose verification currently used as the check. A feasibility study of volume dose verification was conducted for lung SBRT plans. Methods: Six SBRT plans were collected in our institute. Two dose distributions, with and without inhomogeneity correction, were generated using Adaptive Convolve (AC) in Pinnacle3. Simple MU Analysis (SMU, Triangle Product, Ishikawa, JP) was used as the independent dose verification software program, in which a modified Clarkson-based algorithm was implemented and the radiological path length was computed using CT images independently of the treatment planning system. The agreement in point dose and mean dose between the AC with/without the correction and the SMU was assessed. Results: In the point dose evaluation for the center of the GTV, the difference shows a systematic shift (4.5% ± 1.9%) in comparison with the AC with the inhomogeneity correction; on the other hand, there was good agreement of 0.2 ± 0.9% between the SMU and the AC without the correction. In the volume evaluation, there were significant differences in mean dose not only for the PTV (14.2 ± 5.1%) but also for the GTV (8.0 ± 5.1%) compared to the AC with the correction. Without the correction, the SMU showed good agreement for the GTV (1.5 ± 0.9%) as well as the PTV (0.9% ± 1.0%). Conclusion: Volume evaluation for the secondary check may be possible in homogeneous regions. However, volumes including inhomogeneous media would produce larger discrepancies. The dose calculation algorithm for independent verification needs to be modified to take the inhomogeneity correction into account.

  16. Verification on spray simulation of a pintle injector for liquid rocket engine

    NASA Astrophysics Data System (ADS)

    Son, Min; Yu, Kijeong; Radhakrishnan, Kanmaniraja; Shin, Bongchul; Koo, Jaye

    2016-02-01

    The pintle injector used for a liquid rocket engine is an injection system that has recently attracted renewed interest, known for its wide throttling ability with high efficiency. The pintle injector has many variations with complex inner structures due to its moving parts. In order to study the rotating flow near the injector tip, which was observed in a cold flow experiment using water and air, a numerical simulation was adopted and a verification of the numerical model was later conducted. For the verification process, three types of experimental data, including velocity distributions of gas flows, spray angles and liquid distribution, were compared with simulated results. The numerical simulation was performed using a commercial simulation program with the Eulerian multiphase model and axisymmetric two-dimensional grids. The maximum and minimum velocities of the gas were within the acceptable range of agreement; however, the spray angles showed up to 25% error when the momentum ratios were increased. The spray density distributions were quantitatively measured and had good agreement. As a result of this study, it was concluded that the simulation method was properly constructed to study specific flow characteristics of the pintle injector despite the limitations of two-dimensional and coarse grids.

  17. SU-E-T-49: A Multi-Institutional Study of Independent Dose Verification for IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baba, H; Tachibana, H; Kamima, T

    2015-06-15

    Purpose: AAPM TG114 does not cover independent verification for IMRT. We conducted a study of independent dose verification for IMRT in seven institutes to show its feasibility. Methods: 384 IMRT plans for prostate and head and neck (HN) sites were collected from the institutes, where planning was performed using Eclipse and Pinnacle3 with the two techniques of step and shoot (S&S) and sliding window (SW). All of the institutes used the same independent dose verification software program (Simple MU Analysis: SMU, Triangle Product, Ishikawa, JP), which is Clarkson-based and uses CT images to compute the radiological path length. An ion-chamber measurement in a water-equivalent slab phantom was performed to compare the doses computed using the TPS and the independent dose verification program. Additionally, the agreement in dose computed in patient CT images between the TPS and the SMU was assessed. The dose of the composite beams in the plan was evaluated. Results: The agreement between the measurement and the SMU was −2.3±1.9% and −5.6±3.6% for the prostate and HN sites, respectively. The agreement between the TPSs and the SMU was −2.1±1.9% and −3.0±3.7% for the prostate and HN sites, respectively. There was a negative systematic difference with similar standard deviation, and the difference was larger for the HN site. The S&S technique showed a statistically significant difference from the SW technique, because the Clarkson-based method in the independent program underestimates (cannot consider) the dose under the MLC. Conclusion: The accuracy would be improved if the Clarkson-based algorithm were modified for IMRT, and the tolerance level should be within 5%.

  18. Comparison between In-house developed and Diamond commercial software for patient specific independent monitor unit calculation and verification with heterogeneity corrections.

    PubMed

    Kuppusamy, Vijayalakshmi; Nagarajan, Vivekanandan; Jeevanandam, Prakash; Murugan, Lavanya

    2016-02-01

    The study aimed to compare two different monitor unit (MU) or dose verification software programs in volumetric modulated arc therapy (VMAT) using a modified Clarkson's integration technique for 6 MV photon beams. An in-house Excel spreadsheet-based monitor unit verification calculation (MUVC) program and PTW's DIAMOND secondary check software (SCS), version 6, were used as a secondary check to verify the monitor units (MU) or dose calculated by the treatment planning system (TPS). In this study 180 patients were grouped into 61 head and neck, 39 thorax and 80 pelvic sites. Verification plans were created using the PTW OCTAVIUS-4D phantom and also measured using the 729 detector chamber and array, with the isocentre as the suitable point of measurement for each field. In the analysis of 154 clinically approved VMAT plans with the isocentre at a region above -350 HU, using heterogeneity corrections, the in-house spreadsheet-based MUVC program and the Diamond SCS showed good agreement with the TPS. The overall average percentage deviations for all sites were (-0.93% ± 1.59%) and (1.37% ± 2.72%) for the in-house spreadsheet-based MUVC program and the Diamond SCS, respectively. For 26 clinically approved VMAT plans with the isocentre at a region below -350 HU, both the in-house spreadsheet-based MUVC program and the Diamond SCS showed higher variations. It can be concluded that for patient-specific quality assurance (QA), the in-house Excel spreadsheet-based MUVC program and the Diamond SCS can be used as a simple and fast accompaniment to measurement-based verification for plans with the isocentre at a region above -350 HU. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  19. Time trend of injection drug errors before and after implementation of bar-code verification system.

    PubMed

    Sakushima, Ken; Umeki, Reona; Endoh, Akira; Ito, Yoichi M; Nasuhara, Yasuyuki

    2015-01-01

    Bar-code technology, used for verification of patients and their medication, could prevent medication errors in clinical practice. Retrospective analysis of electronically stored medical error reports was conducted in a university hospital. The number of reported medication errors of injected drugs, including wrong drug administration and administration to the wrong patient, was compared before and after implementation of the bar-code verification system for inpatient care. A total of 2867 error reports associated with injection drugs were extracted. Wrong patient errors decreased significantly after implementation of the bar-code verification system (17.4/year vs. 4.5/year, p< 0.05), although wrong drug errors did not decrease sufficiently (24.2/year vs. 20.3/year). The source of medication errors due to wrong drugs was drug preparation in hospital wards. Bar-code medication administration is effective for prevention of wrong patient errors. However, ordinary bar-code verification systems are limited in their ability to prevent incorrect drug preparation in hospital wards.
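
    The before/after comparison of error counts described above can be framed as a comparison of two Poisson rates; the abstract does not state the exact test used, so the sketch below shows one standard approach (a conditional binomial test) with illustrative counts and observation periods.

```python
from scipy.stats import binomtest

# Illustrative counts: wrong-patient injection errors before and after
# bar-code verification; event counts and observation years are hypothetical.
events_before, years_before = 52, 3.0   # about 17.4 errors/year
events_after, years_after = 18, 4.0     # about 4.5 errors/year

# Conditional test for two Poisson rates: given the total number of events,
# the "before" count is Binomial(n_total, years_before / total_years) under H0.
n_total = events_before + events_after
p_null = years_before / (years_before + years_after)
result = binomtest(events_before, n_total, p_null, alternative="two-sided")

print(f"rate before = {events_before / years_before:.1f}/yr, "
      f"after = {events_after / years_after:.1f}/yr, p = {result.pvalue:.4f}")
```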

  20. Resolution verification targets for airborne and spaceborne imaging systems at the Stennis Space Center

    NASA Astrophysics Data System (ADS)

    McKellip, Rodney; Yuan, Ding; Graham, William; Holland, Donald E.; Stone, David; Walser, William E.; Mao, Chengye

    1997-06-01

    The number of available spaceborne and airborne systems will dramatically increase over the next few years. A common systematic approach toward verification of these systems will become important for comparing the systems' operational performance. The Commercial Remote Sensing Program at the John C. Stennis Space Center (SSC) in Mississippi has developed design requirements for a remote sensing verification target range to provide a means to evaluate spatial, spectral, and radiometric performance of optical digital remote sensing systems. The verification target range consists of spatial, spectral, and radiometric targets painted on a 150- by 150-meter concrete pad located at SSC. The design criteria for this target range are based upon work over a smaller, prototypical target range at SSC during 1996. This paper outlines the purpose and design of the verification target range based upon an understanding of the systems to be evaluated as well as data analysis results from the prototypical target range.

  1. Influence of the Redundant Verification and the Non-Redundant Verification on the Hydraulic Tomography

    NASA Astrophysics Data System (ADS)

    Wei, T. B.; Chen, Y. L.; Lin, H. R.; Huang, S. Y.; Yeh, T. C. J.; Wen, J. C.

    2016-12-01

    In groundwater studies, the heterogeneous spatial distribution of hydraulic properties must be estimated. Many scholars have used hydraulic tomography (HT) with field-site pumping tests to inversely estimate the heterogeneous spatial distribution of hydraulic properties, demonstrating that most field-site aquifers have heterogeneous spatial distributions of hydrogeological parameters. Among the methods proposed for hydraulic tomography, Huang et al. [2011] used non-redundant verification analysis, with the pumping wells changed and the observation wells fixed for the inverse and forward analyses, to demonstrate the feasibility of estimating the heterogeneous spatial distribution of hydraulic properties of a field-site aquifer with a steady-state model. In the literature to date, non-redundant verification analysis (pumping well locations changed, observation well locations fixed for the inverse and forward analyses) has been carried out only in steady state. Previous studies have not yet explored the various combinations of pumping wells fixed or changed in location with observation wells fixed in location (redundant verification) or changed in location (non-redundant verification), and their influence on the hydraulic tomography method. In this study, we carried out the redundant verification method and the non-redundant verification method for the forward analysis to examine their influence on the hydraulic tomography method in the transient case. We discuss the above using an actual case at the NYUST campus site, to demonstrate the effectiveness of hydraulic tomography methods and to confirm feasibility from the inverse and forward analysis results. Keywords: Hydraulic Tomography, Redundant Verification, Heterogeneous, Inverse, Forward

  2. Integrated Formal Analysis of Timed-Triggered Ethernet

    NASA Technical Reports Server (NTRS)

    Dutertre, Bruno; Shankar, Natarajan; Owre, Sam

    2012-01-01

    We present new results related to the verification of the Timed-Triggered Ethernet (TTE) clock synchronization protocol. This work extends previous verification of TTE based on model checking. We identify a suboptimal design choice in a compression function used in clock synchronization, and propose an improvement. We compare the original design and the improved definition using the SAL model checker.

  3. Modality Switching in a Property Verification Task: An ERP Study of What Happens When Candles Flicker after High Heels Click

    PubMed Central

    Collins, Jennifer; Pecher, Diane; Zeelenberg, René; Coulson, Seana

    2011-01-01

    The perceptual modalities associated with property words, such as flicker or click, have previously been demonstrated to affect subsequent property verification judgments (Pecher et al., 2003). Known as the conceptual modality switch effect, this finding supports the claim that brain systems for perception and action help subserve the representation of concepts. The present study addressed the cognitive and neural substrate of this effect by recording event-related potentials (ERPs) as participants performed a property verification task with visual or auditory properties in key trials. We found that for visual property verifications, modality switching was associated with an increased amplitude N400. For auditory verifications, switching led to a larger late positive complex. Observed ERP effects of modality switching suggest property words access perceptual brain systems. Moreover, the timing and pattern of the effects suggest perceptual systems impact the decision-making stage in the verification of auditory properties, and the semantic stage in the verification of visual properties. PMID:21713128

  4. Modality Switching in a Property Verification Task: An ERP Study of What Happens When Candles Flicker after High Heels Click.

    PubMed

    Collins, Jennifer; Pecher, Diane; Zeelenberg, René; Coulson, Seana

    2011-01-01

    The perceptual modalities associated with property words, such as flicker or click, have previously been demonstrated to affect subsequent property verification judgments (Pecher et al., 2003). Known as the conceptual modality switch effect, this finding supports the claim that brain systems for perception and action help subserve the representation of concepts. The present study addressed the cognitive and neural substrate of this effect by recording event-related potentials (ERPs) as participants performed a property verification task with visual or auditory properties in key trials. We found that for visual property verifications, modality switching was associated with an increased amplitude N400. For auditory verifications, switching led to a larger late positive complex. Observed ERP effects of modality switching suggest property words access perceptual brain systems. Moreover, the timing and pattern of the effects suggest perceptual systems impact the decision-making stage in the verification of auditory properties, and the semantic stage in the verification of visual properties.

  5. Functions of social support and self-verification in association with loneliness, depression, and stress.

    PubMed

    Wright, Kevin B; King, Shawn; Rosenberg, Jenny

    2014-01-01

    This study investigated the influence of social support and self-verification on loneliness, depression, and stress among 477 college students. The authors propose and test a theoretical model using structural equation modeling. The results indicated empirical support for the model, with self-verification mediating the relation between social support and health outcomes. The results have implications for social support and self-verification research, which are discussed along with directions for future research and limitations of the study.

  6. An assessment of space shuttle flight software development processes

    NASA Technical Reports Server (NTRS)

    1993-01-01

    In early 1991, the National Aeronautics and Space Administration's (NASA's) Office of Space Flight commissioned the Aeronautics and Space Engineering Board (ASEB) of the National Research Council (NRC) to investigate the adequacy of the current process by which NASA develops and verifies changes and updates to the Space Shuttle flight software. The Committee for Review of Oversight Mechanisms for Space Shuttle Flight Software Processes was convened in Jan. 1992 to accomplish the following tasks: (1) review the entire flight software development process from the initial requirements definition phase to final implementation, including object code build and final machine loading; (2) review and critique NASA's independent verification and validation process and mechanisms, including NASA's established software development and testing standards; (3) determine the acceptability and adequacy of the complete flight software development process, including the embedded validation and verification processes through comparison with (1) generally accepted industry practices, and (2) generally accepted Department of Defense and/or other government practices (comparing NASA's program with organizations and projects having similar volumes of software development, software maturity, complexity, criticality, lines of code, and national standards); (4) consider whether independent verification and validation should continue. An overview of the study, independent verification and validation of critical software, and the Space Shuttle flight software development process are addressed. Findings and recommendations are presented.

  7. Prompt gamma timing range verification for scattered proton beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kormoll, T.; Golnik, C.; Hueso Gonzalez, F.

    2015-07-01

    Range verification is a very important point in order to fully exploit the physical advantages of protons compared to photons in cancer irradiation. Recently, a simple method has been proposed which makes use of the time of flight of protons in tissue and the promptly emitted secondary photons along the proton path (Prompt Gamma Timing, PGT). This has so far been considered for monoenergetic pencil beams only. In this work, it has been studied whether this technique can also be applied in passively formed irradiation fields with a so-called spread-out Bragg peak. Time-correlated profiles could be recorded, which show a trend that is consistent with theoretical predictions. (authors)

  8. Exploring the Possible Use of Information Barriers for future Biological Weapons Verification Regimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luke, S J

    2011-12-20

    This report describes a path forward for implementing information barriers in a future generic biological arms-control verification regime. Information barriers have become a staple of discussion in the area of arms control verification approaches for nuclear weapons and components. Information barriers when used with a measurement system allow for the determination that an item has sensitive characteristics without releasing any of the sensitive information. Over the last 15 years the United States (with the Russian Federation) has led on the development of information barriers in the area of the verification of nuclear weapons and nuclear components. The work of the US and the Russian Federation has prompted other states (e.g., UK and Norway) to consider the merits of information barriers for possible verification regimes. In the context of a biological weapons control verification regime, the dual-use nature of the biotechnology will require protection of sensitive information while allowing for the verification of treaty commitments. A major question that has arisen is whether - in a biological weapons verification regime - the presence or absence of a weapon pathogen can be determined without revealing any information about possible sensitive or proprietary information contained in the genetic materials being declared under a verification regime. This study indicates that a verification regime could be constructed using a small number of pathogens that spans the range of known biological weapons agents. Since the number of possible pathogens is small it is possible and prudent to treat these pathogens as analogies to attributes in a nuclear verification regime. This study has determined that there may be some information that needs to be protected in a biological weapons control verification regime. To protect this information, the study concludes that the Lawrence Livermore Microbial Detection Array may be a suitable technology for the detection of the genetic information associated with the various pathogens. In addition, it has been determined that a suitable information barrier could be applied to this technology when the verification regime has been defined. Finally, the report posits a path forward for additional development of information barriers in a biological weapons verification regime. This path forward has shown that a new analysis approach coined as Information Loss Analysis might need to be pursued so that a numerical understanding of how information can be lost in specific measurement systems can be achieved.

  9. Verification and validation of RADMODL Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimball, K.D.

    1993-03-01

    RADMODL is a system of linked computer codes designed to calculate the radiation environment following an accident in which nuclear materials are released. The RADMODL code and the corresponding Verification and Validation (V&V) calculations (Appendix A) were developed for Westinghouse Savannah River Company (WSRC) by EGS Corporation (EGS). Each module of RADMODL is an independent code and was verified separately. The full system was validated by comparing the output of the various modules with the corresponding output of a previously verified version of the modules. The results of the verification and validation tests show that RADMODL correctly calculates the transport of radionuclides and radiation doses. As a result of this verification and validation effort, RADMODL Version 1.0 is certified for use in calculating the radiation environment following an accident.

  10. Space Station automated systems testing/verification and the Galileo Orbiter fault protection design/verification

    NASA Technical Reports Server (NTRS)

    Landano, M. R.; Easter, R. W.

    1984-01-01

    Aspects of Space Station automated systems testing and verification are discussed, taking into account several program requirements. It is found that these requirements lead to a number of issues and uncertainties which require study and resolution during the Space Station definition phase. Most, if not all, of the considered uncertainties have implications for the overall testing and verification strategy adopted by the Space Station Program. A description is given of the Galileo Orbiter fault protection design/verification approach. Attention is given to a mission description, an Orbiter description, the design approach and process, the fault protection design verification approach/process, and problems of 'stress' testing.

  11. TH-AB-201-01: A Feasibility Study of Independent Dose Verification for CyberKnife

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sato, A; Noda, T; Keduka, Y

    2016-06-15

    Purpose: CyberKnife irradiation is composed of tiny, multiple and intensity-modulated beams compared to conventional linacs. Few publications on independent dose calculation verification for CyberKnife have been reported. In this study, we evaluated the feasibility of independent dose verification for CyberKnife treatment as a secondary check. Methods: The following were measured: test plans using some static and single beams, and clinical plans in a phantom and using patients’ CT. 75 patient plans were collected from several treatment sites: brain, lung, liver and bone. In the test plans and the phantom plans, a pinpoint ion-chamber measurement was performed to assess dose deviation for a treatment planning system (TPS) and an independent verification program, Simple MU Analysis (SMU). In the clinical plans, dose deviation between the SMU and the TPS was assessed. Results: In the test plans, the dose deviations were 3.3±4.5% and 4.1±4.4% for the TPS and the SMU, respectively. In the phantom measurements for the clinical plans, the dose deviations were −0.2±3.6% for the TPS and −2.3±4.8% for the SMU. In the clinical plans using the patients’ CT, the dose deviations were −3.0±2.1% (Mean±1SD). The systematic difference was partially derived from the inverse square law and penumbra calculation. Conclusion: The independent dose calculation for CyberKnife shows −3.0±4.2% (Mean±2SD), and in our study the confidence limit was achieved within the 5% tolerance level from AAPM Task Group 114 for non-IMRT treatment. Thus, it may be feasible to use independent dose calculation verification for CyberKnife treatment as the secondary check. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).

  12. Verification and transfer of thermal pollution model. Volume 3: Verification of 3-dimensional rigid-lid model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Nwadike, E. V.; Sinha, S. K.

    1982-01-01

    The six-volume report: describes the theory of a three dimensional (3-D) mathematical thermal discharge model and a related one dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions. The free-surface model also provides surface height variations with time.

  13. Verification and transfer of thermal pollution model. Volume 5: Verification of 2-dimensional numerical model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Nwadike, E. V.

    1982-01-01

    The six-volume report: describes the theory of a three dimensional (3-D) mathematical thermal discharge model and a related one dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions.

  14. Study on verifying the angle measurement performance of the rotary-laser system

    NASA Astrophysics Data System (ADS)

    Zhao, Jin; Ren, Yongjie; Lin, Jiarui; Yin, Shibin; Zhu, Jigui

    2018-04-01

    An angle verification method to verify the angle measurement performance of the rotary-laser system was developed. Angle measurement performance has a great impact on measuring accuracy. Although there is some previous research on the verification of angle measuring uncertainty for the rotary-laser system, there are still some limitations. High-precision reference angles are used in the method, and an integrated verification platform is set up to evaluate the performance of the system. This paper also examines the error that has the biggest influence on the verification system. Some errors of the verification system are avoided via the experimental method, and some are compensated for through the computational formula and curve fitting. Experimental results show that the angle measurement performance meets the requirement for coordinate measurement. The verification platform can evaluate the uncertainty of angle measurement for the rotary-laser system efficiently.

  15. VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.

    2015-12-01

    A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate Partial Differential Equations (PDEs). The code post-processes model results to produce V&V and UQ information. This information can be used to assess model performance. Automated information on code performance can allow for a systematic methodology to assess the quality of model approximations. The software implements common and accepted code verification schemes. The software uses the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
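
    Richardson Extrapolation and the GCI mentioned above reduce to a few lines of arithmetic once a scalar solution functional is available on three systematically refined grids. The following Python sketch shows the standard calculation (observed order, extrapolated value, GCI with Roache's safety factor); the grid values are made up for illustration and are not VAVUQ output.

    ```python
    import numpy as np

    def observed_order(f_coarse, f_medium, f_fine, r):
        """Observed order of accuracy from three solutions on grids refined by ratio r."""
        return np.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / np.log(r)

    def gci_fine(f_medium, f_fine, r, p, Fs=1.25):
        """Grid Convergence Index for the fine grid; Fs is the safety factor."""
        rel_err = abs((f_medium - f_fine) / f_fine)
        return Fs * rel_err / (r**p - 1.0)

    # Hypothetical solution functional (e.g., peak temperature) on three grids, r = 2.
    f3, f2, f1 = 0.9500, 0.9712, 0.9768   # coarse, medium, fine
    p = observed_order(f3, f2, f1, r=2.0)
    richardson = f1 + (f1 - f2) / (2.0**p - 1.0)   # extrapolated estimate of the exact value
    print(f"observed order p = {p:.2f}")
    print(f"Richardson-extrapolated value = {richardson:.4f}")
    print(f"GCI_fine = {100 * gci_fine(f2, f1, 2.0, p):.2f}%")
    ```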

  16. Verification bias: an under-recognized source of error in assessing the efficacy of MRI of the menisci.

    PubMed

    Richardson, Michael L; Petscavage, Jonelle M

    2011-11-01

    The sensitivity and specificity of magnetic resonance imaging (MRI) for diagnosis of meniscal tears has been studied extensively, with tears usually verified by surgery. However, surgically unverified cases are often not considered in these studies, leading to verification bias, which can falsely increase the sensitivity and decrease the specificity estimates. Our study suggests that such bias may be very common in the meniscal MRI literature, and illustrates techniques to detect and correct for such bias. PubMed was searched for articles estimating sensitivity and specificity of MRI for meniscal tears. These were assessed for verification bias, deemed potentially present if a study included any patients whose MRI findings were not surgically verified. Retrospective global sensitivity analysis (GSA) was performed when possible. Thirty-nine of the 314 studies retrieved from PubMed specifically dealt with meniscal tears. All 39 included unverified patients, and hence, potential verification bias. Only seven articles included sufficient information to perform GSA. Of these, one showed definite verification bias, two showed no bias, and four others showed bias within certain ranges of disease prevalence. Only 9 of 39 acknowledged the possibility of verification bias. Verification bias is underrecognized and potentially common in published estimates of the sensitivity and specificity of MRI for the diagnosis of meniscal tears. When possible, it should be avoided by proper study design. If unavoidable, it should be acknowledged. Investigators should tabulate unverified as well as verified data. Finally, verification bias should be estimated; if present, corrected estimates of sensitivity and specificity should be used. Our online web-based calculator makes this process relatively easy. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
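
    When unverified patients cannot be avoided by design, the standard remedy is to re-weight the verified counts by the verification probabilities in each test stratum (a Begg-and-Greenes-style correction, which assumes that verification depends only on the MRI result). A minimal Python sketch with hypothetical counts, not data from the reviewed studies:

    ```python
    def begg_greenes(a, b, c, d, n_pos, n_neg):
        """Verification-bias-corrected sensitivity/specificity (Begg & Greenes style).

        a, b: verified test-positives with / without disease
        c, d: verified test-negatives with / without disease
        n_pos, n_neg: ALL test-positives / test-negatives in the cohort (verified + unverified)
        """
        v_pos, v_neg = a + b, c + d          # numbers actually verified
        # Estimated counts in the full cohort, assuming missing-at-random verification.
        dis_pos = n_pos * a / v_pos
        dis_neg = n_neg * c / v_neg
        nod_pos = n_pos * b / v_pos
        nod_neg = n_neg * d / v_neg
        sens = dis_pos / (dis_pos + dis_neg)
        spec = nod_neg / (nod_neg + nod_pos)
        return sens, spec

    # Hypothetical meniscal MRI cohort: most MRI-negative knees never go to surgery.
    sens_naive = 90 / (90 + 5)                      # computed from verified cases only
    sens_corr, spec_corr = begg_greenes(a=90, b=20, c=5, d=15, n_pos=120, n_neg=300)
    print(f"naive sensitivity   = {sens_naive:.2f}")
    print(f"corrected sens/spec = {sens_corr:.2f} / {spec_corr:.2f}")
    ```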

  17. A Protocol-Analytic Study of Metacognition in Mathematical Problem Solving.

    ERIC Educational Resources Information Center

    Cai, Jinfa

    1994-01-01

    Metacognitive behaviors of subjects having high (n=2) and low (n=2) levels of mathematical experience were compared across four cognitive processes in mathematical problem solving: orientation, organization, execution, and verification. High-experience subjects engaged in self-regulation and spent more time on orientation and organization. (36…

  18. VERIFICATION OF THE HYDROLOGIC EVALUATION OF LANDFILL PERFORMANCE (HELP) MODEL USING FIELD DATA

    EPA Science Inventory

    The report describes a study conducted to verify the Hydrologic Evaluation of Landfill Performance (HELP) computer model using existing field data from a total of 20 landfill cells at 7 sites in the United States. Simulations using the HELP model were run to compare the predicted...

  19. Multi-centre audit of VMAT planning and pre-treatment verification.

    PubMed

    Jurado-Bruggeman, Diego; Hernández, Victor; Sáez, Jordi; Navarro, David; Pino, Francisco; Martínez, Tatiana; Alayrach, Maria-Elena; Ailleres, Norbert; Melero, Alejandro; Jornet, Núria

    2017-08-01

    We performed a multi-centre intercomparison of VMAT dose planning and pre-treatment verification. The aims were to analyse the dose plans in terms of dosimetric quality and deliverability, and to validate whether in-house pre-treatment verification results agreed with those of an external audit. The nine participating centres encompassed different machines, equipment, and methodologies. Two mock cases (prostate and head and neck) were planned using one and two arcs. A plan quality index was defined to compare the plans and different complexity indices were calculated to check their deliverability. We compared gamma index pass rates using the centre's equipment and methodology to those of an external audit (global 3D gamma, absolute dose differences, 10% of maximum dose threshold). Log-file analysis was performed to look for delivery errors. All centres fulfilled the dosimetric goals but plan quality and delivery complexity were heterogeneous and uncorrelated, depending on the manufacturer and the planner's methodology. Pre-treatment verification results were within tolerance in all cases for the gamma 3%-3mm evaluation. Nevertheless, differences between the external audit and in-house measurements arose due to different equipment or methodology, especially for 2%-2mm criteria, with differences up to 20%. No correlation was found between complexity indices and verification results amongst centres. All plans fulfilled dosimetric constraints, but plan quality and complexity did not correlate and were strongly dependent on the planner and the vendor. In-house measurements cannot completely replace external audits for credentialing.
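
    The gamma comparison used for these pre-treatment verifications combines a dose-difference criterion and a distance-to-agreement criterion into a single pass/fail metric per point. A simplified, brute-force global 2D implementation is sketched below in Python (no sub-pixel interpolation); the dose planes are synthetic stand-ins, not audit data.

    ```python
    import numpy as np

    def gamma_pass_rate(dose_eval, dose_ref, spacing_mm, dd=0.03, dta_mm=3.0, cutoff=0.10):
        """Brute-force global 2D gamma: dose difference normalised to the maximum of the
        reference plane, distance-to-agreement in mm. Returns the pass rate over points
        above the low-dose cutoff. Simplified: no interpolation between pixels."""
        norm = dd * dose_ref.max()
        ny, nx = dose_ref.shape
        yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
        passed, evaluated = 0, 0
        for iy in range(ny):
            for ix in range(nx):
                if dose_ref[iy, ix] < cutoff * dose_ref.max():
                    continue
                r2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
                g2 = r2 / dta_mm ** 2 + (dose_eval - dose_ref[iy, ix]) ** 2 / norm ** 2
                evaluated += 1
                passed += g2.min() <= 1.0
        return passed / evaluated

    # Hypothetical 2D dose planes (e.g., measured vs TPS-calculated), 2 mm pixels.
    ref = np.outer(np.hanning(40), np.hanning(40)) * 200.0          # cGy
    meas = ref * (1.0 + 0.02 * np.random.default_rng(0).standard_normal(ref.shape))
    print(f"gamma 3%/3 mm pass rate: {100 * gamma_pass_rate(meas, ref, 2.0):.1f}%")
    ```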

  20. Stopping power and dose calculations with analytical and Monte Carlo methods for protons and prompt gamma range verification

    NASA Astrophysics Data System (ADS)

    Usta, Metin; Tufan, Mustafa Çağatay; Aydın, Güral; Bozkurt, Ahmet

    2018-07-01

    In this study, we performed calculations of stopping power, depth dose, and range verification for proton beams using the dielectric and Bethe-Bloch theories and the FLUKA, Geant4 and MCNPX Monte Carlo codes. As analytical approaches, the Drude model was applied within the dielectric theory and an effective charge approach with Roothaan-Hartree-Fock charge densities was used within the Bethe theory. In the simulations, different setup parameters were selected to evaluate the performance of the three distinct Monte Carlo codes. The lung and breast tissues investigated are associated with some of the most common types of cancer throughout the world. The results were compared with each other and with the available data in the literature. In addition, the obtained results were verified with prompt gamma range data. In both stopping power values and depth-dose distributions, it was found that the Monte Carlo values give better results compared with the analytical ones, while the results that agree best with ICRU data in terms of stopping power are those of the effective charge approach among the analytical methods and of the FLUKA code among the MC packages. In the depth-dose distributions of the examined tissues, although the Bragg curves for Monte Carlo almost overlap, the analytical ones show significant deviations that become more pronounced with increasing energy. Verifications against the prompt gamma results were attempted for 100-200 MeV protons, which are regarded as important for proton therapy. The analytical results are within 2%-5% and the Monte Carlo values are within 0%-2% as compared with those of the prompt gammas.
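
    The Bethe part of such a comparison is compact enough to reproduce directly. The Python sketch below evaluates the plain Bethe electronic stopping power for protons in water (no shell, Barkas or density-effect corrections), which is sufficient to reproduce ICRU-style values to within a few percent in the therapeutic energy range; constants are standard reference values, and water is an assumed target rather than the lung and breast tissues of the study.

    ```python
    import numpy as np

    # Reference constants; water target assumed.
    K   = 0.307075      # MeV cm^2 / mol
    m_e = 0.5109989     # MeV, electron rest energy
    M_p = 938.272       # MeV, proton rest energy
    Z_over_A, I_eV = 0.5551, 75.0   # water: <Z/A>, mean excitation energy (eV)

    def bethe_mass_stopping_power(T_MeV):
        """Electronic mass stopping power -dE/d(rho x) of water for protons, MeV cm^2/g.
        Plain Bethe formula: K * Z/A / beta^2 * (ln(2 m_e beta^2 gamma^2 / I) - beta^2)."""
        gamma = 1.0 + T_MeV / M_p
        beta2 = 1.0 - 1.0 / gamma ** 2
        I = I_eV * 1e-6                                  # MeV
        log_term = np.log(2.0 * m_e * beta2 * gamma ** 2 / I)
        return K * Z_over_A / beta2 * (log_term - beta2)

    for T in (10.0, 100.0, 200.0):
        print(f"{T:6.1f} MeV proton: {bethe_mass_stopping_power(T):7.2f} MeV cm^2/g")
    ```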

  1. Designing an autoverification system in Zagazig University Hospitals Laboratories: preliminary evaluation on thyroid function profile.

    PubMed

    Sediq, Amany Mohy-Eldin; Abdel-Azeez, Ahmad GabAllahm Hala

    2014-01-01

    The current practice in Zagazig University Hospitals Laboratories (ZUHL) is manual verification of all results for the later release of reports. These processes are time-consuming and tedious, with large inter-individual variation that slows the turnaround time (TAT). Autoverification is the process of comparing patient results, generated from interfaced instruments, against laboratory-defined acceptance parameters. This study describes an autoverification engine designed and implemented in ZUHL, Egypt. A descriptive study was conducted at ZUHL from January 2012 to December 2013. A rule-based system was used in designing an autoverification engine. The engine was preliminarily evaluated on a thyroid function panel. A total of 563 rules were written and tested on 563 simulated cases and 1673 archived cases. The engine decisions were compared to those of 4 independent expert reviewers. The impact of engine implementation on TAT was evaluated. Agreement was achieved among the 4 reviewers in 55.5% of cases, and with the engine in 51.5% of cases. The autoverification rate for archived cases was 63.8%. Reported lab TAT was reduced by 34.9%, and the TAT segment from the completion of analysis to verification was reduced by 61.8%. The developed rule-based autoverification system has a verification rate comparable to that of commercially available software. However, the in-house development of this system saved the hospital the cost of commercially available ones. The implementation of the system shortened the TAT and minimized the number of samples that needed staff revision, which enabled laboratory staff to devote more time and effort to handling problematic test results and to improving patient care quality.
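
    A rule-based engine of this kind is essentially a cascade of range, critical-value and delta checks applied before a result is released. The Python sketch below shows the general pattern; the analytes, limits and delta threshold are hypothetical placeholders, not the 563 ZUHL rules.

    ```python
    # Minimal sketch of a rule-based autoverification step for a thyroid panel.
    # Limits and thresholds below are illustrative placeholders only.
    RULES = {
        "TSH": {"reportable": (0.01, 100.0), "review_outside": (0.1, 20.0)},
        "FT4": {"reportable": (0.1, 8.0),    "review_outside": (0.4, 5.0)},
    }
    DELTA_LIMIT = 0.50   # hold if the result changed by more than 50% from the previous one

    def autoverify(analyte, value, previous=None):
        r = RULES[analyte]
        lo, hi = r["reportable"]
        if not (lo <= value <= hi):
            return "HOLD: outside reportable range, repeat/dilute"
        lo, hi = r["review_outside"]
        if not (lo <= value <= hi):
            return "HOLD: critical/abnormal value, manual review"
        if previous is not None and abs(value - previous) / previous > DELTA_LIMIT:
            return "HOLD: delta check failed, manual review"
        return "AUTOVERIFIED: release result"

    print(autoverify("TSH", 2.1, previous=1.9))   # released automatically
    print(autoverify("FT4", 6.5))                 # held for manual review
    ```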

  2. Computational solution verification and validation applied to a thermal model of a ruggedized instrumentation package

    DOE PAGES

    Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...

    2014-01-01

    This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameters sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution's sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.

  3. Development of Biomarkers for Screening Hepatocellular Carcinoma Using Global Data Mining and Multiple Reaction Monitoring

    PubMed Central

    Yu, Su Jong; Jang, Eun Sun; Yu, Jiyoung; Cho, Geunhee; Yoon, Jung-Hwan; Kim, Youngsoo

    2013-01-01

    Hepatocellular carcinoma (HCC) is one of the most common and aggressive cancers and is associated with a poor survival rate. Clinically, the level of alpha-fetoprotein (AFP) has been used as a biomarker for the diagnosis of HCC. The discovery of useful biomarkers for HCC, focused solely on the proteome, has been difficult; thus, wide-ranging global data mining of genomic and proteomic databases from previous reports would be valuable in screening biomarker candidates. Further, multiple reaction monitoring (MRM), based on triple quadrupole mass spectrometry, has been effective with regard to high-throughput verification, complementing antibody-based verification pipelines. In this study, global data mining was performed using 5 types of HCC data to screen for candidate biomarker proteins: cDNA microarray, copy number variation, somatic mutation, epigenetic, and quantitative proteomics data. Next, we applied MRM to verify HCC candidate biomarkers in individual serum samples from 3 groups: a healthy control group, patients who have been diagnosed with HCC (Before HCC treatment group), and HCC patients who underwent locoregional therapy (After HCC treatment group). After determining the relative quantities of the candidate proteins by MRM, we compared their expression levels between the 3 groups, identifying 4 potential biomarkers: the actin-binding protein anillin (ANLN), filamin-B (FLNB), complementary C4-A (C4A), and AFP. The combination of 2 markers (ANLN, FLNB) improved the discrimination of the before HCC treatment group from the healthy control group compared with AFP. We conclude that the combination of global data mining and MRM verification enhances the screening and verification of potential HCC biomarkers. This efficacious integrative strategy is applicable to the development of markers for cancer and other diseases. PMID:23717429
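
    The claim that a two-marker combination (ANLN + FLNB) discriminates better than a single marker is the kind of statement that can be checked directly once relative quantities are available: fit a simple classifier on the panel and compare areas under the ROC curve. The Python sketch below does this on simulated peak-area ratios (not the study data); scikit-learn is assumed to be available.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Simulated MRM peak-area ratios for illustration only (not the study data):
    # 100 healthy controls and 100 HCC cases, two candidate markers.
    rng = np.random.default_rng(1)
    ctrl = np.column_stack([rng.normal(1.0, 0.3, 100), rng.normal(1.0, 0.4, 100)])
    case = np.column_stack([rng.normal(1.6, 0.4, 100), rng.normal(1.7, 0.5, 100)])
    X = np.vstack([ctrl, case])
    y = np.r_[np.zeros(100), np.ones(100)]

    # Single-marker AUC vs a two-marker logistic combination (an ANLN + FLNB analogue).
    auc_single = roc_auc_score(y, X[:, 0])
    combo = LogisticRegression(max_iter=1000).fit(X, y)
    auc_combo = roc_auc_score(y, combo.predict_proba(X)[:, 1])
    print(f"AUC single marker: {auc_single:.2f}, AUC 2-marker panel: {auc_combo:.2f}")
    ```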

  4. Development of biomarkers for screening hepatocellular carcinoma using global data mining and multiple reaction monitoring.

    PubMed

    Kim, Hyunsoo; Kim, Kyunggon; Yu, Su Jong; Jang, Eun Sun; Yu, Jiyoung; Cho, Geunhee; Yoon, Jung-Hwan; Kim, Youngsoo

    2013-01-01

    Hepatocellular carcinoma (HCC) is one of the most common and aggressive cancers and is associated with a poor survival rate. Clinically, the level of alpha-fetoprotein (AFP) has been used as a biomarker for the diagnosis of HCC. The discovery of useful biomarkers for HCC, focused solely on the proteome, has been difficult; thus, wide-ranging global data mining of genomic and proteomic databases from previous reports would be valuable in screening biomarker candidates. Further, multiple reaction monitoring (MRM), based on triple quadrupole mass spectrometry, has been effective with regard to high-throughput verification, complementing antibody-based verification pipelines. In this study, global data mining was performed using 5 types of HCC data to screen for candidate biomarker proteins: cDNA microarray, copy number variation, somatic mutation, epigenetic, and quantitative proteomics data. Next, we applied MRM to verify HCC candidate biomarkers in individual serum samples from 3 groups: a healthy control group, patients who have been diagnosed with HCC (Before HCC treatment group), and HCC patients who underwent locoregional therapy (After HCC treatment group). After determining the relative quantities of the candidate proteins by MRM, we compared their expression levels between the 3 groups, identifying 4 potential biomarkers: the actin-binding protein anillin (ANLN), filamin-B (FLNB), complementary C4-A (C4A), and AFP. The combination of 2 markers (ANLN, FLNB) improved the discrimination of the before HCC treatment group from the healthy control group compared with AFP. We conclude that the combination of global data mining and MRM verification enhances the screening and verification of potential HCC biomarkers. This efficacious integrative strategy is applicable to the development of markers for cancer and other diseases.

  5. Guidance and Control Software Project Data - Volume 3: Verification Documents

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J. (Editor)

    2008-01-01

    The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the verification documents from the GCS project. Volume 3 contains four appendices: A. Software Verification Cases and Procedures for the Guidance and Control Software Project; B. Software Verification Results for the Pluto Implementation of the Guidance and Control Software; C. Review Records for the Pluto Implementation of the Guidance and Control Software; and D. Test Results Logs for the Pluto Implementation of the Guidance and Control Software.

  6. "Edge-on" MOSkin detector for stereotactic beam measurement and verification.

    PubMed

    Jong, Wei Loong; Ung, Ngie Min; Vannyat, Ath; Jamalludin, Zulaikha; Rosenfeld, Anatoly; Wong, Jeannie Hsiu Ding

    2017-01-01

    Dosimetry in small radiation fields is challenging and complicated because of dose volume averaging and beam perturbations in a detector. We evaluated the suitability of the "Edge-on" MOSkin (MOSFET) detector in small radiation field measurement. We also tested the feasibility for dosimetric verification in stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT). The "Edge-on" MOSkin detector was calibrated and the reproducibility and linearity were determined. Lateral dose profiles and output factors were measured using the "Edge-on" MOSkin detector, ionization chamber, SRS diode and EBT2 film. Dosimetric verification was carried out on two SRS and five SRT plans. In dose profile measurements, the "Edge-on" MOSkin measurements concurred with EBT2 film measurements. It showed full width at half maximum of the dose profile with an average difference of 0.11 mm and penumbral width with a difference of ±0.2 mm for all SRS cones as compared to EBT2 film measurement. For output factor measurements, a 1.1% difference was observed between the "Edge-on" MOSkin detector and EBT2 film for the 4 mm SRS cone. The "Edge-on" MOSkin detector provided reproducible measurements for dose verification in real-time. The measured doses concurred with the calculated dose for SRS (within 1%) and SRT (within 3%). A set of output correction factors for the "Edge-on" MOSkin detector for small radiation fields was derived from EBT2 film measurement and presented. This study showed that the "Edge-on" MOSkin detector is a suitable tool for dose verification in small radiation fields. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. SU-F-T-440: The Feasibility Research of Checking Cervical Cancer IMRT Pre- Treatment Dose Verification by Automated Treatment Planning Verification System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, X; Yin, Y; Lin, X

    Purpose: To assess the preliminary feasibility of an automated treatment planning verification system for cervical cancer IMRT pre-treatment dose verification. Methods: The study randomly selected clinical IMRT treatment planning data for twenty patients with cervical cancer; all IMRT plans were divided into 7 fields to meet the dosimetric goals using a commercial treatment planning system (Pinnacle version 9.2 and Eclipse version 13.5). The plans were exported to the Mobius 3D (M3D) server; percentage differences in the volume of each region of interest (ROI) and in the calculated dose to the target and organs at risk were evaluated in order to validate the accuracy of the automated treatment planning verification system. Results: The ROI volume differences between Pinnacle and M3D were smaller than those between Eclipse and M3D; the largest differences were 0.22±0.69% and 3.5±1.89% for Pinnacle and Eclipse, respectively. M3D showed slightly better agreement in the dose to the target and organs at risk compared with the TPS. After recalculating the plans with M3D, the dose differences for Pinnacle were smaller than those for Eclipse on average, with results within 3%. Conclusion: Using the automated treatment planning verification system to validate the accuracy of plans is convenient, but more clinical patient cases are still needed to determine the expected range of differences. At present, it should be used as a secondary check tool to improve safety in clinical treatment planning.

  8. Optical/digital identification/verification system based on digital watermarking technology

    NASA Astrophysics Data System (ADS)

    Herrigel, Alexander; Voloshynovskiy, Sviatoslav V.; Hrytskiv, Zenon D.

    2000-06-01

    This paper presents a new approach for the secure integrity verification of driver licenses, passports or other analogue identification documents. The system embeds (detects) the reference number of the identification document with the DCT watermark technology in (from) the owner photo of the identification document holder. During verification the reference number is extracted and compared with the reference number printed in the identification document. The approach combines optical and digital image processing techniques. The detection system must be able to scan an analogue driver license or passport, convert the image of this document into a digital representation and then apply the watermark verification algorithm to check the payload of the embedded watermark. If the payload of the watermark is identical with the printed visual reference number of the issuer, the verification was successful and the passport or driver license has not been modified. This approach constitutes a new class of application for the watermark technology, which was originally targeted for the copyright protection of digital multimedia data. The presented approach substantially increases the security of the analogue identification documents applied in many European countries.
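
    The verification flow described above (embed the document's reference number in the owner photo, then re-extract it from a scan and compare it with the printed number) can be illustrated with a toy quantization-based DCT watermark. The Python sketch below is a generic quantization-index-modulation scheme, not the DCT algorithm of the paper; the payload length, quantization step and chosen coefficient are arbitrary assumptions.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    STEP, COEFF = 24.0, (3, 2)   # quantization step and mid-frequency coefficient (illustrative)

    def embed(photo, bits):
        """Embed one bit per 8x8 block by quantizing one DCT coefficient."""
        out = photo.astype(float)
        blocks_per_row = photo.shape[1] // 8
        for k, bit in enumerate(bits):
            r, c = 8 * (k // blocks_per_row), 8 * (k % blocks_per_row)
            block = dctn(out[r:r+8, c:c+8], norm="ortho")
            q = np.round(block[COEFF] / STEP) * STEP
            block[COEFF] = q + (STEP / 4 if bit else -STEP / 4)
            out[r:r+8, c:c+8] = idctn(block, norm="ortho")
        return out

    def extract(photo, n_bits):
        blocks_per_row = photo.shape[1] // 8
        bits = []
        for k in range(n_bits):
            r, c = 8 * (k // blocks_per_row), 8 * (k % blocks_per_row)
            block = dctn(photo[r:r+8, c:c+8].astype(float), norm="ortho")
            bits.append(int(block[COEFF] % STEP < STEP / 2))   # residue ~STEP/4 -> bit 1
        return bits

    reference_number = 0b1011001110                          # printed document number (toy payload)
    payload = [(reference_number >> i) & 1 for i in range(10)]
    photo = np.random.default_rng(0).uniform(0, 255, (64, 64))   # stand-in for the owner photo
    scanned = embed(photo, payload) + np.random.default_rng(1).normal(0, 1.0, (64, 64))
    verified = extract(scanned, 10) == payload
    print("document verified" if verified else "payload mismatch - possible tampering")
    ```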

  9. Verification of EPA's " Preliminary remediation goals for radionuclides" (PRG) electronic calculator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stagich, B. H.

    The U.S. Environmental Protection Agency (EPA) requested an external, independent verification study of their “Preliminary Remediation Goals for Radionuclides” (PRG) electronic calculator. The calculator provides information on establishing PRGs for radionuclides at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites with radioactive contamination (Verification Study Charge, Background). These risk-based PRGs set concentration limits using carcinogenic toxicity values under specific exposure conditions (PRG User’s Guide, Section 1). The purpose of this verification study is to ascertain that the computer code has no inherent numerical problems in obtaining solutions, as well as to ensure that the equations are programmed correctly.

  10. Feedback Seeking in Early Adolescence: Self-Enhancement or Self-Verification?

    PubMed

    Rosen, Lisa H; Principe, Connor P; Langlois, Judith H

    2013-02-13

    The authors examined whether early adolescents ( N = 90) solicit self-enhancing feedback (i.e., positive feedback) or self-verifying feedback (i.e., feedback congruent with self-views, even when these views are negative). Sixth, seventh, and eighth graders first completed a self-perception measure and then selected whether to receive positive or negative feedback from an unknown peer in different domains of self. Results were consistent with self-verification theory; adolescents who perceived themselves as having both strengths and weaknesses were more likely to seek negative feedback regarding a self-perceived weakness compared to a self-perceived strength. The authors found similar support for self-verification processes when they considered the entire sample regardless of perceived strengths and weaknesses; hierarchical linear modeling (HLM) examined the predictive power of ratings of self-perceived ability, certainty, and importance on feedback seeking for all participants and provided additional evidence of self-verification strivings in adolescence.

  11. Feedback Seeking in Early Adolescence: Self-Enhancement or Self-Verification?

    PubMed Central

    Rosen, Lisa H.; Principe, Connor P.; Langlois, Judith H.

    2012-01-01

    The authors examined whether early adolescents (N = 90) solicit self-enhancing feedback (i.e., positive feedback) or self-verifying feedback (i.e., feedback congruent with self-views, even when these views are negative). Sixth, seventh, and eighth graders first completed a self-perception measure and then selected whether to receive positive or negative feedback from an unknown peer in different domains of self. Results were consistent with self-verification theory; adolescents who perceived themselves as having both strengths and weaknesses were more likely to seek negative feedback regarding a self-perceived weakness compared to a self-perceived strength. The authors found similar support for self-verification processes when they considered the entire sample regardless of perceived strengths and weaknesses; hierarchical linear modeling (HLM) examined the predictive power of ratings of self-perceived ability, certainty, and importance on feedback seeking for all participants and provided additional evidence of self-verification strivings in adolescence. PMID:23543746

  12. The Mediation of Mothers’ Self-Fulfilling Effects on Their Children’s Alcohol Use: Self-Verification, Informational Conformity and Modeling Processes

    PubMed Central

    Madon, Stephanie; Guyll, Max; Buller, Ashley A.; Scherr, Kyle C.; Willard, Jennifer; Spoth, Richard

    2010-01-01

    This research examined whether self-fulfilling prophecy effects are mediated by self-verification, informational conformity, and modeling processes. The authors examined these mediational processes across multiple time frames with longitudinal data obtained from two samples of mother – child dyads (N1 = 487; N2 = 287). Children’s alcohol use was the outcome variable. The results provided consistent support for the mediational process of self-verification. In both samples and across several years of adolescence, there was a significant indirect effect of mothers’ beliefs on children’s alcohol use through children’s self-assessed likelihood of drinking alcohol in the future. Comparatively less support was found for informational conformity and modeling processes as mediators of mothers’ self-fulfilling effects. The potential for self-fulfilling prophecies to produce long lasting changes in targets’ behavior via self-verification processes are discussed. PMID:18665708

  13. The mediation of mothers' self-fulfilling effects on their children's alcohol use: self-verification, informational conformity, and modeling processes.

    PubMed

    Madon, Stephanie; Guyll, Max; Buller, Ashley A; Scherr, Kyle C; Willard, Jennifer; Spoth, Richard

    2008-08-01

    This research examined whether self-fulfilling prophecy effects are mediated by self-verification, informational conformity, and modeling processes. The authors examined these mediational processes across multiple time frames with longitudinal data obtained from two samples of mother-child dyads (N1 = 486; N2 = 287), with children's alcohol use as the outcome variable. The results provided consistent support for the mediational process of self-verification. In both samples and across several years of adolescence, there was a significant indirect effect of mothers' beliefs on children's alcohol use through children's self-assessed likelihood of drinking alcohol in the future. Comparatively less support was found for informational conformity and modeling processes as mediators of mothers' self-fulfilling effects. The potential for self-fulfilling prophecies to produce long-lasting changes in targets' behavior via self-verification processes are discussed. (c) 2008 APA, all rights reserved

  14. Exploring system interconnection architectures with VIPACES: from direct connections to NOCs

    NASA Astrophysics Data System (ADS)

    Sánchez-Peña, Armando; Carballo, Pedro P.; Núñez, Antonio

    2007-05-01

    This paper presents a simple environment for the verification of AMBA 3 AXI systems in Verification IP (VIP) production, called VIPACES (Verification Interface Primitives for the development of AXI Compliant Elements and Systems). The primitives are provided as an uncompiled library written in SystemC in which interfaces are the core of the library. Defining interfaces instead of generic modules lets the user construct custom modules, reducing the resources spent during the verification phase and making it easy to adapt modules to the AMBA 3 AXI protocol. This is the main theme of the VIPACES library. The paper focuses on comparing and contrasting the main interconnection schemes for AMBA 3 AXI as modeled by VIPACES. To assess these results we propose a validation scenario with a particular architecture from the domain of MPEG-4 video decoding, composed of an AXI bus connecting an IDCT and other processing resources.

  15. GENERIC VERIFICATION PROTOCOL FOR DETERMINATION OF EMISSIONS REDUCTIONS OBTAINED BY USE OF ALTERNATIVE OR REFORMULATED LIQUID FUELS, FUEL ADDITIVES, FUEL EMULSIONS AND LUBRICANTS FOR HIGHWAY AND NONROAD USE DIESEL ENGINES AND LIGHT DUTY GASOLINE ENGINES AND VEHICLES

    EPA Science Inventory

    This report sets standards by which the emissions reductions provided by fuel and lubricant technologies can be tested and compared in a consistent way. It is a generic protocol under the Environmental Technology Verification program.

  16. Response to "Improving Patient Safety With Error Identification in Chemotherapy Orders by Verification Nurses".

    PubMed

    Zhu, Ling-Ling; Lv, Na; Zhou, Quan

    2016-12-01

    We read, with great interest, the study by Baldwin and Rodriguez (2016), which describes the role of the verification nurse and details the verification process in identifying errors related to chemotherapy orders. We strongly agree with their findings that a verification nurse, collaborating closely with the prescribing physician, pharmacist, and treating nurse, can better identify errors and maintain safety during chemotherapy administration.

  17. Partial verification bias and incorporation bias affected accuracy estimates of diagnostic studies for biomarkers that were part of an existing composite gold standard.

    PubMed

    Karch, Annika; Koch, Armin; Zapf, Antonia; Zerr, Inga; Karch, André

    2016-10-01

    To investigate how choice of gold standard biases estimates of sensitivity and specificity in studies reassessing the diagnostic accuracy of biomarkers that are already part of a lifetime composite gold standard (CGS). We performed a simulation study based on the real-life example of the biomarker "protein 14-3-3" used for diagnosing Creutzfeldt-Jakob disease. Three different types of gold standard were compared: perfect gold standard "autopsy" (available in a small fraction only; prone to partial verification bias), lifetime CGS (including the biomarker under investigation; prone to incorporation bias), and "best available" gold standard (autopsy if available, otherwise CGS). Sensitivity was unbiased when comparing 14-3-3 with autopsy but overestimated when using CGS or "best available" gold standard. Specificity of 14-3-3 was underestimated in scenarios comparing 14-3-3 with autopsy (up to 24%). In contrast, overestimation (up to 20%) was observed for specificity compared with CGS; this could be reduced to 0-10% when using the "best available" gold standard. Choice of gold standard affects considerably estimates of diagnostic accuracy. Using the "best available" gold standard (autopsy where available, otherwise CGS) leads to valid estimates of specificity, whereas sensitivity is estimated best when tested against autopsy alone. Copyright © 2016 Elsevier Inc. All rights reserved.
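
    Both effects described in this abstract are easy to reproduce in a small simulation: a composite gold standard that contains the biomarker inflates its apparent accuracy, and verifying mainly test-positives distorts it further. The Python sketch below uses made-up prevalence, accuracy and verification probabilities; it deliberately uses a crude "clinical OR biomarker" composite, which is an extreme simplification of a real CGS.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n, prevalence = 100_000, 0.3
    true_sens, true_spec = 0.85, 0.80          # assumed true accuracy of the biomarker

    disease = rng.random(n) < prevalence
    test = np.where(disease, rng.random(n) < true_sens, rng.random(n) < (1 - true_spec))

    # Composite gold standard that incorporates the biomarker: "diseased" if imperfect
    # clinical criteria OR the biomarker itself is positive (incorporation bias).
    clinical = np.where(disease, rng.random(n) < 0.7, rng.random(n) < 0.1)
    cgs = clinical | test

    # Autopsy (perfect) available only for a small, test-dependent subset (partial verification).
    verified = rng.random(n) < np.where(test, 0.5, 0.05)

    def acc(test, ref, mask=None):
        mask = np.ones_like(ref, bool) if mask is None else mask
        t, r = test[mask], ref[mask]
        return (t & r).sum() / r.sum(), (~t & ~r).sum() / (~r).sum()

    print("vs true disease status               : sens=%.2f spec=%.2f" % acc(test, disease))
    print("vs composite (incorporation bias)    : sens=%.2f spec=%.2f" % acc(test, cgs))
    print("vs autopsy subset (verification bias): sens=%.2f spec=%.2f" % acc(test, disease, verified))
    ```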

  18. Being known, intimate, and valued: global self-verification and dyadic adjustment in couples and roommates.

    PubMed

    Katz, Jennifer; Joiner, Thomas E

    2002-02-01

    We contend that close relationships provide adults with optimal opportunities for personal growth when relationship partners provide accurate, honest feedback. Accordingly, it was predicted that young adults would experience greater relationship quality with partners who evaluated them in a manner consistent with their own self-evaluations. Three empirical tests of this self-verification hypothesis as applied to close dyads were conducted. In Study 1, young adults in dating relationships were most intimate with and somewhat more committed to partners when they perceived that partners evaluated them as they evaluated themselves. Self-verification effects were pronounced for those involved in more serious dating relationships. In Study 2, men reported the greatest esteem for same-sex roommates who evaluated them in a self-verifying manner. Results from Study 2 were replicated and extended to both male and female roommate dyads in Study 3. Further, self-verification effects were most pronounced for young adults with high emotional empathy. Results suggest that self-verification theory is useful for understanding dyadic adjustment across a variety of relational contexts in young adulthood. Implications of self-verification processes for adult personal development are outlined within an identity negotiation framework.

  19. Compromises produced by the dialectic between self-verification and self-enhancement.

    PubMed

    Morling, B; Epstein, S

    1997-12-01

    Three studies of people's reactions to evaluative feedback demonstrated that the dialectic between self-enhancement and self-verification results in compromises between these 2 motives, as hypothesized in cognitive-experiential self-theory. The demonstration was facilitated by 2 procedural improvements: enhancement and verification were established by calibrating evaluative feedback against self-appraisals, and degree of enhancement and of verification were varied along a continuum, rather than categorically. There was also support for the hypotheses that processing in an intuitive-experiential mode favors enhancement and processing in an analytical-rational mode favors verification in the kinds of situations investigated.

  20. Patient-specific IMRT verification using independent fluence-based dose calculation software: experimental benchmarking and initial clinical experience.

    PubMed

    Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Jörgen; Nyholm, Tufve; Ahnesjö, Anders; Karlsson, Mikael

    2007-08-21

    Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm³ ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 ± 1.2% and 0.5 ± 1.1% (1 S.D.). The dose deviations between MUV and TPS slightly depended on the distance from the isocentre position. For individual intensity-modulated beams (total 367), an average deviation of 1.1 ± 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental approach. The physical effects modelled in the dose calculation software MUV allow accurate dose calculations in individual verification points. Independent calculations may be used to replace experimental dose verification once the IMRT programme is mature.

  1. Two years experience with quality assurance protocol for patient related Rapid Arc treatment plan verification using a two dimensional ionization chamber array

    PubMed Central

    2011-01-01

    Purpose To verify the dose distribution and number of monitor units (MU) for dynamic treatment techniques like volumetric modulated single arc radiation therapy - Rapid Arc - each patient treatment plan has to be verified prior to the first treatment. The purpose of this study was to develop a patient related treatment plan verification protocol using a two dimensional ionization chamber array (MatriXX, IBA, Schwarzenbruck, Germany). Method Measurements were done to determine the dependence between the response of the 2D ionization chamber array, beam direction, and field size. The reproducibility of the measurements was also checked. For the patient related verifications the original patient Rapid Arc treatment plan was projected onto a CT dataset of the MatriXX and the dose distribution was calculated. After irradiation of the Rapid Arc verification plans, measured and calculated 2D dose distributions were compared using the gamma evaluation method implemented in the measuring software OmniPro (version 1.5, IBA, Schwarzenbruck, Germany). Results The dependence between the response of the 2D ionization chamber array, field size and beam direction showed a passing rate of 99% for field sizes between 7 cm × 7 cm and 24 cm × 24 cm for measurements of a single arc. For field sizes smaller than 7 cm × 7 cm or larger than 24 cm × 24 cm the passing rate was less than 99%. The reproducibility was within a passing rate of 99% and 100%. The accuracy of the whole process, including the uncertainty of the measuring system, treatment planning system, linear accelerator and isocentric laser system in the treatment room, was acceptable for treatment plan verification using gamma criteria of 3% and 3 mm (2D global gamma index). Conclusion It was possible to verify the 2D dose distribution and MU of Rapid Arc treatment plans using the MatriXX, and the use of the MatriXX for Rapid Arc treatment plan verification in clinical routine is reasonable. If the passing rate is at least 99%, the verification protocol is able to detect clinically significant errors. PMID:21342509

  2. Verification of sub-grid filtered drag models for gas-particle fluidized beds with immersed cylinder arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran

    2014-04-23

    The accuracy of coarse-grid multiphase CFD simulations of fluidized beds may be improved via the inclusion of filtered constitutive models. In our previous study (Sarkar et al., Chem. Eng. Sci., 104, 399-412), we developed such a set of filtered drag relationships for beds with immersed arrays of cooling tubes. Verification of these filtered drag models is addressed in this work. Predictions from coarse-grid simulations with the sub-grid filtered corrections are compared against accurate, highly-resolved simulations of full-scale turbulent and bubbling fluidized beds. The filtered drag models offer a computationally efficient yet accurate alternative for obtaining macroscopic predictions, but the spatial resolution of meso-scale clustering heterogeneities is sacrificed.

  3. Verification and Validation Studies for the LAVA CFD Solver

    NASA Technical Reports Server (NTRS)

    Moini-Yekta, Shayan; Barad, Michael F; Sozer, Emre; Brehm, Christoph; Housman, Jeffrey A.; Kiris, Cetin C.

    2013-01-01

    The verification and validation of the Launch Ascent and Vehicle Aerodynamics (LAVA) computational fluid dynamics (CFD) solver is presented. A modern strategy for verification and validation is described incorporating verification tests, validation benchmarks, continuous integration and version control methods for automated testing in a collaborative development environment. The purpose of the approach is to integrate the verification and validation process into the development of the solver and improve productivity. This paper uses the Method of Manufactured Solutions (MMS) for the verification of 2D Euler equations, 3D Navier-Stokes equations as well as turbulence models. A method for systematic refinement of unstructured grids is also presented. Verification using inviscid vortex propagation and flow over a flat plate is highlighted. Simulation results using laminar and turbulent flow past a NACA 0012 airfoil and ONERA M6 wing are validated against experimental and numerical data.
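
    The Method of Manufactured Solutions mentioned above works by choosing an analytic solution, deriving the forcing term it implies, and confirming that the discretization recovers the expected order of accuracy. The self-contained Python sketch below applies the idea to a 1D Poisson problem with a second-order finite-difference scheme (using sympy for the symbolic source term); it is an illustration of the technique, not part of the LAVA test suite.

    ```python
    import numpy as np
    import sympy as sp

    # Manufacture a solution for -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.
    x = sp.symbols("x")
    u_exact = sp.sin(sp.pi * x) * sp.exp(x)          # chosen smooth, nontrivial solution
    f_expr = sp.simplify(-sp.diff(u_exact, x, 2))    # source term implied by that choice
    u_fun = sp.lambdify(x, u_exact, "numpy")
    f_fun = sp.lambdify(x, f_expr, "numpy")

    def solve(n):
        """Second-order central-difference solve of -u'' = f with homogeneous Dirichlet BCs."""
        h = 1.0 / (n + 1)
        xs = np.linspace(h, 1.0 - h, n)
        A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        u = np.linalg.solve(A, f_fun(xs))
        return h, np.max(np.abs(u - u_fun(xs)))      # grid spacing, inf-norm error

    h1, e1 = solve(40)
    h2, e2 = solve(80)
    print(f"observed order of accuracy: {np.log(e1 / e2) / np.log(h1 / h2):.2f} (expect ~2)")
    ```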

  4. You Can't See the Real Me: Attachment Avoidance, Self-Verification, and Self-Concept Clarity.

    PubMed

    Emery, Lydia F; Gardner, Wendi L; Carswell, Kathleen L; Finkel, Eli J

    2018-03-01

    Attachment shapes people's experiences in their close relationships and their self-views. Although attachment avoidance and anxiety both undermine relationships, past research has primarily emphasized detrimental effects of anxiety on the self-concept. However, as partners can help people maintain stable self-views, avoidant individuals' negative views of others might place them at risk for self-concept confusion. We hypothesized that avoidance would predict lower self-concept clarity and that less self-verification from partners would mediate this association. Attachment avoidance was associated with lower self-concept clarity (Studies 1-5), an effect that was mediated by low self-verification (Studies 2-3). The association between avoidance and self-verification was mediated by less self-disclosure and less trust in partner feedback (Study 4). Longitudinally, avoidance predicted changes in self-verification, which in turn predicted changes in self-concept clarity (Study 5). Thus, avoidant individuals' reluctance to trust or become too close to others may result in hidden costs to the self-concept.

  5. Independent Validation and Verification of automated information systems in the Department of Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunteman, W.J.; Caldwell, R.

    1994-07-01

    The Department of Energy (DOE) has established an Independent Validation and Verification (IV&V) program for all classified automated information systems (AIS) operating in compartmented or multi-level modes. The IV&V program was established in DOE Order 5639.6A and described in the manual associated with the Order. This paper describes the DOE IV&V program, the IV&V process and activities, the expected benefits from an IV&V, and the criteria and methodologies used during an IV&V. The first IV&V under this program was conducted on the Integrated Computing Network (ICN) at Los Alamos National Laboratory and several lessons learned are presented. The DOE IV&V program is based on the following definitions. An IV&V is defined as the use of expertise from outside an AIS organization to conduct validation and verification studies on a classified AIS. Validation is defined as the process of applying the specialized security test and evaluation procedures, tools, and equipment needed to establish acceptance for joint usage of an AIS by one or more departments or agencies and their contractors. Verification is the process of comparing two levels of an AIS specification for proper correspondence (e.g., security policy model with top-level specifications, top-level specifications with source code, or source code with object code).

  6. A back-projection algorithm in the presence of an extra attenuating medium: towards EPID dosimetry for the MR-Linac

    NASA Astrophysics Data System (ADS)

    Torres-Xirau, I.; Olaciregui-Ruiz, I.; Rozendaal, R. A.; González, P.; Mijnheer, B. J.; Sonke, J.-J.; van der Heide, U. A.; Mans, A.

    2017-08-01

    In external beam radiotherapy, electronic portal imaging devices (EPIDs) are frequently used for pre-treatment and for in vivo dose verification. Currently, various MR-guided radiotherapy systems are being developed and clinically implemented. Independent dosimetric verification is highly desirable. For this purpose we adapted our EPID-based dose verification system for use with the MR-Linac combination developed by Elekta in cooperation with UMC Utrecht and Philips. In this study we extended our back-projection method to cope with the presence of an extra attenuating medium between the patient and the EPID. Experiments were performed at a conventional linac, using an aluminum mock-up of the MRI scanner housing between the phantom and the EPID. For a 10 cm square field, the attenuation by the mock-up was 72%, while 16% of the remaining EPID signal resulted from scattered radiation. 58 IMRT fields were delivered to a 20 cm slab phantom with and without the mock-up. EPID reconstructed dose distributions were compared to planned dose distributions using the γ-evaluation method (global, 3%, 3 mm). With our adapted back-projection algorithm the average γ_mean was 0.27 ± 0.06, while with the conventional algorithm it was 0.28 ± 0.06. Dose profiles of several square fields reconstructed with our adapted algorithm showed excellent agreement when compared to the TPS.
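
    The essential adaptation is that the EPID signal measured behind the extra medium must be stripped of the scatter the medium adds and corrected for its attenuation before back-projection. A minimal Python sketch of that correction step follows; the mapping of the quoted 72% and 16% onto a transmission factor and a scatter fraction is an assumption for illustration, and in practice both parameters would be characterised per field size.

    ```python
    import numpy as np

    def primary_behind_mockup(epid_signal, transmission, scatter_fraction):
        """Recover the primary EPID signal that would have been measured without the
        extra attenuating medium: remove the scatter contribution added by the medium,
        then undo its attenuation. Parameter values are illustrative, not measured data."""
        primary_with_mockup = epid_signal * (1.0 - scatter_fraction)
        return primary_with_mockup / transmission

    measured = np.array([0.31, 0.30, 0.29])                      # arbitrary EPID units
    corrected = primary_behind_mockup(measured, transmission=0.28, scatter_fraction=0.16)
    print(corrected)
    ```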

  7. Shuttle payload interface verification equipment study. Volume 2: Technical document, part 1

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The technical analysis performed during the shuttle payload interface verification equipment study is reported. It describes: (1) the background and intent of the study; (2) the study approach and philosophy covering all facets of shuttle payload/cargo integration; (3) shuttle payload integration requirements; (4) the preliminary design of the horizontal IVE; (5) the vertical IVE concept; and (6) IVE program development plans, schedule and cost. Also included is a payload integration analysis task to identify potential uses in addition to payload interface verification.

  8. A Roadmap for the Implementation of Continued Process Verification.

    PubMed

    Boyer, Marcus; Gampfer, Joerg; Zamamiri, Abdel; Payne, Robin

    2016-01-01

    In 2014, the members of the BioPhorum Operations Group (BPOG) produced a 100-page continued process verification case study, entitled "Continued Process Verification: An Industry Position Paper with Example Protocol". This case study captures the thought processes involved in creating a continued process verification plan for a new product in response to the U.S. Food and Drug Administration's guidance on the subject introduced in 2011. In so doing, it provided the specific example of a plan developed for a new monoclonal antibody product based on the "A MAb Case Study" that preceded it in 2009. This document provides a roadmap that draws on the content of the continued process verification case study to provide a step-by-step guide in a more accessible form, with reference to a process map of the product life cycle. It could be used as a basis for continued process verification implementation in a number of different scenarios: for a single product and process; for a single site; to assist in the sharing of data monitoring responsibilities among sites; or to assist in establishing data monitoring agreements between a customer company and a contract manufacturing organization. The U.S. Food and Drug Administration issued guidance on the management of manufacturing processes designed to improve quality and control of drug products. This involved increased focus on regular monitoring of manufacturing processes, reporting of the results, and the taking of opportunities to improve. The guidance and practice associated with it is known as continued process verification. This paper summarizes good practice in responding to continued process verification guidance, gathered from subject matter experts in the biopharmaceutical industry. © PDA, Inc. 2016.
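
    In practice, the regular monitoring at the heart of continued process verification usually takes the form of control-chart trending of critical quality attributes batch by batch. The Python sketch below shows one common variant (an individuals chart with limits derived from the average moving range); the attribute, baseline values and batches are hypothetical, and the chart type is an assumption rather than a BPOG requirement.

    ```python
    import numpy as np

    def shewhart_limits(baseline):
        """Individuals control-chart limits: mean ± 3 * sigma, with sigma estimated
        from the average moving range (d2 = 1.128 for subgroups of size 2)."""
        mr = np.abs(np.diff(baseline))
        sigma = mr.mean() / 1.128
        centre = baseline.mean()
        return centre - 3 * sigma, centre, centre + 3 * sigma

    # Hypothetical batch-release results (e.g., % main peak) for a monitored attribute.
    baseline = np.array([98.1, 97.9, 98.3, 98.0, 98.2, 97.8, 98.1, 98.0])
    new_batches = np.array([98.2, 97.7, 96.9])
    lcl, centre, ucl = shewhart_limits(baseline)
    for i, value in enumerate(new_batches, 1):
        flag = "in control" if lcl <= value <= ucl else "investigate"
        print(f"batch {i}: {value:.1f}  limits [{lcl:.2f}, {ucl:.2f}]  -> {flag}")
    ```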

  9. SU-E-T-48: A Multi-Institutional Study of Independent Dose Verification for Conventional, SRS and SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, R; Kamima, T; Tachibana, H

    2015-06-15

    Purpose: To show the results of a multi-institutional study of independent dose verification for conventional, stereotactic radiosurgery and stereotactic body radiotherapy (SRS and SBRT) plans based on the action level of AAPM TG-114. Methods: This study was performed at 12 institutions in Japan. To eliminate the bias of the independent dose verification program (Indp), all of the institutions used the same CT-based independent dose verification software (Simple MU Analysis, Triangle Products, JP) with the Clarkson-based algorithm. Eclipse (AAA, PBC), Pinnacle³ (Adaptive Convolve) and Xio (Superposition) were used as treatment planning systems (TPS). The confidence limits (CL, Mean±2SD) for 18 sites (head, breast, lung, pelvis, etc.) were evaluated by comparing the dose between the TPS and the Indp. Results: A retrospective analysis of 6352 treatment fields was conducted. The CLs for conventional, SRS and SBRT plans were 1.0±3.7%, 2.0±2.5% and 6.2±4.4%, respectively. In conventional plans, most of the sites were within the 5% action level of TG-114. However, there were systematic differences (4.0±4.0% and 2.5±5.8% for breast and lung, respectively). In SRS plans, our results showed good agreement compared to the action level. In SBRT plans, the discrepancy from the Indp varied depending on the dose calculation algorithm of the TPS. Conclusion: The dose calculation algorithms of the TPS and the Indp affect the action level. It is effective to set site-specific tolerances, especially for sites where inhomogeneity correction can strongly affect the dose distribution.
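
    The confidence limit used above is simply the mean and twice the standard deviation of the per-field percentage differences, compared against an action level. A short Python sketch (the per-field deviations are hypothetical, not the study data):

    ```python
    import numpy as np

    def confidence_limit(deviations_pct):
        """Confidence limit in the TG-114 style used above: mean and 2 * SD of the
        percentage differences between the TPS and the independent calculation."""
        dev = np.asarray(deviations_pct)
        return dev.mean(), 2.0 * dev.std(ddof=1)

    # Hypothetical per-field % differences (TPS vs independent check) for one site.
    dev_lung_sbrt = np.array([5.1, 7.8, 2.9, 6.4, 8.0, 4.3, 7.1, 5.9])
    mean, two_sd = confidence_limit(dev_lung_sbrt)
    print(f"CL = {mean:.1f} ± {two_sd:.1f} %  (compare against the site-specific action level)")
    ```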

  10. Portal verification using the KODAK ACR 2000 RT storage phosphor plate system and EC films. A semiquantitative comparison.

    PubMed

    Geyer, Peter; Blank, Hilbert; Alheit, Horst

    2006-03-01

    The suitability of the storage phosphor plate system ACR 2000 RT (Eastman Kodak Corp., Rochester, MN, USA), which is intended for portal verification as well as portal simulation imaging in radiotherapy, was assessed by comparison with a highly sensitive verification film. The comparison included portal verification images of different regions (head and neck, thorax, abdomen, and pelvis) irradiated with 6- and 15-MV photons and electrons. Each portal verification image was acquired with both the storage screen and the EC film, using the EC-L cassettes (both: Eastman Kodak Corp., Rochester, MN, USA) for the two systems. The soft-tissue and bony contrast and the brightness were evaluated and compared in a ranking of the two images. Different phantoms were irradiated to investigate the high- and low-contrast resolution. To account for quality assurance applications, the short-time exposure of the unpacked and irradiated storage screen to green and red room lasers was also investigated. In general, the quality of the processed ACR images was slightly higher than that of the films, mostly due to cases of insufficient exposure of the film. The storage screen was able to verify electron portals even for low electron energies with only minor photon contamination. The laser lines were sharply and clearly visible on the ACR images. The ACR system may replace film without any noticeable decrease in image quality, thereby reducing processing time, saving the cost of films and avoiding incorrect exposures.

  11. PET/CT imaging for treatment verification after proton therapy: A study with plastic phantoms and metallic implants

    PubMed Central

    Parodi, Katia; Paganetti, Harald; Cascio, Ethan; Flanz, Jacob B.; Bonab, Ali A.; Alpert, Nathaniel M.; Lohmann, Kevin; Bortfeld, Thomas

    2008-01-01

    The feasibility of off-line positron emission tomography/computed tomography (PET/CT) for routine three-dimensional in-vivo treatment verification of proton radiation therapy is currently under investigation at Massachusetts General Hospital in Boston. In preparation for clinical trials, phantom experiments were carried out to investigate the sensitivity and accuracy of the method depending on irradiation and imaging parameters. Furthermore, they addressed the feasibility of PET/CT as a robust verification tool in the presence of metallic implants. These produce x-ray CT artifacts and fluence perturbations which may compromise the accuracy of treatment planning algorithms. Spread-out Bragg peak proton fields were delivered to different phantoms consisting of polymethylmethacrylate (PMMA), PMMA stacked with lung and bone equivalent materials, and PMMA with titanium rods to mimic implants in patients. PET data were acquired in list mode starting within 20 min after irradiation on a commercial lutetium oxyorthosilicate (LSO)-based PET/CT scanner. The amount and spatial distribution of the measured activity could be well reproduced by calculations based on the GEANT4 and FLUKA Monte Carlo codes. This phantom study supports the potential of millimeter accuracy for range monitoring and lateral field position verification even after low therapeutic dose exposures of 2 Gy, despite the delay between irradiation and imaging. It also indicates the value of PET for treatment verification in the presence of metallic implants, demonstrating a higher sensitivity to fluence perturbations in comparison to a commercial analytical treatment planning system. Finally, it addresses the suitability of LSO-based PET detectors for hadron therapy monitoring. This unconventional application of PET involves count rates which are orders of magnitude lower than in diagnostic tracer imaging, i.e., the signal of interest is comparable to the noise originating from the intrinsic radioactivity of the detector itself. In addition to PET alone, PET/CT imaging provides accurate information on the position of the imaged object and may assess possible anatomical changes during fractionated radiotherapy in clinical applications. PMID:17388158

  12. The influence of verification jig on framework fit for nonsegmented fixed implant-supported complete denture.

    PubMed

    Ercoli, Carlo; Geminiani, Alessandro; Feng, Changyong; Lee, Heeje

    2012-05-01

    The purpose of this retrospective study was to assess if there was a difference in the likelihood of achieving passive fit when an implant-supported full-arch prosthesis framework is fabricated with or without the aid of a verification jig. This investigation was approved by the University of Rochester Research Subject Review Board (protocol #RSRB00038482). Thirty edentulous patients, 49 to 73 years old (mean 61 years old), rehabilitated with a nonsegmented fixed implant-supported complete denture were included in the study. During the restorative process, final impressions were made using the pickup impression technique and elastomeric impression materials. For 16 patients, a verification jig was made (group J), while for the remaining 14 patients, a verification jig was not used (group NJ) and the framework was fabricated directly on the master cast. During the framework try-in appointment, the fit was assessed by clinical (Sheffield test) and radiographic inspection and recorded as passive or nonpassive. When a verification jig was used (group J, n = 16), all frameworks exhibited clinically passive fit, while when a verification jig was not used (group NJ, n = 14), only two frameworks fit. This difference was statistically significant (p < .001). Within the limitations of this retrospective study, the fabrication of a verification jig ensured clinically passive fit of metal frameworks in nonsegmented fixed implant-supported complete denture. © 2011 Wiley Periodicals, Inc.

  13. Identity Verification, Control, and Aggression in Marriage

    ERIC Educational Resources Information Center

    Stets, Jan E.; Burke, Peter J.

    2005-01-01

    In this research we study the identity verification process and its effects in marriage. Drawing on identity control theory, we hypothesize that a lack of verification in the spouse identity (1) threatens stable self-meanings and interaction patterns between spouses, and (2) challenges a (nonverified) spouse's perception of control over the…

  14. Students' Verification Strategies for Combinatorial Problems

    ERIC Educational Resources Information Center

    Mashiach Eizenberg, Michal; Zaslavsky, Orit

    2004-01-01

    We focus on a major difficulty in solving combinatorial problems, namely, on the verification of a solution. Our study aimed at identifying undergraduate students' tendencies to verify their solutions, and the verification strategies that they employ when solving these problems. In addition, an attempt was made to evaluate the level of efficiency…

  15. Improved Detection Technique for Solvent Rinse Cleanliness Verification

    NASA Technical Reports Server (NTRS)

    Hornung, S. D.; Beeson, H. D.

    2001-01-01

    The NASA White Sands Test Facility (WSTF) has an ongoing effort to reduce or eliminate usage of cleaning solvents such as CFC-113 and its replacements. These solvents are used in the final clean and cleanliness verification processes for flight and ground support hardware, especially for oxygen systems where organic contaminants can pose an ignition hazard. For the final cleanliness verification in the standard process, the equivalent of one square foot of surface area of parts is rinsed with the solvent, and the final 100 mL of the rinse is captured. The amount of nonvolatile residue (NVR) in the solvent is determined by weight after the evaporation of the solvent. An improved process of sampling this rinse, developed at WSTF, requires evaporation of less than 2 mL of the solvent to make the cleanliness verification. Small amounts of the solvent are evaporated in a clean stainless steel cup, and the cleanliness of the stainless steel cup is measured using a commercially available surface quality monitor. The effectiveness of this new cleanliness verification technique was compared to the accepted NVR sampling procedures. Testing with known contaminants in solution, such as hydraulic fluid, fluorinated lubricants, and cutting and lubricating oils, was performed to establish a correlation between amount in solution and the process response. This report presents the approach and results and discusses the issues in establishing the surface quality monitor-based cleanliness verification.
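
    The correlation established between the amount of contaminant in solution and the surface quality monitor response functions as a calibration curve. As a rough illustration only (the calibration points are invented, not WSTF data), a linear calibration could be fitted and then inverted for an unknown rinse sample as follows.

      # Hypothetical sketch of a linear calibration between known contaminant amounts
      # and the surface quality monitor response; all values are placeholders.
      import numpy as np

      known_mg = np.array([0.0, 0.5, 1.0, 2.0, 4.0])               # contaminant in evaporated aliquot
      monitor_response = np.array([2.0, 10.5, 19.0, 37.0, 74.0])   # placeholder monitor readings

      slope, intercept = np.polyfit(known_mg, monitor_response, deg=1)

      unknown_response = 28.0                                       # reading for an unknown sample
      estimated_mg = (unknown_response - intercept) / slope
      print(f"estimated nonvolatile residue: {estimated_mg:.2f} mg")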

  16. Application of virtual distances methodology to laser tracker verification with an indexed metrology platform

    NASA Astrophysics Data System (ADS)

    Acero, R.; Santolaria, J.; Pueo, M.; Aguilar, J. J.; Brau, A.

    2015-11-01

    High-range measuring equipment such as laser trackers needs large-dimension calibrated reference artifacts for its calibration and verification procedures. In this paper, a new verification procedure for portable coordinate measuring instruments, based on the generation and evaluation of virtual distances with an indexed metrology platform, is developed. This methodology enables the definition of an unlimited number of reference distances without materializing them in a physical gauge. The generation of the virtual points and the reference lengths derived from them is linked to the concept of the indexed metrology platform and to knowledge of the relative position and orientation of its upper and lower platforms with high accuracy. The measuring instrument and the indexed metrology platform remain still, while the virtual mesh rotates around them. As a first step, the virtual distances technique is applied to a laser tracker in this work. The experimental verification procedure of the laser tracker with virtual distances is simulated and then compared with the conventional verification procedure of the laser tracker with the indexed metrology platform. The results obtained in terms of volumetric performance of the laser tracker prove the suitability of the virtual distances methodology for calibration and verification procedures for portable coordinate measuring instruments, broadening the possibilities for defining reference distances in these procedures.
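
    As a rough illustration of the virtual-points idea (not the authors' algorithm), the sketch below expresses one measured point in a global frame through known indexed rotations of the platform and derives virtual reference distances between the resulting points; the rotation angles and coordinates are invented, and the real procedure also uses the calibrated relative pose of the upper and lower platforms.

      # Hypothetical sketch: a fixed point measured by the instrument is mapped through
      # the known indexed rotations of the platform, producing virtual points whose
      # mutual distances can serve as reference lengths. All values are invented.
      import numpy as np

      def rot_z(angle_deg):
          a = np.radians(angle_deg)
          return np.array([[np.cos(a), -np.sin(a), 0.0],
                           [np.sin(a),  np.cos(a), 0.0],
                           [0.0,        0.0,       1.0]])

      measured_point = np.array([1200.0, 300.0, 150.0])    # mm, in the instrument frame
      platform_positions = [0, 60, 120, 180, 240, 300]     # indexed rotations in degrees

      virtual_points = [rot_z(theta) @ measured_point for theta in platform_positions]

      # Virtual reference distances between every pair of generated points
      for i in range(len(virtual_points)):
          for j in range(i + 1, len(virtual_points)):
              d = np.linalg.norm(virtual_points[i] - virtual_points[j])
              print(f"virtual distance {i}-{j}: {d:.3f} mm")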

  17. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system.

    PubMed

    Saotome, Naoya; Furukawa, Takuji; Hara, Yousuke; Mizushima, Kota; Tansho, Ryohei; Saraya, Yuichi; Shirai, Toshiyuki; Noda, Koji

    2016-04-01

    Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors' facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for the commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. A cylindrical plastic scintillator block and a CCD camera were installed on the black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system was connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing; the reference range for each beam energy was taken as the 80% distal dose of the depth-dose distribution measured with a large parallel-plate ionization chamber. For the image processing, the authors compared a threshold method and a difference of Gaussian (DOG) method. The authors found that the edge detection method (i.e., the DOG method) is best for the range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of repeated measurements at the same energy is within 0.1 mm without setup error. The results of this study demonstrate that the authors' range check system is capable of quick and easy range verification with sufficient accuracy.
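
    The DOG edge detection mentioned above can be illustrated with a minimal sketch on a synthetic one-dimensional depth profile of scintillator light output; the profile, filter widths and pixel size are assumptions, not the authors' implementation. The zero crossing of the difference of two Gaussian-blurred copies of the profile is taken as the edge position.

      # Hypothetical sketch of difference-of-Gaussian (DOG) edge detection applied to a
      # one-dimensional depth/light-output profile; all values are invented.
      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      depth = np.linspace(0.0, 150.0, 751)                      # mm, ~0.2 mm per pixel
      profile = 1.0 / (1.0 + np.exp((depth - 100.0) / 1.5))     # synthetic distal falloff

      # Difference of Gaussians: band-pass response with a zero crossing at the edge
      dog = gaussian_filter1d(profile, sigma=5) - gaussian_filter1d(profile, sigma=15)
      lo, hi = sorted((int(np.argmax(dog)), int(np.argmin(dog))))
      edge = lo + int(np.argmin(np.abs(dog[lo:hi + 1])))        # zero crossing between the lobes
      print(f"detected distal edge at {depth[edge]:.1f} mm")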

  18. Characterization and clinical evaluation of a novel 2D detector array for conventional and flattening filter free (FFF) IMRT pre-treatment verification.

    PubMed

    Sekar, Yuvaraj; Thoelking, Johannes; Eckl, Miriam; Kalichava, Irakli; Sihono, Dwi Seno Kuncoro; Lohr, Frank; Wenz, Frederik; Wertz, Hansjoerg

    2018-04-01

    The novel MatriXX FFF (IBA Dosimetry, Germany) detector is a new 2D ionization chamber detector array designed for patient-specific IMRT-plan verification including flattening-filter-free (FFF) beams. This study provides a detailed analysis of the characterization and clinical evaluation of the new detector array. The verification of the MatriXX FFF was subdivided into (i) physical dosimetric tests, including dose linearity, dose rate dependency and output factor measurements, and (ii) patient-specific IMRT pre-treatment plan verifications. The MatriXX FFF measurements were compared to the calculated dose distribution of a commissioned treatment planning system by gamma index and dose difference evaluations for 18 IMRT sequences. All IMRT sequences were measured with the original gantry angles and with all beams collapsed to a 0° gantry angle, to exclude the influence of the detector's angle dependency. The MatriXX FFF was found to be linear and dose rate independent for all investigated modalities (deviations ≤0.6%). Furthermore, the output measurements of the MatriXX FFF were in very good agreement with reference measurements (deviations ≤1.8%). For the clinical evaluation, an average pixel passing rate for γ(3%, 3 mm) of (98.5±1.5)% was achieved when applying a gantry angle correction. With all beams collapsed to a 0° gantry angle, excellent agreement with the calculated dose distribution was also observed (γ(3%, 3 mm) = (99.1±1.1)%). The MatriXX FFF fulfills all physical requirements in terms of dosimetric accuracy. Furthermore, the evaluation of the IMRT-plan measurements showed that the detector, particularly together with the gantry angle correction, is a reliable device for IMRT-plan verification including FFF. Copyright © 2017. Published by Elsevier GmbH.
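
    The gamma evaluation used above combines a dose-difference criterion with a distance-to-agreement criterion. The following hedged Python sketch shows a brute-force 2D gamma (3%/3 mm) pass-rate computation on synthetic dose planes; the global normalisation choice, grid spacing and dose arrays are assumptions, and clinical tools additionally interpolate the evaluated grid much more finely.

      # Minimal, brute-force sketch of a 2D gamma evaluation (3% / 3 mm); the global
      # dose normalisation and the synthetic dose planes are assumptions.
      import numpy as np

      def gamma_pass_rate(measured, calculated, spacing_mm, dose_crit=0.03, dta_mm=3.0):
          ny, nx = measured.shape
          y, x = np.meshgrid(np.arange(ny) * spacing_mm, np.arange(nx) * spacing_mm,
                             indexing="ij")
          norm_dose = dose_crit * calculated.max()          # global normalisation (assumption)
          gamma = np.empty_like(measured)
          for i in range(ny):
              for j in range(nx):
                  dist2 = (y - y[i, j]) ** 2 + (x - x[i, j]) ** 2
                  dd2 = (calculated - measured[i, j]) ** 2
                  gamma[i, j] = np.sqrt((dist2 / dta_mm ** 2 + dd2 / norm_dose ** 2).min())
          return 100.0 * np.mean(gamma <= 1.0)

      calc = np.random.default_rng(0).uniform(1.8, 2.2, size=(40, 40))    # Gy, synthetic
      meas = calc + np.random.default_rng(1).normal(0.0, 0.02, size=calc.shape)
      print(f"gamma (3%/3mm) pass rate: {gamma_pass_rate(meas, calc, spacing_mm=2.0):.1f} %")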

  19. A comparison of two prompt gamma imaging techniques with collimator-based cameras for range verification in proton therapy

    NASA Astrophysics Data System (ADS)

    Lin, Hsin-Hon; Chang, Hao-Ting; Chao, Tsi-Chian; Chuang, Keh-Shih

    2017-08-01

    In vivo range verification plays an important role in proton therapy to fully utilize the benefits of the Bragg peak (BP) for delivering a high radiation dose to the tumor while sparing normal tissue. To accurately locate the BP position, cameras equipped with collimators (multi-slit and knife-edge) that image the prompt gammas (PG) emitted along the proton tracks in the patient have been proposed for range verification. The aim of this work is to compare the performance of a multi-slit collimator and a knife-edge collimator for non-invasive proton beam range verification. PG imaging was simulated with a validated GATE/GEANT4 Monte Carlo code modeling the spot-scanning proton therapy and a cylindrical PMMA phantom in detail. For each spot, 10⁸ protons were simulated. To investigate the correlation between the acquired PG profile and the proton range, the falloff regions of the PG profiles were fitted with a 3-line-segment curve function as the range estimate. Factors that may influence the accuracy of range detection, including the energy window setting, proton energy, phantom size, and phantom shift, were studied. Results indicated that both collimator systems achieve reasonable accuracy and good response to the phantom shift. The accuracy of the range predicted by the multi-slit collimator system is less affected by the proton energy, while the knife-edge collimator system achieves higher detection efficiency, which leads to a smaller deviation in the predicted range. We conclude that both collimator systems have potential for accurate range monitoring in proton therapy. It is noted that neutron contamination has a marked impact on the range prediction of both systems, especially the multi-slit system. Therefore, a neutron reduction technique is needed to improve the accuracy of range verification in proton therapy.
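
    The 3-line-segment fit of the prompt gamma falloff mentioned above can be sketched as a piecewise-linear curve fit; the synthetic profile, breakpoints and initial guesses below are invented and this is not the published analysis code. A feature of the fitted falloff, here its midpoint, is then used as a range surrogate.

      # Hypothetical sketch: fit a 3-line-segment curve (plateau, linear falloff, tail)
      # to a prompt-gamma depth profile; all values are invented for illustration.
      import numpy as np
      from scipy.optimize import curve_fit

      def three_segments(z, z1, z2, top, bottom):
          # Constant "top" up to z1, linear drop to "bottom" at z2, constant afterwards.
          return np.where(z < z1, top,
                 np.where(z > z2, bottom,
                          top + (bottom - top) * (z - z1) / (z2 - z1)))

      depth = np.linspace(0.0, 200.0, 400)                                   # mm
      truth = three_segments(depth, 120.0, 140.0, 1.0, 0.15)
      profile = truth + np.random.default_rng(2).normal(0.0, 0.02, depth.size)

      p0 = [100.0, 150.0, 1.0, 0.1]                                          # initial guess
      (z1, z2, top, bottom), _ = curve_fit(three_segments, depth, profile, p0=p0, maxfev=5000)
      print(f"fitted falloff midpoint (range surrogate): {(z1 + z2) / 2:.1f} mm")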

  20. Technical Note: Range verification system using edge detection method for a scintillator and a CCD camera system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saotome, Naoya, E-mail: naosao@nirs.go.jp; Furukawa, Takuji; Hara, Yousuke

    Purpose: Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors' facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for the commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. Methods: A cylindrical plastic scintillator block and a CCD camera were installed on the black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system was connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing; the reference range for each beam energy was taken as the 80% distal dose of the depth-dose distribution measured with a large parallel-plate ionization chamber. For the image processing, the authors compared a threshold method and a difference of Gaussian (DOG) method. Results: The authors found that the edge detection method (i.e., the DOG method) is best for the range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of repeated measurements at the same energy is within 0.1 mm without setup error. Conclusions: The results of this study demonstrate that the authors' range check system is capable of quick and easy range verification with sufficient accuracy.

  1. Acoustic-based proton range verification in heterogeneous tissue: simulation studies

    NASA Astrophysics Data System (ADS)

    Jones, Kevin C.; Nie, Wei; Chu, James C. H.; Turian, Julius V.; Kassaee, Alireza; Sehgal, Chandra M.; Avery, Stephen

    2018-01-01

    Acoustic-based proton range verification (protoacoustics) is a potential in vivo technique for determining the Bragg peak position. Previous measurements and simulations have been restricted to homogeneous water tanks. Here, a CT-based simulation method is proposed and applied to a liver and prostate case to model the effects of tissue heterogeneity on the protoacoustic amplitude and time-of-flight range verification accuracy. For the liver case, posterior irradiation with a single proton pencil beam was simulated for detectors placed on the skin. In the prostate case, a transrectal probe measured the protoacoustic pressure generated by irradiation with five separate anterior proton beams. After calculating the proton beam dose deposition, each CT voxel’s material properties were mapped based on Hounsfield Unit values, and thermoacoustically-generated acoustic wave propagation was simulated with the k-Wave MATLAB toolbox. By comparing the simulation results for the original liver CT to homogenized variants, the effects of heterogeneity were assessed. For the liver case, 1.4 cGy of dose at the Bragg peak generated 50 mPa of pressure (13 cm distal), a 2×  lower amplitude than simulated in a homogeneous water tank. Protoacoustic triangulation of the Bragg peak based on multiple detector measurements resulted in 0.4 mm accuracy for a δ-function proton pulse irradiation of the liver. For the prostate case, higher amplitudes are simulated (92-1004 mPa) for closer detectors (<8 cm). For four of the prostate beams, the protoacoustic range triangulation was accurate to  ⩽1.6 mm (δ-function proton pulse). Based on the results, application of protoacoustic range verification to heterogeneous tissue will result in decreased signal amplitudes relative to homogeneous water tank measurements, but accurate range verification is still expected to be possible.
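
    The time-of-flight triangulation of the Bragg peak described above amounts to locating an acoustic source from arrival times at several detectors. A hedged sketch under simplified assumptions (a single uniform speed of sound, invented detector positions and times; the published simulations use heterogeneous tissue properties and the k-Wave toolbox):

      # Hypothetical sketch of time-of-flight triangulation of the Bragg peak position
      # from protoacoustic arrival times; detector geometry, times and the sound speed
      # are invented for illustration only.
      import numpy as np
      from scipy.optimize import least_squares

      c = 1.54  # assumed speed of sound in soft tissue, mm/us

      detectors = np.array([[0.0, 0.0, 0.0],
                            [80.0, 0.0, 0.0],
                            [0.0, 80.0, 0.0],
                            [0.0, 0.0, 80.0]])             # mm
      true_source = np.array([30.0, 40.0, 25.0])           # mm, used only to fake the data
      arrival_times = np.linalg.norm(detectors - true_source, axis=1) / c   # us

      def residuals(source):
          return np.linalg.norm(detectors - source, axis=1) / c - arrival_times

      fit = least_squares(residuals, x0=np.zeros(3))
      print("triangulated Bragg peak position [mm]:", np.round(fit.x, 2))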

  2. Comparison of individual and composite field analysis using array detector for Intensity Modulated Radiotherapy dose verification.

    PubMed

    Saminathan, Sathiyan; Chandraraj, Varatharaj; Sridhar, C H; Manickam, Ravikumar

    2012-01-01

    To compare the measured and calculated individual and composite field planar dose distributions of Intensity Modulated Radiotherapy plans. The measurements were performed on a Clinac DHX linear accelerator with 6 MV photons using the Matrixx device and a solid water phantom. Twenty brain tumor patients were selected for this study. IMRT plans were generated for all patients using the Eclipse treatment planning system. A verification plan was produced for every original plan using a CT scan of the Matrixx embedded in the phantom. Every verification field was measured by the Matrixx. The TPS-calculated and measured dose distributions were compared for individual and composite fields. The percentage of gamma pixel match for the dose distribution patterns was evaluated using a gamma histogram. For individual fields, the gamma pixel match was 95-98% for 41 fields (39%) and above 98% for 59 fields (61%). For composite fields, the percentage of gamma pixel match was 95-98% for 5 patients and above 98% for another 12 patients; three patients showed a gamma pixel match of less than 95%. The comparison of the percentage gamma pixel match for individual and composite fields showed more than 2.5% variation for 6 patients and more than 1% variation for 4 patients, while the remaining 10 patients showed less than 1% variation. The individual and composite field measurements showed good agreement with the TPS-calculated dose distributions for the studied patients. Since measurement and data analysis for individual fields is a time-consuming process, composite field analysis may be sufficient for smaller-field dose distribution analysis with array detectors.

  3. SU-E-I-24: Method for CT Automatic Exposure Control Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gracia, M; Olasolo, J; Martin, M

    Purpose: To design a phantom and a simple method for automatic exposure control (AEC) verification in CT. This verification is included in the Spanish computed tomography (CT) Quality Assurance Protocol. Methods: The phantom is assembled from the head and body phantoms used for CTDI measurement and PMMA plates (35×35 cm²) of 10 cm thickness. Three different thicknesses along the longitudinal axis are thereby obtained, which permit evaluation of the longitudinal AEC performance, while the asymmetry of the PMMA layers helps to assess angular and 3D AEC operation. The recent acquisition in our hospital (August 2014) of a Nomex electrometer (PTW), together with the 10 cm pencil ionization chamber, made it possible to record dose rate as a function of time. Measurements with this chamber fixed at 0° and 90° on the gantry were made on five multidetector CTs from the principal manufacturers. Results: Individual analysis of the measurements shows dose rate variation as a function of phantom thickness. The comparative analysis shows that the dose rate is kept constant in the head and neck phantom, while the PMMA phantom exhibits an abrupt variation between the two orientations, with greater values at 90°, where the phantom thickness is 3.5 times larger than in the perpendicular direction. Conclusion: The proposed method is simple, quick and reproducible. The results allow a qualitative evaluation of the AEC and are consistent with the expected behavior. A line of future development is to quantitatively study the intensity modulation and image quality parameters, and possibly to compare different manufacturers.

  4. Optimized point dose measurement for monitor unit verification in intensity modulated radiation therapy using 6 MV photons by three different methodologies with different detector-phantom combinations: A comparative study

    PubMed Central

    Sarkar, Biplab; Ghosh, Bhaswar; Sriramprasath; Mahendramohan, Sukumaran; Basu, Ayan; Goswami, Jyotirup; Ray, Amitabh

    2010-01-01

    The study aimed to compare the accuracy of monitor unit verification in intensity-modulated radiation therapy (IMRT) using 6 MV photons by three different methodologies with different detector-phantom combinations. Sixty patients were randomly chosen. Zero-degree couch and gantry angle plans were generated in a plastic universal IMRT verification phantom and in a 30×30×30 cm³ water phantom and measured using 0.125 cc and 0.6 cc chambers, respectively. Actual gantry and couch angle plans were also measured in the water phantom using the 0.6 cc chamber. A suitable point of measurement was chosen from the beam profile for each field. When the zero-degree gantry and couch angle plans and the actual gantry and couch angle plans were measured by the 0.6 cc chamber in the water phantom, the percentage mean difference (MD) was 1.35% and 2.94%, and the standard deviation (SD) was 2.99% and 5.22%, respectively. The plastic phantom measurements with the 0.125 cc Semiflex ionisation chamber (SIC) showed MD = 4.21% and SD = 2.73%, but when corrected for chamber-medium response they improved to MD = 3.38% and SD = 2.59%. It was found that measurements in the water phantom with the 0.6 cc chamber at zero-degree gantry angle showed better conformity than the other medium-detector combinations. Correction of the plastic phantom measurements improved the result only marginally, and actual gantry angle measurement in a flat water phantom showed higher deviation. PMID:20927221

  5. Verification of a VRF Heat Pump Computer Model in EnergyPlus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigusse, Bereket; Raustad, Richard

    2013-06-15

    This paper provides verification results for the EnergyPlus variable refrigerant flow (VRF) heat pump computer model using manufacturer's performance data. The paper provides an overview of the VRF model, presents the verification methodology, and discusses the results. The verification provides a quantitative comparison of full- and part-load performance with manufacturer's data in cooling-only and heating-only modes of operation. The VRF heat pump computer model uses dual-range bi-quadratic performance curves to represent capacity and Energy Input Ratio (EIR) as a function of indoor and outdoor air temperatures, and dual-range quadratic performance curves as a function of part-load ratio for modeling part-load performance. These performance curves are generated directly from the manufacturer's published performance data. The verification compared the simulation output directly to the manufacturer's performance data and found that the dual-range equation-fit VRF heat pump computer model predicts the manufacturer's performance data very well over a wide range of indoor and outdoor temperatures and part-load conditions. The predicted capacity and electric power deviations are comparable to those of equation-fit HVAC computer models commonly used for packaged and split unitary HVAC equipment.
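
    As a minimal sketch of the bi-quadratic performance-curve form used by equation-fit HVAC models such as the one described above, a rated value is scaled by a modifier that is quadratic in the indoor and outdoor temperatures; the coefficients, rated capacity and temperatures below are placeholders, not manufacturer data.

      # Sketch of a bi-quadratic capacity modifier curve; all numbers are placeholders.
      def biquadratic(t_in, t_out, c):
          # c = (a, b, cc, d, e, f) -> a + b*t_in + cc*t_in^2 + d*t_out + e*t_out^2 + f*t_in*t_out
          a, b, cc, d, e, f = c
          return a + b * t_in + cc * t_in ** 2 + d * t_out + e * t_out ** 2 + f * t_in * t_out

      rated_capacity_kw = 10.0                                 # placeholder rated cooling capacity
      coeffs = (0.80, 0.02, 1.0e-4, -0.005, -1.0e-4, 5.0e-4)   # placeholder curve coefficients

      t_indoor_wb = 19.4    # degC, indoor wet-bulb (placeholder)
      t_outdoor_db = 35.0   # degC, outdoor dry-bulb (placeholder)

      capacity = rated_capacity_kw * biquadratic(t_indoor_wb, t_outdoor_db, coeffs)
      print(f"available cooling capacity: {capacity:.2f} kW")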

  6. Self-verification and contextualized self-views.

    PubMed

    Chen, Serena; English, Tammy; Peng, Kaiping

    2006-07-01

    Whereas most self-verification research has focused on people's desire to verify their global self-conceptions, the present studies examined self-verification with regard to contextualized self-views, that is, views of the self in particular situations and relationships. It was hypothesized that individuals whose core self-conceptions include contextualized self-views should seek to verify these self-views. In Study 1, the more individuals defined the self in dialectical terms, the more their judgments were biased in favor of verifying over nonverifying feedback about a negative, situation-specific self-view. In Study 2, consistent with research on gender differences in the importance of relationships to the self-concept, women but not men showed a similar bias toward feedback about a negative, relationship-specific self-view, a pattern not seen for global self-views. Together, the results support the notion that self-verification occurs for core self-conceptions, whatever form(s) they may take. Individual differences in self-verification and the nature of selfhood and authenticity are discussed.

  7. Verification of ANSYS Fluent and OpenFOAM CFD platforms for prediction of impact flow

    NASA Astrophysics Data System (ADS)

    Tisovská, Petra; Peukert, Pavel; Kolář, Jan

    The main goal of this article is the verification of the heat transfer coefficient numerically predicted by two CFD platforms, ANSYS Fluent and OpenFOAM, for the problem of impact flow issuing from a 2D nozzle. Various mesh parameters and solver settings were tested under several boundary conditions and compared with known experimental results. The best solver setting, suitable for further optimization of more complex geometries, is identified.

  8. A Large-Scale Study of Fingerprint Matching Systems for Sensor Interoperability Problem

    PubMed Central

    Hussain, Muhammad; AboAlSamh, Hatim; AlZuair, Mansour

    2018-01-01

    The fingerprint is a commonly used biometric modality that is widely employed for authentication by law enforcement agencies and commercial applications. The designs of existing fingerprint matching methods are based on the hypothesis that the same sensor is used to capture fingerprints during enrollment and verification. Advances in fingerprint sensor technology have raised the question about the usability of current methods when different sensors are employed for enrollment and verification; this is a fingerprint sensor interoperability problem. To provide insight into this problem and assess the status of state-of-the-art matching methods to tackle this problem, we first analyze the characteristics of fingerprints captured with different sensors, which makes cross-sensor matching a challenging problem. We demonstrate the importance of fingerprint enhancement methods for cross-sensor matching. Finally, we conduct a comparative study of state-of-the-art fingerprint recognition methods and provide insight into their abilities to address this problem. We performed experiments using a public database (FingerPass) that contains nine datasets captured with different sensors. We analyzed the effects of different sensors and found that cross-sensor matching performance deteriorates when different sensors are used for enrollment and verification. In view of our analysis, we propose future research directions for this problem. PMID:29597286

  9. A Large-Scale Study of Fingerprint Matching Systems for Sensor Interoperability Problem.

    PubMed

    AlShehri, Helala; Hussain, Muhammad; AboAlSamh, Hatim; AlZuair, Mansour

    2018-03-28

    The fingerprint is a commonly used biometric modality that is widely employed for authentication by law enforcement agencies and commercial applications. The designs of existing fingerprint matching methods are based on the hypothesis that the same sensor is used to capture fingerprints during enrollment and verification. Advances in fingerprint sensor technology have raised the question about the usability of current methods when different sensors are employed for enrollment and verification; this is a fingerprint sensor interoperability problem. To provide insight into this problem and assess the status of state-of-the-art matching methods to tackle this problem, we first analyze the characteristics of fingerprints captured with different sensors, which makes cross-sensor matching a challenging problem. We demonstrate the importance of fingerprint enhancement methods for cross-sensor matching. Finally, we conduct a comparative study of state-of-the-art fingerprint recognition methods and provide insight into their abilities to address this problem. We performed experiments using a public database (FingerPass) that contains nine datasets captured with different sensors. We analyzed the effects of different sensors and found that cross-sensor matching performance deteriorates when different sensors are used for enrollment and verification. In view of our analysis, we propose future research directions for this problem.

  10. Self-verification and social anxiety: preference for negative social feedback and low social self-esteem.

    PubMed

    Valentiner, David P; Skowronski, John J; McGrath, Patrick B; Smith, Sarah A; Renner, Kerry A

    2011-10-01

    A self-verification model of social anxiety views negative social self-esteem as a core feature of social anxiety. This core feature is proposed to be maintained through self-verification processes, such as by leading individuals with negative social self-esteem to prefer negative social feedback. This model is tested in two studies. In Study 1, questionnaires were administered to a college sample (N = 317). In Study 2, questionnaires were administered to anxiety disordered patients (N = 62) before and after treatment. Study 1 developed measures of preference for negative social feedback and social self-esteem, and provided evidence of their incremental validity in a college sample. Study 2 found that these two variables are not strongly related to fears of evaluation, are relatively unaffected by a treatment that targets such fears, and predict residual social anxiety following treatment. Overall, these studies provide preliminary evidence for a self-verification model of social anxiety.

  11. Crew Exploration Vehicle (CEV) Avionics Integration Laboratory (CAIL) Independent Analysis

    NASA Technical Reports Server (NTRS)

    Davis, Mitchell L.; Aguilar, Michael L.; Mora, Victor D.; Regenie, Victoria A.; Ritz, William F.

    2009-01-01

    Two approaches were compared to the Crew Exploration Vehicle (CEV) Avionics Integration Laboratory (CAIL) approach: the Flat-Sat and the Shuttle Avionics Integration Laboratory (SAIL). The Flat-Sat and CAIL/SAIL approaches are two different tools designed to mitigate different risks. The Flat-Sat approach is designed to develop a mission concept into a flight avionics system and its associated ground controller. The SAIL approach is designed to aid in the flight readiness verification of the flight avionics system. The approaches are complementary in addressing both system development risks and mission verification risks. The following NESC team findings were identified: the CAIL assumption is that the flight subsystems will be matured for system-level verification; the Flat-Sat and SAIL approaches are two different tools designed to mitigate different risks. The following NESC team recommendation was provided: define, document, and manage a detailed interface between design and development (EDL and other integration labs) and the verification laboratory (CAIL).

  12. Validation (not just verification) of Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Duren, Riley M.

    2006-01-01

    Verification & Validation (V&V) is a widely recognized and critical systems engineering function. However, the often used definition 'Verification proves the design is right; validation proves it is the right design' is rather vague. And while Verification is a reasonably well standardized systems engineering process, Validation is a far more abstract concept, and the rigor and scope applied to it vary widely between organizations and individuals. This is reflected in the findings of recent Mishap Reports for several NASA missions, in which shortfalls in Validation (not just Verification) were cited as root or contributing factors in catastrophic mission loss. Furthermore, although there is strong agreement in the community that Test is the preferred method for V&V, many people equate 'V&V' with 'Test', such that Analysis and Modeling are not given comparable attention. Another strong motivator is the realization that the rapid growth in complexity of deep-space missions (particularly Planetary Landers and Space Observatories, given their inherent unknowns) is placing greater demands on systems engineers to 'get it right' with Validation.

  13. The Effect of Mystery Shopper Reports on Age Verification for Tobacco Purchases

    PubMed Central

    KREVOR, BRAD S.; PONICKI, WILLIAM R.; GRUBE, JOEL W.; DeJONG, WILLIAM

    2011-01-01

    Mystery shops (MS) involving attempted tobacco purchases by young buyers have been employed to monitor retail stores’ performance in refusing underage sales. Anecdotal evidence suggests that MS visits with immediate feedback to store personnel can improve age verification. This study investigated the impact of monthly and twice-monthly MS reports on age verification. Forty-five Walgreens stores were each visited 20 times by mystery shoppers. The stores were randomly assigned to one of three conditions. Control group stores received no feedback, whereas two treatment groups received feedback communications every visit (twice monthly) or every second visit (monthly) after baseline. Logit regression models tested whether each treatment group improved verification rates relative to the control group. Post-baseline verification rates were higher in both treatment groups than in the control group, but only the stores receiving monthly communications had a significantly greater improvement than control group stores. Verification rates increased significantly during the study period for all three groups, with delayed improvement among control group stores. Communication between managers regarding the MS program may account for the delayed age-verification improvements observed in the control group stores. Encouraging inter-store communication might extend the benefits of MS programs beyond those stores that receive this intervention. PMID:21541874

  14. External tank aerothermal design criteria verification, volume 2

    NASA Technical Reports Server (NTRS)

    Crain, William K.; Frost, Cynthia; Warmbrod, John

    1990-01-01

    The objective of the study was to produce an independent set of ascent environments which would serve as a check on the Rockwell International (RI) IVBC-3 environments and provide an independent reevaluation of the thermal design criteria for the External Tank (ET). Given here are the plotted timewise environments comparing REMTECH results to the RI IVBC results.

  15. SU-E-T-215: Comparison of VMAT-SABR Treatment Plans with Flattened Filter (FF) Beam and Flattening Filter-Free (FFF) Beam for Localized Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, J; Kim, J; Kang, S

    2015-06-15

    Purpose: The purpose of this study is to assess VMAT-SABR plans using flattening filter (FF) and flattening filter-free (FFF) beams, and to compare the verification results for all pretreatment plans. Methods: SABR plans for 20 prostate patients were optimized in the Eclipse treatment planning system. The prescription dose was 42.7 Gy in 7 fractions. Four SABR plans for each patient were calculated using the Acuros XB algorithm with both FF and FFF beams of 6 and 10 MV. The dose-volume histograms (DVH) and technical parameters were recorded and compared. Pretreatment verification was performed and gamma analysis was used to quantify the agreement between calculations and measurements. Results: For each patient, the DVHs were closely similar for the plans with the four different beams. Small differences were observed in the dose distributions and corresponding DVHs when comparing the plans for the same patient. Sparing of the bladder and rectum was slightly better in plans with 10-MV FF and FFF than with 6-MV FF and FFF, but this difference was negligible, and there was no significant difference for the other OARs. The mean gamma passing rate with the 3%/3 mm criterion was higher than 97% for all plans. The mean MUs and delivery times were 1701±101 and 3.02±0.17 min for 6-MV FF, 1870±116 and 1.69±0.08 min for 6-MV FFF, 1471±86 and 2.68±0.14 min for 10-MV FF, and 1619±101 and 0.98±0.04 min for 10-MV FFF, respectively. Conclusion: Dose distributions of prostate SABR plans using FFF beams were similar to those generated with FF beams; however, the use of FFF beams offers a clear benefit in delivery time compared to FF beams. Pretreatment verification also produced acceptable and comparable results for all plans with both FF and FFF beams. Therefore, this study suggests that the use of FFF beams is a feasible and efficient technique for prostate SABR.

  16. Calibration and verification of models of organic carbon removal kinetics in Aerated Submerged Fixed-Bed Biofilm Reactors (ASFBBR): a case study of wastewater from an oil-refinery.

    PubMed

    Trojanowicz, Karol; Wójcik, Włodzimierz

    2011-01-01

    The article presents a case-study on the calibration and verification of mathematical models of organic carbon removal kinetics in biofilm. The chosen Harremöes and Wanner & Reichert models were calibrated with a set of model parameters obtained both during dedicated studies conducted at pilot- and lab-scales for petrochemical wastewater conditions and from the literature. Next, the models were successfully verified through studies carried out utilizing a pilot ASFBBR type bioreactor installed in an oil-refinery wastewater treatment plant. During verification the pilot biofilm reactor worked under varying surface organic loading rates (SOL), dissolved oxygen concentrations and temperatures. The verification proved that the models can be applied in practice to petrochemical wastewater treatment engineering for e.g. biofilm bioreactor dimensioning.

  17. Verification bias: an underrecognized source of error in assessing the efficacy of medical imaging.

    PubMed

    Petscavage, Jonelle M; Richardson, Michael L; Carr, Robert B

    2011-03-01

    Diagnostic tests are validated by comparison against a "gold standard" reference test. When the reference test is invasive or expensive, it may not be applied to all patients. This can result in biased estimates of the sensitivity and specificity of the diagnostic test. This type of bias is called "verification bias," and is a common problem in imaging research. The purpose of our study is to estimate the prevalence of verification bias in the recent radiology literature. All issues of the American Journal of Roentgenology (AJR), Academic Radiology, Radiology, and European Journal of Radiology (EJR) between November 2006 and October 2009 were reviewed for original research articles mentioning sensitivity or specificity as endpoints. Articles were read to determine whether verification bias was present and searched for author recognition of verification bias in the design. During 3 years, these journals published 2969 original research articles. A total of 776 articles used sensitivity or specificity as an outcome. Of these, 211 articles demonstrated potential verification bias. The fraction of articles with potential bias was respectively 36.4%, 23.4%, 29.5%, and 13.4% for AJR, Academic Radiology, Radiology, and EJR. The total fraction of papers with potential bias in which the authors acknowledged this bias was 17.1%. Verification bias is a common and frequently unacknowledged source of error in efficacy studies of diagnostic imaging. Bias can often be eliminated by proper study design. When it cannot be eliminated, it should be estimated and acknowledged. Published by Elsevier Inc.

  18. A robust method using propensity score stratification for correcting verification bias for binary tests

    PubMed Central

    He, Hua; McDermott, Michael P.

    2012-01-01

    Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
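
    As a rough illustration of the stratification idea described above (not the authors' estimator), the following Python sketch models the verification probability from the test result and one covariate, groups subjects into propensity strata, and applies stratum-specific disease prevalences estimated from the verified subjects to all subjects before computing sensitivity and specificity; the simulated data, model and number of strata are all assumptions.

      # Hypothetical sketch of propensity-score stratification for verification bias.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 5000
      disease = rng.binomial(1, 0.2, n)
      test = rng.binomial(1, np.where(disease == 1, 0.85, 0.10))          # imperfect screen
      covariate = rng.normal(disease, 1.0)                                 # e.g., symptom score
      p_verify = 1 / (1 + np.exp(-(-2.0 + 2.5 * test + 0.8 * covariate)))
      verified = rng.binomial(1, p_verify).astype(bool)                    # MAR verification

      X = np.column_stack([test, covariate])
      propensity = LogisticRegression().fit(X, verified).predict_proba(X)[:, 1]

      def corrected_counts(test_value, n_strata=5):
          # Estimated diseased / total counts among ALL subjects with this test result.
          mask = test == test_value
          edges = np.quantile(propensity[mask], np.linspace(0, 1, n_strata + 1))
          strata = np.clip(np.searchsorted(edges, propensity[mask], side="right") - 1,
                           0, n_strata - 1)
          diseased = 0.0
          for s in range(n_strata):
              in_s = strata == s
              ver_s = in_s & verified[mask]
              if ver_s.any():
                  diseased += in_s.sum() * disease[mask][ver_s].mean()     # prevalence from verified
          return diseased, mask.sum()

      d_pos, n_pos = corrected_counts(1)
      d_neg, n_neg = corrected_counts(0)
      sensitivity = d_pos / (d_pos + d_neg)
      specificity = (n_neg - d_neg) / ((n_pos - d_pos) + (n_neg - d_neg))
      print(f"bias-corrected sensitivity ~ {sensitivity:.3f}, specificity ~ {specificity:.3f}")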

  19. Dosimetric accuracy of Kodak EDR2 film for IMRT verifications.

    PubMed

    Childress, Nathan L; Salehpour, Mohammad; Dong, Lei; Bloch, Charles; White, R Allen; Rosen, Isaac I

    2005-02-01

    Patient-specific intensity-modulated radiotherapy (IMRT) verifications require an accurate two-dimensional dosimeter that is not labor-intensive. We assessed the precision and reproducibility of film calibrations over time, measured the elemental composition of the film, measured the intermittency effect, and measured the dosimetric accuracy and reproducibility of calibrated Kodak EDR2 film for single-beam verifications in a solid water phantom and for full-plan verifications in a Rexolite phantom. Repeated measurements of the film sensitometric curve in a single experiment yielded overall uncertainties in dose of 2.1% local and 0.8% relative to 300 cGy. 547 film calibrations over an 18-month period, exposed to a range of doses from 0 to a maximum of 240 MU or 360 MU and using 6 MV or 18 MV energies, had optical density (OD) standard deviations that were 7%-15% of their average values. This indicates that daily film calibrations are essential when EDR2 film is used to obtain absolute dose results. An elemental analysis of EDR2 film revealed that it contains 60% as much silver and 20% as much bromine as Kodak XV2 film. EDR2 film also has an unusual 1.69:1 silver:halide molar ratio, compared with the XV2 film's 1.02:1 ratio, which may affect its chemical reactions. To test EDR2's intermittency effect, the OD generated by a single 300 MU exposure was compared to the ODs generated by exposing the film 1 MU, 2 MU, and 4 MU at a time to a total of 300 MU. An ion chamber recorded the relative dose of all intermittency measurements to account for machine output variations. Using small MU bursts to expose the film resulted in delivery times of 4 to 14 minutes and lowered the film's OD by approximately 2% for both 6 and 18 MV beams. This effect may result in EDR2 film underestimating absolute doses for patient verifications that require long delivery times. After using a calibration to convert EDR2 film's OD to dose values, film measurements agreed within 2% relative difference and 2 mm criteria to ion chamber measurements for both sliding window and step-and-shoot fluence map verifications. Calibrated film results agreed with ion chamber measurements to within 5 % /2 mm criteria for transverse-plane full-plan verifications, but were consistently low. When properly calibrated, EDR2 film can be an adequate two-dimensional dosimeter for IMRT verifications, although it may underestimate doses in regions with long exposure times.
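
    The OD-to-dose conversion described above relies on a same-day sensitometric calibration. A minimal sketch, assuming placeholder calibration points and a low-order polynomial fit (the authors' actual calibration procedure and functional form may differ):

      # Hypothetical sketch of a film sensitometric calibration: fit dose as a smooth
      # function of net optical density, then convert a measured OD map to dose.
      # Calibration points are placeholders.
      import numpy as np

      cal_dose_cgy = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0, 300.0])
      cal_net_od  = np.array([0.00, 0.22, 0.41, 0.57, 0.71, 0.83, 0.93])   # placeholder ODs

      coeffs = np.polyfit(cal_net_od, cal_dose_cgy, deg=3)   # daily calibration fit
      od_to_dose = np.poly1d(coeffs)

      measured_od = np.array([[0.35, 0.52], [0.66, 0.80]])   # example OD map
      print("dose map [cGy]:\n", np.round(od_to_dose(measured_od), 1))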

  20. Clinical commissioning of an in vivo range verification system for prostate cancer treatment with anterior and anterior oblique proton beams

    NASA Astrophysics Data System (ADS)

    Hoesl, M.; Deepak, S.; Moteabbed, M.; Jassens, G.; Orban, J.; Park, Y. K.; Parodi, K.; Bentefour, E. H.; Lu, H. M.

    2016-04-01

    The purpose of this work is the clinical commissioning of a recently developed in vivo range verification system (IRVS) for treatment of prostate cancer with anterior and anterior oblique proton beams. The IRVS is designed to perform a complete workflow for pre-treatment range verification and adjustment. It contains specifically designed dosimetry and electronic hardware and dedicated software for workflow control with database connection to the treatment and imaging systems. An essential part of the IRVS is an array of Si-diode detectors designed to be mounted on the endorectal water balloon routinely used for prostate immobilization. The diodes measure dose rate as a function of time, from which the water equivalent path length (WEPL) and the dose received are extracted. The former is used for pre-treatment beam range verification and, if necessary, correction, while the latter monitors the dose delivered to the patient's rectum during the treatment and serves as an additional verification. The entire IRVS workflow was tested for anterior and 30-degree-inclined proton beams in both solid water and anthropomorphic pelvic phantoms, with the measured WEPL and rectal doses compared to the treatment plan. Gafchromic films were also used for measurement of the rectal dose and compared to the IRVS results. The WEPL measurement accuracy was on the order of 1 mm and, after beam range correction, the dose received by the rectal wall was within 1.6% and 0.4% of the treatment plan for the anterior and anterior oblique fields, respectively. We believe the implementation of the IRVS would make the treatment of the prostate with anterior proton beams more accurate and reliable.

  1. Can self-verification strivings fully transcend the self-other barrier? Seeking verification of ingroup identities.

    PubMed

    Gómez, Angel; Seyle, D Conor; Huici, Carmen; Swann, William B

    2009-12-01

    Recent research has demonstrated self-verification strivings in groups, such that people strive to verify collective identities, which are personal self-views (e.g., "sensitive") associated with group membership (e.g., "women"). Such demonstrations stop short of showing that the desire for self-verification can fully transcend the self-other barrier, as in people working to verify ingroup identities (e.g., "Americans are loud") even when such identities are not self-descriptive ("I am quiet and unassuming"). Five studies focus on such ingroup verification strivings. Results indicate that people prefer to interact with individuals who verify their ingroup identities over those who enhance these identities (Experiments 1-5). Strivings for ingroup identity verification were independent of the extent to which the identities were self-descriptive but were stronger among participants who were highly invested in their ingroup identities, as reflected in high certainty of these identities (Experiments 1-4) and high identification with the group (Experiments 1-5). In addition, whereas past demonstrations of self-verification strivings have been limited to efforts to verify the content of identities (Experiments 1 to 3), the findings also show that they strive to verify the valence of their identities (i.e., the extent to which the identities are valued; Experiments 4 and 5). Self-verification strivings, rather than self-enhancement strivings, appeared to motivate participants' strivings for ingroup identity verification. Links to collective self-verification strivings and social identity theory are discussed.

  2. Biometrics based authentication scheme for session initiation protocol.

    PubMed

    Xie, Qi; Tang, Zhixiong

    2016-01-01

    Many two-factor challenge-response based session initiation protocol (SIP) authentication schemes have been proposed, but most of them are vulnerable to stolen smart card attacks and password guessing attacks. In this paper, we propose a novel three-factor SIP authentication scheme using biometrics, a password and a smart card, and utilize the pi calculus-based formal verification tool ProVerif to prove that the proposed protocol achieves security and authentication. Furthermore, our protocol is highly efficient when compared to other related protocols.

  3. Multicentre validation of IMRT pre-treatment verification: comparison of in-house and external audit.

    PubMed

    Jornet, Núria; Carrasco, Pablo; Beltrán, Mercè; Calvo, Juan Francisco; Escudé, Lluís; Hernández, Victor; Quera, Jaume; Sáez, Jordi

    2014-09-01

    We performed a multicentre intercomparison of IMRT optimisation and dose planning and of IMRT pre-treatment verification methods and results. The aims were to check consistency between dose plans and to validate whether in-house pre-treatment verification results agreed with those of an external audit. Participating centres used two mock cases (prostate and head and neck) for the intercomparison and audit. Compliance with dosimetric goals and the total number of MU per plan were collected. A simple quality index to compare the different plans was proposed. We compared gamma index pass rates obtained using each centre's equipment and methodology with those of an external audit. While for the prostate case all centres fulfilled the dosimetric goals and plan quality was homogeneous, this was not so for the head and neck case. The number of MU did not correlate with the plan quality index. Pre-treatment verification results of the external audit did not agree with the in-house measurements for two centres: results were within tolerance for the in-house measurements and unacceptable for the audit, or the other way round. Although all plans fulfilled dosimetric constraints, plan quality is highly dependent on the planner's expertise. External audits are an excellent tool to detect errors in IMRT implementation and cannot be replaced by intercomparisons based on results obtained by the centres themselves. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. Check-Cases for Verification of 6-Degree-of-Freedom Flight Vehicle Simulations

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.; Jackson, E. Bruce; Shelton, Robert O.

    2015-01-01

    The rise of innovative unmanned aeronautical systems and the emergence of commercial space activities have resulted in a number of relatively new aerospace organizations that are designing innovative systems and solutions. These organizations use a variety of commercial off-the-shelf and in-house-developed simulation and analysis tools, including 6-degree-of-freedom (6-DOF) flight simulation tools. The increased affordability of computing capability has made high-fidelity flight simulation practical for all participants. Verification of the tools' equations-of-motion and environment models (e.g., atmosphere, gravitation, and geodesy) is desirable to assure accuracy of results. However, aside from simple textbook examples, minimal verification data exists in the open literature for 6-DOF flight simulation problems. This assessment compared multiple solution trajectories to a set of verification check-cases that covered atmospheric and exo-atmospheric (i.e., orbital) flight. Each scenario consisted of predefined flight vehicles, initial conditions, and maneuvers. These scenarios were implemented and executed in a variety of analytical and real-time simulation tools. This tool-set included simulation tools in a variety of programming languages based on modified flat-Earth, round-Earth, and rotating oblate spheroidal Earth geodesy and gravitation models, and independently derived equations-of-motion and propagation techniques. The resulting simulated parameter trajectories were compared by over-plotting and difference-plotting to yield a family of solutions. In total, seven simulation tools were exercised.
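
    The difference-plotting of trajectories from different tools can be illustrated by a minimal sketch that interpolates one tool's output onto the other's time base and reports the maximum difference; the two "tool" trajectories below are invented, not actual check-case data.

      # Hypothetical sketch: compare one state variable from two simulation tools on a
      # common time base. The trajectories are invented for illustration only.
      import numpy as np

      t_ref = np.linspace(0.0, 60.0, 601)                          # s
      alt_ref = 1000.0 + 50.0 * t_ref - 0.5 * 9.80665 * t_ref**2   # "tool A" altitude, m

      t_new = np.linspace(0.0, 60.0, 241)                          # different output rate
      alt_new = 1000.0 + 50.0 * t_new - 0.5 * 9.81 * t_new**2      # "tool B", slightly different gravity

      alt_new_on_ref = np.interp(t_ref, t_new, alt_new)            # common time base
      diff = alt_new_on_ref - alt_ref
      print(f"max |altitude difference| = {np.max(np.abs(diff)):.2f} m "
            f"at t = {t_ref[np.argmax(np.abs(diff))]:.1f} s")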

  5. [Implication of inverse-probability weighting method in the evaluation of diagnostic test with verification bias].

    PubMed

    Kang, Leni; Zhang, Shaokai; Zhao, Fanghui; Qiao, Youlin

    2014-03-01

    To evaluate and adjust for the verification bias that exists in screening or diagnostic tests. The inverse-probability weighting method was used to adjust the sensitivity and specificity of the diagnostic tests, with an example from cervical cancer screening used to introduce the Compare Tests package in R software with which the method can be implemented. Sensitivity and specificity calculated with the traditional method and with the maximum likelihood estimation method were compared to the results from the inverse-probability weighting method in the random-sampled example. The true sensitivity and specificity of the HPV self-sampling test were 83.53% (95%CI: 74.23-89.93) and 85.86% (95%CI: 84.23-87.36). In the analysis of data with randomly missing verification by the gold standard, the sensitivity and specificity calculated by the traditional method were 90.48% (95%CI: 80.74-95.56) and 71.96% (95%CI: 68.71-75.00), respectively. The adjusted sensitivity and specificity obtained with the inverse-probability weighting method were 82.25% (95%CI: 63.11-92.62) and 85.80% (95%CI: 85.09-86.47), respectively, whereas they were 80.13% (95%CI: 66.81-93.46) and 85.80% (95%CI: 84.20-87.41) under the maximum likelihood estimation method. The inverse-probability weighting method can effectively adjust the sensitivity and specificity of a diagnostic test when verification bias exists, especially when complex sampling is involved.
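
    A minimal numerical illustration of the inverse-probability weighting idea (not the Compare Tests package itself): in the simple case where the verification probability depends only on the test result and is known, each verified subject is weighted by the inverse of that probability before sensitivity and specificity are computed; all numbers below are invented.

      # Hypothetical sketch of inverse-probability weighting for verification bias in the
      # simple case where all test-positives and a random fraction of test-negatives are
      # verified by the gold standard. Values are invented for illustration.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 20000
      disease = rng.binomial(1, 0.1, n)
      test = rng.binomial(1, np.where(disease == 1, 0.84, 0.14))

      p_verify = np.where(test == 1, 1.0, 0.10)           # verify all positives, 10% of negatives
      verified = rng.random(n) < p_verify
      w = 1.0 / p_verify                                   # inverse-probability weights

      v, d, t = verified, disease == 1, test == 1
      sens = np.sum(w[v & d & t]) / np.sum(w[v & d])       # weighted true positives / diseased
      spec = np.sum(w[v & ~d & ~t]) / np.sum(w[v & ~d])    # weighted true negatives / non-diseased
      print(f"IPW-corrected sensitivity ~ {sens:.3f}, specificity ~ {spec:.3f}")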

  6. Evaluation of Gafchromic EBT-XD film, with comparison to EBT3 film, and application in high dose radiotherapy verification.

    PubMed

    Palmer, Antony L; Dimitriadis, Alexis; Nisbet, Andrew; Clark, Catharine H

    2015-11-21

    There is renewed interest in film dosimetry for the verification of dose delivery of complex treatments, particularly small fields, compared to treatment planning system calculations. A new radiochromic film, Gafchromic EBT-XD, is available for high-dose treatment verification and we present the first published evaluation of its use. We evaluate the new film for MV photon dosimetry, including calibration curves, performance with single- and triple-channel dosimetry, and comparison to existing EBT3 film. In the verification of a typical 25 Gy stereotactic radiotherapy (SRS) treatment, compared to TPS planned dose distribution, excellent agreement was seen with EBT-XD using triple-channel dosimetry, in isodose overlay, maximum 1.0 mm difference over 200-2400 cGy, and gamma evaluation, mean passing rate 97% at 3% locally-normalised, 1.5 mm criteria. In comparison to EBT3, EBT-XD gave improved evaluation results for the SRS-plan, had improved calibration curve gradients at high doses, and had reduced lateral scanner effect. The dimensions of the two films are identical. The optical density of EBT-XD is lower than EBT3 for the same dose. The effective atomic number for both may be considered water-equivalent in MV radiotherapy. We have validated the use of EBT-XD for high-dose, small-field radiotherapy, for routine QC and a forthcoming multi-centre SRS dosimetry intercomparison.
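
    The film-versus-TPS comparison above rests on the gamma index; the following simplified one-dimensional sketch shows how gamma values and a passing rate can be computed for the criteria quoted in the abstract (3% locally-normalised dose difference, 1.5 mm distance-to-agreement). The profiles are synthetic stand-ins, and this is an illustration rather than the authors' analysis code.

    import numpy as np

    def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=1.5, local=True):
        """Return the gamma value at each reference point and the passing rate (gamma <= 1)."""
        gam = np.empty_like(d_ref)
        for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
            tol = dd * dr if local else dd * d_ref.max()   # local vs global normalisation
            dist2 = ((x_eval - xr) / dta) ** 2
            dose2 = ((d_eval - dr) / tol) ** 2
            gam[i] = np.sqrt(np.min(dist2 + dose2))
        return gam, np.mean(gam <= 1.0)

    # Toy profiles standing in for the TPS (reference) and film (evaluated) doses:
    x = np.arange(0.0, 50.0, 0.5)                   # position [mm]
    tps = 2400.0 * np.exp(-((x - 25.0) / 8.0) ** 2)  # dose [cGy]
    film = tps * (1 + 0.01 * np.sin(x))              # 1% ripple standing in for measurement noise
    g, pass_rate = gamma_1d(x, tps, x, film)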

  7. Evaluation of Gafchromic EBT-XD film, with comparison to EBT3 film, and application in high dose radiotherapy verification

    NASA Astrophysics Data System (ADS)

    Palmer, Antony L.; Dimitriadis, Alexis; Nisbet, Andrew; Clark, Catharine H.

    2015-11-01

    There is renewed interest in film dosimetry for the verification of dose delivery of complex treatments, particularly small fields, compared to treatment planning system calculations. A new radiochromic film, Gafchromic EBT-XD, is available for high-dose treatment verification and we present the first published evaluation of its use. We evaluate the new film for MV photon dosimetry, including calibration curves, performance with single- and triple-channel dosimetry, and comparison to existing EBT3 film. In the verification of a typical 25 Gy stereotactic radiotherapy (SRS) treatment, compared to TPS planned dose distribution, excellent agreement was seen with EBT-XD using triple-channel dosimetry, in isodose overlay, maximum 1.0 mm difference over 200-2400 cGy, and gamma evaluation, mean passing rate 97% at 3% locally-normalised, 1.5 mm criteria. In comparison to EBT3, EBT-XD gave improved evaluation results for the SRS-plan, had improved calibration curve gradients at high doses, and had reduced lateral scanner effect. The dimensions of the two films are identical. The optical density of EBT-XD is lower than EBT3 for the same dose. The effective atomic number for both may be considered water-equivalent in MV radiotherapy. We have validated the use of EBT-XD for high-dose, small-field radiotherapy, for routine QC and a forthcoming multi-centre SRS dosimetry intercomparison.

  8. Very high power THz radiation sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carr, G.L.; Martin, Michael C.; McKinney, Wayne R.

    2002-10-31

    We report the production of high power (20 watts average, ~1 megawatt peak) broadband THz light based on coherent emission from relativistic electrons. Such sources are ideal for imaging, for high power damage studies and for studies of non-linear phenomena in this spectral range. We describe the source, presenting theoretical calculations and their experimental verification. For clarity we compare this source to one based on ultrafast laser techniques.

  9. Experimental Evaluation of Verification and Validation Tools on Martian Rover Software

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Giannakopoulou, Dimitra; Goldberg, Allen; Havelund, Klaus; Lowry, Mike; Pasareani, Corina; Venet, Arnaud; Visser, Willem; Washington, Rich

    2003-01-01

    We report on a study to determine the maturity of different verification and validation technologies (V&V) on a representative example of NASA flight software. The study consisted of a controlled experiment where three technologies (static analysis, runtime analysis and model checking) were compared to traditional testing with respect to their ability to find seeded errors in a prototype Mars Rover. What makes this study unique is that it is the first (to the best of our knowledge) to do a controlled experiment to compare formal methods based tools to testing on a realistic industrial-size example where the emphasis was on collecting as much data on the performance of the tools and the participants as possible. The paper includes a description of the Rover code that was analyzed, the tools used as well as a detailed description of the experimental setup and the results. Due to the complexity of setting up the experiment, our results can not be generalized, but we believe it can still serve as a valuable point of reference for future studies of this kind. It did confirm the belief we had that advanced tools can outperform testing when trying to locate concurrency errors. Furthermore the results of the experiment inspired a novel framework for testing the next generation of the Rover.

  10. Prompt Gamma Imaging for In Vivo Range Verification of Pencil Beam Scanning Proton Therapy.

    PubMed

    Xie, Yunhe; Bentefour, El Hassane; Janssens, Guillaume; Smeets, Julien; Vander Stappen, François; Hotoiu, Lucian; Yin, Lingshu; Dolney, Derek; Avery, Stephen; O'Grady, Fionnbarr; Prieels, Damien; McDonough, James; Solberg, Timothy D; Lustig, Robert A; Lin, Alexander; Teo, Boon-Keng K

    2017-09-01

    To report the first clinical results and value assessment of prompt gamma imaging for in vivo proton range verification in pencil beam scanning mode. A stand-alone, trolley-mounted, prototype prompt gamma camera utilizing a knife-edge slit collimator design was used to record the prompt gamma signal emitted along the proton tracks during delivery of proton therapy for a brain cancer patient. The recorded prompt gamma depth detection profiles of individual pencil beam spots were compared with the expected profiles simulated from the treatment plan. In 6 treatment fractions recorded over 3 weeks, the mean (± standard deviation) range shifts aggregated over all spots in 9 energy layers were -0.8 ± 1.3 mm for the lateral field, 1.7 ± 0.7 mm for the right-superior-oblique field, and -0.4 ± 0.9 mm for the vertex field. This study demonstrates the feasibility and illustrates the distinctive benefits of prompt gamma imaging in pencil beam scanning treatment mode. Accuracy in range verification was found in this first clinical case to be better than the range uncertainty margin applied in the treatment plan. These first results lay the foundation for additional work toward tighter integration of the system for in vivo proton range verification and quantification of range uncertainties. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Electronic cigarette sales to minors via the internet.

    PubMed

    Williams, Rebecca S; Derrick, Jason; Ribisl, Kurt M

    2015-03-01

    Electronic cigarettes (e-cigarettes) entered the US market in 2007 and, with little regulatory oversight, grew into a $2-billion-a-year industry by 2013. The Centers for Disease Control and Prevention has reported a trend of increasing e-cigarette use among teens, with use rates doubling from 2011 to 2012. While several studies have documented that teens can and do buy cigarettes online, to our knowledge, no studies have yet examined age verification among Internet tobacco vendors selling e-cigarettes. To estimate the extent to which minors can successfully purchase e-cigarettes online and assess compliance with North Carolina's 2013 e-cigarette age-verification law. In this cross-sectional study conducted from February 2014 to June 2014, 11 nonsmoking minors aged 14 to 17 years made supervised e-cigarette purchase attempts from 98 Internet e-cigarette vendors. Purchase attempts were made at the University of North Carolina Internet Tobacco Vendors Study project offices using credit cards. Rate at which minors can successfully purchase e-cigarettes on the Internet. Minors successfully received deliveries of e-cigarettes from 76.5% of purchase attempts, with no attempts by delivery companies to verify their ages at delivery and 95% of delivered orders simply left at the door. All delivered packages came from shipping companies that, according to company policy or federal regulation, do not ship cigarettes to consumers. Of the total orders, 18 failed for reasons unrelated to age verification. Only 5 of the remaining 80 youth purchase attempts were rejected owing to age verification, resulting in a youth buy rate of 93.7%. None of the vendors complied with North Carolina's e-cigarette age-verification law. Minors are easily able to purchase e-cigarettes from the Internet because of an absence of age-verification measures used by Internet e-cigarette vendors. Federal law should require and enforce rigorous age verification for all e-cigarette sales as with the federal PACT (Prevent All Cigarette Trafficking) Act's requirements for age verification in Internet cigarette sales.

  12. Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme

    NASA Astrophysics Data System (ADS)

    Veljović, K.; Rajković, B.; Mesinger, F.

    2009-04-01

    Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large scale skill of the driver global model that is supplying its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help via the fairly popular large scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data, as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for a desirable RCM performance? Experiments are made to explore these questions running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme, and the other the Eta model scheme in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, skill of the two RCM forecasts can be and is compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification in that the forecast spatial wind speed distribution is verified against analyses by calculating bias-adjusted equitable threat scores and bias scores for wind speeds greater than chosen thresholds. In this way, focusing on a high wind speed value in the upper troposphere, we suggest that verification of large-scale features can be done in a manner that may be more physically meaningful than verification via spectral decomposition, which is a standard RCM verification method. The results we have at this point are somewhat limited given that the integrations have been done only for 10-day forecasts. Even so, one should note that they are among very few done using forecast as opposed to reanalysis or analysis global driving data. Our results suggest that (1) when running the Eta as an RCM, no significant loss of large-scale kinetic energy with time seems to take place; (2) no disadvantage from using the Eta LBC scheme compared to the relaxation scheme is seen, while the Eta scheme enjoys the advantage of being significantly less demanding than relaxation, given that it needs driver model fields at the outermost domain boundary only; and (3) the Eta RCM skill in forecasting large scales, with no large scale nudging, seems to be just about the same as that of the driver model, or, in the terminology of Castro et al., the Eta RCM does not lose the "value of the large scale" which exists in the larger global analyses used for the initial condition and for verification.
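
    For readers unfamiliar with the threshold-based scores mentioned above, the sketch below computes a standard equitable threat score (ETS) and frequency bias from a wind-speed exceedance contingency table. The study uses a bias-adjusted ETS variant, which is not reproduced here, and the fields below are synthetic.

    import numpy as np

    def ets_and_bias(forecast, analysis, threshold):
        """Equitable threat score and frequency bias for exceedance of a wind-speed threshold."""
        f = forecast > threshold
        o = analysis > threshold
        hits = np.sum(f & o)
        false_alarms = np.sum(f & ~o)
        misses = np.sum(~f & o)
        total = f.size
        hits_random = (hits + misses) * (hits + false_alarms) / total  # chance-expected hits
        ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
        bias = (hits + false_alarms) / (hits + misses)
        return ets, bias

    # Example on synthetic upper-tropospheric wind-speed fields (m/s):
    rng = np.random.default_rng(1)
    analysis = 30.0 + 15.0 * rng.random((100, 100))
    forecast = analysis + rng.normal(0.0, 3.0, analysis.shape)
    ets, bias = ets_and_bias(forecast, analysis, threshold=45.0)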

  13. Verification, Validation and Sensitivity Studies in Computational Biomechanics

    PubMed Central

    Anderson, Andrew E.; Ellis, Benjamin J.; Weiss, Jeffrey A.

    2012-01-01

    Computational techniques and software for the analysis of problems in mechanics have naturally moved from their origins in the traditional engineering disciplines to the study of cell, tissue and organ biomechanics. Increasingly complex models have been developed to describe and predict the mechanical behavior of such biological systems. While the availability of advanced computational tools has led to exciting research advances in the field, the utility of these models is often the subject of criticism due to inadequate model verification and validation. The objective of this review is to present the concepts of verification, validation and sensitivity studies with regard to the construction, analysis and interpretation of models in computational biomechanics. Specific examples from the field are discussed. It is hoped that this review will serve as a guide to the use of verification and validation principles in the field of computational biomechanics, thereby improving the peer acceptance of studies that use computational modeling techniques. PMID:17558646

  14. Evaluation and economic value of winter weather forecasts

    NASA Astrophysics Data System (ADS)

    Snyder, Derrick W.

    State and local highway agencies spend millions of dollars each year to deploy winter operation teams to plow snow and de-ice roadways. Accurate and timely weather forecast information is critical for effective decision making. Students from Purdue University partnered with the Indiana Department of Transportation to create an experimental winter weather forecast service for the 2012-2013 winter season in Indiana to assist in achieving these goals. One forecast product, an hourly timeline of winter weather hazards produced daily, was evaluated for quality and economic value. Verification of the forecasts was performed with data from the Rapid Refresh numerical weather model. Two objective verification criteria were developed to evaluate the performance of the timeline forecasts. Using both criteria, the timeline forecasts had issues with reliability and discrimination, systematically over-forecasting the amount of winter weather that was observed while also missing significant winter weather events. Despite these quality issues, the forecasts still showed significant, but varied, economic value compared to climatology. Economic value of the forecasts was estimated to be $29.5 million or $4.1 million, depending on the verification criteria used. Limitations of this valuation system are discussed and a framework is developed for more thorough studies in the future.
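
    The abstract does not state its valuation formula, so as a generic illustration of how forecast value relative to climatology can be estimated, the sketch below implements a standard static cost-loss model. All numbers, including the contingency-table counts and the protection cost and loss values, are invented for the example.

    def expected_expense(hits, misses, false_alarms, correct_negatives, C, L):
        """Mean expense per decision: pay cost C whenever protection is taken (forecast event),
        and suffer loss L whenever an event occurs unprotected (a miss)."""
        n = hits + misses + false_alarms + correct_negatives
        return (C * (hits + false_alarms) + L * misses) / n

    def forecast_value(hits, misses, false_alarms, correct_negatives, C, L):
        """Relative economic value: 1 = perfect forecast, 0 = no better than climatology."""
        n = hits + misses + false_alarms + correct_negatives
        climo = (hits + misses) / n                      # climatological event frequency
        e_fcst = expected_expense(hits, misses, false_alarms, correct_negatives, C, L)
        e_clim = min(C, climo * L)                       # best of "always protect" vs "never protect"
        e_perf = climo * C                               # perfect forecast: protect only on events
        return (e_clim - e_fcst) / (e_clim - e_perf)

    # Illustrative numbers only (not the study's contingency table):
    v = forecast_value(hits=40, misses=10, false_alarms=30, correct_negatives=120, C=1.0, L=8.0)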

  15. Replacement Technologies for Precision Cleaning of Aerospace Hardware for Propellant Service

    NASA Technical Reports Server (NTRS)

    Beeson, Harold; Kirsch, Mike; Hornung, Steven; Biesinger, Paul

    1997-01-01

    The NASA White Sands Test Facility (WSTF) is developing cleaning and verification processes to replace currently used chlorofluorocarbon-l13- (CFC-113-) based processes. The processes being evaluated include both aqueous- and solvent-based techniques. Replacement technologies are being investigated for aerospace hardware and for gauges and instrumentation. This paper includes the findings of investigations of aqueous cleaning and verification of aerospace hardware using known contaminants, such as hydraulic fluid and commonly used oils. The results correlate nonvolatile residue with CFC 113. The studies also include enhancements to aqueous sampling for organic and particulate contamination. Although aqueous alternatives have been identified for several processes, a need still exists for nonaqueous solvent cleaning, such as the cleaning and cleanliness verification of gauges used for oxygen service. The cleaning effectiveness of tetrachloroethylene (PCE), trichloroethylene (TCE), ethanol, hydrochlorofluorocarbon 225 (HCFC 225), HCFC 141b, HFE 7100(R), and Vertrel MCA(R) was evaluated using aerospace gauges and precision instruments and then compared to the cleaning effectiveness of CFC 113. Solvents considered for use in oxygen systems were also tested for oxygen compatibility using high-pressure oxygen autogenous ignition and liquid oxygen mechanical impact testing.

  16. Verification of Space Weather Forecasts Issued by the Met Office Space Weather Operations Centre

    NASA Astrophysics Data System (ADS)

    Sharpe, M. A.; Murray, S. A.

    2017-10-01

    The Met Office Space Weather Operations Centre was founded in 2014 and part of its remit is a daily Space Weather Technical Forecast to help the UK build resilience to space weather impacts; guidance includes 4 day geomagnetic storm forecasts (GMSF) and X-ray flare forecasts (XRFF). It is crucial for forecasters, users, modelers, and stakeholders to understand the strengths and weaknesses of these forecasts; therefore, it is important to verify against the most reliable truth data source available. The present study contains verification results for XRFFs using Geo-Orbiting Earth Satellite 15 satellite data and GMSF using planetary K-index (Kp) values from the GFZ Helmholtz Centre. To assess the value of the verification results, it is helpful to compare them against a reference forecast and the frequency of occurrence during a rolling prediction period is used for this purpose. An analysis of the rolling 12 month performance over a 19 month period suggests that both the XRFF and GMSF struggle to provide a better prediction than the reference. However, a relative operating characteristic and reliability analysis of the full 19 month period reveals that although the GMSF and XRFF possess discriminatory skill, events tend to be overforecast.
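
    As an illustration of comparing a probabilistic forecast against a frequency-of-occurrence reference of the kind used above, the sketch below computes a Brier skill score on toy data; positive values indicate the forecast beats the reference. The probabilities and outcomes are invented and do not represent the Met Office data.

    import numpy as np

    def brier_score(prob, obs):
        """Mean squared difference between forecast probabilities and binary outcomes."""
        return np.mean((np.asarray(prob) - np.asarray(obs)) ** 2)

    def brier_skill_score(prob_fcst, obs, prob_ref):
        """Skill relative to a reference forecast; > 0 means better than the reference."""
        return 1.0 - brier_score(prob_fcst, obs) / brier_score(prob_ref, obs)

    # Toy data: daily probabilities of an X-ray flare and binary observed outcomes;
    # the reference is the event frequency over a rolling window (here, the whole sample).
    obs = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 0])
    fcst = np.array([0.1, 0.2, 0.6, 0.1, 0.3, 0.1, 0.5, 0.2, 0.1, 0.1])
    ref = np.full_like(fcst, obs.mean())
    bss = brier_skill_score(fcst, obs, ref)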

  17. A new plan-scoring method using normal tissue complication probability for personalized treatment plan decisions in prostate cancer

    NASA Astrophysics Data System (ADS)

    Kim, Kwang Hyeon; Lee, Suk; Shim, Jang Bo; Yang, Dae Sik; Yoon, Won Sup; Park, Young Je; Kim, Chul Yong; Cao, Yuan Jie; Chang, Kyung Hwan

    2018-01-01

    The aim of this study was to derive a new plan-scoring index using normal tissue complication probabilities to verify different plans in the selection of personalized treatment. Plans for 12 patients treated with tomotherapy were used to compare scoring for ranking. Dosimetric and biological indexes were analyzed for the plans for a clearly distinguishable group (n = 7) and a similar group (n = 12), using treatment plan verification software that we developed. The quality factor (QF) of our support software for treatment decisions was consistent with the final treatment plan for the clearly distinguishable group (average QF = 1.202, 100% match rate, n = 7) and the similar group (average QF = 1.058, 33% match rate, n = 12). Therefore, we propose a normal tissue complication probability (NTCP) based plan-scoring index for verification of different plans for personalized treatment-plan selection. Scoring using the new QF showed a 100% match rate (average NTCP QF = 1.0420). The NTCP-based new QF scoring method was adequate for obtaining biological verification quality and organ risk saving using the treatment-planning decision-support software we developed for prostate cancer.
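
    The paper's QF definition is not reproduced here; instead, the sketch below shows a standard Lyman-Kutcher-Burman (LKB) NTCP calculation of the kind an NTCP-based plan-scoring index would be built on. The differential DVH and the parameter values are illustrative assumptions.

    import numpy as np
    from math import erf, sqrt

    def lkb_ntcp(doses_gy, volumes, td50, m, n):
        """LKB NTCP from a differential DVH (dose bins and fractional volumes)."""
        volumes = np.asarray(volumes, dtype=float)
        volumes = volumes / volumes.sum()
        eud = np.sum(volumes * np.asarray(doses_gy) ** (1.0 / n)) ** n   # generalized EUD
        t = (eud - td50) / (m * td50)
        return 0.5 * (1.0 + erf(t / sqrt(2.0)))                          # standard normal CDF

    # Illustrative rectum DVH and literature-style parameter values, used only as an example:
    dose_bins = np.arange(0.0, 80.0, 2.0)        # Gy
    frac_vol = np.exp(-dose_bins / 30.0)         # made-up differential volumes
    ntcp = lkb_ntcp(dose_bins, frac_vol, td50=76.9, m=0.13, n=0.09)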

  18. SU-F-T-229: A Novel Method for EPID-Based In-Vivo Exit Dose Verification for Intensity Modulated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Z; Wang, J; Peng, J

    Purpose: An electronic portal imaging device (EPID) can be used to acquire a two-dimensional exit dose distribution during treatment delivery, thus allowing in-vivo verification of the dose delivery through a comparison of measured portal images to predicted portal dose images (PDI). The aim of this study was to present a novel method to easily and accurately predict the PDI, and to establish an EPID-based in-vivo dose verification method for IMRT treatments. Methods: We developed a model to determine the predicted portal dose at the plane of the EPID detector location. The Varian EPID (aS1000) is positioned at 150 cm source-to-detector distance (SDD) and can be used to acquire the in-vivo exit dose using the Portal Dosimetry (PD) function. In our model, an equivalent water thickness represents the buildup plate of the EPID. The exit dose at an extended SDD plane, with the patient CT data in the beam, can be calculated as the predicted PDI in the treatment planning system (TPS). After that, the PDI was converted to the fluence at an SDD of 150 cm using the inverse square law, coded in MATLAB. Five head-and-neck and prostate IMRT patient plans containing 32 fields were investigated to evaluate the feasibility of this new method. The measured EPID image was compared with the PDI using gamma analysis. Results: The average results for the cumulative dose comparison were 81.9% and 91.6% for 3%, 3 mm and 4%, 4 mm gamma criteria, respectively. Results indicate that the patient transit dosimetry prediction algorithm compares well with EPID-measured PD doses for the test situations. Conclusion: Our new method can be used as an easy and feasible tool for online EPID-based in-vivo dose delivery verification for IMRT treatments. It can be implemented for fast detection of obvious treatment delivery errors in individual-field and patient quality assurance.
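
    The study implemented the inverse-square conversion in MATLAB; below is a small Python sketch of that single rescaling step, with the TPS calculation-plane distance, array names, and pixel-scaling detail assumed for illustration.

    import numpy as np

    def rescale_to_epid_plane(pdi, sdd_calc_cm, sdd_epid_cm=150.0):
        """Scale a predicted portal dose image from the TPS calculation plane to the EPID plane.

        Point doses follow the inverse square law, and the lateral pixel spacing grows
        linearly with distance from the source because of beam divergence."""
        dose_scale = (sdd_calc_cm / sdd_epid_cm) ** 2        # inverse square law for dose
        pixel_scale = sdd_epid_cm / sdd_calc_cm              # geometric magnification of the image
        return pdi * dose_scale, pixel_scale

    pdi_calc = np.random.rand(512, 512)                       # stand-in for a TPS-exported PDI
    pdi_epid, mag = rescale_to_epid_plane(pdi_calc, sdd_calc_cm=140.0)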

  19. Evaluation of Kodak EDR2 film for dose verification of intensity modulated radiation therapy delivered by a static multileaf collimator.

    PubMed

    Zhu, X R; Jursinic, P A; Grimm, D F; Lopez, F; Rownd, J J; Gillin, M T

    2002-08-01

    A new type of radiographic film, Kodak EDR2 film, was evaluated for dose verification of intensity modulated radiation therapy (IMRT) delivered by a static multileaf collimator (SMLC). A sensitometric curve of EDR2 film irradiated by a 6 MV x-ray beam was compared with that of Kodak X-OMAT V (XV) film. The effects of field size, depth and dose rate on the sensitometric curve were also studied. It is found that EDR2 film is much less sensitive than XV film. In high-energy x-ray beams, the double hit process is the dominant mechanism that renders the grains on EDR2 films developable. As a result, in the dose range that is commonly used for film dosimetry for IMRT and conventional external beam therapy, the sensitometric curves of EDR2 films cannot be approximated as a linear function, OD = c * D. Within experimental uncertainty, the film sensitivity does not depend on the dose rate (50 vs 300 MU/min) or dose per pulse (from 1.0 × 10⁻⁴ to 4.21 × 10⁻⁴ Gy/pulse). Field sizes and depths (up to a field size of 10 × 10 cm² and a depth of 10 cm) have little effect on the sensitometric curves. Percent depth doses (PDDs) for both 6 and 23 MV x rays were measured with both EDR2 and XV films and compared with ion chamber data. Film data are within 2.5% of the ion chamber results. Dose profiles measured with EDR2 film are consistent with those measured with an ion chamber. Examples of measured IMRT isodose distributions versus calculated isodoses are presented. We have used EDR2 films for verification of all IMRT patients treated by SMLC in our clinic. In most cases, with EDR2 film, actual clinical daily fraction doses can be used for verification of composite isodose distributions of SMLC-based IMRT.
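
    Because the sensitometric curve of EDR2 cannot be treated as linear, a calibration must be fitted and inverted; the sketch below does this with a low-order polynomial through the origin, purely as an illustration and not as the authors' calibration model. The dose and optical-density points are hypothetical.

    import numpy as np

    # Hypothetical calibration points: delivered dose (cGy) and measured net optical density.
    dose = np.array([0.0, 25.0, 50.0, 100.0, 150.0, 200.0, 300.0, 400.0])
    od = np.array([0.0, 0.04, 0.09, 0.20, 0.33, 0.47, 0.78, 1.12])

    # Least-squares fit of OD = a*D + b*D^2 (no constant term):
    A = np.column_stack([dose, dose**2])
    a, b = np.linalg.lstsq(A, od, rcond=None)[0]

    def dose_from_od(target_od):
        """Invert the calibration by solving b*D^2 + a*D - OD = 0 for the positive root."""
        return (-a + np.sqrt(a**2 + 4.0 * b * target_od)) / (2.0 * b)

    d = dose_from_od(0.5)   # dose corresponding to a measured net OD of 0.5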

  20. Current status of 3D EPID-based in vivo dosimetry in The Netherlands Cancer Institute

    NASA Astrophysics Data System (ADS)

    Mijnheer, B.; Olaciregui-Ruiz, I.; Rozendaal, R.; Spreeuw, H.; van Herk, M.; Mans, A.

    2015-01-01

    3D in vivo dose verification using a-Si EPIDs is performed routinely in our institution for almost all RT treatments. The EPID-based 3D dose distribution is reconstructed using a back-projection algorithm and compared with the planned dose distribution using 3D gamma evaluation. Dose-reconstruction and gamma-evaluation software runs automatically, and deviations outside the alert criteria are immediately available and investigated, in combination with inspection of cone-beam CT scans. The implementation of our 3D EPID-based in vivo dosimetry approach was able to replace pre-treatment verification for more than 90% of the patient treatments. Clinically relevant deviations could be detected for approximately 1 out of 300 patient treatments (IMRT and VMAT). Most of these errors were patient related anatomical changes or deviations from the routine clinical procedure, and would not have been detected by pre-treatment verification. Moreover, 3D EPID-based in vivo dose verification is a fast and accurate tool to assure the safe delivery of RT treatments. It provides clinically more useful information and is less time consuming than pre-treatment verification measurements. Automated 3D in vivo dosimetry is therefore a prerequisite for large-scale implementation of patient-specific quality assurance of RT treatments.

  1. A New Integrated Threshold Selection Methodology for Spatial Forecast Verification of Extreme Events

    NASA Astrophysics Data System (ADS)

    Kholodovsky, V.

    2017-12-01

    Extreme weather and climate events such as heavy precipitation, heat waves and strong winds can cause extensive damage to society in terms of human lives and financial losses. As climate changes, it is important to understand how extreme weather events may change as a result. Climate and statistical models are often independently used to model those phenomena. To better assess the performance of the climate models, a variety of spatial forecast verification methods have been developed. However, spatial verification metrics that are widely used in comparing mean states, in most cases, do not have an adequate theoretical justification to benchmark extreme weather events. We propose a new integrated threshold selection methodology for spatial forecast verification of extreme events that couples existing pattern recognition indices with high threshold choices. This integrated approach has three main steps: 1) dimension reduction; 2) geometric domain mapping; and 3) thresholds clustering. We apply this approach to an observed precipitation dataset over CONUS. The results are evaluated by displaying the threshold distribution seasonally, monthly and annually. The method offers the user the flexibility of selecting a high threshold that is linked to desired geometrical properties. The proposed high threshold methodology could either complement existing spatial verification methods, where threshold selection is arbitrary, or be directly applicable in extreme value theory.
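
    A loose sketch of the final "thresholds clustering" step is given below: for a set of candidate high thresholds, the geometry of the exceedance pattern is summarized (area fraction and number of connected exceedance objects) and thresholds with similar geometry are grouped. The features, field, and clustering choice are illustrative assumptions, not the authors' indices.

    import numpy as np
    from scipy.ndimage import label
    from scipy.cluster.vq import kmeans2, whiten

    rng = np.random.default_rng(2)
    precip = rng.gamma(shape=0.5, scale=8.0, size=(200, 200))   # stand-in precipitation field

    # Candidate high thresholds taken as upper percentiles of the field:
    thresholds = np.percentile(precip, np.arange(90.0, 99.5, 0.5))

    # For each threshold, summarize the geometry of the exceedance pattern:
    features = []
    for t in thresholds:
        exceed = precip > t
        _, n_objects = label(exceed)                 # number of connected exceedance regions
        features.append([exceed.mean(), n_objects])  # area fraction and object count
    features = np.asarray(features, dtype=float)

    # Group thresholds with similar exceedance geometry into three clusters:
    _, cluster_id = kmeans2(whiten(features), 3, minit="points")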

  2. The effectiveness of ID readers and remote age verification in enhancing compliance with the legal age limit for alcohol.

    PubMed

    Van Hoof, Joris J

    2017-04-01

    Currently, two different age verification systems (AVS) are implemented to enhance compliance with legal age limits for the sale of alcohol in the Netherlands. In this study, we tested the operational procedures and effectiveness of ID readers and remote age verification technology in supermarkets during the sale of alcohol. Following a trained alcohol purchase protocol, eight mystery shoppers (both underage and in the branch's reference age) conducted 132 alcohol purchase attempts in stores that were equipped with ID readers or remote age verification or were part of a control group. In stores equipped with an ID reader, 34% of the purchases were conducted without any mistakes (full compliance). In stores with remote age verification, full compliance was achieved in 87% of the cases. The control group reached 57% compliance, which is in line with the national average. Stores with ID readers perform worse than stores with remote age verification, and also worse than stores without any AVS. For both systems, in addition to effectiveness, public support and user friendliness need to be investigated. This study shows that remote age verification technology is a promising intervention that increases vendor compliance during the sales of age restricted products. © The Author 2016. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.

  3. Feasibility study on dosimetry verification of volumetric-modulated arc therapy-based total marrow irradiation.

    PubMed

    Liang, Yun; Kim, Gwe-Ya; Pawlicki, Todd; Mundt, Arno J; Mell, Loren K

    2013-03-04

    The purpose of this study was to develop dosimetry verification procedures for volumetric-modulated arc therapy (VMAT)-based total marrow irradiation (TMI). VMAT-based TMI plans were generated for three patients: one child and two adults. The planning target volume (PTV) was defined as the bony skeleton, from head to mid-femur, with a 3 mm margin. A planning strategy similar to that of published studies was adopted. The PTV was divided into head and neck, chest, and pelvic regions, with separate plans, each composed of 2-3 arcs/fields. Multiple isocenters were evenly distributed along the patient's axial direction. The focus of this study is to establish a dosimetry quality assurance procedure involving both two-dimensional (2D) and three-dimensional (3D) volumetric verification, which is desirable for a large PTV treated with multiple isocenters. The 2D dose verification was performed with film for gamma evaluation, and the absolute point dose was measured with an ion chamber, with attention to hot/cold spots at the junctions between neighboring plans. The 3D volumetric dose verification used commercial dose reconstruction software to reconstruct dose from electronic portal imaging device (EPID) images. The gamma evaluation criteria in both 2D and 3D verification were 5% absolute point dose difference and 3 mm distance to agreement. With film dosimetry, the overall average gamma passing rate was 98.2% and the absolute dose difference was 3.9% in junction areas among the test patients; with volumetric portal dosimetry, the corresponding numbers were 90.7% and 2.4%. A dosimetry verification procedure involving both 2D and 3D was developed for VMAT-based TMI. The initial results are encouraging and warrant further investigation in clinical trials.

  4. A feasibility study on bedside upper airway ultrasonography compared to waveform capnography for verifying endotracheal tube location after intubation

    PubMed Central

    2013-01-01

    Background In emergency settings, verification of endotracheal tube (ETT) location is important for critically ill patients. Ignorance of oesophageal intubation can be disastrous. Many methods are used for verification of the endotracheal tube location; none are ideal. Quantitative waveform capnography is considered the standard of care for this purpose but is not always available and is expensive. Therefore, this feasibility study is conducted to compare a cheaper alternative, bedside upper airway ultrasonography to waveform capnography, for verification of endotracheal tube location after intubation. Methods This was a prospective, single-centre, observational study, conducted at the HRPB, Ipoh. It included patients who were intubated in the emergency department from 28 March 2012 to 17 August 2012. A waiver of consent had been obtained from the Medical Research Ethics Committee. Bedside upper airway ultrasonography was performed after intubation and compared to waveform capnography. Specificity, sensitivity, positive and negative predictive value and likelihood ratio are calculated. Results A sample of 107 patients were analysed, and 6 (5.6%) had oesophageal intubations. The overall accuracy of bedside upper airway ultrasonography was 98.1% (95% confidence interval (CI) 93.0% to 100.0%). The kappa value (Κ) was 0.85, indicating a very good agreement between the bedside upper airway ultrasonography and waveform capnography. Thus, bedside upper airway ultrasonography is in concordance with waveform capnography. The sensitivity, specificity, positive predictive value and negative predictive value of bedside upper airway ultrasonography were 98.0% (95% CI 93.0% to 99.8%), 100% (95% CI 54.1% to 100.0%), 100% (95% CI 96.3% to 100.0%) and 75.0% (95% CI 34.9% to 96.8%). The likelihood ratio of a positive test is infinite and the likelihood ratio of a negative test is 0.0198 (95% CI 0.005 to 0.0781). The mean confirmation time by ultrasound is 16.4 s. No adverse effects were recorded. Conclusions Our study shows that ultrasonography can replace waveform capnography in confirming ETT placement in centres without capnography. This can reduce incidence of unrecognised oesophageal intubation and prevent morbidity and mortality. Trial registration National Medical Research Register NMRR11100810230. PMID:23826756
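
    The accuracy statistics quoted above follow directly from a 2x2 table of ultrasound result versus the capnography reference; the sketch below reconstructs them from counts consistent with the reported sample (99 true positives, 2 false negatives, 6 true negatives, 0 false positives out of 107 intubations).

    def diagnostic_metrics(tp, fp, fn, tn):
        """Standard accuracy statistics from a 2x2 diagnostic table."""
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
        lr_neg = (1 - sens) / spec
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        return dict(sens=sens, spec=spec, ppv=ppv, npv=npv,
                    lr_pos=lr_pos, lr_neg=lr_neg, accuracy=accuracy)

    # Tracheal placement treated as the "positive" condition:
    m = diagnostic_metrics(tp=99, fp=0, fn=2, tn=6)
    # m["sens"] ~ 0.980, m["spec"] = 1.0, m["npv"] = 0.75, m["lr_neg"] ~ 0.0198, m["accuracy"] ~ 0.981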

  5. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR DATA ENTRY AND DATA VERIFICATION (HAND ENTRY) (UA-D-15.0)

    EPA Science Inventory

    The purpose of this SOP is to define the steps involved in data entry and data verification of physical forms. It applies to the data entry and data verification of all physical forms. The procedure defined herein was developed for use in the Arizona NHEXAS project and the "Bor...

  6. Comparing the Effectiveness of Verification and Inquiry Laboratories in Supporting Undergraduate Science Students in Constructing Arguments around Socioscientific Issues

    ERIC Educational Resources Information Center

    Grooms, Jonathon; Sampson, Victor; Golden, Barry

    2014-01-01

    This quasi-experimental study uses a pre-/post-intervention approach to investigate the quality of undergraduate students' arguments in the context of socioscientific issues (SSI) based on experiencing a semester of traditional "cookbook" instruction (N = 79) or a semester of argument-based instruction (N = 73) in the context of an…

  7. Finding Needles in Haystacks: Identity Mismatch Frequency and Facial Identity Verification

    ERIC Educational Resources Information Center

    Bindemann, Markus; Avetisyan, Meri; Blackwell, Kristy-Ann

    2010-01-01

    Accurate person identification is central to all security, police, and judicial systems. A commonplace method to achieve this is to compare a photo-ID and the face of its purported owner. The critical aspect of this task is to spot cases in which these two instances of a face do not match. Studies of person identification show that these instances…

  8. Method for secure electronic voting system: face recognition based approach

    NASA Astrophysics Data System (ADS)

    Alim, M. Affan; Baig, Misbah M.; Mehboob, Shahzain; Naseem, Imran

    2017-06-01

    In this paper, we propose a framework for a low cost secure electronic voting system based on face recognition. Essentially, Local Binary Patterns (LBP) are used for face feature characterization in texture format, followed by a chi-square distribution used for image classification. Two parallel systems are developed, based on smart phone and web applications, for the face learning and verification modules. The proposed system has two-tier security by using a person ID followed by face verification. Essentially, a class-specific threshold is associated for controlling the security level of face verification. Our system is evaluated on three standard databases and one real home-based database and achieves satisfactory recognition accuracies. Consequently, our proposed system provides a secure, hassle free voting system that is less intrusive compared with other biometrics.
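
    A minimal sketch of the described pipeline is shown below: a local binary pattern (LBP) histogram as the face descriptor and a chi-square distance for verification against a stored template. The acceptance threshold, image sizes, and variable names are placeholders rather than the authors' settings.

    import numpy as np

    def lbp_histogram(img):
        """Basic 8-neighbour LBP over a 2-D grayscale image, returned as a normalized histogram."""
        img = np.asarray(img, dtype=float)
        c = img[1:-1, 1:-1]
        neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                      img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                      img[2:, :-2], img[1:-1, :-2]]
        codes = np.zeros_like(c, dtype=np.uint8)
        for bit, n in enumerate(neighbours):
            codes |= ((n >= c).astype(np.uint8) << bit)   # set one bit per neighbour comparison
        hist = np.bincount(codes.ravel().astype(int), minlength=256).astype(float)
        return hist / hist.sum()

    def chi_square(h1, h2, eps=1e-10):
        """Chi-square distance between two normalized histograms."""
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    # Verification: accept the claimed identity if the distance is below a tuned threshold.
    enrolled = lbp_histogram(np.random.rand(64, 64))   # stand-in for the stored template
    probe = lbp_histogram(np.random.rand(64, 64))      # stand-in for the live capture
    accepted = chi_square(enrolled, probe) < 0.25      # threshold is a placeholder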

  9. The Sedov Blast Wave as a Radial Piston Verification Test

    DOE PAGES

    Pederson, Clark; Brown, Bart; Morgan, Nathaniel

    2016-06-22

    The Sedov blast wave is of great utility as a verification problem for hydrodynamic methods. The typical implementation uses an energized cell of finite dimensions to represent the energy point source. We avoid this approximation by directly finding the effects of the energy source as a boundary condition (BC). Furthermore, the proposed method transforms the Sedov problem into an outward moving radial piston problem with a time-varying velocity. A portion of the mesh adjacent to the origin is removed and the boundaries of this hole are forced with the velocities from the Sedov solution. This verification test is implemented on two types of meshes, and convergence is shown. Our results from the typical initial condition (IC) method and the new BC method are compared.
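
    As a hedged illustration of the self-similar scalings behind such a radial-piston treatment, the sketch below evaluates the Sedov shock radius R(t) ~ (E t^2 / rho0)^(1/5) and the corresponding strong-shock post-shock fluid speed, which could serve as a time-varying boundary velocity. The dimensionless constant is set to 1 for simplicity; the paper's actual boundary condition uses velocities from the full Sedov solution at the hole boundary.

    import numpy as np

    def sedov_shock_radius(t, E, rho0, xi0=1.0):
        """Self-similar shock radius; xi0 depends on gamma and is set to 1 here for illustration."""
        return xi0 * (E * t**2 / rho0) ** 0.2

    def post_shock_velocity(t, E, rho0, gamma=1.4, xi0=1.0):
        """Fluid speed just behind a strong shock moving at dR/dt."""
        r = sedov_shock_radius(t, E, rho0, xi0)
        shock_speed = 0.4 * r / t                  # dR/dt for R ~ t^(2/5)
        return 2.0 / (gamma + 1.0) * shock_speed   # strong-shock jump condition

    t = np.linspace(0.01, 1.0, 100)
    v_piston = post_shock_velocity(t, E=1.0, rho0=1.0)   # candidate time-varying boundary velocity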

  10. Alternative sample sizes for verification dose experiments and dose audits

    NASA Astrophysics Data System (ADS)

    Taylor, W. A.; Hansen, J. M.

    1999-01-01

    ISO 11137 (1995), "Sterilization of Health Care Products—Requirements for Validation and Routine Control—Radiation Sterilization", provides sampling plans for performing initial verification dose experiments and quarterly dose audits. Alternative sampling plans are presented which provide equivalent protection. These sampling plans can significantly reduce the cost of testing. These alternative sampling plans have been included in a draft ISO Technical Report (type 2). This paper examines the rationale behind the proposed alternative sampling plans. The protection provided by the current verification and audit sampling plans is first examined. Then methods for identifying equivalent plans are highlighted. Finally, methods for comparing the costs associated with the different plans are provided. This paper includes additional guidance for selecting between the original and alternative sampling plans not included in the technical report.

  11. Automation bias and verification complexity: a systematic review.

    PubMed

    Lyell, David; Coiera, Enrico

    2017-03-01

    While potentially reducing decision errors, decision support systems can introduce new types of errors. Automation bias (AB) happens when users become overreliant on decision support, which reduces vigilance in information seeking and processing. Most research originates from the human factors literature, where the prevailing view is that AB occurs only in multitasking environments. This review seeks to compare the human factors and health care literature, focusing on the apparent association of AB with multitasking and task complexity. EMBASE, Medline, Compendex, Inspec, IEEE Xplore, Scopus, Web of Science, PsycINFO, and Business Source Premiere from 1983 to 2015. Evaluation studies where task execution was assisted by automation and resulted in errors were included. Participants needed to be able to verify automation correctness and perform the task manually. Tasks were identified and grouped. Task and automation type and presence of multitasking were noted. Each task was rated for its verification complexity. Of 890 papers identified, 40 met the inclusion criteria; 6 were in health care. Contrary to the prevailing human factors view, AB was found in single tasks, typically involving diagnosis rather than monitoring, and with high verification complexity. The literature is fragmented, with large discrepancies in how AB is reported. Few studies reported the statistical significance of AB compared to a control condition. AB appears to be associated with the degree of cognitive load experienced in decision tasks, and appears to not be uniquely associated with multitasking. Strategies to minimize AB might focus on cognitive load reduction. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  12. Verification of chemistry reference ranges using a simple method in sub-Saharan Africa

    PubMed Central

    Taylor, Douglas; Mandala, Justin; Nanda, Kavita; Van Campenhout, Christel; Agingu, Walter; Madurai, Lorna; Barsch, Eva-Maria; Deese, Jennifer; Van Damme, Lut; Crucitti, Tania

    2016-01-01

    Background Chemistry safety assessments are interpreted by using chemistry reference ranges (CRRs). Verification of CRRs is time consuming and often requires a statistical background. Objectives We report on an easy and cost-saving method to verify CRRs. Methods Using a former method introduced by Sigma Diagnostics, three study sites in sub-Saharan Africa, Bondo, Kenya, and Pretoria and Bloemfontein, South Africa, verified the CRRs for hepatic and renal biochemistry assays performed during a clinical trial of HIV antiretroviral pre-exposure prophylaxis. The aspartate aminotransferase/alanine aminotransferase, creatinine and phosphorus results from 10 clinically-healthy participants at the screening visit were used. In the event the CRRs did not pass the verification, new CRRs had to be calculated based on 40 clinically-healthy participants. Results Within a few weeks, the study sites accomplished verification of the CRRs without additional costs. The aspartate aminotransferase reference ranges for the Bondo, Kenya site and the alanine aminotransferase reference ranges for the Pretoria, South Africa site required adjustment. The phosphorus CRR passed verification and the creatinine CRR required adjustment at every site. The newly-established CRR intervals were narrower than the CRRs used previously at these study sites due to decreases in the upper limits of the reference ranges. As a result, more toxicities were detected. Conclusion To ensure the safety of clinical trial participants, verification of CRRs should be standard practice in clinical trials conducted in settings where the CRR has not been validated for the local population. This verification method is simple, inexpensive, and can be performed by any medical laboratory. PMID:28879112

  13. Verification of chemistry reference ranges using a simple method in sub-Saharan Africa.

    PubMed

    De Baetselier, Irith; Taylor, Douglas; Mandala, Justin; Nanda, Kavita; Van Campenhout, Christel; Agingu, Walter; Madurai, Lorna; Barsch, Eva-Maria; Deese, Jennifer; Van Damme, Lut; Crucitti, Tania

    2016-01-01

    Chemistry safety assessments are interpreted by using chemistry reference ranges (CRRs). Verification of CRRs is time consuming and often requires a statistical background. We report on an easy and cost-saving method to verify CRRs. Using a former method introduced by Sigma Diagnostics, three study sites in sub-Saharan Africa, Bondo, Kenya, and Pretoria and Bloemfontein, South Africa, verified the CRRs for hepatic and renal biochemistry assays performed during a clinical trial of HIV antiretroviral pre-exposure prophylaxis. The aspartate aminotransferase/alanine aminotransferase, creatinine and phosphorus results from 10 clinically-healthy participants at the screening visit were used. In the event the CRRs did not pass the verification, new CRRs had to be calculated based on 40 clinically-healthy participants. Within a few weeks, the study sites accomplished verification of the CRRs without additional costs. The aspartate aminotransferase reference ranges for the Bondo, Kenya site and the alanine aminotransferase reference ranges for the Pretoria, South Africa site required adjustment. The phosphorus CRR passed verification and the creatinine CRR required adjustment at every site. The newly-established CRR intervals were narrower than the CRRs used previously at these study sites due to decreases in the upper limits of the reference ranges. As a result, more toxicities were detected. To ensure the safety of clinical trial participants, verification of CRRs should be standard practice in clinical trials conducted in settings where the CRR has not been validated for the local population. This verification method is simple, inexpensive, and can be performed by any medical laboratory.
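
    A loose sketch of a small-sample reference-range verification rule in the spirit of the method described above is given below: results from a handful of clinically-healthy participants are checked against the candidate chemistry reference range (CRR), and the range passes if no more than an allowed number fall outside it. The pass criterion (at most 1 of 10 outside) and the example values are assumptions, not the exact Sigma Diagnostics rule.

    def verify_reference_range(results, lower, upper, max_outside=1):
        """Return (passed, n_outside) for a candidate reference range checked on healthy subjects."""
        outside = sum(1 for x in results if x < lower or x > upper)
        return outside <= max_outside, outside

    # Example: ALT results (U/L) from 10 clinically-healthy screening participants
    # checked against a candidate range of 7-41 U/L (values are illustrative).
    alt = [12, 18, 22, 15, 30, 27, 44, 19, 25, 21]
    passed, n_outside = verify_reference_range(alt, lower=7, upper=41)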

  14. Cooling Tower (Evaporative Cooling System) Measurement and Verification Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Boyd, Brian; Stoughton, Kate M.

    This measurement and verification (M&V) protocol provides procedures for energy service companies (ESCOs) and water efficiency service companies (WESCOs) to determine water savings resulting from water conservation measures (WCMs) in energy performance contracts associated with cooling tower efficiency projects. The water savings are determined by comparing the baseline water use to the water use after the WCM has been implemented. This protocol outlines the basic structure of the M&V plan, and details the procedures to use to determine water savings.

  15. Outdoor Irrigation Measurement and Verification Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Stoughton, Kate M.; Figueroa, Jorge

    This measurement and verification (M&V) protocol provides procedures for energy service companies (ESCOs) and water efficiency service companies (WESCOs) to determine water savings resulting from water conservation measures (WCMs) in energy performance contracts associated with outdoor irrigation efficiency projects. The water savings are determined by comparing the baseline water use to the water use after the WCM has been implemented. This protocol outlines the basic structure of the M&V plan, and details the procedures to use to determine water savings.

  16. Formal verification of medical monitoring software using Z language: a representative sample.

    PubMed

    Babamir, Seyed Morteza; Borhani, Mehdi

    2012-08-01

    Medical monitoring systems are useful aids assisting physicians in keeping patients under constant surveillance; however, whether such systems take sound decisions is a concern for physicians. As a result, verification of the systems' behavior in monitoring patients is a matter of significance. In modern medical systems, patient monitoring is undertaken by software; thus, software verification of modern medical systems has received attention. Such verification can be achieved by formal languages having mathematical foundations. Among others, the Z language is a suitable formal language that has been used for the formal verification of systems. This study aims to present a constructive method to verify a representative sample of a medical system, in which the system is visually specified and formally verified against patient constraints stated in the Z language. Exploiting our past experience in formally modeling the Continuous Infusion Insulin Pump (CIIP), we take the CIIP system as a representative sample of medical systems in our present study. The system is responsible for monitoring a diabetic's blood sugar.

  17. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part II—Experimental Implementation

    PubMed Central

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    Coordinate measuring machines (CMM) are main instruments of measurement in laboratories and in industrial quality control. A compensation error model has been formulated (Part I). It integrates error and uncertainty in the feature measurement model. Experimental implementation for the verification of this model is carried out based on the direct testing on a moving bridge CMM. The regression results by axis are quantified and compared to CMM indication with respect to the assigned values of the measurand. Next, testing of selected measurements of length, flatness, dihedral angle, and roundness features are accomplished. The measurement of calibrated gauge blocks for length or angle, flatness verification of the CMM granite table and roundness of a precision glass hemisphere are presented under a setup of repeatability conditions. The results are analysed and compared with alternative methods of estimation. The overall performance of the model is endorsed through experimental verification, as well as the practical use and the model capability to contribute in the improvement of current standard CMM measuring capabilities. PMID:27754441

  18. Simulated sudden increase in geomagnetic activity and its effect on heart rate variability: Experimental verification of correlation studies.

    PubMed

    Caswell, Joseph M; Singh, Manraj; Persinger, Michael A

    2016-08-01

    Previous research investigating the potential influence of geomagnetic factors on human cardiovascular state has tended to converge upon similar inferences although the results remain relatively controversial. Furthermore, previous findings have remained essentially correlational without accompanying experimental verification. An exception to this was noted for human brain activity in a previous study employing experimental simulation of sudden geomagnetic impulses in order to assess correlational results that had demonstrated a relationship between geomagnetic perturbations and neuroelectrical parameters. The present study employed the same equipment in a similar procedure in order to validate previous findings of a geomagnetic-cardiovascular dynamic with electrocardiography and heart rate variability measures. Results indicated that potential magnetic field effects on frequency components of heart rate variability tended to overlap with previous correlational studies where low frequency power and the ratio between low and high frequency components of heart rate variability appeared affected. In the present study, a significant increase in these particular parameters was noted during geomagnetic simulation compared to baseline recordings. Copyright © 2016 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.

  19. Block 2 SRM conceptual design studies. Volume 1, Book 2: Preliminary development and verification plan

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Activities that will be conducted in support of the development and verification of the Block 2 Solid Rocket Motor (SRM) are described. Development includes design, fabrication, processing, and testing activities in which the results are fed back into the project. Verification includes analytical and test activities which demonstrate SRM component/subassembly/assembly capability to perform its intended function. The management organization responsible for formulating and implementing the verification program is introduced. It also identifies the controls which will monitor and track the verification program. Integral with the design and certification of the SRM are other pieces of equipment used in transportation, handling, and testing which influence the reliability and maintainability of the SRM configuration. The certification of this equipment is also discussed.

  20. Study of space shuttle orbiter system management computer function. Volume 2: Automated performance verification concepts

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The findings are presented of investigations on concepts and techniques in automated performance verification. The investigations were conducted to provide additional insight into the design methodology and to develop a consolidated technology base from which to analyze performance verification design approaches. Other topics discussed include data smoothing, function selection, flow diagrams, data storage, and shuttle hydraulic systems.

  1. Exploring implementation practices in results-based financing: the case of the verification in Benin.

    PubMed

    Antony, Matthieu; Bertone, Maria Paola; Barthes, Olivier

    2017-03-14

    Results-based financing (RBF) has been introduced in many countries across Africa and a growing literature is building around the assessment of their impact. These studies are usually quantitative and often silent on the paths and processes through which results are achieved and on the wider health system effects of RBF. To address this gap, our study aims at exploring the implementation of an RBF pilot in Benin, focusing on the verification of results. The study is based on action research carried out by authors involved in the pilot as part of the agency supporting the RBF implementation in Benin. While our participant observation and operational collaboration with project's stakeholders informed the study, the analysis is mostly based on quantitative and qualitative secondary data, collected throughout the project's implementation and documentation processes. Data include project documents, reports and budgets, RBF data on service outputs and on the outcome of the verification, daily activity timesheets of the technical assistants in the districts, as well as focus groups with Community-based Organizations and informal interviews with technical assistants and district medical officers. Our analysis focuses on the actual practices of quantitative, qualitative and community verification. Results show that the verification processes are complex, costly and time-consuming, and in practice they end up differing from what designed originally. We explore the consequences of this on the operation of the scheme, on its potential to generate the envisaged change. We find, for example, that the time taken up by verification procedures limits the time available for data analysis and feedback to facility staff, thus limiting the potential to improve service delivery. Verification challenges also result in delays in bonus payment, which delink effort and reward. Additionally, the limited integration of the verification activities of district teams with their routine tasks causes a further verticalization of the health system. Our results highlight the potential disconnect between the theory of change behind RBF and the actual scheme's implementation. The implications are relevant at methodological level, stressing the importance of analyzing implementation processes to fully understand results, as well as at operational level, pointing to the need to carefully adapt the design of RBF schemes (including verification and other key functions) to the context and to allow room to iteratively modify it during implementation. They also question whether the rationale for thorough and costly verification is justified, or rather adaptations are possible.

  2. Development and verification of a novel device for dental intra-oral 3D scanning using chromatic confocal technology

    NASA Astrophysics Data System (ADS)

    Zint, M.; Stock, K.; Graser, R.; Ertl, T.; Brauer, E.; Heyninck, J.; Vanbiervliet, J.; Dhondt, S.; De Ceuninck, P.; Hibst, R.

    2015-03-01

    The presented work describes the development and verification of a novel optical, powder-free intra-oral scanner based on chromatic confocal technology combined with a multifocal approach. The proof of concept for a chromatic confocal area scanner for intra-oral scanning is given. Several prototype scanners passed a verification process showing an average accuracy (distance deviation on flat surfaces) of less than 31μm +/- 21μm and a reproducibility of less than 4μm +/- 3μm. Compared to a tactile measurement on a full jaw model fitted with 4mm ceramic spheres the measured average distance deviation between the spheres was 49μm +/- 12μm for scans of up to 8 teeth (3- unit bridge, single Quadrant) and 104μm +/- 82μm for larger scans and full jaws. The average deviation of the measured sphere diameter compared to the tactile measurement was 27μm +/- 14μm. Compared to μCT scans of plaster models equipped with human teeth the average standard deviation on up to 3 units was less than 55μm +/- 49μm whereas the reproducibility of the scans was better than 22μm +/- 10μm.

  3. A field study of the accuracy and reliability of a biometric iris recognition system.

    PubMed

    Latman, Neal S; Herb, Emily

    2013-06-01

    The iris of the eye appears to satisfy the criteria for a good anatomical characteristic for use in a biometric system. The purpose of this study was to evaluate a biometric iris recognition system: Mobile-Eyes™. The enrollment, verification, and identification applications were evaluated in a field study for accuracy and reliability using both irises of 277 subjects. Independent variables included a wide range of subject demographics, ambient light, and ambient temperature. A sub-set of 35 subjects had alcohol-induced nystagmus. There were 2710 identification and verification attempts, which resulted in 1,501,340 and 5540 iris comparisons respectively. In this study, the system successfully enrolled all subjects on the first attempt. All 277 subjects were successfully verified and identified on the first day of enrollment. None of the current or prior eye conditions prevented enrollment, verification, or identification. All 35 subjects with alcohol-induced nystagmus were successfully verified and identified. There were no false verifications or false identifications. Two conditions were identified that potentially could circumvent the use of iris recognition systems in general. The Mobile-Eyes™ iris recognition system exhibited accurate and reliable enrollment, verification, and identification applications in this study. It may have special applications in subjects with nystagmus.

  4. ICSH guidelines for the verification and performance of automated cell counters for body fluids.

    PubMed

    Bourner, G; De la Salle, B; George, T; Tabe, Y; Baum, H; Culp, N; Keng, T B

    2014-12-01

    One of the many challenges facing laboratories is the verification of their automated Complete Blood Count cell counters for the enumeration of body fluids. These analyzers offer improved accuracy, precision, and efficiency in performing the enumeration of cells compared with manual methods. A patterns-of-practice survey was distributed to laboratories that participate in proficiency testing in Ontario, Canada, the United States, the United Kingdom, and Japan to determine the number of laboratories that are testing body fluids on automated analyzers and the performance specifications that were evaluated. Based on the results of this questionnaire, an International Working Group for the Verification and Performance of Automated Cell Counters for Body Fluids was formed by the International Council for Standardization in Hematology (ICSH) to prepare a set of guidelines to help laboratories plan and execute the verification of their automated cell counters to provide accurate and reliable results for automated body fluid counts. These guidelines were discussed at the ICSH General Assemblies and reviewed by an international panel of experts to achieve further consensus. © 2014 John Wiley & Sons Ltd.

  5. Comparative study of minutiae selection algorithms for ISO fingerprint templates

    NASA Astrophysics Data System (ADS)

    Vibert, B.; Charrier, C.; Le Bars, J.-M.; Rosenberger, C.

    2015-03-01

    We address the selection of fingerprint minutiae given a fingerprint ISO template. Minutiae selection plays a very important role when a secure element (i.e. a smart-card) is used. Because of the limited capability of computation and memory, the number of minutiae of a stored reference in the secure element is limited. We propose in this paper a comparative study of 6 minutiae selection methods, including 2 methods from the literature and 1 reference method (no selection). Experimental results on 3 fingerprint databases from the Fingerprint Verification Competition show their relative efficiency in terms of performance and computation time.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Jing-Jy; Flood, Paul E.; LePoire, David

    In this report, the results generated by RESRAD-RDD version 2.01 are compared with those produced by RESRAD-RDD version 1.7 for different scenarios with different sets of input parameters. RESRAD-RDD version 1.7 is spreadsheet-driven, performing calculations with Microsoft Excel spreadsheets. RESRAD-RDD version 2.01 revamped version 1.7 by using command-driven programs designed with Visual Basic.NET to direct calculations with data saved in a Microsoft Access database, and re-facing the graphical user interface (GUI) to provide more flexibility and choices in guideline derivation. Because version 1.7 and version 2.01 perform the same calculations, the comparison of their results serves as verification of both versions. The verification covered calculation results for 11 radionuclides included in both versions: Am-241, Cf-252, Cm-244, Co-60, Cs-137, Ir-192, Po-210, Pu-238, Pu-239, Ra-226, and Sr-90. First, all nuclide-specific data used in both versions were compared to ensure that they are identical. Then generic operational guidelines and measurement-based radiation doses or stay times associated with a specific operational guideline group were calculated with both versions using different sets of input parameters, and the results obtained with the same set of input parameters were compared. A total of 12 sets of input parameters were used for the verification, and the comparison was performed for each operational guideline group, from A to G, sequentially. The verification shows that RESRAD-RDD version 1.7 and RESRAD-RDD version 2.01 generate almost identical results; the slight differences could be attributed to differences in numerical precision with Microsoft Excel and Visual Basic.NET. RESRAD-RDD version 2.01 allows the selection of different units for use in reporting calculation results. The results in SI units were obtained and compared with the base results (in traditional units) used for comparison with version 1.7. The comparison shows that RESRAD-RDD version 2.01 correctly reports calculation results in the unit specified in the GUI.

  7. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting.

    PubMed

    Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia

    2014-11-01

    Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
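    The error-rate comparisons above can be checked with a standard 2x2 Fisher's exact test. The sketch below (not part of the original study) reproduces the syringe-volume comparison (16/18 errors pre-intervention vs 11/19 post-intervention) using SciPy; the computed p-value may differ slightly from the published one depending on the two-sided convention used.

```python
# Illustrative Fisher's exact test on a 2x2 error-rate table, using the
# syringe-volume verification counts reported above (16/18 vs 11/19).
from scipy.stats import fisher_exact

pre_err, pre_total = 16, 18
post_err, post_total = 11, 19

table = [
    [pre_err,  pre_total - pre_err],    # pre-intervention: errors vs no errors
    [post_err, post_total - post_err],  # post-intervention: errors vs no errors
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```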

  8. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting

    PubMed Central

    Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia

    2014-01-01

    Background Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806

  9. Precision, accuracy, cross reactivity and comparability of serum indices measurement on Abbott Architect c8000, Beckman Coulter AU5800 and Roche Cobas 6000 c501 clinical chemistry analyzers.

    PubMed

    Nikolac Gabaj, Nora; Miler, Marijana; Vrtarić, Alen; Hemar, Marina; Filipi, Petra; Kocijančić, Marija; Šupak Smolčić, Vesna; Ćelap, Ivana; Šimundić, Ana-Maria

    2018-04-25

    The aim of our study was to perform verification of serum indices on three clinical chemistry platforms. This study was done on three analyzers: Abbott Architect c8000, Beckman Coulter AU5800 (BC) and Roche Cobas 6000 c501. The following analytical specifications were verified: precision (two patient samples), accuracy (the sample with the highest concentration of interferent was serially diluted and measured values compared to theoretical values), comparability (120 patient samples) and cross reactivity (samples with increasing concentrations of interferent were divided into two aliquots and the remaining interferents were added to each aliquot; measurements were done before and after adding the interferents). The best results for precision were obtained for the H index (0.72%-2.08%). Accuracy for the H index was acceptable for Cobas and BC, while on Architect, deviations in the high concentration range were observed (y=0.02 [0.01-0.07]+1.07 [1.06-1.08]x). All three analyzers showed acceptable results in evaluating accuracy of the L index and unacceptable results for the I index. The H index was comparable between BC and both Architect (Cohen's κ [95% CI]=0.795 [0.692-0.898]) and Roche (Cohen's κ [95% CI]=0.825 [0.729-0.922]), while Roche and Architect were not comparable. The I index was not comparable between any analyzer combination, while the L index was only comparable between Abbott and BC. Cross reactivity analysis mostly showed that serum indices measurement is affected when a combination of interferences is present. There is heterogeneity between analyzers in hemolysis, icterus, lipemia (HIL) quality performance. Verification of serum indices in routine work is necessary to establish analytical specifications.
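    Agreement statistics like the Cohen's κ values quoted above can be computed directly from paired categorical gradings. The sketch below uses scikit-learn on invented hemolysis grades for two analyzers; it illustrates the metric only and does not use the study's data.

```python
# Minimal sketch: quantifying between-analyzer agreement of a categorical
# serum index (e.g. a hemolysis grade) with Cohen's kappa.
# The two grading lists below are hypothetical, not data from the study.
from sklearn.metrics import cohen_kappa_score

grades_analyzer_a = ["0", "1", "1", "2", "3", "0", "2", "1", "0", "3"]
grades_analyzer_b = ["0", "1", "2", "2", "3", "0", "2", "1", "1", "3"]

kappa = cohen_kappa_score(grades_analyzer_a, grades_analyzer_b)
print(f"Cohen's kappa = {kappa:.3f}")   # about 0.73 for this toy data
```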

  10. Inverse probability weighting estimation of the volume under the ROC surface in the presence of verification bias.

    PubMed

    Zhang, Ying; Alonzo, Todd A

    2016-11-01

    In diagnostic medicine, the volume under the receiver operating characteristic (ROC) surface (VUS) is a commonly used index to quantify the ability of a continuous diagnostic test to discriminate between three disease states. In practice, verification of the true disease status may be performed only for a subset of subjects under study since the verification procedure is invasive, risky, or expensive. The selection for disease examination might depend on the results of the diagnostic test and other clinical characteristics of the patients, which in turn can cause bias in estimates of the VUS. This bias is referred to as verification bias. Existing verification bias correction in three-way ROC analysis focuses on ordinal tests. We propose verification bias-correction methods to construct ROC surface and estimate the VUS for a continuous diagnostic test, based on inverse probability weighting. By applying U-statistics theory, we develop asymptotic properties for the estimator. A Jackknife estimator of variance is also derived. Extensive simulation studies are performed to evaluate the performance of the new estimators in terms of bias correction and variance. The proposed methods are used to assess the ability of a biomarker to accurately identify stages of Alzheimer's disease. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
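    A minimal sketch of the inverse-probability-weighting idea is given below: each verified subject is weighted by the inverse of an estimated verification probability, and the VUS is estimated as a weighted proportion of concordantly ordered triples. The toy data, the simple verification model, and the brute-force triple loop are illustrative assumptions, not the authors' implementation (which also provides asymptotic and jackknife variance estimators).

```python
import numpy as np

def ipw_vus(test, disease, verified, pi):
    """Inverse-probability-weighted VUS for a 3-class problem (class 1 < 2 < 3)."""
    verified = np.asarray(verified, dtype=bool)
    w = np.where(verified, 1.0 / pi, 0.0)            # IPW weights, zero if unverified
    idx1 = np.where(verified & (disease == 1))[0]
    idx2 = np.where(verified & (disease == 2))[0]
    idx3 = np.where(verified & (disease == 3))[0]

    num = den = 0.0
    for i in idx1:
        for j in idx2:
            for k in idx3:
                wijk = w[i] * w[j] * w[k]
                den += wijk
                if test[i] < test[j] < test[k]:
                    num += wijk
    return num / den

# Toy data: higher test values indicate a more advanced disease stage.
rng = np.random.default_rng(0)
n = 150
disease = rng.integers(1, 4, size=n)                 # true stage (known only if verified)
test = disease + rng.normal(0.0, 1.0, size=n)
pi = np.where(test > np.median(test), 0.9, 0.3)      # verification depends on the test
verified = rng.random(n) < pi
print(f"IPW VUS estimate: {ipw_vus(test, disease, verified, pi):.3f}")
```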

  11. Gains to L2 Listeners from Reading while Listening vs. Listening Only in Comprehending Short Stories

    ERIC Educational Resources Information Center

    Chang, Anna C.-S.

    2009-01-01

    This study builds on the concept that aural-written verification helps L2 learners develop auditory discrimination skills, refine word recognition and gain awareness of form-meaning relationships, by comparing two modes of aural input: reading while listening (R/L) vs. listening only (L/O). Two test tasks (sequencing and gap filling) of 95 items,…

  12. An analysis of random projection for changeable and privacy-preserving biometric verification.

    PubMed

    Wang, Yongjin; Plataniotis, Konstantinos N

    2010-10-01

    Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
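    The core transform described above is easy to illustrate: multiply the biometric feature vector by a matrix of i.i.d. Gaussian entries. The sketch below (with made-up vectors and dimensions) shows that similarity between samples of the same subject is roughly preserved after projection, while re-drawing the matrix produces a new, revocable template.

```python
# Minimal sketch of random projection for changeable biometric templates:
# a Gaussian random matrix roughly preserves distances (Johnson-Lindenstrauss),
# and a new matrix yields an unlinkable replacement template.
import numpy as np

rng = np.random.default_rng(42)
d, k = 1024, 128                      # original and projected dimensionality

x = rng.random(d)                     # stand-in for an enrolled feature vector
y = x + 0.05 * rng.normal(size=d)     # a noisy sample of the same subject

R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))   # i.i.d. Gaussian entries

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("similarity in original space :", round(cosine(x, y), 4))
print("similarity after projection  :", round(cosine(R @ x, R @ y), 4))
# Re-enrolment with a freshly drawn matrix produces a new, revocable template.
```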

  13. Verification and classification bias interactions in diagnostic test accuracy studies for fine-needle aspiration biopsy.

    PubMed

    Schmidt, Robert L; Walker, Brandon S; Cohen, Michael B

    2015-03-01

    Reliable estimates of accuracy are important for any diagnostic test. Diagnostic accuracy studies are subject to unique sources of bias. Verification bias and classification bias are 2 sources of bias that commonly occur in diagnostic accuracy studies. Statistical methods are available to estimate the impact of these sources of bias when they occur alone. The impact of interactions when these types of bias occur together has not been investigated. We developed mathematical relationships to show the combined effect of verification bias and classification bias. A wide range of case scenarios were generated to assess the impact of bias components and interactions on total bias. Interactions between verification bias and classification bias caused overestimation of sensitivity and underestimation of specificity. Interactions had more effect on sensitivity than specificity. Sensitivity was overestimated by at least 7% in approximately 6% of the tested scenarios. Specificity was underestimated by at least 7% in less than 0.1% of the scenarios. Interactions between verification bias and classification bias create distortions in accuracy estimates that are greater than would be predicted from each source of bias acting independently. © 2014 American Cancer Society.
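    Although the paper derives the combined effect analytically, the direction of the verification-bias component alone is easy to demonstrate numerically: if all test-positives but only a fraction of test-negatives are verified, naive estimates overstate sensitivity and understate specificity. The simulation below uses invented parameters purely for illustration.

```python
# Hedged numeric illustration (not from the paper): partial verification alone
# distorts naive accuracy estimates. True status is verified for all
# test-positives but only 10% of test-negatives.
import numpy as np

rng = np.random.default_rng(1)
n, prevalence = 100_000, 0.2
true_sens, true_spec = 0.80, 0.90

disease = rng.random(n) < prevalence
test_pos = np.where(disease, rng.random(n) < true_sens, rng.random(n) < 1 - true_spec)

verify_prob = np.where(test_pos, 1.0, 0.10)          # test-negatives under-verified
verified = rng.random(n) < verify_prob

d_v, t_v = disease[verified], test_pos[verified]
naive_sens = (t_v & d_v).sum() / d_v.sum()
naive_spec = (~t_v & ~d_v).sum() / (~d_v).sum()
print(f"naive sensitivity = {naive_sens:.2f} (true {true_sens})")   # inflated
print(f"naive specificity = {naive_spec:.2f} (true {true_spec})")   # deflated
```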

  14. Electronic Cigarette Sales to Minors via the Internet

    PubMed Central

    Williams, Rebecca S.; Derrick, Jason; Ribisl, Kurt M.

    2015-01-01

    Importance Electronic cigarettes (e-cigarettes) entered the US market in 2007 and, with little regulatory oversight, grew into a $2-billion-a-year industry by 2013. The Centers for Disease Control and Prevention has reported a trend of increasing e-cigarette use among teens, with use rates doubling from 2011 to 2012. While several studies have documented that teens can and do buy cigarettes online, to our knowledge, no studies have yet examined age verification among Internet tobacco vendors selling e-cigarettes. Objective To estimate the extent to which minors can successfully purchase e-cigarettes online and assess compliance with North Carolina's 2013 e-cigarette age-verification law. Design, Setting, and Participants In this cross-sectional study conducted from February 2014 to June 2014, 11 nonsmoking minors aged 14 to 17 years made supervised e-cigarette purchase attempts from 98 Internet e-cigarette vendors. Purchase attempts were made at the University of North Carolina Internet Tobacco Vendors Study project offices using credit cards. Main Outcome and Measure Rate at which minors can successfully purchase e-cigarettes on the Internet. Results Minors successfully received deliveries of e-cigarettes from 76.5% of purchase attempts, with no attempts by delivery companies to verify their ages at delivery and 95% of delivered orders simply left at the door. All delivered packages came from shipping companies that, according to company policy or federal regulation, do not ship cigarettes to consumers. Of the total orders, 18 failed for reasons unrelated to age verification. Only 5 of the remaining 80 youth purchase attempts were rejected owing to age verification, resulting in a youth buy rate of 93.7%. None of the vendors complied with North Carolina's e-cigarette age-verification law. Conclusions and Relevance Minors are easily able to purchase e-cigarettes from the Internet because of an absence of age-verification measures used by Internet e-cigarette vendors. Federal law should require and enforce rigorous age verification for all e-cigarette sales as with the federal PACT (Prevent All Cigarette Trafficking) Act's requirements for age verification in Internet cigarette sales. PMID:25730697

  15. Verification of the FtCayuga fault-tolerant microprocessor system. Volume 1: A case study in theorem prover-based verification

    NASA Technical Reports Server (NTRS)

    Srivas, Mandayam; Bickford, Mark

    1991-01-01

    The design and formal verification of a hardware system for a task that is an important component of a fault tolerant computer architecture for flight control systems is presented. The hardware system implements an algorithm for obtaining interactive consistency (Byzantine agreement) among four microprocessors as a special instruction on the processors. The property verified ensures that an execution of the special instruction by the processors correctly accomplishes interactive consistency, provided certain preconditions hold. An assumption is made that the processors execute synchronously. For verification, the authors used a computer-aided hardware design verification tool, Spectool, and the theorem prover, Clio. A major contribution of the work is the demonstration of a significant fault tolerant hardware design that is mechanically verified by a theorem prover.

  16. A system for EPID-based real-time treatment delivery verification during dynamic IMRT treatment.

    PubMed

    Fuangrod, Todsaporn; Woodruff, Henry C; van Uytven, Eric; McCurdy, Boyd M C; Kuncic, Zdenka; O'Connor, Daryl J; Greer, Peter B

    2013-09-01

    To design and develop a real-time electronic portal imaging device (EPID)-based delivery verification system for dynamic intensity modulated radiation therapy (IMRT) which enables detection of gross treatment delivery errors before delivery of substantial radiation to the patient. The system utilizes a comprehensive physics-based model to generate a series of predicted transit EPID image frames as a reference dataset and compares these to measured EPID frames acquired during treatment. The two datasets are compared using MLC aperture comparison and cumulative signal checking techniques. The system operation in real-time was simulated offline using previously acquired images for 19 IMRT patient deliveries with both frame-by-frame comparison and cumulative frame comparison. Simulated error case studies were used to demonstrate the system sensitivity and performance. The accuracy of the synchronization method was shown to agree within two control points, which corresponds to approximately 1% of the total MU to be delivered for dynamic IMRT. The system achieved mean real-time gamma results for frame-by-frame analysis of 86.6% and 89.0% for 3%, 3 mm and 4%, 4 mm criteria, respectively, and 97.9% and 98.6% for cumulative gamma analysis. The system can detect a 10% MU error using 3%, 3 mm criteria within approximately 10 s. The EPID-based real-time delivery verification system successfully detected simulated gross errors introduced into patient plan deliveries in near real-time (within 0.1 s). A real-time radiation delivery verification system for dynamic IMRT has been demonstrated that is designed to prevent major mistreatments in modern radiation therapy.
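    The gamma pass rates quoted above combine a dose-difference criterion with a distance-to-agreement criterion. The following simplified 1-D sketch (toy profiles, global normalization) illustrates how such a gamma index is computed; the actual system evaluates 2-D EPID frames in real time.

```python
# Simplified 1-D gamma-index sketch (3%/3 mm style criteria) of the kind used
# to compare measured and predicted dose frames. Profiles here are toy data.
import numpy as np

def gamma_1d(dose_ref, dose_eval, positions, dose_tol=0.03, dist_tol=3.0):
    """Return the gamma value at each reference point (global dose normalization)."""
    d_norm = dose_tol * dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (x_r, d_r) in enumerate(zip(positions, dose_ref)):
        dd = (dose_eval - d_r) / d_norm
        dx = (positions - x_r) / dist_tol
        gammas[i] = np.sqrt(dd**2 + dx**2).min()
    return gammas

x = np.linspace(-50, 50, 201)                         # mm
reference = np.exp(-(x / 20.0) ** 2)                  # toy dose profile
evaluated = np.exp(-((x - 1.0) / 20.0) ** 2) * 1.02   # shifted, rescaled copy
g = gamma_1d(reference, evaluated, x)
print(f"gamma pass rate (gamma <= 1): {100 * (g <= 1).mean():.1f}%")
```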

  17. Poster - Thurs Eve-43: Verification of dose calculation with tissue inhomogeneity using MapCHECK.

    PubMed

    Korol, R; Chen, J; Mosalaei, H; Karnas, S

    2008-07-01

    MapCHECK (Sun Nuclear, Melbourne, FL) with 445 diode detectors has been used widely for routine IMRT quality assurance (QA) [1]. However, routine IMRT QA has not included the verification of inhomogeneity effects. The objective of this study is to use MapCHECK and a phantom to verify dose calculation and IMRT delivery with tissue inhomogeneity. A phantom with tissue inhomogeneities was placed on top of MapCHECK to measure the planar dose for an anterior beam with a photon energy of 6 MV or 18 MV. The phantom was composed of a 3.5 cm thick block of lung equivalent material and solid water arranged side by side with a 0.5 cm slab of solid water on the top of the phantom. The phantom setup including MapCHECK was CT scanned and imported into Pinnacle 8.0d for dose calculation. Absolute dose distributions were compared using gamma criteria of 3% for dose difference and 3 mm for distance-to-agreement. The measured and calculated planar doses are in good agreement, with an 88% pass rate based on the gamma analysis. The major dose difference was at the lung-water interface. Further investigation will be performed on a custom designed inhomogeneity phantom with inserts of varying densities and effective depth to create various dose gradients at the interface for dose calculation and delivery verification. In conclusion, a phantom with tissue inhomogeneities can be used with MapCHECK for verification of dose calculation and delivery with tissue inhomogeneity. © 2008 American Association of Physicists in Medicine.

  18. Computer Simulations to Study Diffraction Effects of Stacking Faults in Beta-SiC: II. Experimental Verification

    NASA Technical Reports Server (NTRS)

    Pujar, Vijay V.; Cawley, James D.; Levine, S. (Technical Monitor)

    2000-01-01

    Earlier results from computer simulation studies suggest a correlation between the spatial distribution of stacking errors in the Beta-SiC structure and features observed in X-ray diffraction patterns of the material. Reported here are experimental results obtained from two types of nominally Beta-SiC specimens, which yield distinct XRD data. These samples were analyzed using high resolution transmission electron microscopy (HRTEM) and the stacking error distribution was directly determined. The HRTEM results compare well to those deduced by matching the XRD data with simulated spectra, confirming the hypothesis that the XRD data is indicative not only of the presence and density of stacking errors, but also that it can yield information regarding their distribution. In addition, the stacking error population in both specimens is related to their synthesis conditions and it appears that it is similar to the relation developed by others to explain the formation of the corresponding polytypes.

  19. Verification Test of the SURF and SURFplus Models in xRage: Part III Affect of Mesh Alignment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    The previous studies used an underdriven detonation wave in 1-dimension (steady ZND reaction zone profile followed by a scale-invariant rarefaction wave) for PBX 9502 as a verification test of the implementation of the SURF and SURFplus models in the xRage code. Since the SURF rate is a function of the lead shock pressure, the question arises as to the effect on accuracy of variations in the detected shock pressure due to the alignment of the shock front with the mesh. To study the effect of mesh alignment we simulate a cylindrically diverging detonation wave using a planar 2-D mesh. The leading issue is the magnitude of azimuthal asymmetries in the numerical solution. The 2-D test case does not have an exact analytic solution. To quantify the accuracy, the 2-D solution along rays through the origin is compared to a highly resolved 1-D simulation in cylindrical geometry.

  20. Probabilistic Material Strength Degradation Model for Inconel 718 Components Subjected to High Temperature, High-Cycle and Low-Cycle Mechanical Fatigue, Creep and Thermal Fatigue Effects

    NASA Technical Reports Server (NTRS)

    Bast, Callie C.; Boyce, Lola

    1995-01-01

    The development of methodology for a probabilistic material strength degradation model is described. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes five effects that typically reduce lifetime strength: high temperature, high-cycle mechanical fatigue, low-cycle mechanical fatigue, creep and thermal fatigue. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to the current value of an effect. In addition, verification studies comparing predictions of high-cycle mechanical fatigue and high temperature effects with experiments are presented. Results from this limited verification study strongly supported that material degradation can be represented by randomized multifactor interaction models.
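    As a rough illustration of how a randomized multifactor strength model can be sampled to produce cumulative distribution functions, the sketch below evaluates a product of effect terms of the commonly published multifactor-interaction form; the specific functional form, parameter values and exponents are assumptions for illustration and are not taken from PROMISS or the report.

```python
# Hedged sketch of a randomized multifactor strength-degradation model of the
# general multifactor-interaction form:
#   S/S0 = prod_i [ (A_u,i - A_i) / (A_u,i - A_0,i) ]^q_i
# All values below are invented; only the sampling/CDF mechanics are illustrated.
import numpy as np

rng = np.random.default_rng(7)
n_samples = 50_000

# (current value A, ultimate A_u, reference A_0, exponent q) per effect
temperature = (rng.normal(650.0, 15.0, n_samples), 1300.0, 20.0, 0.5)   # deg C
hc_fatigue  = (rng.normal(1e6, 1e5, n_samples),    1e8,    1e3,  0.2)   # cycles

def strength_ratio(effects):
    ratio = np.ones(n_samples)
    for a, a_u, a_0, q in effects:
        ratio *= ((a_u - a) / (a_u - a_0)) ** q
    return ratio

s = strength_ratio([temperature, hc_fatigue])
s_sorted = np.sort(s)
cdf = np.arange(1, n_samples + 1) / n_samples   # (s_sorted, cdf) traces the CDF of S/S0
print(f"median lifetime strength ratio = {np.median(s):.3f}")
```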

  1. 30 CFR 250.913 - When must I resubmit Platform Verification Program plans?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Structures, Platform Verification Program, § 250.913: When must I resubmit Platform Verification Program plans? (a) You must resubmit any design verification, fabrication verification, or installation verification... (Title 30, Mineral Resources, 2010-07-01)

  2. Verification and transfer of thermal pollution model. Volume 6: User's manual for 1-dimensional numerical model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Nwadike, E. V.

    1982-01-01

    The six-volume report describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter.

  3. Comparison of Kodak EDR2 and Gafchromic EBT film for intensity-modulated radiation therapy dose distribution verification.

    PubMed

    Sankar, A; Ayyangar, Komanduri M; Nehru, R Mothilal; Kurup, P G Gopalakrishna; Murali, V; Enke, Charles A; Velmurugan, J

    2006-01-01

    The quantitative dose validation of intensity-modulated radiation therapy (IMRT) plans requires 2-dimensional (2D) high-resolution dosimetry systems with uniform response over their sensitive region. The present work deals with clinical use of commercially available self-developing radiochromic film, Gafchromic EBT film, for IMRT dose verification. Dose response curves were generated for the films using a VXR-16 film scanner. The results obtained with EBT films were compared with the results of Kodak extended dose range 2 (EDR2) films. The EBT film had a linear response in the dose range of 0 to 600 cGy. The dose-related characteristics of the EBT film, such as post-irradiation color growth with time, film uniformity, and effect of scanning orientation, were studied. There was up to an 8.6% increase in the color density between 2 and 40 hours after irradiation. There was a considerable variation, up to 8.5%, in the film uniformity over its sensitive region. The quantitative differences between calculated and measured dose distributions were analyzed using DTA and the gamma index with a tolerance of 3% dose difference and 3-mm distance to agreement. The EDR2 films showed consistent results with the calculated dose distributions, whereas the results obtained using EBT were inconsistent. The variation in the film uniformity limits the use of EBT film for conventional large-field IMRT verification. For IMRT of smaller field sizes (4.5 x 4.5 cm), the results obtained with EBT were comparable with the results of EDR2 films.

  4. SU-E-T-50: A Multi-Institutional Study of Independent Dose Verification Software Program for Lung SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawai, D; Takahashi, R; Kamima, T

    2015-06-15

    Purpose: The accuracy of dose distribution depends on the treatment planning system, especially in heterogeneous regions. The tolerance level (TL) of the secondary check using independent dose verification may be variable in lung SBRT plans. We conducted a multi-institutional study to evaluate the tolerance level of lung SBRT plans shown in the AAPM TG114. Methods: Five institutes in Japan participated in this study. All of the institutes used the same independent dose verification software program (Simple MU Analysis: SMU, Triangle Product, Ishikawa, JP), which is Clarkson-based and uses CT images to compute radiological path length. Analytical Anisotropic Algorithm (AAA), Pencil Beam Convolution with modified Batho method (PBC-B) and Adaptive Convolve (AC) were used for lung SBRT planning. A measurement using an ion-chamber was performed in a heterogeneous phantom to compare doses from the three different algorithms and the SMU to the measured dose. In addition, a retrospective analysis using clinical lung SBRT plans (547 beams from 77 patients) was conducted to evaluate the confidence limit (CL, average±2SD) in dose between the three algorithms and the SMU. Results: Compared to the measurement, the AAA showed a larger systematic dose error of 2.9±3.2% than PBC-B and AC. The Clarkson-based SMU showed a larger error of 5.8±3.8%. The CLs for clinical plans were 7.7±6.0% (AAA), 5.3±3.3% (AC), and 5.7±3.4% (PBC-B), respectively. Conclusion: The TLs were evaluated from the CLs. A Clarkson-based system shows a large systematic variation because of the inhomogeneity correction. The AAA showed a significant variation. Thus, we must consider the difference in inhomogeneity correction as well as the dependence on the dose calculation engine.
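    The confidence limit used above is defined in the abstract as average ± 2SD of the dose deviations. A minimal sketch with made-up per-beam deviations:

```python
# Minimal sketch of the confidence limit (CL = mean +/- 2SD) used to set
# tolerance levels for a secondary dose check; deviations are invented numbers.
import numpy as np

# Percent dose deviation of the independent check vs. the TPS, one per beam.
deviations = np.array([4.2, 6.1, 5.5, 7.0, 3.8, 6.4, 5.9, 4.8])

mean, sd = deviations.mean(), deviations.std(ddof=1)
print(f"confidence limit: {mean:.1f} +/- {2 * sd:.1f} %")
```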

  5. Independent verification and validation report of Washington state ferries' wireless high speed data project

    DOT National Transportation Integrated Search

    2008-06-30

    The following Independent Verification and Validation (IV&V) report documents and presents the results of a study of the Washington State Ferries Prototype Wireless High Speed Data Network. The purpose of the study was to evaluate and determine if re...

  6. THE DEVELOPMENT AND INTER-LABORATORY VERIFICATION OF LC-MS LIBRARIES FOR ORGANIC CHEMICALS OF ENVIRONMENTAL CONCERN

    EPA Science Inventory

    The development, verification, and comparison study between LC-MS libraries for two manufacturers’ instruments and a verified protocol are discussed. The LC-MS library protocol was verified through an inter-laboratory study that involved Federal, State, and private laboratories. ...

  7. A verification library for multibody simulation software

    NASA Technical Reports Server (NTRS)

    Kim, Sung-Soo; Haug, Edward J.; Frisch, Harold P.

    1989-01-01

    A multibody dynamics verification library that maintains and manages test and validation data is proposed, based on RRC Robot arm and CASE backhoe validation and a comparative study of DADS, DISCOS, and CONTOPS, which are existing public domain and commercial multibody dynamic simulation programs. Using simple representative problems, simulation results from each program are cross checked, and the validation results are presented. Functionalities of the verification library are defined in order to automate the validation procedure.

  8. Generic Verification Protocol for Verification of Online Turbidimeters

    EPA Science Inventory

    This protocol provides generic procedures for implementing a verification test for the performance of online turbidimeters. The verification tests described in this document will be conducted under the Environmental Technology Verification (ETV) Program. Verification tests will...

  9. SU-E-T-398: Feasibility of Automated Tools for Robustness Evaluation of Advanced Photon and Proton Techniques in Oropharyngeal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H; Liang, X; Kalbasi, A

    2014-06-01

    Purpose: Advanced radiotherapy (RT) techniques such as proton pencil beam scanning (PBS) and photon-based volumetric modulated arc therapy (VMAT) have dosimetric advantages in the treatment of head and neck malignancies. However, anatomic or alignment changes during treatment may limit the robustness of PBS and VMAT plans. We assess the feasibility of automated deformable registration tools for robustness evaluation in adaptive PBS and VMAT RT of oropharyngeal cancer (OPC). Methods: We treated 10 patients with bilateral OPC with advanced RT techniques and obtained verification CT scans with physician-reviewed target and OAR contours. We generated 3 advanced RT plans for each patient: a proton PBS plan using 2 posterior oblique fields (2F), a proton PBS plan using an additional third low-anterior field (3F), and a photon VMAT plan using 2 arcs (Arc). For each of the planning techniques, we forward calculated initial (Ini) plans on the verification scans to create verification (V) plans. We extracted DVH indicators based on physician-generated contours for 2 target and 14 OAR structures to investigate the feasibility of two automated tools (contour propagation (CP) and dose deformation (DD)) as surrogates for routine clinical plan robustness evaluation. For each verification scan, we compared DVH indicators of V, CP and DD plans in a head-to-head fashion using Student's t-test. Results: We performed 39 verification scans; each patient underwent 3 to 6 verification scans. We found no differences in doses to target or OAR structures between V and CP, V and DD, and CP and DD plans across all patients (p > 0.05). Conclusions: Automated robustness evaluation tools, CP and DD, accurately predicted dose distributions of verification (V) plans using physician-generated contours. These tools may be further developed as a potential robustness screening tool in the workflow for adaptive treatment of OPC using advanced RT techniques, reducing the need for physician-generated contours.

  10. Turf Conversion Measurement and Verification Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Stoughton, Kate M.; Figueroa, Jorge

    This measurement and verification (M and V) protocol provides procedures for energy service companies (ESCOs) and water efficiency service companies (WESCOs) to determine water savings as a result of water conservation measures (WCMs) in energy performance contracts associated with converting turfgrass or other water-intensive plantings to water-wise and sustainable landscapes. The water savings are determined by comparing the baseline water use to the water use after the WCM has been implemented. This protocol outlines the basic structure of the M and V plan, and details the procedures to use to determine water savings.

  11. Digital data storage systems, computers, and data verification methods

    DOEpatents

    Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.

    2005-12-27

    Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
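    The verification idea in this record reduces to hashing a snapshot of the monitored portion of the database at two moments in time and comparing the digests. A minimal sketch, with SHA-256 standing in for whatever hash the patented system actually uses:

```python
# Minimal sketch of hash-based data verification: hash a portion of a dynamic
# database at one moment, hash it again later, and flag any difference.
import hashlib

def digest(rows):
    h = hashlib.sha256()
    for row in rows:                       # deterministic row order assumed
        h.update(repr(row).encode("utf-8"))
    return h.hexdigest()

snapshot_t0 = [("id1", "alpha"), ("id2", "beta")]
first_hash = digest(snapshot_t0)

snapshot_t1 = [("id1", "alpha"), ("id2", "gamma")]   # a record changed in the meantime
second_hash = digest(snapshot_t1)

print("data unchanged" if first_hash == second_hash else "data modified")
```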

  12. Comparing portable computers with bedside computers when administering medications using bedside medication verification.

    PubMed

    Ludwig-Beymer, Patti; Williams, Phillip; Stimac, Ellen

    2012-01-01

    This research examined bedside medication verification administration in 2 adult critical care units, using portable computers and permanent bedside computers. There were no differences in the number of near-miss errors, the time to administer the medications, or nurse perception of ease of medication administration, care of patients, or reliability of technology. The percentage of medications scanned was significantly higher with the use of permanent bedside computers, and nurses using permanent bedside computers were more likely to agree that the computer was always available.

  13. Off-line robot programming and graphical verification of path planning

    NASA Technical Reports Server (NTRS)

    Tonkay, Gregory L.

    1989-01-01

    The objective of this project was to develop or specify an integrated environment for off-line programming, graphical path verification, and debugging for robotic systems. Two alternatives were compared. The first was the integration of the ASEA Off-line Programming package with ROBSIM, a robotic simulation program. The second alternative was the purchase of the commercial product IGRIP. The needs of the RADL (Robotics Applications Development Laboratory) were explored and the alternatives were evaluated based on these needs. As a result, IGRIP was proposed as the best solution to the problem.

  14. SU-E-T-287: Robustness Study of Passive-Scattering Proton Therapy in Lung: Is Range and Setup Uncertainty Calculation On the Initial CT Enough to Predict the Plan Robustness?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, X; Dormer, J; Kenton, O

    Purpose: Plan robustness of the passive-scattering proton therapy treatment of lung tumors has been studied previously using combined uncertainties of 3.5% in CT number and 3 mm geometric shifts. In this study, we investigate whether this method is sufficient to predict proton plan robustness by comparing to plans performed on weekly verification CT scans. Methods: Ten lung cancer patients treated with passive-scattering proton therapy were randomly selected. All plans were prescribed 6660cGy in 37 fractions. Each initial plan was calculated using +/− 3.5% range and +/− 0.3cm setup uncertainty in the x, y and z directions in the Eclipse TPS (Method A). Throughout the treatment course, patients received weekly verification CT scans to assess the daily treatment variation (Method B). After contours and imaging registrations were verified by the physician, the initial plan with the same beamline and compensator was mapped onto the verification CT. Dose volume histograms (DVH) were evaluated for the robustness study. Results: Differences are observed between Methods A and B in terms of iCTV coverage and lung dose. Method A shows all the iCTV D95 are within +/− 1% difference, while 20% of cases fall outside the +/−1% range in Method B. In the worst case scenario (WCS), the iCTV D95 is reduced by 2.5%. All lung V5 and V20 are within +/−5% in Method A while 15% of V5 and 10% of V20 fall outside of +/−5% in Method B. In the WCS, lung V5 increased by 15% and V20 increased by 9%. Methods A and B show good agreement with regard to cord maximum and esophagus mean dose. Conclusion: This study suggests that using range and setup uncertainty calculation (+/−3.5% and +/−3mm) may not be sufficient to predict the WCS. In the absence of regular verification scans, expanding the conventional uncertainty parameters (e.g., to +/−3.5% and +/−4mm) may be needed to better reflect actual plan robustness.
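    The "Method A" style evaluation described above amounts to recomputing dose for a small set of perturbed scenarios (±3.5% range, ±3 mm shifts) and reporting the worst case. The sketch below only enumerates such scenarios; evaluate_d95() is a toy stand-in for an actual TPS recalculation and its numbers are invented.

```python
# Hedged sketch of scenario-based robustness evaluation: enumerate range and
# setup perturbations and report the worst-case target D95.
from itertools import product

def evaluate_d95(range_scale, shift_mm):
    """Toy stand-in for a TPS dose recalculation of the perturbed scenario (returns Gy)."""
    penalty = 30.0 * abs(range_scale) + 0.2 * sum(abs(s) for s in shift_mm)
    return 66.6 - penalty

range_scales = (-0.035, 0.0, 0.035)        # +/-3.5% range (stopping-power scaling)
shifts_mm = (-3.0, 0.0, 3.0)               # +/-3 mm isocenter shifts per axis

scenarios = [(rs, (dx, dy, dz))
             for rs, dx, dy, dz in product(range_scales, shifts_mm, shifts_mm, shifts_mm)]
d95_values = [evaluate_d95(rs, shift) for rs, shift in scenarios]
print(f"{len(scenarios)} scenarios evaluated, worst-case iCTV D95 = {min(d95_values):.2f} Gy")
```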

  15. Verification and Validation of EnergyPlus Phase Change Material Model for Opaque Wall Assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.

    2012-08-01

    Phase change materials (PCMs) represent a technology that may reduce peak loads and HVAC energy consumption in buildings. A few building energy simulation programs have the capability to simulate PCMs, but their accuracy has not been completely tested. This study shows the procedure used to verify and validate the PCM model in EnergyPlus using a similar approach as dictated by ASHRAE Standard 140, which consists of analytical verification, comparative testing, and empirical validation. This process was valuable, as two bugs were identified and fixed in the PCM model, and version 7.1 of EnergyPlus will have a validated PCM model. Preliminary results using whole-building energy analysis show that careful analysis should be done when designing PCMs in homes, as their thermal performance depends on several variables such as PCM properties and location in the building envelope.

  16. Adapted RF pulse design for SAR reduction in parallel excitation with experimental verification at 9.4 T.

    PubMed

    Wu, Xiaoping; Akgün, Can; Vaughan, J Thomas; Andersen, Peter; Strupp, John; Uğurbil, Kâmil; Van de Moortele, Pierre-François

    2010-07-01

    Parallel excitation holds strong promise to mitigate the impact of large transmit B1 (B1+) distortion at very high magnetic field. Accelerated RF pulses, however, inherently tend to require larger values in RF peak power, which may result in a substantial increase in Specific Absorption Rate (SAR) in tissues, a constant concern for patient safety at very high field. In this study, we demonstrate adapted rate RF pulse design allowing for SAR reduction while preserving excitation target accuracy. Compared with other proposed implementations of adapted rate RF pulses, our approach is compatible with any k-space trajectory, does not require an analytical expression of the gradient waveform and can be used for large flip angle excitation. We demonstrate our method with numerical simulations based on electromagnetic modeling and we include an experimental verification of transmit pattern accuracy on an 8-transmit-channel 9.4 T system.

  17. Improving the modelling of irradiation-induced brain activation for in vivo PET verification of proton therapy.

    PubMed

    Bauer, Julia; Chen, Wenjing; Nischwitz, Sebastian; Liebl, Jakob; Rieken, Stefan; Welzel, Thomas; Debus, Juergen; Parodi, Katia

    2018-04-24

    A reliable Monte Carlo prediction of proton-induced brain tissue activation, used for comparison to particle therapy positron-emission-tomography (PT-PET) measurements, is crucial for in vivo treatment verification. Major limitations of current approaches include the CT-based patient model and the description of activity washout due to tissue perfusion. Two approaches were studied to improve the activity prediction for brain irradiation: (i) a refined patient model using tissue classification based on MR information and (ii) a PT-PET data-driven refinement of washout model parameters. Improvements of the activity predictions compared to post-treatment PT-PET measurements were assessed in terms of activity profile similarity for six patients treated with a single field or two almost parallel fields delivered by active proton beam scanning. The refined patient model yields a generally higher similarity for most of the patients, except in highly pathological areas leading to tissue misclassification. Using washout model parameters deduced from clinical patient data could considerably improve the activity profile similarity for all patients. Current methods used to predict proton-induced brain tissue activation can be improved with MR-based tissue classification and data-driven washout parameters, thus providing a more reliable basis for PT-PET verification. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. 4D offline PET-based treatment verification in scanned ion beam therapy: a phantom study

    NASA Astrophysics Data System (ADS)

    Kurz, Christopher; Bauer, Julia; Unholtz, Daniel; Richter, Daniel; Stützer, Kristin; Bert, Christoph; Parodi, Katia

    2015-08-01

    At the Heidelberg Ion-Beam Therapy Center, patient irradiation with scanned proton and carbon ion beams is verified by offline positron emission tomography (PET) imaging: the β+ activity measured within the patient is compared to a prediction calculated on the basis of the treatment planning data in order to identify potential delivery errors. Currently, this monitoring technique is limited to the treatment of static target structures. However, intra-fractional organ motion imposes considerable additional challenges to scanned ion beam radiotherapy. In this work, the feasibility and potential of time-resolved (4D) offline PET-based treatment verification with a commercial full-ring PET/CT (x-ray computed tomography) device are investigated for the first time, based on an experimental campaign with moving phantoms. Motion was monitored during the gated beam delivery as well as the subsequent PET acquisition and was taken into account in the corresponding 4D Monte-Carlo simulations and data evaluation. Under the given experimental conditions, millimeter agreement between the prediction and measurement was found. Dosimetric consequences due to the phantom motion could be reliably identified. The agreement between PET measurement and prediction in the presence of motion was found to be similar to that in static reference measurements, thus demonstrating the potential of 4D PET-based treatment verification for future clinical applications.

  19. Proton therapy of prostate cancer by anterior-oblique beams: implications of setup and anatomy variations

    NASA Astrophysics Data System (ADS)

    Moteabbed, M.; Trofimov, A.; Sharp, G. C.; Wang, Y.; Zietman, A. L.; Efstathiou, J. A.; Lu, H.-M.

    2017-03-01

    Proton therapy of prostate by anterior beams could offer an attractive option for treating patients with hip prosthesis and limiting the high-dose exposure to the rectum. We investigated the impact of setup and anatomy variations on the anterior-oblique (AO) proton plan dose, and strategies to manage these effects via range verification and adaptive delivery. Ten patients treated by bilateral (BL) passive-scattering proton therapy (79.2 Gy in 44 fractions) who underwent weekly verification CT scans were selected. Plans with AO beams were additionally created. To isolate the effect of daily variations, initial AO plans did not include range uncertainty margins. The use of fixed planning margins and adaptive range adjustments to manage these effects was investigated. For each case, the planned dose was recalculated on weekly CTs, and accumulated on the simulation CT using deformable registration to approximate the delivered dose. Planned and accumulated doses were compared for each scenario to quantify dose deviations induced by variations. The possibility of estimating the necessary range adjustments before each treatment was explored by simulating the procedure of a diode-based in vivo range verification technique, which would potentially be used clinically. The average planned rectum, penile bulb and femoral head mean doses were smaller for initial AO compared to BL plans (by 8.3, 16.1 and 25.9 Gy, respectively). After considering interfractional variations in AO plans, the target coverage was substantially reduced. The maximum reduction of V79.2/D95/Dmean/EUD for AO plans (without distal margins) (25.3%/10.7/1.6/4.9 Gy, respectively) was considerably larger than for BL plans. The loss of coverage was mainly related to changes in the water equivalent path length of the prostate after fiducial-based setup, caused by discrepancies in patient anterior surface and bony-anatomy alignment. Target coverage was recovered partially when using fixed planning margins, and fully when applying adaptive range adjustments. The accumulated organs-at-risk dose for AO beams after range adjustment demonstrated full sparing of the femoral heads and superior sparing of the penile bulb and rectum compared to the conventional BL cases. Our study indicates that using AO beams makes prostate treatment more susceptible to target underdose induced by interfractional variations. Adaptive range verification/adjustment may facilitate the use of anterior beam approaches, and ensure adequate target coverage in every fraction of the treatment.

  20. Verification Assessment of Flow Boundary Conditions for CFD Analysis of Supersonic Inlet Flows

    NASA Technical Reports Server (NTRS)

    Slater, John W.

    2002-01-01

    Boundary conditions for subsonic inflow, bleed, and subsonic outflow as implemented into the WIND CFD code are assessed with respect to verification for steady and unsteady flows associated with supersonic inlets. Verification procedures include grid convergence studies and comparisons to analytical data. The objective is to examine errors, limitations, capabilities, and behavior of the boundary conditions. Computational studies were performed on configurations derived from a "parameterized" supersonic inlet. These include steady supersonic flows with normal and oblique shocks, steady subsonic flow in a diffuser, and unsteady flow with the propagation and reflection of an acoustic disturbance.
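    Grid convergence studies of the kind mentioned above are typically summarized by an observed order of accuracy and a Richardson-extrapolated value computed from solutions on three systematically refined grids. The numbers in the sketch below are illustrative, not from the study.

```python
# Minimal sketch of the grid-convergence part of a verification study:
# estimate the observed order of accuracy from three grids with a constant
# refinement ratio and Richardson-extrapolate toward the grid-free value.
import math

f_coarse, f_medium, f_fine = 0.9120, 0.9030, 0.9007   # some integrated quantity
r = 2.0                                                # grid refinement ratio

p_observed = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
f_extrapolated = f_fine + (f_fine - f_medium) / (r**p_observed - 1)

print(f"observed order of accuracy = {p_observed:.2f}")
print(f"Richardson-extrapolated value = {f_extrapolated:.5f}")
```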

  1. Verification and Validation (V&V) Methodologies for Multiphase Turbulent and Explosive Flows. V&V Case Studies of Computer Simulations from Los Alamos National Laboratory GMFIX codes

    NASA Astrophysics Data System (ADS)

    Dartevelle, S.

    2006-12-01

    Large-scale volcanic eruptions are inherently hazardous events, hence cannot be described by detailed and accurate in situ measurements; as a result, volcanic explosive phenomenology is inadequately constrained in terms of initial and inflow conditions. Consequently, little to no real-time data exist to Verify and Validate computer codes developed to model these geophysical events as a whole. However, code Verification and Validation remains a necessary step, particularly when volcanologists use numerical data for mitigation of volcanic hazards, as is more often done nowadays. The Verification and Validation (V&V) process formally assesses the level of 'credibility' of numerical results produced within a range of specific applications. The first step, Verification, is 'the process of determining that a model implementation accurately represents the conceptual description of the model', which requires either exact analytical solutions or highly accurate simplified experimental data. The second step, Validation, is 'the process of determining the degree to which a model is an accurate representation of the real world', which requires complex experimental data of the 'real world' physics. The Verification step is rather simple to formally achieve, while, in the 'real world' explosive volcanism context, the second step, Validation, is nearly impossible. Hence, instead of validating computer codes against the whole large-scale unconstrained volcanic phenomenology, we rather suggest focusing on the key physics which control these volcanic clouds, viz., momentum-driven supersonic jets and multiphase turbulence. We propose to compare numerical results against a set of simple but well-constrained analog experiments, which uniquely and unambiguously represent these two key phenomena separately. Herewith, we use GMFIX (Geophysical Multiphase Flow with Interphase eXchange, v1.62), a set of multiphase-CFD FORTRAN codes, which have been recently redeveloped to meet the strict Quality Assurance, verification, and validation requirements of the Office of Civilian Radioactive Waste Management of the US Dept of Energy. GMFIX solves Navier-Stokes and energy partial differential equations for each phase with appropriate turbulence and interfacial coupling between phases. For momentum-driven single- to multi-phase underexpanded jets, the position of the first Mach disk is known empirically as a function of both the pressure ratio, K, and the particle mass fraction, Phi, at the nozzle. Namely, the higher K, the further downstream the Mach disk, and the higher Phi, the further upstream the first Mach disk. We show that GMFIX captures these two essential features. In addition, GMFIX displays all the properties found in these jets, such as expansion fans, incident and reflected shocks, and subsequent downstream Mach disks, which make this code ideal for further investigations of equivalent volcanological phenomena. One of the other most challenging aspects of volcanic phenomenology is the multiphase nature of turbulence. We also validated GMFIX by comparing velocity profiles and turbulence quantities against well-constrained analog experiments. The computed velocity profiles agree with the analog ones, as do those of the production of turbulent quantities. Overall, the Verification and Validation experiments, although inherently challenging, suggest that GMFIX captures the most essential dynamical properties of multiphase and supersonic flows and jets.

  2. Verification Games: Crowd-Sourced Formal Verification

    DTIC Science & Technology

    2016-03-01

    Verification Games: Crowd-Sourced Formal Verification. University of Washington, final technical report, March 2016; dates covered June 2012 to September 2015; contract number FA8750... Abstract: Over the more than three years of the project Verification Games: Crowd-sourced

  3. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  4. Verification Image of The Veins on The Back Palm with Modified Local Line Binary Pattern (MLLBP) and Histogram

    NASA Astrophysics Data System (ADS)

    Prijono, Agus; Darmawan Hangkawidjaja, Aan; Ratnadewi; Saleh Ahmar, Ansari

    2018-01-01

    Verification methods used today, such as fingerprints, signatures, and personal identification numbers (PINs) in banking systems, identity cards, and attendance systems, are easily copied and forged. This makes such systems insecure and vulnerable to access by unauthorized persons. In this research, a verification system is implemented using images of the blood vessels on the back of the palm; vein patterns are harder to imitate because they lie inside the human body, so they are safer to use for recognition. The blood vessels on the back of the human hand are unique: even twins have different vein images. Moreover, the vein image does not depend on a person's age, so it can be used over the long term, except in cases of accident or disease. Because the vein pattern is unique, it can be used to recognize a person. In this paper, we use a modified method to identify a person based on the blood-vessel image, namely the Modified Local Line Binary Pattern (MLLBP). Matching of the extracted blood-vessel image features is performed using the Hamming distance. One verification test case was performed by calculating the percentage of correct acceptances of the same person; a rejection error occurs if a person is not matched by the system against his or her own data. For 10 persons, comparing 15 images against 5 enrolled vein images per person yielded an 80.67% success rate. Another test case verified pairs of images from different persons (forgeries); verification is considered correct if the system rejects the forged image. For ten different persons, the system correctly rejected the forgeries in 94% of cases.
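
    The matching step described above compares binary feature codes with a Hamming distance; a minimal sketch follows. The bit length and decision threshold are arbitrary placeholders, and the MLLBP feature extraction itself is not reproduced here.

      def hamming_distance(code_a, code_b):
          """Fraction of differing bits between two equal-length binary codes."""
          if len(code_a) != len(code_b):
              raise ValueError("codes must have equal length")
          diff = sum(1 for a, b in zip(code_a, code_b) if a != b)
          return diff / len(code_a)

      def verify(enrolled_code, probe_code, threshold=0.25):
          """Accept the probe if its normalized Hamming distance to the
          enrolled template is below the (placeholder) threshold."""
          return hamming_distance(enrolled_code, probe_code) <= threshold

      enrolled = [1, 0, 1, 1, 0, 0, 1, 0]   # toy 8-bit template
      probe    = [1, 0, 1, 0, 0, 0, 1, 0]   # one bit differs -> distance 0.125
      print(verify(enrolled, probe))        # True under the 0.25 threshold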

  5. [Validation and verification of microbiology methods].

    PubMed

    Camaró-Sala, María Luisa; Martínez-García, Rosana; Olmos-Martínez, Piedad; Catalá-Cuenca, Vicente; Ocete-Mochón, María Dolores; Gimeno-Cardona, Concepción

    2015-01-01

    Clinical microbiologists should ensure, to the maximum level allowed by scientific and technical development, the reliability of their results. This implies that, in addition to meeting the technical criteria that ensure their validity, tests must be performed under conditions that allow comparable results to be obtained, regardless of the laboratory that performs them. In this sense, the use of recognized and accepted reference methods is the most effective tool to provide these guarantees. The activities related to verification and validation of analytical methods have become very important, as techniques and increasingly complex analytical equipment are continuously being developed and updated, and professionals have an interest in ensuring the quality of processes and results. The definitions of validation and verification are described, along with the different types of validation/verification, the types of methods, and the level of validation necessary depending on the degree of standardization. The situations in which validation/verification is mandatory and/or recommended are discussed, including those particularly related to validation in Microbiology. The importance of promoting the use of reference strains and standard controls in Microbiology is stressed, as well as the importance of participation in External Quality Assessment programs to demonstrate technical competence. The emphasis is on how to calculate some of the parameters required for validation/verification, such as accuracy and precision. The development of these concepts can be found in the SEIMC microbiological procedure number 48: «Validation and verification of microbiological methods» www.seimc.org/protocols/microbiology. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  6. Fostering group identification and creativity in diverse groups: the role of individuation and self-verification.

    PubMed

    Swann, William B; Kwan, Virginia S Y; Polzer, Jeffrey T; Milton, Laurie P

    2003-11-01

    A longitudinal study examined the interplay of identity negotiation processes and diversity in small groups of master's of business administration (MBA) students. When perceivers formed relatively positive impressions of other group members, higher diversity predicted more individuation of targets. When perceivers formed relatively neutral impressions of other group members, however, higher diversity predicted less individuation of targets. Individuation at the outset of the semester predicted self-verification effects several weeks later, and self-verification, in turn, predicted group identification and creative task performance. The authors conclude that contrary to self-categorization theory, fostering individuation and self-verification in diverse groups may maximize group identification and productivity.

  7. Developing a Test for Assessing Elementary Students' Comprehension of Science Texts

    ERIC Educational Resources Information Center

    Wang, Jing-Ru; Chen, Shin-Feng; Tsay, Reuy-Fen; Chou, Ching-Ting; Lin, Sheau-Wen; Kao, Huey-Lien

    2012-01-01

    This study reports on the process of developing a test to assess students' reading comprehension of scientific materials and on the statistical results of the verification study. A combination of classic test theory and item response theory approaches was used to analyze the assessment data from a verification study. Data analysis indicates the…

  8. Industrial methodology for process verification in research (IMPROVER): toward systems biology verification

    PubMed Central

    Meyer, Pablo; Hoeng, Julia; Rice, J. Jeremy; Norel, Raquel; Sprengel, Jörg; Stolle, Katrin; Bonk, Thomas; Corthesy, Stephanie; Royyuru, Ajay; Peitsch, Manuel C.; Stolovitzky, Gustavo

    2012-01-01

    Motivation: Analyses and algorithmic predictions based on high-throughput data are essential for the success of systems biology in academic and industrial settings. Organizations, such as companies and academic consortia, conduct large multi-year scientific studies that entail the collection and analysis of thousands of individual experiments, often over many physical sites and with internal and outsourced components. To extract maximum value, the interested parties need to verify the accuracy and reproducibility of data and methods before the initiation of such large multi-year studies. However, systematic and well-established verification procedures do not exist for automated collection and analysis workflows in systems biology, which could lead to inaccurate conclusions. Results: We present here a review of the current state of systems biology verification and a detailed methodology to address its shortcomings. This methodology, named ‘Industrial Methodology for Process Verification in Research’ or IMPROVER, consists of evaluating a research program by dividing a workflow into smaller building blocks that are individually verified. The verification of each building block can be done internally by members of the research program or externally by ‘crowd-sourcing’ to an interested community. www.sbvimprover.com Implementation: This methodology could become the preferred choice to verify systems biology research workflows that are becoming increasingly complex and sophisticated in industrial and academic settings. Contact: gustavo@us.ibm.com PMID:22423044

  9. EXAMINING THE ROLE AND RESEARCH CHALLENGES OF SOCIAL MEDIA AS A TOOL FOR NONPROLIFERATION AND ARMS CONTROL TREATY VERIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henry, Michael J.; Cramer, Nicholas O.; Benz, Jacob M.

    Traditional arms control treaty verification activities typically involve a combination of technical measurements via physical and chemical sensors, state declarations, political agreements, and on-site inspections involving international subject matter experts. However, the ubiquity of the internet, and the electronic sharing of data that it enables, has made available a wealth of open source information with the potential to benefit verification efforts. Open source information is already being used by organizations such as the International Atomic Energy Agency to support the verification of state-declared information, prepare inspectors for in-field activities, and to maintain situational awareness. The recent explosion in social media use has opened new doors to exploring the attitudes, moods, and activities around a given topic. Social media platforms, such as Twitter, Facebook, and YouTube, offer an opportunity for individuals, as well as institutions, to participate in a global conversation at minimal cost. Social media data can also provide a more data-rich environment, with text data being augmented with images, videos, and location data. The research described in this paper investigates the utility of applying social media signatures as potential arms control and nonproliferation treaty verification tools and technologies, as determined through a series of case studies. The treaty relevant events that these case studies touch upon include detection of undeclared facilities or activities, determination of unknown events recorded by the International Monitoring System (IMS), and the global media response to the occurrence of an Indian missile launch. The case studies examine how social media can be used to fill an information gap and provide additional confidence to a verification activity. The case studies represent, either directly or through a proxy, instances where social media information may be available that could potentially augment the evaluation of an event. The goal of this paper is to instigate a discussion within the verification community as to where and how social media can be effectively utilized to complement and enhance traditional treaty verification efforts. In addition, this paper seeks to identify areas of future research and development necessary to adapt social media analytic tools and techniques, and to form the seed for social media analytics to aid and inform arms control and nonproliferation policymakers and analysts. While social media analysis (as well as open source analysis as a whole) will not ever be able to replace traditional arms control verification measures, they do supply unique signatures that can augment existing analysis.

  10. Watermarking 3D Objects for Verification

    DTIC Science & Technology

    1999-01-01

    signal (audio/image/video) processing and steganography fields, and even newer to the computer graphics community. Inherently, digital watermarking of...quality images, and digital video. The field of digital watermarking is relatively new, and many of its terms have not been well defined. Among the different media types, watermarking of 2D still images is comparatively better studied. Inherently, digital watermarking of 3D objects remains a

  11. The Empirical Verification of an Assignment of Items to Subtests: The Oblique Multiple Group Method versus the Confirmatory Common Factor Method

    ERIC Educational Resources Information Center

    Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E.; ten Berge, Jos M. F.

    2008-01-01

    This study compares two confirmatory factor analysis methods on their ability to verify whether correct assignments of items to subtests are supported by the data. The confirmatory common factor (CCF) method is used most often and defines nonzero loadings so that they correspond to the assignment of items to subtests. Another method is the oblique…

  12. Physical property measurements on analog granites related to the joint verification experiment

    NASA Astrophysics Data System (ADS)

    Martin, Randolph J., III; Coyner, Karl B.; Haupt, Robert W.

    1990-08-01

    A key element in the JVE (Joint Verification Experiment), conducted jointly between the United States and the USSR, is the analysis of the geology and physical properties of the rocks at the respective test sites. A study was initiated to examine unclassified crystalline rock specimens obtained from areas near the Soviet site, Semipalatinsk, and appropriate analog samples selected from Mt. Katadin, Maine. These rocks were also compared to Sierra White and Westerly Granite, which have been studied in great detail. Measurements performed to characterize these rocks were: (1) uniaxial strain with simultaneous compressional and shear wave velocities; (2) hydrostatic compression to 150 MPa with simultaneous compressional and shear wave velocities; (3) attenuation measurements as a function of frequency and strain amplitude for both dry and water-saturated conditions. Elastic moduli determined from the hydrostatic compression and uniaxial strain tests show that the rock matrix/mineral properties were comparable in magnitude, varying within 25 percent from sample to sample. These properties appear to be approximately isotropic, especially at high pressures. However, anisotropy evident for certain samples at pressures below 35 MPa is attributed to dominant pre-existing microcrack populations and their alignments. The dependence of extensional attenuation and Young's modulus on strain amplitude was experimentally determined for intact Sierra White granite using the hysteresis loop technique.
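
    For context, elastic moduli of the kind discussed above are commonly derived from measured compressional and shear velocities and bulk density using the standard isotropic relations; a minimal sketch follows. The sample values are illustrative only, not data from the JVE study.

      def isotropic_moduli(vp, vs, rho):
          """Standard isotropic relations: bulk and shear moduli (Pa) and
          Young's modulus from P-wave velocity vp (m/s), S-wave velocity vs
          (m/s), and density rho (kg/m^3)."""
          G = rho * vs**2                        # shear modulus
          K = rho * (vp**2 - 4.0 * vs**2 / 3.0)  # bulk modulus
          E = 9.0 * K * G / (3.0 * K + G)        # Young's modulus
          return K, G, E

      # Illustrative granite-like values (not measurements from the study samples).
      K, G, E = isotropic_moduli(vp=5500.0, vs=3200.0, rho=2650.0)
      print(f"K = {K/1e9:.1f} GPa, G = {G/1e9:.1f} GPa, E = {E/1e9:.1f} GPa")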

  13. Design and Mechanical Evaluation of a Capacitive Sensor-Based Indexed Platform for Verification of Portable Coordinate Measuring Instruments

    PubMed Central

    Avila, Agustín Brau; Mazo, Jorge Santolaria; Martín, Juan José Aguilar

    2014-01-01

    During the last years, the use of Portable Coordinate Measuring Machines (PCMMs) in industry has increased considerably, mostly due to their flexibility for accomplishing in-line measuring tasks as well as their reduced costs and operational advantages as compared to traditional coordinate measuring machines (CMMs). However, their operation has a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification procedures. In this work the mechanical design of an indexed metrology platform (IMP) is presented. The aim of the IMP is to increase the final accuracy and to radically simplify the calibration, identification and verification of geometrical parameter procedures of PCMMs. The IMP allows us to fix the calibrated gauge object and move the measuring instrument in such a way that it is possible to cover most of the instrument working volume, reducing the time and operator fatigue to carry out these types of procedures. PMID:24451458

  14. Design and mechanical evaluation of a capacitive sensor-based indexed platform for verification of portable coordinate measuring instruments.

    PubMed

    Avila, Agustín Brau; Mazo, Jorge Santolaria; Martín, Juan José Aguilar

    2014-01-02

    During the last years, the use of Portable Coordinate Measuring Machines (PCMMs) in industry has increased considerably, mostly due to their flexibility for accomplishing in-line measuring tasks as well as their reduced costs and operational advantages as compared to traditional coordinate measuring machines (CMMs). However, their operation has a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification procedures. In this work the mechanical design of an indexed metrology platform (IMP) is presented. The aim of the IMP is to increase the final accuracy and to radically simplify the calibration, identification and verification of geometrical parameter procedures of PCMMs. The IMP allows us to fix the calibrated gauge object and move the measuring instrument in such a way that it is possible to cover most of the instrument working volume, reducing the time and operator fatigue to carry out these types of procedures.

  15. Development of CFC-Free Cleaning Processes at the NASA White Sands Test Facility

    NASA Technical Reports Server (NTRS)

    Beeson, Harold; Kirsch, Mike; Hornung, Steven; Biesinger, Paul

    1995-01-01

    The NASA White Sands Test Facility (WSTF) is developing cleaning and verification processes to replace currently used chlorofluorocarbon-113- (CFC-113-) based processes. The processes being evaluated include both aqueous- and solvent-based techniques. The presentation will include the findings of investigations of aqueous cleaning and verification processes that are based on a draft of a proposed NASA Kennedy Space Center (KSC) cleaning procedure. Verification testing with known contaminants, such as hydraulic fluid and commonly used oils, established correlations between nonvolatile residue and CFC-113. Recoveries ranged from 35 to 60 percent of theoretical. WSTF is also investigating enhancements to aqueous sampling for organics and particulates. Although aqueous alternatives have been identified for several processes, a need still exists for nonaqueous solvent cleaning, such as the cleaning and cleanliness verification of gauges used for oxygen service. The cleaning effectiveness of tetrachloroethylene (PCE), trichloroethylene (TCE), ethanol, hydrochlorofluorocarbon-225 (HCFC-225), tert-butylmethylether, and n-Hexane was evaluated using aerospace gauges and precision instruments and then compared to the cleaning effectiveness of CFC-113. Solvents considered for use in oxygen systems were also tested for oxygen compatibility using high-pressure oxygen autoignition and liquid oxygen mechanical impact testing.

  16. A Methodology for Evaluating Artifacts Produced by a Formal Verification Process

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.; Miner, Paul S.; Person, Suzette

    2011-01-01

    The goal of this study is to produce a methodology for evaluating the claims and arguments employed in, and the evidence produced by, formal verification activities. To illustrate the process, we conduct a full assessment of a representative case study for the Enabling Technology Development and Demonstration (ETDD) program. We assess the model checking and satisfiability solving techniques as applied to a suite of abstract models of fault tolerant algorithms which were selected to be deployed in Orion, namely the TTEthernet startup services specified and verified in the Symbolic Analysis Laboratory (SAL) by TTTech. To this end, we introduce the Modeling and Verification Evaluation Score (MVES), a metric that is intended to estimate the amount of trust that can be placed on the evidence that is obtained. The results of the evaluation process and the MVES can then be used by non-experts and evaluators in assessing the credibility of the verification results.

  17. Results from an Independent View on The Validation of Safety-Critical Space Systems

    NASA Astrophysics Data System (ADS)

    Silva, N.; Lopes, R.; Esper, A.; Barbosa, R.

    2013-08-01

    Independent verification and validation (IV&V) has been a key process for decades and is considered in several international standards. One of the activities described in the “ESA ISVV Guide” is independent test verification (stated as Integration/Unit Test Procedures and Test Data Verification). This activity is commonly overlooked, since customers do not really see the added value of thoroughly checking the validation team's work (it could be seen as testing the tester's work). This article presents the consolidated results of a large set of independent test verification activities, including the main difficulties encountered, the results obtained, and the advantages/disadvantages of these activities for industry. This study will support customers in opting in or out of this task in future IV&V contracts, since we provide concrete results from real case studies in the space embedded systems domain.

  18. Development of an inpatient operational pharmacy productivity model.

    PubMed

    Naseman, Ryan W; Lopez, Ben R; Forrey, Ryan A; Weber, Robert J; Kipp, Kris M

    2015-02-01

    An innovative model for measuring the operational productivity of medication order management in inpatient settings is described. Order verification within a computerized prescriber order-entry system was chosen as the pharmacy workload driver. To account for inherent variability in the tasks involved in processing different types of orders, pharmaceutical products were grouped by class, and each class was assigned a time standard, or "medication complexity weight," reflecting the intensity of pharmacist and technician activities (verification of drug indication, verification of appropriate dosing, adverse-event prevention and monitoring, medication preparation, product checking, product delivery, returns processing, nurse/provider education, and problem-order resolution). The resulting "weighted verifications" (WV) model allows productivity monitoring by job function (pharmacist versus technician) to guide hiring and staffing decisions. A 9-month historical sample of verified medication orders was analyzed using the WV model, and the calculations were compared with values derived from two established models, one based on the Case Mix Index (CMI) and the other based on the proprietary Pharmacy Intensity Score (PIS). Evaluation of Pearson correlation coefficients indicated that values calculated using the WV model were highly correlated with those derived from the CMI- and PIS-based models (r = 0.845 and 0.886, respectively). Relative to the comparator models, the WV model offered the advantage of less period-to-period variability. The WV model yielded productivity data that correlated closely with values calculated using two validated workload management models. The model may be used as an alternative measure of pharmacy operational productivity. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
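
    A minimal sketch of the weighted-verification idea described above: each verified order is scaled by a class-specific complexity weight before counting workload. The medication classes, weights, counts, and staffed hours below are invented placeholders; the model's actual weights are institution-specific.

      def weighted_verifications(order_counts, complexity_weights):
          """Sum of verified orders per class, each scaled by its complexity
          weight (a time standard), giving workload in weighted units."""
          return sum(order_counts[c] * complexity_weights[c] for c in order_counts)

      # Hypothetical shift data: counts of verified orders by class.
      counts  = {"simple_oral": 120, "iv_admixture": 35, "chemotherapy": 6}
      weights = {"simple_oral": 1.0, "iv_admixture": 2.5, "chemotherapy": 6.0}

      wv = weighted_verifications(counts, weights)
      pharmacist_hours = 16.0
      print(f"workload = {wv:.1f} weighted verifications")
      print(f"productivity = {wv / pharmacist_hours:.1f} WV per pharmacist-hour")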

  19. Utility of biochemical verification of tobacco cessation in the Department of Veterans Affairs.

    PubMed

    Noonan, Devon; Jiang, Yunyun; Duffy, Sonia A

    2013-03-01

    Research on the validity of self-reported tobacco use has varied by the population studied and has yet to be examined among smokers served by the Department of Veterans Affairs (VA). The purpose of this study was to determine the predictors of returning a biochemical urine test and the specificity and sensitivity of self-reported tobacco use status compared to biochemical verification. This was a sub-analysis of the larger Tobacco Tactics research study, a pre-/post- non-randomized controlled design study to implement and evaluate a smoking cessation intervention in three large VA hospitals. Inpatient smokers completed baseline demographic, health history, and tobacco use measures. Patients were sent a follow-up survey at six months to assess tobacco use and urine cotinine levels. A total of 645 patients returned six-month surveys, of which 578 also returned a urinary cotinine strip at six months. Multivariate analysis of the predictors of return rate revealed that those more likely to return biochemical verification of their smoking status were younger, more likely to be thinking about quitting smoking and to have arthritis, and less likely to have heart disease. The sensitivity and specificity of self-reported tobacco use were 97% (95% confidence interval=0.95-0.98) and 93% (95% confidence interval=0.84-0.98), respectively. The misclassification rate among self-reported quitters was 21%. The misclassification rate among self-reported tobacco users was 1%. The sensitivity and specificity of self-reported tobacco use were high among veteran smokers, yet among self-reported quitters the misclassification rate was high at 21%, suggesting that validating self-report tobacco measures is warranted in future studies, especially in populations that are prone to misclassification. Copyright © 2012 Elsevier Ltd. All rights reserved.
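
    The sensitivity and specificity figures above follow from a standard 2x2 comparison of self-report against the biochemical result; a minimal sketch with invented counts (not the study's data) is shown below.

      def diagnostic_metrics(tp, fn, fp, tn):
          """Sensitivity and specificity of self-report relative to the
          biochemical (reference) classification."""
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          return sensitivity, specificity

      # Hypothetical counts (placeholders): biochemically positive subjects who
      # self-reported use (tp) or quitting (fn); biochemically negative subjects
      # who self-reported use (fp) or quitting (tn).
      tp, fn, fp, tn = 485, 15, 5, 73

      sens, spec = diagnostic_metrics(tp, fn, fp, tn)
      misclassified_quitters = fn / (fn + tn)   # self-reported quitters who were positive
      print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
            f"quitter misclassification={misclassified_quitters:.2f}")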

  20. GFO-1 Geophysical Data Record and Orbit Verifications for Global Change Studies

    NASA Technical Reports Server (NTRS)

    Shum, C. K.

    2000-01-01

    This final report summarizes the research work conducted under NASA's Physical Oceanography Program, entitled GFO-1 Geophysical Data Record and Orbit Verifications for Global Change Studies, for the investigation period from December 1, 1997 through November 30, 2000. The primary objectives of the investigation include providing verification and improvement of the precise orbit, media, geophysical, and instrument corrections needed to accurately reduce the U.S. Navy's Geosat Follow-On-1 (GFO-1) mission radar altimeter data to sea level measurements. The status of the GFO satellite (instrument and spacecraft operations, orbital tracking, and altimeter) is summarized. The GFO spacecraft has been accepted by the Navy from Ball Aerospace and has been declared operational since November 2000. We have participated in four official GFO calibration/validation periods (Cal/Val I-IV), spanning June 1999 through October 2000. Results of verification of the GFO orbit and geophysical data record measurements, both from NOAA (IGDR) and from the Navy (NGDR), are reported. Our preliminary results indicate that: (1) the precise orbit (GSFC and OSU) can be determined to approx. 5 - 6 cm rms radially using SLR and altimeter crossovers; (2) the estimated GFO MOE (GSFC or NRL) radial orbit accuracy is approx. 7 - 30 cm and the operational Doppler orbit accuracy is approx. 60 - 350 cm; after bias and tilt adjustment (1000 km arc), the estimated Doppler orbit accuracy is approx. 1.2 - 6.5 cm rms and the MOE accuracy is approx. 1.0 - 2.3 cm; (3) the geophysical and media corrections have been validated against in situ measurements and measurements from other operating altimeters (T/P and ERS-2); altimeter time bias is insignificant at 0-2 ms; sea state bias is approx. 3 - 4.5% of SWH; the wet troposphere correction has an approx. 1 cm bias and approx. 3 cm rms when compared with ERS-2 data; use of GIM and IRI95 provides an ionosphere correction accurate to 2-3 cm rms during medium to high solar activity; (4) the noise of the GFO altimeter data (uncorrected SSH) is about 15 mm, compared to 19 mm for ERS-2 and 12 mm for TOPEX. It is anticipated that the operational GFO-1 altimeter data will contribute to a number of research areas in physical oceanography. A list of relevant presentations and publications is attached.

  1. Spacecraft attitude calibration/verification baseline study

    NASA Technical Reports Server (NTRS)

    Chen, L. C.

    1981-01-01

    A baseline study for a generalized spacecraft attitude calibration/verification system is presented. It can be used to define software specifications for three major functions required by a mission: the pre-launch parameter observability and data collection strategy study; the in-flight sensor calibration; and the post-calibration attitude accuracy verification. Analytical considerations are given for both single-axis and three-axis spacecraft. The three-axis attitudes considered include the inertial-pointing attitudes, the reference-pointing attitudes, and attitudes undergoing specific maneuvers. The attitude sensors and hardware considered include the Earth horizon sensors, the plane-field Sun sensors, the coarse and fine two-axis digital Sun sensors, the three-axis magnetometers, the fixed-head star trackers, and the inertial reference gyros.

  2. Thermal acoustic oscillations, volume 2. [cryogenic fluid storage

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.; Sims, W. H.; Fan, C.

    1975-01-01

    A number of thermal acoustic oscillation phenomena and their effects on cryogenic systems were studied. The conditions which cause or suppress oscillations, the frequency, amplitude and intensity of oscillations when they exist, and the heat loss they induce are discussed. Methods of numerical analysis utilizing the digital computer were developed for use in cryogenic systems design. In addition, an experimental verification program was conducted to study oscillation wave characteristics and boiloff rate. The data were then reduced and compared with the analytical predictions.

  3. Verification and transfer of thermal pollution model. Volume 4: User's manual for three-dimensional rigid-lid model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Nwadike, E. V.; Sinha, S. E.

    1982-01-01

    The theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model is described. Model verification at two sites and a separate user's manual for each model are included. The 3-D model has two forms: free surface and rigid lid. The former allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth, as in estuaries and coastal regions. The latter is suited for small surface wave heights compared to depth, because surface elevation is removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions. The free surface model also provides surface height variations with time.

  4. Verification and transfer of thermal pollution model. Volume 2: User's manual for 3-dimensional free-surface model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Tuann, S. Y.; Lee, C. R.

    1982-01-01

    The six-volume report: describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions.

  5. Landing System Development- Design and Test Prediction of a Lander Leg Using Nonlinear Analysis

    NASA Astrophysics Data System (ADS)

    Destefanis, Stefano; Buchwald, Robert; Pellegrino, Pasquale; Schroder, Silvio

    2014-06-01

    Several mission studies have been performed focusing on soft and precision landing using landing legs. Examples of such missions are Mars Sample Return scenarios (MSR), lunar landing scenarios (MoonNEXT, Lunar Lander), and small-body sample return studies (Marco Polo, MMSR, Phootprint). Such missions foresee a soft landing on the planetary surface to deliver payload in a controlled manner and to limit the landing loads. To ensure a successful final landing phase, a landing system is needed, capable of absorbing the residual velocities (vertical, horizontal and angular) at touchdown and ensuring a controlled attitude after landing. Such requirements can be fulfilled by using landing legs with adequate damping. The Landing System Development (LSD) study, currently in its phase 2, foresees the design, analysis, verification, manufacturing and testing of a representative landing leg breadboard based on the Phase B design of the ESA Lunar Lander. Drop tests of a single leg will be performed both on rigid and soft ground, at several impact angles. The activity is covered under an ESA contract with TAS-I as prime contractor, responsible for analysis and verification, Astrium GmbH for design and test, and QinetiQ Space for manufacturing. Drop tests will be performed at the Institute of Space Systems of the German Aerospace Center (DLR-RY) in Bremen. This paper presents an overview of the analytical simulations (test predictions and design verification) performed, comparing the results produced by the Astrium-built multi-body model (rigid bodies, with nonlinearities accounted for in mechanical joints and force definitions, based on development tests) and the TAS-I-built nonlinear explicit model (fully deformable bodies).

  6. ENVIRONMENTAL TECHNOLOGY VERIFICATION (ETV) PROGRAM CASE STUDIES: DEMONSTRATING PROGRAM OUTCOMES

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) Environmental Technology Verification (ETV) Program evaluates the performance of innovative air, water, pollution prevention and monitoring technologies that have the potential to improve human health and the environment. This bookle...

  7. Leveraging pattern matching to solve SRAM verification challenges at advanced nodes

    NASA Astrophysics Data System (ADS)

    Kan, Huan; Huang, Lucas; Yang, Legender; Zou, Elaine; Wan, Qijian; Du, Chunshan; Hu, Xinyi; Liu, Zhengfang; Zhu, Yu; Zhang, Recoo; Huang, Elven; Muirhead, Jonathan

    2018-03-01

    Memory is a critical component in today's system-on-chip (SoC) designs. Static random-access memory (SRAM) blocks are assembled by combining intellectual property (IP) blocks that come from SRAM libraries developed and certified by the foundries for both functionality and a specific process node. Customers place these SRAM IP in their designs, adjusting as necessary to achieve DRC-clean results. However, any changes a customer makes to these SRAM IP during implementation, whether intentionally or in error, can impact yield and functionality. Physical verification of SRAM has always been a challenge, because these blocks usually contain smaller feature sizes and spacing constraints compared to traditional logic or other layout structures. At advanced nodes, critical dimension becomes smaller and smaller, until there is almost no opportunity to use optical proximity correction (OPC) and lithography to adjust the manufacturing process to mitigate the effects of any changes. The smaller process geometries, reduced supply voltages, increasing process variation, and manufacturing uncertainty mean accurate SRAM physical verification results are not only reaching new levels of difficulty, but also new levels of criticality for design success. In this paper, we explore the use of pattern matching to create an SRAM verification flow that provides both accurate, comprehensive coverage of the required checks and visual output to enable faster, more accurate error debugging. Our results indicate that pattern matching can enable foundries to improve SRAM manufacturing yield, while allowing designers to benefit from SRAM verification kits that can shorten the time to market.

  8. SU-E-T-455: Impact of Different Independent Dose Verification Software Programs for Secondary Check

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Itano, M; Yamazaki, T; Kosaka, M

    2015-06-15

    Purpose: There have been many reports of different dose calculation algorithms for treatment planning systems (TPS). An independent dose verification program (IndpPro) is essential to verify clinical plans from the TPS. However, the accuracy of different independent dose verification programs has not been evident. We conducted a multi-institutional study to reveal the impact of different IndpPros used with different TPSs. Methods: Three institutes participated in this study. They used two different IndpPros (RADCALC and Simple MU Analysis (SMU)), both implementing the Clarkson algorithm. RADCALC required the input of the radiological path length (RPL) computed by the TPSs (Eclipse or Pinnacle3), whereas SMU used CT images to compute the RPL independently of the TPS. An ion-chamber measurement in a water-equivalent phantom was performed to evaluate the accuracy of the two IndpPros and the TPS at each institute. Next, the accuracy of dose calculation using the two IndpPros compared to the TPS was assessed in clinical plans. Results: The accuracy of the IndpPros and the TPSs in the homogeneous phantom was within ±1% of the measurement. 1543 treatment fields were collected from the patients treated in the institutes. RADCALC showed better accuracy (0.9 ± 2.2%) than SMU (1.7 ± 2.1%). However, the accuracy was dependent on the TPS (Eclipse: 0.5%, Pinnacle3: 1.0%). The accuracy of RADCALC with Eclipse was similar to that of SMU in one of the institutes. Conclusion: Depending on the independent dose verification program, the accuracy shows a systematic variation even though the measurement comparison showed a similar variation. The variation was affected by the radiological path length calculation. An IndpPro used with Pinnacle3 shows a different variation because Pinnacle3 computes the RPL using physical density, whereas Eclipse and SMU use electron density.

  9. Separating stages of arithmetic verification: An ERP study with a novel paradigm.

    PubMed

    Avancini, Chiara; Soltész, Fruzsina; Szűcs, Dénes

    2015-08-01

    In studies of arithmetic verification, participants typically encounter two operands and they carry out an operation on these (e.g. adding them). Operands are followed by a proposed answer and participants decide whether this answer is correct or incorrect. However, interpretation of results is difficult because multiple parallel, temporally overlapping numerical and non-numerical processes of the human brain may contribute to task execution. In order to overcome this problem here we used a novel paradigm specifically designed to tease apart the overlapping cognitive processes active during arithmetic verification. Specifically, we aimed to separate effects related to detection of arithmetic correctness, detection of the violation of strategic expectations, detection of physical stimulus properties mismatch and numerical magnitude comparison (numerical distance effects). Arithmetic correctness, physical stimulus properties and magnitude information were not task-relevant properties of the stimuli. We distinguished between a series of temporally highly overlapping cognitive processes which in turn elicited overlapping ERP effects with distinct scalp topographies. We suggest that arithmetic verification relies on two major temporal phases which include parallel running processes. Our paradigm offers a new method for investigating specific arithmetic verification processes in detail. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Hierarchical Representation Learning for Kinship Verification.

    PubMed

    Kohli, Naman; Vatsa, Mayank; Singh, Richa; Noore, Afzel; Majumdar, Angshul

    2017-01-01

    Kinship verification has a number of applications, such as organizing large collections of images and recognizing resemblances among humans. In this paper, first, a human study is conducted to understand the capabilities of the human mind and to identify the discriminatory areas of a face that provide kinship cues. The visual stimuli presented to the participants determine their ability to recognize kin relationships using the whole face as well as specific facial regions. The effect of participant gender and age and of the kin-relation pair of the stimulus is analyzed using quantitative measures such as accuracy, discriminability index d', and perceptual information entropy. Utilizing the information obtained from the human study, a hierarchical kinship verification via representation learning (KVRL) framework is utilized to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed filtered contractive deep belief networks (fcDBN). The proposed feature representation encodes relational information present in images using filters and a contractive regularization penalty. A compact representation of facial images of kin is extracted as an output from the learned model, and a multi-layer neural network is utilized to verify the kin accurately. A new WVU kinship database is created, which consists of multiple images per subject to facilitate kinship verification. The results show that the proposed deep learning framework (KVRL-fcDBN) yields state-of-the-art kinship verification accuracy on the WVU kinship database and on four existing benchmark data sets. Furthermore, kinship information is used as a soft biometric modality to boost the performance of face verification via product of likelihood ratio and support vector machine based approaches. Using the proposed KVRL-fcDBN framework, an improvement of over 20% is observed in the performance of face verification.

  11. Code Verification of the HIGRAD Computational Fluid Dynamics Solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Buren, Kendra L.; Canfield, Jesse M.; Hemez, Francois M.

    2012-05-04

    The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization, and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems and the somewhat limited verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
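
    As an aside on the convergence-rate estimation mentioned above, the observed order of accuracy is commonly computed from discretization errors against the analytical solution at two mesh resolutions; a minimal sketch follows. The error values and mesh sizes are placeholders, not HIGRAD results.

      import math

      def observed_order(error_coarse, error_fine, h_coarse, h_fine):
          """Observed order of accuracy p from errors on two meshes:
          p = ln(e_coarse/e_fine) / ln(h_coarse/h_fine)."""
          return math.log(error_coarse / error_fine) / math.log(h_coarse / h_fine)

      # Hypothetical L2 errors from a grid-refinement study (placeholders).
      p = observed_order(error_coarse=4.0e-3, error_fine=1.0e-3,
                         h_coarse=0.02, h_fine=0.01)
      print(f"observed order of accuracy ~ {p:.2f}")   # ~2 for a second-order scheme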

  12. Dosimetric verification for intensity-modulated arc therapy plans by use of 2D diode array, radiochromic film and radiosensitive polymer gel.

    PubMed

    Hayashi, Naoki; Malmin, Ryan L; Watanabe, Yoichi

    2014-05-01

    Several tools are used for the dosimetric verification of intensity-modulated arc therapy (IMAT) treatment delivery. However, limited information is available for composite on-line evaluation of these tools. The purpose of this study was to evaluate the dosimetric verification of IMAT treatment plans using a 2D diode array detector (2D array), radiochromic film (RCF) and radiosensitive polymer gel dosimeter (RPGD). The specific verification plans were created for IMAT for two prostate cancer patients by use of the clinical treatment plans. Accordingly, the IMAT deliveries were performed with the 2D array on a gantry-mounting device, RCF in a cylindrical acrylic phantom, and the RPGD in two cylindrical phantoms. After the irradiation, the planar dose distributions from the 2D array and the RCFs, and the 3D dose distributions from the RPGD measurements were compared with the calculated dose distributions using the gamma analysis method (3% dose difference and 3-mm distance-to-agreement criterion), dose-dependent dose difference diagrams, dose difference histograms, and isodose distributions. The gamma passing rates of 2D array, RCFs and RPGD for one patient were 99.5%, 96.5% and 93.7%, respectively; the corresponding values for the second patient were 97.5%, 92.6% and 92.9%. Mean percentage differences between the RPGD measured and calculated doses in 3D volumes containing PTVs were -0.29 ± 7.1% and 0.97 ± 7.6% for the two patients, respectively. In conclusion, IMAT prostate plans can be delivered with high accuracy, although the 3D measurements indicated less satisfactory agreement with the treatment plans, mainly due to the dosimetric inaccuracy in low-dose regions of the RPGD measurements.
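
    A minimal one-dimensional sketch of the gamma comparison used above (3% dose difference, 3 mm distance-to-agreement): a brute-force illustration only, not the clinical analysis software used in the study, and with toy dose profiles in place of measured data.

      import math

      def gamma_1d(ref_positions, ref_doses, eval_positions, eval_doses,
                   dose_crit=0.03, dist_crit=3.0):
          """Fraction of reference points with gamma <= 1 under a global
          dose-difference / distance-to-agreement criterion (1D only)."""
          d_max = max(ref_doses)                    # global normalization dose
          passed = 0
          for xr, dr in zip(ref_positions, ref_doses):
              gamma = min(
                  math.sqrt(((xe - xr) / dist_crit) ** 2 +
                            ((de - dr) / (dose_crit * d_max)) ** 2)
                  for xe, de in zip(eval_positions, eval_doses)
              )
              passed += gamma <= 1.0
          return passed / len(ref_doses)

      # Toy profiles: evaluated profile shifted by 1 mm relative to the reference.
      xs = [float(i) for i in range(0, 50)]                # positions in mm
      ref = [100.0 * math.exp(-((x - 25.0) / 10.0) ** 2) for x in xs]
      ev  = [100.0 * math.exp(-((x - 26.0) / 10.0) ** 2) for x in xs]
      print(f"gamma passing rate: {100.0 * gamma_1d(xs, ref, xs, ev):.1f}%")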

  13. Software verification plan for GCS. [guidance and control software

    NASA Technical Reports Server (NTRS)

    Dent, Leslie A.; Shagnea, Anita M.; Hayhurst, Kelly J.

    1990-01-01

    This verification plan is written as part of an experiment designed to study the fundamental characteristics of the software failure process. The experiment will be conducted using several implementations of software that were produced according to industry-standard guidelines, namely the Radio Technical Commission for Aeronautics RTCA/DO-178A guidelines, Software Consideration in Airborne Systems and Equipment Certification, for the development of flight software. This plan fulfills the DO-178A requirements for providing instructions on the testing of each implementation of software. The plan details the verification activities to be performed at each phase in the development process, contains a step by step description of the testing procedures, and discusses all of the tools used throughout the verification process.

  14. SU-F-T-284: The Effect of Linear Accelerator Output Variation On the Quality of Patient Specific Rapid Arc Verification Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandhu, G; Cao, F; Szpala, S

    2016-06-15

    Purpose: The aim of the current study is to investigate the effect of machine output variation on the delivery of RapidArc verification plans. Methods: Three verification plans were generated using the Eclipse™ treatment planning system (V11.031) with a plan normalization value of 100.0%. These plans were delivered on the linear accelerators using the ArcCHECK device, with a machine output of 1.000 cGy/MU at the calibration point. These planned and delivered dose distributions were used as reference plans. Additional plans were created in Eclipse with normalization values ranging from 92.80% to 102% to mimic machine output ranging from 1.072 cGy/MU to 0.980 cGy/MU at the calibration point. These plans were compared against the reference plans using gamma indices (3%, 3 mm) and (2%, 2 mm). The calculated gammas were studied for their dependence on machine output. Plans were considered passed if 90% of the points satisfied the defined gamma criterion. Results: The gamma index (3%, 3 mm) was insensitive to output fluctuation within the output tolerance level (2% of calibration) and showed failures when the machine output deviation was ≥3%. Gamma (2%, 2 mm) was found to be more sensitive to output variation than gamma (3%, 3 mm) and showed failures when the output deviation was ≥1.7%. The variation of the gamma indices with output variability also showed a dependence on the plan parameters (e.g., MLC movement and gantry rotation). The variation of the percentage of points passing the gamma criteria with output variation followed a non-linear decrease beyond the output tolerance level. Conclusion: Data from the limited plans and output conditions showed that gamma (2%, 2 mm) is more sensitive to output fluctuations than gamma (3%, 3 mm). Work in progress, including detailed data from a large number of plans and a wide range of output conditions, may be able to determine the quantitative dependence of the gammas on machine output, and hence the effect on the quality of delivered RapidArc plans.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucconi, G; Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA; Bentefour, E

    Purpose: The clinical commissioning of a workflow for pre-treatment range verification/adjustment for head treatments of pediatric medulloblastoma patients, including dose monitoring during treatment. Methods: An array of Si diodes (DIODES Incorporated) is placed on the patient's skin on the side opposite the beam entrance. A “scout” SOBP beam, with a longer beam range so that it covers the diodes in its plateau, is delivered; the measured signal is analyzed and the extracted water equivalent path lengths (WEPL) are compared to the expected values, revealing whether a range correction is needed. The diodes stay in place during treatment to measure dose. The workflow was tested in solid water and head phantoms and validated against independent WEPL measurements. Both measured WEPL and skin doses were compared to computed values from the TPS (XiO); a Markus chamber was used for reference dose measurements. Results: The WEPL accuracy of the method was verified by comparing it with the dose extinction method. For both the solid water and the head phantom, the accuracy was in the sub-millimeter range, with a deviation of less than 1% from the value extracted from the TPS. The accuracy of dose measurements in the fall-off part of the dose profile was validated against the Markus chamber. The entire range verification workflow was successfully tested for the mock treatment of a head phantom with the standard delivery of 90 cGy per field per fraction. The WEPL measurement revealed no need for range correction. The dose measurements agreed to better than 4% with the prescription dose. The robustness of the method and workflow, including the detector array, hardware set, and software functions, was successfully stress-tested with multiple repetitions. Conclusion: The performance of the in-vivo range verification system and related workflow meets the clinical requirements in terms of the WEPL accuracy needed for pre-treatment range verification with acceptable dose to the patient.
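
    For context, the "expected" water-equivalent path length referred to above is typically obtained by integrating relative (to water) stopping power along the beam path through the CT-based patient model; a minimal sketch with invented values follows. This is an illustration of the general WEPL concept, not the clinical workflow software described in the abstract.

      def water_equivalent_path_length(relative_stopping_powers, step_mm):
          """Integrate relative stopping power along a sampled ray:
          WEPL = sum(RSP_i * step length). Values here are placeholders."""
          return sum(rsp * step_mm for rsp in relative_stopping_powers)

      # Hypothetical ray through soft tissue (~1.0) and bone (~1.6), 1 mm steps.
      rsp_along_ray = [1.0] * 60 + [1.6] * 10 + [1.0] * 30
      print(f"expected WEPL = {water_equivalent_path_length(rsp_along_ray, 1.0):.1f} mm water")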

  16. Comprehension of idioms in adolescents with language-based learning disabilities compared to their typically developing peers.

    PubMed

    Qualls, Constance Dean; Lantz, Jennifer M; Pietrzyk, Rose M; Blood, Gordon W; Hammer, Carol Scheffner

    2004-01-01

    Adolescents with language-based learning disabilities (LBLD) often interpret idioms literally. When idioms are provided in an enriched context, comprehension is compromised further because of the LBLD student's inability to assign multiple meanings to words, assemble and integrate information, and go beyond a local referent to derive a global, coherent meaning. This study tested the effects of context and familiarity on comprehension of 24 idioms in 22 adolescents with LBLD. The students completed the Idiom Comprehension Test (ICT) [Language, Speech, and Hearing Services in Schools 30 (1999) 141; LSHSS 34 (2003) 69] in one of two conditions: in a story or during a verification task. Within each condition were three familiarity levels: high, moderate, and low. The LBLD adolescents' data were then compared to previously collected data from 21 age-, gender-, and reading ability-matched typically developing (TD) peers. The relations between reading and language literacy and idiom comprehension were also examined in the LBLD adolescents. Results showed that: (a) the LBLD adolescents generally performed poorly relative to their TD counterparts; however, the groups performed comparably on the high and moderate familiarity idioms in the verification condition; (b) the LBLD adolescents performed significantly better in the verification condition than in the story condition; and (c) reading ability was associated with comprehension of the low familiarity idioms in the story condition only. Findings are discussed relative to implications for speech-language pathologists (SLPs) and educators working with adolescents with LBLD. As a result of this activity, the participant will be able to (1) describe the importance of metalinguistic maturity for comprehension of idioms and other figures of speech; (2) understand the roles of context and familiarity when assessing idiom comprehension in adolescents with LBLD; and (3) critically evaluate assessments of idiom comprehension and determine their appropriateness for use with adolescents with LBLD.

  17. ENVIRONMENTAL TECHNOLOGY VERIFICATION (ETV) PROGRAM CASE STUDIES: DEMONSTRATING PROGRAM OUTCOMES, VOLUME II

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) Environmental Technology Verification (ETV) Program evaluates the performance of innovative air, water, pollution prevention and monitoring technologies that have the potential to improve human health and the environment. This bookle...

  18. Environmental Technology Verification Program - ETV - Case Studies: Demonstrating Program Outcomes

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) Environmental Technology Verification (ETV) Program evaluates the performance of innovative air, water, pollution prevention and monitoring technologies that have the potential to improve human health and the environment. This cd con...

  19. Development, Verification and Validation of Enclosure Radiation Capabilities in the CHarring Ablator Response (CHAR) Code

    NASA Technical Reports Server (NTRS)

    Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.

    2016-01-01

    With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for radiative heating has become a requirement. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute view factors for radiation problems involving multiple surfaces. Furthermore, verification and validation of the code's radiation capabilities are demonstrated by comparing solutions to analytical results, to other codes, and to radiant test data.
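
    As a sketch of the view-factor computation mentioned above, the double-area integral F_1->2 = (1/A1) * integral of cos(theta1)*cos(theta2) / (pi * S^2) dA2 dA1 can be approximated by patch-wise midpoint quadrature. The toy example below handles only two directly opposed parallel squares with no occlusion and is not the CHAR implementation; the reference value quoted in the comment is the commonly tabulated result for this geometry.

      import math

      def view_factor_parallel_squares(side, gap, n=30):
          """Approximate F_1->2 between two directly opposed parallel squares
          (side x side, separated by gap) with midpoint quadrature over n x n
          patches on each surface; here cos(theta1) = cos(theta2) = gap / S."""
          d = side / n                      # patch edge length
          dA = d * d                        # patch area
          total = 0.0
          for i in range(n):
              for j in range(n):
                  x1, y1 = (i + 0.5) * d, (j + 0.5) * d
                  for k in range(n):
                      for l in range(n):
                          x2, y2 = (k + 0.5) * d, (l + 0.5) * d
                          s2 = (x2 - x1) ** 2 + (y2 - y1) ** 2 + gap ** 2
                          total += (gap * gap) / (math.pi * s2 * s2) * dA * dA
          return total / (side * side)      # divide by A1

      # Tabulated reference: two unit squares one unit apart have F ~ 0.1998.
      print(f"F_1->2 ~ {view_factor_parallel_squares(1.0, 1.0):.4f}")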

  20. User-friendly design approach for analog layout design

    NASA Astrophysics Data System (ADS)

    Li, Yongfu; Lee, Zhao Chuan; Tripathi, Vikas; Perez, Valerio; Ong, Yoong Seang; Hui, Chiu Wing

    2017-03-01

    Analog circuits are sensitive to changes in layout environment conditions, manufacturing processes, and process variations. This paper presents an analog verification flow with five types of analog-focused layout constraint checks to assist engineers in identifying potential device mismatches and layout drawing mistakes. Compared to several existing solutions, our approach requires only the layout design, which is sufficient to recognize all the matched devices. Our approach simplifies data preparation and allows seamless integration into the layout environment with minimum disruption to the custom layout flow. Our user-friendly analog verification flow gives engineers greater confidence in the quality of their layouts.

  1. The SeaHorn Verification Framework

    NASA Technical Reports Server (NTRS)

    Gurfinkel, Arie; Kahsai, Temesghen; Komuravelli, Anvesh; Navas, Jorge A.

    2015-01-01

    In this paper, we present SeaHorn, a software verification framework. The key distinguishing feature of SeaHorn is its modular design that separates the concerns of the syntax of the programming language, its operational semantics, and the verification semantics. SeaHorn encompasses several novelties: it (a) encodes verification conditions using an efficient yet precise inter-procedural technique, (b) provides flexibility in the verification semantics to allow different levels of precision, (c) leverages the state-of-the-art in software model checking and abstract interpretation for verification, and (d) uses Horn-clauses as an intermediate language to represent verification conditions which simplifies interfacing with multiple verification tools based on Horn-clauses. SeaHorn provides users with a powerful verification tool and researchers with an extensible and customizable framework for experimenting with new software verification techniques. The effectiveness and scalability of SeaHorn are demonstrated by an extensive experimental evaluation using benchmarks from SV-COMP 2015 and real avionics code.
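
    To make the Horn-clause encoding mentioned above concrete, the sketch below poses a simple loop-invariant check as Constrained Horn Clauses. It uses the z3py Fixedpoint interface purely as an illustration under the assumption that the z3-solver Python bindings are installed; it is not SeaHorn's own pipeline.

      # Program: x := 0; while x < 10: x := x + 1; assert x <= 10
      # Encoded as Horn clauses over an unknown invariant relation inv(x).
      from z3 import Ints, Function, IntSort, BoolSort, Fixedpoint, unsat

      x, xp = Ints("x xp")
      inv = Function("inv", IntSort(), BoolSort())
      err = Function("err", BoolSort())

      fp = Fixedpoint()
      fp.set(engine="spacer")               # PDR/Spacer engine for CHC solving
      fp.register_relation(inv, err)
      fp.declare_var(x, xp)

      fp.rule(inv(0))                                   # initiation: x = 0
      fp.rule(inv(xp), [inv(x), x < 10, xp == x + 1])   # consecution: loop body
      fp.rule(err(), [inv(x), x >= 10, x > 10])         # exit state violating assert

      # unsat means the error relation is unreachable, i.e. the assertion holds.
      print("verified" if fp.query(err()) == unsat else "counterexample found")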

  2. Bias-corrected diagnostic performance of the naked-eye single-tube red-cell osmotic fragility test (NESTROFT): an effective screening tool for beta-thalassemia.

    PubMed

    Mamtani, Manju; Jawahirani, Anil; Das, Kishor; Rughwani, Vinky; Kulkarni, Hemant

    2006-08-01

    It is being increasingly recognized that a majority of the countries in the thalassemia belt need a cost-effective screening program as the first step towards control of thalassemia. Although the naked eye single tube red cell osmotic fragility test (NESTROFT) has been considered a very effective screening tool for beta-thalassemia trait, assessment of its diagnostic performance has been affected by reference-test and verification bias. Here, we set out to provide estimates of the sensitivity and specificity of NESTROFT corrected for these potential biases. We conducted a cross-sectional diagnostic test evaluation study using data from 1563 subjects from Central India with a high prevalence of beta-thalassemia. We used latent class modelling, after ensuring its validity, to account for reference-test bias, and global sensitivity analysis to control for verification bias. We also compared the results of latent class modelling with those of five discriminant indexes. We observed that, across a range of cut-offs for the mean corpuscular volume (MCV) and the hemoglobin A2 (HbA2) concentration, the average sensitivity and specificity of NESTROFT obtained from latent class modelling were 99.8% and 83.7%, respectively. These estimates were comparable to those characterizing the diagnostic performance of HbA2, which is considered by many as the reference test to detect beta-thalassemia. After correction for verification bias, these estimates were 93.4% and 97.2%, respectively. Combined with the inexpensive and quick disposition of NESTROFT, these results strongly support its candidature as a screening tool, especially in resource-poor and high-prevalence settings.
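    The study itself corrects for verification bias with latent class modelling and a global sensitivity analysis; as a simpler, generic illustration of how partial verification distorts apparent accuracy, the sketch below applies the classic Begg-Greenes reweighting under a missing-at-random assumption. All counts and the function name are hypothetical, not data from the NESTROFT study.

    ```python
    # Sketch of Begg-Greenes verification-bias correction (missing-at-random assumption).
    def begg_greenes(n_pos, n_neg, v_pos, v_neg, tp, fp, fn, tn):
        """
        n_pos, n_neg : screened test-positive / test-negative subjects
        v_pos, v_neg : how many of each group were verified with the reference test
        tp, fp       : verified test-positives that were diseased / non-diseased
        fn, tn       : verified test-negatives that were diseased / non-diseased
        """
        # Estimated disease probability within each test stratum (from the verified subsets)
        p_dis_pos = tp / v_pos
        p_dis_neg = fn / v_neg
        # Reconstruct the full 2x2 table as if everyone had been verified
        TP, FP = n_pos * p_dis_pos, n_pos * (1 - p_dis_pos)
        FN, TN = n_neg * p_dis_neg, n_neg * (1 - p_dis_neg)
        return TP / (TP + FN), TN / (TN + FP)   # corrected sensitivity, specificity

    # Hypothetical counts: many test-negatives were never verified
    print(begg_greenes(n_pos=400, n_neg=1163, v_pos=380, v_neg=300,
                       tp=150, fp=230, fn=5, tn=295))
    ```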

  3. IMRT plan verification with EBT2 and EBT3 films compared to PTW 2D-ARRAY seven29

    NASA Astrophysics Data System (ADS)

    Hanušová, Tereza; Horáková, Ivana; Koniarová, Irena

    2017-11-01

    The aim of this study was to compare dosimetry with Gafchromic EBT2 and EBT3 films to the ion chamber array PTW seven29 in terms of their performance in clinical IMRT plan verification. A methodology for film processing and calibration was developed. Calibration curves were obtained in MATLAB and in FilmQA Pro. The best calibration curve was then used to calibrate EBT2 and EBT3 films for IMRT plan verification measurements. Films were placed in several coronal planes in an RW3 slab phantom and irradiated with a clinical IMRT plan for prostate and lymph nodes using 18 MV photon beams. Individual fields were tested and irradiated with the gantry at 0°. Results were evaluated using gamma analysis with 3%/3 mm criteria in OmniPro I'mRT version 1.7. The same measurements were performed with the ion chamber array PTW seven29 in RW3 slabs (different depths) and in the OCTAVIUS II phantom (isocenter depth only; both original and nominal gantry angles). Results were evaluated in PTW VeriSoft version 3.1 using the same criteria. Altogether, 45 IMRT planes were tested with film and 25 planes with the PTW 2D-ARRAY seven29. Film measurements showed different results from the ion chamber array measurements. With the PTW 2D-ARRAY seven29, worse results were obtained when the detector was placed in the OCTAVIUS phantom than in the RW3 slab phantom, and the worst pass rates were seen for rotational measurements. EBT2 films showed inconsistent results and could differ significantly for different planes in one field. EBT3 films seemed to give the best results of all the tested configurations.
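    The gamma evaluations above were performed in commercial tools (OmniPro I'mRT, PTW VeriSoft); the following is a generic brute-force sketch of a 2D global gamma analysis with 3%/3 mm criteria, run on synthetic dose planes rather than film or array data.

    ```python
    import numpy as np

    def gamma_pass_rate(ref, meas, spacing_mm, dd=0.03, dta_mm=3.0, cutoff=0.10):
        """Brute-force 2D global gamma (3%/3 mm by default) on equally sampled dose grids."""
        ref, meas = np.asarray(ref, float), np.asarray(meas, float)
        norm = ref.max()                                 # global normalisation dose
        ny, nx = ref.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        search = int(np.ceil(3 * dta_mm / spacing_mm))   # limit the spatial search window
        gammas = []
        for iy in range(ny):
            for ix in range(nx):
                if ref[iy, ix] < cutoff * norm:          # skip the low-dose region
                    continue
                y0, y1 = max(0, iy - search), min(ny, iy + search + 1)
                x0, x1 = max(0, ix - search), min(nx, ix + search + 1)
                dist2 = ((yy[y0:y1, x0:x1] - iy) ** 2 + (xx[y0:y1, x0:x1] - ix) ** 2) * spacing_mm ** 2
                dose2 = (meas[y0:y1, x0:x1] - ref[iy, ix]) ** 2
                gamma2 = dist2 / dta_mm ** 2 + dose2 / (dd * norm) ** 2
                gammas.append(np.sqrt(gamma2.min()))
        return 100.0 * np.mean(np.array(gammas) <= 1.0)

    # Example: two synthetic dose planes sampled at 1 mm, with a 2% global offset
    ref = np.fromfunction(lambda y, x: np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 800.0), (101, 101))
    print(gamma_pass_rate(ref, ref * 1.02, spacing_mm=1.0))
    ```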

  4. [Uniqueness seeking behavior as a self-verification: an alternative approach to the study of uniqueness].

    PubMed

    Yamaoka, S

    1995-06-01

    Uniqueness theory explains that extremely high perceived similarity between self and others evokes negative emotional reactions and causes uniqueness seeking behavior. However, the theory conceptualizes similarity so ambiguously that it appears to suffer from low predictive validity. The purpose of the current article is to propose an alternative explanation of uniqueness seeking behavior. It posits that perceived uniqueness deprivation is a threat to self-concepts, and therefore causes self-verification behavior. Two levels of self-verification are conceived: one based on personal categorization and the other on social categorization. The present approach regards uniqueness seeking behavior as personal-level self-verification. To test these propositions, a 2 (very high or moderate similarity information) x 2 (with or without outgroup information) x 2 (high or low need for uniqueness) between-subjects factorial-design experiment was conducted with 95 university students. Results supported the self-verification approach, and were discussed in terms of the effects of uniqueness deprivation, levels of self-categorization, and individual differences in need for uniqueness.

  5. Retroperitoneal tumour radiotherapy: clinical improvements using kilovoltage cone beam computed tomography.

    PubMed

    Juan-Senabre, Xavier J; Ferrer-Albiach, Carlos; Rodríguez-Cordón, Marta; Santos-Serra, Agustín; López-Tarjuelo, Juan; Calzada-Feliu, Salvador

    2009-04-01

    We present the clinical case of a patient diagnosed with a retroperitoneal sarcoma who received preoperative treatment with daily verification via computed tomography obtained with a kilovoltage cone beam. We compare the benefit of this treatment with that of conventional treatment without image guidance, reporting quantitative results.

  6. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR PERFORMANCE OF COMPUTER SOFTWARE: VERIFICATION AND VALIDATION (IIT-A-2.0)

    EPA Science Inventory

    The purpose of this SOP is to define the procedures for the initial and periodic verification and validation of computer programs. The programs are used during the Arizona NHEXAS project and Border study at the Illinois Institute of Technology (IIT) site. Keywords: computers; s...

  7. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR PERFORMANCE OF COMPUTER SOFTWARE: VERIFICATION AND VALIDATION (UA-D-2.0)

    EPA Science Inventory

    The purpose of this SOP is to define the procedures used for the initial and periodic verification and validation of computer programs used during the Arizona NHEXAS project and the "Border" study. Keywords: Computers; Software; QA/QC.

    The National Human Exposure Assessment Sur...

  8. A Study of Feature Combination for Vehicle Detection Based on Image Processing

    PubMed Central

    2014-01-01

    Video analytics play a critical role in most recent traffic monitoring and driver assistance systems. In this context, the correct detection and classification of surrounding vehicles through image analysis has been the focus of extensive research in recent years. Most of the work reported on image-based vehicle verification makes use of supervised classification approaches and resorts to techniques such as histograms of oriented gradients (HOG), principal component analysis (PCA), and Gabor filters, among others. Unfortunately, existing approaches are lacking in two respects: first, comparison between methods using a common body of work has not been addressed; second, no study of the combination potential of popular features for vehicle classification has been reported. In this study the performance of the different techniques is first reviewed and compared using a common public database. Then, the combination capabilities of these techniques are explored and a methodology is presented for the fusion of classifiers built upon them, taking into account also the vehicle pose. The study unveils the limitations of single-feature based classification and makes clear that fusion of classifiers is highly beneficial for vehicle verification. PMID:24672299
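    As a rough sketch of the late-fusion idea evaluated in the study, the code below trains one classifier per feature family and averages their posterior probabilities; the feature matrices are random stand-ins (in practice they would come from HOG, PCA or Gabor extraction), so the printed accuracies are illustrative only.

    ```python
    # Sketch of late fusion (soft voting) of feature-specific vehicle classifiers,
    # assuming feature matrices have already been extracted for each image patch.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 600
    y = rng.integers(0, 2, n)                       # 1 = vehicle, 0 = non-vehicle (synthetic labels)
    X_hog = rng.normal(size=(n, 36)) + y[:, None] * 0.8    # stand-in for HOG features
    X_gab = rng.normal(size=(n, 24)) + y[:, None] * 0.5    # stand-in for Gabor features

    idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

    clfs = {}
    for name, X in {"hog": X_hog, "gabor": X_gab}.items():
        clf = LogisticRegression(max_iter=1000).fit(X[idx_tr], y[idx_tr])
        clfs[name] = (clf, X)
        print(name, "alone:", clf.score(X[idx_te], y[idx_te]))

    # Late fusion: average the per-feature posterior probabilities
    probs = np.mean([clf.predict_proba(X[idx_te])[:, 1] for clf, X in clfs.values()], axis=0)
    print("fused:", np.mean((probs > 0.5) == y[idx_te]))
    ```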

  9. Clinical evaluation of 4D PET motion compensation strategies for treatment verification in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Gianoli, Chiara; Kurz, Christopher; Riboldi, Marco; Bauer, Julia; Fontana, Giulia; Baroni, Guido; Debus, Jürgen; Parodi, Katia

    2016-06-01

    A clinical trial named PROMETHEUS is currently ongoing for inoperable hepatocellular carcinoma (HCC) at the Heidelberg Ion Beam Therapy Center (HIT, Germany). In this framework, 4D PET-CT datasets are acquired shortly after the therapeutic treatment to compare the irradiation induced PET image with a Monte Carlo PET prediction resulting from the simulation of treatment delivery. The extremely low count statistics of this measured PET image represents a major limitation of this technique, especially in presence of target motion. The purpose of the study is to investigate two different 4D PET motion compensation strategies towards the recovery of the whole count statistics for improved image quality of the 4D PET-CT datasets for PET-based treatment verification. The well-known 4D-MLEM reconstruction algorithm, embedding the motion compensation in the reconstruction process of 4D PET sinograms, was compared to a recently proposed pre-reconstruction motion compensation strategy, which operates in sinogram domain by applying the motion compensation to the 4D PET sinograms. With reference to phantom and patient datasets, advantages and drawbacks of the two 4D PET motion compensation strategies were identified. The 4D-MLEM algorithm was strongly affected by inverse inconsistency of the motion model but demonstrated the capability to mitigate the noise-break-up effects. Conversely, the pre-reconstruction warping showed less sensitivity to inverse inconsistency but also more noise in the reconstructed images. The comparison was performed by relying on quantification of PET activity and ion range difference, typically yielding similar results. The study demonstrated that treatment verification of moving targets could be accomplished by relying on the whole count statistics image quality, as obtained from the application of 4D PET motion compensation strategies. In particular, the pre-reconstruction warping was shown to represent a promising choice when combined with intra-reconstruction smoothing.

  10. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT FOR AMMONIA RECOVERY PROCESS

    EPA Science Inventory

    This Technology Verification report describes the nature and scope of an environmental evaluation of ThermoEnergy Corporation’s Ammonia Recovery Process (ARP) system. The information contained in this report represents data that were collected over a 3-month pilot study. The ti...

  11. GHG MITIGATION TECHNOLOGY PERFORMANCE EVALUATIONS UNDERWAY AT THE GHG TECHNOLOGY VERIFICATION CENTER

    EPA Science Inventory

    The paper outlines the verification approach and activities of the Greenhouse Gas (GHG) Technology Verification Center, one of 12 independent verification entities operating under the U.S. EPA-sponsored Environmental Technology Verification (ETV) program. (NOTE: The ETV program...

  12. Design Characteristics Influence Performance of Clinical Prediction Rules in Validation: A Meta-Epidemiological Study

    PubMed Central

    Ban, Jong-Wook; Emparanza, José Ignacio; Urreta, Iratxe; Burls, Amanda

    2016-01-01

    Background: Many new clinical prediction rules are derived and validated. But the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with the overestimation of clinical prediction rules' performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics. Methods: Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. From each validation study, it was assessed whether 7 design and 7 reporting characteristics were properly described. Results: A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using a case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2-4.3) larger than validation studies using a cohort or unclear design. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2-3.1) compared to complete, partial and unclear verification. The summary relative DOR (RDOR) of validation studies with inadequate sample size was 1.9 (95% CI: 1.2-3.1) compared to studies with adequate sample size. Study site, reliability, and the clinical prediction rule were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively. Conclusion: Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved. PMID:26730980
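    For readers unfamiliar with the DOR metrics used above, the short sketch below computes a diagnostic odds ratio from a 2x2 table and the ratio of two DORs (an RDOR) with an approximate confidence interval on the log scale; the counts are hypothetical, not data from the meta-epidemiological analysis.

    ```python
    # Sketch: diagnostic odds ratio (DOR) and relative DOR (RDOR) with an
    # approximate 95% CI computed on the log scale. Counts are hypothetical.
    import math

    def dor(tp, fp, fn, tn):
        d = (tp * tn) / (fp * fn)
        se_log = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)   # standard error of log(DOR)
        return d, se_log

    d1, se1 = dor(tp=90, fp=20, fn=10, tn=80)    # e.g. a case-control validation study
    d2, se2 = dor(tp=75, fp=30, fn=25, tn=70)    # e.g. a cohort validation study

    log_rdor = math.log(d1) - math.log(d2)
    se_rdor = math.sqrt(se1**2 + se2**2)
    lo, hi = math.exp(log_rdor - 1.96 * se_rdor), math.exp(log_rdor + 1.96 * se_rdor)
    print(f"RDOR = {math.exp(log_rdor):.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```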

  13. Design Characteristics Influence Performance of Clinical Prediction Rules in Validation: A Meta-Epidemiological Study.

    PubMed

    Ban, Jong-Wook; Emparanza, José Ignacio; Urreta, Iratxe; Burls, Amanda

    2016-01-01

    Many new clinical prediction rules are derived and validated. But the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with the overestimation of clinical prediction rules' performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics. Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. From each validation study, it was assessed whether 7 design and 7 reporting characteristics were properly described. A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using a case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2-4.3) larger than validation studies using a cohort or unclear design. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2-3.1) compared to complete, partial and unclear verification. The summary relative DOR (RDOR) of validation studies with inadequate sample size was 1.9 (95% CI: 1.2-3.1) compared to studies with adequate sample size. Study site, reliability, and the clinical prediction rule were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively. Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved.

  14. 18 CFR 281.213 - Data Verification Committee.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... preparation. (e) The Data Verification Committee shall prepare a report concerning the proposed index of... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Data Verification....213 Data Verification Committee. (a) Each interstate pipeline shall establish a Data Verification...

  15. 18 CFR 281.213 - Data Verification Committee.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... preparation. (e) The Data Verification Committee shall prepare a report concerning the proposed index of... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Data Verification....213 Data Verification Committee. (a) Each interstate pipeline shall establish a Data Verification...

  16. A preliminary study on the use of FX-Glycine gel and an in-house optical cone beam CT readout for IMRT and RapidArc verification

    NASA Astrophysics Data System (ADS)

    Ravindran, Paul B.; Ebenezer, Suman Babu S.; Winfred, Michael Raj; Amalan, S.

    2017-05-01

    The radiochromic FX gel with optical CT readout has been investigated by several authors and has shown promising results for 3D dosimetry. One of the applications of gel dosimeters is their use in 3D dose verification for IMRT and RapidArc quality assurance. Though polymer gel has been used successfully for clinical dose verification, the use of FX gel for clinical dose verification with optical cone beam CT needs further validation. In this work, we have used FX gel and an in-house optical readout system for gamma analysis between the dose matrices of the measured dose distribution and a treatment planning system (TPS) calculated dose distribution for a few test cases.

  17. Depression and selection of positive and negative social feedback: motivated preference or cognitive balance?

    PubMed

    Alloy, L B; Lipman, A J

    1992-05-01

    In this commentary we examine Swann, Wenzlaff, Krull, and Pelham's (1992) findings with respect to each of 5 central propositions in self-verification theory. We conclude that although the data are consistent with self-verification theory, none of the 5 components of the theory have been demonstrated convincingly as yet. Specifically, we argue that depressed subjects' selection of social feedback appears to be balanced or evenhanded rather than biased toward negative feedback and that there is little evidence to indicate that depressives actively seek negative appraisals. Furthermore, we suggest that the studies are silent with respect to the motivational postulates of self-verification theory and that a variety of competing cognitive and motivational models can explain Swann et al.'s findings as well as self-verification theory.

  18. Components of Processing Deficit Among Paranoid and Nonparanoid Schizophrenics

    ERIC Educational Resources Information Center

    Neufeld, Richard W. J.

    1977-01-01

    Paranoid and nonparanoid schizophrenics were compared to normals in their performance on a sentence verification task. Results were related to past evidence and hypotheses about central processing performance among schizophrenics. (Editor/RK)

  19. 75 FR 12811 - Petition for Waiver of Compliance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-17

    ... verifying accuracy of the check sum and CRC values of all programmable elements used in the solid-state... software being used. This verification is done by comparing the parameters found on all programmable...

  20. Investigation, Development, and Evaluation of Performance Proving for Fault-tolerant Computers

    NASA Technical Reports Server (NTRS)

    Levitt, K. N.; Schwartz, R.; Hare, D.; Moore, J. S.; Melliar-Smith, P. M.; Shostak, R. E.; Boyer, R. S.; Green, M. W.; Elliott, W. D.

    1983-01-01

    A number of methodologies for verifying systems and computer based tools that assist users in verifying their systems were developed. These tools were applied to verify in part the SIFT ultrareliable aircraft computer. Topics covered included: STP theorem prover; design verification of SIFT; high level language code verification; assembly language level verification; numerical algorithm verification; verification of flight control programs; and verification of hardware logic.

  1. The formal verification of generic interpreters

    NASA Technical Reports Server (NTRS)

    Windley, P.; Levitt, K.; Cohen, G. C.

    1991-01-01

    Task assignment 3 of the design and validation of digital flight control systems suitable for fly-by-wire applications is studied. Task 3 is associated with formal verification of embedded systems. In particular, results are presented that provide a methodological approach to microprocessor verification. A hierarchical decomposition strategy for specifying microprocessors is also presented. A theory of generic interpreters is presented that can be used to model microprocessor behavior. The generic interpreter theory abstracts away the details of instruction functionality, leaving a general model of what an interpreter does.

  2. Verification of MCNP6.2 for Nuclear Criticality Safety Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2017-05-10

    Several suites of verification/validation benchmark problems were run in early 2017 to verify that the new production release of MCNP6.2 performs correctly for nuclear criticality safety (NCS) applications. MCNP6.2 results for several NCS validation suites were compared to the results from MCNP6.1 [1] and MCNP6.1.1 [2]. MCNP6.1 is the production version of MCNP® released in 2013, and MCNP6.1.1 is the update released in 2014. MCNP6.2 includes all of the standard features for NCS calculations that have been available for the past 15 years, along with new features for sensitivity-uncertainty based methods for NCS validation [3]. Results from the benchmark suites were compared with results from previous verification testing [4-8]. Criticality safety analysts should consider testing MCNP6.2 on their particular problems and validation suites. No further development of MCNP5 is planned. MCNP6.1 is now 4 years old, and MCNP6.1.1 is now 3 years old. In general, released versions of MCNP are supported only for about 5 years, due to resource limitations. All future MCNP improvements, bug fixes, user support, and new capabilities are targeted only to MCNP6.2 and beyond.

  3. To thine own self be true? Clarifying the effects of identity discrepancies on psychological distress and emotions.

    PubMed

    Kalkhoff, Will; Marcussen, Kristen; Serpe, Richard T

    2016-07-01

    After many years of research across disciplines, it remains unclear whether people are more motivated to seek appraisals that accurately match self-views (self-verification) or are as favorable as possible (self-enhancement). Within sociology, mixed findings in identity theory have fueled the debate. A problem here is that a commonly employed statistical approach does not take into account the direction of a discrepancy between how we see ourselves and how we think others see us in terms of a given identity, yet doing so is critical for determining which self-motive is at play. We offer a test of three competing models of identity processes, including a new "mixed motivations" model where self-verification and self-enhancement operate simultaneously. We compare the models using the conventional statistical approach versus response surface analysis. The latter method allows us to determine whether identity discrepancies involving over-evaluation are as distressing as those involving under-evaluation. We use nationally representative data and compare results across four different identities and multiple outcomes. The two statistical approaches lead to the same conclusions more often than not and mostly support identity theory and its assumption that people seek self-verification. However, response surface tests reveal patterns that are mistaken as evidence of self-verification by conventional procedures, especially for the spouse identity. We also find that identity discrepancies have different effects on distress and self-conscious emotions (guilt and shame). Our findings have implications not only for research on self and identity across disciplines, but also for many other areas of research that incorporate these concepts and/or use difference scores as explanatory variables. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Developing a NASA strategy for the verification of large space telescope observatories

    NASA Astrophysics Data System (ADS)

    Crooke, Julie A.; Gunderson, Johanna A.; Hagopian, John G.; Levine, Marie

    2006-06-01

    In July 2005, the Office of Program Analysis and Evaluation (PA&E) at NASA Headquarters was directed to develop a strategy for verification of the performance of large space telescope observatories, which occurs predominantly in a thermal vacuum test facility. A mission model of the expected astronomical observatory missions over the next 20 years was identified along with performance, facility and resource requirements. Ground testing versus alternatives was analyzed to determine the pros, cons and break points in the verification process. Existing facilities and their capabilities were examined across NASA, industry and other government agencies as well as the future demand for these facilities across NASA's Mission Directorates. Options were developed to meet the full suite of mission verification requirements, and performance, cost, risk and other analyses were performed. Findings and recommendations from the study were presented to the NASA Administrator and the NASA Strategic Management Council (SMC) in February 2006. This paper details the analysis, results, and findings from this study.

  5. SU-E-J-34: Setup Accuracy in Spine SBRT Using CBCT 6D Image Guidance in Comparison with 6D ExacTrac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Z; Yip, S; Lewis, J

    2015-06-15

    Purpose: Volumetric information of the spine captured on CBCT can potentially improve the accuracy of spine SBRT setup, which has commonly been performed through 2D radiographs. This work evaluates the setup accuracy in spine SBRT using 6D CBCT image guidance that recently became available on Varian systems. Methods: ExacTrac radiographs have been commonly used for spine SBRT setup. The setup process involves first positioning patients with lasers followed by localization imaging, registration, and repositioning. Verification images are then taken, providing the residual errors (ExacTracRE) before beam on. CBCT verification is also acquired at our institute. The availability of both ExacTrac and CBCT verifications allows a comparison study. 41 verification CBCT scans of 16 patients were retrospectively registered with the planning CT enabling 6D corrections, giving CBCT residual errors (CBCTRE) which were compared with ExacTracRE. Results: The RMS discrepancies between CBCTRE and ExacTracRE are 1.70 mm, 1.66 mm, 1.56 mm in the vertical, longitudinal and lateral directions and 0.27°, 0.49°, 0.35° in yaw, roll and pitch, respectively. The corresponding mean discrepancies (and standard deviations) are 0.62 mm (1.60 mm), 0.00 mm (1.68 mm), −0.80 mm (1.36 mm) and 0.05° (0.58°), 0.11° (0.48°), −0.16° (0.32°). Of the 41 CBCT scans, 17 had high-Z surgical implants. No significant difference in ExacTrac-to-CBCT discrepancy was observed between patients with and without the implants. Conclusion: Multiple factors can contribute to the discrepancies between CBCT and ExacTrac: 1) the imaging iso-centers of the two systems, while calibrated to coincide, can be different; 2) the ROI used for registration can be different, especially if ribs were included in the ExacTrac images; 3) small patient motion can occur between the two verification image acquisitions; 4) the algorithms can be different between CBCT (volumetric) and ExacTrac (radiographic) registrations.
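    The RMS, mean and SD figures quoted above can be reproduced from paired residual-error vectors with a few lines of NumPy; the sketch below uses synthetic translation-only data rather than the study's measurements.

    ```python
    # Sketch: summarising per-fraction discrepancies between two residual-error sets
    # (e.g. CBCT vs ExacTrac corrections), assuming Nx3 translation arrays in mm.
    import numpy as np

    rng = np.random.default_rng(1)
    cbct_re = rng.normal(0.0, 1.2, size=(41, 3))          # vertical, longitudinal, lateral (mm)
    exactrac_re = cbct_re + rng.normal(0.3, 1.5, size=(41, 3))

    diff = cbct_re - exactrac_re
    rms = np.sqrt(np.mean(diff**2, axis=0))               # RMS discrepancy per axis
    mean, sd = diff.mean(axis=0), diff.std(axis=0, ddof=1)
    for name, r, m, s in zip(["vrt", "lng", "lat"], rms, mean, sd):
        print(f"{name}: RMS={r:.2f} mm, mean={m:.2f} mm, SD={s:.2f} mm")
    ```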

  6. Static and Dynamic Verification of Critical Software for Space Applications

    NASA Astrophysics Data System (ADS)

    Moreira, F.; Maia, R.; Costa, D.; Duro, N.; Rodríguez-Dapena, P.; Hjortnaes, K.

    Space technology is no longer used only for highly specialised research activities or for sophisticated manned space missions. Modern society relies more and more on space technology and applications for everyday activities. Worldwide telecommunications, Earth observation, navigation and remote sensing are only a few examples of space applications on which we rely daily. The European-driven global navigation system Galileo and its associated applications, e.g. air traffic management and vessel and car navigation, will significantly expand the already stringent safety requirements for space-based applications. Apart from its usefulness and practical applications, every single piece of onboard software deployed into space represents an enormous investment. With a long operational lifetime, and being extremely difficult to maintain and upgrade, at least when compared with "mainstream" software development, the importance of ensuring its correctness before deployment is immense. Verification & Validation techniques and technologies have a key role in ensuring that the onboard software is correct and error free, or at least free from errors that can potentially lead to catastrophic failures. Many RAMS techniques, including both static criticality analysis and dynamic verification techniques, have been used as a means to verify and validate critical software and to ensure its correctness. But, traditionally, these have been applied in isolation. One of the main reasons is the immaturity of this field with respect to its application to the increasing number of software products within space systems. This paper presents an innovative way of combining both static and dynamic techniques, exploiting their synergy and complementarity for software fault removal. The methodology proposed is based on the combination of Software FMEA and FTA with fault-injection techniques. The case study herein described is implemented with support from two tools: the SoftCare tool for the SFMEA and SFTA, and the Xception tool for fault injection. Keywords: Verification & Validation, RAMS, onboard software, SFMEA, SFTA, fault injection. This work is being performed under the project STADY (Applied Static And Dynamic Verification Of Critical Software), ESA/ESTEC Contract Nr. 15751/02/NL/LvH.

  7. Trends in age verification among U.S. adolescents attempting to buy cigarettes at retail stores, 2000-2009.

    PubMed

    Filippidis, Filippos T; Agaku, Israel T; Connolly, Gregory N; Vardavas, Constantine I

    2014-04-01

    This study assessed trends in age verification prior to cigarette sales to U.S. middle and high school students, and refusal to sell cigarettes to students aged <18 years during 2000-2009. Data were obtained from the 2000-2009 National Youth Tobacco Survey. Trends during 2000-2009 were assessed using binary logistic regression (p<0.05). The proportion of all students, who reported being asked to show proof of age prior to a cigarette purchase in the past 30 days did not change significantly between 2000 (46.9%) and 2009 (44.9%) (p=0.529 for linear trend). No significant trend in the proportion of students aged < 18 years who were refused a sale when attempting to buy cigarettes was observed between 2000 (39.8%) and 2009 (36.7%) (p=0.283 for linear trend). Refusal of a cigarette sale was significantly higher among under-aged boys compared to girls (adjusted odds ratio=1.48; 95% confidence interval: 1.28-1.70). About half of U.S. middle and high school students who reported making a cigarette purchase were not asked for proof of age, and about three of five under-aged buyers successfully made a cigarette purchase in 2009. Intensified implementation and enforcement of policies requiring age verification among youths is warranted to reduce access and use of tobacco products. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Numerical verification of two-component dental implant in the context of fatigue life for various load cases.

    PubMed

    Szajek, Krzysztof; Wierszycki, Marcin

    2016-01-01

    Dental implant design is a complex process which considers many limitations, both biological and mechanical in nature. In earlier studies, a complete procedure for improvement of a two-component dental implant was proposed. However, the optimization tasks carried out required an assumption on a representative load case, which raised doubts about optimality for the other load cases. This paper deals with verification of the optimal design in the context of fatigue life, and its main goal is to answer the question of whether the assumed load scenario (solely horizontal occlusal load) leads to a design which is also "safe" for oblique occlusal loads regardless of the angle from the implant axis. The verification is carried out with a series of finite element analyses for a wide spectrum of physiologically justified loads. The design of experiments methodology with a full factorial technique is utilized. All computations are done in the Abaqus suite. The maximal Mises stress and normalized effective stress amplitude for various load cases are discussed and compared with the assumed "safe" limit (equivalent of fatigue life for 5e6 cycles). The obtained results prove that the coronal-apical load component should be taken into consideration for the two-component dental implant when fatigue life is optimized. However, its influence in the analyzed case is small and does not change the fact that the fatigue life improvement is observed for all components within the whole range of analyzed loads.

  9. 40 CFR 1065.920 - PEMS calibrations and verifications.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ....920 PEMS calibrations and verifications. (a) Subsystem calibrations and verifications. Use all the... verifications and analysis. It may also be necessary to limit the range of conditions under which the PEMS can... additional information or analysis to support your conclusions. (b) Overall verification. This paragraph (b...

  10. GENERIC VERIFICATION PROTOCOL FOR THE VERIFICATION OF PESTICIDE SPRAY DRIFT REDUCTION TECHNOLOGIES FOR ROW AND FIELD CROPS

    EPA Science Inventory

    This ETV program generic verification protocol was prepared and reviewed for the Verification of Pesticide Drift Reduction Technologies project. The protocol provides a detailed methodology for conducting and reporting results from a verification test of pesticide drift reductio...

  11. Feasibility of biochemical verification in a web-based smoking cessation study.

    PubMed

    Cha, Sarah; Ganz, Ollie; Cohn, Amy M; Ehlke, Sarah J; Graham, Amanda L

    2017-10-01

    Cogent arguments have been made against the need for biochemical verification in population-based studies with low-demand characteristics. Despite this fact, studies involving digital interventions (low-demand) are often required in peer review to report biochemically verified abstinence. To address this discrepancy, we examined the feasibility and costs of biochemical verification in a web-based study conducted with a national sample. Participants were 600 U.S. adult current smokers who registered on a web-based smoking cessation program and completed surveys at baseline and 3 months. Saliva sampling kits were sent to participants who reported 7-day abstinence at 3 months, and analyzed for cotinine. The response rate at 3 months was 41.2% (n=247): 93 participants reported 7-day abstinence (38%) and were mailed a saliva kit (71% returned). The discordance rate was 36.4%. Participants with discordant responses were more likely to report 3-month use of nicotine replacement therapy or e-cigarettes than those with concordant responses (79.2% vs. 45.2%, p=0.007). The total cost of saliva sampling was $8280 ($125/sample). Biochemical verification was both time- and cost-intensive, and yielded a relatively small number of samples due to low response rates and use of other nicotine products during the follow-up period. There was a high rate of discordance between self-reported abstinence and saliva testing. Costs for data collection may be prohibitive for studies with large sample sizes or limited budgets. Our findings echo previous statements that biochemical verification is not necessary in population-based studies, and add evidence specific to technology-based studies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Investigation of Cleanliness Verification Techniques for Rocket Engine Hardware

    NASA Technical Reports Server (NTRS)

    Fritzemeier, Marilyn L.; Skowronski, Raymund P.

    1994-01-01

    Oxidizer propellant systems for liquid-fueled rocket engines must meet stringent cleanliness requirements for particulate and nonvolatile residue. These requirements were established to limit residual contaminants which could block small orifices or ignite in the oxidizer system during engine operation. Limiting organic residues in high pressure oxygen systems, such as in the Space Shuttle Main Engine (SSME), is particularly important. The current method of cleanliness verification for the SSME uses an organic solvent flush of the critical hardware surfaces. The solvent is filtered and analyzed for particulate matter followed by gravimetric determination of the nonvolatile residue (NVR) content of the filtered solvent. The organic solvents currently specified for use (1, 1, 1-trichloroethane and CFC-113) are ozone-depleting chemicals slated for elimination by December 1995. A test program is in progress to evaluate alternative methods for cleanliness verification that do not require the use of ozone-depleting chemicals and that minimize or eliminate the use of solvents regulated as hazardous air pollutants or smog precursors. Initial results from the laboratory test program to evaluate aqueous-based methods and organic solvent flush methods for NVR verification are provided and compared with results obtained using the current method. Evaluation of the alternative methods was conducted using a range of contaminants encountered in the manufacture of rocket engine hardware.

  13. A service-oriented architecture for integrating the modeling and formal verification of genetic regulatory networks

    PubMed Central

    2009-01-01

    Background The study of biological networks has led to the development of increasingly large and detailed models. Computer tools are essential for the simulation of the dynamical behavior of the networks from the model. However, as the size of the models grows, it becomes infeasible to manually verify the predictions against experimental data or identify interesting features in a large number of simulation traces. Formal verification based on temporal logic and model checking provides promising methods to automate and scale the analysis of the models. However, a framework that tightly integrates modeling and simulation tools with model checkers is currently missing, on both the conceptual and the implementational level. Results We have developed a generic and modular web service, based on a service-oriented architecture, for integrating the modeling and formal verification of genetic regulatory networks. The architecture has been implemented in the context of the qualitative modeling and simulation tool GNA and the model checkers NUSMV and CADP. GNA has been extended with a verification module for the specification and checking of biological properties. The verification module also allows the display and visual inspection of the verification results. Conclusions The practical use of the proposed web service is illustrated by means of a scenario involving the analysis of a qualitative model of the carbon starvation response in E. coli. The service-oriented architecture allows modelers to define the model and proceed with the specification and formal verification of the biological properties by means of a unified graphical user interface. This guarantees a transparent access to formal verification technology for modelers of genetic regulatory networks. PMID:20042075

  14. SU-E-T-624: Quantitative Evaluation of 2D Versus 3D Dosimetry for Stereotactic Volumetric Modulated Arc Delivery Using COMPASS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vikraman, S; Karrthick, K; Rajesh, T

    2014-06-15

    Purpose: The purpose of this study was to quantitatively evaluate 2D versus 3D dosimetry for stereotactic volumetric modulated arc delivery using COMPASS with a 2D array. Methods: Twenty-five patients' CT images and RT structures of different sites such as brain, head and neck, thorax, abdomen and spine were taken from the Multiplan planning system for this study. All these patients underwent radical stereotactic treatment with Cyberknife. For each patient, linac-based VMAT stereotactic plans were generated in the Monaco TPS v3.1 using the Elekta Beam Modulator MLC. Dose prescription was in the range of 5-20 Gy/fraction. TPS-calculated VMAT plan delivery accuracy was quantitatively evaluated with the COMPASS measured dose and calculated dose based on DVH metrics. In order to ascertain the potential of COMPASS 3D dosimetry for stereotactic plan delivery, 2D fluence verification was performed with MatriXX using Multicube. Results: For each site, D95 was achieved with 100% of the prescription dose with a maximum SD of 0.05. The conformity index (CI) was observed to be close to 1.15 in all cases. A maximum deviation of 2.62% was observed for D95 when comparing TPS versus COMPASS measured values. Considerable deviations were observed in head and neck cases compared to other sites. The maximum mean and standard deviation for D95, average target dose and average gamma were -0.78±1.72, -1.10±1.373 and 0.39±0.086, respectively. The number of pixels passing 2D fluence verification had a mean of 99.36% ±0.455 SD with 3% dose difference and 3 mm DTA. For critical organs in head and neck cases, significant dose differences were observed in 3D dosimetry, while the target doses matched well within limits in both 2D and 3D dosimetry. Conclusion: The quantitative evaluation of 2D versus 3D dosimetry for stereotactic volumetric modulated plans showed the potential of highlighting delivery errors. This study reveals that COMPASS 3D dosimetry is an effective tool for patient-specific quality assurance compared to 2D fluence verification.

  15. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR PERFORMANCE OF COMPUTER SOFTWARE: VERIFICATION AND VALIDATION (UA-D-2.0)

    EPA Science Inventory

    The purpose of this SOP is to define the procedures used for the initial and periodic verification and validation of computer programs used during the Arizona NHEXAS project and the Border study. Keywords: Computers; Software; QA/QC.

    The U.S.-Mexico Border Program is sponsored ...

  16. Forecast Verification: Identification of small changes in weather forecasting skill

    NASA Astrophysics Data System (ADS)

    Weatherhead, E. C.; Jensen, T. L.

    2017-12-01

    Global and regional weather forecasts have improved over the past seven decades, most often because of small, incremental improvements. The identification and verification of forecast improvement due to proposed small changes in forecasting can be expensive and, if not carried out efficiently, can slow progress in forecasting development. This presentation will look at the skill of commonly used verification techniques and show how the ability to detect improvements can depend on the magnitude of the improvement, the number of runs used to test the improvement, the location on the Earth, and the statistical techniques used. For continuous variables, such as temperature, wind and humidity, the skill of a forecast can be directly compared using a pair-wise statistical test that accommodates the natural autocorrelation and magnitude of variability. For discrete variables, such as tornado outbreaks or icing events, the challenge is to reduce the false alarm rate while improving the rate of correctly identifying the discrete event. For both continuous and discrete verification results, proper statistical approaches can reduce the number of runs needed to identify a small improvement in forecasting skill. Verification within the Next Generation Global Prediction System is an important component of the many small decisions needed to make state-of-the-art improvements to weather forecasting capabilities. The comparison of multiple skill scores with often conflicting results requires not only appropriate testing, but also scientific judgment to assure that the choices are appropriate not only for improvements in today's forecasting capabilities, but also allow improvements that will come in the future.
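    A minimal sketch of the kind of pair-wise test described for continuous variables is given below: it compares the squared errors of two forecast systems and inflates the uncertainty of the mean difference using a lag-1 autocorrelation adjustment to the effective sample size. The data are synthetic and the specific adjustment is one common choice, not necessarily the one used by the authors.

    ```python
    # Sketch: paired comparison of two forecast systems' squared errors with a
    # lag-1 autocorrelation adjustment to the effective sample size (synthetic data).
    import numpy as np

    rng = np.random.default_rng(2)
    n = 365
    truth = rng.normal(0, 1, n)
    err_a = truth - (truth + rng.normal(0, 1.00, n))     # forecast A errors
    err_b = truth - (truth + rng.normal(0, 0.95, n))     # forecast B: slightly better
    d = err_a**2 - err_b**2                              # paired loss differential

    r1 = np.corrcoef(d[:-1], d[1:])[0, 1]                # lag-1 autocorrelation of the differences
    n_eff = n * (1 - r1) / (1 + r1)                      # effective sample size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n_eff))
    print(f"mean loss diff = {d.mean():.4f}, lag-1 r = {r1:.2f}, t = {t:.2f}")
    # |t| > ~2 suggests a real (if small) skill difference at roughly the 5% level.
    ```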

  17. 30 CFR 250.913 - When must I resubmit Platform Verification Program plans?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... CONTINENTAL SHELF Platforms and Structures Platform Verification Program § 250.913 When must I resubmit Platform Verification Program plans? (a) You must resubmit any design verification, fabrication... 30 Mineral Resources 2 2011-07-01 2011-07-01 false When must I resubmit Platform Verification...

  18. 30 CFR 250.909 - What is the Platform Verification Program?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 2 2011-07-01 2011-07-01 false What is the Platform Verification Program? 250... Platforms and Structures Platform Verification Program § 250.909 What is the Platform Verification Program? The Platform Verification Program is the MMS approval process for ensuring that floating platforms...

  19. Requirement Assurance: A Verification Process

    NASA Technical Reports Server (NTRS)

    Alexander, Michael G.

    2011-01-01

    Requirement Assurance is an act of requirement verification which assures the stakeholder or customer that a product requirement has produced its "as realized product" and has been verified with conclusive evidence. Product requirement verification answers the question, "did the product meet the stated specification, performance, or design documentation?". In order to ensure the system was built correctly, the practicing system engineer must verify each product requirement using verification methods of inspection, analysis, demonstration, or test. The products of these methods are the "verification artifacts" or "closure artifacts" which are the objective evidence needed to prove the product requirements meet the verification success criteria. Institutional direction is given to the System Engineer in NPR 7123.1A NASA Systems Engineering Processes and Requirements with regards to the requirement verification process. In response, the verification methodology offered in this report meets both the institutional process and requirement verification best practices.

  20. Preventing illegal tobacco and alcohol sales to minors through electronic age-verification devices: a field effectiveness study.

    PubMed

    Krevor, Brad; Capitman, John A; Oblak, Leslie; Cannon, Joanna B; Ruwe, Mathilda

    2003-01-01

    Efforts to prohibit the sales of tobacco and alcohol products to minors are widespread. Electronic Age Verification (EAV) devices are one possible means to improve compliance with sales to minors laws. The purpose of this study was to evaluate the implementation and effectiveness of EAV devices in terms of the frequency and accuracy of age verification, as well as to examine the impact of EAV's on the retailer environment. Two study locations were selected: Tallahassee, Florida and Iowa City, Iowa. Retail stores were invited to participate in the study, producing a self-selected experimental group. Stores that did not elect to test the EAV's comprised the comparison group. The data sources included: 1) mystery shopper inspections: two pre- and five post-EAV installation mystery shopper inspections of tobacco and alcohol retailers; 2) retail clerk and manager interviews; and 3) customer interviews. The study found that installing EAV devices with minimal training and encouragement did not increase age verification and underage sales refusal. Surveyed clerks reported positive experiences using the electronic ID readers and customers reported almost no discomfort about being asked to swipe their IDs. Observations from this study support the need for a more comprehensive system for responsible retailing.

  1. Receiver operating characteristic (ROC) curves: review of methods with applications in diagnostic medicine

    NASA Astrophysics Data System (ADS)

    Obuchowski, Nancy A.; Bullen, Jennifer A.

    2018-04-01

    Receiver operating characteristic (ROC) analysis is a tool used to describe the discrimination accuracy of a diagnostic test or prediction model. While sensitivity and specificity are the basic metrics of accuracy, they have many limitations when characterizing test accuracy, particularly when comparing the accuracies of competing tests. In this article we review the basic study design features of ROC studies, illustrate sample size calculations, present statistical methods for measuring and comparing accuracy, and highlight commonly used ROC software. We include descriptions of multi-reader ROC study design and analysis, address frequently seen problems of verification and location bias, discuss clustered data, and provide strategies for testing endpoints in ROC studies. The methods are illustrated with a study of transmission ultrasound for diagnosing breast lesions.
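    As a concrete illustration of the basic ROC accuracy metric discussed above, the sketch below computes the area under the curve with the rank-based (Mann-Whitney) estimator and the Hanley-McNeil approximate standard error; the test scores are simulated, not data from the transmission-ultrasound study.

    ```python
    # Sketch: rank-based (Mann-Whitney) AUC estimate with the Hanley-McNeil
    # approximate standard error; the scores below are synthetic.
    import numpy as np

    def auc_mann_whitney(scores_pos, scores_neg):
        n1, n0 = len(scores_pos), len(scores_neg)
        # Probability that a random diseased case scores higher than a random non-diseased one
        wins = sum((p > q) + 0.5 * (p == q) for p in scores_pos for q in scores_neg)
        auc = wins / (n1 * n0)
        q1, q2 = auc / (2 - auc), 2 * auc**2 / (1 + auc)
        se = np.sqrt((auc * (1 - auc) + (n1 - 1) * (q1 - auc**2) + (n0 - 1) * (q2 - auc**2)) / (n1 * n0))
        return auc, se

    rng = np.random.default_rng(3)
    diseased = rng.normal(1.0, 1.0, 60)      # hypothetical test scores
    healthy = rng.normal(0.0, 1.0, 90)
    auc, se = auc_mann_whitney(diseased, healthy)
    print(f"AUC = {auc:.3f} +/- {1.96 * se:.3f} (95% CI half-width)")
    ```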

  2. Direct and full-scale experimental verifications towards ground-satellite quantum key distribution

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Yu; Yang, Bin; Liao, Sheng-Kai; Zhang, Liang; Shen, Qi; Hu, Xiao-Fang; Wu, Jin-Cai; Yang, Shi-Ji; Jiang, Hao; Tang, Yan-Lin; Zhong, Bo; Liang, Hao; Liu, Wei-Yue; Hu, Yi-Hua; Huang, Yong-Mei; Qi, Bo; Ren, Ji-Gang; Pan, Ge-Sheng; Yin, Juan; Jia, Jian-Jun; Chen, Yu-Ao; Chen, Kai; Peng, Cheng-Zhi; Pan, Jian-Wei

    2013-05-01

    Quantum key distribution (QKD) provides the only intrinsically unconditionally secure method for communication based on the principles of quantum mechanics. Compared with fibre-based demonstrations, free-space links could provide the most appealing solution for communication over much larger distances. Despite significant efforts, all realizations to date rely on stationary sites. Experimental verifications are therefore extremely crucial for applications to a typical low Earth orbit satellite. To achieve direct and full-scale verifications of our set-up, we have carried out three independent experiments with a decoy-state QKD system, covering all of the relevant conditions. The system is operated on a moving platform (using a turntable), on a floating platform (using a hot-air balloon), and with a high-loss channel to demonstrate performance under conditions of rapid motion, attitude change, vibration, random movement of satellites, and a high-loss regime. The experiments address wide ranges of all leading parameters relevant to low Earth orbit satellites. Our results pave the way towards ground-satellite QKD and a global quantum communication network.

  3. Validation and Verification of Composite Pressure Vessel Design

    NASA Technical Reports Server (NTRS)

    Kreger, Stephen T.; Ortyl, Nicholas; Grant, Joseph; Taylor, F. Tad

    2006-01-01

    Ten composite pressure vessels were instrumented with fiber Bragg grating sensors and pressure tested through burst. This paper and presentation discuss the testing methodology and the test results, compare the test results to the analytical model, and also compare the fiber Bragg grating sensor data with that obtained from foil strain gages.

  4. Dosimetric verification of gated delivery of electron beams using a 2D ion chamber array

    PubMed Central

    Yoganathan, S. A.; Das, K. J. Maria; Raj, D. Gowtham; Kumar, Shaleen

    2015-01-01

    The purpose of this study was to compare the dosimetric characteristics, such as beam output, symmetry and flatness, between gated and non-gated electron beams. Dosimetric verification of gated delivery was carried out for all electron beams available on a Varian CL 2100CD medical linear accelerator. Measurements were conducted for three dose rates (100 MU/min, 300 MU/min and 600 MU/min) and two respiratory motions (breathing periods of 4 s and 8 s). The Real-time Position Management (RPM) system was used for the gated deliveries. Flatness and symmetry values were measured using an Imatrixx 2D ion chamber array device and the beam output was measured using a plane-parallel ion chamber. These detector systems were placed on a QUASAR motion platform which was programmed to simulate the respiratory motion of the target. The dosimetric characteristics of gated deliveries were compared with non-gated deliveries. The flatness and symmetry of all the evaluated electron energies did not differ by more than 0.7% with respect to the corresponding non-gated deliveries. The beam output variation of the gated electron beams was less than 0.6% for all electron energies except for 16 MeV (1.4%). Based on the results of this study, it can be concluded that the Varian CL 2100CD is well suited for gated delivery of non-dynamic electron beams. PMID:26170552
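    Flatness and symmetry definitions vary between protocols; the sketch below shows one common convention, evaluated over the central 80% of a synthetic profile, purely to make the quoted metrics concrete.

    ```python
    # Sketch: one common set of beam-profile metrics (definitions vary between protocols);
    # flatness and symmetry are evaluated over the central 80% of the field here.
    import numpy as np

    def flatness_symmetry(positions_mm, dose, field_size_mm):
        region = np.abs(positions_mm) <= 0.4 * field_size_mm     # central 80% of the field
        d = dose[region]
        flatness = 100.0 * (d.max() - d.min()) / (d.max() + d.min())
        # Symmetry: max point-by-point difference between mirrored sides, relative to the CAX dose
        cax = dose[np.argmin(np.abs(positions_mm))]
        left = dose[region & (positions_mm < 0)][::-1]           # nearest-to-axis point first
        right = dose[region & (positions_mm > 0)]
        m = min(len(left), len(right))
        symmetry = 100.0 * np.max(np.abs(left[:m] - right[:m])) / cax
        return flatness, symmetry

    x = np.linspace(-100, 100, 201)                              # mm, 1 mm spacing
    profile = np.where(np.abs(x) <= 70, 1.0 + 0.01 * np.cos(x / 20.0), 0.05)
    print(flatness_symmetry(x, profile, field_size_mm=140))
    ```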

  5. Climate Projections and Drought: Verification for the Colorado River Basin

    NASA Astrophysics Data System (ADS)

    Santos, N. I.; Piechota, T. C.; Miller, W. P.; Ahmad, S.

    2017-12-01

    The Colorado River Basin has experienced the driest 17 year period (2000-2016) in over 100 years of historical record keeping. While the Colorado River reservoir system began the current drought at near 100% capacity, reservoir storage has fallen to just above 50% during the drought. Even though federal and state water agencies have worked together to mitigate the impact of the drought and have collaboratively sponsored conservation programs and drought contingency plans, the 17-years of observed data beg the question as to whether the most recent climate projections would have been able to project the current drought's severity. The objective of this study is to analyze observations and ensemble projections (e.g. temperature, precipitation, streamflow) from the CMIP3 and CMIP5 archive in the Colorado River Basin and compare metrics related to skill scores, the Palmer Drought Severity Index, and water supply sustainability index. Furthermore, a sub-ensemble of CMIP3/CMIP5 projections, developed using a teleconnection replication verification technique developed by the author, will also be compared to the observed record to assist in further validating the technique as a usable process to increase skill in climatological projections. In the end, this study will assist to better inform water resource managers about the ability of climate ensembles to project hydroclimatic variability and the appearance of decadal drought periods.

  6. WE-DE-201-11: Sensitivity and Specificity of Verification Methods Based On Total Reference Air Kerma (TRAK) Or On User Provided Dose Points for Graphically Planned Skin HDR Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, A; Devlin, P; Bhagwat, M

    Purpose: To investigate the sensitivity and specificity of a novel verification methodology for image-guided skin HDR brachytherapy plans using a TRAK-based reasonableness test, compared to a typical manual verification methodology. Methods: Two methodologies were used to flag treatment plans necessitating additional review due to a potential discrepancy of 3 mm between the planned dose and the clinical target in the skin. Manual verification was used to calculate the discrepancy between the average dose to points positioned at time of planning, representative of the prescribed depth, and the expected prescription dose. Automatic verification was used to calculate the discrepancy between the TRAK of the clinical plan and its expected value, which was calculated using standard plans with varying curvatures, ranging from flat to cylindrically circumferential. A plan was flagged if a discrepancy >10% was observed. Sensitivity and specificity were calculated using, as the criterion for a true positive, that >10% of plan dwells had a distance to the prescription dose more than 1 mm different from the prescription depth (3 mm + applicator size). All HDR image-based skin brachytherapy plans treated at our institution in 2013 were analyzed. Results: 108 surface applicator plans to treat skin of the face, scalp, limbs, feet, hands or abdomen were analyzed. The median number of catheters was 19 (range, 4 to 71) and the median number of dwells was 257 (range, 20 to 1100). Sensitivity/specificity were 57%/78% for manual and 70%/89% for automatic verification. Conclusion: A check based on the expected TRAK value is feasible for irregularly shaped, image-guided skin HDR brachytherapy. This test yielded higher sensitivity and specificity than a test based on the identification of representative points, and can be implemented with a dedicated calculation code or with pre-calculated lookup tables of ideally shaped, uniform surface applicators.
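    The operating characteristics reported above can be illustrated with a toy version of the TRAK reasonableness test: flag any plan whose TRAK differs from its expected value by more than 10%, then tabulate the flags against a (here synthetic) ground truth to obtain sensitivity and specificity.

    ```python
    # Sketch: sensitivity/specificity of a simple flagging rule that marks a plan for
    # extra review when its TRAK deviates >10% from an expected value; plan data are synthetic.
    import numpy as np

    rng = np.random.default_rng(4)
    n_plans = 108
    needs_review = rng.random(n_plans) < 0.3                 # hypothetical ground truth
    trak_ratio = np.where(needs_review,
                          rng.normal(1.18, 0.10, n_plans),   # problematic plans drift further
                          rng.normal(1.02, 0.06, n_plans))   # acceptable plans stay close to expected

    flagged = np.abs(trak_ratio - 1.0) > 0.10                # the >10% reasonableness test

    tp = np.sum(flagged & needs_review)
    tn = np.sum(~flagged & ~needs_review)
    sens = tp / needs_review.sum()
    spec = tn / (~needs_review).sum()
    print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
    ```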

  7. Thermal System Verification and Model Validation for NASA's Cryogenic Passively Cooled James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cleveland, Paul E.; Parrish, Keith A.

    2005-01-01

    A thorough and unique thermal verification and model validation plan has been developed for NASA's James Webb Space Telescope. The JWST observatory consists of a large deployed aperture optical telescope passively cooled to below 50 Kelvin along with a suite of several instruments passively and actively cooled to below 37 Kelvin and 7 Kelvin, respectively. Passive cooling to these extremely low temperatures is made feasible by the use of a large deployed high efficiency sunshield and an orbit location at the L2 Lagrange point. Another enabling feature is the scale or size of the observatory that allows for large radiator sizes that are compatible with the expected power dissipation of the instruments and large format Mercury Cadmium Telluride (HgCdTe) detector arrays. This passive cooling concept is simple, reliable, and mission enabling when compared to the alternatives of mechanical coolers and stored cryogens. However, these same large scale observatory features, which make passive cooling viable, also prevent the typical flight-configuration, fully-deployed thermal balance test that is the keystone of most space missions' thermal verification plans. JWST is simply too large in its deployed configuration to be properly thermal balance tested in the facilities that currently exist. This reality, when combined with a mission thermal concept with little to no flight heritage, has necessitated a unique and alternative approach to thermal system verification and model validation. This paper describes the thermal verification and model validation plan that has been developed for JWST. The plan relies on judicious use of cryogenic and thermal design margin, a completely independent thermal modeling cross check utilizing different analysis teams and software packages, and finally, a comprehensive set of thermal tests that occur at different levels of JWST assembly. After a brief description of the JWST mission and thermal architecture, a detailed description of the three aspects of the thermal verification and model validation plan is presented.

  8. PATIENT STUDY OF IN VIVO VERIFICATION OF BEAM DELIVERY AND RANGE, USING POSITRON EMISSION TOMOGRAPHY AND COMPUTED TOMOGRAPHY IMAGING AFTER PROTON THERAPY

    PubMed Central

    Parodi, Katia; Paganetti, Harald; Shih, Helen A.; Michaud, Susan; Loeffler, Jay S.; Delaney, Thomas F.; Liebsch, Norbert J.; Munzenrider, John E.; Fischman, Alan J.; Knopf, Antje; Bortfeld, Thomas

    2007-01-01

    Purpose To investigate the feasibility and value of positron emission tomography and computed tomography (PET/CT) for treatment verification after proton radiotherapy. Methods and Materials This study included 9 patients with tumors in the cranial base, spine, orbit, and eye. Total doses of 1.8–3 GyE and 10 GyE (for an ocular melanoma) per fraction were delivered in 1 or 2 fields. Imaging was performed with a commercial PET/CT scanner for 30 min, starting within 20 min after treatment. The same treatment immobilization device was used during imaging for all but 2 patients. Measured PET/CT images were coregistered to the planning CT and compared with the corresponding PET expectation, obtained from CT-based Monte Carlo calculations complemented by functional information. For the ocular case, treatment position was approximately replicated, and spatial correlation was deduced from reference clips visible in both the planning radiographs and imaging CT. Here, the expected PET image was obtained from an analytical model. Results Good spatial correlation and quantitative agreement within 30% were found between the measured and expected activity. For head-and-neck patients, the beam range could be verified with an accuracy of 1–2 mm in well-coregistered bony structures. Low spine and eye sites indicated the need for better fixation and coregistration methods. An analysis of activity decay revealed tissue-effective half-lives of 800–1,150 s. Conclusions This study demonstrates the feasibility of postradiation PET/CT for in vivo treatment verification. It also indicates some technological and methodological improvements needed for optimal clinical application. PMID:17544003
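
    For readers unfamiliar with the tissue-effective half-life quoted above, the measured activity follows the usual exponential decay law; the notation below is generic and not taken from the paper.

    ```latex
    A(t) = A_0 \, e^{-\lambda_{\mathrm{eff}} t}, \qquad
    \lambda_{\mathrm{eff}} = \frac{\ln 2}{T_{1/2,\mathrm{eff}}}, \qquad
    T_{1/2,\mathrm{eff}} \approx 800\text{--}1150\ \mathrm{s}.
    ```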

  9. Characteristics of ACS-verified Level I and Level II trauma centers: A study linking trauma center verification review data and the National Trauma Data Bank of the American College of Surgeons Committee on Trauma.

    PubMed

    Shafi, Shahid; Barnes, Sunni; Ahn, Chul; Hemilla, Mark R; Cryer, H Gill; Nathens, Avery; Neal, Melanie; Fildes, John

    2016-10-01

    The Trauma Quality Improvement Project of the American College of Surgeons (ACS) has demonstrated variations in trauma center outcomes despite similar verification status. The purpose of this study was to identify structural characteristics of trauma centers that affect patient outcomes. Trauma registry data on 361,187 patients treated at 222 ACS-verified Level I and Level II trauma centers were obtained from the National Trauma Data Bank of ACS. These data were used to estimate each center's observed-to-expected (O-E) mortality ratio with 95% confidence intervals using multivariate logistic regression analysis. De-identified data on structural characteristics of these trauma centers were obtained from the ACS Verification Review Committee. Centers in the lowest quartile of mortality based on O-E ratio (n = 56) were compared to the rest (n = 166) using Classification and Regression Tree (CART) analysis to identify institutional characteristics independently associated with high-performing centers. Of the 72 structural characteristics explored, only 3 were independently associated with high-performing centers: annual patient visits to the emergency department of fewer than 61,000; proportion of patients on Medicare greater than 20%; and continuing medical education for the emergency department physician liaison to the trauma program ranging from 55 to 113 hours annually. Each 5% increase in O-E mortality ratio was associated with an increase in total length of stay of one day (r = 0.25; p < 0.001). Very few structural characteristics of ACS-verified trauma centers are associated with risk-adjusted mortality. Thus, variations in patient outcomes across trauma centers are likely related to variations in clinical practices. Therapeutic study, level III.
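
    As a point of reference for how an observed-to-expected (O-E) mortality ratio is formed once a risk-adjustment model has produced per-patient expected mortality probabilities, a minimal sketch follows; the probabilities are invented, and the study's confidence-interval and CART machinery are not reproduced.

    ```python
    # Hypothetical sketch: observed-to-expected (O-E) mortality ratio for one trauma center.
    # 'expected_probs' would come from a risk-adjustment model (e.g., logistic regression);
    # here they are made-up probabilities for illustration only.

    def oe_ratio(outcomes, expected_probs):
        """Observed deaths divided by the sum of model-expected death probabilities."""
        observed = sum(outcomes)        # 1 = death, 0 = survival
        expected = sum(expected_probs)  # model-predicted probabilities of death
        return observed / expected

    outcomes = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
    expected_probs = [0.05, 0.40, 0.10, 0.02, 0.60, 0.08, 0.03, 0.12, 0.30, 0.05]
    print(round(oe_ratio(outcomes, expected_probs), 2))  # ratio > 1 means more deaths than expected
    ```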

  10. Multi-site assessment of the precision and reproducibility of multiple reaction monitoring–based measurements of proteins in plasma

    PubMed Central

    Addona, Terri A; Abbatiello, Susan E; Schilling, Birgit; Skates, Steven J; Mani, D R; Bunk, David M; Spiegelman, Clifford H; Zimmerman, Lisa J; Ham, Amy-Joan L; Keshishian, Hasmik; Hall, Steven C; Allen, Simon; Blackman, Ronald K; Borchers, Christoph H; Buck, Charles; Cardasis, Helene L; Cusack, Michael P; Dodder, Nathan G; Gibson, Bradford W; Held, Jason M; Hiltke, Tara; Jackson, Angela; Johansen, Eric B; Kinsinger, Christopher R; Li, Jing; Mesri, Mehdi; Neubert, Thomas A; Niles, Richard K; Pulsipher, Trenton C; Ransohoff, David; Rodriguez, Henry; Rudnick, Paul A; Smith, Derek; Tabb, David L; Tegeler, Tony J; Variyath, Asokan M; Vega-Montoto, Lorenzo J; Wahlander, Åsa; Waldemarson, Sofia; Wang, Mu; Whiteaker, Jeffrey R; Zhao, Lei; Anderson, N Leigh; Fisher, Susan J; Liebler, Daniel C; Paulovich, Amanda G; Regnier, Fred E; Tempst, Paul; Carr, Steven A

    2010-01-01

    Verification of candidate biomarkers relies upon specific, quantitative assays optimized for selective detection of target proteins, and is increasingly viewed as a critical step in the discovery pipeline that bridges unbiased biomarker discovery to preclinical validation. Although individual laboratories have demonstrated that multiple reaction monitoring (MRM) coupled with isotope dilution mass spectrometry can quantify candidate protein biomarkers in plasma, reproducibility and transferability of these assays between laboratories have not been demonstrated. We describe a multilaboratory study to assess reproducibility, recovery, linear dynamic range and limits of detection and quantification of multiplexed, MRM-based assays, conducted by NCI-CPTAC. Using common materials and standardized protocols, we demonstrate that these assays can be highly reproducible within and across laboratories and instrument platforms, and are sensitive to low µg/ml protein concentrations in unfractionated plasma. We provide data and benchmarks against which individual laboratories can compare their performance and evaluate new technologies for biomarker verification in plasma. PMID:19561596

  11. Satellite detection of oil on the marine surface

    NASA Technical Reports Server (NTRS)

    Wilson, M. J.; Oneill, P. E.; Estes, J. E.

    1981-01-01

    The ability of two widely dissimilar spaceborne imaging sensors to detect surface oil accumulations in the marine environment has been evaluated using broadly different techniques. Digital Landsat multispectral scanner (MSS) data consisting of two visible and two near infrared channels has been processed to enhance contrast between areas of known oil coverage and background clean surface water. These enhanced images have then been compared to surface verification data gathered by aerial reconnaissance during the October 15, 1975, Landsat overpass. A similar evaluation of oil slick imaging potential has been made for digitally enhanced Seasat-A synthetic aperture radar (SAR) data from July 18, 1979. Due to the premature failure of this satellite, however, no concurrent surface verification data were collected. As a substitute, oil slick configuration information has been generated for the comparison using meteorological and oceanographic data. The test site utilized in both studies was the extensive area of natural seepage located off Coal Oil Point, adjacent to the University of California, Santa Barbara.

  12. Transfer function verification and block diagram simplification of a very high-order distributed pole closed-loop servo by means of non-linear time-response simulation

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, A. K.

    1975-01-01

    Linear frequency domain methods are inadequate in analyzing the 1975 Viking Orbiter (VO75) digital tape recorder servo due to dominant nonlinear effects such as servo signal limiting, unidirectional servo control, and static/dynamic Coulomb friction. The frequency loop (speed control) servo of the VO75 tape recorder is used to illustrate the analytical tools and methodology of system redundancy elimination and high order transfer function verification. The paper compares time-domain performance parameters derived from a series of nonlinear time responses with the available experimental data in order to select the best possible analytical transfer function representation of the tape transport (mechanical segment of the tape recorder) from several possible candidates. The study also shows how an analytical time-response simulation taking into account most system nonlinearities can pinpoint system redundancy and overdesign stemming from a strictly empirical design approach. System order reduction is achieved through truncation of individual transfer functions and elimination of redundant blocks.

  13. 30 CFR 250.909 - What is the Platform Verification Program?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    § 250.909 What is the Platform Verification Program? The Platform Verification Program is the MMS approval process for ensuring that floating platforms; platforms of a new or unique design...

  14. Property-driven functional verification technique for high-speed vision system-on-chip processor

    NASA Astrophysics Data System (ADS)

    Nshunguyimfura, Victor; Yang, Jie; Liu, Liyuan; Wu, Nanjian

    2017-04-01

    The implementation of functional verification in a fast, reliable, and effective manner is a challenging task in a vision chip verification process. The main reason for this challenge is the stepwise nature of existing functional verification techniques. This vision chip verification complexity is also related to the fact that in most vision chip design cycles, extensive efforts are focused on how to optimize chip metrics such as performance, power, and area. Design functional verification is not explicitly considered at an earlier stage at which the most sound decisions are made. In this paper, we propose a semi-automatic property-driven verification technique. The implementation of all verification components is based on design properties. We introduce a low-dimension property space between the specification space and the implementation space. The aim of this technique is to speed up the verification process for high-performance parallel processing vision chips. Our experimentation results show that the proposed technique can effectively improve the verification effort by up to 20% for the complex vision chip design while reducing the simulation and debugging overheads.

  15. Hydrologic data-verification management program plan

    USGS Publications Warehouse

    Alexander, C.W.

    1982-01-01

    Data verification refers to the performance of quality control on hydrologic data that have been retrieved from the field and are being prepared for dissemination to water-data users. Water-data users now have access to computerized data files containing unpublished, unverified hydrologic data. Therefore, it is necessary to develop techniques and systems whereby the computer can perform some data-verification functions before the data are stored in user-accessible files. Computerized data-verification routines can be developed for this purpose. A single, unified concept describing a master data-verification program that uses multiple special-purpose subroutines and a screen file containing verification criteria can probably be adapted to any type and size of computer-processing system. Some traditional manual-verification procedures can be adapted for computerized verification, but new procedures can also be developed that would take advantage of the powerful statistical tools and data-handling procedures available to the computer. Prototype data-verification systems should be developed for all three data-processing environments as soon as possible. The WATSTORE system probably affords the greatest opportunity for long-range research and testing of new verification subroutines. (USGS)
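
    A minimal sketch of the kind of special-purpose screening subroutine the plan describes, with the "screen file" of verification criteria reduced to a plain dictionary; the parameter names and bounds are illustrative assumptions, not part of the USGS plan.

    ```python
    # Hypothetical sketch: screening hydrologic records against a criteria ("screen") file.
    # Criteria map a parameter name to plausible (low, high) bounds; records outside the
    # bounds are flagged for manual review before release to user-accessible files.

    SCREEN_CRITERIA = {
        "discharge_cfs": (0.0, 250000.0),   # illustrative bounds only
        "gage_height_ft": (-1.0, 60.0),
        "water_temp_c": (-0.5, 40.0),
    }

    def screen_record(record, criteria=SCREEN_CRITERIA):
        """Return a list of (parameter, value) pairs that fail the verification criteria."""
        failures = []
        for name, (low, high) in criteria.items():
            value = record.get(name)
            if value is not None and not (low <= value <= high):
                failures.append((name, value))
        return failures

    print(screen_record({"discharge_cfs": 300000.0, "water_temp_c": 12.5}))
    # -> [('discharge_cfs', 300000.0)]
    ```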

  16. Acoustic time-of-flight for proton range verification in water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Kevin C.; Avery, Stephen, E-mail: Stephen.A

    2016-09-15

    Purpose: Measurement of the arrival times of thermoacoustic waves induced by pulsed proton dose depositions (protoacoustics) may provide a proton range verification method. The goal of this study is to characterize the required dose and protoacoustic proton range (distance) verification accuracy in a homogeneous water medium at a hospital-based clinical cyclotron. Methods: Gaussian-like proton pulses with 17 μs widths and instantaneous currents of 480 nA (5.6 × 10^7 protons/pulse, 3.4 cGy/pulse at the Bragg peak) were generated by modulating the cyclotron proton source with a function generator. After energy degradation, the 190 MeV proton pulses irradiated a water phantom, and the generated protoacoustic emissions were measured by a hydrophone. The detector position and proton pulse characteristics were varied. The experimental results were compared to simulations. Different arrival time metrics derived from acoustic waveforms were compared, and the accuracy of protoacoustic time-of-flight distance calculations was assessed. Results: A 27 mPa noise level was observed in the treatment room during irradiation. At 5 cm from the proton beam, an average maximum pressure of 5.2 mPa/1 × 10^7 protons (6.1 mGy at the Bragg peak) was measured after irradiation with a proton pulse with 10%–90% rise time of 11 μs. Simulation and experiment arrival times agreed well, and the observed 2.4 μs delay between simulation and experiment is attributed to the difference between the hydrophone’s acoustic and geometric centers. Based on protoacoustic arrival times, the beam axis position was measured to within (x, y) = (−2.0, 0.5) ± 1 mm. After deconvolution of the exciting proton pulse, the protoacoustic compression peak provided the most consistent measure of the distance to the Bragg peak, with an error distribution with mean = − 4.5 mm and standard deviation = 2.0 mm. Conclusions: Based on water tank measurements at a clinical hospital-based cyclotron, protoacoustics is a potential method for measuring the beam’s position (x and y within 2.0 mm) and Bragg peak range (2.0 mm standard deviation), although range verification will require simulation or experimental calibration to remove systematic error. Based on extrapolation, a protoacoustic arrival time reproducibility of 1.5 μs (2.2 mm) is achievable with 2 Gy of total deposited dose. Of the compared methods, deconvolution of the excitation proton pulse is the best technique for extracting protoacoustic arrival times, particularly if there is variation in the proton pulse shape.
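
    The range calculation itself reduces to a time-of-flight conversion: distance is the speed of sound in water times the measured protoacoustic arrival time. The relation below is a generic statement of that conversion (the nominal speed of sound is an assumption, not a value quoted from the paper); it is consistent with the abstract's correspondence of 1.5 μs to roughly 2.2 mm.

    ```latex
    d = c_{\mathrm{water}}\, t_{\mathrm{arrival}}, \qquad
    c_{\mathrm{water}} \approx 1.48\ \mathrm{mm}/\mu\mathrm{s}
    \;\Rightarrow\;
    \Delta t = 1.5\ \mu\mathrm{s} \;\leftrightarrow\; \Delta d \approx 2.2\ \mathrm{mm}.
    ```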

  17. Acoustic time-of-flight for proton range verification in water.

    PubMed

    Jones, Kevin C; Vander Stappen, François; Sehgal, Chandra M; Avery, Stephen

    2016-09-01

    Measurement of the arrival times of thermoacoustic waves induced by pulsed proton dose depositions (protoacoustics) may provide a proton range verification method. The goal of this study is to characterize the required dose and protoacoustic proton range (distance) verification accuracy in a homogeneous water medium at a hospital-based clinical cyclotron. Gaussian-like proton pulses with 17 μs widths and instantaneous currents of 480 nA (5.6 × 10(7) protons/pulse, 3.4 cGy/pulse at the Bragg peak) were generated by modulating the cyclotron proton source with a function generator. After energy degradation, the 190 MeV proton pulses irradiated a water phantom, and the generated protoacoustic emissions were measured by a hydrophone. The detector position and proton pulse characteristics were varied. The experimental results were compared to simulations. Different arrival time metrics derived from acoustic waveforms were compared, and the accuracy of protoacoustic time-of-flight distance calculations was assessed. A 27 mPa noise level was observed in the treatment room during irradiation. At 5 cm from the proton beam, an average maximum pressure of 5.2 mPa/1 × 10(7) protons (6.1 mGy at the Bragg peak) was measured after irradiation with a proton pulse with 10%-90% rise time of 11 μs. Simulation and experiment arrival times agreed well, and the observed 2.4 μs delay between simulation and experiment is attributed to the difference between the hydrophone's acoustic and geometric centers. Based on protoacoustic arrival times, the beam axis position was measured to within (x, y) = (-2.0,  0.5) ± 1 mm. After deconvolution of the exciting proton pulse, the protoacoustic compression peak provided the most consistent measure of the distance to the Bragg peak, with an error distribution with mean = - 4.5 mm and standard deviation = 2.0 mm. Based on water tank measurements at a clinical hospital-based cyclotron, protoacoustics is a potential method for measuring the beam's position (x and y within 2.0 mm) and Bragg peak range (2.0 mm standard deviation), although range verification will require simulation or experimental calibration to remove systematic error. Based on extrapolation, a protoacoustic arrival time reproducibility of 1.5 μs (2.2 mm) is achievable with 2 Gy of total deposited dose. Of the compared methods, deconvolution of the excitation proton pulse is the best technique for extracting protoacoustic arrival times, particularly if there is variation in the proton pulse shape.

  18. Further Development of Verification Check-Cases for Six- Degree-of-Freedom Flight Vehicle Simulations

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce; Madden, Michael M.; Shelton, Robert; Jackson, A. A.; Castro, Manuel P.; Noble, Deleena M.; Zimmerman, Curtis J.; Shidner, Jeremy D.; White, Joseph P.; Dutta, Doumyo

    2015-01-01

    This follow-on paper describes the principal methods of implementing, and documents the results of exercising, a set of six-degree-of-freedom rigid-body equations of motion and planetary geodetic, gravitation and atmospheric models for simple vehicles in a variety of endo- and exo-atmospheric conditions with various NASA engineering simulation tools and one popular open-source tool. This effort is intended to provide an additional means of verification of flight simulations. The models used in this comparison, as well as the resulting time-history trajectory data, are available electronically for persons and organizations wishing to compare their flight simulation implementations of the same models.

  19. JacketSE: An Offshore Wind Turbine Jacket Sizing Tool; Theory Manual and Sample Usage with Preliminary Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damiani, Rick

    This manual summarizes the theory and preliminary verifications of the JacketSE module, which is an offshore jacket sizing tool that is part of the Wind-Plant Integrated System Design & Engineering Model toolbox. JacketSE is based on a finite-element formulation and on user-prescribed inputs and design standards' criteria (constraints). The physics are highly simplified, with a primary focus on satisfying ultimate limit states and modal performance requirements. Preliminary validation work included comparisons with industry data and verification against ANSYS, a commercial finite-element analysis package. The results are encouraging, and future improvements to the code are recommended in this manual.

  20. Experimental Verification Of The Osculating Cones Method For Two Waverider Forebodies At Mach 4 and 6

    NASA Technical Reports Server (NTRS)

    Miller, Rolf W.; Argrow, Brian M.; Center, Kenneth B.; Brauckmann, Gregory J.; Rhode, Matthew N.

    1998-01-01

    The NASA Langley Research Center Unitary Plan Wind Tunnel and the 20-Inch Mach 6 Tunnel were used to test two osculating cones waverider models. The Mach-4 and Mach-6 shapes were generated using the interactive design tool WIPAR. WIPAR performance predictions are compared to the experimental results. Vapor screen results for the Mach-4 model at the on-design Mach number provide visual verification that the shock is attached along the entire leading edge, within the limits of observation. WIPAR predictions of pressure distributions and aerodynamic coefficients show general agreement with the corresponding experimental values.

  1. Reasoning about Function Objects

    NASA Astrophysics Data System (ADS)

    Nordio, Martin; Calcagno, Cristiano; Meyer, Bertrand; Müller, Peter; Tschannen, Julian

    Modern object-oriented languages support higher-order implementations through function objects such as delegates in C#, agents in Eiffel, or closures in Scala. Function objects bring a new level of abstraction to the object-oriented programming model, and require a comparable extension to specification and verification techniques. We introduce a verification methodology that extends function objects with auxiliary side-effect free (pure) methods to model logical artifacts: preconditions, postconditions and modifies clauses. These pure methods can be used to specify client code abstractly, that is, independently from specific instantiations of the function objects. To demonstrate the feasibility of our approach, we have implemented an automatic prover, which verifies several non-trivial examples.

  2. Cross-Language Phonological Activation of Meaning: Evidence from Category Verification

    ERIC Educational Resources Information Center

    Friesen, Deanna C.; Jared, Debra

    2012-01-01

    The study investigated phonological processing in bilingual reading for meaning. English-French and French-English bilinguals performed a category verification task in either their first or second language. Interlingual homophones (words that share phonology across languages but not orthography or meaning) and single language control words served…

  3. Academic Self-Esteem and Perceived Validity of Grades: A Test of Self-Verification Theory.

    ERIC Educational Resources Information Center

    Okun, Morris A.; Fournet, Lee M.

    1993-01-01

    The hypothesis derived from self-verification theory that semester grade point average would be positively related to perceived validity of grade scores among high self-esteem undergraduates and inversely related for low self-esteem students was not supported in a study with 281 undergraduates. (SLD)

  4. Implementation of Precision Verification Solvents on the External Tank

    NASA Technical Reports Server (NTRS)

    Campbell, M.

    1998-01-01

    This paper presents the Implementation of Precision Verification Solvents on the External Tank. The topics include: 1) Background; 2) Solvent Usages; 3) TCE (Trichloroethylene) Reduction; 4) Solvent Replacement Studies; 5) Implementation; 6) Problems Occurring During Implementation; and 7) Future Work. This paper is presented in viewgraph form.

  5. A UVM simulation environment for the study, optimization and verification of HL-LHC digital pixel readout chips

    NASA Astrophysics Data System (ADS)

    Marconi, S.; Conti, E.; Christiansen, J.; Placidi, P.

    2018-05-01

    The operating conditions of the High Luminosity upgrade of the Large Hadron Collider are very demanding for the design of next generation hybrid pixel readout chips in terms of particle rate, radiation level and data bandwidth. To this purpose, the RD53 Collaboration has developed for the ATLAS and CMS experiments a dedicated simulation and verification environment using industry-consolidated tools and methodologies, such as SystemVerilog and the Universal Verification Methodology (UVM). This paper presents how the so-called VEPIX53 environment has first guided the design of digital architectures, optimized for processing and buffering very high particle rates, and secondly how it has been reused for the functional verification of the first large scale demonstrator chip designed by the collaboration, which has recently been submitted.

  6. Design of verification platform for wireless vision sensor networks

    NASA Astrophysics Data System (ADS)

    Ye, Juanjuan; Shang, Fei; Yu, Chuang

    2017-08-01

    At present, the majority of research on wireless vision sensor networks (WVSNs) still remains at the software simulation stage, and very few verification platforms for WVSNs are available for use. This situation seriously restricts the transition from theoretical research on WVSNs to practical application. Therefore, it is necessary to study the construction of a verification platform for WVSNs. This paper combines a wireless transceiver module, a visual information acquisition module and a power acquisition module, designs a high-performance wireless vision sensor node whose core is an ARM11 microprocessor, and selects AODV as the routing protocol to set up a verification platform called AdvanWorks for WVSNs. Experiments show that AdvanWorks can successfully achieve image acquisition, coding, and wireless transmission, and obtain the effective distance parameters between nodes, which lays a good foundation for follow-up applications of WVSNs.

  7. Comparative and Combinative Study of Urban Heat island in Wuhan City with Remote Sensing and CFD Simulation

    PubMed Central

    Li, Kun; Yu, Zhuang

    2008-01-01

    Urban heat islands are among the most critical urban thermal environment problems. Landsat ETM+ satellite data were used to investigate land surface temperature and underlying surface indices such as NDVI and NDBI. A comparative study of the urban heat environment at different scales, times and locations was done to verify the heat island characteristics. Since remote sensing technology has limitations for dynamic flow analysis in the study of urban spaces, a CFD simulation was used to validate the improvement of the heat environment in a city by means of wind. CFD technology has its own shortcomings in parameter setting and verification, while RS technology is helpful to remedy this. The city of Wuhan and its climatological condition of being hot in summer and cold in winter were chosen to verify the comparative and combinative application of RS with CFD in studying the urban heat island. PMID:27873893

  8. Psychiatric Residents' Attitudes toward and Experiences with the Clinical-Skills Verification Process: A Pilot Study on U.S. and International Medical Graduates

    ERIC Educational Resources Information Center

    Rao, Nyapati R.; Kodali, Rahul; Mian, Ayesha; Ramtekkar, Ujjwal; Kamarajan, Chella; Jibson, Michael D.

    2012-01-01

    Objective: The authors report on a pilot study of the experiences and perceptions of foreign international medical graduate (F-IMG), United States international medical graduate (US-IMG), and United States medical graduate (USMG) psychiatric residents with the newly mandated Clinical Skills Verification (CSV) process. The goal was to identify and…

  9. Space shuttle engineering and operations support. Avionics system engineering

    NASA Technical Reports Server (NTRS)

    Broome, P. A.; Neubaur, R. J.; Welsh, R. T.

    1976-01-01

    The shuttle avionics integration laboratory (SAIL) requirements for supporting the Spacelab/orbiter avionics verification process are defined. The principal topics are a Spacelab avionics hardware assessment, test operations center/electronic systems test laboratory (TOC/ESL) data processing requirements definition, SAIL (Building 16) payload accommodations study, and projected funding and test scheduling. Because of the complex nature of the Spacelab/orbiter computer systems, the PCM data link, and the high rate digital data system hardware/software relationships, early avionics interface verification is required. The SAIL is a prime candidate test location to accomplish this early avionics verification.

  10. Automated finite element meshing of the lumbar spine: Verification and validation with 18 specimen-specific models.

    PubMed

    Campbell, J Q; Coombs, D J; Rao, M; Rullkoetter, P J; Petrella, A J

    2016-09-06

    The purpose of this study was to seek broad verification and validation of human lumbar spine finite element models created using a previously published automated algorithm. The automated algorithm takes segmented CT scans of lumbar vertebrae, automatically identifies important landmarks and contact surfaces, and creates a finite element model. Mesh convergence was evaluated by examining changes in key output variables in response to mesh density. Semi-direct validation was performed by comparing experimental results for a single specimen to the automated finite element model results for that specimen with calibrated material properties from a prior study. Indirect validation was based on a comparison of results from automated finite element models of 18 individual specimens, all using one set of generalized material properties, to a range of data from the literature. A total of 216 simulations were run and compared to 186 experimental data ranges in all six primary bending modes up to 7.8Nm with follower loads up to 1000N. Mesh convergence results showed less than a 5% difference in key variables when the original mesh density was doubled. The semi-direct validation results showed that the automated method produced results comparable to manual finite element modeling methods. The indirect validation results showed a wide range of outcomes due to variations in the geometry alone. The studies showed that the automated models can be used to reliably evaluate lumbar spine biomechanics, specifically within our intended context of use: in pure bending modes, under relatively low non-injurious simulated in vivo loads, to predict torque rotation response, disc pressures, and facet forces. Copyright © 2016 Elsevier Ltd. All rights reserved.
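
    A minimal sketch of the mesh-convergence check described above (key output variables compared after doubling mesh density, with a 5% acceptance criterion); the variable names and values are illustrative assumptions, not the study's outputs.

    ```python
    # Hypothetical sketch: mesh-convergence check comparing key outputs between a baseline
    # mesh and a mesh with doubled density, using a 5% relative-difference criterion.

    def converged(baseline, refined, tol=0.05):
        """True if every shared output variable changes by less than `tol` (relative)."""
        return all(
            abs(refined[k] - baseline[k]) / abs(baseline[k]) < tol
            for k in baseline
        )

    baseline = {"torque_rotation": 1.92, "disc_pressure_MPa": 0.48, "facet_force_N": 31.0}
    refined  = {"torque_rotation": 1.88, "disc_pressure_MPa": 0.49, "facet_force_N": 30.2}
    print(converged(baseline, refined))  # -> True: all changes under 5%
    ```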

  11. Systematic Model-in-the-Loop Test of Embedded Control Systems

    NASA Astrophysics Data System (ADS)

    Krupp, Alexander; Müller, Wolfgang

    Current model-based development processes offer new opportunities for verification automation, e.g., in automotive development. The duty of functional verification is the detection of design flaws. Current functional verification approaches exhibit a major gap between requirement definition and formal property definition, especially when analog signals are involved. Besides a lack of methodical support for natural-language formalization, there is no standardized and accepted means for formal property definition as a target for verification planning. This article addresses several shortcomings of embedded system verification. An Enhanced Classification Tree Method is developed based on the established Classification Tree Method for Embedded Systems (CTM/ES), which applies a hardware verification language to define a verification environment.

  12. Verification of spatial and temporal pressure distributions in segmented solid rocket motors

    NASA Technical Reports Server (NTRS)

    Salita, Mark

    1989-01-01

    A wide variety of analytical tools are in use today to predict the history and spatial distributions of pressure in the combustion chambers of solid rocket motors (SRMs). Experimental and analytical methods are presented here that allow the verification of many of these predictions. These methods are applied to the redesigned space shuttle booster (RSRM). Girth strain-gage data is compared to the predictions of various one-dimensional quasisteady analyses in order to verify the axial drop in motor static pressure during ignition transients as well as quasisteady motor operation. The results of previous modeling of radial flows in the bore, slots, and around grain overhangs are supported by approximate analytical and empirical techniques presented here. The predictions of circumferential flows induced by inhibitor asymmetries, nozzle vectoring, and propellant slump are compared to each other and to subscale cold air and water tunnel measurements to ascertain their validity.

  13. Verification of VLSI designs

    NASA Technical Reports Server (NTRS)

    Windley, P. J.

    1991-01-01

    In this paper we explore the specification and verification of VLSI designs. The paper focuses on abstract specification and verification of functionality using mathematical logic, as opposed to low-level boolean equivalence verification such as that done using BDDs and model checking. Specification and verification, sometimes called formal methods, are one tool for increasing computer dependability in the face of an exponentially increasing testing effort.

  14. Using SysML for verification and validation planning on the Large Synoptic Survey Telescope (LSST)

    NASA Astrophysics Data System (ADS)

    Selvy, Brian M.; Claver, Charles; Angeli, George

    2014-08-01

    This paper provides an overview of the tool, language, and methodology used for Verification and Validation Planning on the Large Synoptic Survey Telescope (LSST) Project. LSST has implemented a Model Based Systems Engineering (MBSE) approach as a means of defining all systems engineering planning and definition activities that have historically been captured in paper documents. Specifically, LSST has adopted the Systems Modeling Language (SysML) standard and is utilizing a software tool called Enterprise Architect, developed by Sparx Systems. Much of the historical use of SysML has focused on the early phases of the project life cycle. Our approach is to extend the advantages of MBSE into later stages of the construction project. This paper details the methodology employed to use the tool to document the verification planning phases, including the extension of the language to accommodate the project's needs. The process includes defining the Verification Plan for each requirement, which in turn consists of a Verification Requirement, Success Criteria, Verification Method(s), Verification Level, and Verification Owner. Each Verification Method for each Requirement is defined as a Verification Activity and mapped into Verification Events, which are collections of activities that can be executed concurrently in an efficient and complementary way. Verification Event dependency and sequences are modeled using Activity Diagrams. The methodology employed also ties in to the Project Management Control System (PMCS), which utilizes Primavera P6 software, mapping each Verification Activity as a step in a planned activity. This approach leads to full traceability from initial Requirement to scheduled, costed, and resource loaded PMCS task-based activities, ensuring all requirements will be verified.

  15. Design for Verification: Enabling Verification of High Dependability Software-Intensive Systems

    NASA Technical Reports Server (NTRS)

    Mehlitz, Peter C.; Penix, John; Markosian, Lawrence Z.; Koga, Dennis (Technical Monitor)

    2003-01-01

    Strategies to achieve confidence that high-dependability applications are correctly implemented include testing and automated verification. Testing deals mainly with a limited number of expected execution paths. Verification usually attempts to deal with a larger number of possible execution paths. While the impact of architecture design on testing is well known, its impact on most verification methods is not as well understood. The Design for Verification approach considers verification from the application development perspective, in which system architecture is designed explicitly according to the application's key properties. The D4V-hypothesis is that the same general architecture and design principles that lead to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the constraints on verification tools, such as the production of hand-crafted models and the limits on dynamic and static analysis caused by state space explosion.

  16. Space Shuttle Range Safety Command Destruct System Analysis and Verification. Phase 1. Destruct System Analysis and Verification

    DTIC Science & Technology

    1981-03-01

    overcome the shortcomings of this system. A Phase III study develops the breakup model of the Space Shuttle cluster at various times into flight. (The remainder of the record consists of table-of-contents fragments: Rocket Model; Combustion Chamber Operation; Results.)

  17. Why Verifying Diagnostic Decisions with a Checklist Can Help: Insights from Eye Tracking

    ERIC Educational Resources Information Center

    Sibbald, Matthew; de Bruin, Anique B. H.; Yu, Eric; van Merrienboer, Jeroen J. G.

    2015-01-01

    Making a diagnosis involves ratifying or verifying a proposed answer. Formalizing this verification process with checklists, which highlight key variables involved in the diagnostic decision, is often advocated. However, the mechanisms by which a checklist might allow clinicians to improve their verification process have not been well studied. We…

  18. 38 CFR 21.7652 - Certification of enrollment and verification of pursuit.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... maintain daily attendance records for any course leading to a standard college degree. (a) Content of... breaks between school years. (3) When a reservist enrolls in independent study leading to a standard...) Verification of pursuit. (1) A reservist who is pursuing a course leading to a standard college degree must...

  19. The Interaction between Surface Color and Color Knowledge: Behavioral and Electrophysiological Evidence

    ERIC Educational Resources Information Center

    Bramao, Ines; Faisca, Luis; Forkstam, Christian; Inacio, Filomena; Araujo, Susana; Petersson, Karl Magnus; Reis, Alexandra

    2012-01-01

    In this study, we used event-related potentials (ERPs) to evaluate the contribution of surface color and color knowledge information in object identification. We constructed two color-object verification tasks--a surface and a knowledge verification task--using high color diagnostic objects; both typical and atypical color versions of the same…

  20. 38 CFR 21.7652 - Certification of enrollment and verification of pursuit.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... maintain daily attendance records for any course leading to a standard college degree. (a) Content of... breaks between school years. (3) When a reservist enrolls in independent study leading to a standard...) Verification of pursuit. (1) A reservist who is pursuing a course leading to a standard college degree must...

  1. Implementation of a Blowing Boundary Condition in the LAURA Code

    NASA Technical Reports Server (NTRS)

    Thompson, Richard A.; Gnoffo, Peter A.

    2008-01-01

    Preliminary steps toward modeling a coupled ablation problem using a finite-volume Navier-Stokes code (LAURA) are presented in this paper. Implementation of a surface boundary condition with mass transfer (blowing) is described followed by verification and validation through comparisons with analytic results and experimental data. Application of the code to a carbon-nosetip ablation problem is demonstrated and the results are compared with previously published data. It is concluded that the code and coupled procedure are suitable to support further ablation analyses and studies.

  2. BASIC BIOCHEMICAL AND CLINICAL ASPECTS OF NONINVASIVE TESTS HELIC.

    PubMed

    Dmitrienko, M A; Dmitrienko, V S; Kornienko, E A; Parolova, N I; Colomina, E O; Aronov, E B

    The biochemical process that lies at the core of noninvasive detection of Helicobacter pylori with the help of the HELIC ammonia breath test, manufactured by AMA Co Ltd., St. Petersburg, is described. Patents from various countries describing ammonia as an H. pylori diagnostic marker are reviewed. Approaches for evaluating the efficacy of the test system are analyzed, and validation and verification data are provided. High diagnostic characteristics are confirmed by the results of comparative studies on patients of different age groups, reaching 97% sensitivity and 96% specificity.

  3. Analyzing Personalized Policies for Online Biometric Verification

    PubMed Central

    Sadhwani, Apaar; Yang, Yan; Wein, Lawrence M.

    2014-01-01

    Motivated by India’s nationwide biometric program for social inclusion, we analyze verification (i.e., one-to-one matching) in the case where we possess similarity scores for 10 fingerprints and two irises between a resident’s biometric images at enrollment and his biometric images during his first verification. At subsequent verifications, we allow individualized strategies based on these 12 scores: we acquire a subset of the 12 images, get new scores for this subset that quantify the similarity to the corresponding enrollment images, and use the likelihood ratio (i.e., the likelihood of observing these scores if the resident is genuine divided by the corresponding likelihood if the resident is an imposter) to decide whether a resident is genuine or an imposter. We also consider two-stage policies, where additional images are acquired in a second stage if the first-stage results are inconclusive. Using performance data from India’s program, we develop a new probabilistic model for the joint distribution of the 12 similarity scores and find near-optimal individualized strategies that minimize the false reject rate (FRR) subject to constraints on the false accept rate (FAR) and mean verification delay for each resident. Our individualized policies achieve the same FRR as a policy that acquires (and optimally fuses) 12 biometrics for each resident, which represents a five (four, respectively) log reduction in FRR relative to fingerprint (iris, respectively) policies previously proposed for India’s biometric program. The mean delay is sec for our proposed policy, compared to 30 sec for a policy that acquires one fingerprint and 107 sec for a policy that acquires all 12 biometrics. This policy acquires iris scans from 32–41% of residents (depending on the FAR) and acquires an average of 1.3 fingerprints per resident. PMID:24787752
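
    A minimal sketch of the likelihood-ratio accept/reject rule described above; Gaussian, independent score densities and the decision threshold are assumed purely for illustration and do not reproduce the study's fitted joint model over the 12 similarity scores.

    ```python
    # Hypothetical sketch: likelihood-ratio decision for biometric verification.
    # Assumes (for illustration only) Gaussian, independent similarity scores.
    import math

    def gaussian_pdf(x, mean, std):
        return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

    def likelihood_ratio(scores, genuine=(0.8, 0.1), imposter=(0.3, 0.15)):
        """Product over scores of P(score | genuine) / P(score | imposter)."""
        lr = 1.0
        for s in scores:
            lr *= gaussian_pdf(s, *genuine) / gaussian_pdf(s, *imposter)
        return lr

    def accept(scores, threshold=1.0):
        """Accept the resident as genuine if the likelihood ratio exceeds the threshold."""
        return likelihood_ratio(scores) > threshold

    print(accept([0.75, 0.82, 0.90]))  # -> True under these toy densities
    ```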

  4. Calibration and verification of thermographic cameras for geometric measurements

    NASA Astrophysics Data System (ADS)

    Lagüela, S.; González-Jorge, H.; Armesto, J.; Arias, P.

    2011-03-01

    Infrared thermography is a technique with an increasing degree of development and applications. Quality assessment of the measurements performed with thermal cameras should be achieved through metrological calibration and verification. Infrared cameras acquire temperature and geometric information, although calibration and verification procedures are usual only for thermal data. Black bodies are used for these purposes. Moreover, the geometric information is important for many fields such as architecture, civil engineering and industry. This work presents a calibration procedure that allows the photogrammetric restitution and a portable artefact to verify the geometric accuracy, repeatability and drift of thermographic cameras. These results allow the incorporation of this information into the quality control processes of the companies. A grid based on burning lamps is used for the geometric calibration of thermographic cameras. The artefact designed for the geometric verification consists of five delrin spheres and seven cubes of different sizes. Metrology traceability for the artefact is obtained from a coordinate measuring machine. Two sets of targets with different reflectivity are fixed to the spheres and cubes to make data processing and photogrammetric restitution possible. Reflectivity was the chosen material property due to the thermographic and visual cameras' ability to detect it. Two thermographic cameras from Flir and Nec manufacturers, and one visible camera from Jai, are calibrated, verified and compared using calibration grids and the standard artefact. The calibration system based on burning lamps shows its capability to perform the internal orientation of the thermal cameras. Verification results show repeatability better than 1 mm for all cases, and better than 0.5 mm for the visible camera. As expected, accuracy also appears higher in the visible camera, and the geometric comparison between thermographic cameras shows slightly better results for the Nec camera.

  5. Analyzing personalized policies for online biometric verification.

    PubMed

    Sadhwani, Apaar; Yang, Yan; Wein, Lawrence M

    2014-01-01

    Motivated by India's nationwide biometric program for social inclusion, we analyze verification (i.e., one-to-one matching) in the case where we possess similarity scores for 10 fingerprints and two irises between a resident's biometric images at enrollment and his biometric images during his first verification. At subsequent verifications, we allow individualized strategies based on these 12 scores: we acquire a subset of the 12 images, get new scores for this subset that quantify the similarity to the corresponding enrollment images, and use the likelihood ratio (i.e., the likelihood of observing these scores if the resident is genuine divided by the corresponding likelihood if the resident is an imposter) to decide whether a resident is genuine or an imposter. We also consider two-stage policies, where additional images are acquired in a second stage if the first-stage results are inconclusive. Using performance data from India's program, we develop a new probabilistic model for the joint distribution of the 12 similarity scores and find near-optimal individualized strategies that minimize the false reject rate (FRR) subject to constraints on the false accept rate (FAR) and mean verification delay for each resident. Our individualized policies achieve the same FRR as a policy that acquires (and optimally fuses) 12 biometrics for each resident, which represents a five (four, respectively) log reduction in FRR relative to fingerprint (iris, respectively) policies previously proposed for India's biometric program. The mean delay is [Formula: see text] sec for our proposed policy, compared to 30 sec for a policy that acquires one fingerprint and 107 sec for a policy that acquires all 12 biometrics. This policy acquires iris scans from 32-41% of residents (depending on the FAR) and acquires an average of 1.3 fingerprints per resident.

  6. A quantification of the effectiveness of EPID dosimetry and software-based plan verification systems in detecting incidents in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bojechko, Casey; Phillps, Mark; Kalet, Alan

    Purpose: Complex treatments in radiation therapy require robust verification in order to prevent errors that can adversely affect the patient. For this purpose, the authors estimate the effectiveness of detecting errors with a “defense in depth” system composed of electronic portal imaging device (EPID) based dosimetry and a software-based system composed of rules-based and Bayesian network verifications. Methods: The authors analyzed incidents with a high potential severity score, scored as a 3 or 4 on a 4 point scale, recorded in an in-house voluntary incident reporting system, collected from February 2012 to August 2014. The incidents were categorized into different failure modes. The detectability, defined as the number of incidents that are detectable divided by the total number of incidents, was calculated for each failure mode. Results: In total, 343 incidents were used in this study. Of the incidents 67% were related to photon external beam therapy (EBRT). The majority of the EBRT incidents were related to patient positioning and only a small number of these could be detected by EPID dosimetry when performed prior to treatment (6%). A large fraction could be detected by in vivo dosimetry performed during the first fraction (74%). Rules-based and Bayesian network verifications were found to be complementary to EPID dosimetry, able to detect errors related to patient prescriptions and documentation, and errors unrelated to photon EBRT. Combining all of the verification steps together, 91% of all EBRT incidents could be detected. Conclusions: This study shows that the defense in depth system is potentially able to detect a large majority of incidents. The most effective EPID-based dosimetry verification is in vivo measurements during the first fraction and is complemented by rules-based and Bayesian network plan checking.
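
    A minimal sketch of the detectability metric as defined in the abstract (detectable incidents divided by total incidents, computed per failure mode); the incident records are invented for illustration.

    ```python
    # Hypothetical sketch: detectability per failure mode, as defined in the abstract.
    from collections import defaultdict

    def detectability_by_mode(incidents):
        """Map failure mode -> fraction of its incidents flagged as detectable."""
        counts = defaultdict(lambda: [0, 0])        # mode -> [detectable, total]
        for mode, detectable in incidents:
            counts[mode][0] += int(detectable)
            counts[mode][1] += 1
        return {mode: det / tot for mode, (det, tot) in counts.items()}

    incidents = [
        ("patient positioning", False),
        ("patient positioning", True),
        ("prescription/documentation", True),
        ("prescription/documentation", True),
    ]
    print(detectability_by_mode(incidents))
    # -> {'patient positioning': 0.5, 'prescription/documentation': 1.0}
    ```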

  7. Impact of a quality-assessment dashboard on the comprehensive review of pharmacist performance.

    PubMed

    Trinh, Long D; Roach, Erin M; Vogan, Eric D; Lam, Simon W; Eggers, Garrett G

    2017-09-01

    The impact of a quality-assessment dashboard and individualized pharmacist performance feedback on the adherence of order verification was evaluated. A before-and-after study was conducted at a 1,440-bed academic medical center. Adherence of order verification was defined as orders verified according to institution-derived, medication-related guidelines and policies. Formulas were developed to assess the adherence of verified orders to dosing guidelines using patient-specific height, weight, and serum creatinine clearance values from the electronic medical record at the time of pharmacist verification. A total of 5 medications were assessed by the formulas for adherence and displayed on the dashboard: ampicillin-sulbactam, ciprofloxacin, piperacillin-tazobactam, acyclovir, and enoxaparin. Adherence of order verification was assessed before (May 1-July 31, 2015) and after (November 1, 2015-January 31, 2016) individualized performance feedback was given based on trends identified by the quality-assessment dashboard. There was a significant increase in the overall adherence rate postintervention (90.1% versus 91.9%, p = 0.040). Among the 34 pharmacists who participated, the percentage of pharmacists with at least 90% overall adherence increased postintervention (52.9% versus 70.6%, p = 0.103). Time to verification was similar before and after the study intervention (median, 6.0 minutes; interquartile range, 3-13 minutes). The rate of documentation for nonadherent orders increased significantly postintervention (57.1% versus 68.5%, p = 0.019). The implementation of the quality-assessment dashboard, educational sessions, and individualized performance feedback significantly improved pharmacist order-verification adherence to institution-derived, medication-related guidelines and policies and the documentation rate of nonadherent orders. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
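
    A minimal sketch of a before/after comparison of adherence proportions using a two-proportion z-test; the counts are invented, and the choice of test is an assumption rather than necessarily the analysis used in the study.

    ```python
    # Hypothetical sketch: comparing order-verification adherence before vs. after feedback
    # with a two-proportion z-test. Counts are illustrative, not the study's data.
    import math

    def two_proportion_z(x1, n1, x2, n2):
        """z statistic for the difference between two proportions x2/n2 and x1/n1."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p2 - p1) / se

    # e.g. 9,010/10,000 adherent orders before and 9,190/10,000 after (made-up counts)
    print(round(two_proportion_z(9010, 10000, 9190, 10000), 2))
    ```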

  8. Accuracy of Intraoperative Computed Tomography during Deep Brain Stimulation Procedures: Comparison with Postoperative Magnetic Resonance Imaging

    PubMed Central

    Bot, Maarten; van den Munckhof, Pepijn; Bakay, Roy; Stebbins, Glenn; Verhagen Metman, Leo

    2017-01-01

    Objective To determine the accuracy of intraoperative computed tomography (iCT) in localizing deep brain stimulation (DBS) electrodes by comparing this modality with postoperative magnetic resonance imaging (MRI). Background Optimal lead placement is a critical factor for the outcome of DBS procedures and preferably confirmed during surgery. iCT offers 3-dimensional verification of both microelectrode and lead location during DBS surgery. However, accurate electrode representation on iCT has not been extensively studied. Methods DBS surgery was performed using the Leksell stereotactic G frame. Stereotactic coordinates of 52 DBS leads were determined on both iCT and postoperative MRI and compared with intended final target coordinates. The resulting absolute differences in X (medial-lateral), Y (anterior-posterior), and Z (dorsal-ventral) coordinates (ΔX, ΔY, and ΔZ) for both modalities were then used to calculate the euclidean distance. Results Euclidean distances were 2.7 ± 1.1 and 2.5 ± 1.2 mm for MRI and iCT, respectively (p = 0.2). Conclusion Postoperative MRI and iCT show equivalent DBS lead representation. Intraoperative localization of both microelectrode and DBS lead in stereotactic space enables direct adjustments. Verification of lead placement with postoperative MRI, considered to be the gold standard, is unnecessary. PMID:28601874
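
    The reported euclidean distance is the usual root-sum-of-squares of the per-axis absolute coordinate differences, stated here in generic notation for reference.

    ```latex
    d = \sqrt{\Delta X^{2} + \Delta Y^{2} + \Delta Z^{2}}
    ```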

  9. Accuracy of Intraoperative Computed Tomography during Deep Brain Stimulation Procedures: Comparison with Postoperative Magnetic Resonance Imaging.

    PubMed

    Bot, Maarten; van den Munckhof, Pepijn; Bakay, Roy; Stebbins, Glenn; Verhagen Metman, Leo

    2017-01-01

    To determine the accuracy of intraoperative computed tomography (iCT) in localizing deep brain stimulation (DBS) electrodes by comparing this modality with postoperative magnetic resonance imaging (MRI). Optimal lead placement is a critical factor for the outcome of DBS procedures and preferably confirmed during surgery. iCT offers 3-dimensional verification of both microelectrode and lead location during DBS surgery. However, accurate electrode representation on iCT has not been extensively studied. DBS surgery was performed using the Leksell stereotactic G frame. Stereotactic coordinates of 52 DBS leads were determined on both iCT and postoperative MRI and compared with intended final target coordinates. The resulting absolute differences in X (medial-lateral), Y (anterior-posterior), and Z (dorsal-ventral) coordinates (ΔX, ΔY, and ΔZ) for both modalities were then used to calculate the euclidean distance. Euclidean distances were 2.7 ± 1.1 and 2.5 ± 1.2 mm for MRI and iCT, respectively (p = 0.2). Postoperative MRI and iCT show equivalent DBS lead representation. Intraoperative localization of both microelectrode and DBS lead in stereotactic space enables direct adjustments. Verification of lead placement with postoperative MRI, considered to be the gold standard, is unnecessary. © 2017 The Author(s) Published by S. Karger AG, Basel.

  10. [Determinants of task preferences when performance is indicative of individual characteristics: self-assessment motivation and self-verification motivation].

    PubMed

    Numazaki, M; Kudo, E

    1995-04-01

    The present study was conducted to examine the determinants of information-gathering behavior with regard to one's own characteristics. Four tasks differing in self-congruent and self-incongruent diagnosticity were presented to subjects. As self-assessment theory predicted, highly diagnostic tasks were preferred to less diagnostic ones. As self-verification theory predicted, self-congruent diagnosticity had a stronger effect on task preference than self-incongruent diagnosticity. In addition, subjects who perceived the relevant characteristics as important were more inclined to choose self-assessment behavior than those who did not, and subjects who were certain of their self-concept were more inclined to choose self-verification behavior than those who were not. These results suggest that both self-assessment and self-verification motivations play important roles in information-gathering behavior regarding one's own characteristics, and that the strength of each motivation is determined by the importance of the relevant characteristics or the certainty of the self-concept.

  11. Comprehending how visual context influences incremental sentence processing: insights from ERPs and picture-sentence verification

    PubMed Central

    Knoeferle, Pia; Urbach, Thomas P.; Kutas, Marta

    2010-01-01

    To re-establish picture-sentence verification (possibly discredited for its over-reliance on post-sentence response time (RT) measures) as a task for situated comprehension, we collected event-related brain potentials (ERPs) as participants read a subject-verb-object sentence, and RTs indicating whether or not the verb matched a previously depicted action. For mismatches (vs matches), speeded RTs were longer, verb N400s over centro-parietal scalp larger, and ERPs to the object noun more negative. RTs (congruence effect) correlated inversely with the centro-parietal verb N400s, and positively with the object ERP congruence effects. Verb N400s, object ERPs, and verbal working memory scores predicted more variance in RT effects (50%) than N400s alone. Thus, (1) verification processing is not all post-sentence; (2) simple priming cannot account for these results; and (3) verification tasks can inform studies of situated comprehension. PMID:20701712

  12. Expert system verification and validation study. Delivery 3A and 3B: Trip summaries

    NASA Technical Reports Server (NTRS)

    French, Scott

    1991-01-01

    Key results are documented from attending the 4th workshop on verification, validation, and testing. The most interesting part of the workshop was when representatives from the U.S., Japan, and Europe presented surveys of VV&T within their respective regions. Another interesting part focused on current efforts to define industry standards for artificial intelligence and how that might affect approaches to VV&T of expert systems. The next part of the workshop focused on VV&T methods of applying mathematical techniques to the verification of rule bases and techniques for capturing information relating to the process of developing software. The final part focused on software tools. A summary is also presented of the EPRI conference on 'Methodologies, Tools, and Standards for Cost Effective Reliable Software Verification and Validation.' The conference was divided into discussion sessions on the following issues: development process, automated tools, software reliability, methods, standards, and cost/benefit considerations.

  13. PERFORMANCE VERIFICATION OF ANIMAL WATER TREATMENT TECHNOLOGIES THROUGH EPA'S ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM

    EPA Science Inventory

    The U.S. Environmental Protection Agency created the Environmental Technology Verification Program (ETV) to further environmental protection by accelerating the commercialization of new and innovative technology through independent performance verification and dissemination of in...

  14. National Centers for Environmental Prediction

    Science.gov Websites

    Web page providing links to operational and experimental model forecast graphics, operational and parallel verification and diagnostics, and developmental air quality forecasts and verification.

  15. National Centers for Environmental Prediction

    Science.gov Websites

    Web page providing links to operational and experimental model forecast graphics, model configuration information, operational and parallel verification and diagnostics, and developmental air quality forecasts and verification.

  16. HDL to verification logic translator

    NASA Technical Reports Server (NTRS)

    Gambles, J. W.; Windley, P. J.

    1992-01-01

    The ever-increasing number of transistors in VLSI circuits compounds the difficulty of ensuring correct designs. As the number of test cases required to exhaustively simulate a circuit design explodes, a better method is required to confirm the absence of design faults. Formal verification methods provide a way to prove, using logic, that a circuit structure correctly implements its specification. Before verification is accepted by VLSI design engineers, the stand-alone verification tools in use in the research community must be integrated with the CAD tools used by the designers. One problem facing the acceptance of formal verification into circuit design methodology is that the structural circuit descriptions used by the designers are not appropriate for verification work, while those required for verification lack some of the features needed for design. We offer a solution to this dilemma: an automatic translation from the designers' HDL models into definitions for the higher-order logic (HOL) verification system. The translated definitions become the low-level basis of circuit verification, which in turn increases the designer's confidence in the correctness of higher-level behavioral models.

  17. Code and Solution Verification of 3D Numerical Modeling of Flow in the Gust Erosion Chamber

    NASA Astrophysics Data System (ADS)

    Yuen, A.; Bombardelli, F. A.

    2014-12-01

    Erosion microcosms are devices commonly used to investigate the erosion and transport characteristics of sediments at the bed of rivers, lakes, or estuaries. In order to understand the results these devices provide, the bed shear stress and flow field need to be accurately described. In this research, the UMCES Gust Erosion Microcosm System (U-GEMS) is numerically modeled using the Finite Volume Method. The primary aims are to simulate the bed shear stress distribution at the surface of the sediment core/bottom of the microcosm, and to assess whether the U-GEMS produces uniform bed shear stress at the bottom of the microcosm. The mathematical model equations are solved on a non-uniform Cartesian grid. Multiple numerical runs were performed with different input conditions and configurations. Prior to developing the U-GEMS model, the General Moving Objects (GMO) model and different momentum algorithms in the code were verified. Code verification of these solvers was done by simulating the flow inside a top-wall-driven square cavity on different mesh sizes to obtain the order of convergence. The GMO model was used to simulate the moving top wall in the driven square cavity as well as the rotating disk in the U-GEMS; components simulated with the GMO model were rigid bodies that could have any type of motion. In addition, cross-verification was conducted by comparing results with the numerical results of Ghia et al. (1982), and good agreement was found. Next, CFD results were validated by simulating the flow within the conventional microcosm system without suction and injection; good agreement was found with the experimental results of Khalili et al. (2008). After the ability of the CFD solver was established through these code verification steps, the model was used to simulate the U-GEMS. The solution was verified via a classic mesh convergence study on four consecutive mesh sizes; in addition, the Grid Convergence Index (GCI) was calculated and used to quantify the computational uncertainty, as sketched below. The numerical results reveal that the bed shear stress distribution for the U-GEMS model was not uniform: the mean and standard deviation of the bed shear stress were 0.04 Pa and 0.019 Pa, respectively.
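    The abstract does not give the study's grid-convergence numbers in detail; the following is a minimal sketch of the standard Richardson-extrapolation procedure behind the observed order of convergence and the Grid Convergence Index (GCI) mentioned above. The sample values are illustrative, not results from the U-GEMS simulations.

```python
# Sketch of grid-convergence metrics: observed order of convergence and GCI,
# following the standard Richardson-extrapolation procedure. The sample values
# below are illustrative only.
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of convergence p for a constant refinement ratio r."""
    return math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)

def gci_fine(f_medium, f_fine, r, p, safety_factor=1.25):
    """Grid Convergence Index (as a fraction) on the fine grid."""
    eps = abs((f_medium - f_fine) / f_fine)   # relative change between grids
    return safety_factor * eps / (r**p - 1.0)

# Illustrative bed-shear-stress values from three successively refined grids.
f3, f2, f1 = 0.046, 0.042, 0.041          # coarse, medium, fine
p = observed_order(f3, f2, f1, r=2.0)
print(f"observed order p = {p:.2f}, GCI_fine = {gci_fine(f2, f1, 2.0, p):.3%}")
```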

  18. New generalized Noh solutions for HEDP hydrocode verification

    NASA Astrophysics Data System (ADS)

    Velikovich, A. L.; Giuliani, J. L.; Zalesak, S. T.; Tangri, V.

    2017-10-01

    The classic Noh solution describing stagnation of a cold ideal gas in a strong accretion shock wave has been the workhorse of compressible hydrocode verification for over three decades. We describe a number of its generalizations available for HEDP code verification. First, for an ideal gas, we have obtained self-similar solutions that describe adiabatic convergence either of a finite-pressure gas into an empty cavity or of a finite-amplitude sound wave into a uniform resting gas surrounding the center or axis of symmetry. At the moment of collapse such a flow produces a uniform gas whose velocity at each point is constant and directed towards the axis or the center, i. e. the initial condition similar to the classic solution but with a finite pressure of the converging gas. After that, a constant-velocity accretion shock propagates into the incident gas whose pressure and velocity profiles are not flat, in contrast with the classic solution. Second, for an arbitrary equation of state, we demonstrate the existence of self-similar solutions of the Noh problem in cylindrical and spherical geometry. Examples of such solutions with a three-term equation of state that includes cold, thermal ion/lattice, and thermal electron contributions are presented for aluminum and copper. These analytic solutions are compared to our numerical simulation results as an example of their use for code verification. Work supported by the US DOE/NNSA.

  19. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, Ronald M.

    2015-01-01

    The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASAGMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  20. Experimental demonstration of an isotope-sensitive warhead verification technique using nuclear resonance fluorescence.

    PubMed

    Vavrek, Jayson R; Henderson, Brian S; Danagoulian, Areg

    2018-04-24

    Future nuclear arms reduction efforts will require technologies to verify that warheads slated for dismantlement are authentic without revealing any sensitive weapons design information to international inspectors. Despite several decades of research, no technology has met these requirements simultaneously. Recent work by Kemp et al. [Kemp RS, Danagoulian A, Macdonald RR, Vavrek JR (2016) Proc Natl Acad Sci USA 113:8618-8623] has produced a novel physical cryptographic verification protocol that approaches this treaty verification problem by exploiting the isotope-specific nature of nuclear resonance fluorescence (NRF) measurements to verify the authenticity of a warhead. To protect sensitive information, the NRF signal from the warhead is convolved with that of an encryption foil that contains key warhead isotopes in amounts unknown to the inspector. The convolved spectrum from a candidate warhead is statistically compared against that from an authenticated template warhead to determine whether the candidate itself is authentic. Here we report on recent proof-of-concept warhead verification experiments conducted at the Massachusetts Institute of Technology. Using high-purity germanium (HPGe) detectors, we measured NRF spectra from the interrogation of proxy "genuine" and "hoax" objects by a 2.52 MeV endpoint bremsstrahlung beam. The observed differences in NRF intensities near 2.2 MeV indicate that the physical cryptographic protocol can distinguish between proxy genuine and hoax objects with high confidence in realistic measurement times.

  1. Comparative evaluation of Kodak EDR2 and XV2 films for verification of intensity modulated radiation therapy.

    PubMed

    Dogan, Nesrin; Leybovich, Leonid B; Sethi, Anil

    2002-11-21

    Film dosimetry provides a convenient tool to determine dose distributions, especially for verification of IMRT plans. However, the film response to radiation shows a significant dependence on depth, energy and field size that compromises the accuracy of measurements. Kodak's XV2 film has a low saturation dose (approximately 100 cGy) and, consequently, a relatively short region of linear dose-response. The recently introduced Kodak extended range EDR2 film was reported to have a linear dose-response region extending to 500 cGy. This increased dose range may be particularly useful in the verification of IMRT plans. In this work, the dependence of Kodak EDR2 film's response on depth, field size and energy was evaluated and compared with that of Kodak XV2 film. Co-60, 6 MV, 10 MV and 18 MV beams were used. Field sizes were 2 x 2, 6 x 6, 10 x 10, 14 x 14, 18 x 18 and 24 x 24 cm2. Doses for XV2 and EDR2 films were 80 cGy and 300 cGy, respectively. Optical density was converted to dose using depth-corrected sensitometric (Hurter and Driffield, or H&D) curves, as illustrated in the sketch below. For each field size, XV2 and EDR2 depth-dose curves were compared with ion chamber depth-dose curves. Both films demonstrated similar (within 1%) field size dependence. The deviation from the ion chamber for both films was small for the fields ranging from 2 x 2 to 10 x 10 cm2: 2% or less for the 6, 10 and 18 MV beams. No deviation was observed for the Co-60 beam. As the field size increased to 24 x 24 cm2, the deviation became significant for both films: approximately 7.5% for Co-60, approximately 5% for 6 MV and 10 MV, and approximately 6% for 18 MV. During the verification of IMRT plans, EDR2 film showed better agreement with the calculated dose distributions than the XV2 film.
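    A minimal sketch of the optical-density-to-dose conversion described above, assuming a polynomial fit to sensitometric (H&D) calibration data; the calibration points, the polynomial order, and the omission of the depth correction are placeholders for illustration, not the study's actual calibration.

```python
# Hedged sketch of film calibration: fit an H&D-style calibration from films
# exposed to known doses, then invert it to convert a measured net optical
# density to dose. The calibration values below are made up for illustration.
import numpy as np

cal_dose = np.array([0.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0])   # cGy
cal_od   = np.array([0.0, 0.35, 0.68, 1.25, 1.74, 2.15, 2.50])        # net OD

# Fit dose as a function of optical density (the inverse sensitometric curve).
coeffs = np.polyfit(cal_od, cal_dose, deg=3)
od_to_dose = np.poly1d(coeffs)

measured_od = 1.5
print(f"estimated dose: {od_to_dose(measured_od):.1f} cGy")
```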

  2. SU-E-T-278: Realization of Dose Verification Tool for IMRT Plan Based On DPM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Jinfeng; Cao, Ruifen; Dai, Yumei

    Purpose: To build a Monte Carlo dose verification tool for IMRT plans by implementing an irradiation source model in the DPM code, extending the ability of DPM to calculate arbitrary incident angles and irregular, inhomogeneous fields. Methods: The virtual source and the energy spectrum unfolded from accelerator measurement data were combined with optimized intensity maps to calculate the dose distribution of an irregular, inhomogeneous irradiation field. The accelerator's irradiation source model was replaced by a grid-based surface source. The contour and the intensity distribution of the surface source were optimized by the ARTS (Accurate/Advanced Radiotherapy System) optimization module based on the tumor configuration. The weight of each emitter was determined by the grid intensity, and its direction by the combination of the virtual source and the emitter's emitting position. The photon energy spectrum unfolded from the accelerator measurement data was adjusted by compensating for the contaminant electron source. For verification, measured data and a realistic clinical IMRT plan were compared with the DPM dose calculation. Results: The regular field was verified by comparison with the measured data; the differences were acceptable (<2% inside the field, 2-3 mm in the penumbra). The dose calculation of the irregular field by DPM simulation was also compared with that of FSPB (Finite Size Pencil Beam), and the passing rate of the gamma analysis was 95.1% for a peripheral lung cancer case. The regular field and the irregular rotational field were all within the permissible error. The computing time for regular fields was less than 2 h, and the peripheral lung cancer test took 160 min. Through parallel processing, the adapted DPM could complete the calculation of an IMRT plan within half an hour. Conclusion: The adapted parallelized DPM code with the irradiation source model is faster than classic Monte Carlo codes. Its computational accuracy and speed satisfy clinical requirements, and it is expected to serve as a Monte Carlo dose verification tool for IMRT plans. Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000); National Natural Science Foundation of China (81101132).

  3. Density scaling of phantom materials for a 3D dose verification system.

    PubMed

    Tani, Kensuke; Fujita, Yukio; Wakita, Akihisa; Miyasaka, Ryohei; Uehara, Ryuzo; Kodama, Takumi; Suzuki, Yuya; Aikawa, Ako; Mizuno, Norifumi; Kawamori, Jiro; Saitoh, Hidetoshi

    2018-05-21

    In this study, the optimum density scaling factors of phantom materials for a commercially available three-dimensional (3D) dose verification system (Delta4) were investigated in order to improve the accuracy of the calculated dose distributions in the phantom materials. At field sizes of 10 × 10 and 5 × 5 cm2 with the same geometry, tissue-phantom ratios (TPRs) in water, polymethyl methacrylate (PMMA), and Plastic Water Diagnostic Therapy (PWDT) were measured, and TPRs for various density scaling factors of water were calculated by Monte Carlo simulation, Adaptive Convolve (AdC, Pinnacle3), Collapsed Cone Convolution (CCC, RayStation), and AcurosXB (AXB, Eclipse). Effective linear attenuation coefficients (μeff) were obtained from the TPRs (see the sketch below). The ratios of μeff in phantom and water ((μeff)pl,water) were compared between the measurements and calculations. For each phantom material, the density scaling factor proposed in this study (DSF) was set to be the value providing a match between the calculated and measured (μeff)pl,water. The optimum density scaling factor was verified through the comparison of the dose distributions measured by Delta4 and calculated with three different density scaling factors: the nominal physical density (PD), nominal relative electron density (ED), and DSF. Three plans were used for the verifications: a static field of 10 × 10 cm2 and two intensity modulated radiation therapy (IMRT) treatment plans. DSFs were determined to be 1.13 for PMMA and 0.98 for PWDT. The DSF for PMMA showed good agreement for AdC and CCC with the 6 MV x-ray, and for AdC with the 10 MV x-ray. The DSF for PWDT showed good agreement regardless of the dose calculation algorithm and x-ray energy. The DSF can be considered one of the references for the density scaling factor of Delta4 phantom materials and may help improve the accuracy of IMRT dose verification using Delta4. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
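    The sketch below illustrates one plausible way to extract an effective linear attenuation coefficient from TPR measurements and relate it to a density scaling factor, in the spirit of the procedure above; the TPR values are made up for illustration, and only the 1.13 scaling for PMMA is taken from the reported results.

```python
# Rough sketch: beyond the buildup region TPR falls roughly exponentially with
# depth, so mu_eff is the (negative) slope of ln(TPR) versus depth. The TPR
# values below are illustrative, not the study's data.
import numpy as np

depth_cm = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
tpr      = np.array([0.92, 0.79, 0.67, 0.57, 0.49])   # water, 10 x 10 cm2 (illustrative)

mu_eff_water = -np.polyfit(depth_cm, np.log(tpr), 1)[0]   # per cm

# A density scaling factor can then be chosen so that the calculated
# (mu_eff)_phantom / (mu_eff)_water ratio matches the measured ratio;
# 1.13 is the DSF reported for PMMA in the abstract.
mu_eff_pmma = 1.13 * mu_eff_water
print(f"mu_eff,water = {mu_eff_water:.4f} /cm, ratio = {mu_eff_pmma / mu_eff_water:.2f}")
```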

  4. The Iowa new practice model: Advancing technician roles to increase pharmacists' time to provide patient care services.

    PubMed

    Andreski, Michael; Myers, Megan; Gainer, Kate; Pudlo, Anthony

    To determine the effects of an 18-month pilot project using tech-check-tech in 7 community pharmacies on 1) the rate of dispensing errors not identified during refill prescription final product verification; 2) pharmacist workday task composition; and 3) the amount of patient care services provided and the reimbursement status of those services. Pretest-posttest quasi-experimental study in which baseline and study periods were compared. Pharmacists and pharmacy technicians in 7 community pharmacies in Iowa. The outcome measures were 1) percentage of technician-verified refill prescriptions where dispensing errors were not identified on final product verification; 2) percentage of time spent by pharmacists in dispensing, management, patient care, practice development, and other activities; 3) the number of pharmacist patient care services provided per pharmacist hour worked; and 4) percentage of time that technician product verification was used. There was no significant difference in overall errors (0.2729% vs. 0.5124%, P = 0.513), patient safety errors (0.0525% vs. 0.0651%, P = 0.837), or administrative errors (0.2204% vs. 0.4784%, P = 0.411). Pharmacists' time in dispensing significantly decreased (67.3% vs. 49.06%, P = 0.005), and time in direct patient care increased significantly (19.96% vs. 34.72%, P = 0.003). Time in other activities did not significantly change. Reimbursable services per pharmacist hour did not change significantly (0.11 vs. 0.30, P = 0.129). Non-reimbursable services increased significantly (2.77 vs. 4.80, P = 0.042). Total services significantly increased (2.88 vs. 5.16, P = 0.044). Pharmacy technician product verification of refill prescriptions preserved dispensing safety while significantly increasing the time spent in delivery of pharmacist-provided patient care services. The total number of pharmacist services provided per hour also increased significantly, driven primarily by a significant increase in the number of non-reimbursed services. This was most likely due to the increased time available to provide patient care. Reimbursed services per hour did not increase significantly, most likely due to a lack of payers. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  5. 42 CFR 457.380 - Eligibility verification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Eligibility verification. 457.380 Section 457.380... Requirements: Eligibility, Screening, Applications, and Enrollment § 457.380 Eligibility verification. (a) The... State may establish reasonable eligibility verification mechanisms to promote enrollment of eligible...

  6. PERFORMANCE VERIFICATION OF STORMWATER TREATMENT DEVICES UNDER EPA'S ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM

    EPA Science Inventory

    The Environmental Technology Verification (ETV) Program was created to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The program's goal is to further environmental protection by a...

  7. Quantitative evaluation of 3D dosimetry for stereotactic volumetric‐modulated arc delivery using COMPASS

    PubMed Central

    Manigandan, Durai; Karrthick, Karukkupalayam Palaniappan; Sambasivaselli, Raju; Senniandavar, Vellaingiri; Ramu, Mahendran; Rajesh, Thiyagarajan; Lutz, Muller; Muthukumaran, Manavalan; Karthikeyan, Nithyanantham; Tejinder, Kataria

    2014-01-01

    The purpose of this study was to quantitatively evaluate the patient-specific 3D dosimetry tool COMPASS with the 2D array MatriXX detector for stereotactic volumetric-modulated arc delivery. Twenty-five patients' CT images and RT structures from different sites (brain, head & neck, thorax, abdomen, and spine) were taken from the CyberKnife Multiplan planning system for this study. All of these patients underwent radical stereotactic treatment on the CyberKnife. For each patient, linac-based volumetric-modulated arc therapy (VMAT) stereotactic plans were generated in Monaco TPS v3.1 using the Elekta Beam Modulator MLC. Dose prescriptions were in the range of 5-20 Gy per fraction. Target prescriptions and critical organ constraints were matched as closely as possible to those of the delivered treatment plans. The quality of each plan was analyzed using the conformity index (CI), conformity number (CN), gradient index (GI), target coverage (TC), and dose to 95% of the volume (D95). The delivery accuracy of the Monaco Monte Carlo (MC)-calculated treatment plans was quantitatively evaluated against the COMPASS-calculated (CCA) dose and the COMPASS indirectly measured (CME) dose based on dose-volume histogram metrics. In order to ascertain the potential of COMPASS 3D dosimetry for stereotactic plan delivery, 2D fluence verification was performed with MatriXX using the MultiCube phantom. Routine quality assurance of absolute point dose verification was performed to check the overall delivery accuracy. Quantitative analyses of dose delivery verification were compared against pass/fail criteria of 3 mm distance to agreement and 3% dose difference (a simplified gamma-index sketch is given after this record). The gamma passing rate was compared with that of the 2D fluence verification from MatriXX with MultiCube. Both the COMPASS dose reconstructed from measured fluence and the COMPASS computed dose showed very good agreement with the TPS-calculated dose. Each plan was evaluated based on dose-volume parameters: for target volumes, the dose at 95% of the volume (D95) and the average dose; for critical organs, the dose at 20% of the volume (D20), the dose at 50% of the volume (D50), and the maximum point dose. Comparison was carried out using gamma analysis with passing criteria of 3 mm and 3%. A mean deviation of 1.9% ± 1% was observed for the dose at 95% of the volume (D95) of target volumes, whereas much smaller differences were noticed for critical organs. However, significant dose differences were noticed in two cases due to small tumor size. This study revealed that COMPASS 3D dosimetry is efficient and easy to use for patient-specific QA of VMAT stereotactic delivery. 3D dosimetric QA with COMPASS provides additional degrees of freedom to check high-dose modulated stereotactic delivery with very high precision on patient CT images. PACS numbers: 87.55.Qr, 87.56.Fc PMID:25679152
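    A highly simplified, one-dimensional illustration of the gamma analysis (3%/3 mm, global normalization) used for the comparisons above; clinical tools such as COMPASS operate on 2D/3D dose grids with interpolation, so this sketch, with made-up dose profiles, is conceptual only.

```python
# Conceptual 1D gamma-index sketch (3%/3 mm, global dose normalization).
# Dose profiles below are illustrative, not measured or planned data.
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
    """Gamma value at each evaluated point, searched over all reference points."""
    d_norm = dd * ref_dose.max()
    gammas = []
    for xe, de in zip(eval_pos, eval_dose):
        candidates = np.sqrt(((xe - ref_pos) / dta) ** 2 + ((de - ref_dose) / d_norm) ** 2)
        gammas.append(candidates.min())
    return np.array(gammas)

x = np.arange(0.0, 50.0, 1.0)                                   # positions in mm
reference = 100.0 * np.exp(-((x - 25.0) / 10.0) ** 2)           # reference profile
evaluated = 1.02 * 100.0 * np.exp(-((x - 25.5) / 10.0) ** 2)    # ~2% dose, 0.5 mm shift

g = gamma_1d(x, reference, x, evaluated)
print(f"gamma passing rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")
```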

  8. Consistency, Verification, and Validation of Turbulence Models for Reynolds-Averaged Navier-Stokes Applications

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2009-01-01

    In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.

  9. Manipulation strategies for massive space payloads

    NASA Technical Reports Server (NTRS)

    Book, Wayne J.

    1989-01-01

    Control for the bracing strategy is being examined. It was concluded earlier that trajectory planning must be improved to best achieve the bracing motion. Very interesting results were achieved which enable the inverse dynamics of flexible arms to be calculated for linearized motion in a more efficient manner than previously published. The desired motion of the end point, beginning at t = 0 and ending at t = t_f, is used to calculate the required torque at the joint. The solution is separated into a causal function that is zero for t < 0 and an acausal function that is zero for t > t_f. A number of alternative end-point trajectories were explored in terms of the peak torque required, the amount of anticipatory action, and other issues. The single-link case is the immediate subject, and an experimental verification of that case is being performed. Modeling with experimental verification of closed-chain dynamics continues. The modeling effort has pointed out inaccuracies that result from the choice of numerical techniques used to incorporate the closed-chain constraints when modeling our experimental prototype RALF (Robotic Arm Large and Flexible). Results were compared to TREETOPS, a multibody code. The experimental verification work is suggesting new ways to make comparisons with systems having structural linearity and joint and geometric nonlinearity. The generation of inertial forces was studied with a small arm that will damp the large arm's vibration.

  10. A quantitative comparison of precipitation forecasts between the storm-scale numerical weather prediction model and auto-nowcast system in Jiangsu, China

    NASA Astrophysics Data System (ADS)

    Wang, Gaili; Yang, Ji; Wang, Dan; Liu, Liping

    2016-11-01

    Extrapolation techniques and storm-scale Numerical Weather Prediction (NWP) models are two primary approaches for short-term precipitation forecasts. The primary objective of this study is to verify precipitation forecasts and compare the performances of two nowcasting schemes: the Beijing Auto-Nowcast system (BJ-ANC), based on extrapolation techniques, and a storm-scale NWP model called the Advanced Regional Prediction System (ARPS). The verification and comparison take into account six heavy precipitation events that occurred in the summers of 2014 and 2015 in Jiangsu, China. The forecast performance of the two schemes was evaluated for the next 6 h at 1-h intervals using gridpoint-based measures (critical success index, bias, index of agreement, and root mean square error; see the sketch below) and an object-based verification method called the Structure-Amplitude-Location (SAL) score. Regarding the gridpoint-based measures, BJ-ANC outperforms ARPS at first, but its forecast accuracy decreases rapidly with lead time and falls below that of ARPS after 4-5 h of forecast time. Regarding the object-based verification method, most forecasts produced by BJ-ANC lie near the center of the SAL diagram at the 1-h lead time, indicating high-quality forecasts. As the lead time increases, BJ-ANC overestimates precipitation amounts and produces overly widespread precipitation, especially at the 6-h lead time. The ARPS model overestimates precipitation at all lead times, particularly at first.
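    The gridpoint-based scores named above can be computed from a standard 2 x 2 contingency table of forecast versus observed precipitation exceedance; a minimal sketch follows, with illustrative counts rather than values from the study.

```python
# Standard categorical verification scores from a 2 x 2 contingency table
# (hits, false alarms, misses); the counts below are illustrative only.
def critical_success_index(hits, false_alarms, misses):
    return hits / (hits + false_alarms + misses)

def frequency_bias(hits, false_alarms, misses):
    """Bias > 1 means the event is forecast more often than it is observed."""
    return (hits + false_alarms) / (hits + misses)

hits, false_alarms, misses = 420, 180, 160
print(f"CSI  = {critical_success_index(hits, false_alarms, misses):.2f}")
print(f"bias = {frequency_bias(hits, false_alarms, misses):.2f}")
```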

  11. A review of the UK methodology used for monitoring cigarette smoke yields, aspects of analytical data variability and their impact on current and future regulatory compliance.

    PubMed

    Purkis, Stephen W; Drake, Linda; Meger, Michael; Mariner, Derek C

    2010-04-01

    The European Union (EU) regulates tobacco products through Directive 2001/37/EC, which requires testing and verification of results on the basis of standards developed by the International Organization for Standardization (ISO). In 2007, the European Commission provided guidance to EU Member States by issuing criteria for competent laboratories, which include accreditation to ISO 17025:2005. Another criterion requires regular laboratory participation in collaborative studies that predict the measurement tolerance that must be observed to conclude that test results on any particular product are different. However, differences will always occur when comparing overall data across products between different laboratories. A forum for technical discussion between laboratories testing products as they are manufactured and a Government-appointed verification laboratory gives transparency, ensures consistency and reduces apparent compliance issues to the benefit of all parties. More than 30 years ago, such a forum was set up in the UK; it continued until 2007 and is described in this document. Anticipating further testing requirements in future product regulation as proposed by the Framework Convention on Tobacco Control, cooperation between accredited laboratories, whether for testing or verification, should be established to share know-how, to ensure a standardised level of quality and to offer competent technical dialogue in the best interest of regulators and manufacturers alike. Copyright 2009 Elsevier Inc. All rights reserved.

  12. Environmental Technology Verification Report -- Baghouse filtration products, GE Energy QG061 filtration media ( tested May 2007)

    EPA Science Inventory

    EPA has created the Environmental Technology Verification Program to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The Air Pollution Control Technology Verification Center, a cente...

  13. 40 CFR 1066.240 - Torque transducer verification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Dynamometer Specifications § 1066.240 Torque transducer verification. Verify torque-measurement systems by performing the verifications described in §§ 1066.270 and... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Torque transducer verification. 1066...

  14. Data capture by digital pen in clinical trials: a qualitative and quantitative study.

    PubMed

    Estellat, Candice; Tubach, Florence; Costa, Yolande; Hoffmann, Isabelle; Mantz, Jean; Ravaud, Philippe

    2008-05-01

    To investigate the use of the digital pen (DP) system to collect data in a clinical trial. To assess the accuracy of the system in this setting. Qualitative study based on semistructured interviews and a focus group. Quantitative study comparing the DP system and a double manual data-entry system in accuracy of acquiring data by variable type (tick boxes, dates, numbers, letters). An ongoing randomised multicentric clinical trial in tertiary care in France. 27 investigators involved in the trial (anaesthetists) who did or did not include patients, 4 study monitors and the study coordinator. Six key findings emerged: 1) the DP system was easy to use; its utilisation was intuitive, even for investigators inexperienced in informatics; 2) despite its portability, the DP was not always used in front of patients; 3) the DP system did not affect patient recruitment; 4) most of the technical problems of the system occurred during setup (compatibility, password access, antivirus software); 5) the main advantage was quickness of data availability for the study coordination staff and the main hindrance was the extra time required for online verification; and 6) all investigators were ready to use the system again. The investigators had to check 16% of data obtained by the DP system during the verification step. There is no relevant difference between the number of errors for the DP and the double manual data-entry systems: 8/5022 versus 6/5022 data entries. 5 out of 8 DP-system failures were due to the intelligent character recognition system. The DP system has a good acceptability among all investigators in a clinical setting, whether they are experienced with computers or not, and a good accuracy, as compared with double manual data entry.

  15. Prototype automated post-MECO ascent I-load Verification Data Table

    NASA Technical Reports Server (NTRS)

    Lardas, George D.

    1990-01-01

    A prototype automated processor for quality assurance of Space Shuttle post-Main Engine Cut Off (MECO) ascent initialization parameters (I-loads) is described. The processor incorporates CLIPS rules adapted from the quality assurance criteria for the post-MECO ascent I-loads. Specifically, the criteria are implemented for nominal and abort targets, as given in the 'I-load Verification Data Table, Part 3, Post-MECO Ascent, Version 2.1, December 1989.' This processor, ivdt, compares a given I-load set with the stated mission design and quality assurance criteria. It determines which I-loads violate the stated criteria, and presents a summary of I-loads that pass or fail the tests.

  16. Analytical torque calculation and experimental verification of synchronous permanent magnet couplings with Halbach arrays

    NASA Astrophysics Data System (ADS)

    Seo, Sung-Won; Kim, Young-Hyun; Lee, Jung-Ho; Choi, Jang-Young

    2018-05-01

    This paper presents analytical torque calculation and experimental verification of synchronous permanent magnet couplings (SPMCs) with Halbach arrays. A Halbach array is composed of various numbers of segments per pole; we calculate and compare the magnetic torques for 2, 3, and 4 segments. Firstly, based on the magnetic vector potential, and using a 2D polar coordinate system, we obtain analytical solutions for the magnetic field. Next, through a series of processes, we perform magnetic torque calculations using the derived solutions and a Maxwell stress tensor. Finally, the analytical results are verified by comparison with the results of 2D and 3D finite element analysis and the results of an experiment.
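    For context, torque calculations of this kind typically integrate the tangential Maxwell stress over a circle of radius r in the air gap; a commonly used general form is shown below, where L is the axial stack length. This is a textbook expression given for orientation, not necessarily the exact formula derived in the paper.

```latex
T = \frac{L\, r^{2}}{\mu_{0}} \int_{0}^{2\pi} B_{r}(r,\theta)\, B_{\theta}(r,\theta)\, \mathrm{d}\theta
```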

  17. Study of Measurement Strategies of Geometric Deviation of the Position of the Threaded Holes

    NASA Astrophysics Data System (ADS)

    Drbul, Mário; Martikan, Pavol; Sajgalik, Michal; Czan, Andrej; Broncek, Jozef; Babik, Ondrej

    2017-12-01

    Product verification and quality control are integral parts of the current production process. In terms of functional requirements and product interoperability, it is necessary to analyze both the dimensional and the geometric specifications of products. Threaded holes are among the verified elements; they are a substantial part of detachable screw connections and have a broad presence in engineering products. This paper deals with the analysis of measurement strategies for verifying the geometric deviation of the position of threaded holes using the indirect method of measuring threaded pins, and with how applying different measurement strategies can affect the result of product verification.

  18. Experimental verification of the Acuros XB and AAA dose calculation adjacent to heterogeneous media for IMRT and RapidArc of nasopharyngeal carcinoma.

    PubMed

    Kan, Monica W K; Leung, Lucullus H T; So, Ronald W K; Yu, Peter K N

    2013-03-01

    To compare the doses calculated by the Acuros XB (AXB) algorithm and the analytical anisotropic algorithm (AAA) with experimentally measured data adjacent to and within a heterogeneous medium using intensity modulated radiation therapy (IMRT) and RapidArc(®) (RA) volumetric arc therapy plans for nasopharyngeal carcinoma (NPC). Two-dimensional dose distributions immediately adjacent to both air and bone inserts of a rectangular tissue-equivalent phantom irradiated using IMRT and RA plans for NPC cases were measured with GafChromic(®) EBT3 films. Doses near and within the nasopharyngeal (NP) region of an anthropomorphic phantom containing heterogeneous media were also measured with thermoluminescent dosimeters (TLD) and EBT3 films. The measured data were then compared with the data calculated by AAA and AXB. For AXB, dose calculations were performed using both dose-to-medium (AXB_Dm) and dose-to-water (AXB_Dw) options. Furthermore, target dose differences between AAA and AXB were analyzed for the corresponding real patients. The comparison of real patient plans was performed by stratifying the targets into components of different densities, including tissue, bone, and air. For the verification of the planar dose distribution adjacent to air and bone using the rectangular phantom, the percentages of pixels that passed the gamma analysis with the ±3%/3 mm criteria were 98.7%, 99.5%, and 97.7% on the axial plane for AAA, AXB_Dm, and AXB_Dw, respectively, averaged over all IMRT and RA plans, while they were 97.6%, 98.2%, and 97.7%, respectively, on the coronal plane. For the verification of the planar dose distribution within the NP region of the anthropomorphic phantom, the percentages of pixels that passed the gamma analysis with the ±3%/3 mm criteria were 95.1%, 91.3%, and 99.0% for AAA, AXB_Dm, and AXB_Dw, respectively, averaged over all IMRT and RA plans. Within the NP region where air and bone were present, the film measurements represented the dose close to unit-density water in a heterogeneous medium and produced the best agreement with AXB_Dw. For the verification of point doses within the target using TLD in the anthropomorphic phantom, the absolute percentage deviations between the calculated and measured data, when averaged over all IMRT and RA plans, were 1.8%, 1.7%, and 1.8% for AAA, AXB_Dm, and AXB_Dw, respectively. From all the verification results, no significant difference was found between the IMRT and RA plans. The target dose analysis of the real patient plans showed that the discrepancies in mean doses to the PTV component in tissue among the three dose calculation options were within 2%, but up to about 4% in the bone content, with AXB_Dm giving the lowest values and AXB_Dw giving the highest values. In general, the verification measurements demonstrated that both algorithms produced acceptable accuracy when compared to the measured data. GafChromic(®) film results indicated that AXB produced slightly better accuracy than AAA for dose calculation adjacent to and within the heterogeneous media.

  19. High-resolution face verification using pore-scale facial features.

    PubMed

    Li, Dong; Zhou, Huiling; Lam, Kin-Man

    2015-08-01

    Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performance also suffers severe degradation under variations in expression or pose, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, robust to alignment errors, that uses HR information based on pore-scale facial features. A new keypoint descriptor, the pore-scale Principal Component Analysis Scale-Invariant Feature Transform (PPCASIFT), adapted from PCA-SIFT, is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods, and can achieve excellent accuracy even when the faces are under large variations in expression and pose.

  20. DFM flow by using combination between design based metrology system and model based verification at sub-50nm memory device

    NASA Astrophysics Data System (ADS)

    Kim, Cheol-kyun; Kim, Jungchan; Choi, Jaeseung; Yang, Hyunjo; Yim, Donggyu; Kim, Jinwoong

    2007-03-01

    As the minimum transistor length gets smaller, the variation and uniformity of transistor length seriously affect device performance, so the importance of optical proximity correction (OPC) and resolution enhancement technology (RET) cannot be overemphasized. However, the OPC process is regarded by some as a necessary evil for device performance. In fact, every group involved, including process and design, is interested in the whole-chip CD variation trend and CD uniformity, which represent the real wafer. Recently developed design-based metrology systems are capable of detecting differences between the design database and wafer SEM images, and can therefore extract whole-chip CD variation information. Based on such results, OPC abnormalities were identified and design feedback items were also disclosed. Other approaches, pursued by EDA companies, include model-based OPC verification, in which the full-chip area is verified using a well-calibrated model. The object of model-based verification is the prediction of potential weak points on the wafer and fast feedback to OPC and design before reticle fabrication. In order to achieve a robust design and sufficient device margin, an appropriate combination of a design-based metrology system and model-based verification tools is very important. Therefore, we evaluated a design-based metrology system and a matched model-based verification system to find the optimum combination of the two. In our study, a large amount of wafer data was classified and analyzed by statistical methods and sorted into OPC feedback and design feedback items. Additionally, a novel DFM flow is proposed that uses the combination of design-based metrology and model-based verification tools.

  1. Cognitive Bias in the Verification and Validation of Space Flight Systems

    NASA Technical Reports Server (NTRS)

    Larson, Steve

    2012-01-01

    Cognitive bias is generally recognized as playing a significant role in virtually all domains of human decision making. Insight into this role is informally built into many of the system engineering practices employed in the aerospace industry. The review process, for example, typically has features that help to counteract the effect of bias. This paper presents a discussion of how commonly recognized biases may affect the verification and validation process. Verifying and validating a system is arguably more challenging than development, both technically and cognitively. Whereas there may be a relatively limited number of options available for the design of a particular aspect of a system, there is a virtually unlimited number of potential verification scenarios that may be explored. The probability of any particular scenario occurring in operations is typically very difficult to estimate, which increases reliance on judgment that may be affected by bias. Implementing a verification activity often presents technical challenges that, if they can be overcome at all, often result in a departure from actual flight conditions (e.g., 1-g testing, simulation, time compression, artificial fault injection) that may raise additional questions about the meaningfulness of the results, and create opportunities for the introduction of additional biases. In addition to mitigating the biases it can introduce directly, the verification and validation process must also overcome the cumulative effect of biases introduced during all previous stages of development. A variety of cognitive biases will be described, with research results for illustration. A handful of case studies will be presented that show how cognitive bias may have affected the verification and validation process on recent JPL flight projects, identify areas of strength and weakness, and identify potential changes or additions to commonly used techniques that could provide a more robust verification and validation of future systems.

  2. Updated one-dimensional hydraulic model of the Kootenai River, Idaho-A supplement to Scientific Investigations Report 2005-5110

    USGS Publications Warehouse

    Czuba, Christiana R.; Barton, Gary J.

    2011-01-01

    The Kootenai Tribe of Idaho, in cooperation with local, State, Federal, and Canadian agency co-managers and scientists, is assessing the feasibility of a Kootenai River habitat restoration project in Boundary County, Idaho. The restoration project is focused on recovery of the endangered Kootenai River white sturgeon (Acipenser transmontanus) population, and simultaneously targets habitat-based recovery of other native river biota. River restoration is a complex undertaking that requires a thorough understanding of the river and floodplain landscape prior to restoration efforts. To assist in evaluating the feasibility of this endeavor, the U.S. Geological Survey developed an updated one-dimensional hydraulic model of the Kootenai River in Idaho between river miles (RMs) 105.6 and 171.9 to characterize the current hydraulic conditions. A previously calibrated model of the study area, based on channel geometry data collected during 2002 and 2003, was the basis for this updated model. New high-resolution bathymetric surveys conducted in the study reach between RMs 138 and 161.4 provided additional detail of channel morphology. A light detection and ranging (LIDAR) survey was flown in the Kootenai River valley in 2005 between RMs 105.6 and 159.5 to characterize the floodplain topography. Six temporary gaging stations installed in 2006-08 between RMs 154.1 and 161.2, combined with five permanent gaging stations in the study reach, provided discharge and water-surface elevations for model calibration and verification. Measured discharges ranging from about 4,800 to 63,000 cubic feet per second (ft3/s) were simulated for calibration events, and calibrated water-surface elevations ranged from about 1,745 to 1,820 feet (ft) throughout the extent of the model. Calibration was considered acceptable when the simulated and measured water-surface elevations at gaging stations differed by less than (+/-)0.15 ft. Model verification consisted of simulating 10 additional events with measured discharges ranging from about 4,900 to 52,000 ft3/s, and comparing simulated and measured water-surface elevations at gaging stations. Average water-surface-elevation error in the verification simulations was 0.05 ft, with the error ranging from -1.17 to 0.94 ft over the range of events and gaging stations. Additional verification included a graphical comparison of measured average velocities that range from 1.0 to 6.2 feet per second to simulated velocities at four sites within the study reach for measured discharges ranging from about 7,400 to 46,600 ft3/s. The availability of high-resolution bathymetric and LIDAR data, along with the additional gaging stations in the study reach, allowed for more detail to be added to the model and a more thorough calibration, sensitivity, and verification analysis to be conducted. Model resolution and performance is most improved between RMs 140 and 160, which includes the 18.3-mile reach of the Kootenai River white sturgeon critical habitat.

  3. 48 CFR 4.1302 - Acquisition of approved products and services for personal identity verification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... products and services for personal identity verification. 4.1302 Section 4.1302 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Personal Identity Verification 4.1302 Acquisition of approved products and services for personal identity verification. (a) In...

  4. 48 CFR 4.1302 - Acquisition of approved products and services for personal identity verification.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... products and services for personal identity verification. 4.1302 Section 4.1302 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Personal Identity Verification 4.1302 Acquisition of approved products and services for personal identity verification. (a) In...

  5. 48 CFR 4.1302 - Acquisition of approved products and services for personal identity verification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... products and services for personal identity verification. 4.1302 Section 4.1302 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Personal Identity Verification 4.1302 Acquisition of approved products and services for personal identity verification. (a) In...

  6. 48 CFR 4.1302 - Acquisition of approved products and services for personal identity verification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... products and services for personal identity verification. 4.1302 Section 4.1302 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Personal Identity Verification 4.1302 Acquisition of approved products and services for personal identity verification. (a) In...

  7. 48 CFR 4.1302 - Acquisition of approved products and services for personal identity verification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... products and services for personal identity verification. 4.1302 Section 4.1302 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Personal Identity Verification 4.1302 Acquisition of approved products and services for personal identity verification. (a) In...

  8. Joint ETV/NOWATECH verification protocol for the Sorbisense GSW40 passive sampler

    EPA Science Inventory

    Environmental technology verification (ETV) is an independent (third party) assessment of the performance of a technology or a product for a specified application, under defined conditions and quality assurance. This verification is a joint verification with the US EPA ETV schem...

  9. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT - PERFORMANCE VERIFICATION OF THE W.L. GORE & ASSOCIATES GORE-SORBER SCREENING SURVEY

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has created the Environmental Technology Verification Program (ETV) to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The goal of the...

  10. A Methodology for Formal Hardware Verification, with Application to Microprocessors.

    DTIC Science & Technology

    1993-08-29

    concurrent programming languages. Proceedings of the NATO Advanced Study Institute on Logics and Models of Concurrent Systems (Colle-sur-Loup, France, 8-19...restricted class of formulas. Bose and Fisher [26] developed a symbolic model checker based on a Cosmos switch-level model. Their modeling approach...verification using SDVS-the method and a case study. 17th Annual Microprogramming Workshop (New Orleans, LA, 30 October-2 November 1984). Published as

  11. Multi-canister overpack project -- verification and validation, MCNP 4A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldmann, L.H.

    This supporting document contains the software verification and validation (V and V) package used for Phase 2 design of the Spent Nuclear Fuel Multi-Canister Overpack. V and V packages for both ANSYS and MCNP are included. Description of Verification Run(s): This software requires that it be compiled specifically for the machine it is to be used on. Therefore, to facilitate ease in the verification process, the software automatically runs 25 sample problems to ensure proper installation and compilation. Once the runs are completed, the software checks for verification by performing a file comparison between the new output files and the old output files (a minimal sketch of such a file-comparison check is given below). Any difference between any of the files will cause a verification error. Due to the manner in which the verification is completed, a verification error does not necessarily indicate a problem; it indicates that a closer look at the output files is needed to determine the cause of the error.
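    A minimal sketch of the file-comparison step described above, assuming one reference output per sample problem; the directory layout, file names, and byte-for-byte comparison are assumptions for illustration, and, as the record notes, a mismatch calls for inspection rather than automatic failure.

```python
# Sketch of an install-time verification: run the bundled sample problems, then
# compare each new output file against the reference output shipped with the
# code. Paths and file names are hypothetical.
import filecmp
from pathlib import Path

def verify_sample_problems(new_dir, ref_dir):
    """Return the names of sample-problem outputs that differ from the references."""
    mismatches = []
    for ref_file in Path(ref_dir).glob("sample*.out"):
        new_file = Path(new_dir) / ref_file.name
        if not new_file.exists() or not filecmp.cmp(new_file, ref_file, shallow=False):
            mismatches.append(ref_file.name)
    return mismatches

failed = verify_sample_problems("runs/new", "runs/reference")
print("verification passed" if not failed else f"check outputs: {failed}")
```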

  12. AN ENVIRONMENTAL TECHNOLOGY VERIFICATION (ETV) PERFORMANCE TESTING OF THE INDUSTRIAL TEST SYSTEM, INC. CYANIDE REAGENTSTRIP™ TEST KIT

    EPA Science Inventory

    Cyanide can be present in various forms in water. The cyanide test kit evaluated in this verification study (Industrial Test System, Inc. Cyanide ReagentStrip™ Test Kit) was designed to detect free cyanide in water. This is done by converting cyanide in water to cyanogen...

  13. Characterizing irrigation water requirements for rice production from the Arkansas Rice Research Verification Program

    USDA-ARS?s Scientific Manuscript database

    This study investigated rice irrigation water use in the University of Arkansas Rice Research Verification Program between the years of 2003 and 2011. Irrigation water use averaged 747 mm (29.4 inches) over the nine years. A significant 40% water savings was reported for rice grown under a zero gr...

  14. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products with ground observations provides another avenue for investigating how precipitation uncertainty would affect the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN), and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH, and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
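
    As a rough illustration of the kind of point-to-grid verification used in such comparisons, the sketch below computes bias, RMSE, and Pearson correlation between a gridded product and co-located gauge observations; all array names and values are hypothetical and are not taken from the study.

        import numpy as np

        def verify_against_gauges(gridded, gauges):
            """Simple verification statistics between co-located gridded-product
            values and gauge observations (1D arrays of equal length)."""
            gridded = np.asarray(gridded, dtype=float)
            gauges = np.asarray(gauges, dtype=float)
            bias = np.mean(gridded - gauges)                  # mean error
            rmse = np.sqrt(np.mean((gridded - gauges) ** 2))  # root-mean-square error
            corr = np.corrcoef(gridded, gauges)[0, 1]         # Pearson correlation
            return bias, rmse, corr

        # Hypothetical monthly precipitation totals (mm) at five gauge sites
        product = [55.0, 80.2, 33.1, 120.4, 60.7]
        gauge = [50.3, 85.0, 30.8, 110.9, 64.2]
        print(verify_against_gauges(product, gauge))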

  15. Numerical Simulations For the F-16XL Aircraft Configuration

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa A.; Abdol-Hamid, Khaled; Cavallo, Peter A.; Parlette, Edward B.

    2014-01-01

    Numerical simulations of flow around the F-16XL are presented as a contribution to the Cranked Arrow Wing Aerodynamic Project International II (CAWAPI-II). The NASA Tetrahedral Unstructured Software System (TetrUSS) is used to perform the numerical simulations. This CFD suite, developed and maintained by NASA Langley Research Center, includes an unstructured grid generation program called VGRID, a postprocessor named POSTGRID, and the flow solver USM3D. The CRISP CFD package is utilized to provide error estimates and grid adaptation for verification of USM3D results. A subsonic, high angle-of-attack case, flight condition (FC) 25, is computed and analyzed. Three turbulence models are used in the calculations: the one-equation Spalart-Allmaras (SA), the two-equation shear stress transport (SST), and the k-epsilon models. Computational results and surface static pressure profiles are presented and compared with flight data. Solution verification is performed using formal grid refinement studies, the solution of Error Transport Equations, and adaptive mesh refinement. The current study shows that the USM3D solver coupled with CRISP CFD can be used in an engineering environment to predict vortex-flow physics on a complex configuration at flight Reynolds numbers.

  16. Verification of S&D Solutions for Network Communications and Devices

    NASA Astrophysics Data System (ADS)

    Rudolph, Carsten; Compagna, Luca; Carbone, Roberto; Muñoz, Antonio; Repp, Jürgen

    This chapter describes the tool-supported verification of S&D Solutions at the level of network communications and devices. First, the general goals and challenges of verification in the context of AmI systems are highlighted, and the role of verification and validation within the SERENITY processes is explained. Then, SERENITY extensions to the SH Verification Tool are explained using small examples. Finally, the applicability of existing verification tools is discussed in the context of the AVISPA toolset. The two different tools show that, for the security analysis of S&D Patterns at the level of networks and devices, relevant complementary approaches exist and can be used.

  17. Projected Impact of Compositional Verification on Current and Future Aviation Safety Risk

    NASA Technical Reports Server (NTRS)

    Reveley, Mary S.; Withrow, Colleen A.; Leone, Karen M.; Jones, Sharon M.

    2014-01-01

    The projected impact of compositional verification research conducted by the National Aeronautics and Space Administration System-Wide Safety and Assurance Technologies project on aviation safety risk was assessed. Software verification and compositional verification were described. Traditional verification techniques have two major problems: testing occurs at the prototype stage, where error discovery can be quite costly, and not all potential interactions can be tested, leaving some errors undetected until the system is used by the end user. Increasingly complex and nondeterministic aviation systems are becoming too large for these tools to check and verify. Compositional verification is a "divide and conquer" solution to addressing increasingly larger and more complex systems. A review of compositional verification research being conducted by academia, industry, and Government agencies is provided. Forty-four aviation safety risks in the Biennial NextGen Safety Issues Survey were identified that could be impacted by compositional verification and grouped into five categories: automation design; system complexity; software, flight control, or equipment failure or malfunction; new technology or operations; and verification and validation. One capability, one research action, five operational improvements, and 13 enablers within the Federal Aviation Administration Joint Planning and Development Office Integrated Work Plan that could be addressed by compositional verification were identified.

  18. A multi-institutional study of independent calculation verification in inhomogeneous media using a simple and effective method of heterogeneity correction integrated with the Clarkson method.

    PubMed

    Jinno, Shunta; Tachibana, Hidenobu; Moriya, Shunsuke; Mizuno, Norifumi; Takahashi, Ryo; Kamima, Tatsuya; Ishibashi, Satoru; Sato, Masanori

    2018-05-21

    In inhomogeneous media, there is often a large systematic difference in the dose between the conventional Clarkson algorithm (C-Clarkson) for independent calculation verification and the superposition-based algorithms of treatment planning systems (TPSs). These treatment site-dependent differences increase the complexity of the radiotherapy planning secondary check. We developed a simple and effective method of heterogeneity correction integrated with the Clarkson algorithm (L-Clarkson) to account for the effects of heterogeneity in the lateral dimension, and performed a multi-institutional study to evaluate the effectiveness of the method. In the method, a 2D image reconstructed from computed tomography (CT) images is divided according to lines extending from the reference point to the edge of the multileaf collimator (MLC) or jaw collimator for each pie sector, and the radiological path length (RPL) of each line is calculated on the 2D image to obtain a tissue maximum ratio and phantom scatter factor, allowing the dose to be calculated. A total of 261 plans (1237 beams) for conventional breast and lung treatments and lung stereotactic body radiotherapy were collected from four institutions. Disagreements in dose between the on-site TPSs and a verification program using the C-Clarkson and L-Clarkson algorithms were compared. Systematic differences with the L-Clarkson method were within 1% for all sites, while the C-Clarkson method resulted in systematic differences of 1-5%. The L-Clarkson method showed smaller variations. This heterogeneity correction integrated with the Clarkson algorithm would provide a simple evaluation within the range of -5% to +5% for a radiotherapy plan secondary check.
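
    The key quantity in the correction described above is a radiological path length (RPL) accumulated along a line from the reference point toward the field edge. The following minimal sketch shows one way such an accumulation could be computed on a 2D relative-electron-density image; the grid spacing, densities, and ray endpoints are hypothetical, and the published L-Clarkson implementation may differ in detail.

        import numpy as np

        def radiological_path_length(density, start, end, spacing_mm, n_samples=200):
            """Approximate the radiological (water-equivalent) path length along a
            straight line on a 2D relative-electron-density image.
            density: 2D array indexed as [row, col]; start/end: (row, col) in pixels."""
            rows = np.linspace(start[0], end[0], n_samples)
            cols = np.linspace(start[1], end[1], n_samples)
            samples = density[rows.round().astype(int), cols.round().astype(int)]
            geometric_length = np.hypot(end[0] - start[0], end[1] - start[1]) * spacing_mm
            step = geometric_length / (n_samples - 1)
            # Sum density-weighted step lengths (trapezoidal weighting at the endpoints)
            return step * (samples.sum() - 0.5 * (samples[0] + samples[-1]))

        # Hypothetical density map: water slab with a low-density (lung-like) insert
        density = np.ones((100, 100))
        density[30:70, :] = 0.3
        rpl = radiological_path_length(density, start=(0, 50), end=(99, 50), spacing_mm=1.0)
        print(f"RPL = {rpl:.1f} mm water-equivalent")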

  19. SU-E-T-439: Fundamental Verification of Respiratory-Gated Spot Scanning Proton Beam Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamano, H; Yamakawa, T; Hayashi, N

    Purpose: Spot-scanning proton beam irradiation with a respiratory gating technique provides very good dose distributions but requires both dosimetric and geometric verification prior to clinical implementation. The purpose of this study is to evaluate the impact of gated irradiation as a fundamental verification. Methods: We evaluated field width, flatness, symmetry, and penumbra in gated and non-gated proton beams. The respiratory motion was set to 3 amplitudes: 10, 20, and 30 mm, and these quantities were compared between the gated and non-gated beams. A 200 MeV proton beam from a PROBEAT-III unit (Hitachi Co. Ltd) was used in this study. Respiratory gating irradiation was performed with a Quasar phantom (MODUS medical devices) in combination with a dedicated respiratory gating system (ANZAI Medical Corporation). For radiochromic film dosimetry, the calibration curve was created with Gafchromic EBT3 film (Ashland) using FilmQA Pro 2014 (Ashland) as the film analysis software. Results: The film was calibrated at the middle of the spread-out Bragg peak of a passive proton beam. The field width, flatness, and penumbra in non-gated proton irradiation with respiratory motion were larger than those of the reference beam without respiratory motion: the maximum errors of the field width, flatness, and penumbra for respiratory motion of 30 mm were 1.75%, 40.3%, and 39.7%, respectively. The errors of flatness and penumbra in the gated beam (motion: 30 mm, gating rate: 25%) were 0.0% and 2.91%, respectively. The symmetry of all proton beams with the gating technique was within 0.6%. Conclusion: The field width, flatness, symmetry, and penumbra were improved with the gating technique in the proton beam. Spot-scanning proton beam delivery with a gating technique is feasible for a moving target.

  20. Mapping {sup 15}O Production Rate for Proton Therapy Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grogg, Kira; Alpert, Nathaniel M.; Zhu, Xuping

    Purpose: This work was a proof-of-principle study for the evaluation of oxygen-15 ({sup 15}O) production as an imaging target through the use of positron emission tomography (PET), to improve verification of proton treatment plans and to study the effects of perfusion. Methods and Materials: Dynamic PET measurements of irradiation-produced isotopes were made for a phantom and rabbit thigh muscles. The rabbit muscle was irradiated and imaged under both live and dead conditions. A differential equation was fitted to phantom and in vivo data, yielding estimates of {sup 15}O production and clearance rates, which were compared to live versus dead rates for the rabbit and to Monte Carlo predictions. Results: PET clearance rates agreed with decay constants of the dominant radionuclide species in 3 different phantom materials. In 2 oxygen-rich materials, the ratio of {sup 15}O production rates agreed with the expected ratio. In the dead rabbit thighs, the dynamic PET concentration histories were accurately described using {sup 15}O decay constant, whereas the live thigh activity decayed faster. Most importantly, the {sup 15}O production rates agreed within 2% (P>.5) between conditions. Conclusions: We developed a new method for quantitative measurement of {sup 15}O production and clearance rates in the period immediately following proton therapy. Measurements in the phantom and rabbits were well described in terms of {sup 15}O production and clearance rates, plus a correction for other isotopes. These proof-of-principle results support the feasibility of detailed verification of proton therapy treatment delivery. In addition, {sup 15}O clearance rates may be useful in monitoring permeability changes due to therapy.
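
    The production and clearance rates discussed above can be thought of as the parameters of a simple one-compartment balance, dC/dt = P - k*C, fitted to the decay-corrected dynamic PET activity. A minimal sketch of such a fit, using entirely hypothetical time-activity data, is shown below; the authors' actual model additionally accounts for other isotopes and for radioactive decay.

        import numpy as np
        from scipy.optimize import curve_fit

        def activity(t, production, clearance):
            """Analytic solution of dC/dt = production - clearance * C with C(0) = 0."""
            return (production / clearance) * (1.0 - np.exp(-clearance * t))

        # Hypothetical decay-corrected time-activity curve (minutes, kBq/mL)
        t = np.array([0.5, 1, 2, 4, 6, 8, 10, 15], dtype=float)
        c = np.array([0.9, 1.7, 2.9, 4.3, 5.0, 5.3, 5.5, 5.6])

        (p_hat, k_hat), _ = curve_fit(activity, t, c, p0=[1.0, 0.3])
        print(f"production rate ~ {p_hat:.2f} kBq/mL/min, clearance rate ~ {k_hat:.2f} 1/min")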

  1. 78 FR 27882 - VA Veteran-Owned Small Business (VOSB) Verification Guidelines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-13

    ... Verification Self-Assessment Tool that walks the veteran through the regulation and how it applies to the...) Verification Guidelines AGENCY: Department of Veterans Affairs. ACTION: Advanced notice of proposed rulemaking... regulations governing the Department of Veterans Affairs (VA) Veteran-Owned Small Business (VOSB) Verification...

  2. 78 FR 32010 - Pipeline Safety: Public Workshop on Integrity Verification Process

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-28

    .... PHMSA-2013-0119] Pipeline Safety: Public Workshop on Integrity Verification Process AGENCY: Pipeline and... announcing a public workshop to be held on the concept of ``Integrity Verification Process.'' The Integrity Verification Process shares similar characteristics with fitness for service processes. At this workshop, the...

  3. Interim Letter Report - Verification Survey of Partial Grid E9, David Witherspoon, Inc. 1630 Site Knoxville, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P.C. Weaver

    2008-06-12

    Conduct verification surveys of available grids at the DWI 1630 in Knoxville, Tennessee. A representative with the Independent Environmental Assessment and Verification (IEAV) team from ORISE conducted a verification survey of a partial area within Grid E9.

  4. 40 CFR 1065.395 - Inertial PM balance verifications.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Inertial PM balance verifications... Inertial PM balance verifications. This section describes how to verify the performance of an inertial PM balance. (a) Independent verification. Have the balance manufacturer (or a representative approved by the...

  5. 40 CFR 1065.395 - Inertial PM balance verifications.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Inertial PM balance verifications... Inertial PM balance verifications. This section describes how to verify the performance of an inertial PM balance. (a) Independent verification. Have the balance manufacturer (or a representative approved by the...

  6. 22 CFR 123.14 - Import certificate/delivery verification procedure.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... REGULATIONS LICENSES FOR THE EXPORT OF DEFENSE ARTICLES § 123.14 Import certificate/delivery verification procedure. (a) The Import Certificate/Delivery Verification Procedure is designed to assure that a commodity... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Import certificate/delivery verification...

  7. 22 CFR 123.14 - Import certificate/delivery verification procedure.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... REGULATIONS LICENSES FOR THE EXPORT OF DEFENSE ARTICLES § 123.14 Import certificate/delivery verification procedure. (a) The Import Certificate/Delivery Verification Procedure is designed to assure that a commodity... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Import certificate/delivery verification...

  8. 45 CFR 95.626 - Independent Verification and Validation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Independent Verification and Validation. 95.626... (FFP) Specific Conditions for Ffp § 95.626 Independent Verification and Validation. (a) An assessment for independent verification and validation (IV&V) analysis of a State's system development effort may...

  9. 45 CFR 95.626 - Independent Verification and Validation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Independent Verification and Validation. 95.626... (FFP) Specific Conditions for Ffp § 95.626 Independent Verification and Validation. (a) An assessment for independent verification and validation (IV&V) analysis of a State's system development effort may...

  10. 45 CFR 95.626 - Independent Verification and Validation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Independent Verification and Validation. 95.626... (FFP) Specific Conditions for Ffp § 95.626 Independent Verification and Validation. (a) An assessment for independent verification and validation (IV&V) analysis of a State's system development effort may...

  11. 45 CFR 95.626 - Independent Verification and Validation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Independent Verification and Validation. 95.626... (FFP) Specific Conditions for Ffp § 95.626 Independent Verification and Validation. (a) An assessment for independent verification and validation (IV&V) analysis of a State's system development effort may...

  12. 24 CFR 5.512 - Verification of eligible immigration status.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... immigration status. 5.512 Section 5.512 Housing and Urban Development Office of the Secretary, Department of... Noncitizens § 5.512 Verification of eligible immigration status. (a) General. Except as described in paragraph...) Primary verification—(1) Automated verification system. Primary verification of the immigration status of...

  13. International Energy Agency Ocean Energy Systems Task 10 Wave Energy Converter Modeling Verification and Validation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Yu, Yi-Hsiang; Nielsen, Kim

    This is the first joint reference paper for the Ocean Energy Systems (OES) Task 10 Wave Energy Converter modeling verification and validation group. The group is established under the OES Energy Technology Network program under the International Energy Agency. OES was founded in 2001 and Task 10 was proposed by Bob Thresher (National Renewable Energy Laboratory) in 2015 and approved by the OES Executive Committee EXCO in 2016. The kickoff workshop took place in September 2016, wherein the initial baseline task was defined. Experience from similar offshore wind validation/verification projects (OC3-OC5 conducted within the International Energy Agency Wind Task 30) [1], [2] showed that a simple test case would help the initial cooperation to present results in a comparable way. A heaving sphere was chosen as the first test case. The team of project participants simulated different numerical experiments, such as heave decay tests and regular and irregular wave cases. The simulation results are presented and discussed in this paper.

  14. Earth Science Enterprise Scientific Data Purchase Project: Verification and Validation

    NASA Technical Reports Server (NTRS)

    Jenner, Jeff; Policelli, Fritz; Fletcher, Rosea; Holecamp, Kara; Owen, Carolyn; Nicholson, Lamar; Dartez, Deanna

    2000-01-01

    This paper presents viewgraphs on the Earth Science Enterprise Scientific Data Purchase Project's verification and validation process. The topics include: 1) What is Verification and Validation? 2) Why Verification and Validation? 3) Background; 4) ESE Data Purchase Validation Process; 5) Data Validation System and Ingest Queue; 6) Shipment Verification; 7) Tracking and Metrics; 8) Validation of Contract Specifications; 9) Earth Watch Data Validation; 10) Validation of Vertical Accuracy; and 11) Results of Vertical Accuracy Assessment.

  15. Verification of Functional Fault Models and the Use of Resource Efficient Verification Tools

    NASA Technical Reports Server (NTRS)

    Bis, Rachael; Maul, William A.

    2015-01-01

    Functional fault models (FFMs) are a directed graph representation of the failure effect propagation paths within a system's physical architecture and are used to support development and real-time diagnostics of complex systems. Verification of these models is required to confirm that the FFMs are correctly built and accurately represent the underlying physical system. However, a manual, comprehensive verification process applied to the FFMs was found to be error prone, owing to the intensive and customized effort needed to verify each individual component model, and to require a burdensome level of resources. To address this problem, automated verification tools have been developed and utilized to mitigate these key pitfalls. This paper discusses the verification of the FFMs and presents the tools that were developed to make the verification process more efficient and effective.
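
    Because an FFM is a directed graph of failure effect propagation paths, one of the most basic automated checks is that every modeled failure mode can propagate to at least one observable effect. The toy sketch below illustrates such a reachability check; the node names and graph are invented for illustration and do not represent the actual NASA models or tools.

        def reachable(graph, start):
            """Return the set of nodes reachable from `start` in a directed graph
            given as {node: [successor, ...]} (iterative depth-first search)."""
            seen, stack = set(), [start]
            while stack:
                node = stack.pop()
                if node not in seen:
                    seen.add(node)
                    stack.extend(graph.get(node, []))
            return seen

        # Toy failure-propagation graph: failure modes -> intermediate effects -> sensors
        ffm = {
            "valve_stuck": ["low_flow"],
            "pump_degraded": ["low_flow"],
            "low_flow": ["pressure_sensor_low"],
            "seal_leak": [],  # deliberately unconnected to any sensor
        }
        sensors = {"pressure_sensor_low"}
        for failure in ["valve_stuck", "pump_degraded", "seal_leak"]:
            detected = bool(reachable(ffm, failure) & sensors)
            print(f"{failure}: {'detectable' if detected else 'NOT detectable - check model'}")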

  16. Towards the formal verification of the requirements and design of a processor interface unit

    NASA Technical Reports Server (NTRS)

    Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.

    1993-01-01

    The formal verification of the design and partial requirements for a Processor Interface Unit (PIU) using the Higher Order Logic (HOL) theorem-proving system is described. The processor interface unit is a single-chip subsystem within a fault-tolerant embedded system under development within the Boeing Defense and Space Group. It provides the opportunity to investigate the specification and verification of a real-world subsystem within a commercially developed fault-tolerant computer. An overview of the PIU verification effort is given. The actual HOL listings from the verification effort are documented in a companion NASA contractor report, 'Towards the Formal Verification of the Requirements and Design of a Processor Interface Unit - HOL Listings', which includes the general-purpose HOL theories and definitions that support the PIU verification as well as the tactics used in the proofs.

  17. WE-DE-BRA-01: SCIENCE COUNCIL JUNIOR INVESTIGATOR COMPETITION WINNER: Acceleration of a Limited-Angle Intrafraction Verification (LIVE) System Using Adaptive Prior Knowledge Based Image Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y; Yin, F; Ren, L

    Purpose: To develop an adaptive prior knowledge based image estimation method to reduce the scan angle needed in the LIVE system to reconstruct 4D-CBCT for intrafraction verification. Methods: The LIVE system was previously proposed to reconstruct 4D volumetric images on-the-fly during arc treatment for intrafraction target verification and dose calculation. This system uses limited-angle beam's eye view (BEV) MV cine images acquired from the treatment beam, together with orthogonally acquired limited-angle kV projections, to reconstruct 4D-CBCT images for target verification during treatment. In this study, we developed an adaptive constrained free-form deformation reconstruction technique in LIVE to further reduce the scanning angle needed to reconstruct the CBCT images. This technique uses free-form deformation with energy minimization to deform prior images to estimate 4D-CBCT based on projections acquired over a limited angle (orthogonal 6°) during the treatment. Note that the prior images are adaptively updated using the latest CBCT images reconstructed by LIVE during treatment, to utilize the continuity of patient motion. The 4D digital extended-cardiac-torso (XCAT) phantom was used to evaluate the efficacy of this technique with the LIVE system. A lung patient was simulated with different scenarios, including baseline drift, amplitude change, and phase shift. Limited-angle orthogonal kV and BEV MV projections were generated for each scenario. The CBCT images reconstructed from these projections were compared with the ground truth generated in XCAT. The volume percentage difference (VPD) and center-of-mass shift (COMS) were calculated between the reconstructed and ground-truth tumors to evaluate the reconstruction accuracy. Results: Using orthogonal-view 6° kV and BEV MV projections, the VPD/COMS values were 12.7±4.0%/0.7±0.5 mm, 13.0±5.1%/0.8±0.5 mm, and 11.4±5.4%/0.5±0.3 mm for the three scenarios, respectively. Conclusion: The technique enables LIVE to accurately reconstruct 4D-CBCT images using only an orthogonal 6° scan angle, which greatly improves the efficiency and reduces the imaging dose of LIVE for intrafraction verification.

  18. Characterization of the a-Si EPID in the unity MR-linac for dosimetric applications.

    PubMed

    Torres-Xirau, I; Olaciregui-Ruiz, I; Baldvinsson, G; Mijnheer, B J; van der Heide, U A; Mans, A

    2018-01-09

    Electronic portal imaging devices (EPIDs) are frequently used in external beam radiation therapy for dose verification purposes. The aim of this study was to investigate the dose-response characteristics of the EPID in the Unity MR-linac (Elekta AB, Stockholm, Sweden) relevant for dosimetric applications under clinical conditions. EPID images and ionization chamber (IC) measurements were used to study the effects of the magnetic field, the scatter generated in the MR housing reaching the EPID, and inhomogeneous attenuation from the MR housing. Dose linearity and dose rate dependencies were also determined. The magnetic field strength at EPID level did not exceed 10 mT, and dose linearity and dose rate dependencies proved to be comparable to that on a conventional linac. Profiles of fields, delivered with and without the magnetic field, were indistinguishable. The EPID center had an offset of 5.6 cm in the longitudinal direction, compared to the beam central axis, meaning that large fields in this direction will partially fall outside the detector area and not be suitable for verification. Beam attenuation by the MRI scanner and the table is gantry angle dependent, presenting a minimum attenuation of 67% relative to the 90° measurement. Repeatability, observed over two months, was within 0.5% (1 SD). In order to use the EPID for dosimetric applications in the MR-linac, challenges related to the EPID position, scatter from the MR housing, and the inhomogeneous, gantry angle-dependent attenuation of the beam will need to be solved.

  19. Characterization of the a-Si EPID in the unity MR-linac for dosimetric applications

    NASA Astrophysics Data System (ADS)

    Torres-Xirau, I.; Olaciregui-Ruiz, I.; Baldvinsson, G.; Mijnheer, B. J.; van der Heide, U. A.; Mans, A.

    2018-01-01

    Electronic portal imaging devices (EPIDs) are frequently used in external beam radiation therapy for dose verification purposes. The aim of this study was to investigate the dose-response characteristics of the EPID in the Unity MR-linac (Elekta AB, Stockholm, Sweden) relevant for dosimetric applications under clinical conditions. EPID images and ionization chamber (IC) measurements were used to study the effects of the magnetic field, the scatter generated in the MR housing reaching the EPID, and inhomogeneous attenuation from the MR housing. Dose linearity and dose rate dependencies were also determined. The magnetic field strength at EPID level did not exceed 10 mT, and dose linearity and dose rate dependencies proved to be comparable to that on a conventional linac. Profiles of fields, delivered with and without the magnetic field, were indistinguishable. The EPID center had an offset of 5.6 cm in the longitudinal direction, compared to the beam central axis, meaning that large fields in this direction will partially fall outside the detector area and not be suitable for verification. Beam attenuation by the MRI scanner and the table is gantry angle dependent, presenting a minimum attenuation of 67% relative to the 90° measurement. Repeatability, observed over two months, was within 0.5% (1 SD). In order to use the EPID for dosimetric applications in the MR-linac, challenges related to the EPID position, scatter from the MR housing, and the inhomogeneous, gantry angle-dependent attenuation of the beam will need to be solved.

  20. 7 CFR 272.8 - State income and eligibility verification system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false State income and eligibility verification system. 272... PARTICIPATING STATE AGENCIES § 272.8 State income and eligibility verification system. (a) General. (1) State agencies may maintain and use an income and eligibility verification system (IEVS), as specified in this...

  1. 24 CFR 985.3 - Indicators, HUD verification methods and ratings.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Indicators, HUD verification..., HUD verification methods and ratings. This section states the performance indicators that are used to assess PHA Section 8 management. HUD will use the verification method identified for each indicator in...

  2. 78 FR 56268 - Pipeline Safety: Public Workshop on Integrity Verification Process, Comment Extension

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-12

    .... PHMSA-2013-0119] Pipeline Safety: Public Workshop on Integrity Verification Process, Comment Extension... public workshop on ``Integrity Verification Process'' which took place on August 7, 2013. The notice also sought comments on the proposed ``Integrity Verification Process.'' In response to the comments received...

  3. 10 CFR 9.54 - Verification of identity of individuals making requests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Verification of identity of individuals making requests. 9... About Them § 9.54 Verification of identity of individuals making requests. (a) Identification... respecting records about himself, except that no verification of identity shall be required if the records...

  4. 76 FR 50164 - Protocol Gas Verification Program and Minimum Competency Requirements for Air Emission Testing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-12

    ...-AQ06 Protocol Gas Verification Program and Minimum Competency Requirements for Air Emission Testing... correct certain portions of the Protocol Gas Verification Program and Minimum Competency Requirements for... final rule that amends the Agency's Protocol Gas Verification Program (PGVP) and the minimum competency...

  5. 30 CFR 227.601 - What are a State's responsibilities if it performs automated verification?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... performs automated verification? 227.601 Section 227.601 Mineral Resources MINERALS MANAGEMENT SERVICE... Perform Delegated Functions § 227.601 What are a State's responsibilities if it performs automated verification? To perform automated verification of production reports or royalty reports, you must: (a) Verify...

  6. 10 CFR 9.54 - Verification of identity of individuals making requests.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Verification of identity of individuals making requests. 9... About Them § 9.54 Verification of identity of individuals making requests. (a) Identification... respecting records about himself, except that no verification of identity shall be required if the records...

  7. 10 CFR 9.54 - Verification of identity of individuals making requests.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Verification of identity of individuals making requests. 9... About Them § 9.54 Verification of identity of individuals making requests. (a) Identification... respecting records about himself, except that no verification of identity shall be required if the records...

  8. 10 CFR 9.54 - Verification of identity of individuals making requests.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Verification of identity of individuals making requests. 9... About Them § 9.54 Verification of identity of individuals making requests. (a) Identification... respecting records about himself, except that no verification of identity shall be required if the records...

  9. Verification test report on a solar heating and hot water system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Information is provided on the development, qualification and acceptance verification of commercial solar heating and hot water systems and components. The verification includes the performances, the efficiencies and the various methods used, such as similarity, analysis, inspection, test, etc., that are applicable to satisfying the verification requirements.

  10. 46 CFR 61.40-3 - Design verification testing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 2 2011-10-01 2011-10-01 false Design verification testing. 61.40-3 Section 61.40-3... INSPECTIONS Design Verification and Periodic Testing of Vital System Automation § 61.40-3 Design verification testing. (a) Tests must verify that automated vital systems are designed, constructed, and operate in...

  11. An unattended verification station for UF6 cylinders: Field trial findings

    NASA Astrophysics Data System (ADS)

    Smith, L. E.; Miller, K. A.; McDonald, B. S.; Webster, J. B.; Zalavadia, M. A.; Garner, J. R.; Stewart, S. L.; Branney, S. J.; Todd, L. C.; Deshmukh, N. S.; Nordquist, H. A.; Kulisek, J. A.; Swinhoe, M. T.

    2017-12-01

    In recent years, the International Atomic Energy Agency (IAEA) has pursued innovative techniques and an integrated suite of safeguards measures to address the verification challenges posed by the front end of the nuclear fuel cycle. Among the unattended instruments currently being explored by the IAEA is an Unattended Cylinder Verification Station (UCVS), which could provide automated, independent verification of the declared relative enrichment, 235U mass, total uranium mass, and identification for all declared uranium hexafluoride cylinders in a facility (e.g., uranium enrichment plants and fuel fabrication plants). Under the auspices of the United States and European Commission Support Programs to the IAEA, a project was undertaken to assess the technical and practical viability of the UCVS concept. The first phase of the UCVS viability study was centered on a long-term field trial of a prototype UCVS system at a fuel fabrication facility. A key outcome of the study was a quantitative performance evaluation of two nondestructive assay (NDA) methods being considered for inclusion in a UCVS: Hybrid Enrichment Verification Array (HEVA), and Passive Neutron Enrichment Meter (PNEM). This paper provides a description of the UCVS prototype design and an overview of the long-term field trial. Analysis results and interpretation are presented with a focus on the performance of PNEM and HEVA for the assay of over 200 "typical" Type 30B cylinders, and the viability of an "NDA Fingerprint" concept as a high-fidelity means to periodically verify that material diversion has not occurred.

  12. Suite of Benchmark Tests to Conduct Mesh-Convergence Analysis of Nonlinear and Non-constant Coefficient Transport Codes

    NASA Astrophysics Data System (ADS)

    Zamani, K.; Bombardelli, F. A.

    2014-12-01

    Verification of geophysics codes is imperative to avoid serious academic as well as practical consequences. When access to a given source code is not possible, the Method of Manufactured Solutions (MMS) cannot be employed in code verification. In contrast, employing the Method of Exact Solutions (MES) has several practical advantages. In this research, we first provide four new one-dimensional analytical solutions designed for code verification; these solutions are able to uncover particular imperfections in solvers of the advection-diffusion-reaction (ADR) equation, such as nonlinear advection, diffusion or source terms, as well as non-constant coefficients. After that, we provide a solution of Burgers' equation in a novel setup. The proposed solutions satisfy continuity of mass for the ambient flow, which is a crucial factor for coupled hydrodynamics-transport solvers. We then use the derived analytical solutions for code verification. To clarify gray-literature issues in the verification of transport codes, we designed a comprehensive test suite to uncover any imperfection in transport solvers via a hierarchical increase in the level of test complexity. The test suite includes hundreds of unit tests and system tests that check the corresponding portions of the code. The examples start by testing a simple case of unidirectional advection, then bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity, and reactions. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh-convergence study and appropriate remedies are also discussed. For cases in which appropriate benchmarks for a mesh-convergence study are not available, we utilize symmetry. Auxiliary subroutines for automation of the test suite and report generation were also designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the capabilities of ADR solvers. Such information is essential for any rigorous computational modeling of the ADR equation for surface/subsurface pollution transport. We also convey our experience in finding several errors that were not detectable with routine verification techniques.
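
    A central step in the mesh-convergence studies mentioned above is computing the observed order of accuracy from error norms measured against the exact (MES) solution on successively refined grids, p = log(e_coarse/e_fine) / log(r). The sketch below shows that computation for hypothetical error norms; the values are illustrative only and should come out near 2 for a second-order scheme.

        import math

        def observed_order(error_coarse, error_fine, refinement_ratio):
            """Observed order of accuracy from error norms on two grids refined by `refinement_ratio`."""
            return math.log(error_coarse / error_fine) / math.log(refinement_ratio)

        # Hypothetical L2 error norms against an exact solution on grids refined by a factor of 2
        errors = [4.1e-2, 1.05e-2, 2.7e-3]
        for coarse, fine in zip(errors, errors[1:]):
            print(f"observed order ~ {observed_order(coarse, fine, 2.0):.2f}")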

  13. Real-Time Verification of a High-Dose-Rate Iridium 192 Source Position Using a Modified C-Arm Fluoroscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nose, Takayuki, E-mail: nose-takayuki@nms.ac.jp; Chatani, Masashi; Otani, Yuki

    Purpose: High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results. Even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of an HDR Iridium 192 source position to prevent these misdeliveries. This method provided excellent image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Methods and Materials: Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed with quarter-frame rates. Results: Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. Conclusions: With the use of a modified C-arm fluoroscopic verification method, treatment errors that were otherwise overlooked were detected in real time. This method should be given consideration for widespread use.

  14. Real-Time Verification of a High-Dose-Rate Iridium 192 Source Position Using a Modified C-Arm Fluoroscope.

    PubMed

    Nose, Takayuki; Chatani, Masashi; Otani, Yuki; Teshima, Teruki; Kumita, Shinichirou

    2017-03-15

    High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results. Even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of an HDR Iridium 192 source position to prevent these misdeliveries. This method provided excellent image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed with quarter-frame rates. Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. With the use of a modified C-arm fluoroscopic verification method, treatment errors that were otherwise overlooked were detected in real time. This method should be given consideration for widespread use. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Towards the development of a rapid, portable, surface enhanced Raman spectroscopy based cleaning verification system for the drug nelarabine.

    PubMed

    Corrigan, Damion K; Salton, Neale A; Preston, Chris; Piletsky, Sergey

    2010-09-01

    Cleaning verification is a scientific and economic problem for the pharmaceutical industry. A large amount of potential manufacturing time is lost to the process of cleaning verification. This involves the analysis of residues on soiled manufacturing equipment, with high-performance liquid chromatography (HPLC) being the predominantly employed analytical technique. The aim of this study was to develop a portable cleaning verification system for nelarabine using surface enhanced Raman spectroscopy (SERS). SERS was conducted using a portable Raman spectrometer and a commercially available SERS substrate to develop a rapid and portable cleaning verification system for nelarabine. Samples of standard solutions and swab extracts were deposited onto the SERS active surfaces, allowed to dry and then subjected to spectroscopic analysis. Nelarabine was amenable to analysis by SERS and the necessary levels of sensitivity were achievable. It is possible to use this technology for a semi-quantitative limits test. Replicate precision, however, was poor due to the heterogeneous drying pattern of nelarabine on the SERS active surface. Understanding and improving the drying process in order to produce a consistent SERS signal for quantitative analysis is desirable. This work shows the potential application of SERS for cleaning verification analysis. SERS may not replace HPLC as the definitive analytical technique, but it could be used in conjunction with HPLC so that swabbing is only carried out once the portable SERS equipment has demonstrated that the manufacturing equipment is below the threshold contamination level.

  16. SU-G-JeP3-06: Lower KV Image Dose Are Expected From a Limited-Angle Intra-Fractional Verification (LIVE) System for SBRT Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, G; Yin, F; Ren, L

    Purpose: In order to track tumor movement for patient positioning verification during arc treatment delivery, or in between 3D/IMRT beams for stereotactic body radiation therapy (SBRT), the acquisition of limited-angle kV projections simultaneously during arc delivery or in between static treatment beams, as the gantry moves to the next beam angle, has been proposed. The purpose of this study is to estimate the additional imaging dose resulting from multiple tomosynthesis acquisitions in between static treatment beams and to compare it with that of a conventional kV-CBCT acquisition. Methods: The kV imaging system integrated into Varian TrueBeam accelerators was modeled using the EGSnrc Monte Carlo user code BEAMnrc, and the DOSXYZnrc code was used for dose calculations. The simulated realistic kV beams from the Varian TrueBeam OBI 1.5 system were used to calculate the dose to the patient based on CT images. Organ doses were analyzed using DVHs. The imaging dose to the patient resulting from realistic multiple tomosynthesis acquisitions, each with a 25–30 degree kV source rotation between 6 treatment beam gantry angles, was studied. Results: For a typical lung SBRT treatment delivery, the summed kV imaging dose from six realistic tomosynthesis acquisitions, each with a 25–30 degree x-ray source rotation between the six treatment beam gantry angles, was much lower (20–50%) than that from a single CBCT image acquisition. Conclusion: This work indicates that the kV imaging in the proposed Limited-angle Intra-fractional Verification (LIVE) system for SBRT treatments adds a negligible imaging dose. It is worth noting that the MV imaging dose caused by MV projection acquisition in between static beams in LIVE can be minimized by restricting the imaging to the target region and reducing the number of projections acquired. For arc treatments, MV image acquisition in LIVE does not add imaging dose, as the MV images are acquired directly from the treatment beams during the treatment.

  17. SU-D-213-05: Design, Evaluation and First Applications of a Off-Site State-Of-The-Art 3D Dosimetry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malcolm, J; Mein, S; McNiven, A

    2015-06-15

    Purpose: To design, construct, and commission a prototype in-house three-dimensional (3D) dose verification system for stereotactic body radiotherapy (SBRT) verification at an off-site partner institution, and to investigate the potential of this system to achieve sufficient performance (1 mm resolution, 3% noise, within 3% of the true dose reading) for SBRT verification. Methods: The system was designed using a parallel-ray geometry implemented with precision telecentric lenses and a 630 nm LED light source. Using a radiochromic dosimeter, a 3D dosimetric comparison with our gold-standard system and treatment planning software (Eclipse) was done for a four-field box treatment, under gamma passing criteria of 3%/3 mm with a 10% dose threshold. After off-site installation, deviations in the system's dose readout performance were assessed by rescanning the four-field-box irradiated dosimeter and using line profiles to compare on-site and off-site mean and noise levels in four distinct dose regions. As a final step, an end-to-end test of the system was completed at the off-site location, including CT simulation, irradiation of the dosimeter, and a 3D dosimetric comparison of the planned (Pinnacle{sup 3}) to delivered dose for a spinal SBRT treatment (12 Gy per fraction). Results: The noise level in the high and medium dose regions of the four-field box treatment was approximately 5% both pre- and post-installation, reflecting the reduction in positional uncertainty achieved with the new design. At 1 mm dose voxels, the gamma pass rates (3%, 3 mm) for our in-house gold-standard system and the off-site system were comparable at 95.8% and 93.2%, respectively. Conclusion: This work describes the end-to-end process and results of designing, installing, and commissioning a state-of-the-art 3D dosimetry system created for verification of advanced radiation treatments, including spinal radiosurgery.

  18. Evaluation of a single-scan protocol for radiochromic film dosimetry.

    PubMed

    Shimohigashi, Yoshinobu; Araki, Fujio; Maruyama, Masato; Nakaguchi, Yuji; Kuwahara, Satoshi; Nagasue, Nozomu; Kai, Yudai

    2015-03-08

    The purpose of this study was to evaluate a single-scan protocol using Gafchromic EBT3 film (EBT3) by comparing it with the commonly used 24-hr measurement protocol for radiochromic film dosimetry. Radiochromic film is generally scanned 24 hr after film exposure (24-hr protocol). The single-scan protocol enables measurement results within a short time using only the verification film, one calibration film, and unirradiated film. The single-scan protocol was scanned 30 min after film irradiation. The EBT3 calibration curves were obtained with the multichannel film dosimetry method. The dose verifications for each protocol were performed with the step pattern, pyramid pattern, and clinical treatment plans for intensity-modulated radiation therapy (IMRT). The absolute dose distributions for each protocol were compared with those calculated by the treatment planning system (TPS) using gamma evaluation at 3% and 3 mm. The dose distribution for the single-scan protocol was within 2% of the 24-hr protocol dose distribution. For the step pattern, the absolute dose discrepancies from the TPS for the single-scan and 24-hr protocols were 2.0 ± 1.8 cGy and 1.4 ± 1.2 cGy at the dose plateau, respectively. The pass rates were 96.0% for the single-scan protocol and 95.9% for the 24-hr protocol. Similarly, the dose discrepancies for the pyramid pattern were 3.6 ± 3.5 cGy and 2.9 ± 3.3 cGy, respectively, while the pass rates for the pyramid pattern were 95.3% and 96.4%, respectively. The average pass rates for the four IMRT plans were 96.7% ± 1.8% for the single-scan protocol and 97.3% ± 1.4% for the 24-hr protocol. Thus, the single-scan protocol measurement is useful for dose verification of IMRT, based on its accuracy and efficiency.
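
    The 3%/3 mm gamma evaluation used throughout these film comparisons reduces, for each evaluated point, to a minimization over reference points of a combined dose-difference and distance-to-agreement metric. A brute-force 1D sketch of a global gamma index is given below; the profiles and criteria are hypothetical and no low-dose threshold is applied.

        import numpy as np

        def gamma_1d(ref_dose, eval_dose, positions_mm, dose_crit=0.03, dist_crit_mm=3.0):
            """Brute-force global 1D gamma index (returns pass rate and per-point gamma)."""
            ref_dose = np.asarray(ref_dose, float)
            eval_dose = np.asarray(eval_dose, float)
            x = np.asarray(positions_mm, float)
            norm = ref_dose.max()                       # global normalization
            gammas = []
            for xe, de in zip(x, eval_dose):
                dose_term = (de - ref_dose) / (dose_crit * norm)
                dist_term = (xe - x) / dist_crit_mm
                gammas.append(np.sqrt(dose_term ** 2 + dist_term ** 2).min())
            gammas = np.array(gammas)
            return np.mean(gammas <= 1.0), gammas

        # Hypothetical reference (TPS) and evaluated (film) profiles on a 1 mm grid
        x = np.arange(0, 50, 1.0)
        ref = 100.0 * np.exp(-((x - 25) / 12.0) ** 2)
        meas = ref * 1.02 + 0.5                         # 2% scaling plus a small offset
        pass_rate, _ = gamma_1d(ref, meas, x)
        print(f"gamma pass rate (3%/3 mm): {100 * pass_rate:.1f}%")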

  19. Evaluation of a single‐scan protocol for radiochromic film dosimetry

    PubMed Central

    Araki, Fujio; Maruyama, Masato; Nakaguchi, Yuji; Kuwahara, Satoshi; Nagasue, Nozomu; Kai, Yudai

    2015-01-01

    The purpose of this study was to evaluate a single‐scan protocol using Gafchromic EBT3 film (EBT3) by comparing it with the commonly used 24‐hr measurement protocol for radiochromic film dosimetry. Radiochromic film is generally scanned 24 hr after film exposure (24‐hr protocol). The single‐scan protocol enables measurement results within a short time using only the verification film, one calibration film, and unirradiated film. The single‐scan protocol was scanned 30 min after film irradiation. The EBT3 calibration curves were obtained with the multichannel film dosimetry method. The dose verifications for each protocol were performed with the step pattern, pyramid pattern, and clinical treatment plans for intensity‐modulated radiation therapy (IMRT). The absolute dose distributions for each protocol were compared with those calculated by the treatment planning system (TPS) using gamma evaluation at 3% and 3 mm. The dose distribution for the single‐scan protocol was within 2% of the 24‐hr protocol dose distribution. For the step pattern, the absolute dose discrepancies between the TPS for the single‐scan and 24‐hr protocols were 2.0±1.8 cGy and 1.4±1.2 cGy at the dose plateau, respectively. The pass rates were 96.0% for the single‐scan protocol and 95.9% for the 24‐hr protocol. Similarly, the dose discrepancies for the pyramid pattern were 3.6±3.5 cGy and 2.9±3.3 cGy, respectively, while the pass rates for the pyramid pattern were 95.3% and 96.4%, respectively. The average pass rates for the four IMRT plans were 96.7%±1.8% for the single‐scan protocol and 97.3%±1.4% for the 24‐hr protocol. Thus, the single‐scan protocol measurement is useful for dose verification of IMRT, based on its accuracy and efficiency. PACS number: 87.55.Qr PMID:26103194

  20. TPS(PET)-A TPS-based approach for in vivo dose verification with PET in proton therapy.

    PubMed

    Frey, K; Bauer, J; Unholtz, D; Kurz, C; Krämer, M; Bortfeld, T; Parodi, K

    2014-01-06

    Since the interest in ion-irradiation for tumour therapy has significantly increased over the last few decades, intensive investigations are performed to improve the accuracy of this form of patient treatment. One major goal is the development of methods for in vivo dose verification. In proton therapy, a PET (positron emission tomography)-based approach measuring the irradiation-induced tissue activation inside the patient has been already clinically implemented. The acquired PET images can be compared to an expectation, derived under the assumption of a correct treatment application, to validate the particle range and the lateral field position in vivo. In the context of this work, TPSPET is introduced as a new approach to predict proton-irradiation induced three-dimensional positron emitter distributions by means of the same algorithms of the clinical treatment planning system (TPS). In order to perform additional activity calculations, reaction-channel-dependent input positron emitter depth distributions are necessary, which are determined from the application of a modified filtering approach to the TPS reference depth dose profiles in water. This paper presents the implementation of TPSPET on the basis of the research treatment planning software treatment planning for particles. The results are validated in phantom and patient studies against Monte Carlo simulations, and compared to β(+)-emitter distributions obtained from a slightly modified version of the originally proposed one-dimensional filtering approach applied to three-dimensional dose distributions. In contrast to previously introduced methods, TPSPET provides a faster implementation, the results show no sensitivity to lateral field extension and the predicted β(+)-emitter densities are fully consistent to the planned treatment dose as they are calculated by the same pencil beam algorithms. These findings suggest a large potential of the application of TPSPET for in vivo dose verification in the daily clinical routine.
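
    The filtering idea described above, predicting a positron-emitter depth profile by applying a kernel to the planned depth dose in water, can be illustrated with a one-dimensional convolution. The sketch below uses a completely hypothetical filter and depth dose; the actual reaction-channel-dependent filters in TPSPET are derived differently and applied per reaction channel.

        import numpy as np

        def predict_emitter_profile(depth_dose, kernel):
            """Apply a 1D filter kernel to a depth dose profile to obtain a
            (relative) positron-emitter depth distribution."""
            return np.convolve(depth_dose, kernel, mode="same")

        # Hypothetical proton depth dose with a crude Bragg-peak-like shape (arbitrary units)
        depth_cm = np.linspace(0, 15, 151)
        dose = 0.3 + 0.02 * depth_cm
        dose[depth_cm > 12] = 0.0
        dose[(depth_cm > 11) & (depth_cm <= 12)] = 1.0   # peak region

        kernel = np.exp(-0.5 * np.linspace(-3, 3, 21) ** 2)  # hypothetical Gaussian-like filter
        kernel /= kernel.sum()
        emitters = predict_emitter_profile(dose, kernel)
        print(f"predicted emitter profile peaks at {depth_cm[emitters.argmax()]:.1f} cm depth")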

  1. A novel approach to EPID-based 3D volumetric dosimetry for IMRT and VMAT QA

    NASA Astrophysics Data System (ADS)

    Alhazmi, Abdulaziz; Gianoli, Chiara; Neppl, Sebastian; Martins, Juliana; Veloza, Stella; Podesta, Mark; Verhaegen, Frank; Reiner, Michael; Belka, Claus; Parodi, Katia

    2018-06-01

    Intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) are relatively complex treatment delivery techniques and require quality assurance (QA) procedures. Pre-treatment dosimetric verification represents a fundamental QA procedure in the daily clinical routine in radiation therapy. The purpose of this study is to develop an EPID-based approach to reconstruct a 3D dose distribution as imparted to a virtual cylindrical water phantom, to be used for plan-specific pre-treatment dosimetric verification of IMRT and VMAT plans. For each depth, the planar 2D dose distributions acquired in air were back-projected and convolved with depth-specific scatter and attenuation kernels. The kernels were obtained by making use of scatter and attenuation models to iteratively estimate the parameters from a set of reference measurements. The derived parameters served as a look-up table for reconstruction of arbitrary measurements. The summation of the reconstructed 3D dose distributions resulted in the integrated 3D dose distribution of the treatment delivery. The accuracy of the proposed approach was validated on clinical IMRT and VMAT plans by means of gamma evaluation, comparing the reconstructed 3D dose distributions with Octavius measurements. The comparison was carried out using (3%, 3 mm) criteria, scoring 99% and 96% passing rates for IMRT and VMAT, respectively. An accuracy comparable to that of the commercial device for 3D volumetric dosimetry was demonstrated. In addition, five IMRT and five VMAT plans were validated against the 3D dose calculation performed by the TPS in a water phantom using the same passing rate criteria. The median passing rate across the ten treatment plans was 97.3%, and the lowest was 95%. Moreover, the reconstructed 3D distribution is obtained without predictions relying on forward dose calculation and without external phantoms or dosimetric devices. Thus, the approach provides a fully automated, fast and easy QA procedure for plan-specific pre-treatment dosimetric verification.
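
    To illustrate the reconstruction step described above, back-projecting an in-air EPID image to a given depth and convolving it with a depth-specific kernel, a minimal 2D sketch is given below. The divergence scaling, attenuation coefficient, and kernel width are placeholders, not the fitted parameters of the published method.

        import numpy as np
        from scipy.ndimage import gaussian_filter, zoom

        def dose_plane_at_depth(epid_image, depth_cm, sdd_cm=150.0, ssd_cm=90.0,
                                mu_eff=0.05, sigma_px_per_cm=0.4):
            """Estimate the dose plane at `depth_cm` in a virtual water phantom from an
            in-air EPID image: rescale for beam divergence, attenuate, and convolve with
            a depth-dependent scatter kernel (all parameters are illustrative)."""
            scale = (ssd_cm + depth_cm) / sdd_cm            # divergence scaling, detector -> depth plane
            plane = zoom(epid_image, scale, order=1)
            plane *= np.exp(-mu_eff * depth_cm)             # crude primary attenuation with depth
            return gaussian_filter(plane, sigma=sigma_px_per_cm * depth_cm)  # depth-specific scatter kernel

        # Hypothetical in-air EPID image of an open square field
        epid = np.zeros((64, 64))
        epid[27:37, 27:37] = 1.0
        planes = {d: dose_plane_at_depth(epid, d) for d in (5.0, 10.0, 15.0)}
        for depth, plane in planes.items():
            print(f"depth {depth:.0f} cm: shape {plane.shape}, max {plane.max():.3f}")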

  2. SU-F-J-32: Do We Need KV Imaging During CBCT Based Patient Set-Up for Lung Radiation Therapy?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopal, A; Zhou, J; Prado, K

    Purpose: To evaluate the role of 2D kilovoltage (kV) imaging to complement cone beam CT (CBCT) imaging in a shift threshold based image guided radiation therapy (IGRT) strategy for conventional lung radiotherapy. Methods: A retrospective study was conducted by analyzing IGRT couch shift trends for 15 patients who received lung radiation therapy to evaluate the benefit of performing orthogonal kV imaging prior to CBCT imaging. Herein, a shift threshold based IGRT protocol was applied, which would mandate additional CBCT verification if the applied patient shifts exceeded 3 mm, to avoid intraobserver variability in CBCT registration and to confirm table shifts. For each patient, two IGRT strategies, kV + CBCT and CBCT alone, were compared, and the recorded patient shifts were categorized by whether additional CBCT acquisition would have been mandated or not. The effectiveness of either strategy was gauged by the likelihood of needing additional CBCT imaging for accurate patient set-up. Results: The use of CBCT alone was 6 times more likely to require an additional CBCT than kV + CBCT, for a 3 mm shift threshold (88% vs 14%). The likelihood of additional CBCT verification generally increased with lower shift thresholds, and was significantly lower when kV + CBCT was used (7% with a 5 mm shift threshold, 36% with a 2 mm threshold) than with CBCT alone (61% with a 5 mm shift threshold, 97% with a 2 mm threshold). With CBCT alone, treatment time increased by 2.2 min and dose increased by 1.9 cGy per fraction on average due to additional CBCT with a 3 mm shift threshold. Conclusion: The benefit of kV imaging to screen for gross misalignments led to more accurate CBCT based patient localization compared with using CBCT alone. The subsequently reduced need for additional CBCT verification will minimize treatment time and result in less overall patient imaging dose.

  3. Simulating flow around scaled model of a hypersonic vehicle in wind tunnel

    NASA Astrophysics Data System (ADS)

    Markova, T. V.; Aksenov, A. A.; Zhluktov, S. V.; Savitsky, D. V.; Gavrilov, A. D.; Son, E. E.; Prokhorov, A. N.

    2016-11-01

    A prospective hypersonic HEXAFLY aircraft is considered in this paper. In order to obtain the aerodynamic characteristics of a new design of the aircraft, experiments with a scaled model have been carried out in a wind tunnel under different conditions. The runs were performed at different angles of attack, with and without hydrogen combustion in the scaled propulsion engine. However, the measured physical quantities do not provide all the information about the flowfield. Numerical simulation can complement the experimental data as well as reduce the number of wind tunnel experiments. Besides that, reliable CFD software can be used to calculate the aerodynamic characteristics of any possible design of the full-scale aircraft under different operating conditions. The reliability of the numerical predictions must be confirmed in a verification study of the software. The present work is aimed at numerical investigation of the flowfield around and inside the scaled model of the HEXAFLY-CIAM module under wind tunnel conditions. A cold run (without combustion) was selected for this study. The calculations are performed in the FlowVision CFD software. The flow characteristics are compared against the available experimental data. The verification study carried out confirms the capability of the FlowVision CFD software to calculate the flows discussed.

  4. Formal Verification at System Level

    NASA Astrophysics Data System (ADS)

    Mazzini, S.; Puri, S.; Mari, F.; Melatti, I.; Tronci, E.

    2009-05-01

    System Level Analysis calls for a language comprehensible to experts with different backgrounds and yet precise enough to support meaningful analyses. SysML is emerging as an effective balance between such conflicting goals. In this paper we outline some of the results obtained on SysML-based system-level functional formal verification in an ESA/ESTEC study carried out in collaboration between INTECS and La Sapienza University of Rome. The study focuses on SysML-based system-level functional requirements techniques.

  5. Verification of relationship model between Korean new elderly class's recovery resilience and productive aging.

    PubMed

    Cho, Gun-Sang; Kim, Dae-Sung; Yi, Eun-Surk

    2015-12-01

    The purpose of this study is to verify a relationship model between the Korean new elderly class's recovery resilience and productive aging. As of 2013, this study sampled preliminary elderly people in Gyeonggi-do and other provinces nationwide. Data from a total of 484 valid subjects were analyzed. The collected data were processed using IBM SPSS 20.0 and AMOS 20.0 and underwent descriptive statistical analysis, confirmatory factor analysis, and structural model verification. The path coefficient associated with model fitness was examined. The standardized path coefficient between recovery resilience and productive aging is β=0.975 (t=14.790), revealing a statistically significant positive effect. Thus, the proposed basic model of a direct path from recovery resilience to productive aging was found to fit the data.

  6. Verification of relationship model between Korean new elderly class’s recovery resilience and productive aging

    PubMed Central

    Cho, Gun-Sang; Kim, Dae-Sung; Yi, Eun-Surk

    2015-01-01

    The purpose of this study is to verify a relationship model between the Korean new elderly class's recovery resilience and productive aging. As of 2013, this study sampled preliminary elderly people in Gyeonggi-do and other provinces nationwide. Data from a total of 484 valid subjects were analyzed. The collected data were processed using IBM SPSS 20.0 and AMOS 20.0 and underwent descriptive statistical analysis, confirmatory factor analysis, and structural model verification. The path coefficient associated with model fitness was examined. The standardized path coefficient between recovery resilience and productive aging is β=0.975 (t=14.790), revealing a statistically significant positive effect. Thus, the proposed basic model of a direct path from recovery resilience to productive aging was found to fit the data. PMID:26730383
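
    As a toy illustration of the standardized path coefficient reported in records 5 and 6 (not the authors' AMOS structural model), the sketch below regresses one standardized score on another; with a single predictor, the standardized weight reduces to the Pearson correlation. The simulated scores and variable names are placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      resilience = rng.normal(size=484)                          # stand-in latent scores
      productive_aging = 0.9 * resilience + rng.normal(scale=0.3, size=484)

      def standardise(x):
          # Convert a variable to z-scores (mean 0, unit sample variance).
          return (x - x.mean()) / x.std(ddof=1)

      # Slope of the regression of standardised y on standardised x.
      beta = np.polyfit(standardise(resilience), standardise(productive_aging), 1)[0]
      print(f"standardised path coefficient ~ {beta:.3f}")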

  7. Temporal comparison of ultrasound vs. auscultation and capnography in verification of endotracheal tube placement.

    PubMed

    Pfeiffer, P; Rudolph, S S; Børglum, J; Isbye, D L

    2011-11-01

    This study compared the time consumption of bilateral lung ultrasound with auscultation and capnography for verifying endotracheal intubation. A prospective, paired, investigator-blinded study was carried out in the operating theatre. Twenty-five adult patients requiring endotracheal intubation were included. During intubation, transtracheal ultrasound was performed to visualize passage of the endotracheal tube. During bag ventilation, bilateral lung ultrasound was performed to detect lung sliding as a sign of ventilation, simultaneously with capnography and auscultation of the epigastrium and chest. The primary outcome measure was the time difference to confirmed endotracheal intubation between ultrasound and auscultation alone. The secondary outcome measure was the time difference between ultrasound and auscultation combined with capnography. Both methods verified endotracheal tube placement in all patients. In 68% of patients, endotracheal tube placement was visualized by real-time transtracheal ultrasound. Comparing ultrasound with the combination of auscultation and capnography, there was a significant difference between the two methods. Median time for ultrasound was 40 s [interquartile range (IQR) 35-48 s] vs. 48 s (IQR 45-53 s), P < 0.0001. The mean difference was -7.1 s in favour of ultrasound [95% confidence interval (CI) -9.4 to -4.8 s]. No significant difference was found between ultrasound and auscultation alone. Median time for auscultation alone was 42 s (IQR 37-47 s), P = 0.6, with a mean difference of -0.88 s in favour of ultrasound (95% CI -4.2 to 2.5 s). Verification of endotracheal tube placement with ultrasound is as fast as auscultation alone and faster than the standard method of auscultation and capnography. © 2011 The Authors. Acta Anaesthesiologica Scandinavica © 2011 The Acta Anaesthesiologica Scandinavica Foundation.
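
    The paired-comparison analysis summarized in this record can be sketched as follows; the simulated confirmation times, the sample size, and the use of a paired t-test are assumptions for illustration, not the trial's data or statistical plan.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      t_ultrasound = rng.normal(40.0, 5.0, size=25)                # seconds to confirmation
      t_ausc_capno = t_ultrasound + rng.normal(7.0, 3.0, size=25)  # slower comparator

      diff = t_ultrasound - t_ausc_capno
      median, q1, q3 = np.percentile(t_ultrasound, [50, 25, 75])
      print(f"ultrasound: median {median:.0f} s (IQR {q1:.0f}-{q3:.0f} s)")

      # Paired test on per-patient differences between the two methods.
      t_stat, p_value = stats.ttest_rel(t_ultrasound, t_ausc_capno)
      print(f"mean difference {diff.mean():.1f} s, paired t-test p = {p_value:.4f}")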

  8. Verified compilation of Concurrent Managed Languages

    DTIC Science & Technology

    2017-11-01

    designs for compiler intermediate representations that facilitate mechanized proofs and verification; and (d) a realistic case study that combines these ... ideas to prove the correctness of a state-of-the-art concurrent garbage collector. Subject terms: program verification, compiler design ... Even though concurrency is a pervasive part of modern software and hardware systems, it has often been ignored in safety-critical system designs.

  9. Pharmacy Automation in Navy Medicine: A Study of Naval Medical Center San Diego

    DTIC Science & Technology

    2015-09-01

    to pharmacist verification. ... Figure 3. Robotic Delivery System Installed at Naval... medication, caps the vial, and affixes the label. This completed prescription is then placed on the conveyor belt for routing to pharmacist ... performing all steps, including transportation, up to pharmacist verification via the conveyor belt. Manual fills are located along the conveyor system

  10. Two-Black Box Concept for Warhead Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bates, Cameron Russell; Frame, Katherine Chiyoko; Mckigney, Edward Allen

    2017-03-06

    We have created a possible solution for meeting the requirements of certification/authentication while still employing complicated criteria. Technical solutions for protecting information from the host in an inspection environment need to be assessed by those with specific expertise, but LANL can still study the verification problem. The two-black-box framework developed here provides another potential solution to the confidence vs. certification paradox.

  11. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT - PHYSICAL REMOVAL OF PARTICULATE CONTAMINANTS IN DRINKING WATER: POLYMEM UF 120 S2 ULTRAFILTRATION MEMBRANE MODULE, LUXEMBURG, WISCONSIN

    EPA Science Inventory

    Verification testing of the Polymem UF120 S2 Ultrafiltration Membrane Module was conducted over a 46-day period at the Green Bay Water Utility Filtration Plant, Luxemburg, Wisconsin. The ETV testing described herein was funded in conjunction with a 12-month membrane pilot study f...

  12. Review of waste package verification tests. Semiannual report, October 1982-March 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soo, P.

    1983-08-01

    The current study is part of an ongoing task to specify tests that may be used to verify that engineered waste package/repository systems comply with NRC radionuclide containment and controlled-release performance objectives. Work covered in this report analyzes verification tests for borosilicate glass waste forms and bentonite- and zeolite-based packing materials (discrete backfills). 76 references.

  13. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CODING AND CODING VERIFICATION (HAND ENTRY) (UA-D-14.0)

    EPA Science Inventory

    The purpose of this SOP is to define the coding strategy for coding and coding verification of hand-entered data. It applies to the coding of all physical forms, especially those coded by hand. The strategy was developed for use in the Arizona NHEXAS project and the "Border" st...

  14. The Verification of a Method for Detecting and Quantifying Diethylene Glycol, Triethylene Glycol, Tetraethylene Glycol, 2-Butoxyethanol and 2-Methoxyethanol in Ground and Surface Waters

    EPA Science Inventory

    This verification study was a special project designed to determine the efficacy of a draft standard operating procedure (SOP) developed by US EPA Region 3 for the determination of selected glycols in drinking waters that may have been impacted by active unconventional oil and ga...

  15. Free and Reduced-Price Meal Application and Income Verification Practices in School Nutrition Programs in the United States

    ERIC Educational Resources Information Center

    Kwon, Junehee; Lee, Yee Ming; Park, Eunhye; Wang, Yujia; Rushing, Keith

    2017-01-01

    Purpose/Objectives: This study assessed current practices and attitudes of school nutrition program (SNP) management staff regarding free and reduced-price (F-RP) meal application and verification in SNPs. Methods: A stratified random sample of 1,500 SNP management staff in 14 states received a link to an online questionnaire and/or a printed…

  16. Nonlinear 3D MHD verification study: SpeCyl and PIXIE3D codes for RFP and Tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Bonfiglio, D.; Cappello, S.; Chacon, L.

    2010-11-01

    A strong emphasis is presently placed in the fusion community on reaching predictive capability of computational models. An essential requirement of such an endeavor is the process of assessing the mathematical correctness of computational tools, termed verification [1]. We present here a successful nonlinear cross-benchmark verification study between the 3D nonlinear MHD codes SpeCyl [2] and PIXIE3D [3]. Excellent quantitative agreement is obtained in both 2D and 3D nonlinear visco-resistive dynamics for reversed-field pinch (RFP) and tokamak configurations [4]. RFP dynamics, in particular, lends itself as an ideal non-trivial test-bed for 3D nonlinear verification. Perspectives for future application of the fully implicit parallel code PIXIE3D to RFP physics, in particular to address open issues on RFP helical self-organization, will be provided. [1] M. Greenwald, Phys. Plasmas 17, 058101 (2010). [2] S. Cappello and D. Biskamp, Nucl. Fusion 36, 571 (1996). [3] L. Chacón, Phys. Plasmas 15, 056103 (2008). [4] D. Bonfiglio, L. Chacón and S. Cappello, Phys. Plasmas 17 (2010).

  17. 37 CFR 201.29 - Access to, and confidentiality of, Statements of Account, Verification Auditor's Reports, and...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... confidentiality of, Statements of Account, Verification Auditor's Reports, and other verification information... GENERAL PROVISIONS § 201.29 Access to, and confidentiality of, Statements of Account, Verification Auditor... Account, including the Primary Auditor's Reports, filed under 17 U.S.C. 1003(c) and access to a Verifying...

  18. Formal Multilevel Hierarchical Verification of Synchronous MOS VLSI Circuits.

    DTIC Science & Technology

    1987-06-01

    12.4 Capacitance Coupling; 12.5 Multiple Abstraction Functions ... depend on whether it is performing flat verification or hierarchical verification. The primary operations of Silica Pithecus when performing flat ... signals never arise. The primary operation of Silica Pithecus when performing hierarchical verification is processing constraints to show they hold

  19. 49 CFR 802.7 - Requests: How, where, and when presented; verification of identity of individuals making requests...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...; verification of identity of individuals making requests; accompanying persons; and procedures for... Procedures and Requirements § 802.7 Requests: How, where, and when presented; verification of identity of... which the record is contained. (d) Verification of identity of requester. (1) For written requests, the...

  20. 49 CFR 802.7 - Requests: How, where, and when presented; verification of identity of individuals making requests...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...; verification of identity of individuals making requests; accompanying persons; and procedures for... Procedures and Requirements § 802.7 Requests: How, where, and when presented; verification of identity of... which the record is contained. (d) Verification of identity of requester. (1) For written requests, the...
