Science.gov

Sample records for accurate performance evaluation

  1. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    NASA Astrophysics Data System (ADS)

    Montes-Hugo, M.; Bouakba, H.; Arnone, R.

    2014-06-01

    The understanding of phytoplankton dynamics in the Gulf of Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC, Lee's quasi-analytical, QAA, and Garver-Siegel-Maritorena semi-empirical, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated based on SeaWiFS images and shipboard measurements obtained during May 2000 and April 2001. In general, aph(443) estimates derived from coupling the KU and QAA models presented the smallest differences with respect to in situ determinations made by High Pressure Liquid Chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error as inferred from the median relative bias per cruise. By comparison, the impact of applying different atmospheric correction schemes was secondary and represented an additive error of up to 24.3%. By using the SeaDAS (SeaWiFS Data Analysis System) default value for the optical cross section of phytoplankton (i.e., a*ph(443) = aph(443)/chl = 0.056 m2 mg-1), the median relative bias of our chl estimates, as derived from the most accurate spaceborne aph(443) retrievals and with respect to in situ determinations, increased by up to 29%.
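
    To make the final step concrete, the sketch below converts a phytoplankton absorption coefficient into a chlorophyll estimate using the fixed SeaDAS optical cross section quoted above (a*ph(443) = 0.056 m2 mg-1). It illustrates only that single relationship; the sample aph(443) values are hypothetical, not retrievals from the study.

      # Chlorophyll from aph(443) via a fixed chlorophyll-specific absorption.
      APH_STAR_443 = 0.056  # SeaDAS default a*ph(443), m^2 mg^-1

      def chl_from_aph(aph_443, aph_star=APH_STAR_443):
          """Estimate chlorophyll concentration (mg m^-3) from aph(443) (m^-1)."""
          return aph_443 / aph_star

      for aph in (0.01, 0.03, 0.10):   # hypothetical aph(443) retrievals, m^-1
          print(f"aph(443) = {aph:.2f} m^-1  ->  chl ~ {chl_from_aph(aph):.2f} mg m^-3")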

  2. Can Appraisers Rate Work Performance Accurately?

    ERIC Educational Resources Information Center

    Hedge, Jerry W.; Laue, Frances J.

    The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…

  3. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  4. An accurate link correlation estimator for improving wireless protocol performance.

    PubMed

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-02-12

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.
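
    The abstract does not give LACE's formulas, so the following is only a generic sketch of the underlying idea: estimating the correlation between two receivers from their 0/1 packet-reception traces and blending a long-term estimate with a short-term one. The window length, the blending weight alpha, and the use of a phi correlation are illustrative assumptions, not the paper's algorithm.

      # Generic link-correlation sketch (not the actual LACE estimator).
      import numpy as np

      def reception_correlation(trace_a, trace_b):
          """Phi (Pearson) correlation of two binary packet-reception traces."""
          a = np.asarray(trace_a, dtype=float)
          b = np.asarray(trace_b, dtype=float)
          if a.std() == 0 or b.std() == 0:
              return 0.0
          return float(np.corrcoef(a, b)[0, 1])

      def blended_correlation(trace_a, trace_b, short_window=32, alpha=0.5):
          """Combine long-term (full trace) and short-term (recent packets) estimates."""
          long_term = reception_correlation(trace_a, trace_b)
          short_term = reception_correlation(trace_a[-short_window:], trace_b[-short_window:])
          return alpha * short_term + (1.0 - alpha) * long_term

      # Synthetic traces: 1 = packet received, 0 = lost.
      rng = np.random.default_rng(0)
      base = rng.random(200) < 0.8
      trace_a = (base & (rng.random(200) < 0.95)).astype(int)
      trace_b = (base & (rng.random(200) < 0.95)).astype(int)
      print(round(blended_correlation(trace_a, trace_b), 3))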

  5. WAIS-IV reliable digit span is no more accurate than age corrected scaled score as an indicator of invalid performance in a veteran sample undergoing evaluation for mTBI.

    PubMed

    Spencer, Robert J; Axelrod, Bradley N; Drag, Lauren L; Waldron-Perrine, Brigid; Pangilinan, Percival H; Bieliauskas, Linas A

    2013-01-01

    Reliable Digit Span (RDS) is a measure of effort derived from the Digit Span subtest of the Wechsler intelligence scales. Some authors have suggested that the age-corrected scaled score provides a more accurate measure of effort than RDS. This study examined the relative diagnostic accuracy of the traditional RDS, an extended RDS including the new Sequencing task from the Wechsler Adult Intelligence Scale-IV, and the age-corrected scaled score, relative to performance validity as determined by the Test of Memory Malingering. Data were collected from 138 Veterans seen in a traumatic brain injury clinic. The traditional RDS (≤ 7), revised RDS (≤ 11), and Digit Span age-corrected scaled score (≤ 6) had respective sensitivities of 39%, 39%, and 33%, and respective specificities of 82%, 89%, and 91%. Of these indices, the revised RDS and the Digit Span age-corrected scaled score provided the most accurate measures of performance validity.
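
    For readers unfamiliar with how such cutoff-based validity indices are scored, the snippet below shows how sensitivity and specificity are computed for a cutoff such as RDS ≤ 7, with the Test of Memory Malingering providing the criterion classification. The scores and validity labels in the example are invented for illustration, not data from the study.

      # Sensitivity/specificity for a cutoff-based validity indicator.
      def sensitivity_specificity(scores, invalid_flags, cutoff):
          """invalid_flags[i] is True if the criterion test classified case i as invalid."""
          fails = [s <= cutoff for s in scores]
          tp = sum(f and inv for f, inv in zip(fails, invalid_flags))
          fn = sum((not f) and inv for f, inv in zip(fails, invalid_flags))
          tn = sum((not f) and (not inv) for f, inv in zip(fails, invalid_flags))
          fp = sum(f and (not inv) for f, inv in zip(fails, invalid_flags))
          return tp / (tp + fn), tn / (tn + fp)

      scores        = [5, 6, 7, 8, 9, 10, 11, 12, 13, 14]   # hypothetical RDS scores
      invalid_flags = [True, True, False, True, False, False, False, False, False, False]
      sens, spec = sensitivity_specificity(scores, invalid_flags, cutoff=7)
      print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")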

  6. Accurate Evaluation Method of Molecular Binding Affinity from Fluctuation Frequency

    NASA Astrophysics Data System (ADS)

    Hoshino, Tyuji; Iwamoto, Koji; Ode, Hirotaka; Ohdomari, Iwao

    2008-05-01

    Exact estimation of molecular binding affinity is critically important for drug discovery. Energy calculation is a direct method of computing the strength of the interaction between two molecules. This energetic approach is, however, not accurate enough to evaluate the slight differences in binding affinity needed to distinguish a prospective substance from dozens of drug candidates. Hence, more accurate in silico estimation of drug efficacy is currently in demand. Previously, we proposed a concept for estimating molecular binding affinity that focuses on the fluctuation at the interface between two molecules. The aim of this paper is to demonstrate the compatibility between the proposed computational technique and experimental measurements, through several examples of computer simulations: the association of human immunodeficiency virus type-1 (HIV-1) protease and its inhibitor (an example of drug-enzyme binding), the complexation of an antigen and its antibody (an example of protein-protein binding), and the combination of the estrogen receptor and its ligand chemicals (an example of ligand-receptor binding). The proposed affinity estimation has proven to be a promising technique for the advanced stages of drug discovery and design.

  7. Apprentice Performance Evaluation.

    ERIC Educational Resources Information Center

    Gast, Clyde W.

    The Granite City (Illinois) Steel apprentices are under a performance evaluation from entry to graduation. Federally approved, the program is guided by joint apprenticeship committees whose monthly meetings include performance evaluation from three information sources: journeymen, supervisors, and instructors. Journeymen's evaluations are made…

  8. Towards an Accurate Performance Modeling of Parallel Sparse Factorization

    SciTech Connect

    Grigori, Laura; Li, Xiaoye S.

    2006-05-26

    We present a performance model to analyze a parallel sparse LU factorization algorithm on modern cache-based, high-end parallel architectures. Our model characterizes the algorithmic behavior by taking into account the underlying processor speed, memory system performance, as well as the interconnect speed. The model is validated using the SuperLU_DIST linear system solver, sparse matrices from real applications, and an IBM POWER3 parallel machine. Our modeling methodology can be easily adapted to study the performance of other types of sparse factorizations, such as Cholesky or QR.
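
    The abstract does not reproduce the model's equations, so the following is a generic sketch of the kind of model described: predicted runtime as a sum of computation, memory-traffic, and interconnect terms. All machine parameters and workload counts are hypothetical, and the function is not the paper's actual SuperLU_DIST model.

      # Generic analytic performance model: compute + memory + communication.
      def predicted_time(flops, mem_bytes, n_messages, msg_bytes,
                         flop_rate, mem_bandwidth, latency, net_bandwidth):
          compute = flops / flop_rate                    # seconds of arithmetic
          memory  = mem_bytes / mem_bandwidth            # seconds of memory traffic
          network = n_messages * latency + msg_bytes / net_bandwidth
          return compute + memory + network

      # Hypothetical per-processor counts for one factorization step.
      t = predicted_time(flops=2.0e9, mem_bytes=1.5e9, n_messages=4_000, msg_bytes=2.0e8,
                         flop_rate=1.5e9,        # 1.5 GFLOP/s sustained
                         mem_bandwidth=4.0e9,    # 4 GB/s
                         latency=8.0e-6,         # 8 microseconds per message
                         net_bandwidth=5.0e8)    # 500 MB/s
      print(f"predicted time: {t:.2f} s")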

  9. Improving JWST Coronagraphic Performance with Accurate Image Registration

    NASA Astrophysics Data System (ADS)

    Van Gorkom, Kyle; Pueyo, Laurent; Lajoie, Charles-Philippe; JWST Coronagraphs Working Group

    2016-06-01

    The coronagraphs on the James Webb Space Telescope (JWST) will enable high-contrast observations of faint objects at small separations from bright hosts, such as circumstellar disks, exoplanets, and quasar disks. Despite attenuation by the coronagraphic mask, bright speckles in the host’s point spread function (PSF) remain, effectively washing out the signal from the faint companion. Suppression of these bright speckles is typically accomplished by repeating the observation with a star that lacks a faint companion, creating a reference PSF that can be subtracted from the science image to reveal any faint objects. Before this reference PSF can be subtracted, however, the science and reference images must be aligned precisely, typically to 1/20 of a pixel. Here, we present several such algorithms for performing image registration on JWST coronagraphic images. Using both simulated and pre-flight test data (taken in cryovacuum), we assess (1) the accuracy of each algorithm at recovering misaligned scenes and (2) the impact of image registration on achievable contrast. Proper image registration, combined with post-processing techniques such as KLIP or LOCI, will greatly improve the performance of the JWST coronagraphs.
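
    One widely used way to achieve the roughly 1/20-pixel alignment described above is upsampled phase cross-correlation; the sketch below illustrates it with scikit-image and SciPy on a synthetic Gaussian "star" image. These libraries and the synthetic data are assumptions for illustration; this is not necessarily one of the algorithms evaluated in the paper.

      # Subpixel image registration by upsampled phase cross-correlation.
      import numpy as np
      from scipy.ndimage import shift as nd_shift
      from skimage.registration import phase_cross_correlation

      # Synthetic reference frame: a Gaussian spot plus noise stands in for a PSF.
      y, x = np.mgrid[0:128, 0:128]
      reference = np.exp(-((x - 64.0) ** 2 + (y - 64.0) ** 2) / (2 * 3.0 ** 2))
      reference += 0.01 * np.random.default_rng(1).normal(size=reference.shape)
      science = nd_shift(reference, (1.23, -0.57))      # misaligned copy, known offset

      # upsample_factor=20 resolves the shift to roughly 1/20 of a pixel.
      offset, error, phasediff = phase_cross_correlation(reference, science,
                                                         upsample_factor=20)
      # Per the scikit-image docs, `offset` is the shift required to register the
      # moving (science) frame to the reference, so applying it re-aligns the images.
      aligned = nd_shift(science, offset)
      print("recovered offset (y, x):", offset)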

  10. Accurate evaluation of homogenous and nonhomogeneous gas emissivities

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Lee, K. P.

    1984-01-01

    Spectral transmittance and total band absorptance of selected infrared bands of carbon dioxide and water vapor are calculated by using the line-by-line and quasi-random band models, and these are compared with available experimental results to establish the validity of the quasi-random band model. Various wide-band model correlations are employed to calculate the total band absorptance and total emissivity of these two gases under homogeneous and nonhomogeneous conditions. These results are compared with available experimental results under identical conditions. From these comparisons, it is found that the quasi-random band model can provide quite accurate results and is suitable for most atmospheric applications.

  11. Accurate polarimeter with multicapture fitting for plastic lens evaluation

    NASA Astrophysics Data System (ADS)

    Domínguez, Noemí; Mayershofer, Daniel; Garcia, Cristina; Arasa, Josep

    2016-02-01

    Due to their manufacturing process, plastic injection-molded lenses do not achieve a constant density throughout their volume. This change of density introduces tensions in the material, inducing local birefringence, which in turn is translated into a variation of the ordinary and extraordinary refractive indices that can be expressed as a retardation phase plane using the Jones matrix notation. Detecting and measuring the value of this phase retardation is therefore a very useful way to evaluate the quality of plastic lenses. We introduce a polariscopic device to obtain two-dimensional maps of the tension distribution in the bulk of a lens, based on detection of the local birefringence. In addition to a description of the device and the mathematical approach used, a set of initial measurements is presented that confirms the validity of the developed system for testing the uniformity of plastic lenses.

  12. Evaluating Performance of Components

    NASA Technical Reports Server (NTRS)

    Katz, Daniel; Tisdale, Edwin; Norton, Charles

    2004-01-01

    Parallel Component Performance Benchmarks is a computer program developed to aid the evaluation of the Common Component Architecture (CCA) - a software architecture, based on a component model, that was conceived to foster high-performance computing, including parallel computing. More specifically, this program compares the performances (principally by measuring computing times) of componentized versus conventional versions of the Parallel Pyramid 2D Adaptive Mesh Refinement library - a software library that is used to generate computational meshes for solving physical problems and that is typical of software libraries in use at NASA's Jet Propulsion Laboratory.

  13. SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments

    PubMed Central

    Eter, Wael A.; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin

    2016-01-01

    Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT - optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, 111In-exendin-3, was measured by ex vivo SPECT and cross-evaluated by 3D quantitative OPT imaging as well as by histology within healthy and alloxan-treated Brown Norway rat pancreata. The SPECT signal was in excellent linear correlation with OPT data, as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of 111In-exendin-3 and insulin-positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529

  14. Influence of accurate and inaccurate 'split-time' feedback upon 10-mile time trial cycling performance.

    PubMed

    Wilson, Mathew G; Lane, Andy M; Beedie, Chris J; Farooq, Abdulaziz

    2012-01-01

    The objective of the study is to examine the impact of accurate and inaccurate 'split-time' feedback upon a 10-mile time trial (TT) performance and to quantify power output into a practically meaningful unit of variation. Seven well-trained cyclists completed four randomised bouts of a 10-mile TT on a SRM™ cycle ergometer. TTs were performed (1) with accurate performance feedback, (2) without performance feedback, and (3) and (4) with false negative and false positive 'split-time' feedback showing performance 5% slower or 5% faster than actual performance. There were no significant differences in completion time, average power output, heart rate or blood lactate between the four feedback conditions. There were significantly lower (p < 0.001) average [Formula: see text] (ml min(-1)) and [Formula: see text] (l min(-1)) scores in the false positive (3,485 ± 596; 119 ± 33) and accurate (3,471 ± 513; 117 ± 22) feedback conditions compared to the false negative (3,753 ± 410; 127 ± 27) and blind (3,772 ± 378; 124 ± 21) feedback conditions. Cyclists spent a greater amount of time in a '20 watt zone' (10 W either side of average power) in the false negative feedback condition (fastest) than in the accurate feedback (slowest) condition (39.3 vs. 32.2%, p < 0.05). There were no significant differences in 10-mile TT performance time between accurate and inaccurate feedback conditions, despite significantly lower average [Formula: see text] and [Formula: see text] scores in the false positive and accurate feedback conditions. Additionally, cycling with a small variation in power output (10 W either side of average power) produced the fastest TT. Further psycho-physiological research should examine the mechanism(s) by which lower [Formula: see text] and [Formula: see text] scores are observed when cycling in a false positive or accurate feedback condition compared to a false negative or blind feedback condition.
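
    The '20 watt zone' statistic is simply the percentage of samples lying within 10 W of the mean power; a minimal sketch of that calculation is shown below, applied to a synthetic 1 Hz power trace rather than the study's data.

      # Percentage of time spent within +/-10 W of mean power.
      import numpy as np

      def pct_time_in_zone(power_watts, half_width=10.0):
          power = np.asarray(power_watts, dtype=float)
          in_zone = np.abs(power - power.mean()) <= half_width
          return 100.0 * in_zone.mean()

      rng = np.random.default_rng(42)
      trace = 260 + 15 * rng.standard_normal(1500)   # ~25 min ride at ~260 W (made up)
      print(f"{pct_time_in_zone(trace):.1f}% of samples within 10 W of mean power")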

  15. Cerebral cortical activity associated with non-experts' most accurate motor performance.

    PubMed

    Dyke, Ford; Godwin, Maurice M; Goel, Paras; Rehm, Jared; Rietschel, Jeremy C; Hunt, Carly A; Miller, Matthew W

    2014-10-01

    This study's specific aim was to determine if non-experts' most accurate motor performance is associated with verbal-analytic- and working memory-related cerebral cortical activity during motor preparation. To assess this, EEG was recorded from non-expert golfers executing putts; EEG spectral power and coherence were calculated for the epoch preceding putt execution; and spectral power and coherence for the five most accurate putts were contrasted with that for the five least accurate. Results revealed marked power in the theta frequency bandwidth at all cerebral cortical regions for the most accurate putts relative to the least accurate, and considerable power in the low-beta frequency bandwidth at the left temporal region for the most accurate compared to the least. As theta power is associated with working memory and low-beta power at the left temporal region with verbal analysis, results suggest non-experts' most accurate motor performance is associated with verbal-analytic- and working memory-related cerebral cortical activity during motor preparation. PMID:25058623

  16. Evaluating Large-Scale Studies to Accurately Appraise Children's Performance

    ERIC Educational Resources Information Center

    Ernest, James M.

    2012-01-01

    Educational policy is often developed using a top-down approach. Recently, there has been a concerted shift in policy for educators to develop programs and research proposals that evolve from "scientific" studies and focus less on their intuition, aided by professional wisdom. This article analyzes several national and international educational…

  17. Functional Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Greenisen, Michael C.; Hayes, Judith C.; Siconolfi, Steven F.; Moore, Alan D.

    1999-01-01

    The Extended Duration Orbiter Medical Project (EDOMP) was established to address specific issues associated with optimizing the ability of crews to complete mission tasks deemed essential to entry, landing, and egress for spaceflights lasting up to 16 days. The main objectives of this functional performance evaluation were to investigate the physiological effects of long-duration spaceflight on skeletal muscle strength and endurance, as well as aerobic capacity and orthostatic function. Long-duration exposure to a microgravity environment may produce physiological alterations that affect crew ability to complete critical tasks such as extravehicular activity (EVA), intravehicular activity (IVA), and nominal or emergency egress. Ultimately, this information will be used to develop and verify countermeasures. The answers to three specific functional performance questions were sought: (1) What are the performance decrements resulting from missions of varying durations? (2) What are the physical requirements for successful entry, landing, and emergency egress from the Shuttle? and (3) What combination of preflight fitness training and in-flight countermeasures will minimize in-flight muscle performance decrements? To answer these questions, the Exercise Countermeasures Project looked at physiological changes associated with muscle degradation as well as orthostatic intolerance. A means of ensuring motor coordination was necessary to maintain proficiency in piloting skills, EVA, and IVA tasks. In addition, it was necessary to maintain musculoskeletal strength and function to meet the rigors associated with moderate altitude bailout and with nominal or emergency egress from the landed Orbiter. Eight investigations, referred to as Detailed Supplementary Objectives (DSOs) 475, 476, 477, 606, 608, 617, 618, and 624, were conducted to study muscle degradation and the effects of exercise on exercise capacity and orthostatic function (Table 3-1). This chapter is divided into

  18. Evaluation of new reference genes in papaya for accurate transcript normalization under different experimental conditions.

    PubMed

    Zhu, Xiaoyang; Li, Xueping; Chen, Weixin; Chen, Jianye; Lu, Wangjin; Chen, Lei; Fu, Danwen

    2012-01-01

    Real-time reverse transcription PCR (RT-qPCR) is a preferred method for rapid and accurate quantification in gene expression studies. Appropriate application of RT-qPCR requires accurate normalization through the use of reference genes. As no single reference gene is universally suitable for all experiments, validation of reference gene(s) under different experimental conditions is crucial for RT-qPCR analysis. To date, only a few studies on reference genes have been done in other plants and none in papaya. In the present work, we selected 21 candidate reference genes and evaluated their expression stability in 246 papaya fruit samples using three algorithms: geNorm, NormFinder and RefFinder. The samples consisted of 13 sets collected under different experimental conditions, including various tissues, different storage temperatures, different cultivars, developmental stages, postharvest ripening, modified atmosphere packaging, 1-methylcyclopropene (1-MCP) treatment, hot water treatment, biotic stress and hormone treatment. Our results demonstrated that expression stability varied greatly between reference genes and that suitable reference gene(s) or combinations of reference genes for normalization should be validated according to the experimental conditions. In general, the internal reference genes EIF (Eukaryotic initiation factor 4A), TBP1 (TATA binding protein 1) and TBP2 (TATA binding protein 2) performed well under most experimental conditions, whereas the most widely used reference genes, ACTIN (Actin 2), 18S rRNA (18S ribosomal RNA) and GAPDH (Glyceraldehyde-3-phosphate dehydrogenase), were not suitable in many experimental conditions. In addition, the two commonly used programs, geNorm and NormFinder, proved sufficient for the validation. This work provides the first systematic analysis for the selection of superior reference genes for accurate transcript normalization in papaya under different experimental conditions.
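
    For readers unfamiliar with the stability measures used by such tools, the snippet below sketches the geNorm-style stability value M (the mean standard deviation of a gene's pairwise log2 expression ratios with all other candidates across samples; lower M means more stable). It is a compact re-implementation of the published definition, not the geNorm program itself, and the expression values are random placeholders rather than the papaya measurements.

      # geNorm-style expression stability measure M.
      import numpy as np

      def genorm_m(expr):
          """expr: (n_samples, n_genes) array of relative expression quantities."""
          log_expr = np.log2(expr)
          n_genes = expr.shape[1]
          m_values = np.empty(n_genes)
          for j in range(n_genes):
              ratios = log_expr[:, [j]] - np.delete(log_expr, j, axis=1)  # log2(j/k)
              m_values[j] = ratios.std(axis=0, ddof=1).mean()
          return m_values

      rng = np.random.default_rng(7)
      expr = rng.lognormal(mean=0.0, sigma=0.3, size=(24, 5))  # 24 samples, 5 genes (random)
      for gene, m in zip(["EIF", "TBP1", "TBP2", "ACTIN", "GAPDH"], genorm_m(expr)):
          print(f"{gene}: M = {m:.3f}")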

  19. Toward Accurate Measurement of Participation: Rethinking the Conceptualization and Operationalization of Participatory Evaluation

    ERIC Educational Resources Information Center

    Daigneault, Pierre-Marc; Jacob, Steve

    2009-01-01

    While participatory evaluation (PE) constitutes an important trend in the field of evaluation, its ontology has not been systematically analyzed. As a result, the concept of PE is ambiguous and inadequately theorized. Furthermore, no existing instrument accurately measures stakeholder participation. First, this article attempts to overcome these…

  20. Evaluating Economic Performance and Policies.

    ERIC Educational Resources Information Center

    Thurow, Lester C.

    1987-01-01

    Argues that a social welfare approach to evaluating economic performance is inappropriate at the high school level. Provides several historical case studies which could be used to augment instruction aimed at the evaluation of economic performance and policies. (JDH)

  1. More Bias in Performance Evaluation?

    ERIC Educational Resources Information Center

    Gallagher, Michael C.

    1978-01-01

    The results of this study indicate that a single performance evaluation should not be used for different purposes since the stated purpose of the evaluation can affect the actual performance rating. (Author/IRT)

  2. Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Derksen, Alexander; Heldmann, Stefan; Hallmann, Marc; Meine, Hans

    2015-03-01

    Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that performs high frequency stimulation of a target area deep inside the brain. A very accurate placement of the electrode is a prerequisite for positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans. In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR. The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, thus allowing algorithmic performance to be related to expert performance. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus, as compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.

  3. Post-identification feedback to eyewitnesses impairs evaluators' abilities to discriminate between accurate and mistaken testimony.

    PubMed

    Smalarz, Laura; Wells, Gary L

    2014-04-01

    Giving confirming feedback to mistaken eyewitnesses has robust distorting effects on their retrospective judgments (e.g., how certain they were, their view, etc.). Does feedback harm evaluators' abilities to discriminate between accurate and mistaken identification testimony? Participant-witnesses to a simulated crime made accurate or mistaken identifications from a lineup and then received confirming feedback or no feedback. Each then gave videotaped testimony about their identification, and a new sample of participant-evaluators judged the accuracy and credibility of the testimonies. Among witnesses who were not given feedback, evaluators were significantly more likely to believe the testimony of accurate eyewitnesses than they were to believe the testimony of mistaken eyewitnesses, indicating significant discrimination. Among witnesses who were given confirming feedback, however, evaluators believed accurate and mistaken witnesses at nearly identical rates, indicating no ability to discriminate. Moreover, there was no evidence of overbelief in the absence of feedback whereas there was significant overbelief in the confirming feedback conditions. Results demonstrate that a simple comment following a witness' identification decision ("Good job, you got the suspect") can undermine fact-finders' abilities to discern whether the witness made an accurate or a mistaken identification. PMID:24341835

  4. On the very accurate numerical evaluation of the Generalized Fermi-Dirac Integrals

    NASA Astrophysics Data System (ADS)

    Mohankumar, N.; Natarajan, A.

    2016-10-01

    We indicate a new and a very accurate algorithm for the evaluation of the Generalized Fermi-Dirac Integral with a relative error less than 10^-20. The method involves Double Exponential, Trapezoidal and Gauss-Legendre quadratures. For the residue correction of the Gauss-Legendre scheme, a simple and precise continued fraction algorithm is used.
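
    For reference, the generalized Fermi-Dirac integral in its standard form is F_k(eta, theta) = ∫_0^∞ x^k sqrt(1 + theta*x/2) / (exp(x - eta) + 1) dx. The sketch below evaluates it to high precision with mpmath's adaptive (tanh-sinh / exp-sinh) quadrature after splitting the range at x = eta; it illustrates the kind of double-exponential quadrature mentioned above, but it is not the authors' algorithm and does not claim their 10^-20 error bound.

      # High-precision generalized Fermi-Dirac integral via mpmath quadrature.
      from mpmath import mp, mpf, sqrt, exp, quad, inf

      mp.dps = 30  # ~30 significant digits of working precision

      def gfd(k, eta, theta):
          k, eta, theta = mpf(k), mpf(eta), mpf(theta)
          integrand = lambda x: x**k * sqrt(1 + theta * x / 2) / (exp(x - eta) + 1)
          # Split the range at x = eta, where the Fermi factor drops from ~1 to ~0.
          split = max(eta, mpf(0))
          return quad(integrand, [0, split, inf])

      print(gfd(k=0.5, eta=10, theta=0.1))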

  5. Error Reduction Program. [combustor performance evaluation codes

    NASA Technical Reports Server (NTRS)

    Syed, S. A.; Chiappetta, L. M.; Gosman, A. D.

    1985-01-01

    The details of a study to select, incorporate and evaluate the best available finite difference scheme to reduce numerical error in combustor performance evaluation codes are described. The combustor performance computer programs chosen were the two dimensional and three dimensional versions of Pratt & Whitney's TEACH code. The criteria used to select schemes required that the difference equations mirror the properties of the governing differential equation, be more accurate than the current hybrid difference scheme, be stable and economical, be compatible with TEACH codes, use only modest amounts of additional storage, and be relatively simple. The methods of assessment used in the selection process consisted of examination of the difference equation, evaluation of the properties of the coefficient matrix, Taylor series analysis, and performance on model problems. Five schemes from the literature and three schemes developed during the course of the study were evaluated. This effort resulted in the incorporation of a scheme in 3D-TEACH which is usually more accurate than the hybrid differencing method and never less accurate.

  6. The identification of complete domains within protein sequences using accurate E-values for semi-global alignment

    PubMed Central

    Kann, Maricel G.; Sheetlin, Sergey L.; Park, Yonil; Bryant, Stephen H.; Spouge, John L.

    2007-01-01

    The sequencing of complete genomes has created a pressing need for automated annotation of gene function. Because domains are the basic units of protein function and evolution, a gene can be annotated from a domain database by aligning domains to the corresponding protein sequence. Ideally, complete domains are aligned to protein subsequences, in a ‘semi-global alignment’. Local alignment, which aligns pieces of domains to subsequences, is common in high-throughput annotation applications, however. It is a mature technique, with the heuristics and accurate E-values required for screening large databases and evaluating the screening results. Hidden Markov models (HMMs) provide an alternative theoretical framework for semi-global alignment, but their use is limited because they lack heuristic acceleration and accurate E-values. Our new tool, GLOBAL, overcomes some limitations of previous semi-global HMMs: it has accurate E-values and the possibility of the heuristic acceleration required for high-throughput applications. Moreover, according to a standard of truth based on protein structure, two semi-global HMM alignment tools (GLOBAL and HMMer) had comparable performance in identifying complete domains, but distinctly outperformed two tools based on local alignment. When searching for complete protein domains, therefore, GLOBAL avoids disadvantages commonly associated with HMMs, yet maintains their superior retrieval performance. PMID:17596268
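
    A minimal sketch of the semi-global ("glocal") recurrence described above is given below: the complete domain must be aligned end to end, while unaligned sequence residues before and after the matched region are free. A simple match/mismatch/gap scheme stands in for the profile or substitution-matrix scoring a real tool would use.

      # Semi-global (glocal) alignment score: domain aligned fully, sequence ends free.
      def semi_global_score(domain, sequence, match=2, mismatch=-1, gap=-2):
          m, n = len(domain), len(sequence)
          # H[i][j]: best score aligning domain[:i], ending at sequence position j.
          H = [[0] * (n + 1) for _ in range(m + 1)]
          for i in range(1, m + 1):
              H[i][0] = H[i - 1][0] + gap          # the domain must be fully aligned
          # H[0][j] stays 0: leading sequence residues are skipped for free.
          for i in range(1, m + 1):
              for j in range(1, n + 1):
                  s = match if domain[i - 1] == sequence[j - 1] else mismatch
                  H[i][j] = max(H[i - 1][j - 1] + s,   # (mis)match
                                H[i - 1][j] + gap,     # domain residue vs gap
                                H[i][j - 1] + gap)     # sequence residue vs gap
          return max(H[m])                             # trailing sequence residues free

      print(semi_global_score("HEAGAWGHE", "PAWHEAGAWGHEEQRST"))   # exact hit scores 18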

  7. Evaluating Student Clinical Performance.

    ERIC Educational Resources Information Center

    Foster, Danny T.

    When the University of Iowa's athletic training education department developed evaluation criteria and methods to be used with students, attention was paid to validity, consistency, observation, and behaviors. The observations of student behaviors reflect three types of learning outcomes important to clinical education: cognitive, psychomotor, and…

  8. Instrument performance evaluation

    SciTech Connect

    Swinth, K.L.

    1993-03-01

    Deficiencies exist in both the performance and the quality of health physics instruments. Recognizing the implications of such deficiencies for the protection of workers and the public, in the early 1980s the DOE and the NRC encouraged the development of a performance standard and established a program to test a series of instruments against criteria in the standard. The purpose of the testing was to establish the practicality of the criteria in the standard, to determine the performance of a cross section of available instruments, and to establish a testing capability. Over 100 instruments were tested, resulting in a practical standard and an understanding of the deficiencies in available instruments. In parallel with the instrument testing, a value-impact study clearly established the benefits of implementing a formal testing program. An ad hoc committee also met several times to establish recommendations for the voluntary implementation of a testing program based on the studies and the performance standard. For several reasons, a formal program did not materialize. Ongoing tests and studies have supported the development of specific instruments and have helped specific clients understand the performance of their instruments. The purpose of this presentation is to trace the history of instrument testing to date and suggest the benefits of a centralized formal program.

  9. Performance Evaluation: A Deadly Disease?

    ERIC Educational Resources Information Center

    Aluri, Rao; Reichel, Mary

    1994-01-01

    W. Edwards Deming condemned performance evaluations as a deadly disease afflicting American management. He argued that performance evaluations nourish fear, encourage short-term thinking, stifle teamwork, and are no better than lotteries. This article examines library literature from Deming's perspective. Although that literature accepts…

  10. Blinded by Beauty: Attractiveness Bias and Accurate Perceptions of Academic Performance

    PubMed Central

    Talamas, Sean N.; Mavor, Kenneth I.; Perrett, David I.

    2016-01-01

    Despite the old adage not to ‘judge a book by its cover’, facial cues often guide first impressions and these first impressions guide our decisions. Literature suggests there are valid facial cues that assist us in assessing someone’s health or intelligence, but such cues are overshadowed by an ‘attractiveness halo’ whereby desirable attributions are preferentially ascribed to attractive people. The impact of the attractiveness halo effect on perceptions of academic performance in the classroom is concerning, as this has been shown to influence students’ future performance. We investigated the limiting effects of the attractiveness halo on perceptions of actual academic performance in faces of 100 university students. Given the ambiguity and various perspectives on the definition of intelligence and the growing consensus on the importance of conscientiousness over intelligence in predicting actual academic performance, we also investigated whether perceived conscientiousness was a more accurate predictor of academic performance than perceived intelligence. Perceived conscientiousness was found to be a better predictor of actual academic performance when compared to perceived intelligence and perceived academic performance, and accuracy was improved when controlling for the influence of attractiveness on judgments. These findings emphasize the misleading effect of attractiveness on the accuracy of first impressions of competence, which can have serious consequences in areas such as education and hiring. The findings also have implications for future research investigating impression accuracy based on facial stimuli. PMID:26885976

  11. Blinded by Beauty: Attractiveness Bias and Accurate Perceptions of Academic Performance.

    PubMed

    Talamas, Sean N; Mavor, Kenneth I; Perrett, David I

    2016-01-01

    Despite the old adage not to 'judge a book by its cover', facial cues often guide first impressions and these first impressions guide our decisions. Literature suggests there are valid facial cues that assist us in assessing someone's health or intelligence, but such cues are overshadowed by an 'attractiveness halo' whereby desirable attributions are preferentially ascribed to attractive people. The impact of the attractiveness halo effect on perceptions of academic performance in the classroom is concerning, as this has been shown to influence students' future performance. We investigated the limiting effects of the attractiveness halo on perceptions of actual academic performance in faces of 100 university students. Given the ambiguity and various perspectives on the definition of intelligence and the growing consensus on the importance of conscientiousness over intelligence in predicting actual academic performance, we also investigated whether perceived conscientiousness was a more accurate predictor of academic performance than perceived intelligence. Perceived conscientiousness was found to be a better predictor of actual academic performance when compared to perceived intelligence and perceived academic performance, and accuracy was improved when controlling for the influence of attractiveness on judgments. These findings emphasize the misleading effect of attractiveness on the accuracy of first impressions of competence, which can have serious consequences in areas such as education and hiring. The findings also have implications for future research investigating impression accuracy based on facial stimuli.

  12. Energy performance evaluation of AAC

    NASA Astrophysics Data System (ADS)

    Aybek, Hulya

    The U.S. building industry constitutes the largest consumer of energy (i.e., electricity, natural gas, petroleum) in the world. The building sector uses almost 41 percent of the primary energy and approximately 72 percent of the available electricity in the United States. As global energy-generating resources are being depleted at exponential rates, the amount of energy consumed and wasted cannot be ignored. Professionals concerned about the environment have placed a high priority on finding solutions that reduce energy consumption while maintaining occupant comfort. Sustainable design and the judicious combination of building materials comprise one solution to this problem. A future including sustainable energy may result from using energy simulation software to accurately estimate energy consumption and from applying building materials that achieve the potential results derived through simulation analysis. Energy-modeling tools assist professionals with making informed decisions about energy performance during the early planning phases of a design project, such as determining the most advantageous combination of building materials, choosing mechanical systems, and determining building orientation on the site. By implementing energy simulation software to estimate the effect of these factors on the energy consumption of a building, designers can make adjustments to their designs during the design phase when the effect on cost is minimal. The primary objective of this research consisted of identifying a method with which to properly select energy-efficient building materials and involved evaluating the potential of these materials to earn LEED credits when properly applied to a structure. In addition, this objective included establishing a framework that provides suggestions for improvements to currently available simulation software that enhance the viability of the estimates concerning energy efficiency and the achievements of LEED credits. The primary objective

  13. Accurate Histological Techniques to Evaluate Critical Temperature Thresholds for Prostate In Vivo

    NASA Astrophysics Data System (ADS)

    Bronskill, Michael; Chopra, Rajiv; Boyes, Aaron; Tang, Kee; Sugar, Linda

    2007-05-01

    Various histological techniques have been compared to evaluate the boundaries of thermal damage produced by ultrasound in vivo in a canine model. When all images are accurately co-registered, H&E stained micrographs provide the best assessment of acute cellular damage. Estimates of the boundaries of 100% and 0% cell killing correspond to maximum temperature thresholds of 54.6 ± 1.7°C and 51.5 ± 1.9°C, respectively.

  14. Increasing productivity through performance evaluation.

    PubMed

    Lachman, V D

    1984-12-01

    Four components form the base for a performance evaluation system. A discussion of management/organizational shortcomings creating performance problems is followed by a focus on the importance of an ongoing discussion of goals between the manager and the subordinate. Six components that impact performance are identified, and practical suggestions are given to increase motivation. A coaching analysis process, as well as counseling and disciplining models, define the steps for solving performance problems.

  15. Room for Improvement: Performance Evaluations.

    ERIC Educational Resources Information Center

    Webb, Gisela

    1989-01-01

    Describes a performance management approach to library personnel management that stresses communication, clarification of goals, and reinforcement of new practices and behaviors. Each phase of the evaluation process (preparation, rating, administrative review, appraisal interview, and follow-up) and special evaluations to be used in cases of…

  16. Evaluating Administrative/Supervisory Performance.

    ERIC Educational Resources Information Center

    Educational Research Service, Arlington, VA.

    This is a report on the third survey conducted on procedures for evaluating the performance of administrators and supervisors in local school systems. A questionnaire was sent to school systems enrolling 25,000 or more pupils, and results indicated that 84 of the 154 responding systems have formal evaluation procedures. Tables and discussions of…

  17. How to Construct More Accurate Student Models: Comparing and Optimizing Knowledge Tracing and Performance Factor Analysis

    ERIC Educational Resources Information Center

    Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.

    2011-01-01

    Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…
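
    For context, the Knowledge Tracing model referred to above is the standard Bayesian Knowledge Tracing update: after each observed response the probability of mastery is revised, then a learning transition is applied. The sketch below uses illustrative parameter values, not values fitted to any dataset discussed in the paper.

      # Standard Bayesian Knowledge Tracing (BKT) update.
      def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
          if correct:
              evidence = p_know * (1 - p_slip)
              posterior = evidence / (evidence + (1 - p_know) * p_guess)
          else:
              evidence = p_know * p_slip
              posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
          return posterior + (1 - posterior) * p_learn   # learning opportunity

      p = 0.3                                  # prior probability of mastery
      for response in [0, 1, 1, 1]:            # one observed practice sequence
          p = bkt_update(p, bool(response))
          print(f"P(known) = {p:.3f}")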

  18. Outcomes Evaluation in "Faith"-Based Social Services: Are We Evaluating "Faith" Accurately?

    ERIC Educational Resources Information Center

    Ferguson, Kristin M.; Wu, Qiaobing; Spruijt-Metz, Donna; Dyrness, Grace

    2007-01-01

    In response to a recent call for research on the effectiveness of faith-based organizations, this article synthesizes how effectiveness has been defined and measured in evaluation research of faith-based programs. Although evidence indicates that religion can have a positive impact on individuals' well-being, no prior comprehensive review exists…

  19. An improved method for accurate and rapid measurement of flight performance in Drosophila.

    PubMed

    Babcock, Daniel T; Ganetzky, Barry

    2014-01-01

    Drosophila has proven to be a useful model system for analysis of behavior, including flight. The initial flight tester involved dropping flies into an oil-coated graduated cylinder; landing height provided a measure of flight performance by assessing how far flies fall before producing enough thrust to make contact with the wall of the cylinder. Here we describe an updated version of the flight tester with four major improvements. First, we added a "drop tube" to ensure that all flies enter the flight cylinder at a similar velocity between trials, eliminating variability between users. Second, we replaced the oil coating with removable plastic sheets coated in Tangle-Trap, an adhesive designed to capture live insects. Third, we use a longer cylinder to enable more accurate discrimination of flight ability. Fourth, we use a digital camera and imaging software to automate the scoring of flight performance. These improvements allow for the rapid, quantitative assessment of flight behavior, useful for large datasets and large-scale genetic screens. PMID:24561810

  1. Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam

    NASA Astrophysics Data System (ADS)

    Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad

    2015-05-01

    Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at the attached witness plates. The results have been assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.

  2. Being aware of own performance: how accurately do children with autism spectrum disorder judge own memory performance?

    PubMed

    Elmose, Mette; Happé, Francesca

    2014-12-01

    Self-awareness was investigated by assessing accuracy of judging own memory performance in a group of children with autism spectrum disorder (ASD) compared with a group of typically developing (TD) children. Effects of stimulus type (social vs. nonsocial), and availability of feedback information as the task progressed, were examined. Results overall showed comparable levels and patterns of accuracy in the ASD and TD groups. A trend-level effect (p = .061, d = 0.60) was found, with ASD participants being more accurate in judging their own memory for nonsocial than for social stimuli, and the opposite pattern for TD participants. These findings suggest that awareness of own memory can be good in children with ASD. It is discussed how this finding may be interpreted, and it is suggested that further investigation into the relation between content, frequency, and quality of self-awareness, and the context of self-awareness, is needed.

  3. Evaluating Value-Added Methods of Estimating of Teacher Performance

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.; Reckase, Mark D.; Wooldridge, Jeffrey M.

    2011-01-01

    Accurate indicators of educational effectiveness are needed to advance national policy goals of raising student achievement and closing social/cultural based achievement gaps. If constructed and used appropriately, such indicators for both program evaluation and the evaluation of teacher and school performance could have a transformative effect on…

  4. Accurate Point-of-Care Detection of Ruptured Fetal Membranes: Improved Diagnostic Performance Characteristics with a Monoclonal/Polyclonal Immunoassay

    PubMed Central

    Rogers, Linda C.; Scott, Laurie; Block, Jon E.

    2016-01-01

    OBJECTIVE Accurate and timely diagnosis of rupture of membranes (ROM) is imperative to allow for gestational age-specific interventions. This study compared the diagnostic performance characteristics between two methods used for the detection of ROM as measured in the same patient. METHODS Vaginal secretions were evaluated using the conventional fern test as well as a point-of-care monoclonal/polyclonal immunoassay test (ROM Plus®) in 75 pregnant patients who presented to labor and delivery with complaints of leaking amniotic fluid. Both tests were compared to analytical confirmation of ROM using three external laboratory tests. Diagnostic performance characteristics were calculated including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy. RESULTS Diagnostic performance characteristics uniformly favored ROM detection using the immunoassay test compared to the fern test: sensitivity (100% vs. 77.8%), specificity (94.8% vs. 79.3%), PPV (75% vs. 36.8%), NPV (100% vs. 95.8%), and accuracy (95.5% vs. 79.1%). CONCLUSIONS The point-of-care immunoassay test provides improved diagnostic accuracy for the detection of ROM compared to fern testing. It has the potential of improving patient management decisions, thereby minimizing serious complications and perinatal morbidity. PMID:27199579
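
    The five reported characteristics all derive from a 2x2 table of test result versus laboratory-confirmed ROM status; the snippet below shows that calculation. The counts used are placeholders for illustration, not the study's data.

      # Diagnostic performance characteristics from a 2x2 table.
      def diagnostic_metrics(tp, fp, fn, tn):
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "PPV":         tp / (tp + fp),
              "NPV":         tn / (tn + fn),
              "accuracy":    (tp + tn) / (tp + fp + fn + tn),
          }

      # Placeholder counts, not the study's 75-patient data.
      for name, value in diagnostic_metrics(tp=20, fp=5, fn=2, tn=60).items():
          print(f"{name}: {100 * value:.1f}%")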

  5. Performance Criteria and Evaluation System

    1992-06-18

    The Performance Criteria and Evaluation System (PCES) was developed in order to make a data base of criteria accessible to radiation safety staff. The criteria included in the package are applicable to occupational radiation safety at DOE reactor and nonreactor nuclear facilities, but any data base of criteria may be created using the Criterion Data Base Utility (CDU). PCES assists personnel in carrying out oversight, line, and support activities.

  6. Identification and Evaluation of Reference Genes for Accurate Transcription Normalization in Safflower under Different Experimental Conditions.

    PubMed

    Li, Dandan; Hu, Bo; Wang, Qing; Liu, Hongchang; Pan, Feng; Wu, Wei

    2015-01-01

    Safflower (Carthamus tinctorius L.) has received a significant amount of attention as a medicinal plant and oilseed crop. Gene expression studies provide a theoretical molecular biology foundation for improving new traits and developing new cultivars. Real-time quantitative PCR (RT-qPCR) has become a crucial approach for gene expression analysis. In addition, appropriate reference genes (RGs) are essential for accurate and rapid relative quantification analysis of gene expression. In this study, fifteen candidate RGs involved in multiple metabolic pathways of plants were finally selected and validated under different experimental treatments, at different seed development stages and in different cultivars and tissues for real-time PCR experiments. These genes were ABCS, 60SRPL10, RANBP1, UBCL, MFC, UBCE2, EIF5A, COA, EF1-β, EF1, GAPDH, ATPS, MBF1, GTPB and GST. The suitability evaluation was executed by the geNorm and NormFinder programs. Overall, EF1, UBCE2, EIF5A, ATPS and 60SRPL10 were the most stable genes, and MBF1, as well as MFC, were the most unstable genes by geNorm and NormFinder software in all experimental samples. To verify the validation of RGs selected by the two programs, the expression analysis of 7 CtFAD2 genes in safflower seeds at different developmental stages under cold stress was executed using different RGs in RT-qPCR experiments for normalization. The results showed similar expression patterns when the most stable RGs selected by geNorm or NormFinder software were used. However, the differences were detected using the most unstable reference genes. The most stable combination of genes selected in this study will help to achieve more accurate and reliable results in a wide variety of samples in safflower.

  7. Identification and Evaluation of Reference Genes for Accurate Transcription Normalization in Safflower under Different Experimental Conditions

    PubMed Central

    Li, Dandan; Hu, Bo; Wang, Qing; Liu, Hongchang; Pan, Feng; Wu, Wei

    2015-01-01

    Safflower (Carthamus tinctorius L.) has received a significant amount of attention as a medicinal plant and oilseed crop. Gene expression studies provide a theoretical molecular biology foundation for improving new traits and developing new cultivars. Real-time quantitative PCR (RT-qPCR) has become a crucial approach for gene expression analysis. In addition, appropriate reference genes (RGs) are essential for accurate and rapid relative quantification analysis of gene expression. In this study, fifteen candidate RGs involved in multiple metabolic pathways of plants were finally selected and validated under different experimental treatments, at different seed development stages and in different cultivars and tissues for real-time PCR experiments. These genes were ABCS, 60SRPL10, RANBP1, UBCL, MFC, UBCE2, EIF5A, COA, EF1-β, EF1, GAPDH, ATPS, MBF1, GTPB and GST. The suitability evaluation was executed by the geNorm and NormFinder programs. Overall, EF1, UBCE2, EIF5A, ATPS and 60SRPL10 were the most stable genes, and MBF1, as well as MFC, were the most unstable genes by geNorm and NormFinder software in all experimental samples. To verify the validation of RGs selected by the two programs, the expression analysis of 7 CtFAD2 genes in safflower seeds at different developmental stages under cold stress was executed using different RGs in RT-qPCR experiments for normalization. The results showed similar expression patterns when the most stable RGs selected by geNorm or NormFinder software were used. However, the differences were detected using the most unstable reference genes. The most stable combination of genes selected in this study will help to achieve more accurate and reliable results in a wide variety of samples in safflower. PMID:26457898

  9. How accurately do drivers evaluate their own driving behavior? An on-road observational study.

    PubMed

    Amado, Sonia; Arıkan, Elvan; Kaça, Gülin; Koyuncu, Mehmet; Turkan, B Nilay

    2014-02-01

    Self-assessment of driving skills has become a noteworthy research subject in traffic psychology, since by knowing their strengths and weaknesses, drivers can take efficient compensatory action to moderate risk and ensure safety in hazardous environments. The current study investigates drivers' self-conception of their own driving skills and behavior in relation to expert evaluations of their actual driving, using naturalistic and systematic observation during an actual on-road driving session, and assesses the different aspects of driving via comprehensive scales sensitive to specific aspects of driving. Male participants aged 19-63 years (N=158) attended an on-road driving session lasting approximately 80 min (45 km). During the driving session, drivers' errors and violations were recorded by an expert observer. At the end of the session, observers completed the driver evaluation questionnaire, while drivers completed the driving self-evaluation questionnaire and the Driver Behavior Questionnaire (DBQ). Low to moderate correlations between driver and observer evaluations of driving skills and behavior, mainly on errors and violations of speed and traffic lights, were found. Furthermore, the robust finding that drivers evaluate their driving performance as better than the expert does was replicated. Over-positive appraisal was higher among drivers with higher error/violation scores and among those evaluated by the expert as "unsafe". We suggest that the traffic environment might be regulated by increasing feedback indicators of errors and violations, which in turn might improve insight into driving performance. Improving self-awareness through training and feedback sessions might play a key role in reducing the probability of risk in driving.

  10. Rapid and accurate evaluation of the quality of commercial organic fertilizers using near infrared spectroscopy.

    PubMed

    Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing the quality of commercial organic fertilizers. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions for total organic matter, water-soluble organic nitrogen, pH, and germination index; less accurate results for moisture, total nitrogen, and electrical conductivity; and the least accurate results for water-soluble organic carbon. Our results suggested the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers. PMID:24586313

  11. Rapid and accurate evaluation of the quality of commercial organic fertilizers using near infrared spectroscopy.

    PubMed

    Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing the quality of commercial organic fertilizers. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions for total organic matter, water-soluble organic nitrogen, pH, and germination index; less accurate results for moisture, total nitrogen, and electrical conductivity; and the least accurate results for water-soluble organic carbon. Our results suggested the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers.

  12. Rapid and Accurate Evaluation of the Quality of Commercial Organic Fertilizers Using Near Infrared Spectroscopy

    PubMed Central

    Wang, Chang; Huang, Chichao; Qian, Jian; Xiao, Jian; Li, Huan; Wen, Yongli; He, Xinhua; Ran, Wei; Shen, Qirong; Yu, Guanghui

    2014-01-01

    The composting industry has been growing rapidly in China because of a boom in the animal industry. Therefore, a rapid and accurate assessment of the quality of commercial organic fertilizers is of the utmost importance. In this study, a novel technique that combines near infrared (NIR) spectroscopy with partial least squares (PLS) analysis is developed for rapidly and accurately assessing the quality of commercial organic fertilizers. A total of 104 commercial organic fertilizers were collected from full-scale compost factories in Jiangsu Province, east China. In general, the NIR-PLS technique showed accurate predictions for total organic matter, water-soluble organic nitrogen, pH, and germination index; less accurate results for moisture, total nitrogen, and electrical conductivity; and the least accurate results for water-soluble organic carbon. Our results suggested the combined NIR-PLS technique could be applied as a valuable tool to rapidly and accurately assess the quality of commercial organic fertilizers. PMID:24586313
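    As a rough illustration of the NIR-PLS idea described in the records above, the sketch below fits a partial least squares regression and reports a cross-validated R2, assuming scikit-learn is available. The spectra, the response variable, and the number of latent components are placeholders, not the authors' data or settings; with real spectra the component count would be tuned by cross-validation.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(104, 700))                 # hypothetical NIR spectra (104 samples)
        y = rng.normal(loc=45.0, scale=5.0, size=104)   # hypothetical total organic matter (%)

        pls = PLSRegression(n_components=10)
        y_cv = cross_val_predict(pls, X, y, cv=10)      # 10-fold cross-validated predictions
        print("cross-validated R2:", r2_score(y, y_cv.ravel()))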

  13. Accurate evaluation of the angular-dependent direct correlation function of water

    NASA Astrophysics Data System (ADS)

    Zhao, Shuangliang; Liu, Honglai; Ramirez, Rosa; Borgis, Daniel

    2013-07-01

    The direct correlation function (DCF) plays a pivotal role in addressing the thermodynamic properties of the liquid state with non-mean-field statistical theories. This work provides an accurate yet efficient calculation procedure for evaluating the angular-dependent DCF of bulk SPC/E water. The DCF, represented here in a discrete angular basis, is computed in two steps: first, the molecular Ornstein-Zernike equation is solved with the total correlation function extracted from simulation as input; the resulting DCF is then refined in a second step at small wavelengths for all orientations in order to match the correct thermodynamic properties. This function is also discussed in terms of its rotational invariant components. In particular, we show that the component c112(r), which accounts for dipolar symmetry, reaches its long-range asymptotic behavior already at a short distance of 4 Å. With the knowledge of the DCF, the angular-dependent bridge function of bulk water is then computed and discussed in comparison with reference hard-sphere bridge functions. We conclude that, even though such hard-sphere bridge functions may be relevant for improving the calculation of Helmholtz free energies in integral equations or density functional theory, they are doomed to fail at a structural level.
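    For orientation, the first step described above can be illustrated in the much simpler isotropic case: the Ornstein-Zernike relation in Fourier space, c(k) = h(k) / (1 + rho*h(k)), turns a total correlation function into a direct correlation function. The sketch below uses a toy h(r) and plain radial Fourier transforms; it does not reproduce the angular-dependent molecular OZ treatment or the small-wavelength correction used in the paper.

        import numpy as np

        def radial_ft(f_r, r, k):
            """3D radial Fourier transform: F(k) = (4*pi/k) * integral of r*f(r)*sin(k*r) dr."""
            return np.array([4.0 * np.pi / kk * np.trapz(r * f_r * np.sin(kk * r), r) for kk in k])

        def radial_ift(f_k, k, r):
            """Inverse transform back to real space."""
            return np.array([np.trapz(k * f_k * np.sin(k * rr), k) / (2.0 * np.pi ** 2 * rr) for rr in r])

        rho = 0.0333                                    # number density (molecules per A^3), placeholder
        r = np.linspace(0.01, 20.0, 2000)               # radial grid (A)
        k = np.linspace(0.05, 20.0, 400)                # wavenumber grid (1/A)
        h_r = np.exp(-((r - 2.8) / 0.5) ** 2) - np.exp(-r)   # toy total correlation function

        h_k = radial_ft(h_r, r, k)
        c_k = h_k / (1.0 + rho * h_k)                   # isotropic Ornstein-Zernike relation
        c_r = radial_ift(c_k, k, r)
        print(c_r[:3])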

  14. Can a combination of ultrasonographic parameters accurately evaluate concussion and guide return-to-play decisions?

    PubMed

    Cartwright, Michael S; Dupuis, Janae E; Bargoil, Jessica M; Foster, Dana C

    2015-09-01

    Mild traumatic brain injury, often referred to as concussion, is a common, potentially debilitating, and costly condition. One of the main challenges in diagnosing and managing concussion is that there is not currently an objective test to determine the presence of a concussion and to guide return-to-play decisions for athletes. Traditional neuroimaging tests, such as brain magnetic resonance imaging, are normal in concussion, and therefore diagnosis and management are guided by reported symptoms. Some athletes will under-report symptoms to accelerate their return-to-play and others will over-report symptoms out of fear of further injury or misinterpretation of underlying conditions, such as migraine headache. Therefore, an objective measure is needed to assist in several facets of concussion management. Limited data in animal and human testing indicates that intracranial pressure increases slightly and cerebrovascular reactivity (the ability of the cerebral arteries to auto-regulate in response to changes in carbon dioxide) decreases slightly following mild traumatic brain injury. We hypothesize that a combination of ultrasonographic measurements (optic nerve sheath diameter and transcranial Doppler assessment of cerebrovascular reactivity) into a single index will allow for an accurate and non-invasive measurement of intracranial pressure and cerebrovascular reactivity, and this index will be clinically relevant and useful for guiding concussion diagnosis and management. Ultrasound is an ideal modality for the evaluation of concussion because it is portable (allowing for evaluation in many settings, such as on the playing field or in a combat zone), radiation-free (making repeat scans safe), and relatively inexpensive (resulting in nearly universal availability). This paper reviews the literature supporting our hypothesis that an ultrasonographic index can assist in the diagnosis and management of concussion, and it also presents limited data regarding the

  15. Can a combination of ultrasonographic parameters accurately evaluate concussion and guide return-to-play decisions?

    PubMed

    Cartwright, Michael S; Dupuis, Janae E; Bargoil, Jessica M; Foster, Dana C

    2015-09-01

    Mild traumatic brain injury, often referred to as concussion, is a common, potentially debilitating, and costly condition. One of the main challenges in diagnosing and managing concussion is that there is not currently an objective test to determine the presence of a concussion and to guide return-to-play decisions for athletes. Traditional neuroimaging tests, such as brain magnetic resonance imaging, are normal in concussion, and therefore diagnosis and management are guided by reported symptoms. Some athletes will under-report symptoms to accelerate their return-to-play and others will over-report symptoms out of fear of further injury or misinterpretation of underlying conditions, such as migraine headache. Therefore, an objective measure is needed to assist in several facets of concussion management. Limited data in animal and human testing indicates that intracranial pressure increases slightly and cerebrovascular reactivity (the ability of the cerebral arteries to auto-regulate in response to changes in carbon dioxide) decreases slightly following mild traumatic brain injury. We hypothesize that a combination of ultrasonographic measurements (optic nerve sheath diameter and transcranial Doppler assessment of cerebrovascular reactivity) into a single index will allow for an accurate and non-invasive measurement of intracranial pressure and cerebrovascular reactivity, and this index will be clinically relevant and useful for guiding concussion diagnosis and management. Ultrasound is an ideal modality for the evaluation of concussion because it is portable (allowing for evaluation in many settings, such as on the playing field or in a combat zone), radiation-free (making repeat scans safe), and relatively inexpensive (resulting in nearly universal availability). This paper reviews the literature supporting our hypothesis that an ultrasonographic index can assist in the diagnosis and management of concussion, and it also presents limited data regarding the

  16. Can a Combination of Ultrasonographic Parameters Accurately Evaluate Concussion and Guide Return-to-Play Decisions?

    PubMed Central

    Cartwright, Michael S.; Dupuis, Janae E.; Bargoil, Jessica M.; Foster, Dana C.

    2015-01-01

    Mild traumatic brain injury, often referred to as concussion, is a common, potentially debilitating, and costly condition. One of the main challenges in diagnosing and managing concussion is that there is not currently an objective test to determine the presence of a concussion and to guide return-to-play decisions for athletes. Traditional neuroimaging tests, such as brain magnetic resonance imaging, are normal in concussion, and therefore diagnosis and management are guided by reported symptoms. Some athletes will under-report symptoms to accelerate their return-to-play and others will over-report symptoms out of fear of further injury or misinterpretation of underlying conditions, such as migraine headache. Therefore, an objective measure is needed to assist in several facets of concussion management. Limited data in animal and human testing indicates that intracranial pressure increases slightly and cerebrovascular reactivity (the ability of the cerebral arteries to auto-regulate in response to changes in carbon dioxide) decreases slightly following mild traumatic brain injury. We hypothesize that a combination of ultrasonographic measurements (optic nerve sheath diameter and transcranial Doppler assessment of cerebrovascular reactivity) into a single index will allow for an accurate and non-invasive measurement of intracranial pressure and cerebrovascular reactivity, and this index will be clinically relevant and useful for guiding concussion diagnosis and management. Ultrasound is an ideal modality for the evaluation of concussion because it is portable (allowing for evaluation in many settings, such as on the playing field or in a combat zone), radiation-free (making repeat scans safe), and relatively inexpensive (resulting in nearly universal availability). This paper reviews the literature supporting our hypothesis that an ultrasonographic index can assist in the diagnosis and management of concussion, and it also presents limited data regarding the

  17. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type...

  18. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type...

  19. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type...

  20. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type...

  1. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type...

  2. JCZS: An Intermolecular Potential Database for Performing Accurate Detonation and Expansion Calculations

    SciTech Connect

    Baer, M.R.; Hobbs, M.L.; McGee, B.C.

    1998-11-03

    Exponential-13,6 (EXP-13,6) potential parameters for 750 gases composed of 48 elements were determined and assembled in a database, referred to as the JCZS database, for use with the Jacobs Cowperthwaite Zwisler equation of state (JCZ3-EOS). The EXP-13,6 force constants were obtained by using literature values of Lennard-Jones (LJ) potential functions, by using corresponding states (CS) theory, by matching pure liquid shock Hugoniot data, and by using molecular volume to determine the approach radii with the well depth estimated from high-pressure isentropes. The JCZS database was used to accurately predict detonation velocity, pressure, and temperature for 50 different explosives with initial densities ranging from 0.25 g/cm3 to 1.97 g/cm3. Accurate predictions were also obtained for pure liquid shock Hugoniots, static properties of nitrogen, and gas detonations at high initial pressures.

  3. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  4. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  5. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.
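    The "learned weighted sum" in the abstract repeated above is essentially a regularized linear readout from a trials-by-neurons firing-rate matrix to a behavioral score. The sketch below shows that construction on synthetic data, assuming scikit-learn; it is not the authors' recordings or fitting pipeline.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        rates = rng.poisson(lam=5.0, size=(500, 168)).astype(float)   # hypothetical IT firing rates
        true_w = rng.normal(size=168)
        behavior = rates @ true_w + rng.normal(scale=5.0, size=500)   # hypothetical behavioral scores

        readout = Ridge(alpha=1.0)                       # learned weighted sum with L2 penalty
        r2 = cross_val_score(readout, rates, behavior, cv=5, scoring="r2").mean()
        print("cross-validated R2 of the weighted-sum readout:", r2)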

  6. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., DEPARTMENT OF DEFENSE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 236.604 Performance evaluation. Prepare a separate performance evaluation after... familiar with the architect-engineer contractor's performance....

  7. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: sensitivity and specificity analysis.

    PubMed

    Kapp, Eugene A; Schütz, Frédéric; Connolly, Lisa M; Chakel, John A; Meza, Jose E; Miller, Christine A; Fenyo, David; Eng, Jimmy K; Adkins, Joshua N; Omenn, Gilbert S; Simpson, Richard J

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X!Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, PeptideProphet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X!Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.

  8. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of ''consensus scoring'', i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
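    The reversed-sequence strategy mentioned in both versions of this abstract amounts to scoring a decoy database alongside the target database and choosing the score cutoff whose estimated false-positive rate stays below a target. The sketch below illustrates that selection on synthetic score distributions; the scoring itself is left to whichever search engine is in use.

        import numpy as np

        def threshold_for_fp_rate(target_scores, decoy_scores, max_fp_rate=0.05):
            """Lowest score cutoff whose decoy-based FP estimate stays at or below max_fp_rate."""
            best = None
            for t in np.sort(np.concatenate([target_scores, decoy_scores]))[::-1]:
                n_target = np.sum(target_scores >= t)
                n_decoy = np.sum(decoy_scores >= t)
                if n_target == 0:
                    continue
                if n_decoy / n_target <= max_fp_rate:
                    best = t                  # thresholds scanned from strict to lenient
                else:
                    break
            return best

        rng = np.random.default_rng(2)
        target = rng.normal(3.0, 1.0, 5000)   # hypothetical target-database match scores
        decoy = rng.normal(1.5, 1.0, 5000)    # hypothetical reversed-sequence match scores
        print("score cutoff for a 5% FP rate:", threshold_for_fp_rate(target, decoy))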

  9. Ultra-performance liquid chromatography/tandem mass spectrometry for accurate quantification of global DNA methylation in human sperms.

    PubMed

    Wang, Xiaoli; Suo, Yongshan; Yin, Ruichuan; Shen, Heqing; Wang, Hailin

    2011-06-01

    Aberrant DNA methylation in human sperms has been proposed to be a possible mechanism associated with male infertility. We developed an ultra-performance liquid chromatography/tandem mass spectrometry (UPLC-MS/MS) method for rapid, sensitive, and specific detection of global DNA methylation level in human sperms. Multiple-reaction monitoring (MRM) mode was used in MS/MS detection for accurate quantification of DNA methylation. The intra-day and inter-day precision values of this method were within 1.50-5.70%. By using 2-deoxyguanosine as an internal standard, UPLC-MS/MS method was applied for the detection of global DNA methylation levels in three cultured cell lines. DNA methyltransferases inhibitor 5-aza-2'-deoxycytidine can significantly reduce global DNA methylation levels in treated cell lines, showing the reliability of our method. We further examined global DNA methylation levels in human sperms, and found that global methylation values varied from 3.79% to 4.65%. The average global DNA methylation level of sperm samples washed only by PBS (4.03%) was relatively lower than that of sperm samples in which abnormal and dead sperm cells were removed by density gradient centrifugation (4.25%), indicating the possible aberrant DNA methylation level in abnormal sperm cells. Clinical application of UPLC-MS/MS method in global DNA methylation detection of human sperms will be useful in human sperm quality evaluation and the study of epigenetic mechanisms responsible for male infertility.
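    The global methylation percentages quoted above are commonly derived from the quantified nucleosides as 5mdC / (5mdC + dC); the sketch below shows that arithmetic with placeholder concentrations. The paper's own calibration against the 2'-deoxyguanosine internal standard is not reproduced here.

        five_mdC = 0.42   # quantified 5-methyl-2'-deoxycytidine (arbitrary concentration units)
        dC = 9.58         # quantified 2'-deoxycytidine (same units)

        global_methylation_percent = 100.0 * five_mdC / (five_mdC + dC)
        print(f"global DNA methylation: {global_methylation_percent:.2f}%")   # 4.20%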

  10. Toward accurate molecular identification of species in complex environmental samples: testing the performance of sequence filtering and clustering methods

    PubMed Central

    Flynn, Jullien M; Brown, Emily A; Chain, Frédéric J J; MacIsaac, Hugh J; Cristescu, Melania E

    2015-01-01

    Metabarcoding has the potential to become a rapid, sensitive, and effective approach for identifying species in complex environmental samples. Accurate molecular identification of species depends on the ability to generate operational taxonomic units (OTUs) that correspond to biological species. Due to the sometimes enormous estimates of biodiversity using this method, there is a great need to test the efficacy of data analysis methods used to derive OTUs. Here, we evaluate the performance of various methods for clustering length variable 18S amplicons from complex samples into OTUs using a mock community and a natural community of zooplankton species. We compare analytic procedures consisting of a combination of (1) stringent and relaxed data filtering, (2) singleton sequences included and removed, (3) three commonly used clustering algorithms (mothur, UCLUST, and UPARSE), and (4) three methods of treating alignment gaps when calculating sequence divergence. Depending on the combination of methods used, the number of OTUs varied by nearly two orders of magnitude for the mock community (60–5068 OTUs) and three orders of magnitude for the natural community (22–22191 OTUs). The use of relaxed filtering and the inclusion of singletons greatly inflated OTU numbers without increasing the ability to recover species. Our results also suggest that the method used to treat gaps when calculating sequence divergence can have a great impact on the number of OTUs. Our findings are particularly relevant to studies that cover taxonomically diverse species and employ markers such as rRNA genes in which length variation is extensive. PMID:26078860
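    Point (4) above, the treatment of alignment gaps, can change a pairwise divergence estimate considerably. The short sketch below compares two common conventions on a pair of toy aligned sequences; it is an illustration of the issue, not the clustering pipelines evaluated in the study.

        def divergence(seq_a, seq_b, gap_mode="ignore"):
            """Pairwise divergence of two equal-length aligned sequences.
            gap_mode 'ignore': positions with a gap in either sequence are skipped.
            gap_mode 'mismatch': a gap aligned to a base counts as a difference."""
            assert len(seq_a) == len(seq_b)
            diffs = compared = 0
            for a, b in zip(seq_a.upper(), seq_b.upper()):
                if a == "-" or b == "-":
                    if gap_mode == "mismatch" and a != b:
                        diffs += 1
                        compared += 1
                    continue
                compared += 1
                if a != b:
                    diffs += 1
            return diffs / compared if compared else 0.0

        s1 = "ACGT-ACGTACGT"
        s2 = "ACGTTACG--CGA"
        print(divergence(s1, s2, "ignore"))     # gaps skipped
        print(divergence(s1, s2, "mismatch"))   # gaps counted as differences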

  11. SEASAT SAR performance evaluation study

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The performance of the SEASAT synthetic aperture radar (SAR) sensor was evaluated using data processed by the MDA digital processor. Two particular aspects are considered: the location accuracy of image data, and the calibration of the measured backscatter amplitude of a set of corner reflectors. The image location accuracy was assessed by selecting identifiable targets in several scenes, converting their image locations to UTM coordinates, and comparing the results to map sheets. The error standard deviation is measured to be approximately 30 meters. The amplitude was calibrated by measuring the responses of the Goldstone corner reflector array and comparing the results to theoretical values. A linear regression of the measured against theoretical values results in a slope of 0.954 with a correlation coefficient of 0.970.
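    The amplitude-calibration step described above reduces to a linear regression of measured corner-reflector responses against theoretical values. The sketch below shows that computation with placeholder numbers rather than the Goldstone measurements.

        import numpy as np

        theoretical_db = np.array([30.0, 32.5, 35.0, 37.5, 40.0, 42.5])   # hypothetical values
        measured_db = np.array([29.1, 31.5, 33.6, 36.0, 38.2, 40.7])      # hypothetical values

        slope, intercept = np.polyfit(theoretical_db, measured_db, 1)
        corr = np.corrcoef(theoretical_db, measured_db)[0, 1]
        print(f"slope = {slope:.3f}, correlation coefficient = {corr:.3f}")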

  12. Evaluation of stroke performance in tennis.

    PubMed

    Vergauwen, L; Spaepen, A J; Lefevre, J; Hespel, P

    1998-08-01

    In the present studies, the Leuven Tennis Performance Test (LTPT), a newly developed procedure to measure stroke performance under match-like conditions in elite tennis players, was evaluated for its value for research purposes. The LTPT is conducted on a regular tennis court. It consists of first and second services, and of returning balls projected by a machine to target zones indicated by a lighted sign. Neutral, defensive, and offensive tactical situations are elicited by appropriately programming the machine. Stroke quality is determined from simultaneous measurements of error rate, ball velocity, and precision of ball placement. A velocity/precision (VP) and a velocity/precision/error (VPE) index are also calculated. The validity and sensitivity of the LTPT were determined by verifying whether LTPT scores reflect minor differences in tennis ranking on the one hand and the effects of fatigue on the other. Compared with lower-ranked players, higher-ranked players made fewer errors (P < 0.05). In addition, their stroke velocity was higher (P < 0.05), and their lateral stroke precision, VP, and VPE scores were better (P < 0.05). Furthermore, fatigue induced by a prolonged tennis load increased (P < 0.05) error rate and decreased (P < 0.05) stroke velocity and the VP and VPE indices. It is concluded that the LTPT is an accurate, reliable, and valid instrument for the evaluation of stroke quality in high-level tennis players. PMID:9710870

  13. 13 CFR 304.4 - Performance evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Performance evaluations. 304.4... ECONOMIC DEVELOPMENT DISTRICTS § 304.4 Performance evaluations. (a) EDA shall evaluate the management... of at least one (1) other District Organization in the performance evaluation on a...

  14. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor... reports must be made using Standard Form 1421, Performance Evaluation (Architect-Engineer) as...

  15. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor... reports must be made using Standard Form 1421, Performance Evaluation (Architect-Engineer) as...

  16. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor... reports must be made using Standard Form 1421, Performance Evaluation (Architect-Engineer) as...

  17. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor... reports must be made using Standard Form 1421, Performance Evaluation (Architect-Engineer) as...

  18. Seismic Performance Evaluation of Concentrically Braced Frames

    NASA Astrophysics Data System (ADS)

    Hsiao, Po-Chien

    Concentrically braced frames (CBFs) are broadly used as lateral-load resisting systems in buildings throughout the US. In high seismic regions, special concentrically braced frames (SCBFs) are used where ductility under seismic loading is necessary. Their large elastic stiffness and strength efficiently sustain the seismic demands during smaller, more frequent earthquakes. During large, infrequent earthquakes, SCBFs exhibit highly nonlinear behavior due to brace buckling and yielding and the inelastic behavior induced by secondary deformation of the framing system. These response modes reduce the system demands relative to an elastic system without supplemental damping. In design, these reduced demands are estimated using a response modification coefficient, commonly termed the R factor. The R factor values are important to the seismic performance of a building. Procedures put forth in FEMA P695 were developed to evaluate R factors through a formalized procedure with the objective of a consistent level of collapse potential for all building types. The primary objective of this research was to evaluate the seismic performance of SCBFs. To achieve this goal, an improved model, including a proposed gusset plate connection model for SCBFs that permits accurate simulation of inelastic deformations of the brace, gusset plate connections, beams, and columns as well as brace fracture, was developed and validated using a large number of experiments. Response history analyses were conducted using the validated model. A series of SCBF buildings of different story heights were designed and evaluated. The FEMA P695 method and an alternate procedure were applied to SCBFs and NCBFs; NCBFs are designed without ductile detailing. The evaluation using the P695 method shows results contrary to the alternate evaluation procedure and to current knowledge, in which short-story SCBF structures are more vulnerable than taller counterparts and NCBFs are more vulnerable than SCBFs.

  19. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Performance evaluation... Architect-Engineer Services 236.604 Performance evaluation. (a) Preparation of performance reports. Use DD Form 2631, Performance Evaluation (Architect-Engineer), instead of SF 1421. (2) Prepare a...

  20. Accurate evaluation of viscoelasticity of radial artery wall during flow-mediated dilation in ultrasound measurement

    NASA Astrophysics Data System (ADS)

    Sakai, Yasumasa; Taki, Hirofumi; Kanai, Hiroshi

    2016-07-01

    In our previous study, the viscoelasticity of the radial artery wall was estimated to diagnose endothelial dysfunction using a high-frequency (22 MHz) ultrasound device. In the present study, we employed a commercial ultrasound device (7.5 MHz) and estimated the viscoelasticity using arterial pressure and diameter, both measured at the same position. In a phantom experiment, the proposed method estimated the elasticity and viscosity of the phantom with errors of 1.8 and 30.3%, respectively. In an in vivo measurement, the transient change in viscoelasticity was measured for three healthy subjects during flow-mediated dilation (FMD). The proposed method revealed softening of the arterial wall originating from the FMD reaction within 100 s after avascularization. These results indicate the high performance of the proposed method in evaluating vascular endothelial function just after avascularization, where the function is difficult to estimate with a conventional FMD measurement.
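    One common way to turn paired pressure and diameter waveforms into an elasticity and a viscosity is a least-squares fit of a Kelvin-Voigt model, pressure ~ E*strain + eta*d(strain)/dt. The sketch below shows that fit on synthetic waveforms; the model choice is an assumption made for illustration, not the estimator used in the paper.

        import numpy as np

        def fit_kelvin_voigt(pressure, diameter, dt, d0):
            """Least-squares fit of pressure ~ E*strain + eta*d(strain)/dt + offset."""
            strain = (diameter - d0) / d0
            strain_rate = np.gradient(strain, dt)
            design = np.column_stack([strain, strain_rate, np.ones_like(strain)])
            (E, eta, offset), *_ = np.linalg.lstsq(design, pressure, rcond=None)
            return E, eta

        # Synthetic one-beat waveforms (placeholders, not measured data)
        dt = 0.002                                              # s
        t = np.arange(0.0, 1.0, dt)
        d0 = 2.4e-3                                             # diastolic diameter (m)
        diameter = d0 * (1.0 + 0.03 * np.sin(2.0 * np.pi * t))
        pressure = 1.07e4 + 2.7e3 * np.sin(2.0 * np.pi * t + 0.1)   # Pa, placeholder waveform
        E, eta = fit_kelvin_voigt(pressure, diameter, dt, d0)
        print(f"elastic term: {E:.3g} Pa, viscous term: {eta:.3g} Pa*s")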

  1. Can Young Children Be More Accurate Predictors of Their Recall Performance?

    ERIC Educational Resources Information Center

    Lipko-Speed, Amanda R.

    2013-01-01

    Preschoolers persistently predict that they will perform better than they actually can perform on a picture recall task. The current investigation sought to explore a condition under which young children might be able to improve their predictive accuracy. Namely, children were asked to predict their recall twice for the same set of items.…

  2. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    Steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the presence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using feature values of color, shape, and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.

  3. Evaluation of the EURO-CORDEX RCMs to accurately simulate the Etesian wind system

    NASA Astrophysics Data System (ADS)

    Dafka, Stella; Xoplaki, Elena; Toreti, Andrea; Zanis, Prodromos; Tyrlis, Evangelos; Luterbacher, Jürg

    2016-04-01

    The Etesians are among the most persistent regional-scale wind systems in the lower troposphere, blowing over the Aegean Sea during the extended summer season. An evaluation of the high-spatial-resolution EURO-CORDEX Regional Climate Models (RCMs) is presented here. The study documents the performance of the individual models in representing the basic spatiotemporal pattern of the Etesian wind system for the period 1989-2004. The analysis focuses on evaluating the abilities of the RCMs to simulate the surface wind over the Aegean Sea and the associated large-scale atmospheric circulation. Mean sea level pressure (SLP), wind speed, and geopotential height at 500 hPa are used. The simulated results are validated against reanalysis datasets (20CR-v2c and ERA20-C) and daily observational measurements (12:00 UTC) from mainland Greece and the Aegean Sea. The analysis highlights the general ability of the RCMs to capture the basic features of the Etesians, but also indicates considerable deficiencies for selected metrics, regions, and subperiods. These deficiencies include the significant underestimation (overestimation) of mean SLP in the northeastern part of the analysis domain in all subperiods (for May and June) when compared to 20CR-v2c (ERA20-C), the significant overestimation of the anomalous ridge over the Balkans and central Europe, and the underestimation of wind speed over the Aegean Sea. Future work will include an assessment of the Etesians for the coming decades using EURO-CORDEX projections under different RCP scenarios and an estimate of the future potential for wind energy production.

  4. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    PubMed

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.

  5. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    PubMed

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262

  6. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    PubMed Central

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262
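    A panel model of the kind these three records describe is usually a single-diode model whose parameters are extracted from the manufacturer's datasheet; once the implicit I-V equation can be solved, MPPT algorithms can be exercised against it. The sketch below solves that equation numerically for placeholder parameters (not values obtained with the authors' extraction method) and locates the maximum power point by a voltage sweep.

        import numpy as np
        from scipy.optimize import brentq

        # Hypothetical single-diode parameters for a 60-cell panel at standard test conditions
        I_ph, I_0 = 8.2, 5e-8        # photocurrent and diode saturation current (A)
        R_s, R_sh = 0.35, 300.0      # series and shunt resistance (ohm)
        a_Vt = 1.3 * 0.0257 * 60     # ideality factor * thermal voltage * cells in series (V)

        def current(V):
            """Solve I = Iph - I0*(exp((V + I*Rs)/a_Vt) - 1) - (V + I*Rs)/Rsh for I."""
            f = lambda I: (I_ph - I_0 * (np.exp((V + I * R_s) / a_Vt) - 1.0)
                           - (V + I * R_s) / R_sh - I)
            return brentq(f, 0.0, 1.2 * I_ph)

        voltages = np.linspace(0.0, 36.0, 400)
        powers = [V * current(V) for V in voltages]
        i_mpp = int(np.argmax(powers))
        print(f"maximum power point: {voltages[i_mpp]:.1f} V, {powers[i_mpp]:.1f} W")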

  7. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations...

  8. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations...

  9. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations...

  10. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Performance evaluation. 236.604 Section 236.604 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM... Architect-Engineer Services 236.604 Performance evaluation. Prepare a separate performance evaluation...

  11. 13 CFR 304.4 - Performance evaluations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Performance evaluations. 304.4... ECONOMIC DEVELOPMENT DISTRICTS § 304.4 Performance evaluations. (a) EDA shall evaluate the management... the District Organization continues to receive Investment Assistance. EDA's evaluation shall...

  12. Can medical students accurately predict their learning? A study comparing perceived and actual performance in neuroanatomy.

    PubMed

    Hall, Samuel R; Stephens, Jonny R; Seaby, Eleanor G; Andrade, Matheus Gesteira; Lowry, Andrew F; Parton, Will J C; Smith, Claire F; Border, Scott

    2016-10-01

    It is important that clinicians are able to adequately assess their level of knowledge and competence in order to be safe practitioners of medicine. The medical literature contains numerous examples of poor self-assessment accuracy amongst medical students over a range of subjects however this ability in neuroanatomy has yet to be observed. Second year medical students attending neuroanatomy revision sessions at the University of Southampton and the competitors of the National Undergraduate Neuroanatomy Competition were asked to rate their level of knowledge in neuroanatomy. The responses from the former group were compared to performance on a ten item multiple choice question examination and the latter group were compared to their performance within the competition. In both cohorts, self-assessments of perceived level of knowledge correlated weakly to their performance in their respective objective knowledge assessments (r = 0.30 and r = 0.44). Within the NUNC, this correlation improved when students were instead asked to rate their performance on a specific examination within the competition (spotter, rS = 0.68; MCQ, rS = 0.58). Despite its inherent difficulty, medical student self-assessment accuracy in neuroanatomy is comparable to other subjects within the medical curriculum. Anat Sci Educ 9: 488-495. © 2016 American Association of Anatomists.

  13. Teacher Performance Pay Signals and Student Achievement: Are Signals Accurate, and How well Do They Work?

    ERIC Educational Resources Information Center

    Manzeske, David; Garland, Marshall; Williams, Ryan; West, Benjamin; Kistner, Alexandra Manzella; Rapaport, Amie

    2016-01-01

    High-performing teachers tend to seek out positions at more affluent or academically challenging schools, which tend to hire more experienced, effective educators. Consequently, low-income and minority students are more likely to attend schools with less experienced and less effective educators (see, for example, DeMonte & Hanna, 2014; Office…

  14. Evaluating GC/MS Performance

    SciTech Connect

    Alcaraz, A; Dougan, A

    2006-11-26

    and Water Check': By selecting View - Diagnostics/Vacuum Control - Vacuum - Air and Water Check. A Yes/No dialogue box will appear; select No (use current values). It is very important to select No! Otherwise the tune values are drastically altered. The software program will generate a water/air report similar to figure 3. Evaluating the GC/MS system with a performance standard: This procedure should allow the analyst to verify that the chromatographic column and associated components are working adequately to separate the various classes of chemical compounds (e.g., hydrocarbons, alcohols, fatty acids, aromatics, etc.). Use the same GC/MS conditions used to collect the system background and solvent check (part 1 of this document). Figure 5 is an example of a commercial GC/MS column test mixture used to evaluate GC/MS prior to analysis.

  15. Evaluation of automated threshold selection methods for accurately sizing microscopic fluorescent cells by image analysis.

    PubMed Central

    Sieracki, M E; Reichenbach, S E; Webb, K L

    1989-01-01

    The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and
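    Read literally, the best-performing criterion above takes the segmentation threshold at the minimum of the second derivative of a cell or sphere edge profile. The toy sketch below applies that criterion to a synthetic edge; the paper's actual profile extraction, averaging, and validation against known sphere sizes are not reproduced, and where the extremum falls depends on the profile shape.

        import numpy as np

        def threshold_from_profile(intensity_profile):
            """Intensity value at the minimum of the profile's second derivative."""
            second_deriv = np.gradient(np.gradient(intensity_profile))
            return intensity_profile[int(np.argmin(second_deriv))]

        x = np.linspace(-10.0, 10.0, 201)
        profile = 200.0 / (1.0 + np.exp(-x))   # synthetic edge: dark background to bright object
        print("threshold intensity:", threshold_from_profile(profile))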

  16. How To Evaluate Teacher Performance.

    ERIC Educational Resources Information Center

    Wilson, Laval S.

    Teacher evaluations tend to be like clothes. Whatever is in vogue at the time is utilized extensively by those who are attempting to remain modern and current. If you stay around long enough, the "hot" methods of today will probably recycle to be the new discovery of the future. In the end, each school district develops an evaluation process that…

  17. Method for evaluating performance of clinical pharmacists.

    PubMed

    Schumock, G T; Leister, K A; Edwards, D; Wareham, P S; Burkhart, V D

    1990-01-01

    A performance-evaluation process that satisfies Joint Commission on Accreditation of Healthcare Organizations criteria and state policies is described. A three-part, criteria-based, weighted performance-evaluation tool specific for clinical pharmacists was designed for use in two institutions affiliated with the University of Washington. The three parts are self-appraisal and goal setting, peer evaluation, and supervisory evaluation. Objective criteria within each section were weighted to reflect the relative importance of that characteristic to the job that the clinical pharmacist performs. The performance score for each criterion is multiplied by the weighted value to produce an outcome score. The peer evaluation and self-appraisal/goal-setting parts of the evaluation are completed before the formal performance-evaluation interview. The supervisory evaluation is completed during the interview. For this evaluation, supervisors use both the standard university employee performance evaluation form and a set of specific criteria applicable to the clinical pharmacists in these institutions. The first performance evaluations done under this new system were conducted in May 1989. Pharmacists believed that the new system was more objective and allowed more interchange between the manager and the pharmacist. The peer-evaluation part of the system was seen as extremely constructive. This three-part, criteria-based system for evaluation of the job performance of clinical pharmacists could easily be adopted by other pharmacy departments.
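    The weighting arithmetic described above (each criterion's performance score multiplied by its weight, then summed into an outcome score) is simple enough to show directly. The criteria and weights below are illustrative placeholders, not the tool's actual content.

        criteria = {
            # criterion: (performance score on a 1-5 scale, weight)
            "clinical interventions":   (4, 0.30),
            "drug information support": (5, 0.20),
            "teaching and precepting":  (3, 0.25),
            "documentation":            (4, 0.25),
        }

        outcome_score = sum(score * weight for score, weight in criteria.values())
        print(f"weighted outcome score: {outcome_score:.2f} / 5.00")   # 3.95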

  18. Short-term retention of relational memory in amnesia revisited: accurate performance depends on hippocampal integrity.

    PubMed

    Yee, Lydia T S; Hannula, Deborah E; Tranel, Daniel; Cohen, Neal J

    2014-01-01

    Traditionally, it has been proposed that the hippocampus and adjacent medial temporal lobe cortical structures are selectively critical for long-term declarative memory, which entails memory for inter-item and item-context relationships. Whether the hippocampus might also contribute to short-term retention of relational memory representations has remained controversial. In two experiments, we revisit this question by testing memory for relationships among items embedded in scenes using a standard working memory trial structure in which a sample stimulus is followed by a brief delay and the corresponding test stimulus. In each experimental block, eight trials using different exemplars of the same scene were presented. The exemplars contained the same items but with different spatial relationships among them. By repeating the pictures across trials, any potential contributions of item or scene memory to performance were minimized, and relational memory could be assessed more directly than has been done previously. When test displays were presented, participants indicated whether any of the item-location relationships had changed. Then, regardless of their responses (and whether any item did change its location), participants indicated on a forced-choice test, which item might have moved, guessing if necessary. Amnesic patients were impaired on the change detection test, and were frequently unable to specify the change after having reported correctly that a change had taken place. Comparison participants, by contrast, frequently identified the change even when they failed to report the mismatch, an outcome that speaks to the sensitivity of the change specification measure. These results confirm past reports of hippocampal contributions to short-term retention of relational memory representations, and suggest that the role of the hippocampus in memory has more to do with relational memory requirements than the length of a retention interval.

  19. Short-term retention of relational memory in amnesia revisited: accurate performance depends on hippocampal integrity

    PubMed Central

    Yee, Lydia T. S.; Hannula, Deborah E.; Tranel, Daniel; Cohen, Neal J.

    2014-01-01

    Traditionally, it has been proposed that the hippocampus and adjacent medial temporal lobe cortical structures are selectively critical for long-term declarative memory, which entails memory for inter-item and item-context relationships. Whether the hippocampus might also contribute to short-term retention of relational memory representations has remained controversial. In two experiments, we revisit this question by testing memory for relationships among items embedded in scenes using a standard working memory trial structure in which a sample stimulus is followed by a brief delay and the corresponding test stimulus. In each experimental block, eight trials using different exemplars of the same scene were presented. The exemplars contained the same items but with different spatial relationships among them. By repeating the pictures across trials, any potential contributions of item or scene memory to performance were minimized, and relational memory could be assessed more directly than has been done previously. When test displays were presented, participants indicated whether any of the item-location relationships had changed. Then, regardless of their responses (and whether any item did change its location), participants indicated on a forced-choice test, which item might have moved, guessing if necessary. Amnesic patients were impaired on the change detection test, and were frequently unable to specify the change after having reported correctly that a change had taken place. Comparison participants, by contrast, frequently identified the change even when they failed to report the mismatch, an outcome that speaks to the sensitivity of the change specification measure. These results confirm past reports of hippocampal contributions to short-term retention of relational memory representations, and suggest that the role of the hippocampus in memory has more to do with relational memory requirements than the length of a retention interval. PMID:24478681

  20. Estimation method of point spread function based on Kalman filter for accurately evaluating real optical properties of photonic crystal fibers.

    PubMed

    Shen, Yan; Lou, Shuqin; Wang, Xin

    2014-03-20

    The evaluation accuracy of real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of point spread function (PSF) based on Kalman filter is presented to rebuild the micrograph image of the PCF cross-section and thus evaluate real optical properties for practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we prove that the proposed method can achieve more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduce the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters.
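    A scalar Kalman filter of the general kind invoked above can be written in a few lines: it tracks a slowly varying parameter (here, a hypothetical PSF width) from noisy per-image estimates. This is a generic textbook filter for orientation only, not the paper's PSF estimator or its comparison against Bayesian estimation.

        import numpy as np

        def kalman_1d(measurements, process_var=1e-4, meas_var=0.05, x0=1.0, p0=1.0):
            """Filtered estimates of a slowly varying scalar state (random-walk model)."""
            x, p = x0, p0
            estimates = []
            for z in measurements:
                p = p + process_var          # predict
                k = p / (p + meas_var)       # Kalman gain
                x = x + k * (z - x)          # update with measurement z
                p = (1.0 - k) * p
                estimates.append(x)
            return np.array(estimates)

        rng = np.random.default_rng(3)
        true_width = 1.8                                    # hypothetical PSF width (pixels)
        noisy = true_width + rng.normal(0.0, 0.2, 50)       # noisy per-image width estimates
        print(kalman_1d(noisy)[-5:])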

  1. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to a Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed.
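
    A minimal sketch of a Weibull-type saccharification fit in Python, assuming a three-parameter form y(t) = y_max·(1 − exp(−(t/λ)^n)); the time-course data below are invented, and the exact parameterization used in the paper may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_yield(t, y_max, lam, n):
    """Weibull-type saccharification curve: cumulative yield versus time."""
    return y_max * (1.0 - np.exp(-(t / lam) ** n))

# Hypothetical time course: hydrolysis time (h) vs. glucose yield (%).
t = np.array([2, 4, 8, 12, 24, 48, 72], dtype=float)
y = np.array([8, 15, 27, 36, 55, 70, 76], dtype=float)

(p_ymax, p_lam, p_n), _ = curve_fit(weibull_yield, t, y, p0=[80, 24, 1.0])
print(f"y_max={p_ymax:.1f}%  lambda={p_lam:.1f} h  n={p_n:.2f}")
# lambda (the characteristic time) summarizes overall saccharification performance:
# a smaller lambda means the system reaches a given fraction of y_max sooner.
```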

  2. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to a Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed. PMID:26121186

  3. Strapdown system performance optimization test evaluations (SPOT), volume 1

    NASA Technical Reports Server (NTRS)

    Blaha, R. J.; Gilmore, J. P.

    1973-01-01

    A three-axis inertial system was packaged in an Apollo gimbal fixture for fine-grain evaluation of strapdown system performance in dynamic environments. These evaluations have provided information to assess the effectiveness of real-time compensation techniques and to study system performance tradeoffs with respect to factors such as quantization and iteration rate. The strapdown performance and tradeoff studies conducted include: (1) Compensation models and techniques for the inertial instrument first-order error terms were developed, and compensation effectiveness was demonstrated in four basic environments: single- and multi-axis slew, and single- and multi-axis oscillatory. (2) The theoretical coning bandwidth for the first-order quaternion algorithm expansion was verified. (3) Gyro loop quantization was found to proportionally affect the system attitude uncertainty. (4) Land navigation evaluations identified the requirement for accurate initial alignment in order to pursue fine-grain navigation evaluations.
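
    The first-order quaternion expansion referred to in item (2) can be illustrated with a short attitude-propagation sketch in Python. This is not the report's implementation; the 100 Hz iteration rate and the constant-rate test input are invented for illustration.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def propagate_attitude(omega_body, dt, q0=np.array([1.0, 0.0, 0.0, 0.0])):
    """First-order strapdown attitude update from sampled body rates (rad/s)."""
    q = q0.copy()
    for w in omega_body:
        dtheta = np.asarray(w) * dt                  # incremental rotation vector
        dq = np.concatenate(([1.0], 0.5 * dtheta))   # first-order quaternion expansion
        q = quat_mult(q, dq)
        q /= np.linalg.norm(q)                       # renormalize to unit length
    return q

# Constant 10 deg/s roll for 1 s at a 100 Hz iteration rate.
rates = np.tile([np.deg2rad(10.0), 0.0, 0.0], (100, 1))
print(propagate_attitude(rates, 0.01))   # ~10 deg rotation about the x axis
```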

  4. Toward More Performance Evaluation in Chemistry

    NASA Astrophysics Data System (ADS)

    Rasp, Sharon L.

    1998-01-01

    The history of the author's experiences in testing and changes in her evaluation philosophy are chronicled. Tests in her classroom have moved from solely paper-and-pencil, multiple-choice/objective formats to also include lab performance evaluations. Examples of performance evaluations in both a traditional chemistry course and a consumer-level chemistry course are given. Analysis of students' test results indicates the need to continue to include a variety of methods in evaluating student performance in science.

  5. EEMD based pitch evaluation method for accurate grating measurement by AFM

    NASA Astrophysics Data System (ADS)

    Li, Changsheng; Yang, Shuming; Wang, Chenying; Jiang, Zhuangde

    2016-09-01

    The pitch measurement and AFM calibration precision are significantly influenced by the grating pitch evaluation method. This paper presents an ensemble empirical mode decomposition (EEMD) based pitch evaluation method to relieve the accuracy deterioration caused by high and low frequency components of the scanning profile during pitch evaluation. The simulation analysis shows that the application of EEMD can improve the pitch accuracy of the FFT-FT algorithm. The pitch error was small when the iteration number of the FFT-FT algorithm was 8. The AFM measurement of a 500 nm-pitch one-dimensional grating shows that the EEMD-based pitch evaluation method could improve the pitch precision, especially the grating line position precision, and greatly expand the applicability of the gravity center algorithm when particles and impression marks were distributed on the sample surface. The measurement indicates that the nonlinearity was stable, and that the nonlinearity of the x axis and of forward scanning was much smaller than that of their counterparts. Finally, a detailed pitch measurement uncertainty evaluation model suitable for commercial AFMs was demonstrated and a pitch uncertainty in the sub-nanometer range was achieved. The pitch uncertainty was reduced by about 10% by EEMD.
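
    As an illustration of the kind of pipeline the abstract describes, the sketch below decomposes a simulated AFM line profile with EEMD, discards the highest-frequency (noise/particle) and lowest-frequency (bow/drift) components, and estimates the pitch from the dominant FFT peak. It assumes the third-party PyEMD ("EMD-signal") package is available; the synthetic profile, the drop_high/drop_low choices and the plain FFT-peak estimate (standing in for the paper's FFT-FT and gravity-center algorithms) are illustrative assumptions.

```python
import numpy as np
from PyEMD import EEMD   # third-party "EMD-signal" package (assumed available)

def estimate_pitch(profile, dx, drop_high=1, drop_low=1):
    """Estimate grating pitch from an AFM line profile.

    profile : height samples along the fast-scan axis
    dx      : sampling step (same length unit as the returned pitch)
    The highest-frequency IMF(s) (particles, noise) and the lowest-frequency
    component (bow/drift) are discarded before the FFT-based pitch estimate.
    """
    imfs = EEMD().eemd(profile)                   # rows: IMFs, high -> low frequency
    kept = imfs[drop_high:len(imfs) - drop_low]   # remove the extreme components
    clean = kept.sum(axis=0)

    spectrum = np.abs(np.fft.rfft(clean - clean.mean()))
    freqs = np.fft.rfftfreq(clean.size, d=dx)
    k = np.argmax(spectrum[1:]) + 1               # dominant spatial frequency (skip DC)
    return 1.0 / freqs[k]                         # pitch = 1 / spatial frequency

# Synthetic 500 nm-pitch grating sampled every 10 nm with noise and tilt.
x = np.arange(0, 20000, 10.0)
profile = np.sin(2 * np.pi * x / 500.0) + 0.001 * x + 0.2 * np.random.randn(x.size)
print(f"estimated pitch: {estimate_pitch(profile, 10.0):.1f} nm")
```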

  6. INTEGRATED WATER TREATMENT SYSTEM PERFORMANCE EVALUATION

    SciTech Connect

    SEXTON RA; MEEUWSEN WE

    2009-03-12

    This document describes the results of an evaluation of the current Integrated Water Treatment System (IWTS) operation against design performance and a determination of short term and long term actions recommended to sustain IWTS performance.

  7. Fast and accurate simulations of diffusion-weighted MRI signals for the evaluation of acquisition sequences

    NASA Astrophysics Data System (ADS)

    Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime

    2016-03-01

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders and Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly-encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities, such as Cube and Sphere (CUSP) imaging, provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.

  8. Variable impedance cardiography waveforms: how to evaluate the preejection period more accurately

    NASA Astrophysics Data System (ADS)

    Ermishkin, V. V.; Kolesnikov, V. A.; Lukoshkova, E. V.; Mokh, V. P.; Sonina, R. S.; Dupik, N. V.; Boitsov, S. A.

    2012-12-01

    The impedance method has been successfully applied for left ventricular function assessment during functional tests. The preejection period (PEP), the interval between the Q peak in the ECG and a specific mark on the impedance cardiogram (ICG) which corresponds to aortic valve opening, is an important indicator of the contractility state and its neurogenic control. Accurate identification of ejection onset by ICG is often problematic, especially in cardiologic patients, due to peculiar waveforms. An essential obstacle is the variability of the shape of the ICG waveform during exercise and subsequent recovery. A promising solution can be the introduction of an additional pulse sensor placed in a nearby region. We tested this idea in 28 healthy subjects and 6 cardiologic patients using a dual-channel impedance cardiograph for simultaneous recording from the aortic and neck regions, and an earlobe photoplethysmograph. Our findings suggest that the incidence of abnormal, complicated ICG waveforms increases with age. The combination of standard ICG with ear photoplethysmography and/or an additional impedance channel significantly improves the efficacy and accuracy of PEP estimation.

  9. A new performance evaluation tool

    SciTech Connect

    Kindl, F.H.

    1996-12-31

    The paper describes a Steam Cycle Diagnostic Program (SCDP), that has been specifically designed to respond to the increasing need of electric power generators for periodic performance monitoring, and quick identification of the causes for any observed increase in fuel consumption. There is a description of program objectives, modeling and test data inputs, results, underlying program logic, validation of program accuracy by comparison with acceptance test quality data, and examples of program usage.

  10. Is internal target volume accurate for dose evaluation in lung cancer stereotactic body radiotherapy?

    PubMed Central

    Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Hu, Weigang

    2016-01-01

    Purpose The 4DCT-delineated internal target volume (ITV) is used to capture tumor motion and serves as the planning target in lung cancer stereotactic body radiotherapy (SBRT) treatment planning. This work studies the accuracy of using the ITV to predict the real target dose in lung cancer SBRT. Materials and methods For both phantom and patient cases, the ITV and gross tumor volumes (GTVs) were contoured on the maximum intensity projection (MIP) CT and the ten CT phases, respectively. A SBRT plan was designed using the ITV as the planning target on the average projection (AVG) CT. This plan was copied to each CT phase and the dose distribution was recalculated. The GTV_4D dose was acquired by accumulating the GTV doses over all ten phases and regarded as the real target dose. To analyze the ITV dose error, the ITV dose was compared to the real target dose using the endpoints D99, D95 and D1 (doses received by 99%, 95% and 1% of the target volume) and the dose coverage endpoint V100 (relative volume receiving at least the prescription dose). Results The phantom study shows that the ITV underestimates the real target dose by 9.47%∼19.8% in D99, 4.43%∼15.99% in D95, and underestimates the dose coverage by 5% in V100. The patient cases show that the ITV underestimates the real target dose and dose coverage by 3.8%∼10.7% in D99, 4.7%∼7.2% in D95, and 3.96%∼6.59% in V100 for moving-target cases. Conclusions Caution should be taken that the ITV is not accurate enough to predict the real target dose in lung cancer SBRT with large tumor motions. Restricting the target motion or reducing the target dose heterogeneity could reduce the ITV dose underestimation effect in lung SBRT. PMID:26968812
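
    The dose endpoints used above have a direct numerical interpretation; a minimal Python sketch, using invented voxel-dose arrays rather than the study's plans, computes D99, D95, D1 and V100 from per-voxel doses.

```python
import numpy as np

def dvh_endpoints(dose, prescription):
    """Dose-volume endpoints for a target dose array (one value per voxel).

    D99/D95/D1 : minimum dose received by the hottest 99%/95%/1% of the volume
    V100       : fraction of the volume receiving at least the prescription dose
    """
    d = np.asarray(dose, dtype=float)
    d99, d95, d1 = np.percentile(d, [1, 5, 99])   # Dx = (100 - x)th percentile
    v100 = np.mean(d >= prescription) * 100.0
    return {"D99": d99, "D95": d95, "D1": d1, "V100_%": v100}

# Hypothetical GTV and ITV voxel doses (Gy) for a 48 Gy prescription.
rng = np.random.default_rng(1)
gtv_dose = rng.normal(loc=50.0, scale=1.5, size=5000)
itv_dose = rng.normal(loc=49.0, scale=2.0, size=8000)

gtv, itv = dvh_endpoints(gtv_dose, 48.0), dvh_endpoints(itv_dose, 48.0)
for key in gtv:
    print(f"{key}: ITV {itv[key]:.1f} vs GTV {gtv[key]:.1f}")
```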

  11. S-191 sensor performance evaluation

    NASA Technical Reports Server (NTRS)

    Hughes, C. L.

    1975-01-01

    A final analysis was performed on the Skylab S-191 spectrometer data received from missions SL-2, SL-3, and SL-4. The repeatability and accuracy of the S-191 spectroradiometric internal calibration was determined by correlation to the output obtained from well-defined external targets. These included targets on the moon and earth as well as deep space. In addition, the accuracy of the S-191 short wavelength autocalibration was flight checked by correlation of the earth resources experimental package S-191 outputs and the Backup Unit S-191 outputs after viewing selected targets on the moon.

  12. Development and evaluation of polycrystalline cadmium telluride dosimeters for accurate quality assurance in radiation therapy

    NASA Astrophysics Data System (ADS)

    Oh, K.; Han, M.; Kim, K.; Heo, Y.; Moon, C.; Park, S.; Nam, S.

    2016-02-01

    For quality assurance in radiation therapy, several types of dosimeters are used, such as ionization chambers, radiographic films, thermo-luminescent dosimeters (TLDs), and semiconductor dosimeters. Among them, semiconductor dosimeters are particularly useful as in vivo dosimeters or in high dose gradient areas such as the penumbra region because they are more sensitive and smaller in size compared to typical dosimeters. In this study, we developed and evaluated cadmium telluride (CdTe) dosimeters, one of the most promising semiconductor dosimeters due to their high quantum efficiency and charge collection efficiency. Such CdTe dosimeters include single crystal and polycrystalline forms depending upon the fabrication process. Both types of CdTe dosimeters are commercially available, but only the polycrystalline form is suitable for radiation dosimeters, since it is less affected by volumetric effects and energy dependence. To develop and evaluate polycrystalline CdTe dosimeters, polycrystalline CdTe films were prepared by thermal evaporation. After that, a CdTeO3 layer, a thin oxide layer, was deposited on top of the CdTe film by RF sputtering to improve charge carrier transport properties and to reduce leakage current. The CdTeO3 layer, which acts as a passivation layer, also helps the dosimeter reduce sensitivity changes with repeated use due to radiation damage. Finally, In/Ti and Pt top and bottom electrodes were used to form a Schottky contact. Subsequently, the electrical properties under high energy photon beams from a linear accelerator (LINAC), such as response coincidence, dose linearity, dose rate dependence, reproducibility, and percentage depth dose, were measured to evaluate the polycrystalline CdTe dosimeters. In addition, we compared the experimental data of the dosimeter fabricated in this study with those of a silicon diode dosimeter and a thimble ionization chamber, which are widely used in routine dosimetry systems and dose measurements for radiation

  13. Evaluating Economic Performance and Policies: A Comment.

    ERIC Educational Resources Information Center

    Schur, Leon M.

    1987-01-01

    Offers a critique of Thurow's paper on the evaluation of economic performance (see SO516719). Concludes that the alternative offered by Thurow is inadequate, and states that the standards developed by the "Framework" are adequate for evaluating economic performance and policies. (JDH)

  14. Colorimetric evaluation of display performance

    NASA Astrophysics Data System (ADS)

    Kosmowski, Bogdan B.

    2001-08-01

    The development of information techniques, using new technologies, physical phenomena and coding schemes, enables new application areas to benefit from the introduction of displays. The full utilization of the visual perception of a human operator requires a color coding process to be implemented. The evolution of displays, from achromatic (B&W) and monochromatic to multicolor and full-color, enhances the possibilities of information coding, creating however a need for quantitative methods of display parameter assessment. Quantitative assessment of color displays restricted to photometric measurements of their parameters is an estimate leading to considerable errors. Therefore, the measurements of a display's color properties have to be based on spectral measurements of the display and its elements. The quantitative assessment of the display system parameters should be made using colorimetric systems like CIE1931, CIE1976 LAB or LUV. In the paper, the constraints on measurement method selection for color display evaluation are discussed and the relations between their qualitative assessment and the ergonomic conditions of their application are also presented. The paper presents examples of using the LUV colorimetric system and the color difference (Delta)E in the optimization of color liquid crystal displays.
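
    As an illustration of the colorimetric quantities mentioned above, the sketch below converts CIE XYZ measurements to CIE 1976 L*u*v* and computes the color difference ΔE*uv. The D65 reference white and the example tristimulus values are assumptions for illustration, not data from the paper.

```python
import numpy as np

# Reference white (CIE D65, 2-degree observer), used to normalize measurements.
XN, YN, ZN = 95.047, 100.0, 108.883

def xyz_to_luv(xyz):
    """Convert CIE XYZ tristimulus values to CIE 1976 L*u*v*."""
    x, y, z = xyz
    denom = x + 15.0 * y + 3.0 * z
    u_p, v_p = 4.0 * x / denom, 9.0 * y / denom
    denom_n = XN + 15.0 * YN + 3.0 * ZN
    un_p, vn_p = 4.0 * XN / denom_n, 9.0 * YN / denom_n

    yr = y / YN
    L = 116.0 * yr ** (1.0 / 3.0) - 16.0 if yr > (6.0 / 29.0) ** 3 else (29.0 / 3.0) ** 3 * yr
    return np.array([L, 13.0 * L * (u_p - un_p), 13.0 * L * (v_p - vn_p)])

def delta_e_luv(xyz_a, xyz_b):
    """CIE 1976 color difference Delta E*uv between two measured patches."""
    return float(np.linalg.norm(xyz_to_luv(xyz_a) - xyz_to_luv(xyz_b)))

# Two spectroradiometric measurements of nominally identical display patches.
print(delta_e_luv((41.2, 21.3, 1.9), (40.5, 21.0, 2.1)))
```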

  15. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration

    NASA Astrophysics Data System (ADS)

    Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu’s method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms.
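
    A minimal Python sketch of the two ingredients named above: multi-Otsu quantization of a CT image into a small number of soft-tissue gray levels, and the per-voxel deformation error (magnitude of the vector difference between known and predicted displacement fields). It assumes scikit-image for the multi-Otsu thresholds; the random image and displacement fields are placeholders, not the study's data.

```python
import numpy as np
from skimage.filters import threshold_multiotsu   # scikit-image (assumed available)

def quantize_ct(ct_image, n_levels=3):
    """Collapse a CT image into n gray levels using multi-Otsu thresholds."""
    thresholds = threshold_multiotsu(ct_image, classes=n_levels)
    return np.digitize(ct_image, bins=thresholds)

def deformation_error(known_dvf, predicted_dvf):
    """Per-voxel magnitude of the vector difference between two deformation fields.

    Both fields are arrays of shape (..., 3) holding displacement vectors in mm.
    """
    return np.linalg.norm(predicted_dvf - known_dvf, axis=-1)

rng = np.random.default_rng(2)

# Hypothetical CT slice quantized into three soft-tissue levels.
ct = rng.normal(loc=0.0, scale=300.0, size=(128, 128))
print("gray levels used:", np.unique(quantize_ct(ct, n_levels=3)))

# Hypothetical known vs. predicted 3-D displacement fields on a 40x40x40 grid.
known = rng.normal(size=(40, 40, 40, 3))
predicted = known + rng.normal(scale=0.5, size=known.shape)
print(f"mean deformation error: {deformation_error(known, predicted).mean():.2f} mm")
```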

  16. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration.

    PubMed

    Saenz, Daniel L; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms. PMID:27494827

  17. Theory and Practice on Teacher Performance Evaluation

    ERIC Educational Resources Information Center

    Yonghong, Cai; Chongde, Lin

    2006-01-01

    Teacher performance evaluation plays a key role in educational personnel reform, so it has been an important yet difficult issue in educational reform. Previous evaluations on teachers failed to make strict distinction among the three dominant types of evaluation, namely, capability, achievement, and effectiveness. Moreover, teacher performance…

  18. Evaluation of a low-cost and accurate ocean temperature logger on subsurface mooring systems

    SciTech Connect

    Tian, Chuan; Deng, Zhiqun; Lu, Jun; Xu, Xiaoyang; Zhao, Wei; Xu, Ming

    2014-06-23

    Monitoring seawater temperature is important to understanding evolving ocean processes. To monitor internal waves or ocean mixing, a large number of temperature loggers are typically mounted on subsurface mooring systems to obtain high-resolution temperature data at different water depths. In this study, we redesigned and evaluated a compact, low-cost, self-contained, high-resolution and high-accuracy ocean temperature logger, TC-1121. The newly designed TC-1121 loggers are smaller, more robust, and their sampling intervals can be automatically changed by indicated events. They have been widely used in many mooring systems to study internal wave and ocean mixing. The logger’s fundamental design, noise analysis, calibration, drift test, and a long-term sea trial are discussed in this paper.

  19. Evaluating survival model performance: a graphical approach.

    PubMed

    Mandel, M; Galai, N; Simchen, E

    2005-06-30

    In the last decade, many statistics have been suggested to evaluate the performance of survival models. These statistics evaluate the overall performance of a model ignoring possible variability in performance over time. Using an extension of measures used in binary regression, we propose a graphical method to depict the performance of a survival model over time. The method provides estimates of performance at specific time points and can be used as an informal test for detecting time varying effects of covariates in the Cox model framework. The method is illustrated on real and simulated data using Cox proportional hazard model and rank statistics.
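
    One simple way to depict survival-model performance over time, in the spirit of the graphical approach described above (though not the authors' statistic), is a crude cumulative/dynamic AUC curve: at each time point, compute how well the model's risk score separates subjects who have already had the event from those still event-free. The cohort below is simulated for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_over_time(time, event, risk_score, eval_times):
    """Crude time-dependent discrimination curve for a survival model.

    At each evaluation time t, cases are subjects with an observed event by t,
    controls are subjects still under follow-up at t; subjects censored before t
    are dropped. Returns one AUC per evaluation time.
    """
    time, event, risk_score = map(np.asarray, (time, event, risk_score))
    aucs = []
    for t in eval_times:
        case = (time <= t) & (event == 1)
        control = time > t
        keep = case | control
        if case.any() and control.any():
            aucs.append(roc_auc_score(case[keep], risk_score[keep]))
        else:
            aucs.append(np.nan)
    return np.array(aucs)

# Hypothetical cohort: exponential event times whose rate rises with the score.
rng = np.random.default_rng(3)
score = rng.normal(size=300)
t_event = rng.exponential(scale=np.exp(-score))
t_cens = rng.exponential(scale=2.0, size=300)
time, event = np.minimum(t_event, t_cens), (t_event <= t_cens).astype(int)
print(auc_over_time(time, event, score, eval_times=[0.5, 1.0, 2.0]).round(2))
```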

  20. Neither Fair nor Accurate: Research-Based Reasons Why High-Stakes Tests Should Not Be Used to Evaluate Teachers

    ERIC Educational Resources Information Center

    Au, Wayne

    2011-01-01

    Current and former leaders of many major urban school districts, including Washington, D.C.'s Michelle Rhee and New Orleans' Paul Vallas, have sought to use tests to evaluate teachers. In fact, the use of high-stakes standardized tests to evaluate teacher performance in the manner of value-added measurement (VAM) has become one of the cornerstones…

  1. Conductor gestures influence evaluations of ensemble performance

    PubMed Central

    Morrison, Steven J.; Price, Harry E.; Smedley, Eric M.; Meals, Cory D.

    2014-01-01

    Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor’s gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance: articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and non-majors (N = 285) viewed sixteen 30 s performances and evaluated the quality of the ensemble’s articulation, dynamics, technique, and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble’s performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity. PMID:25104944

  2. Conductor gestures influence evaluations of ensemble performance.

    PubMed

    Morrison, Steven J; Price, Harry E; Smedley, Eric M; Meals, Cory D

    2014-01-01

    Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor's gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance: articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and non-majors (N = 285) viewed sixteen 30 s performances and evaluated the quality of the ensemble's articulation, dynamics, technique, and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble's performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity. PMID:25104944

  3. How Accurately Can Older Adults Evaluate the Quality of Their Text Recall? The Effect of Providing Standards on Judgment Accuracy.

    PubMed

    Baker, Julie; Dunlosky, John; Hertzog, Christopher

    2009-01-01

    Adults have difficulties accurately judging how well they have learned text materials; unfortunately, such low levels of accuracy may obscure age-related deficits. Higher levels of accuracy have been obtained when younger adults make postdictions about which test questions they answered correctly. Accordingly, we focus on the accuracy of postdictive judgments to evaluate whether age deficits would emerge with higher levels of accuracy and whether people's postdictive accuracy would benefit from providing an appropriate standard of evaluation. Participants read texts with definitions embedded in them, attempted to recall each definition, and then made a postdictive judgment about the quality of their recall. When making these judgments, participants either received no standard or were presented the correct definition as a standard for evaluation. Age-related equivalence was found in the relative accuracy of these term-specific judgments, and older adults' absolute accuracy benefited from providing standards to the same degree as did younger adults.

  4. Performance Evaluation of Undulator Radiation at CEBAF

    SciTech Connect

    Liu, Chuyu; Krafft, Geoffrey; Wang, Guimei

    2010-05-01

    The performance of undulator radiation (UR) at CEBAF with a 3.5 m helical undulator is evaluated and compared with APS undulator-A radiation in terms of brilliance, peak brilliance, spectral flux, flux density and intensity distribution.

  5. ATAMM enhancement and multiprocessor performance evaluation

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamoy; Obando, Rodrigo; Malekpour, Mahyar R.; Jones, Robert L., III; Mandala, Brij Mohan V.

    1991-01-01

    ATAMM (Algorithm To Architecture Mapping Model) enhancement and multiprocessor performance evaluation is discussed. The following topics are included: the ATAMM model; ATAMM enhancement; ADM (Advanced Development Model) implementation of ATAMM; and ATAMM support tools.

  6. Improvement of Automotive Part Supplier Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Kongmunee, Chalermkwan; Chutima, Parames

    2016-05-01

    This research investigates the problem of part supplier performance evaluation in a major Japanese automotive plant in Thailand. Its current evaluation scheme is based on the experience and personal opinions of the evaluators. As a result, many poorly performing suppliers are still considered good suppliers and are allowed to supply parts to the plant without any further improvement obligation. To alleviate this problem, brainstorming sessions among stakeholders and evaluators were formally conducted, resulting in an appropriate set of evaluation criteria and sub-criteria. The analytical hierarchy process is also used to find suitable weights for each criterion and sub-criterion. The results show that the newly developed evaluation method is significantly better than the previous one at separating good from poor suppliers.

  7. Evaluation of high-performance computing software

    SciTech Connect

    Browne, S.; Dongarra, J.; Rowan, T.

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  8. Performance-Based Evaluation and School Librarians

    ERIC Educational Resources Information Center

    Church, Audrey P.

    2015-01-01

    Evaluation of instructional personnel is standard procedure in our Pre-K-12 public schools, and its purpose is to document educator effectiveness. With Race to the Top and No Child Left Behind waivers, states are required to implement performance-based evaluations that demonstrate student academic progress. This three-year study describes the…

  9. Building Leadership Talent through Performance Evaluation

    ERIC Educational Resources Information Center

    Clifford, Matthew

    2015-01-01

    Most states and districts scramble to provide professional development to support principals, but "principal evaluation" is often lost amid competing priorities. Evaluation is an important method for supporting principal growth, communicating performance expectations to principals, and improving leadership practice. It provides leaders…

  10. Reference Service Standards, Performance Criteria, and Evaluation.

    ERIC Educational Resources Information Center

    Schwartz, Diane G.; Eakin, Dottie

    1986-01-01

    Describes process by which reference service standards were developed at a university medical library and their impact on the evaluation of work of librarians. Highlights include establishment of preliminary criteria, literature review, reference service standards, performance evaluation, peer review, and staff development. Checklist of reference…

  11. Assessment beyond Performance: Phenomenography in Educational Evaluation

    ERIC Educational Resources Information Center

    Micari, Marina; Light, Gregory; Calkins, Susanna; Streitwieser, Bernhard

    2007-01-01

    Increasing calls for accountability in education have promoted improvements in quantitative evaluation approaches that measure student performance; however, this has often been to the detriment of qualitative approaches, reducing the richness of educational evaluation as an enterprise. In this article the authors assert that it is not merely…

  12. Evaluating Economic Performance and Policies: A Comment.

    ERIC Educational Resources Information Center

    Walstad, William B.

    1987-01-01

    Critiques Thurow's paper on the evaluation of economic performance (see SO516719). Concludes that the Joint Council's "Framework" offers a solid foundation for teaching about economic performance if the Joint Council can persuade high school economics teachers to use it. (JDH)

  13. High Specificity in Circulating Tumor Cell Identification Is Required for Accurate Evaluation of Programmed Death-Ligand 1

    PubMed Central

    Schultz, Zachery D.; Warrick, Jay W.; Guckenberger, David J.; Pezzi, Hannah M.; Sperger, Jamie M.; Heninger, Erika; Saeed, Anwaar; Leal, Ticiana; Mattox, Kara; Traynor, Anne M.; Campbell, Toby C.; Berry, Scott M.; Beebe, David J.; Lang, Joshua M.

    2016-01-01

    Background Expression of programmed-death ligand 1 (PD-L1) in non-small cell lung cancer (NSCLC) is typically evaluated through invasive biopsies; however, recent advances in the identification of circulating tumor cells (CTCs) may be a less invasive method to assay tumor cells for these purposes. These liquid biopsies rely on accurate identification of CTCs from the diverse populations in the blood, where some tumor cells share characteristics with normal blood cells. While many blood cells can be excluded by their high expression of CD45, neutrophils and other immature myeloid subsets have low to absent expression of CD45 and also express PD-L1. Furthermore, cytokeratin is typically used to identify CTCs, but neutrophils may stain non-specifically for intracellular antibodies, including cytokeratin, thus preventing accurate evaluation of PD-L1 expression on tumor cells. This holds even greater significance when evaluating PD-L1 in epithelial cell adhesion molecule (EpCAM) positive and EpCAM negative CTCs (as in epithelial-mesenchymal transition (EMT)). Methods To evaluate the impact of CTC misidentification on PD-L1 evaluation, we utilized CD11b to identify myeloid cells. CTCs were isolated from patients with metastatic NSCLC using EpCAM, MUC1 or Vimentin capture antibodies and exclusion-based sample preparation (ESP) technology. Results Large populations of CD11b+CD45lo cells were identified in buffy coats and stained non-specifically for intracellular antibodies including cytokeratin. The amount of CD11b+ cells misidentified as CTCs varied among patients; accounting for 33–100% of traditionally identified CTCs. Cells captured with vimentin had a higher frequency of CD11b+ cells at 41%, compared to 20% and 18% with MUC1 or EpCAM, respectively. Cells misidentified as CTCs ultimately skewed PD-L1 expression to varying degrees across patient samples. Conclusions Interfering myeloid populations can be differentiated from true CTCs with additional staining criteria

  14. Evaluation of the sample needed to accurately estimate outcome-based measurements of dairy welfare on farm.

    PubMed

    Endres, M I; Lobeck-Luchterhand, K M; Espejo, L A; Tucker, C B

    2014-01-01

    Dairy welfare assessment programs are becoming more common on US farms. Outcome-based measurements, such as locomotion, hock lesion, hygiene, and body condition scores (BCS), are included in these assessments. The objective of the current study was to investigate the proportion of cows in the pen or subsamples of pens on a farm needed to provide an accurate estimate of the previously mentioned measurements. In experiment 1, we evaluated cows in 52 high pens (50 farms) for lameness using a 1- to 5-scale locomotion scoring system (1 = normal and 5 = severely lame; 24.4 and 6% of animals were scored ≥ 3 or ≥ 4, respectively). Cows were also given a BCS using a 1- to 5-scale, where 1 = emaciated and 5 = obese; cows were rarely thin (BCS ≤ 2; 0.10% of cows) or fat (BCS ≥ 4; 0.11% of cows). Hygiene scores were assessed on a 1- to 5-scale with 1 = clean and 5 = severely dirty; 54.9% of cows had a hygiene score ≥ 3. Hock injuries were classified as 1 = no lesion, 2 = mild lesion, and 3 = severe lesion; 10.6% of cows had a score of 3. Subsets of data were created with 10 replicates of random sampling that represented 100, 90, 80, 70, 60, 50, 40, 30, 20, 15, 10, 5, and 3% of the cows measured/pen. In experiment 2, we scored the same outcome measures on all cows in lactating pens from 12 farms and evaluated using pen subsamples: high; high and fresh; high, fresh, and hospital; and high, low, and hospital. For both experiments, the association between the estimates derived from all subsamples and entire pen (experiment 1) or herd (experiment 2) prevalence was evaluated using linear regression. To be considered a good estimate, 3 criteria must be met: R(2)>0.9, slope = 1, and intercept = 0. In experiment 1, on average, recording 15% of the pen represented the percentage of clinically lame cows (score ≥ 3), whereas 30% needed to be measured to estimate severe lameness (score ≥ 4). Only 15% of the pen was needed to estimate the percentage of the herd with a hygiene
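
    A minimal Python sketch of the subsampling validation described above: draw random subsamples of each pen, regress subsample prevalence on whole-pen prevalence, and check the R² > 0.9, slope ≈ 1, intercept ≈ 0 criteria. The pen data, the prevalence threshold, and the slope/intercept tolerances are illustrative assumptions, not the study's values.

```python
import numpy as np
from scipy.stats import linregress

def subsample_ok(pen_scores, fraction, threshold, n_rep=10, seed=0):
    """Check whether scoring only `fraction` of each pen reproduces pen prevalence.

    pen_scores : list of per-pen arrays of individual-cow scores
    threshold  : score defining a "case" (e.g. locomotion >= 3 for clinical lameness)
    A subsample size passes if the regression of subsample prevalence on whole-pen
    prevalence has R^2 > 0.9, slope close to 1 and intercept close to 0.
    """
    rng = np.random.default_rng(seed)
    full = np.array([np.mean(p >= threshold) for p in pen_scores])
    ok = []
    for _ in range(n_rep):
        sub = np.array([
            np.mean(rng.choice(p, size=max(1, int(round(fraction * len(p)))),
                               replace=False) >= threshold)
            for p in pen_scores
        ])
        fit = linregress(full, sub)
        ok.append(fit.rvalue ** 2 > 0.9 and abs(fit.slope - 1) < 0.1
                  and abs(fit.intercept) < 0.05)
    return np.mean(ok)   # share of replicates meeting all three criteria

# Hypothetical 50 pens of ~200 cows with ~24% prevalence of locomotion score >= 3.
rng = np.random.default_rng(4)
pens = [rng.choice([1, 2, 3, 4, 5], size=200, p=[0.4, 0.36, 0.18, 0.05, 0.01])
        for _ in range(50)]
print(subsample_ok(pens, fraction=0.15, threshold=3))
```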

  15. AMTEC RC-10 Performance Evaluation Test Program

    NASA Astrophysics Data System (ADS)

    Schuller, Michael; Reiners, Elinor; Lemire, Robert; Sievers, Robert

    1994-07-01

    The Phillips Laboratory Power and Thermal Management Division (PL/VTP), in conjunction with ORION International Technologies, initiated the Alkali Metal Thermal to Electric Conversion (AMTEC), Remote Condensed-10% efficient (RC-10) Performance Evaluation Test Program to investigate cell design variations intended to increase efficiency in AMTEC cells. The RC-10 cell, fabricated by Advanced Modular Power Systems, uses a remote condensing region to reduce radiative heat losses from the electrode. The cell has operated at 10% efficiency. PL/VTP tested the RC-10 to evaluate its performance and efficiency. The impact of temperature variations along the length of the cell wall on performance were evaluated. Testing was performed in air, with a "guard heater" surrounding the cell to simulate the system environment of the cell.

  16. Evaluation of a Second-Order Accurate Navier-Stokes Code for Detached Eddy Simulation Past a Circular Cylinder

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Singer, Bart A.

    2003-01-01

    We evaluate the applicability of a production computational fluid dynamics code for conducting detached eddy simulation for unsteady flows. A second-order accurate Navier-Stokes code developed at NASA Langley Research Center, known as TLNS3D, is used for these simulations. We focus our attention on high Reynolds number flow (Re = 5 x 10^4 to 1.4 x 10^5) past a circular cylinder to simulate flows with large-scale separations. We consider two types of flow situations: one in which the flow at the separation point is laminar, and the other in which the flow is already turbulent when it detaches from the surface of the cylinder. Solutions are presented for two- and three-dimensional calculations using both the unsteady Reynolds-averaged Navier-Stokes paradigm and the detached eddy simulation treatment. All calculations use the standard Spalart-Allmaras turbulence model as the base model.

  17. Non-destructive evaluation of the cladding thickness in LEU fuel plates by accurate ultrasonic scanning technique

    SciTech Connect

    Borring, J.; Gundtoft, H.E.; Borum, K.K.; Toft, P.

    1997-08-01

    In an effort to improve their ultrasonic scanning technique for accurate determination of the cladding thickness in LEU fuel plates, new equipment and modifications to the existing hardware and software have been tested and evaluated. The authors are now able to measure an aluminium thickness down to 0.25 mm instead of the previous 0.35 mm. Furthermore, they have shown how the measuring sensitivity can be improved from 0.03 mm to 0.01 mm. It has now become possible to check their standard fuel plates for DR3 against the minimum cladding thickness requirements non-destructively. Such measurements open the possibility for the acceptance of a thinner nominal cladding than normally used today.

  18. Effects of Performers' External Characteristics on Performance Evaluations.

    ERIC Educational Resources Information Center

    Bermingham, Gudrun A.

    2000-01-01

    States that fairness has been a major concern in the field of music adjudication. Reviews the research literature to reveal information about three external characteristics (race, gender, and physical attractiveness) that may affect judges' performance evaluations and influence fairness of music adjudication. Includes references. (CMK)

  19. Smith Newton Vehicle Performance Evaluation (Brochure)

    SciTech Connect

    Not Available

    2012-08-01

    The Fleet Test and Evaluation Team at the U.S. Department of Energy's National Renewable Energy Laboratory is evaluating and documenting the performance of electric and plug-in hybrid electric drive systems in medium-duty trucks across the nation. Through this project, Smith Electric Vehicles will build and deploy 500 all-electric medium-duty trucks. The trucks will be deployed in diverse climates across the country.

  20. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    2009-09-01

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents

  1. Evaluation of high-definition television for remote task performance

    SciTech Connect

    Draper, J.V.; Fujita, Y.; Herndon, J.N.

    1987-04-01

    High-definition television (HDTV) transmits a video image with more than twice the number of horizontal scan lines that standard-resolution TV provides (1125 for HDTV versus 525 for standard-resolution TV). The improvement in picture quality (compared to standard-resolution TV) that the extra scan lines provide is impressive. Objects in the HDTV picture have more sharply defined edges, better contrast, and more accurate reproduction of shading and color patterns than do those in the standard-resolution TV picture. Because the TV viewing system is a key component for teleoperator performance, an improvement in TV picture quality could mean an improvement in the speed and accuracy with which teleoperators perform tasks. This report describes three experiments designed to evaluate the impact of HDTV on the performance of typical remote tasks. The performance of HDTV was compared to that of standard-resolution, monochromatic TV and standard-resolution, stereoscopic, monochromatic TV in the context of judgment of depth in a televised scene, visual inspection of an object, and performance of a typical remote handling task. The results of the three experiments show that in some areas HDTV can lead to improvement in teleoperator performance. Observers inspecting a small object for a flaw were more accurate with HDTV than with either of the standard-resolution systems. High resolution is critical for detection of small-scale flaws of the type in the experiment (a scratch on a glass bottle). These experiments provided an evaluation of HDTV for use in tasks that must be routinely performed to remotely maintain a nuclear fuel reprocessing facility. 5 refs., 7 figs., 9 tabs.

  2. Performance and Evaluation of LISP Systems

    SciTech Connect

    Gabriel, R.P.

    1985-01-01

    The final report of the Stanford Lisp Performance Study, Performance and Evaluation of Lisp Systems, is the first book to present descriptions of Lisp implementation techniques actually in use. It provides performance information using the tools of benchmarking to measure the various Lisp systems, and provides an understanding of the technical tradeoffs made during the implementation of a Lisp system. The study is divided into three parts. The first provides the theoretical background, outlining the factors that go into evaluating the performance of a Lisp system. The second part presents the Lisp implementations: MacLisp, MIT CADR, LMI Lambda, S-1 Lisp, Franz Lisp, NIL, Spice Lisp, Vax Common Lisp, Portable Standard Lisp, and Xerox D-Machine. A final part describes the benchmark suite that was used during the major portion of the study and the results themselves.

  3. Smith Newton Vehicle Performance Evaluation - Cumulative (Brochure)

    SciTech Connect

    Not Available

    2014-08-01

    The Fleet Test and Evaluation Team at the U.S. Department of Energy's National Renewable Energy Laboratory is evaluating and documenting the performance of electric and plug-in hybrid electric drive systems in medium-duty trucks across the nation. U.S. companies participating in this evaluation project received funding from the American Recovery and Reinvestment Act to cover part of the cost of purchasing these vehicles. Through this project, Smith Electric Vehicles is building and deploying 500 all-electric medium-duty trucks that will be deployed by a variety of companies in diverse climates across the country.

  4. Prospective safety performance evaluation on construction sites.

    PubMed

    Wu, Xianguo; Liu, Qian; Zhang, Limao; Skibniewski, Miroslaw J; Wang, Yanhong

    2015-05-01

    This paper presents a systematic Structural Equation Modeling (SEM) based approach for Prospective Safety Performance Evaluation (PSPE) on construction sites, with causal relationships and interactions between enablers and the goals of PSPE taken into account. Based on a sample of 450 valid questionnaire surveys from 30 Chinese construction enterprises, a SEM model with 26 items for PSPE in the context of the Chinese construction industry is established and then verified through a goodness-of-fit test. Three typical types of construction enterprises, namely state-owned enterprises, private enterprises and Sino-foreign joint ventures, are selected as samples to measure the level of safety performance, given that their enterprise scale, ownership and business strategy differ. Results provide a full understanding of safety performance practice in the construction industry, and indicate that the overall safety performance on working sites is rated at level III (Fair) or above. This can be explained by the fact that the construction industry has gradually matured under these norms, and construction enterprises must improve their level of safety performance so as not to be eliminated from the government-led construction industry. The differences in safety performance practice among the different construction enterprise categories are compared and analyzed according to the evaluation results. This research provides insights into cause-effect relationships among safety performance factors and goals, which, in turn, can facilitate the achievement of high safety performance in the construction industry.

  5. Prospective safety performance evaluation on construction sites.

    PubMed

    Wu, Xianguo; Liu, Qian; Zhang, Limao; Skibniewski, Miroslaw J; Wang, Yanhong

    2015-05-01

    This paper presents a systematic Structural Equation Modeling (SEM) based approach for Prospective Safety Performance Evaluation (PSPE) on construction sites, with causal relationships and interactions between enablers and the goals of PSPE taken into account. Based on a sample of 450 valid questionnaire surveys from 30 Chinese construction enterprises, a SEM model with 26 items for PSPE in the context of the Chinese construction industry is established and then verified through a goodness-of-fit test. Three typical types of construction enterprises, namely state-owned enterprises, private enterprises and Sino-foreign joint ventures, are selected as samples to measure the level of safety performance, given that their enterprise scale, ownership and business strategy differ. Results provide a full understanding of safety performance practice in the construction industry, and indicate that the overall safety performance on working sites is rated at level III (Fair) or above. This can be explained by the fact that the construction industry has gradually matured under these norms, and construction enterprises must improve their level of safety performance so as not to be eliminated from the government-led construction industry. The differences in safety performance practice among the different construction enterprise categories are compared and analyzed according to the evaluation results. This research provides insights into cause-effect relationships among safety performance factors and goals, which, in turn, can facilitate the achievement of high safety performance in the construction industry. PMID:25746166

  6. Accurate low-cost methods for performance evaluation of cache memory systems

    NASA Technical Reports Server (NTRS)

    Laha, Subhasis; Patel, Janak H.; Iyer, Ravishankar K.

    1988-01-01

    Methods of simulation based on statistical techniques are proposed to decrease the need for large trace measurements and to predict true program behavior. Sampling techniques are applied while the address trace is collected from a workload. This drastically reduces the space and time needed to collect the trace. Simulation techniques are developed to use the sampled data not only to predict the mean miss rate of the cache, but also to provide an empirical estimate of its actual distribution. Finally, a concept of a primed cache is introduced to simulate large caches by the sampling-based method.
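
    A toy Python sketch of trace-sampled cache simulation in the spirit of the abstract (not the paper's method): short samples of a synthetic address trace are run through a small set-associative LRU cache model to estimate a distribution of miss rates. Note that each sample starts with an empty (cold) cache, which biases the estimate upward; the primed-cache concept mentioned above addresses exactly this.

```python
import random
from collections import OrderedDict

def lru_cache_misses(addresses, n_sets=256, assoc=4, block=64):
    """Count misses of a set-associative LRU cache over an address trace."""
    sets = [OrderedDict() for _ in range(n_sets)]
    misses = 0
    for addr in addresses:
        tag, index = addr // block // n_sets, (addr // block) % n_sets
        s = sets[index]
        if tag in s:
            s.move_to_end(tag)                 # hit: refresh LRU position
        else:
            misses += 1
            if len(s) >= assoc:
                s.popitem(last=False)          # evict the least recently used line
            s[tag] = True
    return misses

def sampled_miss_rates(trace, n_samples=20, sample_len=2000, seed=0):
    """Estimate the miss-rate distribution from short random samples of the trace."""
    rng = random.Random(seed)
    starts = [rng.randrange(0, len(trace) - sample_len) for _ in range(n_samples)]
    return [lru_cache_misses(trace[s:s + sample_len]) / sample_len for s in starts]

# Synthetic trace: mostly sequential accesses with occasional random jumps.
rng = random.Random(1)
trace, addr = [], 0
for _ in range(200_000):
    addr = rng.randrange(1 << 24) if rng.random() < 0.01 else addr + 8
    trace.append(addr)

rates = sampled_miss_rates(trace)
print(f"mean sampled miss rate: {sum(rates) / len(rates):.4f}"
      f"  (min {min(rates):.4f}, max {max(rates):.4f})")
```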

  7. Hypersonic Interceptor Performance Evaluation Center aero-optics performance predictions

    NASA Astrophysics Data System (ADS)

    Sutton, George W.; Pond, John E.; Snow, Ronald; Hwang, Yanfang

    1993-06-01

    This paper describes the Hypersonic Interceptor Performance Evaluation Center's (HIPEC) aero-optics performance prediction capability. It includes code results for three-dimensional shapes and comparisons to initial experiments. HIPEC consists of a collection of aerothermal and aerodynamic computational codes that cover the entire flight regime from subsonic to hypersonic flow and include chemical reactions and turbulence. Heat transfer to the various surfaces is calculated as an input to cooling and ablation processes. HIPEC also has aero-optics codes to determine the effect of the mean flowfield and turbulence on the tracking and imaging capability of on-board optical sensors. The paper concentrates on the latter aspects.

  8. Evaluating Performance Portability of OpenACC

    SciTech Connect

    Sabne, Amit J; Sakdhnagool, Putt; Lee, Seyong; Vetter, Jeffrey S

    2015-01-01

    Accelerator-based heterogeneous computing is gaining momentum in High Performance Computing arena. However, the increased complexity of the accelerator architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle the problem. While the abstraction endowed by OpenACC offers productivity, it raises questions on its portability. This paper evaluates the performance portability obtained by OpenACC on twelve OpenACC programs on NVIDIA CUDA, AMD GCN, and Intel MIC architectures. We study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  9. Performance evaluation of an air solar collector

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Indoor tests on a single-glazed flat-plate collector are described in the report. The Marshall Space Flight Center solar simulator was used to make the tests. The tests included evaluation of thermal performance under various combinations of flow rate, incident flux, inlet temperature, and wind speed. Results are presented in graph/table form.

  10. ASBESTOS IN DRINKING WATER PERFORMANCE EVALUATION STUDIES

    EPA Science Inventory

    Performance evaluations of laboratories testing for asbestos in drinking water according to USEPA Test Method 100.1 or 100.2 are complicated by the difficulty of providing stable sample dispersions of asbestos in water. Reference samples of a graduated series of chrysotile asbes...

  11. ASBESTOS IN DRINKING WATER PERFORMANCE EVALUATION STUDIES

    EPA Science Inventory

    Performance evaluations of laboratories testing for asbestos in drinking water according to USEPA Test Method 100.1 or 100.2 are complicated by the difficulty of providing stable sample dispersions of asbestos in water. Reference samples of a graduated series of chrysotile asbest...

  12. A New Approach to Evaluating Performance.

    PubMed

    Bleich, Michael R

    2016-09-01

    A leadership task is evaluating the performance of individuals for organizational fit. Traditional approaches have included leader-subordinate reviews, self-review, and peer review. A new approach is evolving in team-based organizations, introduced in this article. J Contin Educ Nurs. 2016;47(9):393-394. PMID:27580504

  13. An Evaluation of a Performance Contract.

    ERIC Educational Resources Information Center

    Dembo, Myron H.; Wilson, Donald E.

    This paper reports an evaluation of a performance contract in reading with 2,500 seventh-grade students. Seventy-five percent of the students were to increase their reading speed five times over their beginning level with ten percent more comprehension after three months of instruction. Results indicated that only thirteen percent of the students…

  14. GENERAL METHODS FOR REMEDIAL PERFORMANCE EVALUATIONS

    EPA Science Inventory

    This document was developed by an EPA-funded project to explain technical considerations and principles necessary to evaluate the performance of ground-water contamination remediations at hazardous waste sites. This is neither a "cookbook", nor an encyclopedia of recommended fi...

  15. EVALUATION OF CONFOCAL MICROSCOPY SYSTEM PERFORMANCE

    EPA Science Inventory

    BACKGROUND. The confocal laser scanning microscope (CLSM) has enormous potential in many biological fields. Currently there is a subjective nature in the assessment of a confocal microscope's performance by primarily evaluating the system with a specific test slide provided by ea...

  16. Performance and race in evaluating minority mayors.

    PubMed

    Howell, S E

    2001-01-01

    This research compares a performance model to a racial model in explaining approval of a black mayor. The performance model emphasizes citizen evaluations of conditions in the city and the mayor's perceived effectiveness in dealing with urban problems. The racial model stipulates that approval of a black mayor is based primarily on racial identification or racism. A model of mayoral approval is tested with two surveys over different years of citizens in a city that has had 20 years' experience with black mayors. Findings indicate that performance matters when evaluating black mayors, indicating that the national performance models of presidential approval are generalizable to local settings with black executives. Implications for black officeholders are discussed. However, the racial model is alive and well, as indicated by its impact on approval and the finding that, in this context, performance matters more to white voters than to black voters. A final, highly tentative conclusion is offered that context conditions the relative power of these models. The performance model may explain more variation in approval of the black mayor than the racial model in a context of rapidly changing city conditions that focuses citizen attention on performance, but during a period of relative stability the two models are evenly matched.

  17. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Pollutants: Reinforced Plastic Composites Production Testing and Initial Compliance Requirements § 63.5850... performance test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies....

  18. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Pollutants: Reinforced Plastic Composites Production Testing and Initial Compliance Requirements § 63.5850... performance test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies....

  19. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... performance test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies. (c... and under the specific conditions that 40 CFR part 63, subpart SS, specifies. (d) You may not...

  20. New feedback detection method for performance evaluation of hearing aids

    NASA Astrophysics Data System (ADS)

    Shin, Mincheol; Wang, Semyung; Bentler, Ruth A.; He, Shuman

    2007-04-01

    A new objective and accurate feedback detection method, the transfer function variation criterion (TVC), has been developed for evaluating the performance of feedback cancellation techniques. The proposed method is able to classify stable, unstable, and sub-oscillatory stages of feedback in hearing aids. The sub-oscillatory stage is defined as a state where the hearing aid user may perceive distortion of sound quality without the occurrence of oscillation. This detection algorithm focuses on the transfer function variation of hearing aids and the relationship between system stability and feedback oscillation. The transfer functions are obtained using the FIR Wiener filtering algorithm off-line. An anechoic test box is used for the exact and reliable evaluation of different hearing aids. The results are listed and compared with the conventional power concentration ratio (PCR), which has been generally adopted as a feedback detection method for the performance evaluation of hearing aids. The possibility of real-time implementation is discussed in terms of a more convenient and exact performance evaluation of feedback cancellation techniques.
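
    A rough sketch of the underlying machinery is given below: a hearing aid's transfer function is estimated off-line with an FIR Wiener filter and two estimates are compared through a simple variation measure. The variation measure is an illustrative stand-in, not the TVC defined in the paper, and the signal segments are assumed to be recorded input/output pairs.

```python
import numpy as np

def xcorr(a, b, max_lag):
    """Biased correlation estimate r[k] = E[a(n) b(n-k)] for k = 0..max_lag-1."""
    n = len(a)
    return np.array([np.dot(a[k:], b[:n - k]) / n for k in range(max_lag)])

def fir_wiener(x, d, order=64):
    """FIR Wiener filter mapping input x to desired output d (normal equations)."""
    r = xcorr(x, x, order)                       # input autocorrelation
    p = xcorr(d, x, order)                       # input/output cross-correlation
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, p)

def transfer_variation(x1, y1, x2, y2, order=64):
    """Relative change in the estimated transfer function between two segments."""
    H1 = np.fft.rfft(fir_wiener(x1, y1, order))
    H2 = np.fft.rfft(fir_wiener(x2, y2, order))
    return np.linalg.norm(H2 - H1) / np.linalg.norm(H1)
```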

  1. Behavioral patterns of environmental performance evaluation programs.

    PubMed

    Li, Wanxin; Mauerhofer, Volker

    2016-11-01

    During the past decades, numerous environmental performance evaluation programs have been developed and implemented on different geographic scales. This paper develops a taxonomy of environmental management behavioral patterns in order to provide a practical comparison tool for environmental performance evaluation programs. Ten such programs, purposively selected, are mapped against the four identified behavioral patterns of diagnosis, negotiation, learning, and socialization and learning. Overall, we found that schemes which serve to diagnose environmental abnormalities are mainly externally imposed and have been developed as a result of technical debates concerning data sources, methodology and ranking criteria. Learning-oriented schemes feature processes through which free exchange of ideas and mutual, adaptive learning can occur. Schemes developed by a higher authority to influence the behavior of lower levels of government have been adopted by the evaluated parties to signal their excellent environmental performance. The schemes classified as socializing and learning have incorporated dialogue, participation, and capacity building in program design. In conclusion, we consider the 'fitness for purpose' of the various schemes, the merits of our analytical model and the future possibilities of fostering capacity building in the realm of wicked environmental challenges. PMID:27513220

  2. Behavioral patterns of environmental performance evaluation programs.

    PubMed

    Li, Wanxin; Mauerhofer, Volker

    2016-11-01

    During the past decades, numerous environmental performance evaluation programs have been developed and implemented on different geographic scales. This paper develops a taxonomy of environmental management behavioral patterns in order to provide a practical comparison tool for environmental performance evaluation programs. Ten such programs, purposively selected, are mapped against the four identified behavioral patterns of diagnosis, negotiation, learning, and socialization and learning. Overall, we found that schemes which serve to diagnose environmental abnormalities are mainly externally imposed and have been developed as a result of technical debates concerning data sources, methodology and ranking criteria. Learning-oriented schemes feature processes through which free exchange of ideas and mutual, adaptive learning can occur. Schemes developed by a higher authority to influence the behavior of lower levels of government have been adopted by the evaluated parties to signal their excellent environmental performance. The schemes classified as socializing and learning have incorporated dialogue, participation, and capacity building in program design. In conclusion, we consider the 'fitness for purpose' of the various schemes, the merits of our analytical model and the future possibilities of fostering capacity building in the realm of wicked environmental challenges.

  3. Performance evaluation of fingerprint verification systems.

    PubMed

    Cappelli, Raffaele; Maio, Dario; Maltoni, Davide; Wayman, James L; Jain, Anil K

    2006-01-01

    This paper is concerned with the performance evaluation of fingerprint verification systems. After an initial classification of biometric testing initiatives, we explore both the theoretical and practical issues related to performance evaluation by presenting the outcome of the recent Fingerprint Verification Competition (FVC2004). FVC2004 was organized by the authors of this work for the purpose of assessing the state-of-the-art in this challenging pattern recognition application and making available a new common benchmark for an unambiguous comparison of fingerprint-based biometric systems. FVC2004 is an independent, strongly supervised evaluation performed at the evaluators' site on evaluators' hardware. This allowed the test to be completely controlled and the computation times of different algorithms to be fairly compared. The experience and feedback received from previous, similar competitions (FVC2000 and FVC2002) allowed us to improve the organization and methodology of FVC2004 and to capture the attention of a significantly higher number of academic and commercial organizations (67 algorithms were submitted for FVC2004). A new, "Light" competition category was included to estimate the loss of matching performance caused by imposing computational constraints. This paper discusses data collection and testing protocols, and includes a detailed analysis of the results. We introduce a simple but effective method for comparing algorithms at the score level, allowing us to isolate difficult cases (images) and to study error correlations and algorithm "fusion." The huge amount of information obtained, including a structured classification of the submitted algorithms on the basis of their features, makes it possible to better understand how current fingerprint recognition systems work and to delineate useful research directions for the future.

  4. Performance Evaluation of a Data Validation System

    NASA Technical Reports Server (NTRS)

    Wong, Edmond (Technical Monitor); Sowers, T. Shane; Santi, L. Michael; Bickford, Randall L.

    2005-01-01

    Online data validation is a performance-enhancing component of modern control and health management systems. It is essential that performance of the data validation system be verified prior to its use in a control and health management system. A new Data Qualification and Validation (DQV) Test-bed application was developed to provide a systematic test environment for this performance verification. The DQV Test-bed was used to evaluate a model-based data validation package known as the Data Quality Validation Studio (DQVS). DQVS was employed as the primary data validation component of a rocket engine health management (EHM) system developed under NASA's NGLT (Next Generation Launch Technology) program. In this paper, the DQVS and DQV Test-bed software applications are described, and the DQV Test-bed verification procedure for this EHM system application is presented. Test-bed results are summarized and implications for EHM system performance improvements are discussed.

  5. OMPS SDR Status and Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Pan, S.; Weng, F.; Wu, X.; Flynn, L. E.; Jaross, G.; Buss, R. H.; Niu, J.; Seftor, C. J.

    2012-12-01

    Launched on October 28, 2011, OMPS has successfully passed through different operational phases, from Launch, Early Orbit and Activation (LEO&A) to Early Orbit Checkout (EOC), and is currently in the Intensive CAL/Val (ICV) phase. OMPS data gathered during the on-orbit calibration and validation activities allow us to evaluate the instrument on-orbit performance and validate Sensor Data Records (SDRs). Detector performance shows that offset, gain, and dark current rate trends remain within 0.2% of the pre-launch values with significant margin below sensor requirements. Detector gain and offset performance trends are generally stable and observed solar irradiance is within an average of 2% of predicted values. This presentation will update the status of the OMPS SDRs with newly established calibration measurements. Examples of analysis of dark calibration, linearity performance, solar irradiance validation, sensor noise and wavelength change are provided.

  6. Performance evaluation soil samples utilizing encapsulation technology

    DOEpatents

    Dahlgran, James R.

    1999-01-01

    Performance evaluation soil samples and a method of their preparation use encapsulation technology to encapsulate analytes, which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration.

  7. Performance evaluation soil samples utilizing encapsulation technology

    DOEpatents

    Dahlgran, J.R.

    1999-08-17

    Performance evaluation soil samples and method of their preparation uses encapsulation technology to encapsulate analytes which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration. 1 fig.

  8. Performance Evaluation Methods for Assistive Robotic Technology

    NASA Astrophysics Data System (ADS)

    Tsui, Katherine M.; Feil-Seifer, David J.; Matarić, Maja J.; Yanco, Holly A.

    Robots have been developed for several assistive technology domains, including intervention for Autism Spectrum Disorders, eldercare, and post-stroke rehabilitation. Assistive robots have also been used to promote independent living through the use of devices such as intelligent wheelchairs, assistive robotic arms, and external limb prostheses. Work in the broad field of assistive robotic technology can be divided into two major research phases: technology development, in which new devices, software, and interfaces are created; and clinical, in which assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots poses a related set of challenges. In this paper, we survey several areas of assistive robotic technology in order to derive and demonstrate domain-specific means for evaluating the performance of such systems. We also present two case studies of applied performance measures and a discussion regarding the ubiquity of functional performance measures across the sampled domains. Finally, we present guidelines for incorporating human performance metrics into end-user evaluations of assistive robotic technologies.

  9. A fully automatic tool to perform accurate flood mapping by merging remote sensing imagery and ancillary data

    NASA Astrophysics Data System (ADS)

    D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco; Pasquariello, Guido

    2016-04-01

    Flooding is one of the most frequent and widespread natural hazards. High-resolution flood mapping is an essential step in the monitoring and prevention of inundation hazard, both to gain insight into the processes involved in the generation of flooding events, and from the practical point of view of the precise assessment of inundated areas. Remote sensing data are recognized to be useful in this respect, thanks to the high resolution and regular revisit schedules of state-of-the-art satellites, which moreover offer a synoptic overview of the extent of flooding. In particular, Synthetic Aperture Radar (SAR) data present several favorable characteristics for flood mapping, such as their relative insensitivity to the meteorological conditions during acquisitions, as well as the possibility of acquiring independently of solar illumination, thanks to the active nature of the radar sensors [1]. However, flood scenarios are typical examples of complex situations in which different factors have to be considered to provide an accurate and robust interpretation of the situation on the ground: the presence of many land cover types, each with a particular signature in the presence of flooding, requires modelling the behavior of different objects in the scene in order to associate them with flood or no-flood conditions [2]. Generally, the fusion of multi-temporal, multi-sensor, multi-resolution and/or multi-platform Earth observation image data, together with other ancillary information, seems to have a key role in the pursuit of a consistent interpretation of complex scenes. In the case of flooding, distance from the river, terrain elevation, hydrologic information or some combination thereof can add useful information to remote sensing data. Suitable methods, able to manage and merge different kinds of data, are therefore particularly needed. In this work, a fully automatic tool, based on Bayesian Networks (BNs) [3] and able to perform data fusion, is presented. It supplies flood maps
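
    To illustrate the kind of probabilistic fusion a Bayesian network allows, the sketch below combines a few discretized evidence layers for a single pixel under a naive-independence assumption. The evidence variables and conditional probabilities are hypothetical and only convey the principle, not the network actually used in the tool.

```python
# Prior probability of flooding and hypothetical likelihoods P(evidence | flood),
# P(evidence | no flood) for discretized evidence layers (illustrative values only).
P_FLOOD = 0.1
CPT = {
    "sar_dark":      (0.85, 0.20),   # low SAR backscatter
    "low_elevation": (0.90, 0.40),   # terrain below a local threshold
    "near_river":    (0.80, 0.30),   # within a given distance of the river
}

def posterior_flood(evidence):
    """Naive-Bayes fusion of independent evidence layers for one pixel."""
    p_f, p_nf = P_FLOOD, 1.0 - P_FLOOD
    for name, observed in evidence.items():
        l_f, l_nf = CPT[name]
        if observed:
            p_f *= l_f
            p_nf *= l_nf
        else:
            p_f *= 1.0 - l_f
            p_nf *= 1.0 - l_nf
    return p_f / (p_f + p_nf)

print(posterior_flood({"sar_dark": True, "low_elevation": True, "near_river": False}))
```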

  10. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Science Inventory

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...

  11. A new methodology for non-contact accurate crack width measurement through photogrammetry for automated structural safety evaluation

    NASA Astrophysics Data System (ADS)

    Jahanshahi, Mohammad R.; Masri, Sami F.

    2013-03-01

    In mechanical, aerospace and civil structures, cracks are important defects that can cause catastrophes if neglected. Visual inspection is currently the predominant method for crack assessment. This approach is tedious, labor-intensive, subjective and highly qualitative. An inexpensive alternative to current monitoring methods is to use a robotic system that could perform autonomous crack detection and quantification. To reach this goal, several image-based crack detection approaches have been developed; however, the crack thickness quantification, which is an essential element for a reliable structural condition assessment, has not been sufficiently investigated. In this paper, a new contact-less crack quantification methodology, based on computer vision and image processing concepts, is introduced and evaluated against a crack quantification approach which was previously developed by the authors. The proposed approach in this study utilizes depth perception to quantify crack thickness and, as opposed to most previous studies, needs no scale attachment to the region under inspection, which makes this approach ideal for incorporation with autonomous or semi-autonomous mobile inspection systems. Validation tests are performed to evaluate the performance of the proposed approach, and the results show that the new proposed approach outperforms the previously developed one.
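
    The role of depth perception in scale-free crack quantification can be sketched with the pinhole camera model: a thickness measured in pixels is back-projected to metric units using the estimated depth and the camera focal length. The numbers in the example are hypothetical.

```python
def crack_width(width_px, depth, focal_length_px):
    """Back-project a pixel width to metric units with the pinhole camera model.

    width_px        crack thickness measured in the image (pixels)
    depth           distance from the camera to the crack surface (e.g. mm)
    focal_length_px focal length expressed in pixels
    """
    return width_px * depth / focal_length_px

# Example: a 3-pixel-wide crack seen from 500 mm with a 2000-pixel focal length
print(crack_width(3, 500.0, 2000.0))   # 0.75 mm
```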

  12. Performance evaluation of two OCR systems

    SciTech Connect

    Chen, S.; Subramaniam, S.; Haralick, R.M.; Phillips, I.T.

    1994-12-31

    An experimental protocol for the performance evaluation of Optical Character Recognition (OCR) algorithms is described. The protocol is intended to serve as a model for using the University of Washington English Document Image Database-I to evaluate OCR systems. The plain text zones (without special symbols) in this database have over 2,300,000 characters. The performances of two UNIX-based OCR systems, namely Caere OCR v109a and Xerox ScanWorX v2.0, are measured. The results suggest that Caere OCR outperforms ScanWorX in terms of recognition accuracy; however, ScanWorX is more robust in the presence of image flaws.
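
    Character-level recognition accuracy of the kind reported in such evaluations is typically derived from the edit distance between the OCR output and the ground-truth text; a minimal sketch of that computation (not necessarily the exact scoring rules of the protocol) follows.

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def char_accuracy(ground_truth, ocr_output):
    """1 - (edit errors / ground-truth length); can go negative for very poor output."""
    errors = levenshtein(ground_truth, ocr_output)
    return 1.0 - errors / max(len(ground_truth), 1)

print(char_accuracy("performance evaluation", "perfornance evaluatlon"))
```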

  13. Application performance evaluation of the HTMT architecture.

    SciTech Connect

    Hereld, M.; Judson, I. R.; Stevens, R.

    2004-02-23

    In this report we summarize findings from a study of the predicted performance of a suite of application codes taken from the research environment and analyzed against a modeling framework for the HTMT architecture. We find that the inward bandwidth of the data vortex may be a limiting factor for some applications. We also find that available memory in the cryogenic layer is a constraining factor in the partitioning of applications into parcels. The architecture in several examples may be inadequately exploited; in particular, applications typically did not capitalize well on the available computational power or data organizational capability in the PIM layers. The application suite provided significant examples of wide excursions from the accepted (if simplified) program execution model--in particular, by requiring complex in-SPELL synchronization between parcels. The availability of the HTMT-C emulation environment did not contribute significantly to the ability to analyze applications, because of the large gap between the available hardware descriptions and parameters in the modeling framework and the types of data that could be collected via HTMT-C emulation runs. Detailed analysis of application performance, and indeed further credible development of the HTMT-inspired program execution model and system architecture, requires development of much better tools. Chief among them are cycle-accurate simulation tools for computational, network, and memory components. Additionally, there is a critical need for a whole-system simulation tool to allow detailed programming exercises and performance tests to be developed. We address three issues in this report: (1) The landscape for applications of petaflops computing; (2) The performance of applications on the HTMT architecture; and (3) The effectiveness of HTMT-C as a tool for studying and developing the HTMT architecture. We set the scene with observations about the course of application development as petaflops

  14. A performance evaluation of biometric identification devices

    SciTech Connect

    Holmes, J.P.; Maxwell, R.L.; Wright, L.J.

    1990-06-01

    A biometric identification device is an automatic device that can verify a person's identity from a measurement of a physical feature or repeatable action of the individual. A reference measurement of the biometric is obtained when the individual is enrolled on the device. Subsequent verifications are made by comparing the submitted biometric feature against the reference sample. Sandia Laboratories has been evaluating the relative performance of several identity verifiers, using volunteer test subjects. Sandia testing methods and results are discussed.

  15. Automated Laser Seeker Performance Evaluation System (ALSPES)

    NASA Astrophysics Data System (ADS)

    Martin, Randal G.; Robinson, Elisa L.

    1988-01-01

    The Automated Laser Seeker Performance Evaluation System (ALSPES), which supports the Hellfire missile and Copperhead projectile laser seekers, is discussed. The ALSPES capabilities in manual and automatic operation are described, and the ALSPES test hardware is examined, including the computer system, the laser/attenuator, optics systems, seeker test fixture, and the measurement and test equipment. The calibration of laser energy and test signals in ALSPES is considered.

  16. Metrics for Offline Evaluation of Prognostic Performance

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2010-01-01

    Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements for different applications, time scales, available information, domain dynamics, etc. to name a few. The research community has used a variety of metrics largely based on convenience and their respective requirements. Very little attention has been focused on establishing a standardized approach to compare different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. These metrics have the capability of incorporating probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications. Guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed followed by a formal notational framework to help standardize subsequent developments.
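
    One member of this family of metrics, the alpha-lambda accuracy, can be sketched in its point-estimate form as follows; the parameter values and the linear interpolation of predictions are illustrative assumptions, and the metrics in the paper also accommodate probabilistic RUL estimates.

```python
import numpy as np

def alpha_lambda_accuracy(t_pred, rul_pred, t_start, t_eol, alpha=0.2, lam=0.5):
    """True if the RUL predicted at the lambda time point lies within
    +/- alpha of the ground-truth RUL (point-estimate form of the metric).

    t_pred   increasing times at which predictions were made
    rul_pred predicted RUL values at those times
    """
    t_lambda = t_start + lam * (t_eol - t_start)      # evaluation time
    rul_true = t_eol - t_lambda                       # ground-truth RUL there
    rul_hat = float(np.interp(t_lambda, t_pred, rul_pred))
    return (1 - alpha) * rul_true <= rul_hat <= (1 + alpha) * rul_true

# Example: end of life at t = 100, predictions made every 10 time units
print(alpha_lambda_accuracy([0, 10, 20, 30, 40, 50], [95, 88, 75, 68, 55, 48], 0, 100))
```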

  17. Group 3: Performance evaluation and assessment

    NASA Technical Reports Server (NTRS)

    Frink, A.

    1981-01-01

    Line-oriented flight training provides a unique learning experience and an opportunity to look at aspects of performance that other types of training do not provide. Areas such as crew coordination, resource management, leadership, and so forth can be readily evaluated in such a format. While individual performance is of the utmost importance, crew performance deserves equal emphasis; therefore, these areas should be carefully observed by the instructors as an area for discussion, in the same way that individual performance is observed. To be effective, it must be accepted by the crew members and administered by the instructors as pure training-learning through experience. To keep open minds and to benefit most from the experience, both in the doing and in the follow-on discussion, it is essential that it be entered into with a feeling of freedom, openness, and enthusiasm. Reserve or defensiveness arising from concern over failure will only inhibit participation.

  18. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms, namely Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and in their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms in a different manner and, depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable to prognostics may be designed so that the evaluation procedure can be standardized.

  19. Accurate screening for synthetic preservatives in beverage using high performance liquid chromatography with time-of-flight mass spectrometry.

    PubMed

    Li, Xiu Qin; Zhang, Feng; Sun, Yan Yan; Yong, Wei; Chu, Xiao Gang; Fang, Yan Yan; Zweigenbaum, Jerry

    2008-02-11

    In this study, high performance liquid chromatography time-of-flight mass spectrometry (HPLC/TOF-MS) is applied to the qualitative and quantitative analysis of 18 synthetic preservatives in beverages. The identification by HPLC/TOF-MS is accomplished with the accurate mass (and the empirical formula subsequently generated from it) of the protonated molecules [M+H]+ or the deprotonated molecules [M-H]-, along with the accurate mass of their main fragment ions. In order to obtain sufficient sensitivity for quantitation purposes (using the protonated or deprotonated molecule) and additional qualitative mass spectrum information provided by the fragment ions, a segmented program of fragmentor voltages is designed in positive and negative ion mode, respectively. Accurate mass measurements are highly useful in complex sample analyses since they allow us to achieve a high degree of specificity, often needed when other interferents are present in the matrix. The mass accuracy typically obtained is routinely better than 3 ppm. The 18 compounds behave linearly in the 0.005-5.0 mg kg⁻¹ concentration range, with correlation coefficients >0.996. The recoveries at the tested concentrations of 1.0-100 mg kg⁻¹ are 81-106%, with coefficients of variation <7.5%. Limits of detection (LODs) range from 0.0005 to 0.05 mg kg⁻¹, which are far below the required maximum residue level (MRL) for these preservatives in foodstuffs. The method is suitable for routine quantitative and qualitative analyses of synthetic preservatives in foodstuffs.
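
    The quoted mass accuracy (better than 3 ppm) is simply the relative deviation between measured and theoretical exact m/z; a small reminder of the arithmetic is below, with illustrative values for a deprotonated benzoic acid ion rather than the study's actual measurements.

```python
def mass_error_ppm(measured_mz, theoretical_mz):
    """Relative mass error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Illustrative example: benzoic acid [M-H]- theoretical m/z 121.0295 measured as 121.0298
print(round(mass_error_ppm(121.0298, 121.0295), 2))   # about 2.48 ppm
```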

  20. Deciphering the mechanisms of cellular uptake of engineered nanoparticles by accurate evaluation of internalization using imaging flow cytometry

    PubMed Central

    2013-01-01

    Background The uptake of nanoparticles (NPs) by cells remains to be better characterized in order to understand the mechanisms of potential NP toxicity as well as for a reliable risk assessment. Real NP uptake is still difficult to evaluate because of the adsorption of NPs on the cellular surface. Results Here we used two approaches to distinguish adsorbed fluorescently labeled NPs from the internalized ones. The extracellular fluorescence was either quenched by Trypan Blue or the uptake was analyzed using imaging flow cytometry. We used this novel technique to define the inside of the cell to accurately study the uptake of fluorescently labeled (SiO2) and even non fluorescent but light diffracting NPs (TiO2). Time course, dose-dependence as well as the influence of surface charges on the uptake were shown in the pulmonary epithelial cell line NCI-H292. By setting up an integrative approach combining these flow cytometric analyses with confocal microscopy we deciphered the endocytic pathway involved in SiO2 NP uptake. Functional studies using energy depletion, pharmacological inhibitors, siRNA-clathrin heavy chain induced gene silencing and colocalization of NPs with proteins specific for different endocytic vesicles allowed us to determine macropinocytosis as the internalization pathway for SiO2 NPs in NCI-H292 cells. Conclusion The integrative approach we propose here using the innovative imaging flow cytometry combined with confocal microscopy could be used to identify the physico-chemical characteristics of NPs involved in their uptake in view to redesign safe NPs. PMID:23388071

  1. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...: Reinforced Plastic Composites Production Testing and Initial Compliance Requirements § 63.5850 How do I... test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to you... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies....

  2. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to you... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies. (c... and under the specific conditions that 40 CFR part 63, subpart SS, specifies. (d) You may not...

  3. Performance Evaluation of Triangulation Based Range Sensors

    PubMed Central

    Guidi, Gabriele; Russo, Michele; Magrassi, Grazia; Bordegoni, Monica

    2010-01-01

    The performance of 2D digital imaging systems depends on several factors related to both optical and electronic processing. These concepts have given rise to standards, conceived for photographic equipment and two-dimensional scanning systems, aimed at estimating parameters such as resolution, noise or dynamic range. Conversely, no standard test protocols currently exist for evaluating the corresponding performance of 3D imaging systems such as laser scanners or pattern projection range cameras. This paper is focused on investigating experimental processes for evaluating some critical parameters of 3D equipment, by extending the concepts defined by the ISO standards to the 3D domain. The experimental part of this work concerns the characterization of different range sensors through the extraction of their resolution, accuracy and uncertainty from sets of 3D data acquisitions of specifically designed test objects whose geometrical characteristics are known in advance. The major objective of this contribution is to suggest an easy characterization process for generating a reliable comparison between the performances of different range sensors and to check whether a specific piece of equipment is compliant with the expected characteristics. PMID:22163599
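
    One simple ingredient of such a characterization process is fitting a plane to a scan of a nominally flat test object and treating the residual spread as a noise/uncertainty indicator; a minimal sketch is below. Assessing accuracy additionally requires comparison against certified reference geometry, which is not shown here.

```python
import numpy as np

def plane_fit_residuals(points):
    """Least-squares plane fit to an (N, 3) point cloud; returns signed residuals."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    return (points - centroid) @ normal

def uncertainty_estimate(points):
    """RMS of plane-fit residuals as a simple measurement-noise indicator."""
    r = plane_fit_residuals(points)
    return float(np.sqrt(np.mean(r ** 2)))
```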

  4. Performance evaluation of an automotive thermoelectric generator

    NASA Astrophysics Data System (ADS)

    Dubitsky, Andrei O.

    Around 40% of the total fuel energy in typical internal combustion engines (ICEs) is rejected to the environment in the form of exhaust gas waste heat. Efficient recovery of this waste heat in automobiles can promise a fuel economy improvement of 5%. The thermal energy can be harvested through thermoelectric generators (TEGs) utilizing the Seebeck effect. In the present work, a versatile test bench has been designed and built in order to simulate conditions found on test vehicles. This allows experimental performance evaluation and model validation of automotive thermoelectric generators. An electrically heated exhaust gas circuit and a circulator based coolant loop enable integrated system testing of hot and cold side heat exchangers, thermoelectric modules (TEMs), and thermal interface materials at various scales. A transient thermal model of the coolant loop was created in order to design a system which can maintain constant coolant temperature under variable heat input. Additionally, as electrical heaters cannot match the transient response of an ICE, modelling was completed in order to design a relaxed exhaust flow and temperature history utilizing the system thermal lag. This profile reduced required heating power and gas flow rates by over 50%. The test bench was used to evaluate a DOE/GM initial prototype automotive TEG and validate analytical performance models. The maximum electrical power generation was found to be 54 W with a thermal conversion efficiency of 1.8%. It has been found that thermal interface management is critical for achieving maximum system performance, with novel designs being considered for further improvement.

  5. Evaluating iterative reconstruction performance in computed tomography

    SciTech Connect

    Chen, Baiyu Solomon, Justin; Ramirez Giraldo, Juan Carlos; Samei, Ehsan

    2014-12-15

    Purpose: Iterative reconstruction (IR) offers notable advantages in computed tomography (CT). However, its performance characterization is complicated by its potentially nonlinear behavior, impacting performance in terms of specific tasks. This study aimed to evaluate the performance of IR with both task-specific and task-generic strategies. Methods: The performance of IR in CT was mathematically assessed with an observer model that predicted the detection accuracy in terms of the detectability index (d′). d′ was calculated based on the properties of the image noise and resolution, the observer, and the detection task. The characterizations of image noise and resolution were extended to accommodate the nonlinearity of IR. A library of tasks was mathematically modeled at a range of sizes (radius 1–4 mm), contrast levels (10–100 HU), and edge profiles (sharp and soft). Unique d′ values were calculated for each task with respect to five radiation exposure levels (volume CT dose index, CTDIvol: 3.4–64.8 mGy) and four reconstruction algorithms (filtered backprojection reconstruction, FBP; iterative reconstruction in imaging space, IRIS; and sinogram affirmed iterative reconstruction with strengths of 3 and 5, SAFIRE3 and SAFIRE5; all provided by Siemens Healthcare, Forchheim, Germany). The d′ values were translated into the areas under the receiver operating characteristic curve (AUC) to represent human observer performance. For each task and reconstruction algorithm, a threshold dose was derived as the minimum dose required to achieve a threshold AUC of 0.9. A task-specific dose reduction potential of IR was calculated as the difference between the threshold doses for IR and FBP. A task-generic comparison was further made between IR and FBP in terms of the percent of all tasks yielding an AUC higher than the threshold. Results: IR required less dose than FBP to achieve the threshold AUC. In general, SAFIRE5 showed the most significant dose reduction
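
    The chain from detectability index to AUC to threshold dose can be sketched as follows, assuming the equal-variance Gaussian relation AUC = Φ(d′/√2) and a simple interpolation over the dose grid; the study's exact observer model and interpolation procedure are not reproduced here.

```python
import numpy as np
from math import erf

def auc_from_dprime(dprime):
    """AUC under the equal-variance Gaussian assumption:
    AUC = Phi(d'/sqrt(2)) = 0.5 * (1 + erf(d'/2))."""
    return 0.5 * (1.0 + erf(dprime / 2.0))

def threshold_dose(doses_mgy, dprimes, auc_target=0.9):
    """Minimum dose reaching the target AUC, by interpolating over the dose grid
    (assumes AUC increases monotonically with dose)."""
    aucs = np.array([auc_from_dprime(d) for d in dprimes])
    return float(np.interp(auc_target, aucs, doses_mgy))

# Illustrative dose grid (mGy) and d' values for one task and one algorithm
print(threshold_dose([3.4, 8.1, 16.2, 32.4, 64.8], [0.8, 1.3, 1.9, 2.6, 3.4]))
```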

  6. A performance evaluation of personnel identity verifiers

    SciTech Connect

    Maxwell, R.L.; Wright, L.J.

    1987-01-01

    Personnel identity verification devices, which are based on the examination and assessment of a body feature or a unique repeatable personal action, are steadily improving. These biometric devices are becoming more practical with respect to accuracy, speed, user compatibility, reliability and cost, but more development is necessary to satisfy the varied and sometimes ill-defined future requirements of the security industry. In an attempt to maintain an awareness of the availability and the capabilities of identity verifiers for the DOE security community, Sandia Laboratories continue to comparatively evaluate the capabilities and improvements of developing devices. An evaluation of several recently available verifiers is discussed in this paper. Operating environments and procedures more typical of physical access control use can reveal performance substantially different from the basic laboratory tests.

  7. A performance evaluation of personnel identity verifiers

    SciTech Connect

    Maxwell, R.L.; Wright, L.J.

    1987-07-01

    Personnel identity verification devices, which are based on the examination and assessment of a body feature or a unique repeatable personal action, are steadily improving. These biometric devices are becoming more practical with respect to accuracy, speed, user compatibility, reliability and cost, but more development is necessary to satisfy the varied and sometimes ill-defined future requirements of the security industry. In an attempt to maintain an awareness of the availability and the capabilities of identity verifiers for the DOE security community, Sandia Laboratories continues to comparatively evaluate the capabilities and improvements of developing devices. An evaluation of several recently available verifiers is discussed in this paper. Operating environments and procedures more typical of physical access control use can reveal performance substantially different from the basic laboratory tests.

  8. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions.

    PubMed

    Bendl, Jaroslav; Musil, Miloš; Štourač, Jan; Zendulka, Jaroslav; Damborský, Jiří; Brezovský, Jan

    2016-05-01

    An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools' predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations. To
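
    The category-specific thresholding and consensus idea can be sketched as below; the tool names match those evaluated, but the threshold values and the plain majority-vote rule are hypothetical stand-ins for the optimized thresholds and weighting actually used.

```python
# Hypothetical category-specific decision thresholds (illustrative values only)
THRESHOLDS = {
    "missense":   {"CADD": 20.0, "DANN": 0.90, "FATHMM": 0.45},
    "regulatory": {"CADD": 10.0, "GWAVA": 0.50, "FunSeq2": 1.5},
}

def consensus_prediction(category, scores, thresholds=THRESHOLDS):
    """Binary per-tool calls from category-specific thresholds, combined by majority vote.

    Returns (label, confidence) where confidence is the fraction of agreeing tools.
    """
    cat = thresholds[category]
    calls = [scores[tool] >= cut for tool, cut in cat.items() if tool in scores]
    if not calls:
        raise ValueError("no overlapping tool scores for this category")
    frac = sum(calls) / len(calls)
    label = "deleterious" if frac > 0.5 else "neutral"
    return label, max(frac, 1 - frac)

print(consensus_prediction("missense", {"CADD": 25.3, "DANN": 0.97, "FATHMM": 0.30}))
```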

  9. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions

    PubMed Central

    Brezovský, Jan

    2016-01-01

    An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools’ predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations

  10. Performance evaluation of swimmers: scientific tools.

    PubMed

    Smith, David J; Norris, Stephen R; Hogg, John M

    2002-01-01

    The purpose of this article is to provide a critical commentary of the physiological and psychological tools used in the evaluation of swimmers. The first-level evaluation should be the competitive performance itself, since it is at this juncture that all elements interplay and provide the 'highest form' of assessment. Competition video analysis of major swimming events has progressed to the point where it has become an indispensable tool for coaches, athletes, sport scientists, equipment manufacturers, and even the media. The breakdown of each swimming performance at the individual level to its constituent parts allows for comparison with the predicted or sought after execution, as well as allowing for comparison with identified world competition levels. The use of other 'on-going' monitoring protocols to evaluate training efficacy typically involves criterion 'effort' swims and specific training sets where certain aspects are scrutinised in depth. Physiological parameters that are often examined alongside swimming speed and technical aspects include oxygen uptake, heart rate, blood lactate concentration, blood lactate accumulation and clearance rates. Simple and more complex procedures are available for in-training examination of technical issues. Strength and power may be quantified via several modalities although, typically, tethered swimming and dry-land isokinetic devices are used. The availability of a 'swimming flume' does afford coaches and sport scientists a higher degree of flexibility in the type of monitoring and evaluation that can be undertaken. There is convincing evidence that athletes can be distinguished on the basis of their psychological skills and emotional competencies and that these differences become further accentuated as the athlete improves. No matter what test format is used (physiological, biomechanical or psychological), similar criteria of validity must be ensured so that the test provides useful and associative information

  11. Performance evaluation of swimmers: scientific tools.

    PubMed

    Smith, David J; Norris, Stephen R; Hogg, John M

    2002-01-01

    The purpose of this article is to provide a critical commentary of the physiological and psychological tools used in the evaluation of swimmers. The first-level evaluation should be the competitive performance itself, since it is at this juncture that all elements interplay and provide the 'highest form' of assessment. Competition video analysis of major swimming events has progressed to the point where it has become an indispensable tool for coaches, athletes, sport scientists, equipment manufacturers, and even the media. The breakdown of each swimming performance at the individual level to its constituent parts allows for comparison with the predicted or sought after execution, as well as allowing for comparison with identified world competition levels. The use of other 'on-going' monitoring protocols to evaluate training efficacy typically involves criterion 'effort' swims and specific training sets where certain aspects are scrutinised in depth. Physiological parameters that are often examined alongside swimming speed and technical aspects include oxygen uptake, heart rate, blood lactate concentration, blood lactate accumulation and clearance rates. Simple and more complex procedures are available for in-training examination of technical issues. Strength and power may be quantified via several modalities although, typically, tethered swimming and dry-land isokinetic devices are used. The availability of a 'swimming flume' does afford coaches and sport scientists a higher degree of flexibility in the type of monitoring and evaluation that can be undertaken. There is convincing evidence that athletes can be distinguished on the basis of their psychological skills and emotional competencies and that these differences become further accentuated as the athlete improves. No matter what test format is used (physiological, biomechanical or psychological), similar criteria of validity must be ensured so that the test provides useful and associative information

  12. Comparative Evaluation of Software Features and Performances.

    PubMed

    Cecconi, Daniela

    2016-01-01

    Analysis of two-dimensional gel images is a crucial step for the determination of changes in the protein expression, but at present, it still represents one of the bottlenecks in 2-DE studies. Over the years, different commercial and academic software packages have been developed for the analysis of 2-DE images. Each of these shows different advantageous characteristics in terms of quality of analysis. In this chapter, the characteristics of the different commercial software packages are compared in order to evaluate their main features and performances.

  13. Performance evaluation of TCP over ABT protocols

    NASA Astrophysics Data System (ADS)

    Ata, Shingo; Murata, Masayuki; Miyahara, Hideo

    1998-10-01

    ABT is promising for effectively transferring highly bursty data traffic in ATM networks. Most past studies focused on the data transfer capability of ABT within the ATM layer. In practice, however, we need to consider the upper-layer transport protocol, since the transport layer protocol also supports a network congestion control mechanism. One such example is TCP, which is now widely used in the Internet. In this paper, we evaluate the performance of TCP over ABT protocols. Simulation results show that the retransmission mechanism of ABT can effectively overlay the TCP congestion control mechanism so that TCP operates in a stable fashion and works well only as an error recovery mechanism.

  14. MSAD actuator solenoid, performance evaluation and modification

    SciTech Connect

    North, G.

    1983-04-19

    A small conical-faced solenoid actuator is tested in order to develop design criteria for improved performance including increased pull sensitivity. In addition to increased pull for the normal electrical inputs, a reduction in pull response to short duration electrical noise pulses is also required. Along with dynamic testing of the solenoid, a linear circuit model is developed. This model permits calculation of the dynamic forces and currents which can be expected with various electrical inputs. The model parameters are related to the actual solenoid and allow the effects of winding density and shading rings to be evaluated.

  15. Sandia solar dryer: preliminary performance evaluation

    SciTech Connect

    Glass, J.S.; Holm-Hansen, T.; Tills, J.; Pierce, J.D.

    1986-01-01

    Preliminary performance evaluations were conducted with the prototype modular solar dryer for wastewater sludge at Sandia National Laboratories. Operational parameters which appeared to influence sludge drying efficiency included condensation system capacity and air turbulence at the sludge surface. Sludge heating profiles showed dependencies on sludge moisture content, sludge depth and seasonal variability in available solar energy. Heat-pasteurization of sludge in the module was demonstrated in two dynamic-processing experiments. Through balanced utilization of drying and heating functions, the facility has the potential for year-round sludge treatment application.

  16. Performance Evaluation of Phasor Measurement Systems

    SciTech Connect

    Huang, Zhenyu; Kasztenny, Bogdan; Madani, Vahid; Martin, Kenneth E.; Meliopoulos, Sakis; Novosel, Damir; Stenbakken, Jerry

    2008-07-20

    After two decades of phasor network deployment, phasor measurements are now available at many major substations and power plants. The North American SynchroPhasor Initiative (NASPI), supported by both the US Department of Energy and the North American Electricity Reliability Council (NERC), provides a forum to facilitate the efforts in phasor technology in North America. Phasor applications have been explored and some are in today’s utility practice. IEEE C37.118 Standard is a milestone in standardizing phasor measurements and defining performance requirements. To comply with IEEE C37.118 and to better understand the impact of phasor quality on applications, the NASPI Performance and Standards Task Team (PSTT) initiated and accomplished the development of two important documents to address characterization of PMUs and instrumentation channels, which leverage prior work (esp. in WECC) and international experience. This paper summarizes the accomplished PSTT work and presents the methods for phasor measurement evaluation.

  17. Evaluating cryostat performance for naval applications

    NASA Astrophysics Data System (ADS)

    Knoll, David; Willen, Dag; Fesmire, James; Johnson, Wesley; Smith, Jonathan; Meneghelli, Barry; Demko, Jonathan; George, Daniel; Fowler, Brian; Huber, Patti

    2012-06-01

    The Navy intends to use High Temperature Superconducting Degaussing (HTSDG) coil systems on future Navy platforms. The Navy Metalworking Center (NMC) is leading a team that is addressing cryostat configuration and manufacturing issues associated with fabricating long lengths of flexible, vacuum-jacketed cryostats that meet Navy shipboard performance requirements. The project includes provisions to evaluate the reliability performance, as well as proofing of fabrication techniques. Navy cryostat performance specifications include less than 1 W m⁻¹ heat loss, 2 MPa working pressure, and a 25-year vacuum life. Cryostat multilayer insulation (MLI) systems developed on the project have been validated using a standardized cryogenic test facility and implemented on 5-meter-long test samples. Performance data from these test samples, which were characterized using both LN2 boiloff and flow-through measurement techniques, will be presented. NMC is working with an Integrated Project Team consisting of Naval Sea Systems Command, Naval Surface Warfare Center-Carderock Division, Southwire Company, nkt cables, Oak Ridge National Laboratory (ORNL), ASRC Aerospace, and NASA Kennedy Space Center (NASA-KSC) to complete these efforts. Approved for public release; distribution is unlimited. This material is submitted with the understanding that right of reproduction for governmental purposes is reserved for the Office of Naval Research, Arlington, Virginia 22203-1995.

  18. Performance evaluation of bound diamond ring tools

    SciTech Connect

    Piscotty, M.A.; Taylor, J.S.; Blaedel, K.L.

    1995-07-14

    LLNL is collaborating with the Center for Optics Manufacturing (COM) and the American Precision Optics Manufacturers Association (APOMA) to optimize bound diamond ring tools for the spherical generation of high quality optical surfaces. An important element of this work is establishing an experimentally-verified link between tooling properties and workpiece quality indicators such as roughness, subsurface damage and removal rate. In this paper, we report on a standardized methodology for assessing ring tool performance and its preliminary application to a set of commercially-available wheels. Our goals are to (1) assist optics manufacturers (users of the ring tools) in evaluating tools and in assessing their applicability for a given operation, and (2) provide performance feedback to wheel manufacturers to help optimize tooling for the optics industry. Our paper includes measurements of wheel performance for three 2-4 micron diamond bronze-bond wheels that were supplied by different manufacturers to nominally identical specifications. Preliminary data suggest that the differences in performance levels among the wheels were small.

  19. 40 CFR 35.9055 - Evaluation of recipient performance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Evaluation of recipient performance. 35... Evaluation of recipient performance. The Regional Administrator will oversee each recipient's performance... schedule for evaluation in the assistance agreement and will evaluate recipient performance and...

  20. 48 CFR 436.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Construction 436.201 Evaluation of contractor performance. Preparation of performance evaluation reports. In addition to the requirements of FAR 36.201, performance evaluation reports shall be prepared for indefinite... of services to be ordered exceeds $500,000.00. For these contracts, performance evaluation...

  1. Development and clinical evaluation of a highly accurate dengue NS1 rapid test: from the preparation of a soluble NS1 antigen to the construction of an RDT.

    PubMed

    Lee, Jihoo; Kim, Hak-Yong; Chong, Chom-Kyu; Song, Hyun-Ok

    2015-06-01

    Early diagnosis of dengue virus (DENV) is important. There are numerous products on the market claiming to detect DENV NS1, but these are not always reliable. In this study, a highly sensitive and accurate rapid diagnostic test (RDT) was developed using anti-dengue NS1 monoclonal antibodies. A recombinant NS1 protein was produced with high antigenicity and purity. Monoclonal antibodies were raised against this purified NS1 antigen. The RDT was constructed using a capturing antibody (4A6A10, Kd = 7.512±0.419×10^-9) and a conjugating antibody (3E12E6, Kd = 7.032±0.322×10^-9). The diagnostic performance was evaluated with NS1-positive clinical samples collected from various dengue endemic countries and compared to the SD BioLine Dengue NS1 Ag kit. The constructed RDT exhibited higher sensitivity (92.9%) and more clear-cut diagnostic results than the commercial kit (83.3%). The specificity of the constructed RDT was 100%. The constructed RDT could offer a reliable point-of-care testing tool for the early detection of dengue infections in remote areas and contribute to the control of dengue-related diseases. PMID:25824725

  3. Evaluation of Infiltration Basin Performance in Florida

    NASA Astrophysics Data System (ADS)

    Bean, E.

    2012-12-01

    Infiltration basins are commonly utilized to reduce or eliminate urban runoff in Florida. For permitting purposes, basins are required to recover their design volume, runoff from a one inch rainfall event, within 72 hours to satisfy the design criteria and are not required to account for groundwater mounding if volume recovery can be accomplished by filling of soil porosity by vertical infiltration below the basin surface. Forty infiltration basins were included in a field study to determine whether basin hydraulic performance was significantly different from their designed performance. Basins ranged in age from less than one year to over twenty years and land uses were equally divided between Florida Department of Transportation (FDOT) and residential developments. Six test sites within each basin were typically selected to measure infiltration rates using a double ring infiltrometer (DRI), a common method for infiltration basin sizing. Measured rates were statistically compared to designed infiltration rates, taking into account factors of safety. In addition, a surface soil boring was collected from each of the test sites for a series of analyses, including soil texture, bulk density, and organic matter content. Eleven of the 40 evaluated basins were monitored between March 2008 and January 2012 to evaluate whether basins recovered their volumes from one inch events within 72 hours and to evaluate the effectiveness of using DRI rates to evaluate basin performance. Based on DRI rates, 16 (40%) basins had rates less than their designed rates, 10 (25%) had rates equal to their designed rates, and 14 (35%) basins had rates greater than their designed rates. Additionally, basins with coarser soils were also more likely to have DRI rates greater than designs and FDOT basins were more likely than residential basins to have infiltration rates at or above their designed rates. Five of the eleven monitored basins were expected to function as designed by recovering their

  4. Ground truth and benchmarks for performance evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

    Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The fundamental problems include a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Positioning System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  5. Manipulator Performance Evaluation Using Fitts' Taping Task

    SciTech Connect

    Draper, J.V.; Jared, B.C.; Noakes, M.W.

    1999-04-25

    Metaphorically, a teleoperator with master controllers projects the user's arms and hands into a remote area. Therefore, human users interact with teleoperators at a more fundamental level than they do with most human-machine systems. Instead of inputting decisions about how the system should function, teleoperator users input the movements they might make if they were truly in the remote area, and the remote machine must recreate their trajectories and impedance. This intense human-machine interaction requires displays and controls more carefully attuned to human motor capabilities than is necessary with most systems. It is important for teleoperated manipulators to be able to recreate human trajectories and impedance in real time. One method for assessing manipulator performance is to observe how well a system behaves while a human user completes human dexterity tasks with it. Fitts' tapping task has been used many times in the past for this purpose. This report describes such a performance assessment. The International Submarine Engineering (ISE) Autonomous/Teleoperated Operations Manipulator (ATOM) servomanipulator system was evaluated using a generic positioning accuracy task. The task is a simple one but has the merits of (1) producing a performance function estimate rather than a point estimate and (2) being widely used in the past for human and servomanipulator dexterity tests. Results of testing using this task may, therefore, allow comparison with other manipulators, and the task is generically representative of a broad class of tasks. Results of the testing indicate that the ATOM manipulator is capable of performing the task. Force reflection had a negative impact on task efficiency in these data. This was most likely caused by the high resistance to movement the master controller exhibited with the force reflection engaged. Measurements of exerted forces were not made, so it is not possible to say whether the force reflection helped participants.
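
    As a rough illustration of how a Fitts'-type tapping task yields a performance function rather than a point estimate, the sketch below fits the standard Fitts' law relation MT = a + b*ID, with index of difficulty ID = log2(2D/W), to mean completion times measured over several target distances D and widths W. The task conditions and timing values are hypothetical and are not taken from the ATOM evaluation.

        import numpy as np

        def index_of_difficulty(distance, width):
            # Original Fitts formulation: ID = log2(2D / W), in bits.
            return np.log2(2.0 * distance / width)

        # Hypothetical task conditions (distance, width) in mm and mean movement times in s.
        conditions = np.array([(80, 20), (160, 20), (160, 10), (320, 10)], dtype=float)
        mean_times = np.array([0.9, 1.2, 1.5, 1.9])   # illustrative values only

        ids = index_of_difficulty(conditions[:, 0], conditions[:, 1])
        # Least-squares fit of MT = a + b * ID; 1/b is a throughput estimate in bits/s.
        b, a = np.polyfit(ids, mean_times, 1)
        print(f"a = {a:.3f} s, b = {b:.3f} s/bit, throughput ~ {1.0 / b:.2f} bits/s")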

  6. A Method for Missile Autopilot Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Eguchi, Hirofumi

    The essential benefit of HardWare-In-the-Loop (HWIL) simulation is that the performance of an autopilot system is evaluated realistically, without modeling error, by using actual hardware such as seeker systems, autopilot systems, and servo equipment. HWIL simulation, however, requires very expensive facilities, in which the target model generator is an indispensable subsystem. In this paper, one example of an HWIL simulation facility with a target model generator for RF seeker systems is introduced first. Like most other generators, however, this generator is functionally limited in line-of-sight angle; a test method to overcome the line-of-sight angle limitation is therefore proposed.

  7. Performance Evaluation and Metrics for Perception in Intelligent Manufacturing

    NASA Astrophysics Data System (ADS)

    Eastman, Roger; Hong, Tsai; Shi, Jane; Hanning, Tobias; Muralikrishnan, Bala; Young, S. Susan; Chang, Tommy

    An unsolved but important problem in intelligent manufacturing is dynamic pose estimation under complex environmental conditions—tracking an object's pose and position as it moves in an environment with uncontrolled lighting and background. This is a central task in robotic perception, and a robust, highly accurate solution would be of use in a number of manufacturing applications. To be commercially feasible, a solution must also be benchmarked against performance standards so manufacturers fully understand its nature and capabilities. The PerMIS 2008 Special Session on “Performance Metrics for Perception in Intelligent Manufacturing,” held August 20, 2008, brought together academic, industrial and governmental researchers interested in calibrating and benchmarking vision and metrology systems. The special session had a series of speakers who each addressed a component of the general problem of benchmarking complex perception tasks, including dynamic pose estimation. The components included assembly line motion analysis, camera calibration, laser tracker calibration, super-resolution range data enhancement and evaluation, and evaluation of 6DOF pose estimation for visual servoing. This Chapter combines and summarizes the results of the special session, giving a framework for benchmarking perception systems and relating the individual components to the general framework.

  8. Performance Evaluations of Ceramic Wafer Seals

    NASA Technical Reports Server (NTRS)

    Dunlap, Patrick H., Jr.; DeMange, Jeffrey J.; Steinetz, Bruce M.

    2006-01-01

    Future hypersonic vehicles will require high temperature, dynamic seals in advanced ramjet/scramjet engines and on the vehicle airframe to seal the perimeters of movable panels, flaps, and doors. Seal temperatures in these locations can exceed 2000 F, especially when the seals are in contact with hot ceramic matrix composite sealing surfaces. NASA Glenn Research Center is developing advanced ceramic wafer seals to meet the needs of these applications. High temperature scrub tests performed between silicon nitride wafers and carbon-silicon carbide rub surfaces revealed high friction forces and evidence of material transfer from the rub surfaces to the wafer seals. Stickage between adjacent wafers was also observed after testing. Several design changes to the wafer seals were evaluated as possible solutions to these concerns. Wafers with recessed sides were evaluated as a potential means of reducing friction between adjacent wafers. Alternative wafer materials are also being considered as a means of reducing friction between the seals and their sealing surfaces and because the baseline silicon nitride wafer material (AS800) is no longer commercially available.

  9. Ultrahigh-Power-Factor Carbon Nanotubes and an Ingenious Strategy for Thermoelectric Performance Evaluation.

    PubMed

    Zhou, Wenbin; Fan, Qingxia; Zhang, Qiang; Li, Kewei; Cai, Le; Gu, Xiaogang; Yang, Feng; Zhang, Nan; Xiao, Zhuojian; Chen, Huiliang; Xiao, Shiqi; Wang, Yanchun; Liu, Huaping; Zhou, Weiya; Xie, Sishen

    2016-07-01

    An ingenious strategy is put forward to accurately evaluate the thermoelectric performance of carbon nanotube (CNT) thin films, including thermal conductivity, electrical conductivity, and Seebeck coefficient measured in the same direction. The results reveal that the as-prepared CNT interconnected films and CNT fibers possess enormous potential for thermoelectric applications because of their ultrahigh power factors. PMID:27199099
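
    For reference, the power factor mentioned above follows directly from the three transport quantities measured in the same direction. A minimal sketch of the standard relations PF = S^2 * sigma and zT = PF * T / kappa is given below; the numbers are illustrative placeholders, not values from this study.

        # Standard thermoelectric relations; all numbers are illustrative placeholders.
        S = 60e-6       # Seebeck coefficient, V/K
        sigma = 5.0e5   # electrical conductivity, S/m
        kappa = 30.0    # thermal conductivity, W/(m K)
        T = 300.0       # absolute temperature, K

        power_factor = S ** 2 * sigma        # W/(m K^2)
        zT = power_factor * T / kappa        # dimensionless figure of merit
        print(f"PF = {power_factor * 1e6:.0f} uW/(m K^2), zT = {zT:.3f}")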

  10. 48 CFR 236.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CONTRACTS Special Aspects of Contracting for Construction 236.201 Evaluation of contractor performance. (a) Preparation of performance evaluation reports. Use DD Form 2626, Performance Evaluation (Construction... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Evaluation of...

  11. SEMICONDUCTOR INTEGRATED CIRCUITS: Accurate metamodels of device parameters and their applications in performance modeling and optimization of analog integrated circuits

    NASA Astrophysics Data System (ADS)

    Tao, Liang; Xinzhang, Jia; Junfeng, Chen

    2009-11-01

    Techniques for constructing metamodels of device parameters at BSIM3v3-level accuracy are presented to improve knowledge-based circuit sizing optimization. Based on an analysis of the prediction error of analytical performance expressions, operating point driven (OPD) metamodels of MOSFETs are introduced to capture the circuit's characteristics precisely. In the metamodel construction algorithm, radial basis functions are adopted to interpolate the scattered multivariate data obtained from a well-tailored data sampling scheme designed for MOSFETs. The OPD metamodels can be used to automatically bias the circuit at a specific DC operating point. Analytical performance expressions composed of the OPD metamodels show obvious improvement for most small-signal performances compared with simulation-based models. Both operating-point variables and transistor dimensions can be optimized in our nested-loop optimization formulation to maximize design flexibility. The method is successfully applied to a low-voltage low-power amplifier.
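
    As a simplified illustration of the interpolation step described above (not the authors' implementation), the sketch below fits a Gaussian radial-basis-function model to scattered samples of a device parameter and evaluates it at a new operating point. The sampling ranges, kernel width, and response surface are all hypothetical.

        import numpy as np

        def rbf_fit(X, y, eps=1.0):
            # Gaussian RBF interpolation: solve Phi w = y on the training samples.
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            Phi = np.exp(-(eps * d) ** 2)
            return np.linalg.solve(Phi, y)

        def rbf_eval(X_train, w, X_new, eps=1.0):
            d = np.linalg.norm(X_new[:, None, :] - X_train[None, :, :], axis=-1)
            return np.exp(-(eps * d) ** 2) @ w

        # Hypothetical scattered samples: inputs are (Vgs, Vds); output is a device parameter such as gm.
        rng = np.random.default_rng(0)
        X = rng.uniform([0.4, 0.2], [1.2, 1.8], size=(60, 2))
        y = 1e-3 * X[:, 0] ** 2 * np.log1p(X[:, 1])    # stand-in response surface

        w = rbf_fit(X, y, eps=2.0)
        print(rbf_eval(X, w, np.array([[0.8, 1.0]]), eps=2.0))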

  12. Computer Aided Evaluation of Higher Education Tutors' Performance

    ERIC Educational Resources Information Center

    Xenos, Michalis; Papadopoulos, Thanos

    2007-01-01

    This article presents a method for computer-aided tutor evaluation: Bayesian Networks are used for organizing the collected data about tutors and for enabling accurate estimations and predictions about future tutor behavior. The model provides indications about each tutor's strengths and weaknesses, which enables the evaluator to exploit strengths…

  13. 48 CFR 1252.216-72 - Performance evaluation plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....216-72 Performance evaluation plan. As prescribed in (TAR) 48 CFR 1216.406(b), insert the following clause: Performance Evaluation Plan (OCT 1994) (a) A Performance Evaluation Plan shall be unilaterally... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Performance...

  14. Polynomial chaos theory for performance evaluation of ATR systems

    NASA Astrophysics Data System (ADS)

    DeVore, Michael D.; Bateman, Alec J.

    2010-04-01

    The development of a more unified theory of automatic target recognition (ATR) has received considerable attention over the last several years from individual researchers, working groups, and workshops. One of the major benefits expected to accrue from such a theory is an ability to analytically derive performance metrics that accurately predict real-world behavior. Numerous sources of uncertainty affect the actual performance of an ATR system, so direct calculation has been limited in practice to a few special cases because of the practical difficulties of manipulating arbitrary probability distributions over high-dimensional spaces. This paper introduces an alternative approach for evaluating ATR performance based on a generalization of Norbert Wiener's polynomial chaos theory. Through this theory, random quantities are expressed not in terms of joint distribution functions but as convergent orthogonal series over a shared random basis. This form can be used to represent any finite-variance distribution and can greatly simplify the propagation of uncertainties through complex systems and algorithms. The paper presents an overview of the relevant theory and, as an example application, a discussion of how it can be applied to model the distribution of position errors from target tracking algorithms.
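
    To make the expansion concrete, the sketch below projects a scalar performance metric driven by a single standard-normal uncertainty onto probabilists' Hermite polynomials and reads the mean and variance off the coefficients. This is only a one-dimensional illustration of a polynomial chaos expansion; the metric used here is hypothetical and unrelated to the ATR models in the paper.

        import numpy as np
        from numpy.polynomial import hermite_e as He
        from math import factorial, sqrt, pi

        def pce_coefficients(f, order, quad_pts=40):
            # Project f(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials He_n.
            x, w = He.hermegauss(quad_pts)      # Gauss quadrature for weight exp(-x^2/2)
            w = w / sqrt(2.0 * pi)              # normalize to the standard normal density
            fx = f(x)
            return np.array([np.sum(w * fx * He.hermeval(x, [0] * n + [1])) / factorial(n)
                             for n in range(order + 1)])

        # Hypothetical metric as a nonlinear function of one uncertain input (e.g., a detection probability).
        f = lambda xi: 1.0 / (1.0 + np.exp(-(0.5 + 0.8 * xi)))

        c = pce_coefficients(f, order=6)
        mean = c[0]
        var = sum(factorial(n) * c[n] ** 2 for n in range(1, len(c)))
        print(f"mean ~ {mean:.4f}, std ~ {np.sqrt(var):.4f}")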

  15. High definition television: Evaluation for remote task performance

    NASA Astrophysics Data System (ADS)

    Draper, J. V.; Handel, S. J.; Herndon, J. N.

    High definition television (HDTV) transmits a video image with more than twice the number of horizontal scan lines that standard resolution television provides (1125 for HDTV to 525 for standard resolution television), with impressive picture quality improvement. These experimental activities are part of a joint collaboration between the U.S. Department of Energy (USDOE) and the Power Reactor and Nuclear Fuel Development Corporation (PNC) of Japan in the field of the Nuclear Fuel Cycle: Reprocessing Technology. Objects in the HDTV picture have more sharply defined edges, better contrast, and more accurate shading and color pattern reproduction. Because television is a key component for teleoperator performance, picture quality improvement could improve speed and accuracy. This paper describes three experiments which evaluated the impact of HDTV on remote task performance. HDTV was compared to standard resolution, monochromatic television and standard resolution, stereoscopic, monochromatic television. Tasks included judgement of depth in a televised scene, visual inspection, and a remote maintenance task. The experiments show that HDTV can improve performance. HDTV is superior to monoscopic, monochromatic, standard resolution television and to stereoscopic television for remote inspection tasks; it is less proficient than stereo television for distance matching. HDTV leads to lower error rate during tasks but does not reduce time required to complete tasks.

  16. The Berlin Brain--Computer Interface: accurate performance from first-session in BCI-naïve subjects.

    PubMed

    Blankertz, Benjamin; Losch, Florian; Krauledat, Matthias; Dornhege, Guido; Curio, Gabriel; Müller, Klaus-Robert

    2008-10-01

    The Berlin Brain-Computer Interface (BBCI) project develops a noninvasive BCI system whose key features are: 1) the use of well-established motor competences as control paradigms; 2) high-dimensional features from multichannel EEG; and 3) advanced machine-learning techniques. Spatio-spectral changes of sensorimotor rhythms are used to discriminate imagined movements (left hand, right hand, and foot). A previous feedback study [M. Krauledat, K.-R. Müller, and G. Curio. (2007) The non-invasive Berlin brain-computer interface: Fast acquisition of effective performance in untrained subjects. NeuroImage. [Online]. 37(2), pp. 539-550. Available: http://dx.doi.org/10.1016/j.neuroimage.2007.01.051] with ten subjects provided preliminary evidence that the BBCI system can be operated at high accuracy for subjects with less than five prior BCI exposures. Here, we demonstrate in a group of 14 fully BCI-naïve subjects that 8 out of 14 BCI novices can perform at >84% accuracy in their very first BCI session, and a further four subjects at >70%. Thus, 12 out of 14 BCI novices had significant above-chance-level performances without any subject training even in the first session, based on an optimized EEG analysis by advanced machine-learning algorithms. PMID:18838371

  17. High-Performance Monopropellants and Catalysts Evaluated

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    The NASA Glenn Research Center is sponsoring efforts to develop advanced monopropellant technology. The focus has been on monopropellant formulations composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants do not have a toxic vapor and do not need the extraordinary procedures for storage, handling, and disposal required of hydrazine (N2H4). Generically, HAN-based monopropellants are denser and have lower freezing points than N2H4. The performance of HAN-based monopropellants depends on the selection of fuel, the HAN-to-fuel ratio, and the amount of water in the formulation. HAN-based monopropellants are not seen as a replacement for N2H4 per se, but rather as a propulsion option in their own right. For example, HAN-based monopropellants would prove beneficial to the orbit insertion of small, power-limited satellites because of this propellant's high performance (reduced system mass), high density (reduced system volume), and low freezing point (elimination of tank and line heaters). Under a Glenn-contracted effort, Aerojet Redmond Rocket Center conducted testing to provide the foundation for the development of monopropellant thrusters with an Isp goal of 250 sec. A modular, workhorse reactor (representative of a 1-lbf thruster) was used to evaluate HAN formulations with catalyst materials. Stoichiometric, oxygen-rich, and fuel-rich formulations of HAN-methanol and HAN-tris(aminoethyl)amine trinitrate were tested to investigate the effects of stoichiometry on combustion behavior. Aerojet found that fuel-rich formulations degrade the catalyst and reactor faster than oxygen-rich and stoichiometric formulations do. A HAN-methanol formulation with a theoretical Isp of 269 sec (designated HAN269MEO) was selected as the baseline. With a combustion efficiency of at least 93 percent demonstrated for HAN-based monopropellants, HAN269MEO will meet the 250-sec Isp goal.

  18. DRACS thermal performance evaluation for FHR

    SciTech Connect

    Lv, Q.; Lin, H. C.; Kim, I. H.; Sun, X.; Christensen, R. N.; Blue, T. E.; Yoder, G. L.; Wilson, D. F.; Sabharwall, P.

    2015-03-01

    Direct Reactor Auxiliary Cooling System (DRACS) is a passive decay heat removal system proposed for the Fluoride-salt-cooled High-temperature Reactor (FHR) that combines coated particle fuel and a graphite moderator with a liquid fluoride salt as the coolant. The DRACS features three coupled natural circulation/convection loops, relying completely on buoyancy as the driving force. These loops are coupled through two heat exchangers, namely, the DRACS Heat Exchanger and the Natural Draft Heat Exchanger. In addition, a fluidic diode is employed to minimize the parasitic flow into the DRACS primary loop and correspondingly the heat loss to the DRACS during normal operation of the reactor, and to keep the DRACS ready for activation, if needed, during accidents. To help with the design and thermal performance evaluation of the DRACS, a computer code using MATLAB has been developed. This code is based on a one-dimensional formulation and its principle is to solve the energy balance and integral momentum equations. By discretizing the DRACS system in the axial direction, a bulk mean temperature is assumed for each mesh cell. The temperatures of all the cells, as well as the mass flow rates in the DRACS loops, are predicted by solving the governing equations that are obtained by integrating the energy conservation equation over each cell and integrating the momentum conservation equation over each of the DRACS loops. In addition, an intermediate heat transfer loop equipped with a pump has also been modeled in the code. This enables the study of flow reversal phenomenon in the DRACS primary loop, associated with the pump trip process. Experimental data from a High-Temperature DRACS Test Facility (HTDF) are not available yet to benchmark the code. A preliminary code validation is performed by using natural circulation experimental data available in the literature, which are as closely relevant as possible. The code is subsequently applied to the HTDF that is under
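
    A heavily simplified sketch of the kind of one-dimensional balance such a code solves is given below: for a single natural-circulation loop, the steady mass flow rate is found where the buoyancy head from the hot-to-cold density difference matches the friction loss, with the loop temperature rise given by an energy balance. The geometry, fluid properties, and loss coefficient are hypothetical, and the actual DRACS code couples three loops through two heat exchangers rather than solving one loop.

        import numpy as np

        # Hypothetical single-loop parameters.
        Q = 50.0e3     # decay heat removed, W
        cp = 2386.0    # coolant specific heat, J/(kg K)
        rho = 1940.0   # reference density, kg/m^3
        beta = 2.0e-4  # thermal expansion coefficient, 1/K
        g = 9.81       # gravitational acceleration, m/s^2
        H = 4.0        # thermal-center elevation difference, m
        K = 25.0       # lumped loop loss coefficient
        A = 5.0e-3     # flow area, m^2

        m_dot = 1.0    # initial guess, kg/s
        for _ in range(200):
            dT = Q / (m_dot * cp)                              # energy balance over the heated section
            dp_buoy = rho * beta * dT * g * H                  # buoyancy driving head, Pa
            m_dot_new = A * np.sqrt(2.0 * rho * dp_buoy / K)   # friction: dp = K m^2 / (2 rho A^2)
            if abs(m_dot_new - m_dot) < 1e-9:
                break
            m_dot = 0.5 * (m_dot + m_dot_new)                  # damped fixed-point iteration
        print(f"m_dot ~ {m_dot:.3f} kg/s, loop dT ~ {Q / (m_dot * cp):.1f} K")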

  19. An accurate method for evaluating the kernel of the integral equation relating lift to downwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations that attain any desired trade-off between accuracy and computing cost.
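
    A compact sketch of the fitting step described above: a sampled algebraic function is approximated by a sum of exponentials whose exponents form a geometric sequence, with the linear coefficients obtained by least squares. Only the coefficients are fitted here; in the report the exponent multiplier is also optimized by least squares. The target function and parameters below are illustrative.

        import numpy as np

        def fit_exponential_sum(x, f, n_terms, b0, ratio):
            # Approximate f(x) ~ sum_k a_k exp(-b_k x) with b_k = b0 * ratio**k (geometric spacing).
            b = b0 * ratio ** np.arange(n_terms)
            A = np.exp(-np.outer(x, b))
            coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
            return coeffs, b

        # Hypothetical target: an algebraic decay sampled on [0, 10].
        x = np.linspace(0.0, 10.0, 400)
        f = 1.0 / np.sqrt(1.0 + x ** 2)

        a, b = fit_exponential_sum(x, f, n_terms=8, b0=0.05, ratio=2.0)
        approx = np.exp(-np.outer(x, b)) @ a
        print("max abs error:", np.max(np.abs(approx - f)))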

  20. LANDSAT-4 horizon scanner performance evaluation

    NASA Technical Reports Server (NTRS)

    Bilanow, S.; Chen, L. C.; Davis, W. M.; Stanley, J. P.

    1984-01-01

    Representative data spans covering a little more than a year since the LANDSAT-4 launch were analyzed to evaluate the flight performance of the satellite's horizon scanner. High frequency noise was filtered out by 128-point averaging. The effects of Earth oblateness and spacecraft altitude variations are modeled, and residual systematic errors are analyzed. A model for the predicted radiance effects is compared with the flight data and deficiencies in the radiance effects modeling are noted. Correction coefficients are provided for a finite Fourier series representation of the systematic errors in the data. Analysis of the seasonal dependence of the coefficients indicates the effects of some early mission problems with the reference attitudes which were computed by the onboard computer using star trackers and gyro data. The effects of sun and moon interference, unexplained anomalies in the data, and sensor noise characteristics and their power spectrum are described. The variability of full orbit data averages is shown. Plots of the sensor data for all the available data spans are included.
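
    The two post-processing steps mentioned above, block averaging to suppress high-frequency noise and a finite Fourier series fit of the systematic error versus orbit phase, can be sketched as follows. The harmonic count and the synthetic residual data are illustrative only.

        import numpy as np

        def block_average(x, n=128):
            # Average consecutive blocks of n samples to filter high-frequency noise.
            m = (len(x) // n) * n
            return x[:m].reshape(-1, n).mean(axis=1)

        def fit_fourier_series(theta, r, n_harmonics):
            # Least-squares fit of r(theta) ~ a0 + sum_k [a_k cos(k theta) + b_k sin(k theta)].
            cols = [np.ones_like(theta)]
            for k in range(1, n_harmonics + 1):
                cols += [np.cos(k * theta), np.sin(k * theta)]
            A = np.column_stack(cols)
            coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
            return coeffs

        # Synthetic horizon-scanner residuals versus orbit phase (illustrative only).
        theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
        noise = 0.01 * np.random.default_rng(1).standard_normal(theta.size)
        residual = 0.05 * np.cos(theta) + 0.02 * np.sin(2 * theta) + noise

        coeffs = fit_fourier_series(block_average(theta), block_average(residual), n_harmonics=4)
        print("fitted Fourier coefficients:", np.round(coeffs, 4))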

  1. Coherent lidar airborne windshear sensor: performance evaluation.

    PubMed

    Targ, R; Kavaya, M J; Huffaker, R M; Bowles, R L

    1991-05-20

    National attention has focused on the critical problem of detecting and avoiding windshear since the crash on 2 Aug. 1985 of a Lockheed L-1011 at Dallas/Fort Worth International Airport. As part of the NASA/FAA National Integrated Windshear Program, we have defined a measurable windshear hazard index that can be remotely sensed from an aircraft, to give the pilot information about the wind conditions he will experience at some later time if he continues along the present flight path. A technology analysis and end-to-end performance simulation measuring signal-to-noise ratios and resulting wind velocity errors for competing coherent laser radar (lidar) systems have been carried out. The results show that a Ho:YAG lidar at a wavelength of 2.1 microm and a CO(2) lidar at 10.6 microm can give the pilot information about the line-of-sight component of a windshear threat from his present position to a region extending 2-4 km in front of the aircraft. This constitutes a warning time of 20-40 s, even in conditions of moderately heavy precipitation. Using these results, a Coherent Lidar Airborne Shear Sensor (CLASS) that uses a Q-switched CO(2) laser at 10.6 microm is being designed and developed for flight evaluation in the fall of 1991.

  2. Performance evaluation of an infrared thermocouple.

    PubMed

    Chen, Chiachung; Weng, Yu-Kai; Shen, Te-Ching

    2010-01-01

    The measurement of the leaf temperature of forests or agricultural plants is an important technique for monitoring the physiological state of crops. The infrared thermometer is a convenient device due to its fast response and nondestructive measurement technique. Nowadays, a novel infrared thermocouple, developed with the same measurement principle as the infrared thermometer but using a different detector, has been commercialized for non-contact temperature measurement. The performance of two kinds of infrared thermocouples was evaluated in this study. The standard temperature was maintained by a temperature calibrator and a special black cavity device. The results indicated that both types of infrared thermocouples had good precision. The error distribution ranged from -1.8 °C to 18 °C when the reading values were taken as the true values. Within the range from 13 °C to 37 °C, the adequate calibration equations were high-order polynomial equations. Within the narrower range from 20 °C to 35 °C, the adequate equation was a linear equation for one sensor and a second-order polynomial equation for the other sensor. The accuracy of the two kinds of infrared thermocouple was improved by nearly 0.4 °C with the calibration equations. These devices could serve as mobile monitoring tools for in situ, real-time routine estimation of leaf temperatures.
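
    A minimal sketch of the calibration fits described above: sensor readings taken against a reference temperature are fitted with linear and higher-order polynomial correction equations, and the residual errors are compared. The calibration points below are hypothetical, not the paper's data.

        import numpy as np

        # Hypothetical calibration data: sensor reading vs. reference temperature, in deg C.
        reference = np.array([13, 16, 19, 22, 25, 28, 31, 34, 37], dtype=float)
        reading = reference + np.array([1.4, 1.1, 0.9, 0.6, 0.3, 0.2, -0.1, -0.4, -0.8])

        # Fit correction models mapping reading -> reference.
        lin = np.polyfit(reading, reference, 1)
        quad = np.polyfit(reading, reference, 2)

        def rmse(coeffs):
            return np.sqrt(np.mean((np.polyval(coeffs, reading) - reference) ** 2))

        print(f"linear RMSE = {rmse(lin):.3f} C, quadratic RMSE = {rmse(quad):.3f} C")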

  3. A robust and accurate center-frequency estimation (RACE) algorithm for improving motion estimation performance of SinMod on tagged cardiac MR images without known tagging parameters.

    PubMed

    Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei

    2014-11-01

    A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively, and efficiently produce a very appropriate CF estimate for the SinMod method, even when the tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Several other available CF estimation algorithms are included for comparison. Several validation approaches that can work on real data without ground truth are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod and validate the effectiveness of RACE in improving the motion estimation performance of SinMod.
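
    The mean-shift step at the heart of the CF estimation can be illustrated in one dimension: frequency samples weighted by spectral magnitude are shifted iteratively toward the local center of mass under a Gaussian kernel until convergence. The sketch below is a generic 1-D weighted mean shift on a synthetic spectrum, not the full RACE pipeline, which also combines estimates from two tag directions.

        import numpy as np

        def mean_shift_peak(freqs, weights, start, bandwidth, tol=1e-6, max_iter=200):
            # 1-D weighted mean shift: move toward the local center of mass of the spectrum.
            x = float(start)
            for _ in range(max_iter):
                k = weights * np.exp(-0.5 * ((freqs - x) / bandwidth) ** 2)
                x_new = np.sum(k * freqs) / np.sum(k)
                if abs(x_new - x) < tol:
                    break
                x = x_new
            return x

        # Synthetic magnitude spectrum with a dominant tag frequency near 0.14 cycles/pixel.
        freqs = np.linspace(0.0, 0.5, 512)
        spectrum = (np.exp(-0.5 * ((freqs - 0.14) / 0.01) ** 2)
                    + 0.1 * np.random.default_rng(2).random(freqs.size))

        cf = mean_shift_peak(freqs, spectrum, start=0.10, bandwidth=0.02)
        print(f"estimated center frequency ~ {cf:.4f} cycles/pixel")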

  4. Evaluation of gravimetric and volumetric dispensers of particles of nuclear material. [Accurate dispensing of fissile and fertile fuel into fuel rods

    SciTech Connect

    Bayne, C.K.; Angelini, P.

    1981-08-01

    Theoretical and experimental studies compared the abilities of volumetric and gravimetric dispensers to accurately dispense fissile and fertile fuel particles. Such devices are being developed for the fabrication of sphere-pac fuel rods for high-temperature gas-cooled, light water, and fast breeder reactors. The theoretical examination suggests that, although the fuel particles are dispensed more accurately by the gravimetric dispenser, the amount of nuclear material in the fuel particles dispensed by the two methods is not significantly different. The experimental results demonstrated that the volumetric dispenser can dispense both fuel particles and nuclear materials that meet standards for fabricating fuel rods. Performance of the more complex gravimetric dispenser was not significantly better than that of the simple yet accurate volumetric dispenser.

  5. Appraisal of Artificial Screening Techniques of Tomato to Accurately Reflect Field Performance of the Late Blight Resistance

    PubMed Central

    Nowakowska, Marzena; Nowicki, Marcin; Kłosińska, Urszula; Maciorowski, Robert; Kozik, Elżbieta U.

    2014-01-01

    Late blight (LB) caused by the oomycete Phytophthora infestans continues to thwart global tomato production, while only a few resistant cultivars have been introduced locally. In order to gain from the released tomato germplasm with LB resistance, we compared the 5-year field performance of LB resistance in several tomato cultigens with the results of controlled-conditions testing (i.e., detached leaflet/leaf, whole plant). In the case of these artificial screening techniques, the effects of plant age and inoculum concentration were additionally considered. In the field trials, LA 1033, L 3707, and L 3708 displayed the highest LB resistance and could be used for cultivar development under Polish conditions. Of the three methods using controlled conditions, the detached leaf and the whole plant tests had the highest correlation with the field experiments. The plant age effect on LB resistance in tomato reported here, irrespective of the cultigen tested or inoculum concentration used, makes it important to standardize the test parameters when screening for resistance. Our results help show why other reports disagree on LB resistance in tomato. PMID:25279467

  6. 48 CFR 8.406-7 - Contractor Performance Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Performance Evaluation. Ordering activities must prepare an evaluation of contractor performance for each... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Contractor Performance Evaluation. 8.406-7 Section 8.406-7 Federal Acquisition Regulations System FEDERAL ACQUISITION...

  7. 48 CFR 1552.209-76 - Contractor performance evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 1552.209-76 Contractor performance evaluations. As prescribed in section 1509.170-1, insert the following clause in all applicable solicitations and contracts. Contractor Performance Evaluations (OCT 2002... compliance with safety standards performance categories if deemed appropriate for the evaluation or...

  8. 10 CFR 1045.9 - RD classification performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false RD classification performance evaluation. 1045.9 Section... classification performance evaluation. (a) Heads of agencies shall ensure that RD management officials and those... RD or FRD documents shall have their personnel performance evaluated with respect to...

  9. 24 CFR 570.491 - Performance and evaluation report.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 3 2010-04-01 2010-04-01 false Performance and evaluation report... Development Block Grant Program § 570.491 Performance and evaluation report. The annual performance and evaluation report shall be submitted in accordance with 24 CFR part 91. (Approved by the Office of...

  10. 24 CFR 570.491 - Performance and evaluation report.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Development Block Grant Program § 570.491 Performance and evaluation report. The annual performance and evaluation report shall be submitted in accordance with 24 CFR part 91. (Approved by the Office of Management... 24 Housing and Urban Development 3 2011-04-01 2010-04-01 true Performance and evaluation...

  11. Accurate Evaluation of Ion Conductivity of the Gramicidin A Channel Using a Polarizable Force Field without Any Corrections.

    PubMed

    Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui

    2016-06-14

    Classical molecular dynamics (MD) simulation of membrane proteins faces significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability due to its inability to precisely describe the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K(+) and Na(+) permeating through the gramicidin A channel are characterized by using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicated that by explicitly introducing the multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K(+) through the gA channel is considerably reduced compared to the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductance, without any corrections, for both K(+) and Na(+) passing through the gA channel is much closer to the experimental results than that from any classical MD simulations, demonstrating the power of AMOEBA in investigating membrane proteins. PMID:27171823

  12. Evaluating a Performance-Ideal vs. Great Performance

    ERIC Educational Resources Information Center

    Bar-Elli, Gilead

    2004-01-01

    Based on a conception in which a musical composition determines aesthetic-normative properties, a distinction is drawn between two notions of performance: the "autonomous", in which a performance is regarded as a musical work on its own, and the "intentionalistic", in which it is regarded as essentially of a particular work. An ideal…

  13. Flexible pavement performance evaluation using deflection criteria

    NASA Astrophysics Data System (ADS)

    Wedner, R. J.

    1980-04-01

    Flexible pavement projects in Nebraska were monitored for dynamic deflections, roughness, and distress for six consecutive years. Present surface conditions were characterized, and data for evaluating rehabilitation needs, including the amount of overlay, were provided. The data were evaluated and factors were isolated for determining the structural adequacy of flexible pavements, evaluating existing pavement strength and soil subgrade conditions, and determining overlay thickness requirements. Terms for evaluating structural condition for pavement sufficiency ratings were developed, and existing soil support value and subgrade strength province maps were evaluated.

  14. Small and efficient basis sets for the evaluation of accurate interaction energies: aromatic molecule-argon ground-state intermolecular potentials and rovibrational states.

    PubMed

    Cybulski, Hubert; Baranowska-Łączkowska, Angelika; Henriksen, Christian; Fernández, Berta

    2014-11-01

    By evaluating a representative set of CCSD(T) ground state interaction energies for van der Waals dimers formed by aromatic molecules and the argon atom, we test the performance of the polarized basis sets of Sadlej et al. (J. Comput. Chem. 2005, 26, 145; Collect. Czech. Chem. Commun. 1988, 53, 1995) and the augmented polarization-consistent bases of Jensen (J. Chem. Phys. 2002, 117, 9234) in providing accurate intermolecular potentials for the benzene-, naphthalene-, and anthracene-argon complexes. The basis sets are extended by addition of midbond functions. As reference we consider CCSD(T) results obtained with Dunning's bases. For the benzene complex a systematic basis set study resulted in the selection of the (Z)Pol-33211 and the aug-pc-1-33321 bases to obtain the intermolecular potential energy surface. The interaction energy values and the shape of the CCSD(T)/(Z)Pol-33211 calculated potential are very close to the best available CCSD(T)/aug-cc-pVTZ-33211 potential with the former basis set being considerably smaller. The corresponding differences for the CCSD(T)/aug-pc-1-33321 potential are larger. In the case of the naphthalene-argon complex, following a similar study, we selected the (Z)Pol-3322 and aug-pc-1-333221 bases. The potentials show four symmetric absolute minima with energies of -483.2 cm(-1) for the (Z)Pol-3322 and -486.7 cm(-1) for the aug-pc-1-333221 basis set. To further check the performance of the selected basis sets, we evaluate intermolecular bound states of the complexes. The differences between calculated vibrational levels using the CCSD(T)/(Z)Pol-33211 and CCSD(T)/aug-cc-pVTZ-33211 benzene-argon potentials are small and for the lowest energy levels do not exceed 0.70 cm(-1). Such differences are substantially larger for the CCSD(T)/aug-pc-1-33321 calculated potential. For naphthalene-argon, bound state calculations demonstrate that the (Z)Pol-3322 and aug-pc-1-333221 potentials are of similar quality. The results show that these

  15. Development and evaluation of a liquid chromatography-mass spectrometry method for rapid, accurate quantitation of malondialdehyde in human plasma.

    PubMed

    Sobsey, Constance A; Han, Jun; Lin, Karen; Swardfager, Walter; Levitt, Anthony; Borchers, Christoph H

    2016-09-01

    Malondialdehyde (MDA) is a commonly used marker of lipid peroxidation in oxidative stress. To provide a sensitive analytical method that is compatible with high throughput, we developed a multiple reaction monitoring-mass spectrometry (MRM-MS) approach using 3-nitrophenylhydrazine chemical derivatization, isotope-labeling, and liquid chromatography (LC) with electrospray ionization (ESI)-tandem mass spectrometry to accurately quantify MDA in human plasma. A stable isotope-labeled internal standard was used to compensate for ESI matrix effects. The assay is linear (R^2 = 0.9999) over a 20,000-fold concentration range with a lower limit of quantitation of 30 fmol (on-column). Intra- and inter-run coefficients of variation (CVs) were <2% and ~10%, respectively. The derivative was stable for >36 h at 5 °C. Standards spiked into plasma had recoveries of 92-98%. When compared to a common LC-UV method, the LC-MS method found near-identical MDA concentrations. A pilot project to quantify MDA in patient plasma samples (n=26) in a study of major depressive disorder with winter-type seasonal pattern (MDD-s) confirmed known associations between MDA concentrations and obesity (p<0.02). The LC-MS method provides high sensitivity and high reproducibility for quantifying MDA in human plasma. The simple sample preparation and rapid analysis time (5x faster than LC-UV) offer high throughput for large-scale clinical applications. PMID:27437618

  16. Performance Analysis of GYRO: A Tool Evaluation

    SciTech Connect

    Worley, P.; Roth, P.; Candy, J.; Shan, Hongzhang; Mahinthakumar,G.; Sreepathi, S.; Carrington, L.; Kaiser, T.; Snavely, A.; Reed, D.; Zhang, Y.; Huck, K.; Malony, A.; Shende, S.; Moore, S.; Wolf, F.

    2005-06-26

    The performance of the Eulerian gyrokinetic-Maxwell solver code GYRO is analyzed on five high performance computing systems. First, a manual approach is taken, using custom scripts to analyze the output of embedded wall clock timers, floating point operation counts collected using hardware performance counters, and traces of user and communication events collected using the profiling interface to Message Passing Interface (MPI) libraries. Parts of the analysis are then repeated or extended using a number of sophisticated performance analysis tools: IPM, KOJAK, SvPablo, TAU, and the PMaC modeling tool suite. The paper briefly discusses what has been discovered via this manual analysis process, what performance analyses are inconvenient or infeasible to attempt manually, and to what extent the tools show promise in accelerating or significantly extending the manual performance analyses.

  17. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    NASA Astrophysics Data System (ADS)

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
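
    As a simplified illustration of the quantity discussed above (not the authors' workflow), hydrocarbon pore thickness over an interval is commonly accumulated as the sum of net-pay bed thickness times porosity times hydrocarbon saturation. The per-bed log values below are hypothetical.

        import numpy as np

        # Hypothetical per-bed summary: thickness (ft), porosity (frac), water saturation (frac), net-pay flag.
        thickness = np.array([0.3, 0.8, 0.2, 1.5, 0.4])
        porosity = np.array([0.18, 0.21, 0.12, 0.23, 0.16])
        sw = np.array([0.35, 0.30, 0.60, 0.25, 0.40])
        is_net = np.array([1, 1, 0, 1, 1])   # thin beds counted as net pay when flagged

        hpt = np.sum(thickness * porosity * (1.0 - sw) * is_net)   # hydrocarbon pore thickness, ft
        net_sand = np.sum(thickness * is_net)
        print(f"net sand = {net_sand:.2f} ft, HPT = {hpt:.3f} ft")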

  18. FLUORESCENT TRACER EVALUATION OF PROTECTIVE CLOTHING PERFORMANCE

    EPA Science Inventory

    Field studies evaluating chemical protective clothing (CPC), which is often employed as a primary control option to reduce occupational exposures during pesticide applications, are limited. This study, supported by the U.S. Environmental Protection Agency (EPA), was designed to...

  19. How accurately can students estimate their performance on an exam and how does this relate to their actual performance on the exam?

    NASA Astrophysics Data System (ADS)

    Rebello, N. Sanjay

    2012-02-01

    Research has shown that students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated performance and actual performance on five exams in a second-semester calculus-based physics class. Students were given about 72 hours after the completion of each of five exams to estimate their individual score and the class mean score on that exam. Students were given extra credit worth 1% of the exam points for estimating their score correctly within 2% of the actual score and another 1% extra credit for estimating the class mean score within 2% of the correct value. I compared students' individual and mean score estimations with the actual scores to investigate the relationship between estimation accuracy and exam performance, as well as trends over the semester.

  20. Evaluating Performances of Solar-Energy Systems

    NASA Technical Reports Server (NTRS)

    Jaffe, L. D.

    1987-01-01

    The CONC11 computer program calculates the performance of dish-type solar thermal collectors and power systems. A solar thermal power system consists of one or more collectors, power-conversion subsystems, and power-processing subsystems. CONC11 is intended to aid the system designer in comparing the performance of various design alternatives. Written in Athena FORTRAN and Assembler.

  1. Building China's municipal healthcare performance evaluation system: a Tuscan perspective.

    PubMed

    Li, Hao; Barsanti, Sara; Bonini, Anna

    2012-08-01

    Regional healthcare performance evaluation systems can help optimize healthcare resources on regional basis and improve the performance of healthcare services provided. The Tuscany region in Italy is a good example of an institution which meets these requirements. China has yet to build such a system based on international experience. In this paper, based on comparative studies between Tuscany and China, we propose that the managing institutions in China's experimental cities can select and commission a third-party agency to, respectively, evaluate the performance of their affiliated hospitals and community health service centers. Following some features of the Tuscan experience, the Chinese municipal healthcare performance evaluation system can be built by focusing on the selection of an appropriate performance evaluation agency, the design of an adequate performance evaluation mechanism and the formulation of a complete set of laws, rules and regulations. When a performance evaluation system at city level is formed, the provincial government can extend the successful experience to other cities.

  2. 13 CFR 306.7 - Performance evaluations of University Centers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Performance evaluations of..., DEPARTMENT OF COMMERCE TRAINING, RESEARCH AND TECHNICAL ASSISTANCE INVESTMENTS University Center Economic Development Program § 306.7 Performance evaluations of University Centers. (a) EDA will: (1) Evaluate...

  3. 48 CFR 1536.201 - Evaluation of contracting performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Contracting for Construction 1536.201 Evaluation of contracting performance. (a) The Contracting Officer will... will file the form in the contractor performance evaluation files which it maintains. (e) The Quality... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Evaluation of...

  4. 48 CFR 2936.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Construction 2936.201 Evaluation of contractor performance. The HCA must establish procedures to evaluate... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Evaluation of contractor performance. 2936.201 Section 2936.201 Federal Acquisition Regulations System DEPARTMENT OF LABOR...

  5. 48 CFR 36.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Contracting for Construction 36.201 Evaluation of contractor performance. See 42.1502(e) for the requirements for preparing past performance evaluations for construction contracts. ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Evaluation of...

  6. 13 CFR 306.7 - Performance evaluations of University Centers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Performance evaluations of University Centers. 306.7 Section 306.7 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION... Development Program § 306.7 Performance evaluations of University Centers. (a) EDA will: (1) Evaluate...

  7. Team Primacy Concept (TPC) Based Employee Evaluation and Job Performance

    ERIC Educational Resources Information Center

    Muniute, Eivina I.; Alfred, Mary V.

    2007-01-01

    This qualitative study explored how employees learn from Team Primacy Concept (TPC) based employee evaluation and how they use the feedback in performing their jobs. TPC based evaluation is a form of multirater evaluation, during which the employee's performance is discussed by one's peers in a face-to-face team setting. The study used Kolb's…

  8. Proper bibeta ROC model: algorithm, software, and performance evaluation

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Hu, Nan

    2016-03-01

    Semi-parametric models are often used to fit data collected in receiver operating characteristic (ROC) experiments to obtain a smooth ROC curve and ROC parameters for statistical inference purposes. The proper bibeta model as recently proposed by Mossman and Peng enjoys several theoretical properties. In addition to having explicit density functions for the latent decision variable and an explicit functional form of the ROC curve, the two parameter bibeta model also has simple closed-form expressions for true-positive fraction (TPF), false-positive fraction (FPF), and the area under the ROC curve (AUC). In this work, we developed a computational algorithm and R package implementing this model for ROC curve fitting. Our algorithm can deal with any ordinal data (categorical or continuous). To improve accuracy, efficiency, and reliability of our software, we adopted several strategies in our computational algorithm including: (1) the LABROC4 categorization to obtain the true maximum likelihood estimation of the ROC parameters; (2) a principled approach to initializing parameters; (3) analytical first-order and second-order derivatives of the likelihood function; (4) an efficient optimization procedure (the L-BFGS algorithm in the R package "nlopt"); and (5) an analytical delta method to estimate the variance of the AUC. We evaluated the performance of our software with intensive simulation studies and compared with the conventional binormal and the proper binormal-likelihood-ratio models developed at the University of Chicago. Our simulation results indicate that our software is highly accurate, efficient, and reliable.
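
    To make the model structure concrete, the sketch below traces the ROC curve and AUC implied by a pair of beta densities for the latent decision variable in the nondiseased and diseased classes. The parameter values are illustrative, and this generic bibeta form does not reproduce the specific two-parameter "proper" constraint of the Mossman-Peng model or the maximum-likelihood fitting machinery described above.

        import numpy as np
        from scipy.stats import beta

        # Illustrative latent-variable distributions (not fitted values).
        a0, b0 = 2.0, 5.0   # nondiseased class
        a1, b1 = 5.0, 2.0   # diseased class

        thresholds = np.linspace(0.0, 1.0, 1001)
        fpf = 1.0 - beta.cdf(thresholds, a0, b0)   # false-positive fraction at each threshold
        tpf = 1.0 - beta.cdf(thresholds, a1, b1)   # true-positive fraction at each threshold

        # AUC by trapezoidal integration of TPF over FPF (both decrease with threshold).
        auc = np.trapz(tpf[::-1], fpf[::-1])
        print(f"AUC ~ {auc:.4f}")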

  9. Quantum chemical approach for condensed-phase thermochemistry (III): Accurate evaluation of proton hydration energy and standard hydrogen electrode potential

    NASA Astrophysics Data System (ADS)

    Ishikawa, Atsushi; Nakai, Hiromi

    2016-04-01

    Gibbs free energy of hydration of a proton and standard hydrogen electrode potential were evaluated using high-level quantum chemical calculations. The solvent effect was included using the cluster-continuum model, which treated short-range effects by quantum chemical calculations of proton-water complexes, and the long-range effects by a conductor-like polarizable continuum model. The harmonic solvation model (HSM) was employed to estimate enthalpy and entropy contributions due to nuclear motions of the clusters by including the cavity-cluster interactions. Compared to the commonly used ideal gas model, HSM treatment significantly improved the contribution of entropy, showing a systematic convergence toward the experimental data.

  10. EVALUATION OF VENTILATION PERFORMANCE FOR INDOOR SPACE

    EPA Science Inventory

    The paper discusses a personal-computer-based application of computational fluid dynamics that can be used to determine the turbulent flow field and time-dependent/steady-state contaminant concentration distributions within isothermal indoor space. (NOTE: Ventilation performance ...

  11. Evaluation of performance impairment by spacecraft contaminants

    NASA Technical Reports Server (NTRS)

    Geller, I.; Hartman, R. J., Jr.; Mendez, V. M.

    1977-01-01

    The environmental contaminants (isolated as off-gases in Skylab and Apollo missions) were evaluated. Specifically, six contaminants were evaluated for their effects on the behavior of juvenile baboons. The concentrations of contaminants were determined through preliminary range-finding studies with laboratory rats. The contaminants evaluated were acetone, methyl ethyl ketone (MEK), methyl isobutyl ketone (MIBK), trichloroethylene (TCE), heptane and Freon 21. When the studies of the individual gases were completed, the baboons were also exposed to a mixture of MEK and TCE. The data obtained revealed alterations in the behavior of baboons exposed to relatively low levels of the contaminants. These findings were presented at the First International Symposium on Voluntary Inhalation of Industrial Solvents in Mexico City, June 21-24, 1976. A preprint of the proceedings is included.

  12. 24 CFR 968.330 - PHA performance and evaluation report.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false PHA performance and evaluation... 250 or More Public Housing Units) § 968.330 PHA performance and evaluation report. For any FFY in which a PHA has received assistance under this subpart, the PHA shall submit a Performance...

  13. Accurately evaluating Young's modulus of polymers through nanoindentations: A phenomenological correction factor to the Oliver and Pharr procedure

    NASA Astrophysics Data System (ADS)

    Tranchida, Davide; Piccarolo, Stefano; Loos, Joachim; Alexeev, Alexander

    2006-10-01

    The Oliver and Pharr [J. Mater. Res. 7, 1564 (1992)] procedure is a widely used tool to analyze nanoindentation force curves obtained on metals or ceramics. Its application to polymers is, however, difficult, as Young's moduli are commonly overestimated, mainly because of viscoelastic effects and pile-up. In this work, polymers spanning a large range of morphologies were used to introduce a phenomenological correction factor. The factor depends on indenter geometry: sets of calibration indentations have to be performed on some polymers with known elastic moduli to characterize each indenter.

  14. Performance evaluation of 1 kW PEFC

    SciTech Connect

    Komaki, Hideaki; Tsuchiyama, Syozo

    1996-12-31

    This report covers part of a joint study on a PEFC propulsion system for surface ships, summarized in a presentation to this Seminar, entitled "Study on a PEFC Propulsion System for Surface Ships", and which envisages application to a 1,500 DWT cargo vessel. The aspect treated here concerns the effects brought on PEFC operating performance by conditions particular to shipboard operation. The performance characteristics were examined through tests performed on a 1 kW stack and on a single cell (manufactured by Fuji Electric Co., Ltd.). The tests covered the items (1) to (4) cited in the headings of the sections that follow. Specifications of the stack and single cell are as given.

  15. Performance evaluation of SAR/GMTI algorithms

    NASA Astrophysics Data System (ADS)

    Garber, Wendy; Pierson, William; Mcginnis, Ryan; Majumder, Uttam; Minardi, Michael; Sobota, David

    2016-05-01

    There is a history and understanding of exploiting moving targets within ground moving target indicator (GMTI) data, including methods for modeling performance. However, many assumptions valid for GMTI processing are invalid for synthetic aperture radar (SAR) data. For example, traditional GMTI processing assumes targets are exo-clutter and a system that uses a GMTI waveform, i.e. low bandwidth (BW) and low pulse repetition frequency (PRF). Conversely, SAR imagery is typically formed to focus data at zero Doppler and requires high BW and high PRF. Therefore, many of the techniques used in performance estimation of GMTI systems are not valid for SAR data. However, as demonstrated by papers in the recent literature [1-11], there is interest in exploiting moving targets within SAR data. The techniques employed vary widely, including filter banks to form images at multiple Dopplers, performing smear detection, and attempting to address the issue through waveform design. The above work validates the need for moving target exploitation in SAR data, but it does not represent a theory allowing for the prediction or bounding of performance. This work develops an approach to estimate and/or bound performance for moving target exploitation specific to SAR data. Synthetic SAR data is generated across a range of sensor, environment, and target parameters to test the exploitation algorithms under specific conditions. This provides a design tool allowing radar systems to be tuned for specific moving target exploitation applications. In summary, we derive a set of rules that bound the performance of specific moving target exploitation algorithms under variable operating conditions.

  16. Integrating metabolic performance, thermal tolerance, and plasticity enables for more accurate predictions on species vulnerability to acute and chronic effects of global warming.

    PubMed

    Magozzi, Sarah; Calosi, Piero

    2015-01-01

    Predicting species vulnerability to global warming requires a comprehensive, mechanistic understanding of sublethal and lethal thermal tolerances. To date, however, most studies investigating species physiological responses to increasing temperature have focused on the underlying physiological traits of either acute or chronic tolerance in isolation. Here we propose an integrative, synthetic approach including the investigation of multiple physiological traits (metabolic performance and thermal tolerance), and their plasticity, to provide more accurate and balanced predictions on species and assemblage vulnerability to both acute and chronic effects of global warming. We applied this approach to more accurately elucidate relative species vulnerability to warming within an assemblage of six caridean prawns occurring in the same geographic, hence macroclimatic, region, but living in different thermal habitats. Prawns were exposed to four incubation temperatures (10, 15, 20 and 25 °C) for 7 days, their metabolic rates and upper thermal limits were measured, and plasticity was calculated according to the concept of Reaction Norms, as well as Q10 for metabolism. Compared to species occupying narrower/more stable thermal niches, species inhabiting broader/more variable thermal environments (including the invasive Palaemon macrodactylus) are likely to be less vulnerable to extreme acute thermal events as a result of their higher upper thermal limits. Nevertheless, they may be at greater risk from chronic exposure to warming due to the greater metabolic costs they incur. Indeed, a trade-off between acute and chronic tolerance was apparent in the assemblage investigated. However, the invasive species P. macrodactylus represents an exception to this pattern, showing elevated thermal limits and plasticity of these limits, as well as a high metabolic control. In general, integrating multiple proxies for species physiological acute and chronic responses to increasing
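
    For reference, the Q10 metric mentioned above is conventionally defined from two metabolic rates R1 and R2 measured at temperatures T1 and T2 (the paper's exact computation is not restated in the abstract):

```latex
Q_{10} = \left( \frac{R_2}{R_1} \right)^{\frac{10}{T_2 - T_1}}
```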

  17. Space Shuttle Underside Astronaut Communications Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Hwu, Shian U.; Dobbins, Justin A.; Loh, Yin-Chung; Kroll, Quin D.; Sham, Catherine C.

    2005-01-01

    The Space Shuttle Ultra High Frequency (UHF) communications system is planned to provide Radio Frequency (RF) coverage for astronauts working on the underside of the Space Shuttle Orbiter (SSO) for thermal tile inspection and repair. This study assesses the Space Shuttle UHF communication performance for astronauts in the shadow region without line-of-sight (LOS) to the Space Shuttle and Space Station UHF antennas. To ensure the RF coverage performance at anticipated astronaut worksites, the link margin between the UHF antennas and Extravehicular Activity (EVA) astronauts with significant vehicle structure blockage was analyzed. A series of near-field measurements was performed using the NASA/JSC Anechoic Chamber Antenna test facilities. Computational investigations were also performed using electromagnetic modeling techniques. A computer simulation tool based on the Geometrical Theory of Diffraction (GTD) was used to compute the signal strengths. The signal strength was obtained by computing the reflected and diffracted fields along the propagation paths between the transmitting and receiving antennas. Based on the results obtained in this study, RF coverage for UHF communication links was determined for the anticipated astronaut worksite in the shadow region underneath the Space Shuttle.
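
    A compact way to state the computation described above, with assumed notation not taken from the report: the received field at a worksite is approximated by a ray sum, and the link margin compares the resulting received power with the required power,

```latex
E_{\mathrm{rx}} \;\approx\; E_{\mathrm{LOS}} + \sum_{r} E^{\mathrm{refl}}_{r} + \sum_{d} E^{\mathrm{diff}}_{d},
\qquad
M\,[\mathrm{dB}] = P_{\mathrm{rx}} - P_{\mathrm{req}},
```

    where the reflected and diffracted ray contributions are the terms supplied by the GTD model (and the LOS term vanishes in the shadow region).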

  18. Performance Evaluation Gravity Probe B Design

    NASA Technical Reports Server (NTRS)

    Francis, Ronnie; Wells, Eugene M.

    1996-01-01

    This final report documents the work done to develop a 6 degree-of-freedom simulation of the Lockheed Martin Gravity Probe B (GPB) Spacecraft. This simulation includes the effects of vehicle flexibility and propellant slosh. The simulation was used to investigate the control performance of the spacecraft when subjected to realistic on orbit disturbances.

  19. Game Performance Evaluation in Male Goalball Players

    PubMed Central

    Molik, Bartosz; Morgulec-Adamowicz, Natalia; Kosmol, Andrzej; Perkowski, Krzysztof; Bednarczuk, Grzegorz; Skowroński, Waldemar; Gomez, Miguel Angel; Koc, Krzysztof; Rutkowska, Izabela; Szyman, Robert J

    2015-01-01

    Goalball is a Paralympic sport exclusively for athletes who are visually impaired and blind. The aims of this study were twofold: to describe game performance of elite male goalball players based upon the degree of visual impairment, and to determine if game performance was related to anthropometric characteristics of elite male goalball players. The study sample consisted of 44 male goalball athletes. A total of 38 games were recorded during the Summer Paralympic Games in London 2012. Observations were reported using the Game Efficiency Sheet for Goalball. Additional anthropometric measurements included body mass (kg), body height (cm), the arm span (cm) and length of the body in the defensive position (cm). The results differentiating both groups showed that the players with total blindness obtained higher means than the players with visual impairment for game indicators such as the sum of defense (p = 0.03) and the sum of good defense (p = 0.04). The players with visual impairment obtained higher results than those with total blindness for attack efficiency (p = 0.04), the sum of penalty defenses (p = 0.01), and fouls (p = 0.01). The study showed that athletes with blindness demonstrated higher game performance in defence. However, athletes with visual impairment presented higher efficiency in offensive actions. The analyses confirmed that body mass, body height, the arm span and length of the body in the defensive position did not differentiate players’ performance at the elite level. PMID:26834872

  20. Using Ratio Analysis to Evaluate Financial Performance.

    ERIC Educational Resources Information Center

    Minter, John; And Others

    1982-01-01

    The ways in which ratio analysis can help in long-range planning, budgeting, and asset management to strengthen financial performance and help avoid financial difficulties are explained. Types of ratios considered include balance sheet ratios, net operating ratios, and contribution and demand ratios. (MSE)

  1. Evaluating Suit Fit Using Performance Degradation

    NASA Technical Reports Server (NTRS)

    Margerum, Sarah E.; Cowley, Matthew; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar

    2012-01-01

    The Mark III planetary technology demonstrator space suit can be tailored to an individual by swapping the modular components of the suit, such as the arms, legs, and gloves, as well as adding or removing sizing inserts in key areas. A method was sought to identify the transition from an ideal suit fit to a bad fit and how to quantify this breakdown using a metric of mobility-based human performance data. To this end, the degradation of the range of motion of the elbow and wrist of the suit as a function of suit sizing modifications was investigated to attempt to improve suit fit. The sizing range tested spanned optimal and poor fit and was adjusted incrementally in order to compare each joint angle across five different sizing configurations. Suited range of motion data were collected using a motion capture system for nine isolated and functional tasks utilizing the elbow and wrist joints. A total of four subjects were tested with motions involving both arms simultaneously as well as the right arm by itself. Findings indicated that no single joint drives the performance of the arm as a function of suit size; instead it is based on the interaction of multiple joints along a limb. To determine a size adjustment range where an individual can operate the suit at an acceptable level, a performance detriment limit was set. This user-selected limit reveals the task-dependent tolerance of the suit fit around optimal size. For example, the isolated joint motion indicated that the suit can deviate from optimal by as little as -0.6 in to -2.6 in before experiencing a 10% performance drop in the wrist or elbow joint. The study identified a preliminary method to quantify the impact of size on performance and developed a new way to gauge tolerances around optimal size.

  2. An hierarchical approach to performance evaluation of expert systems

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kavi, Srinu

    1985-01-01

    The number and size of expert systems are growing rapidly. Formal evaluation of these systems, which is not performed for many of them, increases their acceptability by the user community and hence their success. Hierarchical evaluation, as previously conducted for computer systems, is applied to expert system performance evaluation. Expert systems are also evaluated by treating them as software systems (or programs). This paper reports many of the basic concepts and ideas in the Performance Evaluation of Expert Systems Study being conducted at the University of Southwestern Louisiana.

  3. An evaluation of memory accuracy in food hoarding marsh tits Poecile palustris--how accurate are they compared to humans?

    PubMed

    Brodin, Anders; Urhan, A Utku

    2013-07-01

    Laboratory studies of scatter-hoarding birds have become a model system for spatial memory studies. Considering that such birds are known to have a good spatial memory, recovery success in lab studies seems low: in parids (titmice and chickadees) it typically ranges between 25 and 60% when five seeds are cached in 50-128 available caching sites. Since these birds store many thousands of food items in nature in one autumn, one might expect that they should easily retrieve five seeds in a laboratory where they know the environment with its caching sites in detail. We designed a laboratory set-up to be as similar as possible to previous studies and trained wild-caught marsh tits Poecile palustris to store and retrieve in this set-up. Our results agree closely with earlier studies: around 40% of the first ten looks were correct when the birds had stored five seeds in 100 available sites, both 5 and 24 h after storing. The cumulative success curve suggests high success during the first 15 looks, after which it declines. Humans performed much better; in the first five looks most subjects were 100% correct. We discuss possible reasons why the birds were not doing better.

  4. PERFORMANCE EVALUATION OF TYPE I MARINE SANITATION DEVICES

    EPA Science Inventory

    This performance test was designed to evaluate the effectiveness of two Type I Marine Sanitation Devices (MSDs): the Electro Scan Model EST 12, manufactured by Raritan Engineering Company, Inc., and the Thermopure-2, manufactured by Gross Mechanical Laboratories, Inc. Performance...

  5. Fenestration System Performance Research, Testing, and Evaluation

    SciTech Connect

    Jim Benney

    2009-11-30

    The US DOE was and is instrumental to NFRC's beginning and its continued success. The 2005 to 2009 funding enables NFRC to continue expanding and create new, improved ratings procedures. Research funded by the US DOE enables increased fenestration energy rating accuracy. International harmonization efforts supported by the US DOE allow the US to be the global leader in fenestration energy ratings. Many other governments are working with the NFRC to share its experience and knowledge toward development of their own national fenestration rating process similar to the NFRC's. The broad and diverse membership composition of NFRC allows anyone with a fenestration interest to come forward with an idea or improvement to the entire fenestration community for consideration. The NFRC looks forward to the next several years of growth while remaining the nation's resource for fair, accurate, and credible fenestration product energy ratings. NFRC continues to improve its rating system by considering new research, methodologies, and expanding to include new fenestration products. Currently, NFRC is working towards attachment energy ratings. Attachments are blinds, shades, awnings, and overhangs. Attachments may enable a building to achieve significant energy savings. An NFRC rating will enable fair competition, a basis for code references, and a new ENERGY STAR product category. NFRC also is developing rating methods to consider non specular glazing such as fritted glass. Commercial applications frequently use fritted glazing, but no rating method exists. NFRC is testing new software that may enable this new rating and contribute further to energy conservation. Around the world, many nations are seeking new energy conservation methods and NFRC is poised to harmonize its rating system assisting these nations to better manage and conserve energy in buildings by using NFRC rated and labeled fenestration products. As this report has shown, much more work needs to be done to

  6. Three-Dimensional Numerical Evaluation of Thermal Performance of Uninsulated Wall Assemblies: Preprint

    SciTech Connect

    Ridouane, E. H.; Bianchi, M.

    2011-11-01

    This study describes a detailed three-dimensional computational fluid dynamics modeling to evaluate the thermal performance of uninsulated wall assemblies accounting for conduction through framing, convection, and radiation. The model allows for material properties variations with temperature. Parameters that were varied in the study include ambient outdoor temperature and cavity surface emissivity. Understanding the thermal performance of uninsulated wall cavities is essential for accurate prediction of energy use in residential buildings. The results can serve as input for building energy simulation tools for modeling the temperature dependent energy performance of homes with uninsulated walls.

  7. Performance Evaluation Method for Dissimilar Aircraft Designs

    NASA Technical Reports Server (NTRS)

    Walker, H. J.

    1979-01-01

    A rationale is presented for using the square of the wingspan rather than the wing reference area as a basis for nondimensional comparisons of the aerodynamic and performance characteristics of aircraft that differ substantially in planform and loading. Working relationships are developed and illustrated through application to several categories of aircraft covering a range of Mach numbers from 0.60 to 2.00. For each application, direct comparisons of drag polars, lift-to-drag ratios, and maneuverability are shown for both nondimensional systems. The inaccuracies that may arise in the determination of aerodynamic efficiency based on reference area are noted. Span loading is introduced independently in comparing the combined effects of loading and aerodynamic efficiency on overall performance. Performance comparisons are made for the NACA research aircraft, lifting bodies, century-series fighter aircraft, F-111A aircraft with conventional and supercritical wings, and a group of supersonic aircraft including the B-58 and XB-70 bomber aircraft. An idealized configuration is included in each category to serve as a standard for comparing overall efficiency.
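
    As a minimal sketch of why the span-squared basis suits such comparisons (symbols assumed here, not taken from the report: L lift, D drag, D_i induced drag, q dynamic pressure, b span, e span-efficiency factor), classical lifting-line theory gives D_i = L^2 / (pi e q b^2), so coefficients referenced to b^2 satisfy

```latex
C'_L = \frac{L}{q\,b^{2}}, \qquad
C'_D = \frac{D}{q\,b^{2}}, \qquad
C'_{D,i} = \frac{D_i}{q\,b^{2}} = \frac{{C'_L}^{2}}{\pi e},
```

    so the induced-drag penalty depends only on the span loading L/b^2, independent of the planform reference area.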

  8. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    NASA Technical Reports Server (NTRS)

    Feary, Michael S.

    2012-01-01

    This presentation is part of panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on discussions of generating distributions of performance, and the evaluation of different strategies for humans performing tasks with mixed initiative (Human-Automation) systems. I will also discuss issues with how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety critical systems. I will conclude with challenges for the future.

  9. Accurate mass analysis of ethanesulfonic acid degradates of acetochlor and alachlor using high-performance liquid chromatography and time-of-flight mass spectrometry

    USGS Publications Warehouse

    Thurman, E.M.; Ferrer, I.; Parry, R.

    2002-01-01

    Degradates of acetochlor and alachlor (ethanesulfonic acids, ESAs) were analyzed in both standards and in a groundwater sample using high-performance liquid chromatography-time-of-flight mass spectrometry with electrospray ionization. The negative pseudomolecular ion of the secondary amide of acetochlor ESA and alachlor ESA gave average masses of 256.0750 ± 0.0049 amu and 270.0786 ± 0.0064 amu, respectively. Acetochlor and alachlor ESA gave similar masses of 314.1098 ± 0.0061 amu and 314.1153 ± 0.0048 amu; however, they could not be distinguished by accurate mass because they have the same empirical formula. On the other hand, they may be distinguished using positive-ion electrospray because of different fragmentation spectra, which did not occur using negative-ion electrospray.

  10. HENC performance evaluation and plutonium calibration

    SciTech Connect

    Menlove, H.O.; Baca, J.; Pecos, J.M.; Davidson, D.R.; McElroy, R.D.; Brochu, D.B.

    1997-10-01

    The authors have designed a high-efficiency neutron counter (HENC) to assay the plutonium content in 200-L waste drums. The counter uses totals neutron counting, coincidence counting, and multiplicity counting to determine the plutonium mass. The HENC was developed as part of a Cooperative Research and Development Agreement between the Department of Energy and Canberra Industries. This report presents the results of the detector modifications, the performance tests, the add-a-source calibration, and the plutonium calibration at Los Alamos National Laboratory (TA-35) in 1996.

  11. ATAMM enhancement and multiprocessing performance evaluation

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.

    1994-01-01

    The Algorithm To Architecture Mapping Model (ATAMM) is a Petri-net-based model which provides a strategy for periodic execution of a class of real-time algorithms on multicomputer dataflow architectures. The execution of large-grained, decision-free algorithms on homogeneous processing elements is studied. The ATAMM provides an analytical basis for calculating performance bounds on throughput characteristics. Extension of the ATAMM as a strategy for cyclo-static scheduling provides for a truly distributed ATAMM multicomputer operating system. An ATAMM testbed consisting of a centralized graph manager and three processors is described using embedded firmware on 68HC11 microcontrollers.

  12. Phased array performance evaluation with photoelastic visualization

    SciTech Connect

    Ginzel, Robert; Dao, Gavin

    2014-02-18

    New instrumentation and a widening range of phased array transducer options are affording the industry a greater potential. Visualization of the complex wave components using the photoelastic system can greatly enhance understanding of the generated signals. Diffraction, mode conversion and wave front interaction, together with beam forming for linear, sectorial and matrix arrays, will be viewed using the photoelastic system. Beam focus and steering performance will be shown with a range of embedded and surface targets within glass samples. This paper will present principles and sound field images using this visualization system.

  13. Traction contact performance evaluation at high speeds

    NASA Technical Reports Server (NTRS)

    Tevaarwerk, J. L.

    1981-01-01

    The results of traction tests performed on two fluids are presented. These tests covered a pressure range of 1.0 to 2.5 GPa, an inlet temperature range of 30 °C to 70 °C, a speed range of 10 to 80 m/sec, aspect ratios of 0.5 to 5, and spin from 0 to 2.1 percent. The test results are presented in the form of two dimensionless parameters, the initial traction slope and the maximum traction peak. With the use of a suitable rheological fluid model, the actual traction curves measured can now be reconstituted from the two fluid parameters. More importantly, the knowledge of these parameters, together with the fluid rheological model, allows the prediction of traction under conditions of spin, slip, and any combination thereof. Comparison between theoretically predicted traction under these conditions and those measured in actual traction tests shows that this method gives good results.

  14. 48 CFR 3052.216-72 - Performance evaluation plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Performance evaluation... CONTRACT CLAUSES Text of Provisions and Clauses 3052.216-72 Performance evaluation plan. As prescribed in (HSAR) 48 CFR 3016.406(e)(i)(ii), insert a clause substantially the same as the following:...

  15. Evaluation of Section Heads' Performance at Kuwait Secondary Schools

    ERIC Educational Resources Information Center

    Al-Hamdan, Jasem M.; Al-Yacoub, Ali M.

    2005-01-01

    Purpose: The study attempts to examine the viewpoints of those involved in evaluating the performance of section heads in Kuwait secondary schools; mainly section heads themselves, supervisors and principals. It sets out to determine the strength and weaknesses in the performance evaluation form designed for section heads.…

  16. Sexism and Beautyism in Women's Evaluations of Peer Performance.

    ERIC Educational Resources Information Center

    Cash, Thomas F.; Trimer, Claire A.

    1984-01-01

    Investigated independent and interactive effects of physical attractiveness (PA), sex, and task sex-typing on performance evaluations by 216 college women. Found that the halo effect ("beauty is talent") of PA operated when subjects evaluated both sexes, with the exception of ratings of attractive women in out-of-role ("masculine") performances.…

  17. Evaluation of PV Module Field Performance

    SciTech Connect

    Wohlgemuth, John; Silverman, Timothy; Miller, David C.; McNutt, Peter; Kempe, Michael; Deceglie, Michael

    2015-06-14

    This paper describes an effort to inspect and evaluate PV modules in order to determine what failure or degradation modes are occurring in field installations. This paper will report on the results of six site visits, including the Sacramento Municipal Utility District (SMUD) Hedge Array, Tucson Electric Power (TEP) Springerville, Central Florida Utility, Florida Solar Energy Center (FSEC), the TEP Solar Test Yard, and University of Toledo installations. The effort here makes use of a recently developed field inspection data collection protocol, and the results were input into a corresponding database. The results of this work have also been used to develop a draft of the IEC standard for climate and application specific accelerated stress testing beyond module qualification.

  18. Evaluation of ECCS performance for an SBWR

    SciTech Connect

    Abe, Nobuaki; Arai, Kenji; Hamazaki, Ryouichi; Nagasaka, Hideo

    1990-01-01

    A simplified boiling water reactor (SBWR), one of the next generation of light water reactors, is now under development. From the safety viewpoint, the SBWR is characterized by the adoption of a passive emergency core cooling system (ECCS) and a passive containment cooling system (PCCS). The ECCS network for an SBWR consists of depressurization valves (DPVs) and a gravity-driven cooling system (GDCS). The DPV and GDCS are designed to keep the core covered with water following any loss-of-coolant accident (LOCA) assuming a single failure in the ECCS. The SAPPHIRE code has been developed to evaluate the effectiveness of the ECCS of the SBWR; it calculates the short-term thermal-hydraulic phenomena simultaneously inside the containment, including the RPV, drywell, and wetwell. The predictive capability of SAPPHIRE for SBWR LOCA analysis has been demonstrated by a comparison with the best-estimate TRAC code. Both the SAPPHIRE and TRAC codes indicate no core uncovery during a maximum drain line break.

  19. Evaluating hospital performance based on excess cause-specific incidence.

    PubMed

    Van Rompaye, Bart; Eriksson, Marie; Goetghebeur, Els

    2015-04-15

    Formal evaluation of hospital performance in specific types of care is becoming an indispensable tool for quality assurance in the health care system. When the prime concern lies in reducing the risk of a cause-specific event, we propose to evaluate performance in terms of an average excess cumulative incidence, referring to the center's observed patient mix. Its intuitive interpretation helps give meaning to the evaluation results and facilitates the determination of important benchmarks for hospital performance. We apply it to the evaluation of cerebrovascular deaths after stroke in Swedish stroke centers, using data from Riksstroke, the Swedish stroke registry.
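
    One plausible formalization of the "average excess cumulative incidence" described above, under assumed notation not taken from the paper: for center h with patients i = 1..n_h and covariates x_{hi},

```latex
\widehat{\mathrm{EI}}_h(t) = \frac{1}{n_h}\sum_{i=1}^{n_h}
\left\{\hat{F}^{(h)}_{c}\!\left(t \mid x_{hi}\right) - \hat{F}^{\mathrm{ref}}_{c}\!\left(t \mid x_{hi}\right)\right\},
```

    where the two terms are the cause-specific cumulative incidence at time t under the center's own outcome model and under a reference (benchmark) model, both evaluated on the center's observed patient mix.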

  20. Thrust Stand for Electric Propulsion Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Markusic, T. E.; Jones, J. E.; Cox, M. D.

    2004-01-01

    An electric propulsion thrust stand capable of supporting thrusters with total mass of up to 125 kg and 1 mN to 1 N thrust levels has been developed and tested. The mechanical design features a conventional hanging pendulum arm attached to a balance mechanism that transforms horizontal motion into amplified vertical motion, with accommodation for variable displacement sensitivity. Unlike conventional hanging pendulum thrust stands, the deflection is independent of the length of the pendulum arm, and no reference structure is required at the end of the pendulum. Displacement is measured using a non-contact, optical linear gap displacement transducer. Mechanical oscillations are attenuated using a passive, eddy current damper. An on-board microprocessor-based level control system, which includes a two-axis accelerometer and two linear-displacement stepper motors, continuously maintains the level of the balance mechanism - counteracting mechanical zero drift during thruster testing. A thermal control system, which includes heat exchange panels, thermocouples, and a programmable recirculating water chiller, continuously adjusts to varying thermal loads to maintain the balance mechanism temperature, to counteract thermal drifts. An in-situ calibration rig allows for steady state calibration both prior to and during thruster testing. Thrust measurements were carried out on a well-characterized 1 kW Hall thruster; the thrust stand was shown to produce repeatable results consistent with previously published performance data.

  1. Using Business Performance To Evaluate Multimedia Training in Manufacturing.

    ERIC Educational Resources Information Center

    Lachenmaier, Lynn S.; Moor, William C.

    1997-01-01

    Discusses training evaluation and shows how an abbreviated form of Kirkpatrick's four-level evaluation model can be used effectively to evaluate multimedia-based manufacturing training. Topics include trends in manufacturing training, quantifying performance improvement, and statistical comparisons using the Mann-Whitney test and the Tukey Quick…

  2. At-Risk Youth Appearance and Job Performance Evaluation

    ERIC Educational Resources Information Center

    Freeburg, Beth Winfrey; Workman, Jane E.

    2008-01-01

    The goal of this study was to identify the relationship of at-risk youth workplace appearance to other job performance criteria. Employers (n = 30; each employing from 1 to 17 youths) evaluated 178 at-risk high school youths who completed a paid summer employment experience. Appearance evaluations were significantly correlated with evaluations of…

  3. Thrust Stand for Electric Propulsion Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Markusic, Thomas E.; Stanojev, Boris J.; Dehoyos, Amado; Spaun, Benjamin

    2006-01-01

    An electric propulsion thrust stand capable of supporting testing of thrusters having a total mass of up to 125 kg and producing thrust levels between 100 microN to 1 N has been developed and tested. The design features a conventional hanging pendulum arm attached to a balance mechanism that converts horizontal deflections produced by the operating thruster into amplified vertical motion of a secondary arm. The level of amplification is changed through adjustment of the location of one of the pivot points linking the system. Response of the system depends on the relative magnitudes of the restoring moments applied by the displaced thruster mass and the twisting torsional pivots connecting the members of the balance mechanism. Displacement is measured using a non-contact, optical linear gap displacement transducer and balance oscillatory motion is attenuated using a passive, eddy-current damper. The thrust stand employs an automated leveling and thermal control system. Pools of liquid gallium are used to deliver power to the thruster without using solid wire connections, which can exert undesirable time-varying forces on the balance. These systems serve to eliminate sources of zero-drift that can occur as the stand thermally or mechanically shifts during the course of an experiment. An in-situ calibration rig allows for steady-state calibration before, during and after thruster operation. Thrust measurements were carried out on a cylindrical Hall thruster that produces mN-level thrust. The measurements were very repeatable, producing results that compare favorably with previously published performance data, but with considerably smaller uncertainty.

  4. Performance evaluation of salivary amylase activity monitor.

    PubMed

    Yamaguchi, Masaki; Kanemori, Takahiro; Kanemaru, Masashi; Takai, Noriyasu; Mizuno, Yasufumi; Yoshida, Hiroshi

    2004-10-15

    In order to quantify psychological stress and to distinguish eustress and distress, we have been investigating the establishment of a method that can quantify salivary amylase activity. Salivary glands not only act as amplifiers of a low level of norepinephrine, but also respond more quickly and sensitively to psychological stress than cortisol levels. Moreover, the time-course changes of salivary amylase activity have the potential to distinguish eustress and distress. Thus, salivary amylase activity can be utilized as an excellent index of psychological stress. However, in a dry chemistry system, a method for quantifying the enzymatic activity still needs to be established that can provide sufficient substrate in a testing tape and control the enzymatic reaction time. Moreover, it is necessary to develop a method that retains the advantages of using saliva, such as ease of collection, rapidity of response, and the ability to sample at any time. In order to establish an easy method to monitor salivary amylase activity, a salivary transcription device was fabricated to control the enzymatic reaction time. The fabricated salivary amylase activity monitor consisted of three devices: the salivary transcription device, a testing strip, and an optical analyzer. By adding maltose as a competitive inhibitor to the substrate Gal-G2-CNP, a broad-range activity testing strip was fabricated that could measure salivary amylase activity over a range of 0-200 kU/l within 150 s. The calibration curve of the monitor for salivary amylase activity showed R2 = 0.941, indicating that it was possible to use this monitor for the analysis of salivary amylase activity without the need to determine the salivary volume quantitatively. In order to evaluate the assay variability of the monitor, salivary amylase activity was measured using the Kraepelin psychodiagnostic test as a psychological stressor. A significant difference in salivary amylase activity was recognized

  5. Performance evaluation methodology for historical document image binarization.

    PubMed

    Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis

    2013-02-01

    Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
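
    For context, the unweighted pixel-based measures that the proposed scheme modifies are the standard ones below (TP, FP, FN counted over foreground pixels against the ground-truth image); the paper's specific weighting scheme is not reproduced here.

```latex
R = \frac{TP}{TP + FN}, \qquad
P = \frac{TP}{TP + FP}, \qquad
F\text{-measure} = \frac{2\,P\,R}{P + R}.
```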

  7. Reinvigorating performance evaluation: first steps in a local health department.

    PubMed

    Smith, Kathleen N; Gunzenhauser, Jeffrey D; Fielding, Jonathan E

    2010-01-01

    The ability of a local health department to assess and improve employee performance through an effective evaluation process is critical to overall organizational success. A constructive performance evaluation process not only provides meaningful feedback on work performance but also provides opportunities to reinforce work behaviors that support the organization's mission, to recognize exceptional work, and to guide future growth and learning. The Los Angeles County Department of Public Health is creating a new approach to performance evaluation that recognizes 3 distinct components of work performance: standard business practices, competencies, and standards of practice. This multidimensional perspective acknowledges that the expectations of workers are complex and that evaluations of performance are not easily captured with single-dimension assessment tools. This report describes the conceptual relationships of these 3 components and how they integrate to form a single performance evaluation process. Key elements within this structure include a base document of competencies for all workers, expanded competency sets for professional staff, role-specific duty statements for workers who perform similar work, and standards of competent practice related to the mission of units to which individuals are assigned. Key first steps are to define the terminology of performance evaluation and to create role-specific duty statements.

  8. Evaluating Preference for Graphic Feedback on Correct versus Incorrect Performance

    ERIC Educational Resources Information Center

    Sigurdsson, Sigurdur O.; Ring, Brandon M.

    2013-01-01

    The current study evaluated preferences of undergraduate students for graphic feedback on percentage of incorrect performance versus feedback on percentage of correct performance. A total of 108 participants were enrolled in the study and received graphic feedback on performance on 12 online quizzes. One half of participants received graphic…

  9. A Conceptual Approach to Sex-Fair Performance Evaluation.

    ERIC Educational Resources Information Center

    DeLong, Barbara J.

    Preplanning to insure sex-fair evaluation of student performance in physical education should include: (1) establishment of instructional and performance objectives for each activity; (2) development of performance standards which take into account ability levels and documented biological differences between the sexes; and (3) a well-defined…

  10. Analysis and design of rolling-contact joints for evaluating bone plate performance.

    PubMed

    Slocum, Alexander H; Cervantes, Thomas M; Seldin, Edward B; Varanasi, Kripa K

    2012-09-01

    An apparatus for testing maxillofacial bone plates has been designed using a rolling contact joint. First, a free-body representation of the fracture fixation techniques utilizing bone plates is used to illustrate how rolling contact joints accurately simulate in vivo biomechanics. Next, a deterministic description of machine functional requirements is given, and is then used to drive the subsequent selection and design of machine elements. Hertz contact stress and fatigue analysis for two elements are used to ensure that the machine will both withstand loads required to deform different plates, and maintain a high cycle lifetime for testing large numbers of plates. Additionally, clinically relevant deformations are presented to illustrate how stiffness is affected after a deformation is applied, and to highlight improvements made by the machine over current testing standards, which do not adequately re-create in vivo loading conditions. The machine performed as expected and allowed for analysis of bone plates in both deformed and un-deformed configurations to be conducted. Data for deformation experiments is presented to show that the rolling-contact testing machine leads to improved loading configurations, and thus a more accurate description of plate performance. A machine for evaluation of maxillofacial bone plates has been designed, manufactured, and used to accurately simulate in vivo loading conditions to more effectively evaluate the performance of both new and existing bone plates.
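
    As an illustration of the Hertz contact analysis mentioned above, the spherical (point-contact) relations are sketched below; the machine's actual roller geometry may instead call for the cylindrical line-contact form, and the symbols (F load, R effective radius, E_i and nu_i elastic constants) are assumptions, not taken from the paper.

```latex
\frac{1}{E^{*}} = \frac{1-\nu_1^{2}}{E_1} + \frac{1-\nu_2^{2}}{E_2}, \qquad
a = \left(\frac{3FR}{4E^{*}}\right)^{1/3}, \qquad
p_{\max} = \frac{3F}{2\pi a^{2}}.
```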

  11. Performance Evaluation and Requirements Assessment for Gravity Gradient Referenced Navigation.

    PubMed

    Lee, Jisun; Kwon, Jay Hyoun; Yu, Myeongjong

    2015-01-01

    In this study, simulation tests for gravity gradient referenced navigation (GGRN) are conducted to verify the effects of various factors such as database (DB) and sensor errors, flight altitude, DB resolution, initial errors, and measurement update rates on the navigation performance. Based on the simulation results, requirements for GGRN are established for position determination with certain target accuracies. It is found that DB and sensor errors and flight altitude have strong effects on the navigation performance. In particular, a DB and sensor with accuracies of 0.1 E and 0.01 E, respectively, are required to determine the position more accurately than or at a level similar to the navigation performance of terrain referenced navigation (TRN). In most cases, the horizontal position error of GGRN is less than 100 m. However, the navigation performance of GGRN is similar to or worse than that of a pure inertial navigation system when the DB and sensor errors are 3 E or 5 E each and the flight altitude is 3000 m. Considering that the accuracy of currently available gradiometers is about 3 E or 5 E, GGRN does not show much advantage over TRN at present. However, GGRN is expected to exhibit much better performance in the near future when accurate DBs and gravity gradiometer are available. PMID:26184212

  13. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  14. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    NASA Astrophysics Data System (ADS)

    Ko, P.; Kurosawa, S.

    2014-03-01

    The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work enhancing turbine performance, including the elongation of the operational life span and the improvement of turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with a volume-of-fluid method tracking the free surface, combined with a Reynolds stress turbulence model. The growth and collapse of cavitation bubbles are modelled by a modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with the model test results of an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.
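
    For reference, the classical Rayleigh-Plesset equation underlying the bubble-dynamics modelling above is given below; the specific modification adopted in the paper is not described in the abstract, and the symbols are the conventional ones (R bubble radius, rho liquid density, p_v vapour pressure, p_inf far-field pressure, p_g0 initial gas pressure, sigma surface tension, mu viscosity, k polytropic exponent).

```latex
\rho\!\left(R\ddot{R} + \tfrac{3}{2}\dot{R}^{2}\right)
= p_v - p_\infty(t) + p_{g0}\!\left(\frac{R_0}{R}\right)^{3k}
- \frac{2\sigma}{R} - \frac{4\mu\dot{R}}{R}.
```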

  15. Fiscal year 1999 Battelle performance evaluation and fee agreement

    SciTech Connect

    DAVIS, T.L.

    1998-10-22

    Fiscal Year 1999 represents the third full year utilizing a results-oriented, performance-based evaluation for the Contractor's operations and management of the DOE Pacific Northwest National Laboratory (hereafter referred to as the Laboratory). However, this is the first year that the Contractor's fee is totally performance-based utilizing the same Critical Outcomes. This document describes the critical outcomes, objectives, performance indicators, expected levels of performance, and the basis for the evaluation of the Contractor's performance for the period October 1, 1998 through September 30, 1999, as required by the clauses entitled "Use of Objective Standards of Performance, Self Assessment and Performance Evaluation" and "Performance Measures Review" of Contract DE-ACO6-76RL01830. Furthermore, it documents the distribution of the total available performance-based fee and the methodology set for determining the amount of fee earned by the Contractor, as stipulated within the clauses entitled "Estimated Cost and Annual Fee," "Total Available Fee," and "Allowable Costs and Fee." In partnership with the Contractor and other key customers, the Department of Energy (DOE) Headquarters (HQ) and Richland Operations Office (RL) have defined four critical outcomes that serve as the core for the Contractor's performance-based evaluation and fee determination. The Contractor also utilizes these outcomes as a basis for overall management of the Laboratory.

  16. Building China's municipal healthcare performance evaluation system: a Tuscan perspective.

    PubMed

    Li, Hao; Barsanti, Sara; Bonini, Anna

    2012-08-01

    Regional healthcare performance evaluation systems can help optimize healthcare resources on a regional basis and improve the performance of the healthcare services provided. The Tuscany region in Italy is a good example of an institution which meets these requirements. China has yet to build such a system based on international experience. In this paper, based on comparative studies between Tuscany and China, we propose that the managing institutions in China's experimental cities can select and commission a third-party agency to evaluate the performance of their affiliated hospitals and community health service centers. Following some features of the Tuscan experience, a Chinese municipal healthcare performance evaluation system can be built by focusing on the selection of an appropriate performance evaluation agency, the design of an adequate performance evaluation mechanism, and the formulation of a complete set of laws, rules and regulations. When a performance evaluation system at the city level is formed, the provincial government can extend the successful experience to other cities. PMID:22687705

  17. Performance Evaluation of an Oxy-coal-fired Power Plant

    NASA Astrophysics Data System (ADS)

    Lee, Kwangjin; Kim, Sungeun; Choi, Sangmin; Kim, Taehyung

    Power generation systems based on oxy-coal combustion with carbon dioxide capture and storage (CCS) capability are being proposed and discussed lately. The proposed systems are evolving, and the various alternatives need to be comparatively evaluated. This paper presents a proposed approach for the performance evaluation of a commercial-scale power plant, which is currently being considered for ‘retrofitting’ to demonstrate the concept. System components to be included in the discussion are listed. Evaluation criteria in terms of performance and economics are summarized based on the system heat and mass balance and simple performance parameters such as the fuel-to-power efficiency, together with a brief introduction of the second-law analysis. Cases are selected for comparative evaluation based on the site-specific requirements. With limited information available, a preliminary evaluation is attempted for these cases.

  18. Solid rocket booster performance evaluation model. Volume 4: Program listing

    NASA Technical Reports Server (NTRS)

    1974-01-01

    All subprograms or routines associated with the solid rocket booster performance evaluation model are indexed in this computer listing. An alphanumeric list of each routine in the index is provided in a table of contents.

  19. The Context and Process for Performance Evaluations: Necessary Preconditions for the Use of Performance Evaluations as a Measure of Performance--A Critique of Perry

    ERIC Educational Resources Information Center

    McCarthy, Mary L.

    2006-01-01

    This article challenges Perry's research using performance evaluations to determine whether the educational background of child welfare workers is predictive of performance. Institutional theory, an understanding of street-level bureaucracies, and evaluations of field education performance measures are offered as necessary frameworks for Perry's…

  20. A Gold Standards Approach to Training Instructors to Evaluate Crew Performance

    NASA Technical Reports Server (NTRS)

    Baker, David P.; Dismukes, R. Key

    2003-01-01

    The Advanced Qualification Program requires that airlines evaluate crew performance in Line Oriented Simulation. For this evaluation to be meaningful, instructors must observe relevant crew behaviors and evaluate those behaviors consistently and accurately against standards established by the airline. The airline industry has largely settled on an approach in which instructors evaluate crew performance on a series of event sets, using standardized grade sheets on which behaviors specific to event set are listed. Typically, new instructors are given a class in which they learn to use the grade sheets and practice evaluating crew performance observed on videotapes. These classes emphasize reliability, providing detailed instruction and practice in scoring so that all instructors within a given class will give similar scores to similar performance. This approach has value but also has important limitations; (1) ratings within one class of new instructors may differ from those of other classes; (2) ratings may not be driven primarily by the specific behaviors on which the company wanted the crews to be scored; and (3) ratings may not be calibrated to company standards for level of performance skill required. In this paper we provide a method to extend the existing method of training instructors to address these three limitations. We call this method the "gold standards" approach because it uses ratings from the company's most experienced instructors as the basis for training rater accuracy. This approach ties the training to the specific behaviors on which the experienced instructors based their ratings.

  1. Using hybrid method to evaluate the green performance in uncertainty.

    PubMed

    Tseng, Ming-Lang; Lan, Lawrence W; Wang, Ray; Chiu, Anthony; Cheng, Hui-Ping

    2011-04-01

    Green performance measurement is vital for enterprises making continuous improvements to maintain sustainable competitive advantages. Evaluating green performance, however, is a challenging task because of the interdependence among aspects and criteria and the linguistic vagueness of some qualitative information mixed with quantitative data. To deal with this issue, this study proposes a novel approach to evaluate the interdependent aspects and criteria of a firm's green performance. The rationale of the proposed approach, called the green network balanced scorecard, is to use the balanced scorecard to combine fuzzy set theory with the analytical network process (ANP) and importance-performance analysis (IPA), where fuzzy set theory accounts for the linguistic vagueness of qualitative criteria and ANP converts the relations among the interdependent aspects and criteria into an intelligible structural model used in the IPA. For the empirical case study, four interdependent aspects and 34 green performance criteria for PCB firms in Taiwan were evaluated. The managerial implications are discussed. PMID:20571885
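
    As a concrete illustration of the importance-performance analysis (IPA) step mentioned above, the sketch below places each criterion's (importance, performance) pair into one of the four standard IPA quadrants relative to the grand means. The criterion names and scores are invented for illustration and are not those of the study.

      def ipa_quadrants(scores):
          """scores: dict of criterion -> (importance, performance) on a common scale."""
          mean_imp = sum(i for i, _ in scores.values()) / len(scores)
          mean_perf = sum(p for _, p in scores.values()) / len(scores)
          labels = {}
          for name, (imp, perf) in scores.items():
              if imp >= mean_imp and perf < mean_perf:
                  labels[name] = "Concentrate here"
              elif imp >= mean_imp:
                  labels[name] = "Keep up the good work"
              elif perf < mean_perf:
                  labels[name] = "Low priority"
              else:
                  labels[name] = "Possible overkill"
          return labels

      if __name__ == "__main__":
          example = {
              "green design": (4.5, 3.1),
              "green purchasing": (3.2, 3.4),
              "regulatory compliance": (4.8, 4.6),
              "reverse logistics": (2.9, 4.0),
          }
          for criterion, label in ipa_quadrants(example).items():
              print(f"{criterion:>22s}: {label}")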

  2. Evaluating supplier quality performance using fuzzy analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Ahmad, Nazihah; Kasim, Maznah Mat; Rajoo, Shanmugam Sundram Kalimuthu

    2014-12-01

    Evaluating supplier quality performance is vital in ensuring continuous supply chain improvement, reducing operational costs and risks, and meeting customers' expectations. This paper aims to illustrate an application of the Fuzzy Analytical Hierarchy Process to prioritize the evaluation criteria in the context of automotive manufacturing in Malaysia. Five main criteria were identified: quality, cost, delivery, customer service, and technology support. These criteria were arranged into a hierarchical structure and evaluated by an expert. The relative importance of each criterion was determined using linguistic variables, which were represented as triangular fuzzy numbers. The Center of Gravity defuzzification method was used to convert the fuzzy evaluations into their corresponding crisp values. Such fuzzy evaluation can be used as a systematic tool to overcome the uncertainty in supplier performance evaluation that is usually associated with subjective human judgments.
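
    A minimal sketch of the fuzzy evaluation step described above follows: linguistic importance judgments are represented as triangular fuzzy numbers (l, m, u), converted to crisp values with the Center of Gravity (centroid) rule, and normalized into weights. The linguistic scale and the expert judgments below are assumptions made for illustration, not the values used in the study.

      TRIANGULAR_SCALE = {
          "low":       (1.0, 1.0, 3.0),
          "medium":    (3.0, 5.0, 7.0),
          "high":      (5.0, 7.0, 9.0),
          "very high": (7.0, 9.0, 9.0),
      }

      def centre_of_gravity(tfn):
          """Centroid (crisp value) of a triangular fuzzy number (l, m, u)."""
          l, m, u = tfn
          return (l + m + u) / 3.0

      def crisp_weights(judgments):
          """judgments: dict of criterion -> linguistic label from the scale above."""
          crisp = {c: centre_of_gravity(TRIANGULAR_SCALE[label])
                   for c, label in judgments.items()}
          total = sum(crisp.values())
          return {c: v / total for c, v in crisp.items()}

      if __name__ == "__main__":
          expert_judgment = {
              "quality": "very high",
              "cost": "high",
              "delivery": "high",
              "customer service": "medium",
              "technology support": "medium",
          }
          for criterion, weight in crisp_weights(expert_judgment).items():
              print(f"{criterion:>18s}: {weight:.3f}")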

  3. Incorporating Student Performance Measures into Teacher Evaluation Systems. Technical Report

    ERIC Educational Resources Information Center

    Steele, Jennifer L.; Hamilton, Laura S.; Stecher, Brian M.

    2010-01-01

    Many existing teacher evaluation and reward systems do not capture variation in teachers' ability to improve student performance on standardized tests. Improved access to longitudinal data systems that link teachers to students facilitates the development of systems that incorporate student achievement gains into teacher evaluations. However, two…

  4. Objective Situation Awareness Measurement Based on Performance Self-Evaluation

    NASA Technical Reports Server (NTRS)

    DeMaio, Joe

    1998-01-01

    The research was conducted in support of the NASA Safe All-Weather Flight Operations for Rotorcraft (SAFOR) program. The purpose of the work was to investigate the utility of two measurement tools developed by the British Defense Evaluation Research Agency: a subjective workload assessment scale, the DRA Workload Scale, and a situation awareness measurement tool. The situation awareness tool compares the crew's self-evaluation of performance against actual performance in order to determine what information the crew attended to during the performance. These two measurement tools were evaluated in the context of a test of an innovative approach to alerting the crew by way of a helmet-mounted display. The situation assessment data are reported here. The performance self-evaluation metric of situation awareness was found to be highly effective. It was used to evaluate situation awareness on a tank reconnaissance task, a tactical navigation task, and a stylized task used to evaluate handling qualities. Using the self-evaluation metric, it was possible to evaluate situation awareness without exact knowledge of the relevant information in some cases, and to identify information to which the crew attended or failed to attend in others.

  5. Evaluating Organizational Performance: Rational, Natural, and Open System Models

    ERIC Educational Resources Information Center

    Martz, Wes

    2013-01-01

    As the definition of organization has evolved, so have the approaches used to evaluate organizational performance. During the past 60 years, organizational theorists and management scholars have developed a comprehensive line of thinking with respect to organizational assessment that serves to inform and be informed by the evaluation discipline.…

  6. Research Performance Evaluation: Some Critical Thoughts on Standard Bibliometric Indicators

    ERIC Educational Resources Information Center

    Anninos, Loukas N.

    2014-01-01

    The bibliometric methodology is an established technique for research evaluation as it offers an objective determination and comparison of research performance. This paper aims to critically assess some standard bibliometric indicators commonly used (based on publication and citation counts) to evaluate academic units, and examine whether there…

  7. Reliability and Validity of the Professional Counseling Performance Evaluation

    ERIC Educational Resources Information Center

    Shepherd, J. Brad; Britton, Paula J.; Kress, Victoria E.

    2008-01-01

    The definition and measurement of counsellor trainee competency is an issue that has received increased attention yet lacks quantitative study. This research evaluates item responses, scale reliability and intercorrelations, interrater agreement, and criterion-related validity of the Professional Performance Fitness Evaluation/Professional…

  8. The Seven No-No's of Performance Evaluation.

    ERIC Educational Resources Information Center

    Johnston, David L.

    1999-01-01

    Presents seven Machiavellian personnel evaluation blunders that strip workers of their dignity and demoralize them. Performance evaluators err when playing "Trivial Pursuit," the "Shell Game," "I Preceptor," "Gotcha," "I Spy," the "Procrustean Bed," and "Open-Ended Story-Time" games with their employees. (MLH)

  9. Inservice Kit: Evaluating and Improving Teaching Performance. Trainer's Manual.

    ERIC Educational Resources Information Center

    Mireau, Laurie

    This manual is for use by the trainer or leader of a workshop based upon the inservice manual: "Evaluating and Improving Teaching Performance." In the introduction, suggestions for planning, logistics, equipment, and evaluation of a successful workshop are made. The chapters are linked to those of the inservice manual by topic of the workshop…

  10. The Impact of Self-Evaluation Instruction on Student Self-Evaluation, Music Performance, and Self-Evaluation Accuracy

    ERIC Educational Resources Information Center

    Hewitt, Michael P.

    2011-01-01

    The author sought to determine whether self-evaluation instruction had an impact on student self-evaluation, music performance, and self-evaluation accuracy of music performance among middle school instrumentalists. Participants (N = 211) were students at a private middle school located in a metropolitan area of a mid-Atlantic state. Students in…

  11. Performance Evaluation of a High Bandwidth Liquid Fuel Modulation Valve for Active Combustion Control

    NASA Technical Reports Server (NTRS)

    Saus, Joseph R.; DeLaat, John C.; Chang, Clarence T.; Vrnak, Daniel R.

    2012-01-01

    At the NASA Glenn Research Center, a characterization rig was designed and constructed for the purpose of evaluating high bandwidth liquid fuel modulation devices to determine their suitability for active combustion control research. Incorporated into the rig's design are features that approximate conditions similar to those that would be encountered by a candidate device if it were installed on an actual combustion research rig. The dynamic performance measures obtained through testing in the rig are intended to be accurate indicators of expected performance in an actual combustion testing environment. To evaluate how well the characterization rig predicts fuel modulator dynamic performance, characterization rig data were compared with performance data for a fuel modulator candidate when the candidate was in operation during combustion testing. Specifically, the nominal and off-nominal performance data for a magnetostrictive-actuated proportional fuel modulation valve are described. Valve performance data were collected with the characterization rig configured to emulate two different combustion rig fuel feed systems. Fuel mass flows and pressures, fuel feed line lengths, and fuel injector orifice sizes were approximated in the characterization rig. Valve performance data were also collected with the valve modulating the fuel into the two combustor rigs. Comparison of the predicted and actual valve performance data shows that when the valve is operated near its design condition, the characterization rig can appropriately predict the installed performance of the valve. Improvements to the characterization rig and accompanying modeling activities are underway to more accurately predict performance, especially for the devices under development to modulate fuel into the much smaller fuel injectors anticipated in future lean-burning low-emissions aircraft engine combustors.

  12. Operator performance evaluation using multi criteria decision making methods

    NASA Astrophysics Data System (ADS)

    Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Razali, Siti Fatihah

    2014-06-01

    Operator performance evaluation is a very important activity in the labor-intensive manufacturing industry because a company's productivity depends on the performance of its operators. The aims of operator performance evaluation are to give operators feedback on their performance, to increase the company's productivity, and to identify the strengths and weaknesses of each operator. In this paper, six multi-criteria decision making methods are used to evaluate and rank the operators: Analytical Hierarchy Process (AHP), fuzzy AHP (FAHP), ELECTRE, PROMETHEE II, Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR). The performance evaluation is based on six main criteria: competency, experience and skill, teamwork and time punctuality, personal characteristics, capability, and outcome. The study was conducted at one of the SME food manufacturing companies in Selangor. The study found that AHP and FAHP both identified "outcome" as the most important criterion. The results of the operator performance evaluation showed that the same operator was ranked first by all six methods.
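
    As an illustration of one of the six methods named above, the sketch below ranks three hypothetical operators with TOPSIS: the score matrix is vector-normalized and weighted, distances to the ideal and anti-ideal operators are computed, and the closeness coefficient gives the ranking. The scores and the equal weights are invented; in the study the criterion weights come from AHP/FAHP.

      import math

      def topsis(matrix, weights):
          """matrix: rows = alternatives, columns = benefit-type criteria."""
          n_crit = len(weights)
          norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
          weighted = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
          ideal = [max(col) for col in zip(*weighted)]
          anti = [min(col) for col in zip(*weighted)]
          scores = []
          for row in weighted:
              d_pos = math.dist(row, ideal)
              d_neg = math.dist(row, anti)
              scores.append(d_neg / (d_pos + d_neg))
          return scores  # higher = closer to the ideal operator

      if __name__ == "__main__":
          operators = ["Operator A", "Operator B", "Operator C"]
          # columns: competency, experience and skill, teamwork and punctuality,
          #          personal characteristics, capability, outcome (1-10 scale)
          matrix = [[7, 8, 6, 7, 8, 9],
                    [8, 7, 7, 6, 7, 8],
                    [6, 6, 8, 8, 6, 7]]
          weights = [1.0 / 6.0] * 6
          for name, cc in sorted(zip(operators, topsis(matrix, weights)),
                                 key=lambda x: x[1], reverse=True):
              print(f"{name}: closeness coefficient = {cc:.3f}")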

  13. Defining Administrative Tasks, Evaluating Performance, and Developing Skills.

    ERIC Educational Resources Information Center

    Herman, Janice L.; Herman, Jerry J.

    1995-01-01

    To ensure high performance, administrators should develop an articulated structure and process systems approach that identifies the critical success factors (CSFs) of performance for each position; appropriate indicators and scales; and a personal-improvement plan based on last year's evaluation. Once CSFs are identified and written into the…

  14. 48 CFR 3036.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    48 CFR 36.201. Access to reports is through the CPS or the government-wide system, Past Performance... Evaluation of contractor performance. Section 3036.201, Federal Acquisition Regulations System, DEPARTMENT OF HOMELAND...

  15. Chinese Middle School Teachers' Preferences Regarding Performance Evaluation Measures

    ERIC Educational Resources Information Center

    Liu, Shujie; Xu, Xianxuan; Stronge, James H.

    2016-01-01

    Teacher performance evaluation currently is receiving unprecedented attention from policy makers, scholars, and practitioners worldwide. This study is one of the few studies of teacher perceptions regarding teacher performance measures that focus on China. We employed a quantitative dominant mixed research design to investigate Chinese teachers'…

  16. Attractiveness Bias in the Evaluation of Young Pianist's Performances

    ERIC Educational Resources Information Center

    Ryan, Charlene; Costa-Giomi, Eugenia

    2004-01-01

    The purpose of this study was to investigate how the attractiveness bias that influences the judgment of a variety of characteristics and behaviors in infants, children, and adults affects the evaluation of young pianists' performances. The assumption was that both the visual and the audio components of a videotaped musical performance influence…

  17. Expected Evaluation, Goals, and Performance: Mood as Input.

    ERIC Educational Resources Information Center

    Sanna, Lawrence J.; And Others

    1996-01-01

    Research indicates that effortful performances are reduced when participants cannot be evaluated. Hypothesized that mood interacts with goals to attenuate such reductions in performance. As predicted, when participants tried to do as much as they could, those in negative moods put forth more effort and persisted longer than those in positive moods,…

  18. Institutional Evaluation: Can It Contribute to Improving University Performance.

    ERIC Educational Resources Information Center

    Lindsay, Alan

    1982-01-01

    Problems involved in assessing the performance of universities and their subunits and the strengths and weaknesses of available evaluation techniques are examined. The Australian Williams Report recommending the extension of research into institutional and system performance is reviewed along with other literature. (MSE)

  19. New glycoproteomics software, GlycoPep Evaluator, generates decoy glycopeptides de novo and enables accurate false discovery rate analysis for small data sets.

    PubMed

    Zhu, Zhikai; Su, Xiaomeng; Go, Eden P; Desaire, Heather

    2014-09-16

    Glycoproteins are biologically significant large molecules that participate in numerous cellular activities. In order to obtain site-specific protein glycosylation information, intact glycopeptides, with the glycan attached to the peptide sequence, are characterized by tandem mass spectrometry (MS/MS) methods such as collision-induced dissociation (CID) and electron transfer dissociation (ETD). While several automated tools are emerging, there is no consensus in the field about the best way to determine the reliability of the tools and/or to provide the false discovery rate (FDR). A common approach to calculating FDRs for glycopeptide analysis, adopted from the target-decoy strategy in proteomics, employs a decoy database that is created based on the target protein sequence database. Nonetheless, this approach is not optimal for measuring the confidence of N-linked glycopeptide matches, because the glycopeptide data set is considerably smaller than that of peptides, and the requirement of a consensus sequence for N-glycosylation further limits the number of possible decoy glycopeptides tested in a database search. To address the need to accurately determine FDRs for automated glycopeptide assignments, we developed GlycoPep Evaluator (GPE), a tool that helps to measure FDRs in identifying glycopeptides without using a decoy database. GPE generates decoy glycopeptides de novo for every target glycopeptide, in a 1:20 target-to-decoy ratio. The decoys, along with target glycopeptides, are scored against the ETD data, from which FDRs can be calculated accurately, based on the number of decoy matches and the ratio of the number of targets to decoys, for small data sets. GPE is freely accessible for download and can work with any search engine that interprets ETD data of N-linked glycopeptides. The software is provided at https://desairegroup.ku.edu/research.
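
    A minimal sketch of the decoy-based FDR estimate described above follows: with a 1:20 target-to-decoy ratio, the number of decoy matches passing the score threshold is scaled back by the targets-to-decoys ratio before being divided by the number of target matches. The counts are invented, and GPE's actual scoring of ETD spectra is not reproduced here.

      def estimate_fdr(n_target_hits, n_decoy_hits, decoys_per_target=20):
          """FDR ~ (decoy hits scaled to the target search space) / target hits."""
          if n_target_hits == 0:
              return 0.0
          expected_false_targets = n_decoy_hits / decoys_per_target
          return expected_false_targets / n_target_hits

      if __name__ == "__main__":
          # e.g. 150 glycopeptide-spectrum matches and 30 decoy matches pass the
          # same score threshold
          print(f"Estimated FDR: {estimate_fdr(150, 30):.1%}")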

  20. Influence of Primary Performance Instrument and Education Level on Music Performance Evaluation

    ERIC Educational Resources Information Center

    Hewitt, Michael P.

    2007-01-01

    The purpose of this study was to examine the impact that education level and primary performance instrument have on the evaluation of music performances. Participants (N = 423) in the study were middle school (n = 187), high school (n = 113), and college (n = 123) musicians who performed on either a brass (n = 115) or a nonbrass (n = 301)…

  1. Evaluating hospital performance based on excess cause-specific incidence

    PubMed Central

    Van Rompaye, Bart; Eriksson, Marie; Goetghebeur, Els

    2015-01-01

    Formal evaluation of hospital performance in specific types of care is becoming an indispensable tool for quality assurance in the health care system. When the prime concern lies in reducing the risk of a cause-specific event, we propose to evaluate performance in terms of an average excess cumulative incidence, referring to the center's observed patient mix. Its intuitive interpretation helps give meaning to the evaluation results and facilitates the determination of important benchmarks for hospital performance. We apply it to the evaluation of cerebrovascular deaths after stroke in Swedish stroke centers, using data from Riksstroke, the Swedish stroke registry. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25640288

  2. Sensitive, accurate and rapid detection of trace aliphatic amines in environmental samples with ultrasonic-assisted derivatization microextraction using a new fluorescent reagent for high performance liquid chromatography.

    PubMed

    Chen, Guang; Liu, Jianjun; Liu, Mengge; Li, Guoliang; Sun, Zhiwei; Zhang, Shijuan; Song, Cuihua; Wang, Hua; Suo, Yourui; You, Jinmao

    2014-07-25

    A new fluorescent reagent, 1-(1H-imidazol-1-yl)-2-(2-phenyl-1H-phenanthro[9,10-d]imidazol-1-yl)ethanone (IPPIE), is synthesized, and a simple pretreatment based on ultrasonic-assisted derivatization microextraction (UDME) with IPPIE is proposed for the selective derivatization of 12 aliphatic amines (C1: methylamine to C12: dodecylamine) in complex matrix samples (irrigation water, river water, waste water, cultivated soil, riverbank soil and riverbed soil). Under the optimal experimental conditions (solvent: ACN-HCl, catalyst: none, molar ratio: 4.3, time: 8 min, temperature: 80°C), a micro amount of sample (40 μL; 5 mg) can be pretreated in only 10 min, with no preconcentration, evaporation or other additional manual operations required. The interfering substances (aromatic amines, aliphatic alcohols and phenols) give derivatization yields of <5%, causing insignificant matrix effects (<4%). IPPIE-analyte derivatives are separated by high performance liquid chromatography (HPLC) and quantified by fluorescence detection (FD). Very low instrumental detection limits (IDL: 0.66-4.02 ng/L) and method detection limits (MDL: 0.04-0.33 ng/g; 5.96-45.61 ng/L) are achieved. Analytes are further identified from adjacent peaks by on-line ion trap mass spectrometry (MS), thereby avoiding additional operations to exclude impurities. With this UDME-HPLC-FD-MS method, the accuracy (-0.73-2.12%), precision (intra-day: 0.87-3.39%; inter-day: 0.16-4.12%), recovery (97.01-104.10%) and sensitivity were significantly improved. Successful applications in environmental samples demonstrate the superiority of this method for the sensitive, accurate and rapid determination of trace aliphatic amines in micro amounts of complex samples. PMID:24925451

  3. The creation of performance evaluation indicators through a focus group.

    PubMed

    Gonçalves, Vera Lúcia Mira; Lima, Antônio Fernandes Costa; Crisitano, Nanci; Hashimoto, Martha Rumiko Kaio

    2007-01-01

    This study was developed from an action research perspective and aimed to create professional performance evaluation indicators for nursing technicians and auxiliaries working at the University Hospital of the University of Sao Paulo. Data were collected through the focus group technique, involving 19 secondary-level professionals representing different Nursing Department units. During seven meetings, participants elaborated definitions of seven indicators that they and their peers considered relevant for portraying adequate performance in these professional categories. In their reports, they stated that the adopted strategy allowed them to express themselves about the meanings and feelings attributed to the performance evaluation process. In assessing the activity, the focus group members said that, besides feeling more prepared to face problems related to performance evaluation, they also felt valued for their participation in composing the new instrument.

  4. An integrated evaluation for the performance of clinical engineering department.

    PubMed

    Yousry, Ahmed M; Ouda, Bassem K; Eldeib, Ayman M

    2014-01-01

    Performance benchmarking has become a very important component of successful organizations and should be used by the Clinical Engineering Department (CED) in hospitals. Many researchers have identified essential mainstream performance indicators needed to improve a CED's performance; these studies rely on indicators that use the CED's own database to evaluate its performance. In this work, we argue that those indicators alone are insufficient for hospitals and that additional indicators should be included to improve the evaluation accuracy. Therefore, we added new indicators: technical/maintenance indicators, economic indicators, intrinsic criticality indicators, basic hospital indicators, equipment acquisition indicators, and safety indicators. Data were collected from 10 hospitals covering different types of healthcare organizations. We developed a software tool that analyzes the collected data to provide a score for each CED under evaluation. Our results indicate that there is an average gap of 67% between the CEDs' performance and the ideal target. The reasons for the noncompliance are discussed in order to improve the performance of the CEDs under evaluation.

  5. Physical Evaluation of Cleaning Performance: We Are Only Fooling Ourselves

    NASA Technical Reports Server (NTRS)

    Pratz, Earl; McCool, A. (Technical Monitor)

    2000-01-01

    Surface cleaning processes are normally evaluated using visual physical properties such as discolorations, streaking, staining and water-break-free conditions. There is an assumption that these physical methods will evaluate all surfaces all the time for all subsequent operations. We have found that these physical methods are lacking in sensitivity and selectivity with regard to surface residues and subsequent process performance. We will report several conditions where evaluations using visual physical properties are lacking. We will identify possible alternative methods and future needs for surface evaluations.

  6. Seismic design and evaluation criteria based on target performance goals

    SciTech Connect

    Murray, R.C.; Nelson, T.A.; Kennedy, R.P.; Short, S.A.

    1994-04-01

    The Department of Energy utilizes deterministic seismic design/evaluation criteria developed to achieve probabilistic performance goals. These seismic design and evaluation criteria are intended to apply equally to the design of new facilities and to the evaluation of existing facilities, and to cover the design and evaluation of buildings, equipment, piping, and other structures. Four separate sets of seismic design/evaluation criteria have been presented, each with a different performance goal. In all these criteria, earthquake loading is selected from seismic hazard curves on a probabilistic basis, but the seismic response evaluation methods and acceptable behavior limits are deterministic approaches with which design engineers are familiar. For analytical evaluations, conservatism has been introduced through the use of conservative inelastic demand-capacity ratios combined with ductile detailing requirements, through the use of minimum specified material strengths and conservative code capacity equations, and through the use of a seismic scale factor. For evaluation by testing or by experience data, conservatism has been introduced through an increased scale factor applied to the prescribed design/evaluation input motion.
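
    A hypothetical sketch of the kind of deterministic acceptance check described above: the elastic seismic demand is scaled, reduced by an inelastic demand-capacity (ductility) factor, and compared against the code capacity based on minimum specified material strengths. The factor values are placeholders, not the values prescribed by the DOE criteria.

      def seismic_acceptance(elastic_demand, code_capacity,
                             scale_factor=1.25, inelastic_factor=1.5):
          """Return (demand/capacity ratio, acceptable?) for one structural element."""
          effective_demand = scale_factor * elastic_demand / inelastic_factor
          ratio = effective_demand / code_capacity
          return ratio, ratio <= 1.0

      if __name__ == "__main__":
          ratio, ok = seismic_acceptance(elastic_demand=420.0, code_capacity=400.0)
          print(f"D/C = {ratio:.2f} -> {'acceptable' if ok else 'not acceptable'}")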

  7. FY 1997 performance evaluation and incentive fee agreement. Annual report

    SciTech Connect

    1997-04-01

    FY 1997 represents the second full year utilizing a results-oriented, performance-based contract. This document describes the critical outcomes, objectives, performance indicators, expected levels of performance, and the basis for the evaluation of PNNL performance for Oct. 1, 1996-Sept. 30, 1997, as required by Articles H-24 and H-25 of the contract. Section I provides the information regarding the determination of the overall performance rating for PNNL. In Section II, six critical outcomes are defined that serve as the basis for the overall management of PNNL: environmental molecular sciences laboratory, environmental technology, scientific excellence, environment/safety and health operations, leadership/management, and economic development (creating new businesses). Section III describes the commitments for documenting and reporting the PNNL self-evaluation. Section IV states that discussions regarding the FY97 fee are still ongoing.

  8. Hanford performance evaluation program for Hanford site analytical services

    SciTech Connect

    Markel, L.P.

    1995-09-01

    The U.S. Department of Energy (DOE) Order 5700.6C, Quality Assurance, and Title 10 of the Code of Federal Regulations, Part 830.120, Quality Assurance Requirements, state that it is the responsibility of DOE contractors to ensure that "quality is achieved and maintained by those who have been assigned the responsibility for performing the work." The Hanford Analytical Services Quality Assurance Plan (HASQAP) is designed to meet the needs of the Richland Operations Office (RL) for maintaining a consistent level of quality for the analytical chemistry services provided by contractor and commercial analytical laboratory operations. Therefore, services supporting Hanford environmental monitoring, environmental restoration, and waste management analytical services shall meet appropriate quality standards. This performance evaluation program will monitor the quality standards of all analytical laboratories supporting the Hanford Site, including on-site and off-site laboratories. The monitoring and evaluation of laboratory performance can be completed by the use of several tools, and this program will discuss the tools that will be utilized for laboratory performance evaluations. Revision 0 will primarily focus on presently available programs using readily available performance evaluation materials provided by DOE, EPA or commercial sources. Discussion of project-specific PE materials and evaluations will be described in Section 9.0 and Appendix A.

  9. Performance evaluation of infrared imaging system in field test

    NASA Astrophysics Data System (ADS)

    Wang, Chensheng; Guo, Xiaodong; Ren, Tingting; Zhang, Zhi-jie

    2014-11-01

    Infrared imaging systems have been applied widely in both military and civilian fields. Because infrared imagers come in various types with different parameters, system manufacturers and customers have a great demand for evaluating the performance of IR imaging systems with a standard tool or platform. Since the first-generation IR imager was developed, the standard assessment method has been the MRTD or related improved methods, which are not perfectly suited to current linear-scanning imagers or 2D staring imagers based on FPA detectors. To address this problem, this paper describes an evaluation method based on the triangular orientation discrimination metric, which is considered an effective and emerging method for evaluating the overall performance of EO systems. To realize the evaluation in a field test, an experimental instrument was developed. Considering the importance of the operational environment, the field test is carried out in a practical atmospheric environment. The test imagers include a panoramic imaging system and staring imaging systems with different optics and detector parameters (both cooled and uncooled). After describing the instrument and experimental setup, the experimental results are presented. The target range performance is analyzed and discussed. In the data analysis, the article gives the range prediction values obtained from the TOD method, the MRTD method, and the practical experiment, and discusses the results. The experimental results demonstrate the effectiveness of this evaluation tool, which can serve as a platform for uniform performance prediction.

  10. Evaluating supplier quality performance using analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Kalimuthu Rajoo, Shanmugam Sundram; Kasim, Maznah Mat; Ahmad, Nazihah

    2013-09-01

    This paper elaborates on the importance of evaluating supplier quality performance to an organization. Supplier quality performance evaluation reflects the actual performance of the supplier exhibited at the customer's end. It is critical in enabling the organization to determine areas for improvement and thereafter work with the supplier to close the gaps. The customer's success partly depends on the supplier's quality performance. Key criteria such as quality, cost, delivery, technology support and customer service are categorized as the main factors contributing to a supplier's quality performance. Eighteen suppliers manufacturing automotive application parts were evaluated in 2010 using a weighted-point system. Several suppliers received the same rating, which led to tied rankings. The Analytical Hierarchy Process (AHP), a user-friendly decision making tool for complex and multi-criteria problems, was then used to evaluate the suppliers' quality performance as an alternative to the weighted-point system used for the 18 suppliers. The consistency ratio was checked for the criteria and sub-criteria. The final AHP results contained no overlapping ratings and therefore yielded a better decision-making methodology than the weighted-point rating system.
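
    A minimal sketch of the consistency check mentioned above follows: criterion weights are approximated from the geometric means of the pairwise-comparison rows, and the consistency ratio CR = CI / RI is computed, with CR below 0.1 conventionally taken as acceptable. The 5x5 comparison matrix (quality, cost, delivery, technology support, customer service) is invented for illustration and is not the matrix from the study.

      import math

      RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's RI values

      def ahp_weights_and_cr(A):
          n = len(A)
          weights = [math.prod(row) ** (1.0 / n) for row in A]  # geometric-mean method
          total = sum(weights)
          weights = [w / total for w in weights]
          # approximate the principal eigenvalue from the weighted row sums
          lam = sum(sum(A[i][j] * weights[j] for j in range(n)) / weights[i]
                    for i in range(n)) / n
          ci = (lam - n) / (n - 1)          # consistency index
          return weights, ci / RANDOM_INDEX[n]

      if __name__ == "__main__":
          A = [[1,   3,   5,   7,   7],
               [1/3, 1,   3,   5,   5],
               [1/5, 1/3, 1,   3,   3],
               [1/7, 1/5, 1/3, 1,   1],
               [1/7, 1/5, 1/3, 1,   1]]
          w, cr = ahp_weights_and_cr(A)
          print("weights:", [round(x, 3) for x in w])
          print(f"consistency ratio: {cr:.3f} ({'OK' if cr < 0.1 else 'revise judgments'})")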

  11. Accurate Analysis and Evaluation of Acidic Plant Growth Regulators in Transgenic and Nontransgenic Edible Oils with Facile Microwave-Assisted Extraction-Derivatization.

    PubMed

    Liu, Mengge; Chen, Guang; Guo, Hailong; Fan, Baolei; Liu, Jianjun; Fu, Qiang; Li, Xiu; Lu, Xiaomin; Zhao, Xianen; Li, Guoliang; Sun, Zhiwei; Xia, Lian; Zhu, Shuyun; Yang, Daoshan; Cao, Ziping; Wang, Hua; Suo, Yourui; You, Jinmao

    2015-09-16

    Determination of plant growth regulators (PGRs) in a signal transduction system (STS) is significant for transgenic food safety but may be challenged by poor accuracy and analyte instability. In this work, a microwave-assisted extraction-derivatization (MAED) method is developed for six acidic PGRs in oil samples, allowing an efficient (<1.5 h) and facile (one-step) pretreatment. Accuracies are greatly improved, particularly for gibberellin A3 (-2.72 to -0.65%) as compared with those reported (-22 to -2%). Excellent selectivity and quite low detection limits (0.37-1.36 ng mL(-1)) are enabled by fluorescence detection-mass spectrum monitoring. Results show significant differences in acidic PGRs between transgenic and nontransgenic oils, particularly for 1-naphthaleneacetic acid (1-NAA), implying PGR-induced variations in components and genes. This study provides, for the first time, an accurate and efficient determination of labile PGRs involved in STS and a promising concept for objectively evaluating the safety of transgenic foods.

  12. Tools for evaluating team performance in simulation-based training

    PubMed Central

    Rosen, Michael A; Weaver, Sallie J; Lazzara, Elizabeth H; Salas, Eduardo; Wu, Teresa; Silvestri, Salvatore; Schiebel, Nicola; Almeida, Sandra; King, Heidi B

    2010-01-01

    Teamwork training constitutes one of the core approaches for moving healthcare systems toward increased levels of quality and safety, and simulation provides a powerful method of delivering this training, especially for fast-paced and dynamic specialty areas such as Emergency Medicine. Team performance measurement and evaluation plays an integral role in ensuring that simulation-based training for teams (SBTT) is systematic and effective. However, this component of SBTT systems is frequently overlooked. This article addresses this gap by providing a review and practical introduction to the process of developing and implementing evaluation systems in SBTT. First, an overview of team performance evaluation is provided. Second, best practices for measuring team performance in simulation are reviewed. Third, some of the prominent measurement tools in the literature are summarized and discussed relative to the best practices. Subsequently, implications of the review are discussed for the practice of training teamwork in Emergency Medicine. PMID:21063558

  13. Experimental Evaluation of High Performance Integrated Heat Pump

    SciTech Connect

    Miller, William A; Berry, Robert; Durfee, Neal; Baxter, Van D

    2016-01-01

    Integrated heat pump (IHP) technology provides significant potential for energy savings and comfort improvement for residential buildings. In this study, we evaluate the performance of a high performance IHP that provides space heating, cooling, and water heating services. Experiments were conducted according to ASHRAE Standard 206-2013, in which 24 test conditions were identified in order to evaluate the IHP performance indices based on the airside performance. Empirical curve fits of the unit's compressor maps are used in conjunction with saturated condensing and evaporating refrigerant conditions to deduce the refrigerant mass flow rate, which, in turn, was used to evaluate the refrigerant-side performance as a check on the airside performance. Heat pump (compressor, fans, and controls) and water pump power were measured separately per the requirements of Standard 206. The system was charged per the system manufacturer's specifications. System test results are presented for each operating mode. The overall IHP performance metrics are determined from the test results per the Standard 206 calculation procedures.

  14. Performance Evaluation of Video Streaming in Vehicular Adhoc Network

    NASA Astrophysics Data System (ADS)

    Rahim, Aneel; Khan, Zeeshan Shafi; Bin Muhaya, Fahad

    In Vehicular Ad-Hoc Networks (VANETs), wireless-equipped vehicles form a temporary network for sharing information among vehicles. Secure multimedia communication enhances passenger safety by providing a visual picture of accidents and dangerous situations. In this paper we evaluate the performance of multimedia data in a VANET scenario and consider the impact of a malicious node, using NS-2 and the EvalVid video evaluation tool.

  15. Evaluating the sound quality of reproduction systems and performance spaces

    NASA Astrophysics Data System (ADS)

    Griesinger, David

    2003-10-01

    Evaluation of sound reproduction systems through rapid A/B tests has led to enormous and rapid progress in system design. The sound quality of performance spaces is almost always evaluated through long term listening, and spaces are compared through remembered characteristics. Progress has not been rapid. This paper will present methods for rapid A/B comparisons of spaces and will demonstrate the results of such comparisons. The results can be very different from remembered impressions.

  16. A performance evaluation of the IBM 370/XT personal computer

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1984-01-01

    An evaluation of the IBM 370/XT personal computer is given. This evaluation focuses primarily on the use of the 370/XT for scientific and technical applications and applications development. A measurement of the capabilities of the 370/XT was performed by means of test programs which are presented. Also included is a review of facilities provided by the operating system (VM/PC), along with comments on the IBM 370/XT hardware configuration.

  17. Smith Newton Vehicle Performance Evaluation - 3rd Quarter 2012 (Brochure)

    SciTech Connect

    Not Available

    2013-03-01

    The Fleet Test and Evaluation Team at the U.S. Department of Energy's National Renewable Energy Laboratory is evaluating and documenting the performance of electric and plug-in hybrid electric drive systems in medium-duty trucks across the nation. Through this project, Smith Electric Vehicles will build and deploy 500 all-electric medium-duty trucks. The trucks will be deployed in diverse climates across the country.

  18. Performance evaluation of commercial radionuclide calibrators in Indonesian hospitals.

    PubMed

    Candra, Hermawan; Marsoem, Pujadi; Wurdiyanto, Gatot

    2012-09-01

    The dose calibrator is one of the supporting instruments in the field of nuclear medicine. In hospitals, dose calibrators are used for activity measurement of radiopharmaceuticals before they are administered to patients. A comparison of activity measurements of (131)I and (99m)Tc with dose calibrators was organized in Indonesia during 2007-2010 with the aim of obtaining information on dose calibrator performance in hospitals. Seven Indonesian hospitals participated in this comparison. The measurement results were evaluated using the E(n) criteria. The results presented in this paper facilitated the evaluation of dose calibrator performance at several hospitals.
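
    A minimal sketch of the En criterion referred to above follows: the difference between a hospital's activity measurement and the reference value is divided by the combined expanded uncertainty, and |En| <= 1 is taken as satisfactory. The activities and uncertainties below are invented for illustration.

      import math

      def en_score(lab_value, ref_value, lab_uncertainty, ref_uncertainty):
          """En number for one measurement (expanded uncertainties, e.g. k = 2)."""
          return (lab_value - ref_value) / math.sqrt(lab_uncertainty ** 2 +
                                                     ref_uncertainty ** 2)

      if __name__ == "__main__":
          # Hypothetical (99m)Tc comparison: hospital reads 195 MBq +/- 10 MBq,
          # reference value 200 MBq +/- 4 MBq.
          en = en_score(195.0, 200.0, 10.0, 4.0)
          print(f"En = {en:.2f} -> {'satisfactory' if abs(en) <= 1.0 else 'unsatisfactory'}")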

  19. Statistical scoring procedures applicable to laboratory performance evaluation

    SciTech Connect

    Streets, W Elane

    2008-11-01

    Two statistical scoring procedures based on p-values have been developed to evaluate the overall performance of analytical laboratories performing environmental measurements. The overall scores of bias and standing are used to determine how consistently a laboratory is able to measure the true (unknown) value correctly over time. The overall scores of precision and standing are used to determine how well a laboratory is able to reproduce its measurements in the long run. Criteria are established for qualitatively labeling measurements as Acceptable, Warning, and Not Acceptable and for identifying areas where laboratories should re-evaluate their measurement procedures. These statistical scoring procedures are applied to two real environmental data sets.
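
    As a purely hypothetical illustration of p-value-based labelling of this kind, the sketch below tests each reported result against the known reference value and assigns Acceptable / Warning / Not Acceptable by p-value thresholds, which could then be tallied over time. The z-test and the thresholds are placeholders and do not reproduce the scoring procedures developed in the report.

      from statistics import NormalDist

      def label_result(measured, reference, std_uncertainty, warn_p=0.05, fail_p=0.01):
          """Label one measurement by the two-sided p-value of its deviation."""
          z = (measured - reference) / std_uncertainty
          p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
          if p >= warn_p:
              return "Acceptable"
          return "Warning" if p >= fail_p else "Not Acceptable"

      if __name__ == "__main__":
          history = [(10.2, 10.0, 0.2), (10.9, 10.0, 0.2), (9.7, 10.0, 0.2)]
          for measured, reference, u in history:
              print(measured, "->", label_result(measured, reference, u))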

  20. Effects of Performer Attractiveness, Stage Behavior, and Dress on Evaluation of Children's Piano Performances.

    ERIC Educational Resources Information Center

    Wapnick, Joel; Mazza, Jolan Kovacs; Darrow, Alice Ann

    2000-01-01

    Examines whether selected nonmusical attributes of 20 sixth-grade pianists would affect ratings of their performances by 123 musically trained evaluators. States that evaluators in the visual group viewed a videotape without the sound, rating the pianists on appropriateness of dress, stage behavior, and physical attractiveness. The audio and…

  1. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  2. Lithographic performance evaluation of a contaminated EUV mask after cleaning

    SciTech Connect

    George, Simi; Naulleau, Patrick; Okoroanyanwu, Uzodinma; Dittmar, Kornelia; Holfeld, Christian; Wuest, Andrea

    2009-11-16

    The effect of surface contamination and subsequent mask surface cleaning on the lithographic performance of an EUV mask is investigated. Printed 40 nm and 50 nm line and space (L/S) patterns from SEMATECH's Berkeley micro-field exposure tool (MET) are evaluated to compare the performance of a contaminated and cleaned mask to that of an uncontaminated mask. Since the two EUV masks have distinct absorber architectures, optical imaging models and aerial image calculations were completed to determine any expected differences in performance. Measured and calculated Bossung curves, process windows, and exposure latitudes for the two sets of L/S patterns are compared to determine how contamination and cleaning impact the lithographic performance of EUV masks. The observed differences in mask performance are shown to be insignificant, indicating that the cleaning process did not appreciably affect mask performance.

  3. High performance APCS conceptual design and evaluation scoping study

    SciTech Connect

    Soelberg, N.; Liekhus, K.; Chambers, A.; Anderson, G.

    1998-02-01

    This Air Pollution Control System (APCS) Conceptual Design and Evaluation study was conducted to evaluate a high-performance air pollution control (APC) system for minimizing air emissions from mixed waste thermal treatment systems. Seven variations of high-performance APCS designs were conceptualized using several design objectives. One of the system designs was selected for detailed process simulation using ASPEN PLUS to determine material and energy balances and evaluate performance. Installed system capital costs were also estimated. Sensitivity studies were conducted to evaluate the incremental cost and benefit of added carbon adsorber beds for mercury control, selective catalytic reduction for NOx control, and offgas retention tanks for holding the offgas until sample analysis is conducted to verify that the offgas meets emission limits. Results show that the high-performance dry-wet APCS can easily meet all expected emission limits except possibly for mercury. The capability to achieve high levels of mercury control (potentially necessary for thermally treating some DOE mixed streams) could not be validated using current performance data for mercury control technologies. The engineering approach and ASPEN PLUS modeling tool developed and used in this study identified APC equipment and system performance, size, cost, and other issues that are not yet resolved. These issues need to be addressed in feasibility studies and conceptual designs for new facilities or for determining how to modify existing facilities to meet expected emission limits. The ASPEN PLUS process simulation, with current and refined input assumptions and calculations, can be used to provide system performance information for decision-making, identifying the best options, estimating costs, reducing the potential for emission violations, providing information needed for waste flow analysis, incorporating new APCS technologies into existing designs, or performing facility design and permitting activities.

  4. Smith Newton Vehicle Performance Evaluation - Gen2 - 2013 (Brochure)

    SciTech Connect

    Not Available

    2014-04-01

    The Fleet Test and Evaluation Team at the U.S. Department of Energy's National Renewable Energy Laboratory is evaluating and documenting the performance of electric and plug-in hybrid electric drive systems in medium-duty trucks across the nation. U.S. companies participating in this evaluation project received funding from the American Recovery and Reinvestment Act to cover part of the cost of purchasing these vehicles. Through this project, Smith Electric Vehicles is building and deploying 500 all-electric medium-duty trucks that will be deployed by a variety of companies in diverse climates across the country.

  5. Smith Newton Vehicle Performance Evaluation - Gen 2 - Cumulative (Brochure)

    SciTech Connect

    Not Available

    2014-08-01

    The Fleet Test and Evaluation Team at the U.S. Department of Energy's National Renewable Energy Laboratory is evaluating and documenting the performance of electric and plug-in hybrid electric drive systems in medium-duty trucks across the nation. U.S. companies participating in this evaluation project received funding from the American Recovery and Reinvestment Act to cover part of the cost of purchasing these vehicles. Through this project, Smith Electric Vehicles is building and deploying 500 all-electric medium-duty trucks that will be deployed by a variety of companies in diverse climates across the country.

  6. Navistar eStar Vehicle Performance Evaluation - Cumulative (Brochure)

    SciTech Connect

    Not Available

    2014-08-01

    The Fleet Test and Evaluation Team at the U.S. Department of Energy's National Renewable Energy Laboratory is evaluating and documenting the performance of electric and plug-in hybrid electric drive systems in medium duty trucks across the nation. U.S. companies participating in this evaluation project received funding from the American Recovery and Reinvestment Act to cover part of the cost of purchasing these vehicles. Through this project, Navistar will build and deploy all-electric medium-duty trucks. The trucks will be deployed in diverse climates across the country.

  7. Smith Newton Vehicle Performance Evaluation - 1st Quarter 2014 (Brochure)

    SciTech Connect

    Not Available

    2014-04-01

    The Fleet Test and Evaluation Team at the U.S. Department of Energy's National Renewable Energy Laboratory is evaluating and documenting the performance of electric and plug-in hybrid electric drive systems in medium-duty trucks across the nation. U.S. companies participating in this evaluation project received funding from the American Recovery and Reinvestment Act to cover part of the cost of purchasing these vehicles. Through this project, Smith Electric Vehicles is building and deploying 500 all-electric medium-duty trucks that will be deployed by a variety of companies in diverse climates across the country.

  8. Evaluation of the Energy Performance of Six High-Performance Buildings: Preprint

    SciTech Connect

    Torcellini, P. A.; Pless, S.; Crawley, D. B.

    2005-04-01

    The energy performance of six high-performance buildings around the United States was monitored and evaluated by the NREL. The six buildings include the Visitor Center at Zion National Park, the NREL Thermal Test Facility, the Chesapeake Bay Foundation's Merrill Center, the BigHorn Home Improvement Center, the Cambria Office Building, and the Oberlin College Lewis Center.

  9. Competency-Based Performance Appraisals: Improving Performance Evaluations of School Nutrition Managers and Assistants/Technicians

    ERIC Educational Resources Information Center

    Cross, Evelina W.; Asperin, Amelia Estepa; Nettles, Mary Frances

    2009-01-01

    Purpose: The purpose of the research was to develop a competency-based performance appraisal resource for evaluating school nutrition (SN) managers and assistants/technicians. Methods: A two-phased process was used to develop the competency-based performance appraisal resource for SN managers and assistants/technicians. In Phase I, draft…

  10. Evaluating the Evaluator: Development, Field Testing, and Implications of a Client-Based Method for Assessing Evaluator Performance

    ERIC Educational Resources Information Center

    Dowell, Kathleen; Haley, Jean; Doino-Ingersoll, Jo Ann

    2006-01-01

    Improved services and client satisfaction are key aspects of independent evaluation consultants' practices. For evaluators to deliver the highest quality services possible, they should regularly monitor their performance as evaluators, as well as the satisfaction of their clients. The client feedback form (CFF) was developed to gather performance…

  11. Performance evaluation of an all-fiber image-reject homodyne coherent Doppler wind lidar

    NASA Astrophysics Data System (ADS)

    Abari, C. F.; Pedersen, A. T.; Dellwik, E.; Mann, J.

    2015-04-01

    The main purpose of this study is to evaluate the near-zero wind velocity measurement performance of two separate 1.5 μm all-fiber coherent Doppler lidars (CDL). The performance characterization is carried out by presenting the results of two separate atmospheric field campaigns. In one campaign, a recently developed continuous wave (CW) CDL benefiting from an image-reject front-end was deployed. The other campaign utilized a different CW CDL with a heterodyne receiver and intermediate frequency (IF) sampling. In both field campaigns the results are compared against a sonic anemometer as the reference instrument. The measurements clearly show that the image-reject architecture results in more accurate measurements of radial wind velocities close to zero. Close-to-zero velocities are usually associated with the vertical component of the wind and are important to characterize.
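
    A minimal sketch of why the image-reject (in-phase/quadrature) front end helps near zero velocity: the complex baseband signal I + jQ preserves the sign of the Doppler shift, so an FFT peak at a negative frequency maps directly to a negative radial velocity through v = lambda * f_D / 2. The sample rate, sample count, and simulated velocity below are arbitrary illustrations, not parameters of the instruments in the study.

      import numpy as np

      WAVELENGTH = 1.565e-6   # m, typical for a 1.5 um fiber lidar
      FS = 50e6               # Hz, assumed complex sample rate

      def radial_velocity(iq_samples, fs=FS, wavelength=WAVELENGTH):
          """Signed radial velocity from complex (image-rejected) baseband samples."""
          spectrum = np.fft.fftshift(np.fft.fft(iq_samples))
          freqs = np.fft.fftshift(np.fft.fftfreq(len(iq_samples), d=1.0 / fs))
          f_doppler = freqs[np.argmax(np.abs(spectrum))]
          return wavelength * f_doppler / 2.0

      if __name__ == "__main__":
          # Simulate a weak return at -0.4 m/s, i.e. a small negative Doppler shift
          v_true = -0.4
          f_d = 2.0 * v_true / WAVELENGTH
          t = np.arange(4096) / FS
          rng = np.random.default_rng(0)
          signal = (np.exp(2j * np.pi * f_d * t)
                    + 0.5 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)))
          print(f"Estimated radial velocity: {radial_velocity(signal):+.2f} m/s")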

  12. Evaluation of GPFS Connectivity Over High-Performance Networks

    SciTech Connect

    Srinivasan, Jay; Canon, Shane; Andrews, Matthew

    2009-02-17

    We present the results of an evaluation of new features of the latest release of IBM's GPFS filesystem (v3.2). We investigate different ways of connecting to a high-performance GPFS filesystem from a remote cluster using Infiniband (IB) and 10 Gigabit Ethernet. We also examine the performance of the GPFS filesystem with both serial and parallel I/O. Finally, we also present our recommendations for effective ways of utilizing high-bandwidth networks for high-performance I/O to parallel file systems.

  13. A new method to evaluate human-robot system performance

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Weisbin, C. R.

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.

  14. A new method to evaluate human-robot system performance.

    PubMed

    Rodriguez, G; Weisbin, C R

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.

  15. Implementation and Performance Evaluation Using the Fuzzy Network Balanced Scorecard

    ERIC Educational Resources Information Center

    Tseng, Ming-Lang

    2010-01-01

    The balanced scorecard (BSC) is a multi-criteria evaluation concept that highlights the importance of performance measurement. However, although there is an abundance of literature on the BSC framework, there is a scarcity of literature regarding how the framework with dependence and interactive relationships should be properly implemented in…

  16. Image processing system performance prediction and product quality evaluation

    NASA Technical Reports Server (NTRS)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  17. 48 CFR 1536.201 - Evaluation of contracting performance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... with EPA's Freedom of Information Act procedures at 40 CFR part 2. ... report to the Quality Assurance Branch, Office of Acquisition Management. The Quality Assurance Section will file the form in the contractor performance evaluation files which it maintains. (e) The...

  18. 48 CFR 1536.201 - Evaluation of contracting performance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... with EPA's Freedom of Information Act procedures at 40 CFR part 2. ... report to the Quality Assurance Branch, Office of Acquisition Management. The Quality Assurance Section will file the form in the contractor performance evaluation files which it maintains. (e) The...

  19. 48 CFR 1536.201 - Evaluation of contracting performance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... with EPA's Freedom of Information Act procedures at 40 CFR part 2. ... report to the Quality Assurance Branch, Office of Acquisition Management. The Quality Assurance Section will file the form in the contractor performance evaluation files which it maintains. (e) The...

  20. 48 CFR 1536.201 - Evaluation of contracting performance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... with EPA's Freedom of Information Act procedures at 40 CFR part 2. ... report to the Quality Assurance Branch, Office of Acquisition Management. The Quality Assurance Section will file the form in the contractor performance evaluation files which it maintains. (e) The...

  1. 10 CFR 1045.9 - RD classification performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false RD classification performance evaluation. 1045.9 Section 1045.9 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NUCLEAR CLASSIFICATION AND DECLASSIFICATION Program Management of the Restricted Data and Formerly Restricted Data Classification System § 1045.9...

  2. 10 CFR 1045.9 - RD classification performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false RD classification performance evaluation. 1045.9 Section 1045.9 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NUCLEAR CLASSIFICATION AND DECLASSIFICATION Program Management of the Restricted Data and Formerly Restricted Data Classification System § 1045.9...

  3. 10 CFR 1045.9 - RD classification performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false RD classification performance evaluation. 1045.9 Section 1045.9 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NUCLEAR CLASSIFICATION AND DECLASSIFICATION Program Management of the Restricted Data and Formerly Restricted Data Classification System § 1045.9...

  4. 10 CFR 1045.9 - RD classification performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false RD classification performance evaluation. 1045.9 Section 1045.9 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NUCLEAR CLASSIFICATION AND DECLASSIFICATION Program Management of the Restricted Data and Formerly Restricted Data Classification System § 1045.9...

  5. Faculty Performance Evaluation: The CIPP-SAPS Model.

    ERIC Educational Resources Information Center

    Mitcham, Maralynne

    1981-01-01

    The issues of faculty performance evaluation for allied health professionals are addressed. Daniel Stufflebeam's CIPP (context-input-process-product) model is introduced and its development into a CIPP-SAPS (self-administrative-peer-student) model is pursued. (Author/CT)

  6. Teacher Evaluation, Performance-Related Pay, and Constructivist Instruction

    ERIC Educational Resources Information Center

    Liang, Guodong; Akiba, Motoko

    2015-01-01

    Using statewide longitudinal teacher survey data collected in 2009 and 2010, this study examined the characteristics of teacher evaluation used to determine performance-related pay (PRP), and the association between PRP and improvement in the practice of constructivist instruction. The study found that 10.9% of middle school mathematics teachers…

  7. Accounting for Exogenous Influences in Performance Evaluations of Teachers

    ERIC Educational Resources Information Center

    De Witte, Kristof; Rogge, Nicky

    2011-01-01

    Students' evaluations of teacher performance (SETs) are increasingly used by universities. However, SETs are controversial mainly due to two issues: (1) teachers value various aspects of excellent teaching differently, and (2) SETs should not be determined on exogenous influences. Therefore, this paper constructs SETs using a tailored version of…

  8. Counselor Competence, Performance Assessment, and Program Evaluation: Using Psychometric Instruments

    ERIC Educational Resources Information Center

    Tate, Kevin A.; Bloom, Margaret L.; Tassara, Marcel H.; Caperton, William

    2014-01-01

    Psychometric instruments have been underutilized by counselor educators in performance assessment and program evaluation efforts. As such, we conducted a review of the literature that revealed 41 instruments fit for such efforts. We described and critiqued these instruments along four dimensions--"Target Domain," "Format,"…

  9. Documenting Teacher Candidates' Professional Growth through Performance Evaluation

    ERIC Educational Resources Information Center

    Brown, Elizabeth Levine; Suh, Jennifer; Parsons, Seth A.; Parker, Audra K.; Ramirez, Erin M.

    2015-01-01

    In the United States, colleges of education are responding to demands for increased accountability. The purpose of this article is to describe one teacher education program's implementation of a performance evaluation tool during final internship that measures teacher candidates' development across four domains: Planning and Preparation,…

  10. Effects of Physical Attractiveness on Evaluation of Vocal Performance.

    ERIC Educational Resources Information Center

    Wapnick, Joel; Darrow, Alice Ann; Kovacs, Jolan; Dalrymple, Lucinda

    1997-01-01

    Studies whether physical attractiveness of singers affects judges' ratings of their vocal performances. Reveals that physical attractiveness does impact evaluation, that male raters were more severe than female raters, and that the ratings of undergraduate majors versus graduate students and professors combined were not differently affected by…

  11. Solid rocket booster performance evaluation model. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    1974-01-01

    This users manual for the solid rocket booster performance evaluation model (SRB-II) contains descriptions of the model, the program options, the required program inputs, the program output format and the program error messages. SRB-II is written in FORTRAN and is operational on both the IBM 370/155 and the MSFC UNIVAC 1108 computers.

  12. The Rasch Model for Evaluating Italian Student Performance

    ERIC Educational Resources Information Center

    Camminatiello, Ida; Gallo, Michele; Menini, Tullio

    2010-01-01

    In 1997 the Organisation for Economic Co-operation and Development (OECD) launched the OECD Programme for International Student Assessment (PISA) for collecting information about 15-year-old students in participating countries. Our study analyses the PISA 2006 cognitive test for evaluating the Italian student performance in mathematics, reading…
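
    For reference, the dichotomous Rasch model (the standard form; the record does not state which variant the authors fitted) gives the probability that student v answers item i correctly as

        P(X_{vi} = 1 \mid \theta_v, b_i) = \frac{\exp(\theta_v - b_i)}{1 + \exp(\theta_v - b_i)},

    where \theta_v is the student's ability and b_i the item difficulty; abilities and difficulties are estimated jointly from the response matrix.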

  13. 13 CFR 306.7 - Performance evaluations of University Centers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Performance evaluations of University Centers. 306.7 Section 306.7 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE TRAINING, RESEARCH AND TECHNICAL ASSISTANCE INVESTMENTS University Center...

  14. Granting Teachers the "Benefit of the Doubt" in Performance Evaluations

    ERIC Educational Resources Information Center

    Rogge, Nicky

    2011-01-01

    Purpose: This paper proposes a benefit of the doubt (BoD) approach to construct and analyse teacher effectiveness scores (i.e. SET scores). Design/methodology/approach: The BoD approach is related to data envelopment analysis (DEA), a linear programming tool for evaluating the relative efficiency performance of a set of similar units (e.g. firms,…
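
    As an illustration of the benefit-of-the-doubt idea, the sketch below computes a BoD composite score with a small linear program: each unit receives the weights most favourable to itself, subject to no unit scoring above 1 under those weights. This is a generic BoD formulation under invented data, not the paper's exact model, and scipy is assumed to be available.

        import numpy as np
        from scipy.optimize import linprog

        def bod_score(Y, k):
            """Benefit-of-the-doubt composite score for unit k.
            Y is an (n_units, n_indicators) array, higher = better.
            Weights maximize unit k's weighted score, subject to every
            unit scoring at most 1 under the same weights."""
            n, m = Y.shape
            res = linprog(-Y[k],                      # linprog minimizes, so negate
                          A_ub=Y, b_ub=np.ones(n),
                          bounds=[(0, None)] * m, method="highs")
            return -res.fun

        # Four hypothetical teachers rated on three SET items (invented data)
        Y = np.array([[0.80, 0.90, 0.70],
                      [0.60, 0.95, 0.80],
                      [0.90, 0.60, 0.85],
                      [0.50, 0.50, 0.60]])
        print([round(bod_score(Y, k), 3) for k in range(len(Y))])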

  15. Preparing To Evaluate Your Child Care Center's Performance.

    ERIC Educational Resources Information Center

    Ratekin, Cindy; Bess, Gary

    2003-01-01

    Discusses steps to building an administrative infrastructure to support high-quality child care, in particular the development of a comprehensive evaluation plan around which measurable objectives can later be assessed against performance. Describes background information needed, situational analysis, collecting current program information, and…

  16. An Agency Theory Perspective on Student Performance Evaluation

    ERIC Educational Resources Information Center

    Smith, Michael E.; Zsidisin, George A.; Adams, Laural L.

    2005-01-01

    The emphasis in recent research on the responsibility of college and university business instructors to prepare students for future employment underscores a need to refine the evaluation of student performance. In this article, an agency theory framework is used to understand the trade-offs that may be involved in the selection of various…

  17. A comprehensive evaluation of strip performance in multiple blood glucose monitoring systems.

    PubMed

    Katz, Laurence B; Macleod, Kirsty; Grady, Mike; Cameron, Hilary; Pfützner, Andreas; Setford, Steven

    2015-05-01

    Accurate self-monitoring of blood glucose is a key component of effective self-management of glycemic control. Accurate self-monitoring of blood glucose results are required for optimal insulin dosing and detection of hypoglycemia. However, blood glucose monitoring systems may be susceptible to error from test strip, user, environmental and pharmacological factors. This report evaluated 5 blood glucose monitoring systems that each use Verio glucose test strips for precision, effect of hematocrit and interferences in laboratory testing, and lay user and system accuracy in clinical testing according to the guidelines in ISO15197:2013(E). Performance of OneTouch® VerioVue™ met or exceeded standards described in ISO15197:2013 for precision, hematocrit performance and interference testing in a laboratory setting. Performance of OneTouch® Verio IQ™, OneTouch® Verio Pro™, OneTouch® Verio™, OneTouch® VerioVue™ and Omni Pod each met or exceeded accuracy standards for user performance and system accuracy in a clinical setting set forth in ISO15197:2013(E). PMID:25702769
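
    For orientation, the ISO 15197:2013 system-accuracy criterion is commonly summarized as requiring at least 95% of meter results to fall within ±15 mg/dL of the reference below 100 mg/dL and within ±15% at or above it. The sketch below checks that commonly cited criterion on paired readings; the thresholds come from that summary rather than from this record, and the data are invented.

        import numpy as np

        def iso15197_2013_pass(meter, reference):
            """Check the commonly cited ISO 15197:2013 accuracy criterion:
            >= 95% of meter results within +/-15 mg/dL of the reference when
            the reference is < 100 mg/dL, or within +/-15% otherwise."""
            meter = np.asarray(meter, float)
            reference = np.asarray(reference, float)
            tol = np.where(reference < 100.0, 15.0, 0.15 * reference)
            within = np.abs(meter - reference) <= tol
            return within.mean() >= 0.95, float(within.mean())

        # Illustrative paired readings in mg/dL, not data from the study
        ok, frac = iso15197_2013_pass([92, 110, 250, 180], [95, 118, 240, 170])
        print(ok, round(frac, 3))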

  18. Towards Reliable Evaluation of Anomaly-Based Intrusion Detection Performance

    NASA Technical Reports Server (NTRS)

    Viswanathan, Arun

    2012-01-01

    This report describes the results of research into the effects of environment-induced noise on the evaluation process for anomaly detectors in the cyber security domain. This research was conducted during a 10-week summer internship program from the 19th of August, 2012 to the 23rd of August, 2012 at the Jet Propulsion Laboratory in Pasadena, California. The research performed lies within the larger context of the Los Angeles Department of Water and Power (LADWP) Smart Grid cyber security project, a Department of Energy (DoE) funded effort involving the Jet Propulsion Laboratory, California Institute of Technology and the University of Southern California/Information Sciences Institute. The results of the present effort constitute an important contribution towards building more rigorous evaluation paradigms for anomaly-based intrusion detectors in complex cyber-physical systems such as the Smart Grid. Anomaly detection is a key strategy for cyber intrusion detection; it operates by identifying deviations from profiles of nominal behavior and is thus conceptually appealing for detecting "novel" attacks. Evaluating the performance of such a detector requires assessing: (a) how well it captures the model of nominal behavior, and (b) how well it detects attacks (deviations from normality). Current evaluation methods produce results that give insufficient insight into the operation of a detector, inevitably resulting in a poor characterization of a detector's performance. In this work, we first describe a preliminary taxonomy of key evaluation constructs that are necessary for establishing rigor in the evaluation regime of an anomaly detector. We then focus on clarifying the impact of the operational environment on the manifestation of attacks in monitored data. We show how dynamic and evolving environments can introduce high variability into the data stream, perturbing detector performance. Prior research has focused on understanding the impact of this

  19. Project Startup: Evaluating the Performance of Hydraulic Hybrid Refuse Vehicles

    SciTech Connect

    2015-09-01

    The Fleet Test and Evaluation Team at the National Renewable Energy Laboratory (NREL) is evaluating the in-service performance of 10 next-generation hydraulic hybrid refuse vehicles (HHVs), 8 previous-generation HHVs, and 8 comparable conventional diesel vehicles operated by Miami-Dade County's Public Works and Waste Management Department in southern Florida. The HHVs under study - Autocar E3 refuse trucks equipped with Parker Hannifin's RunWise Advanced Series Hybrid Drive systems - can recover as much as 70 percent of the energy typically lost during braking and reuse it to power the vehicle. NREL's evaluation will assess the performance of this technology in commercial operation and help Miami-Dade County determine the ideal routes for maximizing the fuel-saving potential of its HHVs.

  20. Evaluation of Eco-Efficiency and Performance of Retrofit Materials

    NASA Astrophysics Data System (ADS)

    Gopinath, Smitha; Rama Chandra Murthy, A.; Iyer, Nagesh R.; Kokila, S.

    2015-12-01

    In this work, three materials, namely Fiber Reinforced Polymer (FRP), ferrocement and Textile Reinforced Concrete (TRC), have been evaluated for their performance efficiency and eco-effectiveness for sustainable retrofitting applications. Investigations have been carried out for flexural strengthening of RC beams with FRP, ferrocement and TRC. It is observed that in the case of FRP, it is not possible to tailor the material according to design requirements, and most of the time the strengthened structure becomes overly stiff. Eco-effectiveness of these retrofitting materials has been evaluated by computing the embodied energy. It is observed that the amount of CO2 emitted by TRC is less compared to the other retrofit materials. Further, the performance point of retrofitted RC frames has been evaluated and a damage index has been calculated to identify the most effective retrofit material. It is concluded that, if an RC frame is retrofitted with FRP or TRC, it undergoes less damage compared to ferrocement.

  1. Resilient Plant Monitoring System: Design, Analysis, and Performance Evaluation

    SciTech Connect

    Humberto E. Garcia; Wen-Chiao Lin; Semyon M. Meerkov; Maruthi T. Ravichandran

    2013-12-01

    Resilient monitoring systems are sensor networks that degrade gracefully under malicious attacks on their sensors, causing them to project misleading information. The goal of this paper is to design, analyze, and evaluate the performance of a resilient monitoring system intended to monitor plant conditions (normal or anomalous). The architecture developed consists of four layers: data quality assessment, process variable assessment, plant condition assessment, and sensor network adaptation. Each of these layers is analyzed by either analytical or numerical tools, and the performance of the overall system is evaluated using simulations. The measure of resiliency of the resulting system is evaluated using Kullback Leibler divergence, and is shown to be sufficiently high in all scenarios considered.
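
    Since the abstract quantifies resiliency with the Kullback-Leibler divergence, a minimal sketch of the discrete form is given below; the two plant-condition distributions are invented, and the probability assessments actually compared in the paper are richer than this.

        import numpy as np

        def kl_divergence(p, q, eps=1e-12):
            """Discrete Kullback-Leibler divergence D(p || q) in nats."""
            p = np.asarray(p, float); q = np.asarray(q, float)
            p, q = p / p.sum(), q / q.sum()
            return float(np.sum(p * np.log((p + eps) / (q + eps))))

        # Illustrative: true condition distribution vs. the distribution
        # the (possibly attacked) monitoring system reports
        truth    = [0.95, 0.05]   # {normal, anomalous}
        assessed = [0.90, 0.10]
        print(round(kl_divergence(truth, assessed), 4))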

  2. Software for evaluation of EPR-dosimetry performance.

    PubMed

    Shishkina, E A; Timofeev, Yu S; Ivanov, D V

    2014-06-01

    Electron paramagnetic resonance (EPR) with tooth enamel is a method extensively used for retrospective external dosimetry. Different research groups apply different equipment, sample preparation procedures and spectrum processing algorithms for EPR dosimetry. A uniform algorithm for description and comparison of performances was designed and implemented in a new computer code. The aim of the paper is to introduce the new software 'EPR-dosimetry performance'. The computer code is a user-friendly tool for providing a full description of method-specific capabilities of EPR tooth dosimetry, from metrological characteristics to practical limitations in applications. The software designed for scientists and engineers has several applications, including support of method calibration by evaluation of calibration parameters, evaluation of critical value and detection limit for registration of radiation-induced signal amplitude, estimation of critical value and detection limit for dose evaluation, estimation of minimal detectable value for anthropogenic dose assessment and description of method uncertainty.

  3. Rapid, Sensitive, and Accurate Evaluation of Drug Resistant Mutant (NS5A-Y93H) Strain Frequency in Genotype 1b HCV by Invader Assay.

    PubMed

    Yoshimi, Satoshi; Ochi, Hidenori; Murakami, Eisuke; Uchida, Takuro; Kan, Hiromi; Akamatsu, Sakura; Hayes, C Nelson; Abe, Hiromi; Miki, Daiki; Hiraga, Nobuhiko; Imamura, Michio; Aikata, Hiroshi; Chayama, Kazuaki

    2015-01-01

    Daclatasvir and asunaprevir dual oral therapy is expected to achieve high sustained virological response (SVR) rates in patients with HCV genotype 1b infection. However, presence of the NS5A-Y93H substitution at baseline has been shown to be an independent predictor of treatment failure for this regimen. By using the Invader assay, we developed a system to rapidly and accurately detect the presence of mutant strains and evaluate the proportion of patients harboring a pre-treatment Y93H mutation. This assay system, consisting of nested PCR followed by Invader reaction with well-designed primers and probes, attained a high overall assay success rate of 98.9% among a total of 702 Japanese HCV genotype 1b patients. Even in serum samples with low HCV titers, more than half of the samples could be successfully assayed. Our assay system showed a better lower detection limit of Y93H proportion than using direct sequencing, and Y93H frequencies obtained by this method correlated well with those of deep-sequencing analysis (r = 0.85, P < 0.001). The proportion of the patients with the mutant strain estimated by this assay was 23.6% (164/694). Interestingly, patients with the Y93H mutant strain showed significantly lower ALT levels (p = 8.8 × 10^-4), higher serum HCV RNA levels (p = 4.3 × 10^-7), and lower HCC risk (p = 6.9 × 10^-3) than those with the wild type strain. Because the method is both sensitive and rapid, the NS5A-Y93H mutant strain detection system established in this study may provide important pre-treatment information valuable not only for treatment decisions but also for prediction of disease progression in HCV genotype 1b patients. PMID:26083687

  4. Analysis of Photovoltaic System Energy Performance Evaluation Method

    SciTech Connect

    Kurtz, S.; Newmiller, J.; Kimber, A.; Flottemesch, R.; Riley, E.; Dierauf, T.; McKee, J.; Krishnani, P.

    2013-11-01

    Documentation of the energy yield of a large photovoltaic (PV) system over a substantial period can be useful to measure a performance guarantee, as an assessment of the health of the system, for verification of a performance model to then be applied to a new system, or for a variety of other purposes. Although the measurement of this performance metric might appear to be straight forward, there are a number of subtleties associated with variations in weather and imperfect data collection that complicate the determination and data analysis. A performance assessment is most valuable when it is completed with a very low uncertainty and when the subtleties are systematically addressed, yet currently no standard exists to guide this process. This report summarizes a draft methodology for an Energy Performance Evaluation Method, the philosophy behind the draft method, and the lessons that were learned by implementing the method.

  5. Evaluating the Performance of Unmanned Ground Vehicle Water Detection

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo; Ivanov, Tonislav; Brennan, Shane

    2010-01-01

    Water detection is a critical perception requirement for unmanned ground vehicle (UGV) autonomous navigation over cross-country terrain. During the Robotics Collaborative Technology Alliances (RCTA) program, the Jet Propulsion Laboratory (JPL) developed a set of water detection algorithms that are used to detect, localize, and avoid water bodies large enough to be a hazard to a UGV. The JPL water detection software performs the detection and localization stages using a forward-looking stereo pair of color cameras. The 3D coordinates of water body surface points are then output to a UGV's autonomous mobility system, which is responsible for planning and executing safe paths. There are three primary methods for evaluating the performance of the water detection software. Evaluations can be performed in image space on the intermediate detection product, in map space on the final localized product, or during autonomous navigation to characterize the avoidance of a variety of water bodies. This paper describes a methodology for performing the first two types of water detection performance evaluations.
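
    For the image-space evaluation mentioned above, a common choice is pixel-wise precision and recall of the detection mask against hand-labeled ground truth; the sketch below illustrates that choice (the paper's exact image-space metrics may differ), with tiny invented masks.

        import numpy as np

        def pixel_scores(detected, ground_truth):
            """Pixel-wise precision/recall of a binary water-detection mask
            against a hand-labeled ground-truth mask (boolean arrays)."""
            detected = np.asarray(detected, bool)
            ground_truth = np.asarray(ground_truth, bool)
            tp = np.logical_and(detected, ground_truth).sum()
            fp = np.logical_and(detected, ~ground_truth).sum()
            fn = np.logical_and(~detected, ground_truth).sum()
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            return precision, recall

        # Tiny illustrative masks (1 = water pixel)
        det = np.array([[0, 1, 1], [0, 1, 0]])
        gt  = np.array([[0, 1, 1], [1, 1, 0]])
        print(pixel_scores(det, gt))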

  6. Evaluation of analytical performance based on partial order methodology.

    PubMed

    Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin

    2015-01-01

    Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze the analytical performance appropriately. Here partial order methodology can be helpful. Within the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially to simplify the relative comparisons of objects based on their data profiles (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present work, data as provided by laboratories through interlaboratory comparisons or proficiency testing are used as an illustrative example. However, the presented scheme is likewise applicable for comparison of analytical methods or simply as a tool for optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data provided, in order to evaluate the analytical performance taking into account all indicators simultaneously and thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation and the skewness, which simultaneously are used for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently, (2) a "distance" to the Reference laboratory and (3) a classification due to the concept of "peculiar points".
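
    A minimal sketch of the ordering step described above: each laboratory is represented by the profile (|bias from the reference value|, standard deviation, |skewness|), all treated as "smaller is better", and laboratory A is placed below B in the partial order only if A is at least as good on every indicator and strictly better on one. The profiles here are invented.

        def dominates(a, b):
            """True if profile a is at least as good as b on every indicator
            (smaller is better) and strictly better on at least one."""
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        # Profiles: (|bias from reference|, standard deviation, |skewness|)
        labs = {"Lab1": (0.02, 0.10, 0.3),
                "Lab2": (0.05, 0.12, 0.4),
                "Lab3": (0.01, 0.15, 0.2)}

        for a in labs:
            for b in labs:
                if a != b and dominates(labs[a], labs[b]):
                    print(f"{a} outperforms {b} on all indicators")
        # Lab1 and Lab3 remain incomparable: neither dominates the other.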

  7. Performance Evaluation and Parameter Identification on DROID III

    NASA Technical Reports Server (NTRS)

    Plumb, Julianna J.

    2011-01-01

    The DROID III project consisted of two main parts. The former, performance evaluation, focused on the performance characteristics of the aircraft such as lift to drag ratio, thrust required for level flight, and rate of climb. The latter, parameter identification, focused on finding the aerodynamic coefficients for the aircraft using a system that creates a mathematical model to match the flight data of doublet maneuvers and the aircraft s response. Both portions of the project called for flight testing and that data is now available on account of this project. The conclusion of the project is that the performance evaluation data is well-within desired standards but could be improved with a thrust model, and that parameter identification is still in need of more data processing but seems to produce reasonable results thus far.

  8. Evaluation of Process Performance for Sustainable Hard Machining

    NASA Astrophysics Data System (ADS)

    Rotella, Giovanna; Umbrello, Domenico; Dillon, Oscar W., Jr.; Jawahir, I. S.

    This paper aims to evaluate the sustainability performance of machining operation of through-hardening steel, AISI 52100, taking into account the impact of the material removal process in its various aspects. Experiments were performed for dry and cryogenic cutting conditions using chamfered cubic boron nitride (CBN) tool inserts at varying cutting conditions (cutting speed and feed rate). Cutting forces, mechanical power, tool wear, white layer thickness, surface roughness and residual stresses were investigated in order to evaluate the effects of extreme in-process cooling on the machined surface. The results indicate that cryogenic cooling has the potential to be used for surface integrity enhancement for improved product life and more sustainable functional performance.

  9. Instrumentation for Evaluating PV System Performance Losses from Snow

    SciTech Connect

    Marion, B.; Rodriguez, J.; Pruett, J.

    2009-01-01

    When designing a photovoltaic (PV) system for northern climates, the prospective installation should be evaluated with respect to the potentially detrimental effects of snow preventing solar radiation from reaching the PV cells. The extent to which snow impacts performance is difficult to determine because snow events also increase the uncertainty of the solar radiation measurement, and the presence of snow needs to be distinguished from other events that can affect performance. This paper describes two instruments useful for evaluating PV system performance losses from the presence of snow: (1) a pyranometer with a heater to prevent buildup of ice and snow, and (2) a digital camera for remote retrieval of images to determine the presence of snow on the PV array.

  10. Evaluation of Performance of Five Parallel Biological Water Proce

    NASA Technical Reports Server (NTRS)

    Vega, Leticia M.; Kerkhof, Lee; McGuinness, Lora; Pickering, Karen

    2004-01-01

    The objective of the work entitled Molecular Characterization of Eubacteria in a Biological Water Processor was to gain an understanding of the microbial diversity and species stability of the consortia that inhabit an anoxic bioreactor and to correlate those factors with functional performance, mechanical reliability, and stability. The evaluation was divided into four studies. During Study 1, replicate biological water processor (BWP) systems were operated to evaluate variability in the microbial diversity over time as a function of the initial consortia used for inoculation of the BWP reactors. Study 2 was designed to investigate the impact of an inoculum source on BWP performance. Study 3 was a modification of Study 2 where the impact of inoculum on BWP performance from inoculation until steady state operations was monitored. In Study 4, the reactors were divided into three different operational periods, based on the operational periods of the integrated water recovery test at the Johnson Space Center (JSC) in 2001.

  11. Towards Modeling Realistic Mobility for Performance Evaluations in MANET

    NASA Astrophysics Data System (ADS)

    Aravind, Alex; Tahir, Hassan

    Simulation modeling plays a crucial role in conducting research on complex dynamic systems like mobile ad hoc networks, and is often the only practical way. Simulation has been successfully applied in MANET research for more than two decades. In several recent studies, it is observed that the credibility of simulation results in the field has decreased while the use of simulation has steadily increased. Part of this credibility crisis has been attributed to the simulation of the mobility of the nodes in the system. Mobility has a fundamental influence on the behavior and performance of mobile ad hoc networks. Accurate modeling and knowledge of the mobility of the nodes in the system is not only helpful but also essential for the understanding and interpretation of the performance of the system under study. Several ideas, mostly in isolation, have been proposed in the literature to infuse realism into the mobility of nodes. In this paper, we attempt a holistic analysis of creating realistic mobility models and then demonstrate the creation and analysis of realistic mobility models using a software tool we have developed. Using our software tool, the desired mobility of the nodes in the system can be specified, generated, analyzed, and then the trace can be exported to be used in performance studies of proposed algorithms or systems.
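
    The abstract's software tool is not reproduced here, but as a concrete baseline the sketch below generates a per-node trace under the widely used random-waypoint model, one of the simple synthetic models whose limited realism motivates work like this; the area, speed range and pause time are arbitrary.

        import random

        def random_waypoint(area=(1000.0, 1000.0), speed=(1.0, 10.0),
                            pause=5.0, duration=300.0, step=1.0):
            """Generate (t, x, y) samples for one node under the random-waypoint
            model: pick a uniform destination and speed, travel there in a
            straight line, pause, and repeat until the duration is exhausted."""
            x = random.uniform(0, area[0])
            y = random.uniform(0, area[1])
            t, trace = 0.0, [(0.0, x, y)]
            while t < duration:
                dx = random.uniform(0, area[0])
                dy = random.uniform(0, area[1])
                v = random.uniform(*speed)
                travel = ((dx - x) ** 2 + (dy - y) ** 2) ** 0.5 / v
                steps = max(1, int(travel / step))
                for i in range(1, steps + 1):
                    trace.append((t + i * travel / steps,
                                  x + (dx - x) * i / steps,
                                  y + (dy - y) * i / steps))
                t += travel + pause
                x, y = dx, dy
            return trace

        print(random_waypoint(duration=30.0)[:3])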

  12. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require the use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing network of SUN SPARCs with PVM (Parallel Virtual Machine), a software system for linking clusters of machines. Second, a set of three basic applications was selected. The applications consist of a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the restricting factor on performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps) which will allow us to extend our study to newer applications, performance metrics, and configurations.
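
    For reference, the speedup metric mentioned above, together with the efficiency and the Amdahl bound that explains why coarse-grain parallelism fares better when communication overhead behaves like a serial fraction (standard definitions, not specific to this project):

        S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p}, \qquad S(p) \le \frac{1}{f + (1 - f)/p},

    where T_1 is the elapsed time on one machine, T_p the elapsed time on p machines, and f the fraction of the work that is serial or spent in communication.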

  13. Using Weibull Distribution Analysis to Evaluate ALARA Performance

    SciTech Connect

    E. L. Frome, J. P. Watkins, and D. A. Hagemeyer

    2009-10-01

    As Low as Reasonably Achievable (ALARA) is the underlying principle for protecting nuclear workers from potential health outcomes related to occupational radiation exposure. Radiation protection performance is currently evaluated by measures such as collective dose and average measurable dose, which do not indicate ALARA performance. The purpose of this work is to show how statistical modeling of individual doses using the Weibull distribution can provide objective supplemental performance indicators for comparing ALARA implementation among sites and for insights into ALARA practices within a site. Maximum likelihood methods were employed to estimate the Weibull shape and scale parameters used for performance indicators. The shape parameter reflects the effectiveness of maximizing the number of workers receiving lower doses and is represented as the slope of the fitted line on a Weibull probability plot. Additional performance indicators derived from the model parameters include the 99th percentile and the exceedance fraction. When grouping sites by collective total effective dose equivalent (TEDE) and ranking by 99th percentile with confidence intervals, differences in performance among sites can be readily identified. Applying this methodology will enable more efficient and complete evaluation of the effectiveness of ALARA implementation.
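
    A hedged sketch of the fitting step with scipy: a maximum-likelihood two-parameter Weibull fit to individual doses, followed by the 99th percentile and the exceedance fraction above an arbitrary threshold. The doses and the threshold are synthetic stand-ins for the pooled site data described above.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        doses = rng.weibull(0.8, size=500) * 50.0   # synthetic individual doses

        # Maximum-likelihood fit of a two-parameter Weibull (location fixed at 0)
        shape, loc, scale = stats.weibull_min.fit(doses, floc=0)

        p99 = stats.weibull_min.ppf(0.99, shape, loc=loc, scale=scale)  # 99th percentile
        threshold = 100.0                                               # illustrative dose level
        exceedance = stats.weibull_min.sf(threshold, shape, loc=loc, scale=scale)

        print(f"shape={shape:.2f} scale={scale:.1f} "
              f"99th percentile={p99:.1f} P(dose>{threshold:.0f})={exceedance:.3f}")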

  14. Evaluating Teachers More Strategically: Using Performance Results to Streamline Evaluation Systems. Issue Brief

    ERIC Educational Resources Information Center

    White, Taylor

    2014-01-01

    Teacher evaluation systems introduced by states and school systems in the past several years have focused attention on improving the performance of public school teachers, but they have been cost- and time-intensive, placing a significant burden on states' and districts' resources. In Tennessee, for example, trained evaluators conducted nearly…

  15. Blind Source Parameters for Performance Evaluation of Despeckling Filters.

    PubMed

    Biradar, Nagashettappa; Dewal, M L; Rohit, ManojKumar; Gowre, Sanjaykumar; Gundge, Yogesh

    2016-01-01

    The speckle noise is inherent to transthoracic echocardiographic images. A standard noise-free reference echocardiographic image does not exist. The evaluation of filters based on the traditional parameters such as peak signal-to-noise ratio, mean square error, and structural similarity index may not reflect the true filter performance on echocardiographic images. Therefore, the performance of despeckling can be evaluated using blind assessment metrics like the speckle suppression index, speckle suppression and mean preservation index (SMPI), and beta metric. The need for noise-free reference image is overcome using these three parameters. This paper presents a comprehensive analysis and evaluation of eleven types of despeckling filters for echocardiographic images in terms of blind and traditional performance parameters along with clinical validation. The noise is effectively suppressed using the logarithmic neighborhood shrinkage (NeighShrink) embedded with Stein's unbiased risk estimation (SURE). The SMPI is three times more effective compared to the wavelet based generalized likelihood estimation approach. The quantitative evaluation and clinical validation reveal that the filters such as the nonlocal mean, posterior sampling based Bayesian estimation, hybrid median, and probabilistic patch based filters are acceptable whereas median, anisotropic diffusion, fuzzy, and Ripplet nonlinear approximation filters have limited applications for echocardiographic images. PMID:27298618
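
    As an example of the blind metrics named above, the sketch below computes the speckle suppression index in its commonly used form: the coefficient of variation of the filtered image divided by that of the noisy image, with values below 1 indicating suppression. The paper's exact formulations of SSI, SMPI and the beta metric may differ, and the speckled image here is synthetic.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def speckle_suppression_index(noisy, filtered):
            """SSI in its common form: (std/mean of filtered) / (std/mean of noisy);
            a value below 1 means the filter suppressed speckle."""
            noisy = np.asarray(noisy, float)
            filtered = np.asarray(filtered, float)
            return (filtered.std() / filtered.mean()) / (noisy.std() / noisy.mean())

        # Illustrative: multiplicative speckle on a flat image, crude 3x3 mean filter
        rng = np.random.default_rng(1)
        clean = np.full((64, 64), 100.0)
        noisy = clean * rng.gamma(4.0, 1.0 / 4.0, clean.shape)
        print(round(speckle_suppression_index(noisy, uniform_filter(noisy, 3)), 3))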

  16. Blind Source Parameters for Performance Evaluation of Despeckling Filters

    PubMed Central

    Biradar, Nagashettappa; Dewal, M. L.; Rohit, ManojKumar; Gowre, Sanjaykumar; Gundge, Yogesh

    2016-01-01

    The speckle noise is inherent to transthoracic echocardiographic images. A standard noise-free reference echocardiographic image does not exist. The evaluation of filters based on the traditional parameters such as peak signal-to-noise ratio, mean square error, and structural similarity index may not reflect the true filter performance on echocardiographic images. Therefore, the performance of despeckling can be evaluated using blind assessment metrics like the speckle suppression index, speckle suppression and mean preservation index (SMPI), and beta metric. The need for noise-free reference image is overcome using these three parameters. This paper presents a comprehensive analysis and evaluation of eleven types of despeckling filters for echocardiographic images in terms of blind and traditional performance parameters along with clinical validation. The noise is effectively suppressed using the logarithmic neighborhood shrinkage (NeighShrink) embedded with Stein's unbiased risk estimation (SURE). The SMPI is three times more effective compared to the wavelet based generalized likelihood estimation approach. The quantitative evaluation and clinical validation reveal that the filters such as the nonlocal mean, posterior sampling based Bayesian estimation, hybrid median, and probabilistic patch based filters are acceptable whereas median, anisotropic diffusion, fuzzy, and Ripplet nonlinear approximation filters have limited applications for echocardiographic images. PMID:27298618

  17. Market behavior and performance of different strategy evaluation schemes

    NASA Astrophysics Data System (ADS)

    Baek, Yongjoo; Lee, Sang Hoon; Jeong, Hawoong

    2010-08-01

    Strategy evaluation schemes are a crucial factor in any agent-based market model, as they determine the agents’ strategy preferences and consequently their behavioral pattern. This study investigates how the strategy evaluation schemes adopted by agents affect their performance in conjunction with the market circumstances. We observe the performance of three strategy evaluation schemes, the history-dependent wealth game, the trend-opposing minority game, and the trend-following majority game, in a stock market where the price is exogenously determined. The price is either directly adopted from the real stock market indices or generated with a Markov chain of order ≤ 2. Each scheme’s success is quantified by average wealth accumulated by the traders equipped with the scheme. The wealth game, as it learns from the history, shows relatively good performance unless the market is highly unpredictable. The majority game is successful in a trendy market dominated by long periods of sustained price increase or decrease. On the other hand, the minority game is suitable for a market with persistent zigzag price patterns. We also discuss the consequence of implementing finite memory in the scoring processes of strategies. Our findings suggest under which market circumstances each evaluation scheme is appropriate for modeling the behavior of real market traders.
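
    A toy illustration of how differently the three schemes can fare on the same exogenous price series: a trend-following (majority-game-like) rule, a trend-opposing (minority-game-like) rule, and a crude history-dependent rule, each scored by accumulated wealth. The game definitions and scoring in the paper are considerably richer; the price moves below are random and purely illustrative.

        import random

        random.seed(2)
        # Exogenous price moves, +1 or -1 (a stand-in for real index returns)
        moves = [random.choice([1, -1]) for _ in range(1000)]

        def wealth(decide):
            """Accumulated wealth of a trader whose position at time t is
            decide(history up to t); the per-step gain is position * next move."""
            return sum(decide(moves[:t]) * moves[t] for t in range(1, len(moves)))

        trend_following = lambda h: h[-1]                           # majority-game flavour
        trend_opposing  = lambda h: -h[-1]                          # minority-game flavour
        history_based   = lambda h: 1 if sum(h[-5:]) >= 0 else -1   # wealth-game flavour

        for name, rule in [("follow", trend_following),
                           ("oppose", trend_opposing),
                           ("history", history_based)]:
            print(name, wealth(rule))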

  18. Accurately measuring MPI broadcasts in a computational grid

    SciTech Connect

    Karonis N T; de Supinski, B R

    1999-05-06

    An MPI library's implementation of broadcast communication can significantly affect the performance of applications built with that library. In order to choose between similar implementations or to evaluate available libraries, accurate measurements of broadcast performance are required. As we demonstrate, existing methods for measuring broadcast performance are either inaccurate or inadequate. Fortunately, we have designed an accurate method for measuring broadcast performance, even in a challenging grid environment. Measuring broadcast performance is not easy. Simply sending one broadcast after another allows them to proceed through the network concurrently, thus resulting in inaccurate per-broadcast timings. Existing methods either fail to eliminate this pipelining effect or eliminate it by introducing overheads that are as difficult to measure as the performance of the broadcast itself. This problem becomes even more challenging in grid environments. Latencies along different links can vary significantly. Thus, an algorithm's performance is difficult to predict from its communication pattern. Even when accurate prediction is possible, the pattern is often unknown. Our method introduces a measurable overhead to eliminate the pipelining effect, regardless of variations in link latencies. Accurate measurements of broadcast performance make it possible to choose between different available implementations. Also, accurate and complete measurements could guide use of a given implementation to improve application performance. These choices will become even more important as grid-enabled MPI libraries [6, 7] become more common, since bad choices are likely to cost significantly more in grid environments. In short, the distributed processing community needs accurate, succinct and complete measurements of collective communications performance. Since successive collective communications can often proceed concurrently, accurately measuring them is difficult. Some benchmarks use knowledge of the communication algorithm to predict the
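
    The pipelining problem described above can be seen with a few lines of mpi4py: timing back-to-back broadcasts underestimates the per-broadcast cost because they overlap, while separating them with barriers folds the barrier's own, hard-to-isolate cost into the measurement. This sketch only illustrates the problem; it is not the report's measurement method, and the message size and repetition count are arbitrary.

        from mpi4py import MPI
        import numpy as np

        comm, rank = MPI.COMM_WORLD, MPI.COMM_WORLD.Get_rank()
        buf = np.zeros(1 << 16, dtype="d")          # 64K doubles to broadcast
        reps = 100

        # Naive: successive broadcasts can overlap in the network (pipelining),
        # so the per-broadcast time is underestimated.
        comm.Barrier(); t0 = MPI.Wtime()
        for _ in range(reps):
            comm.Bcast(buf, root=0)
        naive = (MPI.Wtime() - t0) / reps

        # Barrier-separated: no pipelining, but the barrier's own cost is now
        # folded into every measurement and is itself hard to isolate.
        comm.Barrier(); t0 = MPI.Wtime()
        for _ in range(reps):
            comm.Bcast(buf, root=0)
            comm.Barrier()
        separated = (MPI.Wtime() - t0) / reps

        if rank == 0:
            print(f"naive {naive*1e6:.1f} us/bcast, "
                  f"barrier-separated {separated*1e6:.1f} us/bcast")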

  19. Roles and methods of performance evaluation of hospital academic leadership.

    PubMed

    Zhou, Ying; Yuan, Huikang; Li, Yang; Zhao, Xia; Yi, Lihua

    2016-01-01

    The rapidly advancing implementation of public hospital reform urgently requires the identification and classification of a pool of exceptional medical specialists, corresponding with incentives to attract and retain them, providing a nucleus of distinguished expertise to ensure public hospital preeminence. This paper examines the significance of academic leadership, from a strategic management perspective, including various tools, methods and mechanisms used in the theory and practice of performance evaluation, and employed in the selection, training and appointment of academic leaders. Objective methods of assessing leadership performance are also provided for reference. PMID:27061556

  20. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  1. Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Giroux, E.; Roth, H.; Yin, D.

    2004-05-01

    Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on evaluation of organic aerosol model performance. One assumption is about the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other is about the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its impact on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that utilizing different enthalpy of vaporization values causes changes in the shapes of IAY curves and in the response of SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling
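
    The assumed enthalpy of vaporization enters SOA formation through the temperature dependence of the gas-particle partitioning coefficient; the commonly used Clausius-Clapeyron adjustment (not necessarily the exact form implemented in this model) is

        K_p(T) = K_p(T^{*}) \, \frac{T}{T^{*}} \, \exp\!\left[ \frac{\Delta H_{\mathrm{vap}}}{R} \left( \frac{1}{T} - \frac{1}{T^{*}} \right) \right],

    so a larger assumed \Delta H_{\mathrm{vap}} makes the partitioning coefficients, and hence the instantaneous aerosol yields, respond more strongly to temperature changes.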

  2. Performance evaluation of the JPL interim digital SAR processor

    NASA Technical Reports Server (NTRS)

    Wu, C.; Barkan, B.; Curlander, J.; Jin, M.; Pang, S.

    1983-01-01

    The performance of the Interim Digital SAR Processor (IDP) was evaluated. The IDP processor was originally developed for experimental processing of digital SEASAT SAR data. One phase of the system upgrade, which features parallel processing in three peripheral array processors, automated estimation of Doppler parameters, and unsupervised image pixel location determination and registration, was executed. The method used to compensate for the target range curvature effect was improved. A four-point interpolation scheme is implemented to replace the nearest-neighbor scheme used in the original IDP. The processor still maintains its fast throughput speed. The current performance and capability of the processing modes now available on the IDP system are updated.

  3. Preliminary flight evaluation of an engine performance optimization algorithm

    NASA Technical Reports Server (NTRS)

    Lambert, H. H.; Gilyard, G. B.; Chisholm, J. D.; Kerr, L. J.

    1991-01-01

    A performance seeking control (PSC) algorithm has undergone initial flight test evaluation in subsonic operation of a PW 1128 engined F-15. This algorithm is designed to optimize the quasi-steady performance of an engine for three primary modes: (1) minimum fuel consumption; (2) minimum fan turbine inlet temperature (FTIT); and (3) maximum thrust. The flight test results have verified a thrust specific fuel consumption reduction of 1 pct., up to 100 R decreases in FTIT, and increases of as much as 12 pct. in maximum thrust. PSC technology promises to be of value in next generation tactical and transport aircraft.

  4. Evaluation of a high performance fixed-ratio traction drive

    NASA Technical Reports Server (NTRS)

    Loewenthal, S. H.; Anderson, N. E.; Rohn, D. A.

    1980-01-01

    The results of a test program to evaluate a compact, high performance, fixed ratio traction drive are presented. This transmission, the Nasvytis Multiroller Traction Drive, is a fixed ratio, single stage planetary with two rows of stepped planet rollers. Two versions of the drive were parametrically tested back-to-back at speeds to 73,000 rpm and power levels to 180 kW (240 hp). Parametric tests were also conducted with the Nasvytis drive retrofitted to an automotive gas turbine engine. The drives exhibited good performance, with a nominal peak efficiency of 94 to 96 percent and a maximum speed loss due to creep of approximately 3.5 percent.

  5. Roles and methods of performance evaluation of hospital academic leadership.

    PubMed

    Zhou, Ying; Yuan, Huikang; Li, Yang; Zhao, Xia; Yi, Lihua

    2016-01-01

    The rapidly advancing implementation of public hospital reform urgently requires the identification and classification of a pool of exceptional medical specialists, corresponding with incentives to attract and retain them, providing a nucleus of distinguished expertise to ensure public hospital preeminence. This paper examines the significance of academic leadership, from a strategic management perspective, including various tools, methods and mechanisms used in the theory and practice of performance evaluation, and employed in the selection, training and appointment of academic leaders. Objective methods of assessing leadership performance are also provided for reference.

  6. Evaluation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in Pyrus pyrifolia using different tissue samples and seasonal conditions.

    PubMed

    Imai, Tsuyoshi; Ubi, Benjamin E; Saito, Takanori; Moriguchi, Takaya

    2014-01-01

    We have evaluated suitable reference genes for real time (RT)-quantitative PCR (qPCR) analysis in Japanese pear (Pyrus pyrifolia). We tested most frequently used genes in the literature such as β-Tubulin, Histone H3, Actin, Elongation factor-1α, Glyceraldehyde-3-phosphate dehydrogenase, together with newly added genes Annexin, SAND and TIP41. A total of 17 primer combinations for these eight genes were evaluated using cDNAs synthesized from 16 tissue samples from four groups, namely: flower bud, flower organ, fruit flesh and fruit skin. Gene expression stabilities were analyzed using geNorm and NormFinder software packages or by ΔCt method. geNorm analysis indicated three best performing genes as being sufficient for reliable normalization of RT-qPCR data. Suitable reference genes were different among sample groups, suggesting the importance of validation of gene expression stability of reference genes in the samples of interest. Ranking of stability was basically similar between geNorm and NormFinder, suggesting usefulness of these programs based on different algorithms. ΔCt method suggested somewhat different results in some groups such as flower organ or fruit skin; though the overall results were in good correlation with geNorm or NormFinder. Gene expression of two cold-inducible genes PpCBF2 and PpCBF4 were quantified using the three most and the three least stable reference genes suggested by geNorm. Although normalized quantities were different between them, the relative quantities within a group of samples were similar even when the least stable reference genes were used. Our data suggested that using the geometric mean value of three reference genes for normalization is quite a reliable approach to evaluating gene expression by RT-qPCR. We propose that the initial evaluation of gene expression stability by ΔCt method, and subsequent evaluation by geNorm or NormFinder for limited number of superior gene candidates will be a practical way of finding out
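
    A minimal sketch of the ΔCt comparison mentioned above: for every pair of candidate genes the standard deviation of their Ct difference across samples is computed, and each gene is ranked by its mean pairwise variability (lower = more stable). The Ct values below are invented; normalization would then use the geometric mean of the relative quantities of the best-ranked genes.

        import numpy as np

        # Ct values: one list per candidate reference gene, one entry per sample
        ct = {"Actin":   [20.1, 20.5, 21.0, 20.3],
              "TIP41":   [24.2, 24.4, 25.1, 24.3],
              "Histone": [22.0, 23.1, 22.2, 24.0]}   # invented values

        def delta_ct_stability(ct):
            """Mean standard deviation of pairwise delta-Ct values per gene;
            a lower score indicates a more stable reference gene."""
            genes = list(ct)
            score = {}
            for g in genes:
                sds = [np.std(np.array(ct[g]) - np.array(ct[h]), ddof=1)
                       for h in genes if h != g]
                score[g] = float(np.mean(sds))
            return dict(sorted(score.items(), key=lambda kv: kv[1]))

        print(delta_ct_stability(ct))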

  7. MiRduplexSVM: A High-Performing MiRNA-Duplex Prediction and Evaluation Methodology.

    PubMed

    Karathanasis, Nestoras; Tsamardinos, Ioannis; Poirazi, Panayiota

    2015-01-01

    We address the problem of predicting the position of a miRNA duplex on a microRNA hairpin via the development and application of a novel SVM-based methodology. Our method combines a unique problem representation and an unbiased optimization protocol to learn from mirBase19.0 an accurate predictive model, termed MiRduplexSVM. This is the first model that provides precise information about all four ends of the miRNA duplex. We show that (a) our method outperforms four state-of-the-art tools, namely MaturePred, MiRPara, MatureBayes, MiRdup as well as a Simple Geometric Locator when applied on the same training datasets employed for each tool and evaluated on a common blind test set. (b) In all comparisons, MiRduplexSVM shows superior performance, achieving up to a 60% increase in prediction accuracy for mammalian hairpins and can generalize very well on plant hairpins, without any special optimization. (c) The tool has a number of important applications such as the ability to accurately predict the miRNA or the miRNA*, given the opposite strand of a duplex. Its performance on this task is superior to the 2nts overhang rule commonly used in computational studies and similar to that of a comparative genomic approach, without the need for prior knowledge or the complexity of performing multiple alignments. Finally, it is able to evaluate novel, potential miRNAs found either computationally or experimentally. In relation with recent confidence evaluation methods used in miRBase, MiRduplexSVM was successful in identifying high confidence potential miRNAs.

  9. [Evaluation of surgical performance: how to do it?].

    PubMed

    Fragata, José

    2006-01-01

    Assessment of surgical performance is a must for every surgical practice nowadays and can be done using scientific methods imported mostly from the quality control tools that have long been in use in industry. Surgical performance comprises several dimensions, including clinical activity (with mortality and morbidity as end points), academic activities, research and, increasingly, efficiency. Stable long-term results (efficacy), reducing error (safety) and meeting patient expectations (patient satisfaction) are among the other performance components. This paper focuses on the precise definitions of mortality and morbidity related to surgical activities and on the tools to evaluate patient complexity and assess preoperative risk. Some graphic representations are suggested to compare performance profiles of surgeons and to define individual performance profiles. Strong emphasis is put on preoperative risk assessment and its crucial role in interpreting divergent surgical results. Where risk assessment is not possible or is unavailable, observed/expected ratios (O/E) for a given endpoint, be it mortality, length of stay or morbidity, must be established and routinely used to reference results and to identify performance outliers. Morbidity is pointed out as a most valuable performance indicator in surgery because it is sensitive and encompasses efficiency, safety and quality at large.
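
    A minimal sketch of the observed/expected ratio recommended above, with the expected count taken as the sum of preoperative risk-model probabilities over the cohort; the outcome and risk values are purely illustrative.

        def observed_expected_ratio(outcomes, predicted_risks):
            """O/E ratio for a binary endpoint (e.g. in-hospital mortality).
            outcomes: 0/1 per patient; predicted_risks: preoperative model
            probabilities per patient. O/E > 1 suggests worse-than-expected results."""
            return sum(outcomes) / sum(predicted_risks)

        # Illustrative five-patient cohort
        print(round(observed_expected_ratio([0, 1, 0, 0, 1],
                                            [0.05, 0.40, 0.10, 0.08, 0.30]), 2))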

  10. Solid rocket booster performance evaluation model. Volume 1: Engineering description

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The space shuttle solid rocket booster performance evaluation model (SRB-II) is made up of analytical and functional simulation techniques linked together so that a single pass through the model will predict the performance of the propulsion elements of a space shuttle solid rocket booster. The available options allow the user to predict static test performance, predict nominal and off nominal flight performance, and reconstruct actual flight and static test performance. Options selected by the user are dependent on the data available. These can include data derived from theoretical analysis, small scale motor test data, large motor test data and motor configuration data. The user has several options for output format that include print, cards, tape and plots. Output includes all major performance parameters (Isp, thrust, flowrate, mass accounting and operating pressures) as a function of time as well as calculated single point performance data. The engineering description of SRB-II discusses the engineering and programming fundamentals used, the function of each module, and the limitations of each module.
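
    For reference, the textbook relations behind the performance parameters listed above (thrust, flowrate, Isp and the mass accounting they imply); these are standard definitions, not the SRB-II equations themselves:

        F = \dot{m} \, I_{sp} \, g_0, \qquad I_{sp} = \frac{F}{\dot{m} \, g_0}, \qquad I_{\mathrm{tot}} = \int_0^{t_b} F(t) \, dt,

    with F the thrust, \dot{m} the propellant mass flow rate, g_0 standard gravity and t_b the burn time.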

  11. Performance evaluation on vibration control of MR landing gear

    NASA Astrophysics Data System (ADS)

    Lee, D. Y.; Nam, Y. J.; Yamane, R.; Park, M. K.

    2009-02-01

    This paper is concerned with the applicability of the developed MR damper to the landing gear system for attenuating undesired shock and vibration in the landing and taxiing phases. First of all, the experimental model of the MR damper is derived based on the results of performance evaluations. Next, a simplified skyhook controller, which is one of the most straightforward yet effective approaches for improving ride comfort in vehicles with active suspensions, is formulated. Then, the vibration control performance of the landing gear system using the MR damper is theoretically evaluated in the landing phase of the aircraft. A series of simulation analyses shows that the proposed MR damper with the skyhook controller is effective for suppressing undesired vibration of the aircraft body. Finally, the effectiveness of the simulation results is additionally verified via the HILS (hardware-in-the-loop simulation) method.
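
    A hedged sketch of the simplified on-off skyhook logic referred to above: the damper is switched to a high setting only when the dissipative force it can produce has the same sign as the force an ideal damper anchored to an inertial "sky" reference would apply, and to a soft setting otherwise. The gains and the on-off form are illustrative; the paper's controller and MR damper model are not reproduced here.

        def skyhook_force(v_body, v_wheel, c_sky=3000.0, c_min=300.0):
            """Semi-active skyhook law in its simplified on-off form.
            v_body:  absolute velocity of the sprung mass (m/s)
            v_wheel: absolute velocity of the unsprung mass (m/s)
            Returns the commanded damper force (N)."""
            v_rel = v_body - v_wheel
            if v_body * v_rel > 0:        # damper can mimic the ideal skyhook damper
                return -c_sky * v_body
            return -c_min * v_rel         # otherwise fall back to light damping

        # Illustrative call: body moving upward faster than the wheel
        print(skyhook_force(0.4, 0.1))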

  12. A Note for Missile Autopilot Performance Evaluation Test

    NASA Astrophysics Data System (ADS)

    Eguchi, Hirofumi

    The essential benefit of hardware-in-the-loop (HWIL) simulation is that the performance of the autopilot system is evaluated realistically, without modeling error, by using actual hardware such as seeker systems, autopilot systems and servo equipment. The most important requirement in an HWIL simulation test is to set the homing seeker at the 3-axis gimbal center of the flight motion table. However, for various reasons such as the length of the homing seeker, the structure of the flight motion table and the shape of the attachments, this requirement cannot always be satisfied. In this paper, the effect of this position error on guidance and control system performance is analyzed and evaluated.

  13. Evaluating performance of container terminal operation using simulation

    NASA Astrophysics Data System (ADS)

    Nawawi, Mohd Kamal Mohd; Jamil, Fadhilah Che; Hamzah, Firdaus Mohamad

    2015-05-01

    A container terminal is a facility where containers are transshipped from one mode of transport to another. Congestion leads to a decrease in customer satisfaction. This study applies simulation with the main objective of modeling the current operation and evaluating the performance of the container terminal. The performance measures used to evaluate the container terminal model are the average waiting time in the queue, the average processing time at the berth, the number of vessels entering the berth and resource utilization. Simulation proved to be a suitable technique for this study, and the results from the simulation model helped address the congestion problem observed at the container terminal.
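
    As a rough illustration of how the listed measures can be collected, the sketch below (Python) simulates a single FIFO berth with assumed exponential inter-arrival and service times; it is not the study's calibrated terminal model.

      import random

      random.seed(1)
      SIM_HOURS = 24 * 30          # one month of operation (assumed horizon)
      MEAN_INTERARRIVAL = 6.0      # hours between vessel arrivals (assumed)
      MEAN_SERVICE = 5.0           # berth handling time per vessel (assumed)

      t, berth_free_at = 0.0, 0.0
      waits, services = [], []
      while t < SIM_HOURS:
          t += random.expovariate(1.0 / MEAN_INTERARRIVAL)   # next vessel arrives
          start = max(t, berth_free_at)                      # wait if the berth is occupied
          service = random.expovariate(1.0 / MEAN_SERVICE)
          waits.append(start - t)
          services.append(service)
          berth_free_at = start + service

      print(f"vessels handled:          {len(waits)}")
      print(f"average waiting time (h): {sum(waits) / len(waits):.2f}")
      print(f"average berth time (h):   {sum(services) / len(services):.2f}")
      print(f"berth utilization:        {sum(services) / berth_free_at:.1%}")   # approximate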

  14. Performance evaluation of advanced industrial SPECT system with diverging collimator.

    PubMed

    Park, Jang Guen; Jung, Sung-Hee; Kim, Jong Bum; Moon, Jinho; Yeom, Yeon Soo; Kim, Chan Hyeong

    2014-12-01

    An advanced industrial SPECT system with a 12-fold-array diverging collimator was developed for flow visualization in industrial reactors and was discussed in a previous study. The present paper describes performance evaluation of the SPECT system under both static- and dynamic-flow conditions. Under static conditions, the movement of the radiotracer inside the test reactor was compared with that of a color tracer (blue ink) captured with a high-speed camera. The reconstructed images obtained with the radiotracer and the SPECT system showed fairly good agreement with the video frames of the color tracer obtained with the camera. Based on the results of the performance evaluation, it is concluded that the SPECT system is suitable for investigation and visualization of flows in industrial flow reactors.

  15. The market behavior and performance of different strategy evaluation schemes

    NASA Astrophysics Data System (ADS)

    Baek, Yongjoo; Lee, Sang Hoon; Jeong, Hawoong

    2010-03-01

    We observe the performance of three strategy evaluation schemes, namely the history-dependent wealth game, the trend-opposing minority game, and the trend-following majority game, in a stock market where the price is exogenously determined. The price is either directly adopted from real stock market indices or generated with a Markov chain of order ≤ 2. Each scheme's success is quantified by the average wealth accumulated by the traders equipped with that scheme. The wealth game, as it learns from the history, generally shows good performance unless the market is highly unpredictable. The majority game is relatively successful in a trendy market dominated by long periods of sustained price increases or decreases. On the other hand, the minority game is suited to a market with persistent zig-zag price patterns. These observations agree with our intuition and support the viability of the wealth game as a strategy evaluation scheme in typical markets.
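
    A toy illustration (Python) of the trend-following versus trend-opposing distinction on an exogenous price series; the one-unit positions and synthetic prices are simplifications, not the paper's game definitions or wealth accounting.

      def wealth(prices, follow_trend=True):
          w = 0.0
          for t in range(1, len(prices) - 1):
              last_move = prices[t] - prices[t - 1]
              position = 1 if (last_move >= 0) == follow_trend else -1   # long or short one unit
              w += position * (prices[t + 1] - prices[t])                # settle at the next price
          return w

      trendy = [100 + 0.5 * t for t in range(50)]                 # sustained rise
      zigzag = [100 + (1 if t % 2 else -1) for t in range(50)]    # persistent zig-zag
      for name, series in [("trendy market", trendy), ("zig-zag market", zigzag)]:
          print(name, "| trend-following:", round(wealth(series, True), 2),
                "| trend-opposing:", round(wealth(series, False), 2))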

  16. Performance evaluation of advanced industrial SPECT system with diverging collimator.

    PubMed

    Park, Jang Guen; Jung, Sung-Hee; Kim, Jong Bum; Moon, Jinho; Yeom, Yeon Soo; Kim, Chan Hyeong

    2014-12-01

    An advanced industrial SPECT system with a 12-fold-array diverging collimator was developed for flow visualization in industrial reactors and was discussed in a previous study. The present paper describes performance evaluation of the SPECT system under both static- and dynamic-flow conditions. Under static conditions, the movement of the radiotracer inside the test reactor was compared with that of a color tracer (blue ink) captured with a high-speed camera. The reconstructed images obtained with the radiotracer and the SPECT system showed fairly good agreement with the video frames of the color tracer obtained with the camera. Based on the results of the performance evaluation, it is concluded that the SPECT system is suitable for investigation and visualization of flows in industrial flow reactors. PMID:25169132

  17. An evaluation of the nuclear fuel performance code BISON

    SciTech Connect

    Perez, D. M.; Williamson, R. L.; Novascone, S. R.; Larson, T. K.; Hales, J. D.; Spencer, B. W.; Pastore, G.

    2013-07-01

    BISON is a modern finite-element based nuclear fuel performance code that has been under development at the Idaho National Laboratory (USA) since 2009. The code is applicable to both steady and transient fuel behavior and is used to analyze either 2D axisymmetric or 3D geometries. BISON has been applied to a variety of fuel forms including LWR fuel rods, TRISO-coated fuel particles, and metallic fuel in both rod and plate geometries. Code validation is currently in progress, principally by comparison to instrumented LWR fuel rods and other well known fuel performance codes. Results from several assessment cases are reported, with emphasis on fuel centerline temperatures at various stages of fuel life, fission gas release, and clad deformation during pellet clad mechanical interaction (PCMI). BISON comparisons to fuel centerline temperature measurements are very good at beginning of life and reasonable at high burnup. Although limited to date, fission gas release comparisons are very good. Comparisons of rod diameter following significant power ramping are also good and demonstrate BISON's unique ability to model discrete pellet behavior and accurately predict clad ridging from PCMI. (authors)

  18. Performance evaluation of an all-fiber image-reject homodyne coherent Doppler wind lidar

    NASA Astrophysics Data System (ADS)

    Abari, C. F.; Pedersen, A. T.; Dellwik, E.; Mann, J.

    2015-10-01

    The main purpose of this study is to evaluate the near-zero wind velocity measurement performance of two separate 1.5 μm all-fiber coherent Doppler lidars (CDLs). The performance characterization is carried out through the presentation of the results from two separate atmospheric field campaigns. In one campaign, a recently developed continuous wave (CW) CDL benefiting from an image-reject front-end was deployed. The other campaign utilized a different CW CDL, benefiting from a heterodyne receiver with intermediate-frequency (IF) sampling. In both field campaigns the results are compared against a sonic anemometer, as the reference instrument. The measurements clearly show that the image-reject architecture results in more accurate measurements of radial wind velocities close to zero. Close-to-zero velocities are usually associated with the vertical component of the wind and are important to characterize.
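
    A toy illustration (Python) of why complex (I/Q) detection resolves the sign of a small Doppler shift while a real-valued signal folds +f and -f onto the same magnitude spectrum; the sample rate and shift are arbitrary and this is not a model of either instrument's processing chain.

      import numpy as np

      fs, n = 100e6, 4096                       # sample rate and record length (assumed)
      f_doppler = -0.5e6                        # small negative Doppler shift (assumed)
      t = np.arange(n) / fs
      iq = np.exp(2j * np.pi * f_doppler * t)   # complex output of an image-reject (I/Q) front end
      real = np.cos(2 * np.pi * f_doppler * t)  # real-valued output without image rejection

      freqs = np.fft.fftfreq(n, d=1 / fs)
      print("I/Q spectrum peak:  %+.2f MHz (sign preserved)" %
            (freqs[np.argmax(np.abs(np.fft.fft(iq)))] / 1e6))
      print("real spectrum peak: %+.2f MHz (mirror peak at the opposite sign, so the sign is ambiguous)" %
            (freqs[np.argmax(np.abs(np.fft.fft(real)))] / 1e6))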

  19. Evaluation of a high performance, fixed-ratio, traction drive

    NASA Technical Reports Server (NTRS)

    Loewenthal, S. H.; Anderson, N. E.; Rohn, D. A.

    1983-01-01

    A test program was initiated to evaluate the key operational and performance factors associated with the Nasvytis multiroller concept. Two sets of Nasvytis drives, each of slightly different geometry, were parametrically tested on a back-to-back test stand. Initial results from these tests are reported. One of these units was later retrofitted to the power turbine of an automotive gas turbine engine and dynamometer tested.

  20. Performance Evaluation of a Permanent-Magnet Induction Generator

    NASA Astrophysics Data System (ADS)

    Fukami, Tadashi; Yokoi, Masahiro; Kanamaru, Yasunori; Miyamoto, Toshio

    A permanent-magnet induction generator (PMIG) is a special induction machine self-excited from the inside of the squirrel-cage rotor by a permanent-magnet rotor (PM rotor). In order to evaluate the practical value of the PMIG, its steady-state performance is analyzed theoretically and experimentally. As a result, it was found that the PMIG exhibits good power factor and efficiency compared to a general-purpose induction generator (GPIG) of the same size.

  1. Faculty performance evaluation: the CIPP-SAPS model.

    PubMed

    Mitcham, M

    1981-11-01

    The issues of faculty performance evaluation for allied health professionals are addressed. Daniel Stufflebeam's CIPP (context-input-process-product) model is introduced and its development into a CIPP-SAPS (self-administrative-peer-student) model is pursued. Data sources for the SAPS portion of the model are discussed. A suggestion for the use of the CIPP-SAPS model within a teaching contract plan is explored.

  2. Using Fuzzy Logic for Performance Evaluation in Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Khedkar, Pratap S.

    1992-01-01

    Current reinforcement learning algorithms require long training periods which generally limit their applicability to small size problems. A new architecture is described which uses fuzzy rules to initialize its two neural networks: a neural network for performance evaluation and another for action selection. This architecture is applied to control of dynamic systems and it is demonstrated that it is possible to start with an approximate prior knowledge and learn to refine it through experiments using reinforcement learning.

  3. Performance evaluation of artificial intelligence classifiers for the medical domain.

    PubMed

    Smith, A E; Nugent, C D; McClean, S I

    2002-01-01

    The application of artificial intelligence systems is still not widespread in the medical field; however, there is an increasing need for such systems to handle the surfeit of information available. One drawback to their implementation is the lack of criteria or guidelines for evaluating these systems. This is the primary issue in their acceptability to clinicians, who require them for decision support and therefore need evidence that these systems meet the special safety-critical requirements of the domain. This paper presents evidence that the most prevalent form of intelligent system, neural networks, is generally not being evaluated rigorously regarding classification precision. A taxonomy of the types of evaluation tests that can be carried out to gauge the inherent performance of the outputs of intelligent systems has been assembled, and the results are presented in a clear and concise form that should be applicable to all intelligent classifiers for medicine.

  4. Oil Bypass Filter Technology Performance Evaluation - First Quarterly Report

    SciTech Connect

    Zirker, L.R.; Francfort, J.E.

    2003-01-31

    This report details the initial activities to evaluate the performance of the oil bypass filter technology being tested by the Idaho National Engineering and Environmental Laboratory (INEEL) for the U.S. Department of Energy's FreedomCAR & Vehicle Technologies Program. Eight full-size, four-cycle diesel-engine buses used to transport INEEL employees on various routes have been equipped with oil bypass systems from the puraDYN Corporation. Each bus averages about 60,000 miles a year. The evaluation includes an oil analysis regime to monitor the presence of necessary additives in the oil and to detect undesirable contaminants. Very preliminary economic analysis suggests that the oil bypass system can reduce life-cycle costs. As the evaluation continues and oil avoidance cost savings are quantified, it is estimated that the bypass system economics may prove increasingly favorable, given the anticipated savings in operational costs, reduced oil use, and avoided waste oil.

  5. Oil Bypass Filter Technology Performance Evaluation - January 2003 Quarterly Report

    SciTech Connect

    Laurence R. Zirker; James E. Francfort

    2003-01-01

    This report details the initial activities to evaluate the performance of the oil bypass filter technology being tested by the Idaho National Engineering and Environmental Laboratory (INEEL) for the U.S. Department of Energy's FreedomCAR & Vehicle Technologies Program. Eight full-size, four-cycle diesel-engine buses used to transport INEEL employees on various routes have been equipped with oil bypass systems from the puraDYN Corporation. Each bus averages about 60,000 miles a year. The evaluation includes an oil analysis regime to monitor the presence of necessary additives in the oil and to detect undesirable contaminants. Very preliminary economic analysis suggests that the oil bypass system can reduce life-cycle costs. As the evaluation continues and oil avoidance cost savings are quantified, it is estimated that the bypass system economics may prove increasingly favorable, given the anticipated savings in operational costs, reduced oil use, and avoided waste oil.

  6. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix and tooth thickness, pitch is one of the most important parameters in an involute gear measurement evaluation. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of the CMM, are suited to these kinds of gear measurements. The Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
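
    A toy numerical illustration (Python) of the closure principle mentioned above: measuring the artifact in every rotated position lets the fixed machine errors and the rotating artifact deviations be separated by averaging. The error values are invented, and this is not the NMIJ/AIST or PTB procedure.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 12                                         # number of pitch positions
      artifact = rng.normal(0.0, 0.5, n)             # true artifact deviations (um), zero-mean
      artifact -= artifact.mean()
      machine = rng.normal(0.0, 0.3, n)              # true machine errors (um), zero-mean
      machine -= machine.mean()

      # measurement at rotation r, machine position j: machine error stays put, artifact rotates
      meas = np.array([[machine[j] + artifact[(j + r) % n] for j in range(n)]
                       for r in range(n)])

      machine_est = meas.mean(axis=0)                # averaging over rotations cancels the artifact
      artifact_est = np.array([[meas[r, (k - r) % n] for k in range(n)]
                               for r in range(n)]).mean(axis=0)

      print("machine error recovery RMS (um):      %.3f" % np.sqrt(np.mean((machine_est - machine) ** 2)))
      print("artifact deviation recovery RMS (um): %.3f" % np.sqrt(np.mean((artifact_est - artifact) ** 2)))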

  7. Performance evaluation of similarity measures for dense multimodal stereovision

    NASA Astrophysics Data System (ADS)

    Yaman, Mustafa; Kalkan, Sinan

    2016-05-01

    Multimodal imaging systems have recently been drawing attention in fields such as medical imaging, remote sensing, and video surveillance systems. In such systems, estimating depth has become possible due to the promising progress of multimodal matching techniques. We perform a systematic performance evaluation of similarity measures frequently used in the literature for dense multimodal stereovision. The evaluated measures include mutual information (MI), sum of squared distances, normalized cross-correlation, census transform, local self-similarity (LSS) as well as descriptors adopted to multimodal settings, like scale invariant feature transform (SIFT), speeded-up robust features (SURF), histogram of oriented gradients (HOG), binary robust independent elementary features, and fast retina keypoint (FREAK). We evaluate the measures over datasets we generated, compiled, and provided as a benchmark and compare the performances using the Winner Takes All method. The datasets are (1) synthetically modified four popular pairs from the Middlebury Stereo Dataset (namely, Tsukuba, Venus, Cones, and Teddy) and (2) our own multimodal image pairs acquired using the infrared and the electro-optical cameras of a Kinect device. The results show that MI and HOG provide promising results for multimodal imagery, and FREAK, SURF, SIFT, and LSS can be considered as alternatives depending on the multimodality level and the computational complexity requirements of the intended application.

  8. Simple method for performance evaluation of multistage rockets

    NASA Astrophysics Data System (ADS)

    Pontani, Mauro; Teofilatto, Paolo

    2014-01-01

    Multistage rockets are commonly employed to place spacecraft and satellites in their operational orbits. Performance evaluation of multistage rockets is aimed at defining the maximum payload mass at orbit injection, for specified structural, propulsive, and aerodynamic data of the launch vehicle. This work proposes a simple method for a fast performance evaluation of multistage rockets. The technique at hand is based on three steps: (i) the flight-path angle at each stage separation is guessed, (ii) the spacecraft velocity is maximized at the first and second stage separation, and (iii) for the last stage the thrust direction is obtained through the particle swarm optimization technique, in conjunction with the use of the Euler-Lagrange equations and the Pontryagin minimum principle. The coast duration at the second stage separation is optimized as well. The method at hand is extremely simple and easy-to-implement, but nevertheless it proves to be capable of yielding near-optimal ascending trajectories for a multistage launch vehicle with realistic structural, propulsive, and aerodynamic characteristics. The solutions found with the technique under consideration can be employed either for a rapid evaluation of the multistage rocket performance or as guesses for more refined optimization algorithms.
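
    As background to the velocity maximization in step (ii), the sketch below (Python) accumulates the ideal, loss-free velocity increments of the stages via the Tsiolkovsky equation; the stage data are invented and this is not the paper's optimization method.

      from math import log

      G0 = 9.80665  # m/s^2

      def stage_delta_v(isp_s, m_full, m_empty):
          """Ideal velocity increment of one stage (vacuum, gravity and drag losses ignored)."""
          return isp_s * G0 * log(m_full / m_empty)

      # (Isp [s], propellant mass [kg], dry mass [kg]) per stage, first stage to last (invented)
      stages = [(300.0, 150e3, 12e3), (320.0, 40e3, 4e3), (450.0, 10e3, 1.2e3)]
      payload = 1.5e3  # kg

      v = 0.0
      for i, (isp, prop, dry) in enumerate(stages):
          m_above = payload + sum(p + d for _, p, d in stages[i + 1:])   # everything riding on this stage
          v += stage_delta_v(isp, m_above + prop + dry, m_above + dry)
          print(f"after stage {i + 1} separation: ideal velocity = {v / 1000:.2f} km/s")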

  9. Performance Evaluation of the ISS Water Processor Multifiltration Beds

    NASA Technical Reports Server (NTRS)

    Bowman, Elizabeth M.; Carter, Layne; Wilson, Mark; Cole, Harold; Orozco, Nicole; Snowdon, Doug

    2012-01-01

    The ISS Water Processor Assembly (WPA) produces potable water from a waste stream containing humidity condensate and urine distillate. The primary treatment process is achieved in the Multifiltration Bed, which includes adsorbent media and ion exchange resin for the removal of dissolved organic and inorganic contaminants. The first Multifiltration Bed was replaced on ISS in July 2010 after initial indication of inorganic breakthrough. This bed was returned to ground in July 2011 for an engineering investigation. The water resident in the bed was analyzed for various parameters to evaluate adsorbent loading, performance of the ion exchange resin, microbial activity, and generation of leachates from the ion exchange resin. Portions of the adsorbent media and ion exchange resin were sampled and subsequently desorbed to identify the primary contaminants removed at various points in the bed. In addition, an unused Multifiltration Bed was evaluated after two years in storage to assess the generation of leachates during storage. This assessment was performed to evaluate the possibility that these leachates are impacting performance of the Catalytic Reactor located downstream of the Multifiltration Bed. The results of these investigations and implications to the operation of the WPA on ISS are documented in this paper.

  10. Performance evaluation of fiber optic components in nuclear plant environments

    SciTech Connect

    Hastings, M.C.; Miller, D.W.; James, R.W.

    1996-03-01

    Over the past several years, the Electric Power Research Institute (EPRI) has funded several projects to evaluate the performance of commercially available fiber optic cables, connective devices, light sources, and light detectors under environmental conditions representative of normal and abnormal nuclear power plant operating conditions. Future projects are planned to evaluate commercially available fiber optic sensors and to install and evaluate performance of instrument loops comprised of fiber optic components in operating nuclear power plant applications. The objective of this research is to assess the viability of fiber optic components for replacement and upgrade of nuclear power plant instrument systems. Fiber optic instrument channels offer many potential advantages: commercial availability of parts and technical support, small physical size and weight, immunity to electromagnetic interference, relatively low power requirements, and high bandwidth capabilities. As existing nuclear power plants continue to replace and upgrade I&C systems, fiber optics will offer a low-cost alternative technology which also provides additional information processing capabilities. Results to date indicate that fiber optics are a viable technology for many nuclear applications, both inside and outside of containments. This work is funded and managed under the Operations & Maintenance Cost Control research target of EPRI's Nuclear Power Group. The work is being performed by faculty and students in the Mechanical and Nuclear Engineering Departments and the staff of the Nuclear Reactor Laboratory of the Ohio State University.

  11. Performance Evaluation and Benchmarking of Next Intelligent Systems

    SciTech Connect

    del Pobil, Angel; Madhavan, Raj; Bonsignorio, Fabio

    2009-10-01

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  12. Factory performance evaluations of engineering controls for asphalt paving equipment.

    PubMed

    Mead, K R; Mickelsen, R L; Brumagin, T E

    1999-08-01

    This article describes a unique analytical tool to assist the development and implementation of engineering controls for the asphalt paving industry. Through an agreement with the U.S. Department of Transportation, the National Asphalt Pavement Association (NAPA) requested that the National Institute for Occupational Safety and Health (NIOSH) assist U.S. manufacturers of asphalt paving equipment with the development and evaluation of engineering controls. The intended function of the controls was to capture and remove asphalt emissions generated during the paving process. NIOSH engineers developed a protocol to evaluate prototype engineering controls using qualitative smoke and quantitative tracer gas methods. Video recordings documented each prototype's ability to capture theatrical smoke under "managed" indoor conditions. Sulfur hexafluoride (SF6), released as a tracer gas, enabled quantification of the capture efficiency and exhaust flow rate for each prototype. During indoor evaluations, individual prototypes' capture efficiencies averaged from 7 percent to 100 percent. Outdoor evaluations resulted in average capture efficiencies ranging from 81 percent down to 1 percent as wind gusts disrupted the ability of the controls to capture the SF6. The tracer gas testing protocol successfully revealed deficiencies in prototype designs which otherwise may have gone undetected. It also showed that the combination of a good enclosure and higher exhaust ventilation rate provided the highest capture efficiency. Some manufacturers used the stationary evaluation results to compare performances among multiple hood designs. All the manufacturers identified areas where their prototype designs were susceptible to cross-draft interferences. These stationary performance evaluations proved to be a valuable method to identify strengths and weaknesses in individual designs and subsequently optimize those designs prior to expensive analytical field studies. PMID:10462852
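
    A hedged sketch (Python) of the usual tracer-gas arithmetic behind the capture efficiencies reported above: the SF6 recovered in the exhaust duct divided by the metered release rate. The values are invented and this is not NIOSH's exact data-reduction procedure.

      def capture_efficiency(duct_concentration_ppm, exhaust_flow_m3_min, release_rate_ml_min):
          """All SF6 quantities are treated as volumes at the same conditions."""
          captured_ml_min = duct_concentration_ppm * 1e-6 * exhaust_flow_m3_min * 1e6   # m3/min -> mL/min
          return captured_ml_min / release_rate_ml_min

      eff = capture_efficiency(duct_concentration_ppm=4.2,
                               exhaust_flow_m3_min=60.0,
                               release_rate_ml_min=300.0)
      print(f"capture efficiency: {eff:.0%}")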

  13. Performance Evaluation and Labeling Comprehension of a New Blood Glucose Monitoring System with Integrated Information Management

    PubMed Central

    List, Susan M; Starks, Nykole; Baum, John; Greene, Carmine; Pardo, Scott; Parkes, Joan L; Schachner, Holly C; Cuddihy, Robert

    2011-01-01

    Background This study evaluated performance and product labeling of CONTOUR® USB, a new blood glucose monitoring system (BGMS) with integrated diabetes management software and a universal serial bus (USB) port, in the hands of untrained lay users and health care professionals (HCPs). Method Subjects and HCPs tested subject's finger stick capillary blood in parallel using CONTOUR USB meters; deep finger stick blood was tested on a Yellow Springs Instruments (YSI) glucose analyzer for reference. Duplicate results by both subjects and HCPs were obtained to assess system precision. System accuracy was assessed according to International Organization for Standardization (ISO) 15197:2003 guidelines [within ±15 mg/dl of mean YSI results (samples <75 mg/dl) and ±20% (samples ≥75 mg/dl)]. Clinical accuracy was determined by Parkes error grid analysis. Subject labeling comprehension was assessed by HCP ratings of subject proficiency. Key system features and ease-of-use were evaluated by subject questionnaires. Results All subjects who completed the study (N = 74) successfully performed blood glucose measurements, connected the meter to a laptop computer, and used key features of the system. The system was accurate; 98.6% (146/148) of subject results and 96.6% (143/148) of HCP results exceeded ISO 15197:2003 criteria. All subject and HCP results were clinically accurate (97.3%; zone A) or associated with benign errors (2.7%; zone B). The majority of subjects rated features of the BGMS as “very good” or “excellent.” Conclusions CONTOUR USB exceeded ISO 15197:2003 system performance criteria in the hands of untrained lay users. Subjects understood the product labeling, found the system easy to use, and successfully performed blood glucose testing. PMID:22027308
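
    A minimal sketch (Python) of the ISO 15197:2003 accuracy check described above: a meter result passes if it is within ±15 mg/dl of the YSI reference when the reference is below 75 mg/dl, or within ±20% otherwise. The example (meter, reference) pairs are invented.

      def within_iso_15197_2003(meter, reference):
          if reference < 75.0:
              return abs(meter - reference) <= 15.0
          return abs(meter - reference) <= 0.20 * reference

      pairs = [(68, 62), (110, 118), (250, 199), (45, 70)]   # (meter, YSI reference) in mg/dl
      passed = sum(within_iso_15197_2003(m, r) for m, r in pairs)
      print(f"{passed}/{len(pairs)} results meet ISO 15197:2003 criteria ({passed / len(pairs):.1%})")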

  14. Solar power plant performance evaluation: simulation and experimental validation

    NASA Astrophysics Data System (ADS)

    Natsheh, E. M.; Albarbar, A.

    2012-05-01

    In this work the performance of a solar power plant is evaluated based on a developed model comprising a photovoltaic array, battery storage, a controller and converters. The model is implemented using the MATLAB/SIMULINK software package. A perturb and observe (P&O) algorithm is used to maximize the generated power through maximum power point tracker (MPPT) implementation. The outcomes of the developed model are validated and supported by a case study carried out using an operational 28.8 kW grid-connected solar power plant located in central Manchester. Measurements were taken over a 21-month period using hourly average irradiance and cell temperature. It was found that system degradation could be clearly monitored by determining the residual (the difference) between the output power predicted by the model and the actual measured power. It was found that the residual exceeded the healthy threshold, 1.7 kW, due to heavy snow in Manchester last winter. More importantly, the developed performance evaluation technique could be adopted to detect other causes of PV panel performance degradation, such as shading and dirt. Repeatability and reliability of the developed system were validated during this period. Good agreement was achieved between the theoretical simulation and the real-time measurements taken from the online grid-connected solar power plant.
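
    A minimal perturb-and-observe sketch (Python) of the MPPT logic referenced above: nudge the operating voltage, keep the perturbation direction if the output power rose, reverse it otherwise. The P-V characteristic and step size are stand-ins, not the plant model described in the paper.

      def pv_power(v):
          return max(0.0, v * (8.0 - 0.02 * v ** 2))   # toy P-V curve with a single maximum

      def perturb_and_observe(v0=10.0, step=0.5, iterations=40):
          v, direction = v0, +1
          p_prev = pv_power(v)
          for _ in range(iterations):
              v += direction * step
              p = pv_power(v)
              if p < p_prev:               # power dropped, so reverse the perturbation
                  direction = -direction
              p_prev = p
          return v, p_prev

      v_mpp, p_mpp = perturb_and_observe()
      print(f"operating point settles near V = {v_mpp:.1f} V, P = {p_mpp:.1f} W")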

  15. Fiscal year 1998 Battelle performance evaluation agreement revision 1

    SciTech Connect

    DAVIS, T.L.

    1998-10-22

    Fiscal Year 1998 represents the second full year utilizing a results-oriented, performance-based contract. This document describes the critical outcomes, objectives, performance indicators, expected levels of performance, and the basis for the evaluation of the Contractor's performance for the period October 1, 1997 through September 30, 1998, as required by the Articles entitled Use of Objective Standards of Performance, Self Assessment and Performance Evaluation and Critical Outcomes Review of the Contract DE-AC08-76RLO1830. In partnership with the Contractor and other key customers, the Department of Energy (DOE) Richland Operations Office has defined six critical outcomes that serve as the core for the Contractor's performance evaluation. The Contractor also utilizes these outcomes as a basis for overall management of the Laboratory. As stated above, six critical outcomes have been established for FY 1998. These outcomes are based on the following needs identified by DOE-HQ, RL and other customers of the Laboratory. Our Energy Research customer desires relevant, quality and cost-effective science. Our Environmental Management customer wants technology developed, demonstrated, and deployed to solve environmental cleanup issues. To ensure the diversification and viability of the Laboratory as a national asset, RL and HQ alike want to increase the science and technical contributions of PNNL related to its core capabilities. RL wants improved leadership and management, cost-effective operations, and maintenance of a work environment that fosters innovative thinking and high morale. RL and HQ alike desire compliance with environment, safety and health (ES&H) standards and disciplined conduct of operations for protection of the worker, the environment, and the public. As with all of Hanford, DOE expects the Laboratory to contribute to the economic development of the Tri-Cities community and the region, to build a new local economy that is less reliant on the Hanford mission.

  16. Performance evaluation of two emerging media processors: VIRAM and imagine

    SciTech Connect

    Oliker, Leonid; Duell, Jason; Narayanan, Manikandan; Chatterji, Sourav

    2003-01-01

    This work presents two emerging media microprocessors, VIRAM and Imagine, and compares the implementation strategies and performance results of these unique architectures. VIRAM is a complete system on a chip which uses PIM technology to combine vector processing with embedded DRAM. Imagine is a programmable streaming architecture with a specialized memory hierarchy designed for computationally intensive data-parallel codes. First, we present a simple and effective approach for understanding and optimizing vector/stream applications. Performance results are then presented from a number of multimedia benchmarks and a computationally intensive scientific kernel. We explore the complex interactions between programming paradigms, the architectural support at the ISA level, and the underlying microarchitecture of these two systems. Our long-term goal is to evaluate leading media microprocessors as possible building blocks for future high performance systems.

  17. ECG compression: evaluation of FFT, DCT, and WT performance.

    PubMed

    GholamHosseini, H; Nazeran, H; Moran, B

    1998-12-01

    This work investigates a set of ECG data compression schemes to compare their performance in compressing and preparing ECG signals for automatic cardiac arrhythmia classification. These schemes are based on transform methods such as the fast Fourier transform (FFT), the discrete cosine transform (DCT), the wavelet transform (WT), and their combinations. Each transform is applied to a pre-selected data segment from the MIT-BIH database and compression is then performed in the new domain. These transform methods constitute an important class of ECG compression techniques. The WT proved the most efficient method and the best candidate for further improvement. A compression ratio of 7.98 to 1 was achieved with a percent root-mean-square difference (PRD) of 0.25%, indicating that the wavelet compression technique offers the best performance among the evaluated methods.
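
    A sketch (Python) of the two figures of merit quoted above, the compression ratio (CR) and the percent root-mean-square difference (PRD) between an original segment and its reconstruction; the signals below are synthetic placeholders rather than MIT-BIH records.

      import numpy as np

      def compression_ratio(original_bits, compressed_bits):
          return original_bits / compressed_bits

      def prd(original, reconstructed):
          original = np.asarray(original, dtype=float)
          reconstructed = np.asarray(reconstructed, dtype=float)
          return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2) / np.sum(original ** 2))

      t = np.linspace(0.0, 1.0, 360)                                      # ~1 s at 360 Hz
      ecg = np.sin(2 * np.pi * 5 * t) + 0.2 * np.sin(2 * np.pi * 25 * t)  # stand-in for an ECG segment
      recon = ecg + 0.002 * np.random.default_rng(0).standard_normal(t.size)

      print(f"CR  = {compression_ratio(original_bits=11 * 360, compressed_bits=496):.2f}:1")
      print(f"PRD = {prd(ecg, recon):.2f}%")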

  18. Performance evaluation of the time delay digital tanlock loop architectures

    NASA Astrophysics Data System (ADS)

    Al-Kharji Al-Ali, Omar; Anani, Nader; Al-Qutayri, Mahmoud; Al-Araji, Saleh; Ponnapalli, Prasad

    2016-01-01

    This article presents the architectures, theoretical analyses and test results of modified time-delay digital tanlock loop (TDTL) systems. The modifications to the original TDTL architecture were introduced to overcome some of its limitations and to enhance the overall performance of the resulting systems. The limitations addressed in this article include the non-linearity of the phase detector, the restricted width of the locking range and the overall system acquisition speed. Each of the modified architectures was tested by subjecting the system to sudden positive and negative frequency steps and comparing its response with that of the original TDTL. In addition, the performance of all the architectures was evaluated under both noise-free and noisy environments. Extensive simulation results using MATLAB/SIMULINK demonstrate that the new architectures overcome the limitations they address, and the overall results confirm significant improvements in performance compared to the conventional TDTL system.

  19. High-definition television evaluation for remote handling task performance

    SciTech Connect

    Fujita, Y.; Omori, E.; Hayashi, S.; Draper, J.V.; Herndon, J.N.

    1986-01-01

    This paper describes experiments designed to evaluate the impact of HDTV on the performance of typical remote tasks. The experiments described in this paper compared the performance of four operators using HDTV with their performance while using other television systems. The experiments included four television systems: (1) high-definition color television, (2) high-definition monochromatic television, (3) standard-resolution monochromatic television, and (4) standard-resolution stereoscopic monochromatic television. The stereo system accomplished stereoscopy by displaying two cross-polarized images, one reflected by a half-silvered mirror and one seen through the mirror. Observers wore a pair of glasses with cross-polarized lenses so that the left eye received only the view from the left camera and the right eye received only the view from the right camera.

  20. High-definition television evaluation for remote handling task performance

    NASA Astrophysics Data System (ADS)

    Fujita, Y.; Omori, E.; Hayashi, S.; Draper, J. V.; Herndon, J. N.

    Described are experiments designed to evaluate the impact of HDTV (High-Definition Television) on the performance of typical remote tasks. The experiments described in this paper compared the performance of four operators using HDTV with their performance while using other television systems. The experiments included four television systems: (1) high-definition color television, (2) high-definition monochromatic television, (3) standard-resolution monochromatic television, and (4) standard-resolution stereoscopic monochromatic television. The stereo system accomplished stereoscopy by displaying two cross-polarized images, one reflected by a half-silvered mirror and one seen through the mirror. Observers wore spectacles with cross-polarized lenses so that the left eye received only the view from the left camera and the right eye received only the view from the right camera.

  1. Performance evaluation of the SX-6 vector architecture forscientific computations

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan Carter; Shalf,John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri,Jahed; Van der Wijngaart, Rob

    2005-01-01

    The growing gap between sustained and peak performance for scientific applications is a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to reduce this gap for many computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX-6 vector processor, and compares it against the cache-based IBM Power3 and Power4 superscalar architectures, across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines many low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks. Finally, we evaluate the performance of several scientific computing codes. Overall results demonstrate that the SX-6 achieves high performance on a large fraction of our application suite and often significantly outperforms the cache-based architectures. However, certain classes of applications are not easily amenable to vectorization and would require extensive algorithm and implementation reengineering to utilize the SX-6 effectively.

  2. Measurement-based performance evaluation technique for high-performance computers

    NASA Technical Reports Server (NTRS)

    Sharma, S.; Natarajan, C.; Iyer, R. K.

    1993-01-01

    A measurement-based performance evaluation technique has been used to characterize the OS performance of Cedar, a hierarchical shared-memory multiprocessor system. Thirteen OS performance meters were used to capture the operating system activities for compute-bound workloads. Three representative applications from the Perfect Benchmark Suite were used to measure the OS performance in a dedicated system and in multiprogrammed workloads. It was found that 13-23 percent of the total execution time on a dedicated system was spent in executing OS-related activities. Under multiprogramming, 12-14 percent of the total execution time was used by the OS. The impact of multiprogramming on the operating system performance meters was also measured.

  3. Using simulation to evaluate the performance of resilience strategies and process failures

    SciTech Connect

    Levy, Scott N.; Topp, Bryan Embry; Arnold, Dorian C.; Ferreira, Kurt Brian; Widener, Patrick; Hoefler, Torsten

    2014-01-01

    Fault-tolerance has been identified as a major challenge for future extreme-scale systems. Current predictions suggest that, as systems grow in size, failures will occur more frequently. Because increases in failure frequency reduce the performance and scalability of these systems, significant effort has been devoted to developing and refining resilience mechanisms to mitigate the impact of failures. However, effective evaluation of these mechanisms has been challenging. Current systems are smaller and have significantly different architectural features (e.g., interconnect, persistent storage) than we expect to see in next-generation systems. To overcome these challenges, we propose the use of simulation. Simulation has been shown to be an effective tool for investigating performance characteristics of applications on future systems. In this work, we: identify the set of system characteristics that are necessary for accurate performance prediction of resilience mechanisms for HPC systems and applications; demonstrate how these system characteristics can be incorporated into an existing large-scale simulator; and evaluate the predictive performance of our modified simulator. We also describe how we were able to optimize the simulator for large temporal and spatial scales, allowing the simulator to run 4x faster and use over 100x less memory.

  4. Development and evaluation of a computer program to grade student performance on peripheral blood smears

    NASA Astrophysics Data System (ADS)

    Lehman, Donald Clifford

    Today's medical laboratories are dealing with cost containment health care policies and unfilled laboratory positions. Because there may be fewer experienced clinical laboratory scientists, students graduating from clinical laboratory science (CLS) programs are expected by their employers to perform accurately in entry-level positions with minimal training. Information in the CLS field is increasing at a dramatic rate, and instructors are expected to teach more content in the same amount of time with the same resources. With this increase in teaching obligations, instructors could use a tool to facilitate grading. The research question was, "Can computer-assisted assessment evaluate students in an accurate and time efficient way?" A computer program was developed to assess CLS students' ability to evaluate peripheral blood smears. Automated grading permits students to get results quicker and allows the laboratory instructor to devote less time to grading. This computer program could improve instruction by providing more time to students and instructors for other activities. To be valuable, the program should provide the same quality of grading as the instructor. These benefits must outweigh potential problems such as the time necessary to develop and maintain the program, monitoring of student progress by the instructor, and the financial cost of the computer software and hardware. In this study, surveys of students and an interview with the laboratory instructor were performed to provide a formative evaluation of the computer program. In addition, the grading accuracy of the computer program was examined. These results will be used to improve the program for use in future courses.

  5. An Evaluation of Performance Characteristics of Primary Display Devices.

    PubMed

    Ekpo, Ernest U; McEntee, Mark F

    2016-04-01

    The aim of this study was to complete a full evaluation of the new EIZO RX850 liquid crystal display and compare it to two medical displays currently used in Australia (EIZO GS510 and Barco MDCG 5121). The American Association of Physicists in Medicine (AAPM) Task Group 18 Quality Control test pattern was used to assess the performance of three high-resolution primary medical displays: EIZO RX850, EIZO GS510, and Barco MDCG 5121. A Konica Minolta spectroradiometer (CS-2000) was used to assess luminance response, non-uniformity, veiling glare, and color uniformity. A qualitative evaluation of noise was also performed. Seven breast lesions were displayed on each monitor and photographed with a calibrated 5.5-MP Olympus E-1 digital SLR camera. ImageJ software was used to sample pixel information from each lesion and the surrounding background to calculate their conspicuity index on each of the displays. All monitors fulfilled all AAPM acceptance criteria. The performance characteristics for the EIZO RX850, Barco MDCG 5121, and EIZO GS510, respectively, were as follows: maximum luminance (490, 500.5, and 413 cd/m2), minimum luminance (0.724, 1.170, and 0.92 cd/m2), contrast ratio (675:1, 428:1, 449:1), just-noticeable difference index (635, 622, 609), non-uniformity (20, 5.92, and 8.5 %), veiling glare (GR = 2465.6, 720.4, 1249.8), and color uniformity (Δu'v' = +0.003, +0.002, +0.002). All monitors demonstrated low noise levels. The conspicuity index (χ) of the lesions was slightly higher on the EIZO RX850 display. All medical displays fulfilled the AAPM performance criteria, and the performance characteristics of the EIZO RX850 are equal to or better than those of the Barco MDCG 5121 and EIZO GS510 displays.
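
    A quick sketch (Python) of two of the luminance-based figures reported above, computed from a handful of photometer readings; the corner/center readings are invented, and the just-noticeable-difference index (via the DICOM grayscale standard display function) is omitted for brevity.

      def contrast_ratio(l_max, l_min):
          return l_max / l_min

      def max_luminance_deviation(readings_cd_m2):
          """AAPM-style non-uniformity: 200 * (Lmax - Lmin) / (Lmax + Lmin), in percent."""
          lo, hi = min(readings_cd_m2), max(readings_cd_m2)
          return 200.0 * (hi - lo) / (hi + lo)

      readings = [430.0, 445.0, 452.0, 438.0, 490.0]   # four corners + center (invented)
      print(f"contrast ratio: {contrast_ratio(490.0, 0.724):.0f}:1")    # reported RX850 values
      print(f"non-uniformity: {max_luminance_deviation(readings):.1f}%")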

  6. Performance evaluation of optical cross-connected networks

    NASA Astrophysics Data System (ADS)

    Castanon Avila, Gerardo Antonio

    1998-07-01

    The transmission performance of regular two-connected multi-hop transparent optical networks in uniform traffic under hot-potato, single-buffer deflection routing schemes is evaluated. Manhattan Street (MS) Network and ShuffleNet (SN) are compared in terms of bit error rate (BER) and packet error rate (PER). We implement a novel strategy of analysis, in which the transmission performance evaluation is linked to the traffic randomness of the networks. Amplifier spontaneous emission (ASE) noise, and device-induced crosstalk severely limit the characteristics of the network, such as propagation distance, sustainable traffic, and bit-rate. To improve the teletraffic/transmission performance of regular two-connected optical networks a hybrid semi-transparent store-and-forward node architecture is presented. MS and SN are compared in terms of average queueing delay, queue size, propagation delay, throughput, and BER. Packets are stored just in the case of conflict to avoid deflection, otherwise they transparently traverse the node (transparent cut-through routing) without optical-electronic conversion. This architecture performs well in terms of throughput, propagation delay and BER. It is also shown that by combining deflection routing with the store-and-forward scheme the network can accommodate two different bit-rates. This suggests that the proposed hybrid scheme may have good potential for future multimedia networks. In addition, the steady state behavior of two-connected mesh packet-switched optical networks under a wavelength translation scheme (WT) is analyzed. It is shown that wavelength translation mitigates the blocking of cells substantially in cross-connected networks. By increasing the number of wavelengths and employing wavelength translation the probability of deflection can be reduced which, in turn, leads to a significant improvement in the teletraffic performance of the network.

  7. Alvord (3000-ft Strawn) LPG flood: design and performance evaluation

    SciTech Connect

    Frazier, G.D.; Todd, M.R.

    1982-01-01

    Mitchell Energy Corporation has implemented an LPG-dry gas miscible process in the Alvord (3000 ft Strawn) Unit in Wise County, Texas utilizing the DOE tertiary incentive program. The field had been waterflooded for 14 years and was producing near its economic limit at the time this project was started. This paper presents the results of the reservoir simulation study that was conducted to evaluate pattern configuration and operating alternatives so as to maximize LPG containment and oil recovery performance. Several recommendations resulting from this study were implemented for the project. Based on the model prediction, tertiary oil recovery is expected to be between 100,000 and 130,000 bbls, or about 7 percent of the oil originally in place in the Unit. An evaluation of the project performance to date is presented. In July of 1981 the injection of a 16% HPV slug of propane was completed. Natural gas is being used to drive the propane slug. A peak oil response of 222 BOPD was achieved in August of 1981 and production has since been declining. The observed performance of the flood indicates that the actual tertiary oil recovered will reach the predicted value, although the project life will be longer than expected. The results presented in this paper indicate that, without the DOE incentive program, the economics for this project would still be uncertain at this time.

  8. Style-independent document labeling: design and performance evaluation

    NASA Astrophysics Data System (ADS)

    Mao, Song; Kim, Jong Woo; Thoma, George R.

    2003-12-01

    The Medical Article Records System or MARS has been developed at the U.S. National Library of Medicine (NLM) for automated data entry of bibliographical information from medical journals into MEDLINE, the premier bibliographic citation database at NLM. Currently, a rule-based algorithm (called ZoneCzar) is used for labeling important bibliographical fields (title, author, affiliation, and abstract) on medical journal article page images. While rules have been created for medical journals with regular layout types, new rules have to be manually created for any input journals with arbitrary or new layout types. Therefore, it is of interest to label any journal articles independent of their layout styles. In this paper, we first describe a system (called ZoneMatch) for automated generation of crucial geometric and non-geometric features of important bibliographical fields based on string-matching and clustering techniques. The rule based algorithm is then modified to use these features to perform style-independent labeling. We then describe a performance evaluation method for quantitatively evaluating our algorithm and characterizing its error distributions. Experimental results show that the labeling performance of the rule-based algorithm is significantly improved when the generated features are used.

  9. Imaging acquisition display performance: an evaluation and discussion of performance metrics and procedures.

    PubMed

    Silosky, Michael S; Marsh, Rebecca M; Scherzinger, Ann L

    2016-07-08

    When The Joint Commission updated its Requirements for Diagnostic Imaging Services for hospitals and ambulatory care facilities on July 1, 2015, among the new requirements was an annual performance evaluation for acquisition workstation displays. The purpose of this work was to evaluate a large cohort of acquisition displays used in a clinical environment and compare the results with existing performance standards provided by the American College of Radiology (ACR) and the American Association of Physicists in Medicine (AAPM). Measurements of the minimum luminance, maximum luminance, and luminance uniformity, were performed on 42 acquisition displays across multiple imaging modalities. The mean values, standard deviations, and ranges were calculated for these metrics. Additionally, visual evaluations of contrast, spatial resolution, and distortion were performed using either the Society of Motion Pictures and Television Engineers test pattern or the TG-18-QC test pattern. Finally, an evaluation of local nonuniformities was performed using either a uniform white display or the TG-18-UN80 test pattern. Displays tested were flat panel, liquid crystal displays that ranged from less than 1 to up to 10 years of use and had been built by a wide variety of manufacturers. The mean values for Lmin and Lmax for the displays tested were 0.28 ± 0.13 cd/m2 and 135.07 ± 33.35 cd/m2, respectively. The mean maximum luminance deviation for both ultrasound and non-ultrasound displays was 12.61% ± 4.85% and 14.47% ± 5.36%, respectively. Visual evaluation of display performance varied depending on several factors including brightness and contrast settings and the test pattern used for image quality assessment. This work provides a snapshot of the performance of 42 acquisition displays across several imaging modalities in clinical use at a large medical center. Comparison with existing performance standards reveals that changes in display technology and the move from cathode ray

  10. Imaging acquisition display performance: an evaluation and discussion of performance metrics and procedures.

    PubMed

    Silosky, Michael S; Marsh, Rebecca M; Scherzinger, Ann L

    2016-01-01

    When The Joint Commission updated its Requirements for Diagnostic Imaging Services for hospitals and ambulatory care facilities on July 1, 2015, among the new requirements was an annual performance evaluation for acquisition workstation displays. The purpose of this work was to evaluate a large cohort of acquisition displays used in a clinical environment and compare the results with existing performance standards provided by the American College of Radiology (ACR) and the American Association of Physicists in Medicine (AAPM). Measurements of the minimum luminance, maximum luminance, and luminance uniformity, were performed on 42 acquisition displays across multiple imaging modalities. The mean values, standard deviations, and ranges were calculated for these metrics. Additionally, visual evaluations of contrast, spatial resolution, and distortion were performed using either the Society of Motion Pictures and Television Engineers test pattern or the TG-18-QC test pattern. Finally, an evaluation of local nonuniformities was performed using either a uniform white display or the TG-18-UN80 test pattern. Displays tested were flat panel, liquid crystal displays that ranged from less than 1 to up to 10 years of use and had been built by a wide variety of manufacturers. The mean values for Lmin and Lmax for the displays tested were 0.28 ± 0.13 cd/m2 and 135.07 ± 33.35 cd/m2, respectively. The mean maximum luminance deviation for both ultrasound and non-ultrasound displays was 12.61% ± 4.85% and 14.47% ± 5.36%, respectively. Visual evaluation of display performance varied depending on several factors including brightness and contrast settings and the test pattern used for image quality assessment. This work provides a snapshot of the performance of 42 acquisition displays across several imaging modalities in clinical use at a large medical center. Comparison with existing performance standards reveals that changes in display technology and the move from cathode ray

  11. Development of the "performance competence evaluation measure": assessing qualitative aspects of dance performance.

    PubMed

    Krasnow, Donna; Chatfield, Steven J

    2009-01-01

    The aim of this study was to develop a measurement tool, the "Performance Competence Evaluation Measure" (PCEM), for the evaluation of qualitative aspects of dance performance. The project had two phases. In the first phase a literature review was conducted to examine 1. the previous development of similar measurement tools, 2. descriptions of dance technique and dance performance applicable to the development of a qualitative measurement tool, and 3. theoretical models from somatic practices that evaluate and assess qualitative aspects of movement and dance activity. The second phase involved the development of a system for using PCEM, and testing its validity and reliability. Three judges from the professional dance community volunteered to test PCEM with a sample of 20 subjects from low-intermediate to advanced classes at a university dance program. The subjects learned a dance combination and were videotaped performing it on two separate occasions, eight weeks apart. The judges reviewed the videos in random order. Logical validity of PCEM was established through assessment by two faculty members of the university dance department and the three judges. Intra-rater and inter-rater reliability demonstrated correlation coefficients of 0.95 and 0.94, respectively. It was concluded that PCEM can serve as a useful measurement tool for future dance science research.

  12. Protection performance evaluation regarding imaging sensors hardened against laser dazzling

    NASA Astrophysics Data System (ADS)

    Ritt, Gunnar; Koerber, Michael; Forster, Daniel; Eberle, Bernd

    2015-05-01

    Electro-optical imaging sensors are widely distributed and used for many different purposes, including civil security and military operations. However, laser irradiation can easily disturb their operational capability. Thus, an adequate protection mechanism for electro-optical sensors against dazzling and damage is highly desirable. Different protection technologies now exist, but none of them satisfies the operational requirements without constraints. In order to evaluate the performance of various laser protection measures, we present two different approaches, one based on triangle orientation discrimination and the other on structural similarity. For both approaches, image analysis algorithms are applied to images taken of a standard test scene with triangular test patterns that is superimposed by dazzling laser light at various irradiance levels. The evaluation methods are applied to three different sensors: a standard complementary metal oxide semiconductor camera, a high dynamic range camera with a nonlinear response curve, and a sensor hardened against laser dazzling.
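
    A simplified, single-window structural-similarity computation (Python) of the kind named above as one of the two evaluation approaches; a practical assessment would use the windowed SSIM (for example scikit-image's structural_similarity), and the test images here are synthetic.

      import numpy as np

      def global_ssim(x, y, data_range=255.0):
          c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
          x, y = x.astype(float), y.astype(float)
          mx, my = x.mean(), y.mean()
          vx, vy = x.var(), y.var()
          cov = ((x - mx) * (y - my)).mean()
          return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

      rng = np.random.default_rng(0)
      scene = rng.integers(0, 256, (128, 128)).astype(float)
      yy, xx = np.indices(scene.shape)
      spot = 120.0 * np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / 800.0)   # synthetic dazzle spot
      dazzled = np.clip(scene + spot, 0, 255)

      print(f"SSIM(scene, scene)   = {global_ssim(scene, scene):.3f}")
      print(f"SSIM(scene, dazzled) = {global_ssim(scene, dazzled):.3f}")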

  13. Evaluation of the virucidal performance of domestic laundry procedures.

    PubMed

    Heinzel, Michael; Kyas, Andrea; Weide, Mirko; Breves, Roland; Bockmühl, Dirk P

    2010-09-01

    Laundering is one of the most important means of ensuring a sufficient hygiene standard in the household environment. To evaluate the performance of this process, it is desirable to have methods that mimic the real-life situation as closely as possible. Although methods for evaluating the antibacterial and antifungal efficacy of domestic laundry procedures are available, the effect of laundering on viruses is still rather unclear. As the influence of laundry process parameters such as mechanical action, temperature dynamics or liquor ratio cannot be simulated in vitro by suspension assays, a new in situ test method allowing virus simulation tests in washing machines has been developed. Using this in situ method we could show that conventional household washing detergents achieve full virucidal efficacy at 40 degrees C, even against non-enveloped surrogate viruses.

  14. Medication use evaluation: pharmacist rubric for performance improvement.

    PubMed

    Fanikos, John; Jenkins, Kathryn L; Piazza, Gregory; Connors, Jean; Goldhaber, Samuel Z

    2014-12-01

    Despite rigorous expert review, medications often fall into routine use with unrecognized and unwanted complications. Use of some medications remains controversial because information to support efficacy is conflicting, scant, or nonexistent. Medication use evaluation (MUE) is a performance improvement tool that can be used when there is uncertainty regarding whether a medication will be beneficial. It is particularly useful when limited evidence is available on how best to choose between two or more medications. MUEs can analyze the process of medication prescribing, preparation, dispensing, administration, and monitoring. MUEs can be part of a structured or mandated multidisciplinary quality management program that focuses on evaluating medication effectiveness and improving patient safety. Successful MUE programs have a structure in place to support completion of rapid-cycle data collection, analysis, and intervention that supports practice change. PMID:25521847

  15. NREL Evaluates Performance of Hydraulic Hybrid Refuse Vehicles

    SciTech Connect

    2015-09-01

    This highlight describes NREL's evaluation of the in-service performance of 10 next-generation hydraulic hybrid refuse vehicles (HHVs), 8 previous-generation (model year 2013) HHVs, and 8 comparable conventional diesel vehicles operated by Miami-Dade County's Public Works and Waste Management Department in southern Florida. Launched in March 2015, the on-road portion of this 12-month evaluation focuses on collecting and analyzing vehicle performance data - fuel economy, maintenance costs, and drive cycles - from the HHVs and the conventional diesel vehicles. The fuel economy of heavy-duty vehicles, such as refuse trucks, is largely dependent on the load carried and the drive cycles on which they operate. In the right applications, HHVs offer a potential fuel-cost advantage over their conventional counterparts. This advantage is contingent, however, on driving behavior and drive cycles with high kinetic intensity that take advantage of regenerative braking. NREL's evaluation will assess the performance of this technology in commercial operation and help Miami-Dade County determine the ideal routes for maximizing the fuel-saving potential of its HHVs. Based on the field data, NREL will develop a validated vehicle model using the Future Automotive Systems Technology Simulator, also known as FASTSim, to study the impacts of route selection and other vehicle parameters. NREL is also analyzing fueling and maintenance data to support total-cost-of-ownership estimations and forecasts. The study aims to improve understanding of the overall usage and effectiveness of HHVs in refuse operation compared to similar conventional vehicles and to provide unbiased technical information to interested stakeholders.
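    The highlight notes that hydraulic hybrids pay off on drive cycles with high kinetic intensity. The record does not give a formula; a commonly used definition (an assumption here, not taken from the NREL study) is characteristic acceleration divided by the square of aerodynamic speed, both computed from a second-by-second speed trace, as sketched below.

      import numpy as np

      def kinetic_intensity(speed_mps, dt_s=1.0):
          """Characteristic acceleration / aerodynamic speed^2 from a speed trace (units: 1/m)."""
          v = np.asarray(speed_mps, dtype=float)
          # Positive kinetic-energy gains per unit mass, summed over the cycle
          dke = 0.5 * np.maximum(np.diff(v ** 2), 0.0)
          distance = np.sum(0.5 * (v[1:] + v[:-1]) * dt_s)
          char_accel = np.sum(dke) / distance
          # Aerodynamic speed^2 = integral(v^3 dt) / integral(v dt)
          aero_speed_sq = np.sum(v ** 3 * dt_s) / np.sum(v * dt_s)
          return char_accel / aero_speed_sq

      # Hypothetical stop-and-go refuse-route trace: repeated accelerate/cruise/brake segments
      trace = np.concatenate([np.linspace(0, 8, 20), np.full(10, 8.0), np.linspace(8, 0, 20)] * 5)
      print(f"kinetic intensity ~ {kinetic_intensity(trace):.4f} 1/m")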

  16. The potential of inductively coupled plasma mass spectrometry detection for high-performance liquid chromatography combined with accurate mass measurement of organic pharmaceutical compounds.

    PubMed

    Axelsson, B O; Jörnten-Karlsson, M; Michelsen, P; Abou-Shakra, F

    2001-01-01

    Quantification of unknown components in pharmaceutical, metabolic and environmental samples is an important but difficult task. Most commonly used detectors (like UV, RI or MS) require standards of each analyte for accurate quantification. Even if the chemical structure or elemental composition is known, the response from these detectors is difficult to predict with any accuracy. In inductively coupled plasma mass spectrometry (ICP-MS) compounds are atomised and ionised irrespective of the chemical structure(s) incorporating the element of interest. Liquid chromatography coupled with inductively coupled plasma mass spectrometry (LC/ICP-MS) has been shown to provide a generic detection for structurally non-correlated compounds with common elements like phosphorus and iodine. Detection of selected elements gives a better quantification of tested 'unknowns' than UV and organic mass spectrometric detection. It was shown that the ultrasonic nebuliser did not introduce any measurable dead volume and preserves the separation efficiency of the system. ICP-MS can be used in combination with many different mobile phases ranging from 0-100% organic modifier. The dynamic range was found to exceed 2.5 orders of magnitude. The application of LC/ICP-MS to pharmaceutical drugs and formulations has shown that impurities can be quantified below the 0.1 mol-% level.

  17. Performance Evaluation of Five Turbidity Sensors in Three Primary Standards

    USGS Publications Warehouse

    Snazelle, Teri T.

    2015-10-28

    Open-File Report 2015-1172 is temporarily unavailable. Five commercially available turbidity sensors were evaluated by the U.S. Geological Survey, Hydrologic Instrumentation Facility (HIF) for accuracy and precision in three types of turbidity standards: formazin, StablCal, and AMCO Clear (AMCO–AEPA). The U.S. Environmental Protection Agency (EPA) recognizes all three turbidity standards as primary standards, meaning they are acceptable for reporting purposes. The Forrest Technology Systems (FTS) DTS-12, the Hach SOLITAX sc, the Xylem EXO turbidity sensor, the Yellow Springs Instrument (YSI) 6136 turbidity sensor, and the Hydrolab Series 5 self-cleaning turbidity sensor were evaluated to determine if turbidity measurements in the three primary standards are comparable to each other, and to ascertain if the primary standards are truly interchangeable. A formazin 4000 nephelometric turbidity unit (NTU) stock was purchased and dilutions of 40, 100, 400, 800, and 1000 NTU were made fresh the day of testing. StablCal and AMCO Clear (for Hach 2100N) standards with corresponding concentrations were also purchased for the evaluation. Sensor performance was not evaluated in turbidity levels less than 40 NTU due to the unavailability of polymer-bead turbidity standards rated for general use. The percent error was calculated as the true (not absolute) difference between the measured turbidity and the standard value, divided by the standard value. The sensors that demonstrated the best overall performance in the evaluation were the Hach SOLITAX and the Hydrolab Series 5 turbidity sensor when the operating range (0.001–4000 NTU for the SOLITAX and 0.1–3000 NTU for the Hydrolab) was considered in addition to sensor accuracy and precision. The average percent error in the three standards was 3.80 percent for the SOLITAX and -4.46 percent for the Hydrolab. The DTS-12 also demonstrated good accuracy with an average percent error of 2.02 percent and a maximum relative standard
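    The percent error rule quoted in this report is simple enough to restate in code. The sketch below follows the stated definition, the signed (not absolute) difference between measured and standard turbidity divided by the standard value; the readings are made up for illustration.

      import numpy as np

      standard_ntu = np.array([40.0, 100.0, 400.0, 800.0, 1000.0])   # nominal standard values
      measured_ntu = np.array([41.2, 97.5, 410.0, 785.0, 1022.0])    # hypothetical sensor readings

      # Signed percent error, per the report's definition: (measured - standard) / standard * 100
      percent_error = (measured_ntu - standard_ntu) / standard_ntu * 100.0
      print(np.round(percent_error, 2), "average:", round(percent_error.mean(), 2))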

  18. Thermal performance evaluation of the Semco (liquid) solar collector

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Procedures used and results obtained during the evaluation test program on a flat plate collector which uses water as the working fluid are discussed. The absorber plate is copper tube soldered to copper fin coated with flat black paint. The glazing consists of two plates of Lo-Iron glass; the insulation is polyurethane foam. The collector weight is 242.5 pounds with overall external dimensions of approximately 48.8 in. x 120.8 in. x 4.1 in. The test program was conducted to obtain thermal performance data before and after 34 days of weather exposure test.

  19. Evaluation of the performance and flow in an axial compressor

    NASA Astrophysics Data System (ADS)

    Waddell, J. L.

    1982-10-01

    An experimental evaluation of the axial compressor test rig with one stage of symmetric blading was conducted to determine its suitability for studies of tip clearance effects. Measurements were made of performance parameters and internal flow fields. The configuration tested was found to be unsuitable due to poor flow from the inlet guide vanes, particularly near the tip region. Secondary flows and flaws in construction of the guide vanes were suggested as probable causes. Recommendations were made for a program to resolve the problem.

  20. Performances evaluation of textile electrodes for EMG remote measurements.

    PubMed

    Sumner, B; Mancuso, C; Paradiso, R

    2013-01-01

    This work focuses on the evaluation of textile electrodes for EMG signal acquisition. Signals were acquired simultaneously from textile electrodes and from gold-standard electrodes using the same acquisition system; tests were done across subjects and with multiple trials to enable a more complete analysis. This research activity was carried out within the framework of the European project Interaction, which aims to develop a system for continuous daily-life monitoring of the functional performance of stroke survivors in their physical interaction with the environment.

  1. The MSAD actuator solenoid, performance evaluation and modification

    NASA Astrophysics Data System (ADS)

    Worth, G.

    1983-04-01

    A small conical-faced solenoid actuator is tested in order to develop design criteria for improved performance including increased pull sensitivity. In addition to increased pull for the normal electrical inputs, a reduction in pull response to short duration electrical noise pulses is also required. Along with dynamic testing of the solenoid, a linear circuit model is developed. This model permits calculation of the dynamic forces and currents which can be expected with various electrical inputs. The model parameters are related to the actual solenoid and allow the effects of winding density and shading rings to be evaluated.

  2. Performance evaluation of space solar Brayton cycle power systems

    NASA Astrophysics Data System (ADS)

    Diao, Zheng-Gang

    1992-06-01

    Unlike gas turbine power systems, which consume chemical or nuclear energy, space solar Brayton cycle power systems should not be evaluated by energy consumption or cycle efficiency alone. A new design goal, life cycle cost, can combine all the power system characteristics, such as mass, area, and station-keeping propellant, into a unified criterion. The effects of pressure ratio, recuperator effectiveness, and compressor inlet temperature on life cycle cost were examined. This method would aid in making design choices for a space power system.

  3. Performance and boundary-layer evaluation of a sonic inlet

    NASA Technical Reports Server (NTRS)

    Schmidt, J. F.; Ruggeri, R. S.

    1976-01-01

    Tests were conducted to determine the boundary layer characteristics and aerodynamic performance of a radial vane sonic inlet with a length/diameter ratio of 1 for several vane configurations. The sonic inlet was designed with a slight wavy wall type of diffuser geometry, which permits operation at high inlet Mach numbers (sufficiently high for good noise suppression) without boundary layer flow separation and with good total pressure recovery. A new method for evaluating the turbulent boundary layer was developed to separate the boundary layer from the inviscid core flow, which is characterized by a total pressure variation from hub to tip, and to determine the experimental boundary layer parameters.

  4. Performance Evaluation of Industrial Hygiene Air Monitoring Sensors

    SciTech Connect

    Maughan, A D.; Glissmeyer, John A.; Birnbaum, Jerome C.

    2004-12-10

    Tests were performed to evaluate the accuracy, precision and response time of certain commercially available handheld toxic gas monitors. The tests were conducted by PNNL in the Chemical Chamber Test Facility for CH2MHill Hanford Company. The instruments were tested with a set of dilute test gases including ammonia, nitrous oxide, and a mixture of organic vapors (acetone, benzene, ethanol, hexane, toluene and xylene). The certified gases were diluted to concentrations that may be encountered in the outdoor environment above the underground tank farms containing radioactive waste at the U.S. Department of Energy's Hanford site, near Richland, Washington. The challenge concentrations are near the lower limits of instrument sensitivity and response time. The performance test simulations were designed to look at how the instruments respond to changes in test gas concentrations that are similar to field conditions.

  5. Performance evaluation and clinical applications of 3D plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel

    2015-06-01

    The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, assesses plenoptic imaging in a clinically relevant context, and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, precision and accuracy results in an ideal and simulated surgical setting. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.
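    The paper quotes sub-millimeter precision and accuracy figures, but this record does not spell out how they were computed. The sketch below shows one conventional way to separate the two, treating precision as the spread of repeated 3D measurements of the same point and accuracy as the bias of their mean against a known ground truth; all values are hypothetical.

      import numpy as np

      # Hypothetical repeated plenoptic measurements of one calibration target (mm)
      repeats = np.array([[10.1, 20.3, 99.8],
                          [10.4, 20.1, 100.2],
                          [ 9.9, 20.4, 100.1],
                          [10.2, 20.2,  99.7]])

      ground_truth = np.array([10.0, 20.0, 100.0])   # known target position (mm)

      precision = np.linalg.norm(repeats - repeats.mean(axis=0), axis=1).mean()  # spread about the mean
      accuracy = np.linalg.norm(repeats.mean(axis=0) - ground_truth)             # bias of the mean estimate
      print(f"precision ~ {precision:.2f} mm, accuracy ~ {accuracy:.2f} mm")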

  6. Performance Evaluation of Fiber Bragg Gratings at Elevated Temperatures

    NASA Technical Reports Server (NTRS)

    Juergens, Jeffrey; Adamovsky, Grigory; Floyd, Bertram

    2004-01-01

    The development of integrated fiber optic sensors for smart propulsion systems demands that the sensors be able to perform in extreme environments. In order to use fiber optic sensors effectively in an extreme environment, one must have a thorough understanding of the sensor's limits and how it responds under various environmental conditions. The sensor evaluation currently involves examining the performance of fiber Bragg gratings at elevated temperatures. Fiber Bragg gratings (FBG) are periodic variations of the refractive index of an optical fiber. These periodic variations allow the FBG to act as an embedded optical filter passing the majority of light propagating through a fiber while reflecting back a narrow band of the incident light. The peak reflected wavelength of the FBG is known as the Bragg wavelength. Since the period and width of the refractive index variation in the fiber determine the wavelengths that are transmitted and reflected by the grating, any force acting on the fiber that alters the physical structure of the grating will change what wavelengths are transmitted and what wavelengths are reflected by the grating. Both thermal and mechanical forces acting on the grating will alter its physical characteristics, allowing the FBG sensor to detect both temperature variations and physical stresses and strains placed upon it. This ability to sense multiple physical forces makes the FBG a versatile sensor. This paper reports on test results of the performance of FBGs at elevated temperatures. The gratings looked at thus far have been either embedded in polymer matrix materials or freestanding, with the primary focus of this paper being on the freestanding FBGs. Throughout the evaluation process, various parameters of the FBGs' performance were monitored and recorded. These parameters include the peak Bragg wavelength, the power of the Bragg wavelength, and total power returned by the FBG. Several test samples were subjected to identical test conditions to

  7. Performance evaluation of neuro-PET using silicon photomultipliers

    NASA Astrophysics Data System (ADS)

    Jung, Jiwoong; Choi, Yong; Jung, Jin Ho; Kim, Sangsu; Im, Ki Chun

    2016-05-01

    Recently, we have developed the second prototype silicon photomultiplier (SiPM) based positron emission tomography (PET) scanner for human brain imaging. The PET system comprised detector blocks, each consisting of 4×4 SiPMs coupled to 4×4 Lutetium Yttrium Orthosilicate arrays, a charge signal transmission scheme, a high-density position decoder circuit, and FPGA-embedded ADC boards. The purpose of this study was to evaluate the performance of the newly developed neuro-PET system. The energy resolution, timing resolution, spatial resolution, sensitivity, stability of the photo-peak position, and count rate performance were measured. A tomographic image of a 3D Hoffman brain phantom was also acquired to evaluate the imaging capability of the neuro-PET. The average energy and timing resolutions measured for 511 keV gamma rays were 17±0.1% and 3±0.3 ns, respectively. Spatial resolution and sensitivity at the center of field of view (FOV) were 3.1 mm and 0.8%, respectively. The average scatter fraction was 0.4 with an energy window of 350-650 keV. The maximum true count rate and maximum NECR were measured as 43.3 kcps and 6.5 kcps at an activity concentration of 16.7 kBq/ml and 5.5 kBq/ml, respectively. Long-term stability results show that there was no significant change in the photo-peak position, energy resolution and count rate for 60 days. Phantom imaging studies were performed and demonstrated the feasibility of high-quality brain imaging. The performance tests and imaging results indicate that the newly developed PET is useful for brain imaging studies, provided the axial FOV is extended to improve the system sensitivity.
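    The count-rate figures here (maximum true rate, scatter fraction, NECR) follow standard PET terminology. As a hedged illustration rather than the authors' exact procedure, the sketch below applies the usual NEMA-style definitions: scatter fraction SF = S / (T + S) and NECR = T^2 / (T + S + R), where T, S and R are the true, scattered and random coincidence rates; the rates used are hypothetical.

      def scatter_fraction(trues_cps, scatters_cps):
          # SF = S / (T + S), randoms excluded
          return scatters_cps / (trues_cps + scatters_cps)

      def necr(trues_cps, scatters_cps, randoms_cps):
          # Noise-equivalent count rate: T^2 / (T + S + R)
          return trues_cps ** 2 / (trues_cps + scatters_cps + randoms_cps)

      # Hypothetical coincidence rates (kcps) at one activity concentration
      T, S, R = 40.0, 26.0, 14.0
      print(f"SF = {scatter_fraction(T, S):.2f}, NECR = {necr(T, S, R):.1f} kcps")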

  8. Performance Evaluation of Communication Software Systems for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed and compared. These systems are: the BSD socket programming interface; IONA's Orbix, an implementation of the CORBA specification; and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.
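    The comparison here rests on measuring message latency and throughput over each communication layer. The paper's benchmarks are not reproduced in this record; the sketch below is only a generic round-trip latency micro-benchmark over a loopback TCP socket, the kind of low-level baseline against which higher-level layers such as CORBA or PVM are usually compared (port number and message size are arbitrary choices).

      import socket, threading, time

      HOST, PORT, MSG_SIZE, ROUNDS = "127.0.0.1", 50007, 1024, 1000

      def echo_server(ready):
          srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          srv.bind((HOST, PORT)); srv.listen(1); ready.set()
          conn, _ = srv.accept()
          with conn:
              while True:
                  data = conn.recv(MSG_SIZE)
                  if not data:
                      break
                  conn.sendall(data)          # echo the payload back unchanged
          srv.close()

      ready = threading.Event()
      threading.Thread(target=echo_server, args=(ready,), daemon=True).start()
      ready.wait()

      cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      cli.connect((HOST, PORT))
      payload = b"x" * MSG_SIZE
      start = time.perf_counter()
      for _ in range(ROUNDS):
          cli.sendall(payload)
          received = 0
          while received < MSG_SIZE:          # the echo may arrive in several TCP segments
              received += len(cli.recv(MSG_SIZE - received))
      elapsed = time.perf_counter() - start
      cli.close()
      print(f"mean round-trip latency: {elapsed / ROUNDS * 1e6:.1f} microseconds for {MSG_SIZE}-byte messages")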

  9. Performance evaluation of various stormwater best management practices.

    PubMed

    Yu, Jianghua; Yu, Haixia; Xu, Liqiang

    2013-09-01

    Many best management practices have been developed and implemented to treat the nonpoint source pollution of the aquatic environment in Korea's four major river basins. The performance and cost of these facilities were evaluated and compared using broad categories, including grassed swales, constructed wetlands, vegetated filter strips, hydrodynamic separators, media filters, and infiltration trenches, based on the monitoring and maintenance work undertaken between 2005 and 2012. Constructed wetlands, media filters, and infiltration trenches generally performed better in removing pollutants than other types of facilities, while media filters were the most expensive in terms of construction and operational costs. In addition, constructed wetlands incurred the least operational cost, as well as helping to control the quantity of runoff. This illustrates that a high-cost facility does not necessarily give better performance. A slightly more expensive facility, such as a wetland, could prove to be a reasonably effective treatment. The selection of the most appropriate treatment for stormwater runoff should be based on an overall analysis of performance and cost.

  10. Performance, physiological, and oculometer evaluation of VTOL landing displays

    NASA Technical Reports Server (NTRS)

    North, R. A.; Stackhouse, S. P.; Graffunder, K.

    1979-01-01

    A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Physiological, visual response, and conventional flight performance measures were recorded for landing approaches performed in the NASA Visual Motion Simulator (VMS). Three displays (two computer graphic and a conventional flight director), three crosswind amplitudes, and two motion base conditions (fixed vs. moving base) were tested in a factorial design. Multivariate discriminant functions were formed from flight performance and/or visual response variables. The flight performance variable discriminant showed maximum differentiation between crosswind conditions. The visual response measure discriminant maximized differences between fixed vs. motion base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus represent higher workload levels.
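    The workload analysis builds multivariate discriminant functions from flight-performance and visual-response variables. As an illustrative sketch only (hypothetical data, scikit-learn rather than whatever package the original study used), a linear discriminant separating two crosswind conditions might look like this:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(1)
      # Hypothetical flight-performance variables (e.g., glideslope error, lateral deviation, control activity)
      low_xwind = rng.normal([0.5, 1.0, 0.2], 0.2, size=(30, 3))
      high_xwind = rng.normal([0.9, 1.6, 0.4], 0.2, size=(30, 3))
      X = np.vstack([low_xwind, high_xwind])
      y = np.array([0] * 30 + [1] * 30)            # 0 = low crosswind, 1 = high crosswind

      lda = LinearDiscriminantAnalysis().fit(X, y)
      scores = lda.transform(X)                     # value of the discriminant function per trial
      print("separation of group means on the discriminant axis:",
            round(float(scores[y == 1].mean() - scores[y == 0].mean()), 2))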

  11. Experiments evaluating compliance and force feedback effect on manipulator performance

    NASA Technical Reports Server (NTRS)

    Kugath, D. A.

    1972-01-01

    The performance capability of operators performing simulated space tasks was assessed using manipulator systems in which compliance and force feedback were varied. Two manipulators were used: the E-2 electromechanical man-equivalent (force, reach, etc.) master-slave system and a modified CAM 1400 hydraulic master-slave with 100 lbs force capability at reaches of 24 ft. The CAM 1400 was further modified to operate without its normal force feedback. Several experiments and simulations were performed. The first two involved the E-2 absorbing the energy of a moving mass and, secondly, guiding a mass through a maze. Thus, both work-paced and self-paced tasks were studied as servo compliance was varied. Three simulations were run with the E-2 mounted on the CAM 1400 to evaluate the concept of a dexterous manipulator as an end effector of a boom-manipulator. Finally, the CAM 1400 performed a maze test and also simulated the capture of a large mass as the servo compliance was varied and with force feedback included and removed.

  12. Development of Curricula and Materials to Teach Performance Skills Essential to Accurate Computer Assisted Transcription from Machine Shorthand Notes. Final Report.

    ERIC Educational Resources Information Center

    Honsberger, Marion M.

    This project was conducted at Edmonds Community College to develop curriculum and materials for use in teaching hands-on, computer-assisted court reporting. The final product of the project was a course with support materials designed to teach court reporting students performance skills by which each can rapidly create perfect computer-aided…

  13. Workforce Investment Act: Improvements Needed in Performance Measures To Provide a More Accurate Picture of WIA's Effectiveness. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    Nilsen, Sigurd R.

    A study assessed effectiveness of Workforce Investment Act (WIA) of 1998 performance measures. Data were from a survey of WIA program administrators in 50 states; visits to 5 states; interviews with labor officials and national associations representing state and local workforce development officials; and a document review. Findings indicate…

  14. Can Scores on an Interim High School Reading Assessment Accurately Predict Low Performance on College Readiness Exams? REL 2016-124

    ERIC Educational Resources Information Center

    Koon, Sharon; Petscher, Yaacov

    2016-01-01

    During the 2013/14 school year two Florida school districts sought to develop an early warning system to identify students at risk of low performance on college readiness measures in grade 11 or 12 (such as the SAT or ACT) in order to support them with remedial coursework prior to high school graduation. The study presented in this report provides…

  15. Tillandsia stricta Sol (Bromeliaceae) leaves as monitors of airborne particulate matter-A comparative SEM methods evaluation: Unveiling an accurate and odd HP-SEM method.

    PubMed

    de Oliveira, Martha Lima; de Melo, Edésio José Tenório; Miguens, Flávio Costa

    2016-09-01

    Airborne particulate matter (PM) has been included among the most important air pollutants by governmental environmental agencies and academic researchers. The use of terrestrial plants for monitoring PM has been widely accepted, particularly when it is coupled with SEM/EDS. Herein, Tillandsia stricta leaves were used as monitors of PM, focusing on a comparative evaluation of Environmental SEM (ESEM) and High-Pressure SEM (HPSEM). In addition, specimens air-dried in a formaldehyde atmosphere (AD/FA) were introduced as an SEM procedure. Hydrated specimen observation by ESEM was the best way to get information from T. stricta leaves. If any artifacts were introduced by AD/FA, they were indiscernible from those caused by CPD. Leaf anatomy was always well preserved. PM density was determined on adaxial and abaxial leaf epidermis for each of the SEM procedures. When compared with ESEM, particle extraction varied from 0 to 20% in air-dried leaves, while 23-78% of particles deposited on leaf surfaces were extracted by CPD procedures. ESEM was clearly the best choice among the methods, but morphological artifacts increased as a function of operation time, whereas HPSEM operation time was not limited. AD/FA avoided the shrinkage observed in the air-dried leaves and particle extraction was low when compared with CPD. Structural and particle density results suggest AD/FA as an important methodological approach to air pollution biomonitoring that can be widely used in all electron microscopy labs. Otherwise, previous PM assessments using terrestrial plants as biomonitors and performed by conventional SEM could have underestimated airborne particulate matter concentration. PMID:27357408

  16. Tillandsia stricta Sol (Bromeliaceae) leaves as monitors of airborne particulate matter-A comparative SEM methods evaluation: Unveiling an accurate and odd HP-SEM method.

    PubMed

    de Oliveira, Martha Lima; de Melo, Edésio José Tenório; Miguens, Flávio Costa

    2016-09-01

    Airborne particulate matter (PM) has been included among the most important air pollutants by governmental environmental agencies and academic researchers. The use of terrestrial plants for monitoring PM has been widely accepted, particularly when it is coupled with SEM/EDS. Herein, Tillandsia stricta leaves were used as monitors of PM, focusing on a comparative evaluation of Environmental SEM (ESEM) and High-Pressure SEM (HPSEM). In addition, specimens air-dried in a formaldehyde atmosphere (AD/FA) were introduced as an SEM procedure. Hydrated specimen observation by ESEM was the best way to get information from T. stricta leaves. If any artifacts were introduced by AD/FA, they were indiscernible from those caused by CPD. Leaf anatomy was always well preserved. PM density was determined on adaxial and abaxial leaf epidermis for each of the SEM procedures. When compared with ESEM, particle extraction varied from 0 to 20% in air-dried leaves, while 23-78% of particles deposited on leaf surfaces were extracted by CPD procedures. ESEM was clearly the best choice among the methods, but morphological artifacts increased as a function of operation time, whereas HPSEM operation time was not limited. AD/FA avoided the shrinkage observed in the air-dried leaves and particle extraction was low when compared with CPD. Structural and particle density results suggest AD/FA as an important methodological approach to air pollution biomonitoring that can be widely used in all electron microscopy labs. Otherwise, previous PM assessments using terrestrial plants as biomonitors and performed by conventional SEM could have underestimated airborne particulate matter concentration.

  17. Performance evaluation of stand alone hybrid PV-wind generator

    NASA Astrophysics Data System (ADS)

    Nasir, M. N. M.; Saharuddin, N. Z.; Sulaima, M. F.; Jali, Mohd Hafiz; Bukhari, W. M.; Bohari, Z. H.; Yahaya, M. S.

    2015-05-01

    This paper presents the performance evaluation of a standalone hybrid photovoltaic (PV)-wind generator at the Faculty of Electrical Engineering (FKE), UTeM. The hybrid PV-wind system at UTeM combines a wind turbine with a solar system, and the energy it generates charges a battery and supplies an LED street-lighting load. The purpose of this project is to evaluate the performance of the PV-wind hybrid generator. A solar radiation meter was used to measure solar radiation and an anemometer was used to measure wind speed. The effectiveness of the PV-wind system was assessed by comparing the various data collected. The results show that the hybrid system has greater reliability. For the solar subsystem, the correlation coefficient shows a strong relationship between solar radiation and output current, with the output current following the fluctuations in solar radiation. However, the correlation coefficient shows only a moderate relationship between wind speed and voltage. Hence, the wind turbine at FKE does not operate consistently enough to serve as an energy source for this hybrid system compared with the PV system. When the wind system does not operate fully because of the inconsistent energy source, the PV system operates and supplies the load to balance the extra demand.
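    The reliability claim rests on correlation coefficients between solar radiation and PV current, and between wind speed and turbine voltage. A minimal sketch of that calculation (with made-up field measurements) is shown below; the study itself does not specify the software used.

      import numpy as np

      # Hypothetical paired field measurements
      radiation_wm2 = np.array([220, 410, 560, 730, 850, 940, 780, 610])   # solar radiation (W/m2)
      pv_current_a = np.array([0.9, 1.7, 2.4, 3.1, 3.6, 4.0, 3.2, 2.6])    # PV output current (A)
      wind_speed_ms = np.array([1.2, 2.5, 1.8, 3.9, 2.2, 4.5, 3.1, 2.8])   # wind speed (m/s)
      wt_voltage_v = np.array([2.0, 5.5, 3.0, 7.0, 6.5, 9.0, 4.5, 8.0])    # wind turbine voltage (V)

      print("radiation vs current r =", round(np.corrcoef(radiation_wm2, pv_current_a)[0, 1], 2))
      print("wind speed vs voltage r =", round(np.corrcoef(wind_speed_ms, wt_voltage_v)[0, 1], 2))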

  18. Evaluation of performance of predictive models for deoxynivalenol in wheat.

    PubMed

    van der Fels-Klerx, H J

    2014-02-01

    The aim of this study was to evaluate the performance of two predictive models for deoxynivalenol contamination of wheat at harvest in the Netherlands, including the use of weather forecast data and external model validation. Data were collected in a different year and from different wheat fields than data used for model development. The two models were run for six preset scenarios, varying in the period for which weather forecast data were used, from zero-day (historical data only) to a 13-day period around wheat flowering. Model predictions using forecast weather data were compared to those using historical data. Furthermore, model predictions using historical weather data were evaluated against observed deoxynivalenol contamination of the wheat fields. Results showed that the use of weather forecast data rather than observed data only slightly influenced model predictions. The percentage of correct model predictions, given a threshold of 1,250 μg/kg (the legal limit in the European Union), was about 95% for the two models. However, only three samples had a deoxynivalenol concentration above this threshold, and the models were not able to predict these samples correctly. It was concluded that two-week weather forecast data can reliably be used in descriptive models for deoxynivalenol contamination of wheat, resulting in more timely model predictions. The two models are able to predict lower deoxynivalenol contamination correctly, but model performance in situations with high deoxynivalenol contamination needs to be further validated. This will need years with conducive environmental conditions for deoxynivalenol contamination of wheat.
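    Model performance here is summarized as the percentage of correct predictions relative to the 1,250 μg/kg legal limit. The sketch below shows that calculation on hypothetical predicted and observed deoxynivalenol concentrations; it illustrates the metric, not the authors' code.

      import numpy as np

      LIMIT = 1250.0  # EU legal limit for deoxynivalenol in wheat (ug/kg)

      observed = np.array([300, 150, 900, 1400, 200, 80, 1600, 450], dtype=float)   # hypothetical field data
      predicted = np.array([250, 210, 700, 900, 260, 120, 1700, 380], dtype=float)  # hypothetical model output

      # A prediction is "correct" if it falls on the same side of the legal limit as the observation
      correct = (predicted > LIMIT) == (observed > LIMIT)
      print(f"percent correct predictions: {100.0 * correct.mean():.1f}%")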

  19. Performance evaluation of stand alone hybrid PV-wind generator

    SciTech Connect

    Nasir, M. N. M.; Saharuddin, N. Z.; Sulaima, M. F.; Jali, Mohd Hafiz; Bukhari, W. M.; Bohari, Z. H.; Yahaya, M. S.

    2015-05-15

    This paper presents the performance evaluation of a standalone hybrid photovoltaic (PV)-wind generator at the Faculty of Electrical Engineering (FKE), UTeM. The hybrid PV-wind system at UTeM combines a wind turbine with a solar system, and the energy it generates charges a battery and supplies an LED street-lighting load. The purpose of this project is to evaluate the performance of the PV-wind hybrid generator. A solar radiation meter was used to measure solar radiation and an anemometer was used to measure wind speed. The effectiveness of the PV-wind system was assessed by comparing the various data collected. The results show that the hybrid system has greater reliability. For the solar subsystem, the correlation coefficient shows a strong relationship between solar radiation and output current, with the output current following the fluctuations in solar radiation. However, the correlation coefficient shows only a moderate relationship between wind speed and voltage. Hence, the wind turbine at FKE does not operate consistently enough to serve as an energy source for this hybrid system compared with the PV system. When the wind system does not operate fully because of the inconsistent energy source, the PV system operates and supplies the load to balance the extra demand.

  20. 78 FR 41926 - Proposed Information Collection Request; Comment Request; Performance Evaluation Studies on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-12

    ... AGENCY Proposed Information Collection Request; Comment Request; Performance Evaluation Studies on... an information collection request (ICR), ``Performance Evaluation Studies on Wastewater Laboratories...: (i) Evaluate whether the proposed collection of information is necessary for the proper...

  1. Analytical performance evaluation of Anyplex II HPV28 and Euroarray HPV for genotyping of cervical samples.

    PubMed

    Latsuzbaia, Ardashel; Tapp, Jessica; Nguyen, Trung; Fischer, Marc; Arbyn, Marc; Weyers, Steven; Mossong, Joël

    2016-07-01

    Analytically accurate human papillomavirus (HPV) genotyping methods are required to assess the impact of HPV vaccination. The aim of this study was to evaluate the analytical performance of the Anyplex II HPV28 (Seegene, Korea) and Euroarray HPV (Euroimmun, Germany) genotyping kits, in preparation for a future HPV vaccine efficacy monitoring study in Luxembourg. A total of 150 cervical swabs were collected from women with a mean age of 31.4 years. Agreement for detecting any HPV between Aptima/Anyplex (88.0%) and Aptima/Euroarray (90.7%) was similar. Agreement of Anyplex/EuroArray with Aptima was higher for genotypes 16, 18 and 45 than for the other 11 HPV types. The average number of HPV genotypes detected per sample was similar: 2.6 and 2.5 for Anyplex and EuroArray, respectively. In conclusion, Anyplex and Euroarray showed high agreement in general, and in particular for detecting the genotypes contained in HPV vaccines.
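    Overall agreement between two qualitative HPV assays is simply the fraction of samples for which both calls match. The snippet below illustrates that computation, plus Cohen's kappa as a chance-corrected alternative, on hypothetical positive/negative calls; it is not the statistical workflow of the study.

      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      # Hypothetical HPV-positive (1) / negative (0) calls on the same 20 cervical samples
      aptima = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1])
      anyplex = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1])

      agreement = np.mean(aptima == anyplex) * 100.0            # crude percent agreement
      kappa = cohen_kappa_score(aptima, anyplex)                # agreement corrected for chance
      print(f"overall agreement = {agreement:.1f}%, Cohen's kappa = {kappa:.2f}")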

  2. A Comprehensive Evaluation of the Performance and Materials Chemistry of a Silicone-Based Replicating Compound

    SciTech Connect

    Kalan, Michael

    2014-05-01

    The objective of this project was to characterize the performance and chemistry of a silicone-based replicating compound. Some silicone replicating compounds are useful for critical inspection of surface features. Common applications are for examining micro-cracks, surface pitting, scratching, and other surface defects. Materials characterization techniques (FTIR, XPS, ToF-SIMS, AFM, and confocal microscopy) were used to evaluate the replicating compound. These techniques allowed the resolution capabilities to be characterized and verified, and any surface contamination resulting from use of the compound to be assessed. FTIR showed that the compound is made entirely from silicone constituents. The AFM and confocal microscopy results showed that the compound accurately replicates surface features to the claimed resolution. XPS and ToF-SIMS showed that a silicone contaminant layer is left behind when a cured replica is peeled off a surface. Attempts to clean off the contamination could not completely remove all silicone residues.

  3. Design and performance evaluation of a coarse/fine precision motion control system

    SciTech Connect

    Yang, H; Buice, E S; Smith, S T; Hocken, R J; Fagan, T J; Trumper, D L; Otten, D; Seugling, R M

    2005-03-02

    This abstract presents current collaborative work on the development of a stage system for accurate nanometer-level positioning for scanning specimens spanning an area of 50 mm x 50 mm. The completed system employs a coarse/fine approach which comprises a short-range, six degree-of-freedom fine-motion platform (5 microns, 200 micro-radians) carried by a long-range, two-axis X-Y coarse positioning system. Relative motion of the stage to a fixed metrology frame will be measured using a heterodyne laser in an eight-pass interferometer configuration. The final stage system will be housed in a vacuum environment and operated in a temperature-controlled laboratory. Results from a simple single coarse/fine axis system will be the design basis for the final multi-axis system. It is expected that initial stage performance evaluation will be presented at the conference.

  4. Genetic Evaluation of the Performance of Malaria Parasite Clearance Rate Metrics

    PubMed Central

    Nkhoma, Standwell C.; Stepniewska, Kasia; Nair, Shalini; Phyo, Aung Pyae; McGready, Rose; Nosten, François; Anderson, Tim J. C.

    2013-01-01

    Accurate measurement of malaria parasite clearance rates (CRs) following artemisinin (ART) treatment is critical for resistance surveillance and research, and various CR metrics are currently used. We measured 13 CR metrics in 1472 ART-treated hyperparasitemia infections for which 6-hour parasite counts and parasite genotypes (93 single nucleotide polymorphisms [SNPs]) were available. We used heritability to evaluate the performance of each metric. Heritability ranged from 0.06 ± 0.06 (SD) for 50% parasite clearance times to 0.67 ± 0.04 (SD) for clearance half-lives estimated from 6-hour parasite counts. These results identify the measures that should be avoided and show that reliable clearance measures can be obtained with abbreviated monitoring protocols. PMID:23592863

  5. Genetic evaluation of the performance of malaria parasite clearance rate metrics.

    PubMed

    Nkhoma, Standwell C; Stepniewska, Kasia; Nair, Shalini; Phyo, Aung Pyae; McGready, Rose; Nosten, François; Anderson, Tim J C

    2013-07-15

    Accurate measurement of malaria parasite clearance rates (CRs) following artemisinin (ART) treatment is critical for resistance surveillance and research, and various CR metrics are currently used. We measured 13 CR metrics in 1472 ART-treated hyperparasitemia infections for which 6-hour parasite counts and parasite genotypes (93 single nucleotide polymorphisms [SNPs]) were available. We used heritability to evaluate the performance of each metric. Heritability ranged from 0.06 ± 0.06 (SD) for 50% parasite clearance times to 0.67 ± 0.04 (SD) for clearance half-lives estimated from 6-hour parasite counts. These results identify the measures that should be avoided and show that reliable clearance measures can be obtained with abbreviated monitoring protocols.
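    The best-performing metric in this record is the clearance half-life estimated from 6-hour parasite counts. As a hedged sketch of the underlying idea (a log-linear decline of parasitemia after treatment, with half-life = ln 2 divided by the fitted decay rate), not necessarily the estimator the authors used:

      import numpy as np

      # Hypothetical 6-hourly parasite counts (parasites/uL) after the start of artemisinin treatment
      hours = np.array([0, 6, 12, 18, 24, 30, 36], dtype=float)
      counts = np.array([250_000, 140_000, 80_000, 43_000, 25_000, 14_000, 7_500], dtype=float)

      # Fit log(count) = intercept - k * t over the roughly log-linear portion of the decline
      slope, _ = np.polyfit(hours, np.log(counts), 1)
      half_life_h = np.log(2.0) / -slope
      print(f"estimated clearance half-life ~ {half_life_h:.1f} hours")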

  6. Evaluation of the Performance of the THOR-alpha Dummy.

    PubMed

    van Don, B; van Ratingen, M; Bermond, F; Masson, C; Vezin, P; Hynd, D; Kallieris, D; Martinez, L

    2003-10-01

    Six European laboratories have evaluated the biomechanical response of the new advanced frontal impact dummy THOR-alpha with respect to the European impact response requirements. The results indicated that for many of the body regions (e.g. shoulder, spine, thorax, femur/knee) the THOR-alpha response was close to the human response. In addition, the durability, repeatability and sensitivity for some dummy regions have been evaluated. Based on the tests performed, it was found that the THOR-alpha is not durable enough. The lack of robustness of the THOR-alpha caused a problem in completing the full test program and in evaluating the repeatability of the dummy. The results have demonstrated that the assessment of frontal impact protection can be greatly improved with a more advanced frontal impact dummy. Regarding biofidelity and injury assessment capabilities, the THOR-alpha is a good candidate; however, it needs to be brought up to standard in other areas. Based on the results obtained, recommendations were defined for the improvement of the THOR-alpha dummy.

  7. Performance of a second generation toxicity reduction evaluation

    SciTech Connect

    Goodfellow, W.L.; Sohn, V.A.; Kotulak, M.A.

    1994-12-31

    Historically, a large electrical component manufacturing facility with a final effluent flow of approximately 0.2 MGD was consistently out of compliance with its state and federal discharge permits for acute and chronic toxicity. Characterization of the acute and chronic toxicity of the effluent and individual waste streams indicated that the majority of the toxicity was coming from the non-contact cooling water. Toxicity Identification Evaluation procedures indicated that copper was the principal toxicant. Evaluation of the facility's non-contact cooling water distribution system provided insight that the high copper levels were a result of leaching of copper from the piping in the kilns due to the elevated temperature and corrosive nature of the water. The facility installed selected treatment options that resulted in compliance with the facility's permit each month since August 1990. During the spring of 1994, the effluent was again observed to produce consistent acute and chronic toxicity to Ceriodaphnia dubia as part of NPDES routine monitoring. This paper describes the activities, results and conclusions of the new Toxicity Reduction Evaluation performed on this effluent. In addition, a discussion is provided outlining information and data that should be maintained by the facility in order to minimize costs of second-generation TREs while maximizing chances for success.

  8. Evaluation of Gear Condition Indicator Performance on Rotorcraft Fleet

    NASA Technical Reports Server (NTRS)

    Antolick, Lance J.; Branning, Jeremy S.; Wade, Daniel R.; Dempsey, Paula J.

    2010-01-01

    The U.S. Army is currently expanding its fleet of Health Usage Monitoring Systems (HUMS) equipped aircraft at significant rates, to now include over 1,000 rotorcraft. Two different on-board HUMS, the Honeywell Modern Signal Processing Unit (MSPU) and the Goodrich Integrated Vehicle Health Management System (IVHMS), are collecting vibration health data on aircraft that include the Apache, Blackhawk, Chinook, and Kiowa Warrior. The objective of this paper is to recommend the most effective gear condition indicators for fleet use based on both a theoretical foundation and field data from in-fleet use. In order to evaluate the gear condition indicator performance on rotorcraft fleets, results of more than five years of health monitoring for gear faults in the entire HUMS-equipped Army helicopter fleet will be presented. More than ten examples of gear faults indicated by the gear CI have been compiled and each reviewed for accuracy. False alarm indications will also be discussed. Performance data from test rigs and seeded fault tests will also be presented. The results of the fleet analysis will be discussed, and a performance metric assigned to each of the competing algorithms. Gear fault diagnostic algorithms that are compliant with ADS-79A will be recommended for future use and development. The performance of gear algorithms used in the commercial units and the effectiveness of the gear CI as a fault identifier will be assessed using the criteria outlined in ADS-79A-HDBK, an Army handbook that outlines the conversion from Reliability Centered Maintenance to the On-Condition status of Condition Based Maintenance.

  9. PQLX: A Software Tool to Evaluate Seismic Station Performance

    NASA Astrophysics Data System (ADS)

    McNamara, D. E.; Boaz, R. I.

    2006-12-01

    We present a new tool that will allow users to evaluate seismic station performance and characteristics by providing quick and easy transitions between visualizations of the frequency and time domains. The software is based on the probability density functions (PDF) of power spectral densities (PSD) (McNamara and Buland, 2004). The computed PSDs are stored in a MySQL database, allowing a user to access specific time periods of PSDs (PDF subsets) and time series segments through a GUI-driven interface. The power of the method and software lies in the fact that there is no need to screen the data for system transients, earthquakes or general data artifacts since they map into a background probability level. In fact, examination of artifacts related to station operation and episodic cultural noise allow us to estimate both the overall station quality and a baseline level of earth noise at each site. The output of this analysis tool is useful for both operational and scientific applications. Operationally, it is useful for characterizing the current and past performance of existing broadband stations, for conducting tests on potential new seismic station locations, for detecting problems with the recording system or sensors, and for evaluating the overall quality of data and meta-data. Scientifically, the tool allows for mining of PSDs for investigations on the evolution of seismic noise (see Aster et al., Hutt et al., Leeds et al., and Oneel et al., this meeting). The PDF algorithm and initial software were developed by the USGS as a part of the ANSS/GSN data and network QC system. Further development, supported by the IRIS Data Management Center, integrated the PDF algorithm into the IRIS QUACK system. The newest version, PQLX, combines the PDF system with the PQL time series viewing tool developed with support from IRIS PASSCAL. Currently, PQLX is operational at the USGS ANSS NOC and ASL for station performance monitoring.
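    The PDF-of-PSDs approach can be sketched compactly: compute many power spectral densities from successive time-series segments, convert to decibels, and histogram the power values at each frequency. The snippet below (synthetic data, scipy's Welch estimator, arbitrary binning) illustrates the idea rather than reproducing the McNamara and Buland implementation.

      import numpy as np
      from scipy.signal import welch

      fs = 40.0                                          # sample rate (Hz)
      rng = np.random.default_rng(2)
      segments = rng.normal(size=(48, 3600 * int(fs)))   # 48 one-hour synthetic noise segments

      psds = []
      for seg in segments:
          f, pxx = welch(seg, fs=fs, nperseg=4096)       # PSD of one segment
          psds.append(10.0 * np.log10(pxx))              # convert to dB
      psds = np.array(psds)

      # Probability density of power at each frequency, over 1-dB bins
      bins = np.arange(-200, 0, 1.0)
      pdf = np.array([np.histogram(psds[:, i], bins=bins, density=True)[0]
                      for i in range(psds.shape[1])])
      print("PDF matrix shape (frequencies x power bins):", pdf.shape)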

  10. Prior image constrained compressed sensing: Implementation and performance evaluation

    PubMed Central

    Lauzier, Pascal Thériault; Tang, Jie; Chen, Guang-Hong

    2012-01-01

    Purpose: Prior image constrained compressed sensing (PICCS) is an image reconstruction framework which incorporates an often available prior image into the compressed sensing objective function. The images are reconstructed using an optimization procedure. In this paper, several alternative unconstrained minimization methods are used to implement PICCS. The purpose is to study and compare the performance of each implementation, as well as to evaluate the performance of the PICCS objective function with respect to image quality. Methods: Six different minimization methods are investigated with respect to convergence speed and reconstruction accuracy. These minimization methods include the steepest descent (SD) method and the conjugate gradient (CG) method. These algorithms require a line search to be performed. Thus, for each minimization algorithm, two line searching algorithms are evaluated: a backtracking (BT) line search and a fast Newton-Raphson (NR) line search. The relative root mean square error is used to evaluate the reconstruction accuracy. The algorithm that offers the best convergence speed is used to study the performance of PICCS with respect to the prior image parameter α and the data consistency parameter λ. PICCS is studied in terms of reconstruction accuracy, low-contrast spatial resolution, and noise characteristics. A numerical phantom was simulated and an animal model was scanned using a multirow detector computed tomography (CT) scanner to yield the projection datasets used in this study. Results: For λ within a broad range, the CG method with Fletcher-Reeves formula and NR line search offers the fastest convergence for an equal level of reconstruction accuracy. Using this minimization method, the reconstruction accuracy of PICCS was studied with respect to variations in α and λ. When the number of view angles is varied between 107, 80, 64, 40, 20, and 16, the relative root mean square error reaches a minimum value for α ≈ 0.5. For
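    The image-quality figure of merit used throughout this evaluation is the relative root mean square error. Its usual definition, taken here as an assumption since the formula is not written out in this record, is the RMS of the voxel-wise difference divided by the RMS of the reference image, as in the short sketch below.

      import numpy as np

      def relative_rmse(recon, reference):
          """Relative root mean square error between a reconstruction and a reference image."""
          diff = recon.astype(float) - reference.astype(float)
          return np.sqrt(np.mean(diff ** 2)) / np.sqrt(np.mean(reference.astype(float) ** 2))

      # Toy example: a noisy version of a synthetic phantom
      rng = np.random.default_rng(3)
      phantom = np.zeros((64, 64)); phantom[16:48, 16:48] = 1.0
      noisy = phantom + rng.normal(0.0, 0.05, phantom.shape)
      print(f"relative RMSE = {relative_rmse(noisy, phantom):.3f}")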

  11. Performance Evaluation of the NASA/KSC Transmission System

    NASA Technical Reports Server (NTRS)

    Christensen, Kenneth J.

    2000-01-01

    NASA-KSC currently uses three bridged 100-Mbps FDDI segments as its backbone for data traffic. The FDDI Transmission System (FTXS) connects the KSC industrial area, KSC launch complex 39 area, and the Cape Canaveral Air Force Station. The report presents a performance modeling study of the FTXS and the proposed ATM Transmission System (ATXS). The focus of the study is on performance of MPEG video transmission on these networks. Commercial modeling tools - the CACI Predictor and Comnet tools - were used. In addition, custom software tools were developed to characterize conversation pairs in Sniffer trace (capture) files to use as input to these tools. A baseline study of both non-launch and launch day data traffic on the FTXS is presented. MPEG-1 and MPEG-2 video traffic was characterized and its shaping was evaluated. It is shown that the characteristics of a video stream have a direct effect on its performance in a network. It is also shown that shaping of video streams is necessary to prevent overflow losses and resulting poor video quality. The developed models can be used to predict when the existing FTXS will 'run out of room' and for optimizing the parameters of ATM links used for transmission of MPEG video. Future work with these models can provide useful input and validation to set-top box projects within the Advanced Networks Development group in NASA-KSC Development Engineering.

  12. Evaluation of performance and uncertainty of infrared tympanic thermometers.

    PubMed

    Chung, Wenbin; Chen, Chiachung

    2010-01-01

    Infrared tympanic thermometers (ITTs) are easy to use and have a quick response time. They are widely used for temperature measurement of the human body. The accuracy and uncertainty of measurement are the key performance indicators for these meters. The performance of two infrared tympanic thermometers, the Braun THT-3020 and the OMRON MC-510, was evaluated in this study. The cell of a temperature calibrator was modified to serve as the blackbody temperature standard. The measurement errors of the two meters were reduced by the calibration equation. The predicted values could meet the requirements of the ASTM standard. The sources of uncertainty include the standard deviations of replication at fixed temperature or of the predicted values of the calibration equation, the reference standard values, and the resolution. The uncertainty analysis shows that the uncertainty of the calibration equation is the main contributor to the combined uncertainty. Ambient temperature did not have a significant effect on measurement performance. The calibration equations could improve the accuracy of ITTs; however, they did not improve the uncertainty of ITTs.
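    Two ingredients of this evaluation, a calibration equation and a combined uncertainty, can be illustrated in a few lines. The sketch below fits a simple linear calibration of thermometer readings against blackbody reference temperatures and then combines uncertainty components by root-sum-of-squares; the component values are hypothetical, and the original study may have used a different calibration model.

      import numpy as np

      reference_c = np.array([35.0, 36.0, 37.0, 38.0, 39.0, 40.0])     # blackbody set points (deg C)
      reading_c = np.array([34.7, 35.8, 36.9, 37.9, 39.1, 40.2])       # hypothetical thermometer readings

      # Linear calibration equation: corrected = a * reading + b
      a, b = np.polyfit(reading_c, reference_c, 1)
      residual_sd = np.std(a * reading_c + b - reference_c, ddof=2)    # scatter about the fit

      # Combined standard uncertainty as root-sum-of-squares of the components
      u_calibration = residual_sd
      u_reference = 0.02                 # hypothetical standard uncertainty of the reference temperature
      u_resolution = 0.1 / np.sqrt(12)   # 0.1 deg C display resolution, rectangular distribution
      u_combined = np.sqrt(u_calibration**2 + u_reference**2 + u_resolution**2)
      print(f"calibration: corrected = {a:.3f} * reading + {b:.3f}; combined u = {u_combined:.3f} deg C")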

  13. Performance evaluation of indirect evaporative cooler using clay pot

    NASA Astrophysics Data System (ADS)

    Ramkumar, R.; Ragupathy, A.

    2016-05-01

    The aim of this experimental study is to investigate the performance of an indirect evaporative cooler in hot and humid regions. A novel approach is implemented in the cooler using clay pots in different numbers (one, two, and three pots) and different orientations (aligned and staggered) for a potential and feasibility study. The clay pot is a porous ceramic vessel: water filled inside the pot migrates to the outer surface through the pores, and the air passing over the pot surface is cooled by contact with it. A test rig was designed and fabricated to collect experimental data, with the clay pots arranged in aligned and staggered positions. Heat transfer was analysed at air velocities from 1 m/s to 5 m/s. The air temperature, relative humidity, pressure drop, and effectiveness were measured, and the performance of the evaporative cooler was evaluated. The analysis indicated that cooling effectiveness improves as air velocity decreases in the staggered position. The staggered arrangement gave the highest performance (57%) at 1 m/s air velocity with three pots, compared with the aligned arrangement.
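    Cooling effectiveness for an evaporative cooler is conventionally defined as the dry-bulb temperature drop of the product air divided by the difference between the inlet dry-bulb and inlet wet-bulb temperatures. The record does not state the exact formula used, so the sketch below applies that conventional wet-bulb effectiveness with made-up temperatures.

      def wetbulb_effectiveness(t_in_db, t_out_db, t_in_wb):
          """Wet-bulb effectiveness of an (indirect) evaporative cooler."""
          return (t_in_db - t_out_db) / (t_in_db - t_in_wb)

      # Hypothetical readings at 1 m/s air velocity, staggered three-pot arrangement
      t_inlet_db, t_outlet_db, t_inlet_wb = 38.0, 30.0, 24.0   # deg C
      eff = wetbulb_effectiveness(t_inlet_db, t_outlet_db, t_inlet_wb)
      print(f"effectiveness = {eff:.2f} ({eff * 100:.0f}%)")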

  14. Work performance evaluation using the exercising rat model

    SciTech Connect

    Stavert, D.M.; Lehnert, B.E.

    1987-01-01

    A treadmill-metabolic chamber system and a stress testing protocol have been developed to evaluate aerobic work performance on exercising rats that have inhaled toxic substances. The chamber with an enclosed treadmill provides the means to measure the physiologic status of rats during maximal work intensities in terms of O2 consumption (VO2) and CO2 production (VCO2). The metabolic chamber can also accommodate instrumented rats for more detailed analyses of their cardiopulmonary status, e.g., ECG, cardiac output, arterial blood gases and pH, and arterial and venous blood pressures. For such studies, an arterial/venous catheter preparation is required. Because of the severe metabolic alterations after such surgery, a post surgical recovery strategy using hyperalimentation was developed to ensure maximal performance of instrumented animals during stress testing. Actual work performance studies are conducted using an eight minute stress test protocol in which the rat is subjected to increasing external work. The metabolic state of the animal is measured from resting levels to maximum oxygen consumption (VO2max). VO2max has been shown to be reproducible in individual rats and is a sensitive indicator of oxidant gas-induced pulmonary damage. 3 tabs.

  15. Site characterization at the Rabbit Valley Geophysical Performance Evaluation Range

    SciTech Connect

    Koppenjan, S,; Martinez, M.

    1994-06-01

    The United States Department of Energy (US DOE) is developing a Geophysical Performance Evaluation Range (GPER) at Rabbit Valley, located 30 miles west of Grand Junction, Colorado. The purpose of the range is to provide a test area for geophysical instruments and survey procedures. Assessment of equipment accuracy and resolution is accomplished through the use of static and dynamic physical models. These models include targets with fixed configurations and targets that can be re-configured to simulate specific specifications. Initial testing (1991) combined with the current tests at the Rabbit Valley GPER will establish baseline data and will provide performance criteria for the development of geophysical technologies and techniques. The US DOE's Special Technologies Laboratory (STL) staff has conducted a Ground Penetrating Radar (GPR) survey of the site with its stepped FM-CW GPR. Additionally, STL contracted several other geophysical tests. These include an airborne GPR survey incorporating a 'chirped' FM-CW GPR system and a magnetic survey with a surface-towed magnetometer array unit. Ground-based and aerial video and still-frame pictures were also acquired. STL compiled and analyzed all of the geophysical maps and created a site characterization database. This paper discusses the results of the multi-sensor geophysical studies performed at Rabbit Valley and the future plans for the site.

  16. Evaluating the effectiveness of training strategies: performance goals and testing.

    PubMed

    Foshay, Wellesley R; Tinkey, Peggy T

    2007-01-01

    The Public Health Service policy, Animal Welfare Act regulations, and the Guide for the Care and Use of Laboratory Animals all require that institutions provide training for personnel engaged in animal research. Most research facilities have developed training programs to meet these requirements but may not have developed ways of assessing the effectiveness of these programs. Omission of this critical activity often leads to training that is ineffective, inefficient, or unnecessary. Evaluating the effectiveness of biomedical research and animal care training should involve a combination of assessments of performance, competence and knowledge, and appropriate tests for each type of knowledge, used at appropriate time intervals. In this article, the hierarchical relationship between performance, competence, and knowledge is described. The discussion of cognitive and psychomotor knowledge includes the important distinction between declarative and procedural knowledge. Measurement of performance is described and can include a variety of indirect and direct measurement techniques. Each measurement option has its own profile of strengths and weaknesses in terms of measurement validity, reliability, and costs of development and delivery. It is important to understand the tradeoffs associated with each measurement option, and to make appropriate choices of measurement strategy based on these tradeoffs arrayed against considerations of frequency, criticality, difficulty of learning, logistics, and budget. The article concludes with an example of how these measurement strategies can be combined into a cost-effective assessment plan for a biomedical research facility.

  17. Evaluating the factor structure of the Psychological Performance Inventory.

    PubMed

    Golby, Jim; Sheard, Michael; van Wersch, Anna

    2007-08-01

    This study assesses the construct validity of a measure of mental toughness, Loehr's Psychological Performance Inventory. Performers (N = 408; 303 men, 105 women; M age = 24.0 yr., SD = 6.7) were drawn from eight sports (artistic rollerskating, basketball, canoeing, golf, rugby league, rugby union, soccer, swimming) and competed at international, national, county and provincial, or club and regional standards. They completed the 42-item Psychological Performance Inventory during training camps. Principal components analysis provided minimal support for the original factor structure. Instead, the exploratory analysis yielded a 4-factor, 14-item model (PPI-A). A single factor underlying mental toughness (G(MT)) was identified with higher-order exploratory factor analysis using the Schmid-Leiman procedure. Psychometric analysis of the model using confirmatory techniques showed a good fit to the data, collectively satisfying absolute and incremental fit index benchmarks; the inventory possesses satisfactory psychometric properties, with adequate reliability and convergent and discriminant validity. The results lend preliminary support to the factorial validity and reliability of the model; however, further investigation of its stability is required before practitioners can be advised to use changes in scores as an index for evaluating the effects of psychological skills training.
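
    No analysis code accompanies the record. Purely as an illustration of the kind of dimensionality and reliability screening described, and not the authors' Schmid-Leiman or confirmatory pipeline, the sketch below runs a principal components analysis and computes Cronbach's alpha on simulated item responses; the simulated factor structure and all parameters are assumptions.

```python
# Minimal sketch of dimensionality/reliability screening: PCA on item responses
# plus Cronbach's alpha. Simulated data only; this is not the authors' pipeline.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_respondents, n_items, n_factors = 408, 14, 4

# Four correlated group factors driven by one general factor, loosely mimicking
# a higher-order "general mental toughness" structure.
general = rng.normal(size=(n_respondents, 1))
group = 0.6 * general + 0.8 * rng.normal(size=(n_respondents, n_factors))

# Each item loads on one group factor plus noise (simple structure).
loadings = np.zeros((n_items, n_factors))
for i in range(n_items):
    loadings[i, i % n_factors] = rng.uniform(0.6, 0.9)
items = group @ loadings.T + rng.normal(scale=0.6, size=(n_respondents, n_items))

# How much variance do the first four principal components capture?
pca = PCA().fit(items)
print("variance explained by first 4 components:",
      np.round(pca.explained_variance_ratio_[:4].sum(), 3))

def cronbach_alpha(x):
    """Internal-consistency reliability; rows = respondents, columns = items."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print("Cronbach's alpha over all items:", round(cronbach_alpha(items), 3))
```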

  18. Performance Evaluation of Smartphone Inertial Sensors Measurement for Range of Motion.

    PubMed

    Mourcou, Quentin; Fleury, Anthony; Franco, Céline; Klopcic, Frédéric; Vuillerme, Nicolas

    2015-01-01

    Over the years, smartphones have become tools for scientific and clinical research. They can, for instance, be used to assess range of motion and measure joint angles. In this paper, our aim was to determine whether smartphones are reliable and accurate enough for clinical motion research. This work evaluates the performance of different smartphone sensors and different manufacturers' algorithms against a gold standard: an industrial robotic arm together with an inertial measurement unit in current standard clinical use, an Xsens product. Both dynamic and static protocols were used for these comparisons. Mean Root Mean Square (RMS) values for the static protocol are under 0.3° for the different smartphones. Mean RMS values for the dynamic protocol are more prone to bias induced by the Euler angle representation. Statistical results show no filter effect and no hardware effect for either protocol. Smartphone performance can be considered comparable to the Xsens gold standard for clinical research. PMID:26389900
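
    The abstract reports RMS differences between smartphone-derived angles and the robot/Xsens reference and notes bias tied to the Euler angle representation. The sketch below shows one plausible way such an RMS comparison could be computed, wrapping angle differences to ±180° so the representation's discontinuity does not inflate the error; the signals, sampling rate, and noise level are synthetic assumptions, not data from the study.

```python
# Minimal sketch of an RMS comparison between smartphone-derived joint angles
# and a reference trace (e.g., robot arm / Xsens). Signals are synthetic.
import numpy as np

def wrap_deg(angle_deg):
    """Wrap angle differences to (-180, 180] to avoid spurious 360-degree jumps."""
    return (angle_deg + 180.0) % 360.0 - 180.0

def rms_error_deg(estimate, reference):
    """Root-mean-square angular error in degrees."""
    diff = wrap_deg(np.asarray(estimate) - np.asarray(reference))
    return float(np.sqrt(np.mean(diff ** 2)))

if __name__ == "__main__":
    fs = 100.0                                            # Hz, assumed sampling rate
    t = np.arange(0, 10, 1 / fs)
    reference = 45.0 * np.sin(2 * np.pi * 0.5 * t)        # reference flexion angle
    smartphone = reference + np.random.default_rng(1).normal(0, 0.25, t.size)
    print(f"RMS error: {rms_error_deg(smartphone, reference):.2f} deg")
```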

  19. Performance Evaluation of Smartphone Inertial Sensors Measurement for Range of Motion

    PubMed Central

    Mourcou, Quentin; Fleury, Anthony; Franco, Céline; Klopcic, Frédéric; Vuillerme, Nicolas

    2015-01-01

    Over the years, smartphones have become tools for scientific and clinical research. They can, for instance, be used to assess range of motion and measure joint angles. In this paper, our aim was to determine whether smartphones are reliable and accurate enough for clinical motion research. This work evaluates the performance of different smartphone sensors and different manufacturers' algorithms against a gold standard: an industrial robotic arm together with an inertial measurement unit in current standard clinical use, an Xsens product. Both dynamic and static protocols were used for these comparisons. Mean Root Mean Square (RMS) values for the static protocol are under 0.3° for the different smartphones. Mean RMS values for the dynamic protocol are more prone to bias induced by the Euler angle representation. Statistical results show no filter effect and no hardware effect for either protocol. Smartphone performance can be considered comparable to the Xsens gold standard for clinical research. PMID:26389900

  20. Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Conners, Timothy R.

    1992-01-01

    An investigation is underway to determine the benefits of a new propulsion system optimization algorithm in an F-15 airplane. The performance seeking control (PSC) algorithm optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. As part of the PSC test program, the F-15 aircraft was operated on a horizontal thrust stand. Thrust was measured with highly accurate load cells. The measured thrust was compared with onboard model estimates and with results from posttest performance programs. Thrust changes using the various PSC modes were recorded, and those results were compared with the benefits obtained using the less complex Highly Integrated Digital Electronic Control (HIDEC) algorithm. The PSC maximum thrust mode increased intermediate-power thrust by 10 percent. The PSC engine model estimated the measured thrust accurately and closely followed the transients during optimization. Quantitative results from the evaluation of the algorithms and performance calculation models are included, with emphasis on measured thrust results. The report presents a description of the PSC system and a discussion of factors affecting the accuracy of the thrust stand load measurements.
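
    The report's core comparison is between onboard-model thrust estimates and thrust-stand load-cell measurements. As a minimal, hypothetical sketch of that kind of agreement check (the numbers below are invented, not F-15 test data), the snippet computes bias and percent error between the two thrust series.

```python
# Minimal sketch of comparing onboard-model thrust estimates against thrust-stand
# load-cell measurements. Values are invented for illustration only.
import numpy as np

measured_lbf = np.array([10500., 11200., 12050., 12900., 13600.])   # load cells
modeled_lbf = np.array([10380., 11310., 11980., 13050., 13500.])    # onboard model

error = modeled_lbf - measured_lbf
percent_error = 100.0 * error / measured_lbf

print(f"mean bias:        {error.mean():+.0f} lbf")
print(f"mean |error|:     {np.abs(percent_error).mean():.2f} %")
print(f"worst-case error: {np.abs(percent_error).max():.2f} %")
```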