Science.gov

Sample records for performance evaluation model

  1. Evaluating survival model performance: a graphical approach.

    PubMed

    Mandel, M; Galai, N; Simchen, E

    2005-06-30

In the last decade, many statistics have been suggested to evaluate the performance of survival models. These statistics evaluate the overall performance of a model, ignoring possible variability in performance over time. Using an extension of measures used in binary regression, we propose a graphical method to depict the performance of a survival model over time. The method provides estimates of performance at specific time points and can be used as an informal test for detecting time-varying effects of covariates in the Cox model framework. The method is illustrated on real and simulated data using the Cox proportional hazards model and rank statistics.

  2. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Science Inventory

This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...

  3. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    NASA Technical Reports Server (NTRS)

    Feary, Michael S.

    2012-01-01

This presentation is part of a panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on discussions of generating distributions of performance, and the evaluation of different strategies for humans performing tasks with mixed initiative (Human-Automation) systems. I will also discuss issues with how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety-critical systems. I will conclude with challenges for the future.

  4. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  5. Solid rocket booster performance evaluation model. Volume 4: Program listing

    NASA Technical Reports Server (NTRS)

    1974-01-01

    All subprograms or routines associated with the solid rocket booster performance evaluation model are indexed in this computer listing. An alphanumeric list of each routine in the index is provided in a table of contents.

  6. Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Giroux, E.; Roth, H.; Yin, D.

    2004-05-01

    Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on evaluation of organic aerosol model performance. One assumption is about the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other is about the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its impact on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that utilizing different enthalpy of vaporization values causes changes in the shapes of IAY curves and in the response of SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling
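
    The temperature dependence described above can be illustrated with a short, hedged sketch: under the standard absorptive (two-product, Odum-type) partitioning framework that CMAQ-era SOA algorithms build on, the assumed enthalpy of vaporization enters through a Clausius-Clapeyron correction of the partitioning coefficients, which in turn changes the instantaneous aerosol yield. The parameter values and precursor below are hypothetical and are not taken from the paper.

```python
# Illustrative sketch (not the paper's CMAQ code): how an assumed enthalpy of
# vaporization enters absorptive SOA partitioning and the instantaneous yield.
import numpy as np

R = 8.314  # J mol-1 K-1

def k_partition(K_ref, T, T_ref=298.0, dH_vap=42e3):
    """Temperature-correct a partitioning coefficient K (m3/ug) via
    Clausius-Clapeyron; dH_vap (J/mol) is the assumed enthalpy of vaporization."""
    return K_ref * (T / T_ref) * np.exp((dH_vap / R) * (1.0 / T - 1.0 / T_ref))

def instantaneous_yield(M_o, alphas, Ks):
    """Two-product (Odum-type) instantaneous aerosol yield for an absorbing
    organic mass concentration M_o (ug/m3), stoichiometric yields alphas and
    partitioning coefficients Ks."""
    alphas, Ks = np.asarray(alphas), np.asarray(Ks)
    return M_o * np.sum(alphas * Ks / (1.0 + Ks * M_o))

# Hypothetical two-product parameters for one precursor at 298 K
alphas, K298 = [0.071, 0.138], [0.053, 0.0019]
for dH in (42e3, 156e3):           # two assumed enthalpies of vaporization
    K285 = [k_partition(K, T=285.0, dH_vap=dH) for K in K298]
    print(dH, instantaneous_yield(10.0, alphas, K285))
```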

  7. Faculty performance evaluation: the CIPP-SAPS model.

    PubMed

    Mitcham, M

    1981-11-01

    The issues of faculty performance evaluation for allied health professionals are addressed. Daniel Stufflebeam's CIPP (content-input-process-product) model is introduced and its development in a CIPP-SAPS (self-administrative-peer-student) model is pursued. Data sources for the SAPS portion of the model are discussed. A suggestion for the use of the CIPP-SAPS model within a teaching contract plan is explored.

  8. Faculty Performance Evaluation: The CIPP-SAPS Model.

    ERIC Educational Resources Information Center

    Mitcham, Maralynne

    1981-01-01

The issues of faculty performance evaluation for allied health professionals are addressed. Daniel Stufflebeam's CIPP (content-input-process-product) model is introduced and its development into a CIPP-SAPS (self-administrative-peer-student) model is pursued. (Author/CT)

  9. Solid rocket booster performance evaluation model. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    1974-01-01

    This users manual for the solid rocket booster performance evaluation model (SRB-II) contains descriptions of the model, the program options, the required program inputs, the program output format and the program error messages. SRB-II is written in FORTRAN and is operational on both the IBM 370/155 and the MSFC UNIVAC 1108 computers.

  10. Evaluation of performance of predictive models for deoxynivalenol in wheat.

    PubMed

    van der Fels-Klerx, H J

    2014-02-01

The aim of this study was to evaluate the performance of two predictive models for deoxynivalenol contamination of wheat at harvest in the Netherlands, including the use of weather forecast data and external model validation. Data were collected in a different year and from different wheat fields than data used for model development. The two models were run for six preset scenarios, varying in the period for which weather forecast data were used, from zero-day (historical data only) to a 13-day period around wheat flowering. Model predictions using forecast weather data were compared to those using historical data. Furthermore, model predictions using historical weather data were evaluated against observed deoxynivalenol contamination of the wheat fields. Results showed that the use of weather forecast data rather than observed data only slightly influenced model predictions. The percentage of correct model predictions, given a threshold of 1,250 μg/kg (the legal limit in the European Union), was about 95% for the two models. However, only three samples had a deoxynivalenol concentration above this threshold, and the models were not able to predict these samples correctly. It was concluded that two-week weather forecast data can reliably be used in descriptive models for deoxynivalenol contamination of wheat, resulting in more timely model predictions. The two models are able to predict lower deoxynivalenol contamination correctly, but model performance in situations with high deoxynivalenol contamination needs to be further validated. This will need years with conducive environmental conditions for deoxynivalenol contamination of wheat.
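
    A minimal sketch of the threshold-based scoring described above (percent of correct predictions against the 1,250 μg/kg EU limit); the data values and function names are hypothetical, not from the study.

```python
# Sketch of the threshold-based evaluation described above: classify predicted
# and observed deoxynivalenol (DON) levels against the EU legal limit and count
# agreements. Values below are made up for illustration.
import numpy as np

LIMIT = 1250.0  # ug/kg, EU legal limit for DON in unprocessed wheat

def percent_correct(predicted, observed, limit=LIMIT):
    pred_exceed = np.asarray(predicted) > limit
    obs_exceed = np.asarray(observed) > limit
    return 100.0 * np.mean(pred_exceed == obs_exceed)

predicted = [300.0, 900.0, 1400.0, 600.0, 2000.0]   # hypothetical model output
observed  = [250.0, 1100.0, 800.0, 500.0, 1900.0]   # hypothetical field samples
print(f"{percent_correct(predicted, observed):.0f}% correct")
```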

  11. Solid rocket booster performance evaluation model. Volume 1: Engineering description

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The space shuttle solid rocket booster performance evaluation model (SRB-II) is made up of analytical and functional simulation techniques linked together so that a single pass through the model will predict the performance of the propulsion elements of a space shuttle solid rocket booster. The available options allow the user to predict static test performance, predict nominal and off nominal flight performance, and reconstruct actual flight and static test performance. Options selected by the user are dependent on the data available. These can include data derived from theoretical analysis, small scale motor test data, large motor test data and motor configuration data. The user has several options for output format that include print, cards, tape and plots. Output includes all major performance parameters (Isp, thrust, flowrate, mass accounting and operating pressures) as a function of time as well as calculated single point performance data. The engineering description of SRB-II discusses the engineering and programming fundamentals used, the function of each module, and the limitations of each module.

  12. Performance criteria to evaluate air quality modeling applications

    NASA Astrophysics Data System (ADS)

    Thunis, P.; Pederzoli, A.; Pernigotti, D.

    2012-11-01

A set of statistical indicators fit for air quality model evaluation is selected based on experience and literature: the Root Mean Square Error (RMSE), the bias, the Standard Deviation (SD) and the correlation coefficient (R). Among these, the RMSE is proposed as the key one for describing model skill. Model Performance Criteria (MPC) to investigate whether model results are 'good enough' for a given application are calculated based on the observation uncertainty (U). The basic concept is to allow model results a margin of tolerance (in terms of uncertainty) similar to that granted to the observations. U is pollutant-, concentration-level- and station-dependent; therefore the proposed MPC are normalized by U. Some composite diagrams are adapted or introduced to visualize model performance in terms of the proposed MPC and are illustrated in a real modeling application. The Target diagram, used to visualize the RMSE, is adapted with a new normalization on its axis, while complementary diagrams are proposed. In this first application the dependence of U on concentration level is ignored, and an assumption on the pollutant-dependent relative error is made. The advantages of this new approach are finally described.
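
    A minimal sketch of the four statistics named above and of a U-normalized error indicator in the spirit of the proposed MPC; the exact normalization used in the paper (e.g., for the adapted Target diagram) may differ, and the data and the uncertainty value are invented for illustration.

```python
# Minimal sketch of the statistics used above and an observation-uncertainty
# normalised error, in the spirit of the proposed Model Performance Criteria.
# The exact normalisation used in the paper may differ; illustrative only.
import numpy as np

def stats(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    sd_mod, sd_obs = np.std(model), np.std(obs)
    r = np.corrcoef(model, obs)[0, 1]
    return bias, rmse, sd_mod, sd_obs, r

def normalised_rmse(model, obs, u_obs):
    """RMSE divided by a measure of observation uncertainty u_obs (same units
    as the observations); values near or below 1 suggest the model error is
    within the tolerance allowed to the measurements themselves."""
    _, rmse, *_ = stats(model, obs)
    return rmse / u_obs

obs   = [42.0, 55.0, 60.0, 48.0, 70.0]   # hypothetical hourly O3, ug/m3
model = [40.0, 58.0, 52.0, 50.0, 75.0]
print(stats(model, obs), normalised_rmse(model, obs, u_obs=8.0))
```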

  13. Performance Evaluation of 3d Modeling Software for Uav Photogrammetry

    NASA Astrophysics Data System (ADS)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAV and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed. Consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modeling software are black boxes. As a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motive, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.

  14. Proper bibeta ROC model: algorithm, software, and performance evaluation

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Hu, Nan

    2016-03-01

    Semi-parametric models are often used to fit data collected in receiver operating characteristic (ROC) experiments to obtain a smooth ROC curve and ROC parameters for statistical inference purposes. The proper bibeta model as recently proposed by Mossman and Peng enjoys several theoretical properties. In addition to having explicit density functions for the latent decision variable and an explicit functional form of the ROC curve, the two parameter bibeta model also has simple closed-form expressions for true-positive fraction (TPF), false-positive fraction (FPF), and the area under the ROC curve (AUC). In this work, we developed a computational algorithm and R package implementing this model for ROC curve fitting. Our algorithm can deal with any ordinal data (categorical or continuous). To improve accuracy, efficiency, and reliability of our software, we adopted several strategies in our computational algorithm including: (1) the LABROC4 categorization to obtain the true maximum likelihood estimation of the ROC parameters; (2) a principled approach to initializing parameters; (3) analytical first-order and second-order derivatives of the likelihood function; (4) an efficient optimization procedure (the L-BFGS algorithm in the R package "nlopt"); and (5) an analytical delta method to estimate the variance of the AUC. We evaluated the performance of our software with intensive simulation studies and compared with the conventional binormal and the proper binormal-likelihood-ratio models developed at the University of Chicago. Our simulation results indicate that our software is highly accurate, efficient, and reliable.

  15. Experimental performance evaluation of human balance control models.

    PubMed

    Huryn, Thomas P; Blouin, Jean-Sébastien; Croft, Elizabeth A; Koehle, Michael S; Van der Loos, H F Machiel

    2014-11-01

    Two factors commonly differentiate proposed balance control models for quiet human standing: 1) intermittent muscle activation and 2) prediction that overcomes sensorimotor time delays. In this experiment we assessed the viability and performance of intermittent activation and prediction in a balance control loop that included the neuromuscular dynamics of human calf muscles. Muscles were driven by functional electrical stimulation (FES). The performance of the different controllers was compared based on sway patterns and mechanical effort required to balance a human body load on a robotic balance simulator. All evaluated controllers balanced subjects with and without a neural block applied to their common peroneal and tibial nerves, showing that the models can produce stable balance in the absence of natural activation. Intermittent activation required less stimulation energy than continuous control but predisposed the system to increased sway. Relative to intermittent control, continuous control reproduced the sway size of natural standing better. Prediction was not necessary for stable balance control but did improve stability when control was intermittent, suggesting a possible benefit of a predictor for intermittent activation. Further application of intermittent activation and predictive control models may drive prolonged, stable FES-controlled standing that improves quality of life for people with balance impairments. PMID:24771586

  16. An evaluation of sex-age-kill (SAK) model performance

    USGS Publications Warehouse

    Millspaugh, Joshua J.; Skalski, John R.; Townsend, Richard L.; Diefenbach, Duane R.; Boyce, Mark S.; Hansen, Lonnie P.; Kammermeyer, Kent

    2009-01-01

    The sex-age-kill (SAK) model is widely used to estimate abundance of harvested large mammals, including white-tailed deer (Odocoileus virginianus). Despite a long history of use, few formal evaluations of SAK performance exist. We investigated how violations of the stable age distribution and stationary population assumption, changes to male or female harvest, stochastic effects (i.e., random fluctuations in recruitment and survival), and sampling efforts influenced SAK estimation. When the simulated population had a stable age distribution and λ > 1, the SAK model underestimated abundance. Conversely, when λ < 1, the SAK overestimated abundance. When changes to male harvest were introduced, SAK estimates were opposite the true population trend. In contrast, SAK estimates were robust to changes in female harvest rates. Stochastic effects caused SAK estimates to fluctuate about their equilibrium abundance, but the effect dampened as the size of the surveyed population increased. When we considered both stochastic effects and sampling error at a deer management unit scale the resultant abundance estimates were within ±121.9% of the true population level 95% of the time. These combined results demonstrate extreme sensitivity to model violations and scale of analysis. Without changes to model formulation, the SAK model will be biased when λ ≠ 1. Furthermore, any factor that alters the male harvest rate, such as changes to regulations or changes in hunter attitudes, will bias population estimates. Sex-age-kill estimates may be precise at large spatial scales, such as the state level, but less so at the individual management unit level. Alternative models, such as statistical age-at-harvest models, which require similar data types, might allow for more robust, broad-scale demographic assessments.

  17. Novel Planar Electromagnetic Sensors: Modeling and Performance Evaluation

    PubMed Central

    Mukhopadhyay, Subhas C.

    2005-01-01

High-performance planar electromagnetic sensors, their modeling and a few applications are reported in this paper. Research employing planar-type electromagnetic sensors started quite a few years ago, with the initial emphasis on the inspection of defects on printed circuit boards. The use of the planar-type sensing system has been extended to the evaluation of near-surface material properties such as conductivity, permittivity and permeability, and it can also be used for the inspection of defects near the surface of materials. Recently the sensor has been used for inspecting the quality of saxophone reeds and dairy products. The electromagnetic responses of planar interdigital sensors with pork meat have been investigated.

  18. Evaluation of Turbulence-Model Performance in Jet Flows

    NASA Technical Reports Server (NTRS)

    Woodruff, S. L.; Seiner, J. M.; Hussaini, M. Y.; Erlebacher, G.

    2001-01-01

The importance of reducing jet noise in both commercial and military aircraft applications has made jet acoustics a significant area of research. A technique for jet noise prediction commonly employed in practice is the MGB approach, based on the Lighthill acoustic analogy. This technique requires as aerodynamic input mean flow quantities and turbulence quantities like the kinetic energy and the dissipation. The purpose of the present paper is to assess existing capabilities for predicting these aerodynamic inputs. Two modern Navier-Stokes flow solvers, coupled with several modern turbulence models, are evaluated by comparison with experiment for their ability to predict mean flow properties in a supersonic jet plume. Potential weaknesses are identified for further investigation. Another comparison with similar intent is discussed by Barber et al. The ultimate goal of this research is to develop a reliable flow solver applicable to the low-noise, propulsion-efficient nozzle exhaust systems being developed in NASA focused programs. These programs address a broad range of complex nozzle geometries operating in high-temperature, compressible flows. Seiner et al. previously discussed the jet configuration examined here. This convergent-divergent nozzle with an exit diameter of 3.6 inches was designed for an exhaust Mach number of 2.0 and a total temperature of 1680 F. The acoustic and aerodynamic data reported by Seiner et al. covered a range of jet total temperatures from 104 F to 2200 F at the fully-expanded nozzle pressure ratio. The aerodynamic data included centerline mean velocity and total temperature profiles. Computations were performed independently with two computational fluid dynamics (CFD) codes, ISAAC and PAB3D. Turbulence models employed include the k-epsilon model, the Gatski-Speziale algebraic-stress model and the Girimaji model, with and without the Sarkar compressibility correction. Centerline values of mean velocity and mean temperature are

  19. Simulation Modeling and Performance Evaluation of Space Networks

    NASA Technical Reports Server (NTRS)

    Jennings, Esther H.; Segui, John

    2006-01-01

In space exploration missions, the coordinated use of spacecraft as communication relays increases the efficiency of the endeavors. To conduct trade-off studies of the performance and resource usage of different communication protocols and network designs, JPL designed a comprehensive extendable tool, the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE). The design and development of MACHETE began in 2000 and is constantly evolving. Currently, MACHETE contains Consultative Committee for Space Data Systems (CCSDS) protocol standards such as Proximity-1, Advanced Orbiting Systems (AOS), Packet Telemetry/Telecommand, Space Communications Protocol Specification (SCPS), and the CCSDS File Delivery Protocol (CFDP). MACHETE uses the Aerospace Corporation's Satellite Orbital Analysis Program (SOAP) to generate the orbital geometry information and contact opportunities. Matlab scripts provide the link characteristics. At the core of MACHETE is a discrete event simulator, QualNet. Delay Tolerant Networking (DTN) is an end-to-end architecture providing communication in and/or through highly stressed networking environments. Stressed networking environments include those with intermittent connectivity, large and/or variable delays, and high bit error rates. To provide its services, the DTN protocols reside at the application layer of the constituent internets, forming a store-and-forward overlay network. The key capabilities of the bundling protocols include custody-based reliability, ability to cope with intermittent connectivity, ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses. In this presentation, we report on the addition of MACHETE models needed to support DTN, namely: the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. We demonstrate the use of the DTN protocol

  20. Evaluating Organizational Performance: Rational, Natural, and Open System Models

    ERIC Educational Resources Information Center

    Martz, Wes

    2013-01-01

    As the definition of organization has evolved, so have the approaches used to evaluate organizational performance. During the past 60 years, organizational theorists and management scholars have developed a comprehensive line of thinking with respect to organizational assessment that serves to inform and be informed by the evaluation discipline.…

  1. A Geospatial Model for Remedial Design Optimization and Performance Evaluation

    SciTech Connect

    Madrid, V M; Demir, Z; Gregory, S; Valett, J; Halden, R U

    2002-02-19

    invaluable in optimizing and evaluating the remedial design and performance.

  2. Evaluating performances of simplified physically based landslide susceptibility models.

    NASA Astrophysics Data System (ADS)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

Rainfall-induced shallow landslides cause significant damage involving loss of life and property. Prediction of locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis. It was integrated in the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing pixel-by-pixel model results and measurement data. Moreover, the package integration in NewAge-JGrass allows the use of other components such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and the robustness of models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each of the GOF indices separately, ii) model evaluation in the ROC plane using each of the optimal parameter sets, and iii) GOF robustness evaluation by assessing their sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk

  3. Biomechanical modelling and evaluation of construction jobs for performance improvement.

    PubMed

    Parida, Ratri; Ray, Pradip Kumar

    2012-01-01

Occupational risk factors, such as awkward posture, repetition, lack of rest, insufficient illumination and heavy workload related to construction-related MMH activities, may cause musculoskeletal disorders and poor performance of the workers. Ergonomic design of construction worksystems was therefore a critical need for improving workers' health and safety, for which dynamic biomechanical models had to be empirically developed and tested at a construction site of Tata Steel, the largest private-sector steel making company of India. In this study, a comprehensive framework is proposed for biomechanical evaluation of shovelling and grinding under diverse work environments. The benefit of such an analysis lies in its usefulness in setting guidelines for designing such jobs with minimization of the risks of musculoskeletal disorders (MSDs) and in enhancing correct methods of carrying out the jobs, leading to reduced fatigue and physical stress. Data based on direct observations and videography were collected for the shovellers and grinders over a number of work cycles. Compressive forces and moments for a number of segments and joints are computed with respect to joint flexion and extension. The results indicate that moments and compressive forces at the L5/S1 link are significant for shovellers, while moments at the elbow and wrist are significant for grinders.

  4. The Rasch Model for Evaluating Italian Student Performance

    ERIC Educational Resources Information Center

    Camminatiello, Ida; Gallo, Michele; Menini, Tullio

    2010-01-01

    In 1997 the Organisation for Economic Co-operation and Development (OECD) launched the OECD Programme for International Student Assessment (PISA) for collecting information about 15-year-old students in participating countries. Our study analyses the PISA 2006 cognitive test for evaluating the Italian student performance in mathematics, reading…

  5. Evaluating performances of simplified physically based models for landslide susceptibility

    NASA Astrophysics Data System (ADS)

    Formetta, G.; Capparelli, G.; Versace, P.

    2015-12-01

Rainfall-induced shallow landslides cause loss of life and significant damage involving private and public property, transportation systems, etc. Prediction of locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually two main approaches are used to accomplish this task: statistical or physically based models. Reliable model application involves automatic parameter calibration, objective quantification of the quality of susceptibility maps, and model sensitivity analysis. This paper presents a methodology to systematically and objectively calibrate, verify and compare different models and different model performance indicators, in order to identify and eventually select the models whose behavior is most reliable for a given case study. The procedure was implemented in a package of models for landslide susceptibility analysis and integrated in the NewAge-JGrass hydrological model. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit indices by comparing pixel-by-pixel model results and measurement data. Moreover, the package integration in NewAge-JGrass allows the use of other components such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and Altilia municipality. The analysis showed that, among all the optimized indices and all three models, the optimization of the index distance to perfect classification in the receiver operating characteristic plane (D2PC) coupled with model M3 is the best modeling solution for our test case.
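
    A hedged sketch of the kind of pixel-by-pixel verification indices computed by such a package, including the distance-to-perfect-classification index (D2PC) named above; only a few indices are shown, the full set of eight and the Average Index are not reproduced, and the example maps are hypothetical.

```python
# Sketch of pixel-by-pixel verification indices derived from the binary
# confusion matrix of predicted vs. mapped landslide pixels. D2PC is the
# distance to the perfect-classification corner (FPR=0, TPR=1) of the ROC plane.
import numpy as np

def confusion(pred, obs):
    pred, obs = np.asarray(pred, bool), np.asarray(obs, bool)
    tp = np.sum(pred & obs); fp = np.sum(pred & ~obs)
    fn = np.sum(~pred & obs); tn = np.sum(~pred & ~obs)
    return tp, fp, fn, tn

def indices(pred, obs):
    tp, fp, fn, tn = confusion(pred, obs)
    tpr = tp / (tp + fn)                      # true positive rate
    fpr = fp / (fp + tn)                      # false positive rate
    acc = (tp + tn) / (tp + fp + fn + tn)     # accuracy
    csi = tp / (tp + fp + fn)                 # critical success index
    d2pc = np.sqrt((1.0 - tpr) ** 2 + fpr ** 2)
    return {"TPR": tpr, "FPR": fpr, "ACC": acc, "CSI": csi, "D2PC": d2pc}

# Hypothetical susceptibility map vs. landslide inventory (1 = unstable pixel)
pred = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
obs  = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
print(indices(pred, obs))
```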

  6. Evaluating hydrological model performance using information theory-based metrics

    Technology Transfer Automated Retrieval System (TEKTRAN)

Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...
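
    One hedged illustration of how information theory-based metrics can complement accuracy-based ones: discretize simulated and measured streamflow into common bins and compute Kullback-Leibler divergence and mutual information. The binning choices and synthetic series below are assumptions for illustration, not from the record.

```python
# One possible way to apply information-theory metrics to streamflow series,
# as a complement to accuracy-based statistics: discretise simulated and
# measured flows into common bins and compute Kullback-Leibler divergence and
# mutual information. Binning and data below are illustrative assumptions.
import numpy as np

def hist_p(x, bins):
    p, _ = np.histogram(x, bins=bins)
    return p / p.sum()

def kl_divergence(p, q, eps=1e-12):
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return np.sum(p * np.log(p / q))

def mutual_information(x, y, bins):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

obs = np.random.default_rng(0).gamma(2.0, 5.0, 365)            # synthetic "measured" flow
sim = obs * 0.9 + np.random.default_rng(1).normal(0, 2, 365)   # synthetic model output
bins = np.histogram_bin_edges(np.concatenate([obs, sim]), bins=12)
print(kl_divergence(hist_p(sim, bins), hist_p(obs, bins)),
      mutual_information(obs, sim, bins))
```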

  7. Photovoltaic performance models: an evaluation with actual field data

    NASA Astrophysics Data System (ADS)

    TamizhMani, Govindasamy; Ishioye, John-Paul; Voropayev, Arseniy; Kang, Yi

    2008-08-01

Prediction of energy production is crucial to the design and installation of building-integrated photovoltaic systems. This prediction should be attainable based on commonly available parameters such as system size, orientation and tilt angle. Several commercially available as well as freely downloadable software tools exist to predict energy production. Six software models have been evaluated in this study: PV Watts, PVsyst, MAUI, Clean Power Estimator, Solar Advisor Model (SAM) and RETScreen. This evaluation has been done by comparing the monthly, seasonally and annually predicted data with the actual field data obtained over a one-year period on a large number of residential PV systems ranging between 2 and 3 kWdc. All the systems are located in Arizona, within the Phoenix metropolitan area, which lies at latitude 33° North and longitude 112° West, and are all connected to the electrical grid.

  8. Towards Modeling Realistic Mobility for Performance Evaluations in MANET

    NASA Astrophysics Data System (ADS)

    Aravind, Alex; Tahir, Hassan

Simulation modeling plays a crucial role in conducting research on complex dynamic systems like mobile ad hoc networks, and is often the only way to do so. Simulation has been successfully applied in MANET for more than two decades. In several recent studies, it has been observed that the credibility of simulation results in the field has decreased while the use of simulation has steadily increased. Part of this credibility crisis has been attributed to the simulation of the mobility of the nodes in the system. Mobility has a fundamental influence on the behavior and performance of mobile ad hoc networks. Accurate modeling and knowledge of the mobility of the nodes in the system is not only helpful but also essential for the understanding and interpretation of the performance of the system under study. Several ideas, mostly in isolation, have been proposed in the literature to infuse realism into the mobility of nodes. In this paper, we attempt a holistic analysis of creating realistic mobility models and then demonstrate the creation and analysis of realistic mobility models using a software tool we have developed. Using our software tool, the desired mobility of the nodes in the system can be specified, generated, and analyzed, and the trace can then be exported for use in performance studies of proposed algorithms or systems.

  9. Performance Evaluation of the Prototype Model NEXT Ion Thruster

    NASA Technical Reports Server (NTRS)

    Herman, Daniel A.; Soulas, George C.; Patterson, Michael J.

    2008-01-01

The performance testing results of the first prototype model NEXT ion engine, PM1, are presented. The NEXT program has developed the next generation ion propulsion system to enhance and enable Discovery, New Frontiers, and Flagship-type NASA missions. The PM1 thruster exhibits operational behavior consistent with its predecessors, the engineering model thrusters, with substantial mass savings, enhanced thermal margins, and design improvements for environmental testing compliance. The dry mass of PM1 is 12.7 kg. Modifications made in the thruster design have resulted in improved performance and operating margins, as anticipated. PM1 beginning-of-life performance satisfies all of the electric propulsion thruster mission-derived technical requirements. It demonstrates a wide range of throttleability by processing input power levels from 0.5 to 6.9 kW. At 6.9 kW, the PM1 thruster demonstrates a specific impulse of 4190 s, 237 mN of thrust, and a thrust efficiency of 0.71. The flat beam profile (flatness parameters vary from 0.66 at low power to 0.88 at full power) and advanced ion optics reduce localized accelerator grid erosion and increase margins for electron backstreaming, impingement-limited voltage, and screen grid ion transparency. The thruster throughput capability is predicted to exceed 750 kg of xenon, an equivalent of 36,500 hr of continuous operation at the full-power operating condition.

  10. Work performance evaluation using the exercising rat model

    SciTech Connect

    Stavert, D.M.; Lehnert, B.E.

    1987-01-01

A treadmill-metabolic chamber system and a stress testing protocol have been developed to evaluate aerobic work performance on exercising rats that have inhaled toxic substances. The chamber with an enclosed treadmill provides the means to measure the physiologic status of rats during maximal work intensities in terms of O2 consumption (VO2) and CO2 production (VCO2). The metabolic chamber can also accommodate instrumented rats for more detailed analyses of their cardiopulmonary status, e.g., ECG, cardiac output, arterial blood gases and pH, and arterial and venous blood pressures. For such studies, an arterial/venous catheter preparation is required. Because of the severe metabolic alterations after such surgery, a post surgical recovery strategy using hyperalimentation was developed to ensure maximal performance of instrumented animals during stress testing. Actual work performance studies are conducted using an eight minute stress test protocol in which the rat is subjected to increasing external work. The metabolic state of the animal is measured from resting levels to maximum oxygen consumption (VO2max). VO2max has been shown to be reproducible in individual rats and is a sensitive indicator of oxidant gas-induced pulmonary damage. 3 tabs.

  11. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    PubMed

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-01

    Land surface emissivity is a crucial parameter in the surface status monitoring. This study aims at the evaluation of four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could well represent directional emissivity with an error less than 0.002, and was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside canopy, which improved the performance of the gap-frequency-based models.

  12. Validation of Ultrafilter Performance Model Based on Systematic Simulant Evaluation

    SciTech Connect

    Russell, Renee L.; Billing, Justin M.; Smith, Harry D.; Peterson, Reid A.

    2009-11-18

    Because of limited availability of test data with actual Hanford tank waste samples, a method was developed to estimate expected filtration performance based on physical characterization data for the Hanford Tank Waste Treatment and Immobilization Plant. A test with simulated waste was analyzed to demonstrate that filtration of this class of waste is consistent with a concentration polarization model. Subsequently, filtration data from actual waste samples were analyzed to demonstrate that centrifuged solids concentrations provide a reasonable estimate of the limiting concentration for filtration.

  13. visCOS: An R-package to evaluate model performance of hydrological models

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Wesemann, Johannes; Schulz, Karsten

    2016-04-01

The evaluation of model performance is a central part of (hydrological) modelling. Much attention has been given to the development of evaluation criteria and diagnostic frameworks (Klemeš, 1986; Gupta et al., 2008; among many others). Nevertheless, many applications exist for which objective functions do not yet provide satisfying summaries. Thus, the necessity to visualize results arises in order to explore a wider range of model capacities, be it strengths or deficiencies. Visualizations are usually devised for specific projects and these efforts are often not distributed to a broader community (e.g. via open source software packages). Hence, the opportunity to explicitly discuss a state-of-the-art presentation technique is often missed. We therefore present a comprehensive R-package for evaluating model performance by visualizing and exploring different aspects of hydrological time-series. The presented package comprises a set of useful plots and visualization methods, which complement existing packages, such as hydroGOF (Zambrano-Bigiarini et al., 2012). It is derived from practical applications of the hydrological models COSERO and COSEROreg (Kling et al., 2014). visCOS, providing an interface in R, represents an easy-to-use software package for visualizing and assessing model performance and can be implemented in the process of model calibration or model development. The package provides functions to load hydrological data into R, clean the data, process, visualize, explore and finally save the results in a consistent way. Together with an interactive zoom function of the time series, an online calculation of the objective functions for variable time-windows is included. Common hydrological objective functions, such as the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency, can also be evaluated and visualized in different ways for defined sub-periods like hydrological years or seasonal sections. Many hydrologists use long-term water-balances as a
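
    visCOS itself is an R package; the following is only a language-neutral sketch (written in Python) of the two objective functions named above, the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency (2009 formulation), evaluated separately over sub-periods such as hydrological years. Data are synthetic.

```python
# Language-neutral sketch of the two objective functions named above,
# evaluated over sub-periods; the actual package is written in R.
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling-Gupta Efficiency (Gupta et al., 2009 formulation)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()       # variability ratio
    beta = sim.mean() / obs.mean()      # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def by_period(sim, obs, period_ids, fn):
    """Evaluate an objective function separately for each sub-period label
    (e.g. hydrological year or season)."""
    return {p: fn(sim[period_ids == p], obs[period_ids == p])
            for p in np.unique(period_ids)}

# Hypothetical two-year daily series
rng = np.random.default_rng(42)
obs = rng.gamma(2.0, 3.0, 730)
sim = obs + rng.normal(0, 1.5, 730)
years = np.repeat([2001, 2002], 365)
print(by_period(sim, obs, years, nse), by_period(sim, obs, years, kge))
```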

  14. Evaluation of the Service Review Model with Performance Scorecards

    ERIC Educational Resources Information Center

    Szabo, Thomas G.; Williams, W. Larry; Rafacz, Sharlet D.; Newsome, William; Lydon, Christina A.

    2012-01-01

    The current study combined a management technique termed "Service Review" with performance scorecards to enhance staff and consumer behavior in a human service setting consisting of 11 supervisors and 56 front-line staff working with 9 adult consumers with challenging behaviors. Results of our intervention showed that service review and scorecards…

  15. Formal Implementation of a Performance Evaluation Model for the Face Recognition System

    PubMed Central

    Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young

    2008-01-01

Due to usability features, practical applications, and its lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be admitted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for the biometric recognition system, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed evaluations objectively by providing guidelines for the design and implementation of a performance evaluation system, formalizing the performance test process. PMID:18317524

  16. A model for evaluating the social performance of construction waste management

    SciTech Connect

    Yuan Hongping

    2012-06-15

Highlights: ► Scant attention is paid to the social performance of construction waste management (CWM). ► We develop a model for assessing the social performance of CWM. ► With the model, the social performance of CWM can be quantitatively simulated. - Abstract: Existing literature shows that much research effort has been devoted to the economic performance of construction waste management (CWM), but less attention has been paid to investigation of the social performance of CWM. This study therefore attempts to develop a model for quantitatively evaluating the social performance of CWM by using a system dynamics (SD) approach. Firstly, major variables affecting the social performance of CWM are identified and a holistic system for assessing the social performance of CWM is formulated in line with the feedback relationships underlying these variables. The developed system is then converted into an SD model through the software iThink. An empirical case study is finally conducted to demonstrate application of the model. Results of model validation indicate that the model is robust and reasonable for reflecting the situation of the real system under study. Findings of the case study offer helpful insights into effectively promoting the social performance of CWM for the project investigated. Furthermore, the model exhibits great potential to function as an experimental platform for dynamically evaluating the effects of management measures on improving the social performance of CWM of construction projects.

  17. The third phase of AQMEII: evaluation strategy and multi-model performance analysis

    NASA Astrophysics Data System (ADS)

    Solazzo, Efisio; Galmarini, Stefano; Hogrefe, Christian

    2016-04-01

    AQMEII (Air Quality Model Evaluation International Initiative) is an extraordinary effort promoting policy-relevant research on regional air quality model evaluation across the European and North American atmospheric modelling communities, providing the ideal platform for advancing the evaluation of air quality models at the regional scale. This study presents a comprehensive overview of the multi-model evaluation results achieved in the ongoing third phase of AQMEII. Sixteen regional-scale chemistry transport modelling systems have simulated the air quality for the year 2010 over the two continental areas of Europe and North America, providing pollutant concentration values at the surface as well as vertical profiles. The performance of the modelling systems have been evaluated against observational data for ozone, CO, NO2, PM10, PM2.5, wind speed and temperature, offering a valuable opportunity to learn about the models' behaviour by performing model-to-model and model-to-measurement comparisons. We make use of the error apportionment strategy, a novel approach to model evaluation developed within AQMEII that combines elements of operational and diagnostic evaluation. This method apportions the model error to its spectral components, thereby identifying the space/timescale at which it is most relevant and, when possible, to infer which process/es could have generated it. We investigate the deviation between modelled and observed time series of pollutants through a revised formulation for breaking down the mean square error into bias, variance, and the minimum achievable MSE (mMSE). Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day). Compared to a conventional operational evaluation approach, the new method allows for a more precise identification of where each portion of the model error predominantly occurs. Information about the nature of

  18. Validating the ACE Model for Evaluating Student Performance Using a Teaching-Learning Process Based on Computational Modeling Systems

    ERIC Educational Resources Information Center

    Louzada, Alexandre Neves; Elia, Marcos da Fonseca; Sampaio, Fábio Ferrentini; Vidal, Andre Luiz Pestana

    2014-01-01

    The aim of this work is to adapt and test, in a Brazilian public school, the ACE model proposed by Borkulo for evaluating student performance as a teaching-learning process based on computational modeling systems. The ACE model is based on different types of reasoning involving three dimensions. In addition to adapting the model and introducing…

  19. Optical modeling and physical performances evaluations for the JT-60SA ECRF antenna

    SciTech Connect

    Platania, P. Figini, L.; Farina, D.; Micheletti, D.; Moro, A.; Sozzi, C.; Isayama, A.; Kobayashi, T.; Moriyama, S.

    2015-12-10

The purpose of this work is the optical modeling and physical performance evaluation of the JT-60SA ECRF launcher system. The beams have been simulated with the electromagnetic code GRASP® and used as input for ECCD calculations performed with the beam tracing code GRAY, capable of modeling propagation, absorption and current drive of an EC Gaussian beam with general astigmatism. Full details of the optical analysis have been taken into account to model the launched beams. Inductive and advanced reference scenarios have been analysed for physical evaluations in the full poloidal and toroidal steering ranges for two slightly different layouts of the launcher system.

  20. A mixed integer bi-level DEA model for bank branch performance evaluation by Stackelberg approach

    NASA Astrophysics Data System (ADS)

    Shafiee, Morteza; Lotfi, Farhad Hosseinzadeh; Saleh, Hilda; Ghaderi, Mehdi

    2016-11-01

One of the most complicated decision making problems for managers is the evaluation of bank performance, which involves various criteria. There are many studies about bank efficiency evaluation by network DEA in the literature. These studies do not focus on multi-level networks. Wu (Eur J Oper Res 207:856-864, 2010) proposed a bi-level structure for cost efficiency for the first time. In this model, multi-level programming and cost efficiency were used. He used nonlinear programming to solve the model. In this paper, we have focused on the multi-level structure and proposed a bi-level DEA model. We then used linear programming to solve our model. Moreover, we significantly improved the way the optimum solution is reached, in comparison with the work by Wu (2010), by converting the NP-hard nonlinear program into a mixed integer linear program. This study uses a bi-level programming data envelopment analysis model that embodies internal structure with Stackelberg-game relationships to evaluate the performance of a banking chain. The perspective of decentralized decisions is taken in this paper to cope with complex interactions in the banking chain. The results derived from bi-level programming DEA can provide valuable insights and detailed information for managers to help them evaluate the performance of the banking chain as a whole using Stackelberg-game relationships. Finally, this model was applied to an Iranian bank to evaluate cost efficiency.
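
    The sketch below is not the paper's bi-level Stackelberg formulation; it is a minimal single-level, input-oriented CCR envelopment model solved as a linear program, intended only to illustrate the kind of DEA efficiency score the bi-level model builds on. Branch data are hypothetical.

```python
# Minimal single-level, input-oriented CCR DEA envelopment model solved as a
# linear programme (NOT the paper's bi-level Stackelberg DEA). Data are
# hypothetical bank branches.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Decision vars: [theta, lambdas]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                    # minimise theta
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    for i in range(m):                            # sum_j lam_j x_ij <= theta * x_io
        A_ub[i, 0] = -X[o, i]
        A_ub[i, 1:] = X[:, i]
    for r in range(s):                            # sum_j lam_j y_rj >= y_ro
        A_ub[m + r, 1:] = -Y[:, r]
        b_ub[m + r] = -Y[o, r]
    bounds = [(0, None)] * (1 + n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Hypothetical branches: inputs = (staff, operating cost), outputs = (loans, deposits)
X = np.array([[8, 120.0], [6, 90.0], [10, 150.0], [7, 100.0]])
Y = np.array([[50, 300.0], [45, 280.0], [60, 310.0], [40, 260.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(len(X))])
```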

  1. Modeling and performance evaluation of flexible manufacturing systems using Petri nets

    SciTech Connect

    Callotta, M.P.; Cimenez, C.; Tazza, M.

    1996-12-31

A timed Petri net approach is used to model resource allocation-utilization-release patterns for performance evaluation. First, simple resource utilization sequences are derived from a directed graph representing the process plan of parts. Second, the place-transition sequences are connected by introducing places whose marking models the resources needed to perform the manufacturing operation indicated in the process plan. Time is introduced as a permanence time of tokens at the place-transition sequence, modeling the utilization time of resources. The corresponding model leads to a simultaneous resource possession problem. Finally, flow equations for the description of the quantitative behavior of the resulting timed Petri net are presented. A major conclusion of the paper is that performance evaluation can be adequately abstracted and analytically solved, in a simple way, even in the presence of complicating factors like resource sharing and routing flexibility in process plans.

  2. Assessing the quality of classification models: Performance measures and evaluation procedures

    NASA Astrophysics Data System (ADS)

    Cichosz, Paweł

    2011-06-01

This article systematically reviews techniques used for the evaluation of classification models and provides guidelines for their proper application. This includes performance measures assessing the model's performance on a particular dataset and evaluation procedures applying the former to appropriately selected data subsets to produce estimates of their expected values on new data. Their common purpose is to assess model generalization capabilities, which are crucial for judging the applicability and usefulness of both classification and any other data mining models. The review presented in this article is expected to be sufficiently in-depth and complete for most practical needs, while remaining clear and easy to follow with little prior knowledge. Issues that receive special attention include incorporating instance weights into performance measures, combining the same set of evaluation procedures with arbitrary performance measures, and avoiding pitfalls related to separating the data subsets used for evaluation from those used for model creation. With the classification task unquestionably being one of the central data mining tasks, and with the vastly increasing number of data mining applications not only in business but also in engineering and research, this review is expected to be interesting and useful for a wide audience. All presented techniques are accompanied by simple R language implementations and usage examples, which, although created mostly for illustration purposes, can actually be used in practice.
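
    The article's own examples are in R; the following hedged Python sketch illustrates two of the ideas discussed above: an instance-weighted performance measure and a k-fold cross-validation procedure that keeps the evaluation subset strictly separate from the data used to build the model. The toy classifier and data are invented.

```python
# Hedged sketch of two ideas discussed above: an instance-weighted performance
# measure and k-fold cross-validation with a strict train/evaluation split.
import numpy as np

def weighted_accuracy(y_true, y_pred, w):
    y_true, y_pred, w = map(np.asarray, (y_true, y_pred, w))
    return np.sum(w * (y_true == y_pred)) / np.sum(w)

def k_fold_estimate(X, y, w, train_fn, predict_fn, k=5, seed=0):
    """Estimate expected weighted accuracy on new data: each fold is predicted
    by a model fitted only on the remaining folds."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for test in folds:
        train = np.setdiff1d(idx, test)
        model = train_fn(X[train], y[train])
        scores.append(weighted_accuracy(y[test], predict_fn(model, X[test]), w[test]))
    return float(np.mean(scores))

# Hypothetical "model": a majority-class classifier
train_fn = lambda X, y: np.bincount(y).argmax()
predict_fn = lambda model, X: np.full(len(X), model)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3)); y = (X[:, 0] > 0).astype(int)
w = np.ones(100)
print(k_fold_estimate(X, y, w, train_fn, predict_fn))
```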

  3. Important Physiological Parameters and Physical Activity Data for Evaluating Exposure Modeling Performance: a Synthesis

    EPA Science Inventory

    The purpose of this report is to develop a database of physiological parameters needed for understanding and evaluating performance of the APEX and SHEDS exposure/intake dose rate model used by the Environmental Protection Agency (EPA) as part of its regulatory activities. The A...

  4. Stochastic performance modeling and evaluation of obstacle detectability with imaging range sensors

    NASA Technical Reports Server (NTRS)

    Matthies, Larry; Grandjean, Pierrick

    1993-01-01

    Statistical modeling and evaluation of the performance of obstacle detection systems for Unmanned Ground Vehicles (UGVs) is essential for the design, evaluation, and comparison of sensor systems. In this report, we address this issue for imaging range sensors by dividing the evaluation problem into two levels: quality of the range data itself and quality of the obstacle detection algorithms applied to the range data. We review existing models of the quality of range data from stereo vision and AM-CW LADAR, then use these to derive a new model for the quality of a simple obstacle detection algorithm. This model predicts the probability of detecting obstacles and the probability of false alarms, as a function of the size and distance of the obstacle, the resolution of the sensor, and the level of noise in the range data. We evaluate these models experimentally using range data from stereo image pairs of a gravel road with known obstacles at several distances. The results show that the approach is a promising tool for predicting and evaluating the performance of obstacle detection with imaging range sensors.
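
    The report's actual statistical model is not reproduced here; the following is a generic, hedged sketch of how detection and false-alarm probabilities can be written as functions of obstacle height, a range-dependent noise level, and a detection threshold under a Gaussian noise assumption. All parameter values and the noise model are assumptions.

```python
# Generic sketch (not the report's actual model): with Gaussian noise on the
# estimated obstacle height, detection and false-alarm probabilities follow
# from the noise level (assumed to grow with range and shrink with more pixels
# on target) and the height threshold used by the detector.
import math

def height_sigma(range_m, sigma0=0.02, range_scale=10.0, pixels_on_target=4):
    """Assumed std. dev. of the estimated obstacle height: grows quadratically
    with range (stereo-like behaviour) and shrinks with more pixels averaged."""
    return sigma0 * (range_m / range_scale) ** 2 / math.sqrt(pixels_on_target)

def p_detect(obstacle_h, range_m, threshold=0.15, **kw):
    """P(estimated height exceeds the threshold | true height obstacle_h)."""
    sigma = height_sigma(range_m, **kw)
    return 1.0 - 0.5 * (1.0 + math.erf((threshold - obstacle_h) / (sigma * math.sqrt(2))))

def p_false_alarm(range_m, threshold=0.15, **kw):
    """Same quantity evaluated on flat ground (true height zero)."""
    return p_detect(0.0, range_m, threshold, **kw)

for r in (10, 20, 40):
    print(r, round(p_detect(0.2, r), 3), round(p_false_alarm(r), 4))
```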

  5. Source term model evaluations for the low-level waste facility performance assessment

    SciTech Connect

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  6. Evaluating Nextgen Closely Spaced Parallel Operations Concepts with Validated Human Performance Models: Flight Deck Guidelines

    NASA Technical Reports Server (NTRS)

    Hooey, Becky Lee; Gore, Brian Francis; Mahlstedt, Eric; Foyle, David C.

    2013-01-01

    The objectives of the current research were to develop valid human performance models (HPMs) of approach and land operations; use these models to evaluate the impact of NextGen Closely Spaced Parallel Operations (CSPO) on pilot performance; and draw conclusions regarding flight deck display design and pilot-ATC roles and responsibilities for NextGen CSPO concepts. This document presents guidelines and implications for flight deck display designs and candidate roles and responsibilities. A companion document (Gore, Hooey, Mahlstedt, & Foyle, 2013) provides complete scenario descriptions and results including predictions of pilot workload, visual attention and time to detect off-nominal events.

  7. Towards a benchmark simulation model for plant-wide control strategy performance evaluation of WWTPs.

    PubMed

    Jeppsson, U; Rosen, C; Alex, J; Copp, J; Gernaey, K V; Pons, M N; Vanrolleghem, P A

    2006-01-01

The COST/IWA benchmark simulation model has been available for seven years. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the benchmark has resulted in more than 100 publications, not only in Europe but also worldwide, demonstrates the interest in such a tool within the research community. In this paper, an extension of the benchmark simulation model no. 1 (BSM1) is proposed. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both pre-treatment of wastewater as well as the processes describing sludge treatment. The motivation for the extension is the increasing interest and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In the paper, the extended plant layout is proposed and the newly suggested process models are described briefly. Models for influent file design, the benchmarking procedure and the evaluation criteria are also discussed. Finally, some important remaining topics, for which consensus is required, are identified.

  8. Wind farms providing secondary frequency regulation: Evaluating the performance of model-based receding horizon control

    NASA Astrophysics Data System (ADS)

    Shapiro, Carl R.; Meyers, Johan; Meneveau, Charles; Gayme, Dennice F.

    2016-09-01

    We investigate the use of wind farms to provide secondary frequency regulation for a power grid. Our approach uses model-based receding horizon control of a wind farm that is tested using a large eddy simulation (LES) framework. In order to enable real-time implementation, the control actions are computed based on a time-varying one-dimensional wake model. This model describes wake advection and interactions, both of which play an important role in wind farm power production. This controller is implemented in an LES model of an 84-turbine wind farm represented by actuator disk turbine models. Differences between the velocities at each turbine predicted by the wake model and measured in LES are used for closed-loop feedback. The controller is tested on two types of regulation signals, “RegA” and “RegD”, obtained from PJM, an independent system operator in the eastern United States. Composite performance scores, which are used by PJM to qualify plants for regulation, are used to evaluate the performance of the controlled wind farm. Our results demonstrate that the controlled wind farm consistently performs well, passing the qualification threshold for all fast-acting RegD signals. For the RegA signal, which changes over slower time scales, the controlled wind farm's average performance surpasses the threshold, but further work is needed to enable the controlled system to achieve qualifying performance all of the time.
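
    PJM's composite score combines accuracy (correlation), delay, and precision components; the exact tariff formulas are not reproduced here. The sketch below is a simplified, hypothetical stand-in that averages three such components for a sampled regulation signal and a plant response.

        import numpy as np

        # Simplified stand-in for a PJM-style composite regulation score: an
        # accuracy term (best correlation over candidate delays), a delay term,
        # and a precision term, averaged equally. Illustrative only.
        def composite_score(signal, response, dt_s=2.0, max_delay_s=300.0):
            signal = np.asarray(signal, float)
            response = np.asarray(response, float)
            n_max = int(max_delay_s / dt_s)      # signal must be longer than this
            corrs = []
            for d in range(n_max + 1):
                a = signal[:len(signal) - d] if d else signal
                corrs.append(np.corrcoef(a, response[d:])[0, 1])
            accuracy = float(np.nanmax(corrs))
            delay_score = 1.0 - float(np.nanargmax(corrs)) * dt_s / max_delay_s
            precision = 1.0 - np.mean(np.abs(response - signal)) / np.mean(np.abs(signal))
            return (accuracy + delay_score + precision) / 3.0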

  9. An efficiency data envelopment analysis model reinforced by classification and regression tree for hospital performance evaluation.

    PubMed

    Chuang, Chun-Ling; Chang, Peng-Chan; Lin, Rong-Ho

    2011-10-01

    As changes in the medical environment and policies on national health insurance coverage have triggered tremendous impacts on the business performance and financial management of medical institutions, effective management becomes increasingly crucial for hospitals to enhance competitiveness and to strive for sustainable development. The study accordingly aims at evaluating hospital operational efficiency for better resource allocation and cost effectiveness. Several data envelopment analysis (DEA)-based models were first compared, and the DEA-artificial neural network (ANN) model was identified as more capable than the DEA and DEA-assurance region (AR) models of measuring operational efficiency and recognizing the best-performing hospital. The classification and regression tree (CART) efficiency model was then utilized to extract rules for improving resource allocation of medical institutions. PMID:20878210
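
    Of the techniques combined in the study, the DEA building block is the easiest to show compactly. The sketch below solves the standard input-oriented CCR envelopment linear program for one decision-making unit; the hospital data are invented for illustration, and the ANN and CART stages are not shown.

        import numpy as np
        from scipy.optimize import linprog

        # Input-oriented CCR DEA efficiency (envelopment form) for unit k.
        # X: (m inputs x n units), Y: (s outputs x n units).
        def ccr_efficiency(X, Y, k):
            m, n = X.shape
            s = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]                 # minimize theta
            A_in = np.hstack([-X[:, [k]], X])           # sum_j l_j*x_ij <= theta*x_ik
            A_out = np.hstack([np.zeros((s, 1)), -Y])   # sum_j l_j*y_rj >= y_rk
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[:, k]],
                          bounds=[(0, None)] * (n + 1), method="highs")
            return res.fun                              # 1.0 means technically efficient

        # Toy data: 2 inputs (beds, staff) and 1 output (treated cases), 3 hospitals.
        X = np.array([[100.0, 120.0, 90.0], [200.0, 260.0, 180.0]])
        Y = np.array([[500.0, 540.0, 480.0]])
        print([round(ccr_efficiency(X, Y, k), 3) for k in range(3)])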

  10. Evaluating Performance of Components

    NASA Technical Reports Server (NTRS)

    Katz, Daniel; Tisdale, Edwin; Norton, Charles

    2004-01-01

    Parallel Component Performance Benchmarks is a computer program developed to aid the evaluation of the Common Component Architecture (CCA) - a software architecture, based on a component model, that was conceived to foster high-performance computing, including parallel computing. More specifically, this program compares the performances (principally by measuring computing times) of componentized versus conventional versions of the Parallel Pyramid 2D Adaptive Mesh Refinement library - a software library that is used to generate computational meshes for solving physical problems and that is typical of software libraries in use at NASA's Jet Propulsion Laboratory.

  11. Evaluating stream health based environmental justice model performance at different spatial scales

    NASA Astrophysics Data System (ADS)

    Daneshvar, Fariborz; Nejadhashemi, A. Pouyan; Zhang, Zhen; Herman, Matthew R.; Shortridge, Ashton; Marquart-Pyatt, Sandra

    2016-07-01

    This study evaluated the effects of spatial resolution on environmental justice analysis concerning stream health. The Saginaw River Basin in Michigan was selected since it is an area of concern in the Great Lakes basin. Three Bayesian Conditional Autoregressive (CAR) models (ordinary regression, weighted regression and spatial) were developed for each stream health measure based on 17 socioeconomic and physiographical variables at three census levels. For all stream health measures, spatial models had better performance compared to the two non-spatial ones at the census tract and block group levels. Meanwhile, no spatial dependency was found at the county level. Multilevel Bayesian CAR models were also developed to understand the spatial dependency at the three levels. Results showed that considering level interactions improved the models' prediction. Residual plots also showed that models developed at the block group and census tract levels (in contrast to county-level models) are able to capture spatial variations.

  12. MODELING AND PERFORMANCE EVALUATION FOR AVIATION SECURITY CARGO INSPECTION QUEUING SYSTEM

    SciTech Connect

    Allgood, Glenn O; Olama, Mohammed M; Rose, Terri A; Brumback, Daryl L

    2009-01-01

    Beginning in 2010, the U.S. will require that all cargo loaded in passenger aircraft be inspected. This will require more efficient processing of cargo and will have a significant impact on the inspection protocols and business practices of government agencies and the airlines. In this paper, we conduct a performance evaluation study for an aviation security cargo inspection queuing system for material flow and accountability. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered such as system capacity, residual capacity, and throughput. These metrics are performance indicators of the system's ability to service current needs and response capacity to additional requests. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures will reduce the overall cost and shipping delays associated with the new inspection requirements.
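
    The ORNL queuing model itself is not reproduced in the abstract; as a hypothetical single-stage illustration of the measures named above, the sketch below computes utilization, throughput, residual capacity, and mean wait for an M/M/c bank of inspection lanes (all rates are invented).

        from math import factorial

        # M/M/c inspection-lane metrics via the Erlang-C formula.
        def mmc_metrics(arrival_rate, service_rate, servers):
            a = arrival_rate / service_rate              # offered load (Erlangs)
            rho = a / servers                            # per-lane utilization
            if rho >= 1.0:
                raise ValueError("unstable: offered load exceeds capacity")
            summation = sum(a**k / factorial(k) for k in range(servers))
            tail = a**servers / (factorial(servers) * (1.0 - rho))
            p_wait = tail / (summation + tail)           # P(an item must queue)
            lq = p_wait * rho / (1.0 - rho)              # mean queue length
            return {"utilization": rho,
                    "throughput": arrival_rate,          # all items eventually served
                    "residual_capacity": servers * service_rate - arrival_rate,
                    "mean_wait": lq / arrival_rate}

        print(mmc_metrics(arrival_rate=90.0, service_rate=20.0, servers=6))  # per hour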

  13. Systematic Land-Surface-Model Performance Evaluation on different time scales

    NASA Astrophysics Data System (ADS)

    Mahecha, M. D.; Jung, M.; Reichstein, M.; Beer, C.; Braakhekke, M.; Carvalhais, N.; Lange, H.; Lasslop, G.; Le Maire, G.; Seneviratne, S. I.; Vetter, M.

    2008-12-01

    Keeping track of the space-time evolution of CO2 and H2O fluxes between the terrestrial biosphere and atmosphere is essential to our understanding of current climate. Monitoring fluxes at site level is one option to characterize the temporal development of ecosystem-atmosphere interactions. Nevertheless, many aspects of ecosystem-atmosphere fluxes become meaningful only when interpreted in time over larger geographical regions. Empirical and process-based models play a key role in spatial and temporal upscaling exercises. In this context, comparative model performance evaluations at site level are indispensable. We present a model evaluation scheme which investigates the model-data agreement separately on different time scales. Observed and modeled time series were decomposed by essentially nonparametric techniques into subsignals (time scales) of characteristic fluctuations. By evaluating the extracted subsignals of observed and modeled C fluxes (gross and net ecosystem exchange, GEE and NEE, and terrestrial ecosystem respiration, TER) separately, we obtain scale-dependent performances for the different evaluation measures. Our diagnostic model comparison allows uncovering time scales of model-data agreement and fundamental mismatch. We focus on the systematic evaluation of three land-surface models: Biome-BGC, ORCHIDEE, and LPJ. For the first time all models were driven by consistent site meteorology and compared to respective Eddy-Covariance flux observations. The results show that correct net C fluxes may result from systematic (simultaneous) biases in TER and GEE on specific time scales of variation. We localize significant model-data mismatches of the annual-seasonal cycles in time and illustrate the recurrence characteristics of such problems. For example, LPJ underestimates GEE during winter months and overestimates it in early summer at specific sites. In contrast, ORCHIDEE overestimates the flux from July to September at these sites. Finally
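
    The decomposition used in the study is essentially nonparametric; as a stand-in, the sketch below band-passes observed and modeled daily flux series with nested moving averages and scores each scale separately. The window lengths and the smoothing choice are assumptions for illustration, not the authors' method.

        import numpy as np
        import pandas as pd

        # Split a daily series into band-passed "subsignals" between successive
        # smoothing windows (a simple stand-in for the study's decomposition).
        def subsignals(series, windows=(7, 30, 90, 365)):
            s = pd.Series(np.asarray(series, float))
            smooth = [s] + [s.rolling(w, center=True, min_periods=1).mean() for w in windows]
            bands = [smooth[i] - smooth[i + 1] for i in range(len(smooth) - 1)]
            bands.append(smooth[-1])                     # slowest component
            return bands

        # Scale-dependent agreement between observed and modeled fluxes.
        def scale_scores(obs, mod):
            scores = {}
            for i, (o, m) in enumerate(zip(subsignals(obs), subsignals(mod))):
                r = float(np.corrcoef(o, m)[0, 1])
                rmse = float(np.sqrt(np.mean((o - m) ** 2)))
                scores[f"scale_{i}"] = {"r": round(r, 3), "rmse": round(rmse, 3)}
            return scores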

  14. The Third Phase of AQMEII: Evaluation Strategy and Multi-Model Performance Analysis

    EPA Science Inventory

    AQMEII (Air Quality Model Evaluation International Initiative) is an extraordinary effort promoting policy-relevant research on regional air quality model evaluation across the European and North American atmospheric modelling communities, providing the ideal platform for advanci...

  15. Seasonal versus Episodic Performance Evaluation for an Eulerian Photochemical Air Quality Model

    SciTech Connect

    Jin, Ling; Brown, Nancy J.; Harley, Robert A.; Bao, Jian-Wen; Michelson, Sara A; Wilczak, James M

    2010-04-16

    This study presents a detailed evaluation of the seasonal and episodic performance of the Community Multiscale Air Quality (CMAQ) modeling system applied to simulate air quality at a fine grid spacing (4 km horizontal resolution) in central California, where ozone air pollution problems are severe. A rich aerometric database collected during the summer 2000 Central California Ozone Study (CCOS) is used to prepare model inputs and to evaluate meteorological simulations and chemical outputs. We examine both temporal and spatial behaviors of ozone predictions. We highlight synoptically driven high-ozone events (exemplified by the four intensive operating periods (IOPs)) for evaluating both meteorological inputs and chemical outputs (ozone and its precursors) and compare them to the summer average. For most of the summer days, cross-domain normalized gross errors are less than 25% for modeled hourly ozone, and normalized biases are between ±15% for both hourly and peak (1 h and 8 h) ozone. The domain-wide aggregated metrics indicate similar performance between the IOPs and the whole summer with respect to predicted ozone and its precursors. Episode-to-episode differences in ozone predictions are more pronounced at a subregional level. The model performs consistently better in the San Joaquin Valley than other air basins, and episodic ozone predictions there are similar to the summer average. Poorer model performance (normalized peak ozone biases <-15% or >15%) is found in the Sacramento Valley and the Bay Area and is most noticeable in episodes that are subject to the largest uncertainties in meteorological fields (wind directions in the Sacramento Valley and timing and strength of onshore flow in the Bay Area) within the boundary layer.
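
    The normalized bias and gross error statistics quoted above can be computed as below; the pairing of model and observation values and the low-ozone cutoff are assumptions for illustration, not taken from the paper.

        import numpy as np

        # Normalized bias (NB) and normalized gross error (NGE) for hourly ozone,
        # applied only where observations exceed a cutoff (assumed 40 ppb here).
        def ozone_stats(obs_ppb, mod_ppb, cutoff_ppb=40.0):
            obs = np.asarray(obs_ppb, float)
            mod = np.asarray(mod_ppb, float)
            keep = obs >= cutoff_ppb
            obs, mod = obs[keep], mod[keep]
            nb = 100.0 * np.mean((mod - obs) / obs)
            nge = 100.0 * np.mean(np.abs(mod - obs) / obs)
            return nb, nge

        nb, nge = ozone_stats([55, 80, 95, 120], [60, 75, 110, 115])
        print(f"NB = {nb:.1f}%, NGE = {nge:.1f}%")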

  16. Methodologies for evaluating performance and assessing uncertainty of atmospheric dispersion models

    NASA Astrophysics Data System (ADS)

    Chang, Joseph C.

    This thesis describes methodologies to evaluate the performance and to assess the uncertainty of atmospheric dispersion models, tools that predict the fate of gases and aerosols upon their release into the atmosphere. Because of the large economic and public-health impacts often associated with the use of the dispersion model results, these models should be properly evaluated, and their uncertainty should be properly accounted for and understood. The CALPUFF, HPAC, and VLSTRACK dispersion modeling systems were applied to the Dipole Pride (DP26) field data (˜20 km in scale), in order to demonstrate the evaluation and uncertainty assessment methodologies. Dispersion model performance was found to be strongly dependent on the wind models used to generate gridded wind fields from observed station data. This is because, despite the fact that the test site was a flat area, the observed surface wind fields still showed considerable spatial variability, partly because of the surrounding mountains. It was found that the two components of prediction uncertainty (input uncertainty and inherent stochastic variability) were comparable for the DP26 field data, with variability more important than uncertainty closer to the source, and less important farther away from the source. Therefore, reducing data errors for input meteorology may not necessarily increase model accuracy due to random turbulence. DP26 was a research-grade field experiment, where the source, meteorological, and concentration data were all well-measured. Another typical application of dispersion modeling is a forensic study where the data are usually quite scarce. An example would be the modeling of the alleged releases of chemical warfare agents during the 1991 Persian Gulf War, where the source data had to rely on intelligence reports, and where Iraq had not reported weather data to the World Meteorological Organization since the 1981 Iran-Iraq war. Therefore the meteorological fields inside Iraq must be estimated by models such as prognostic mesoscale meteorological models, based on

  17. Engineering Process Model for High-Temperature Electrolysis System Performance Evaluation

    SciTech Connect

    Carl M. Stoots; James E. O'Brien; Michael G. McKellar; Grant L. Hawkes

    2005-11-01

    In order to evaluate the potential hydrogen production performance of large-scale High-Temperature Electrolysis (HTE) operations, we have developed an engineering process model at INL using the commercial systems-analysis code HYSYS. Using this code, a detailed process flowsheet has been defined that includes all of the components that would be present in an actual plant such as pumps, compressors, heat exchangers, turbines, and the electrolyzer. Since the electrolyzer is not a standard HYSYS component, a custom one-dimensional electrolyzer model was developed for incorporation into the overall HYSYS process flowsheet. This electrolyzer model allows for the determination of the operating voltage, gas outlet temperatures, and electrolyzer efficiency for any specified inlet gas flow rates, current density, cell active area, and external heat loss or gain. The one-dimensional electrolyzer model was validated by comparison with results obtained from a fully 3-D computational fluid dynamics model developed using FLUENT. This report provides details on the one-dimensional electrolyzer model, the HYSYS process model for a 300 MW HTE plant, and some representative results of parametric studies performed using the HYSYS process model.
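
    A zero-dimensional sketch of the kind of electrolyzer operating-point calculation described above is given below. The open-cell-potential fit, the area-specific resistance, and the thermal-neutral voltage are rough assumed values; the INL one-dimensional model resolves gas composition and temperature along the cell and sits inside a full HYSYS flowsheet.

        F = 96485.0  # Faraday constant, C/mol

        # Rough 0-D operating point for a high-temperature steam electrolyzer stack.
        def electrolyzer_operating_point(current_density_A_cm2, cell_area_cm2, n_cells,
                                         temperature_K=1100.0, asr_ohm_cm2=1.25):
            e_open = 1.253 - 2.4516e-4 * temperature_K       # assumed linear fit, volts
            v_cell = e_open + current_density_A_cm2 * asr_ohm_cm2
            current = current_density_A_cm2 * cell_area_cm2  # A per cell
            power_W = v_cell * current * n_cells             # electrical input
            h2_mol_s = n_cells * current / (2.0 * F)         # Faraday's law
            efficiency = 1.287 / v_cell                      # vs. thermal-neutral voltage
            return v_cell, power_W, h2_mol_s, efficiency

        print(electrolyzer_operating_point(0.25, cell_area_cm2=64.0, n_cells=500))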

  18. An evaluation of the performance of the soil temperature simulation algorithms used in the PRZM model.

    PubMed

    Tsiros, I X; Dimopoulos, I F

    2007-04-01

    Soil temperature simulation is an important component in environmental modeling since it is involved in several aspects of pollutant transport and fate. This paper deals with the performance of the soil temperature simulation algorithms of the well-known environmental model PRZM. Model results are compared and evaluated on the basis of the model's ability to predict in situ measured soil temperature profiles in an experimental plot during a 3-year monitoring study. The evaluation of the performance is based on linear regression statistics and typical model statistical errors such as the root mean square error (RMSE) and the normalized objective function (NOF). Results show that the model required minimal calibration to match the observed response of the system. Values of the determination coefficient R(2) were found to be in all cases around the value of 0.98, indicating a very good agreement between measured and simulated data. Values of the RMSE were found to be in the range of 1.2 to 1.4 degrees C, 1.1 to 1.4 degrees C, 0.9 to 1.1 degrees C, and 0.8 to 1.1 degrees C, for the examined 2, 5, 10 and 20 cm soil depths, respectively. Sensitivity analyses were also performed to investigate the influence of various factors involved in the energy balance equation at the ground surface on the soil temperature profiles. The results showed that the model was able to represent important processes affecting the soil temperature regime, such as the combined effect of the heat transfer by convection between the ground surface and the atmosphere and the latent heat flux due to soil water evaporation. PMID:17454373
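
    The two error statistics used above can be reproduced as follows; NOF is taken here as the RMSE normalized by the mean of the observations, a common definition, and the temperature values are invented.

        import numpy as np

        # Root mean square error (RMSE) and normalized objective function (NOF).
        def rmse_nof(observed, simulated):
            obs = np.asarray(observed, float)
            sim = np.asarray(simulated, float)
            rmse = float(np.sqrt(np.mean((sim - obs) ** 2)))
            return rmse, rmse / float(np.mean(obs))

        # Example: soil temperatures (degrees C) at one depth over five days.
        print(rmse_nof([12.1, 13.4, 14.0, 15.2, 16.1], [11.5, 13.9, 14.6, 15.0, 17.0]))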

  19. Evaluation of Blade-Strike Models for Estimating the Biological Performance of Large Kaplan Hydro Turbines

    SciTech Connect

    Deng, Zhiqun; Carlson, Thomas J.; Ploskey, Gene R.; Richmond, Marshall C.

    2005-11-30

    BioIndex testing of hydro-turbines is sought as an analog to the hydraulic index testing conducted on hydro-turbines to optimize their power production efficiency. In BioIndex testing the goal is to identify those operations within the range identified by Index testing where the survival of fish passing through the turbine is maximized. BioIndex testing includes the immediate tailrace region as well as the turbine environment between a turbine's intake trashracks and the exit of its draft tube. The US Army Corps of Engineers and the Department of Energy have been evaluating a variety of means, such as numerical and physical turbine models, to investigate the quality of flow through a hydro-turbine and other aspects of the turbine environment that determine its safety for fish. The goal is to use these tools to develop hypotheses identifying turbine operations and predictions of their biological performance that can be tested at prototype scales. Acceptance of hypotheses would be the means for validation of new operating rules for the turbine tested that would be in place when fish were passing through the turbines. The overall goal of this project is to evaluate the performance of numerical blade strike models as a tool to aid development of testable hypotheses for bioIndexing. Evaluation of the performance of numerical blade strike models is accomplished by comparing predictions of fish mortality resulting from strike by turbine runner blades with observations made using live test fish at mainstem Columbia River Dams and with other predictions of blade strike made using observations of beads passing through a 1:25 scale physical turbine model.

  20. Prediction of Warfarin Dose in Pediatric Patients: An Evaluation of the Predictive Performance of Several Models

    PubMed Central

    Marek, Elizabeth; Momper, Jeremiah D.; Hines, Ronald N.; Takao, Cheryl M.; Gill, Joan C.; Pravica, Vera; Gaedigk, Andrea; Neville, Kathleen A.

    2016-01-01

    OBJECTIVES: The objective of this study was to evaluate the performance of pediatric pharmacogenetic-based dose prediction models by using an independent cohort of pediatric patients from a multicenter trial. METHODS: Clinical and genetic data (CYP2C9 [cytochrome P450 2C9] and VKORC1 [vitamin K epoxide reductase]) were collected from pediatric patients aged 3 months to 17 years who were receiving warfarin as part of standard care at 3 separate clinical sites. The accuracy of 8 previously published pediatric pharmacogenetic-based dose models was evaluated in the validation cohort by comparing predicted maintenance doses to actual stable warfarin doses. The predictive ability was assessed by using the proportion of variance (R2), mean prediction error (MPE), and the percentage of predictions that fell within 20% of the actual maintenance dose. RESULTS: Thirty-two children reached a stable international normalized ratio and were included in the validation cohort. The pharmacogenetic-based warfarin dose models showed a proportion of variance ranging from 35% to 78% and an MPE ranging from −2.67 to 0.85 mg/day in the validation cohort. Overall, the model developed by Hamberg et al showed the best performance in the validation cohort (R2 = 78%; MPE = 0.15 mg/day) with 38% of the predictions falling within 20% of observed doses. CONCLUSIONS: Pharmacogenetic-based algorithms provide better predictions than a fixed-dose approach, although an optimal dose algorithm has not yet been developed. PMID:27453700
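
    The three predictive-performance measures reported above are straightforward to compute; the sketch below uses invented doses, and R2 is taken here as 1 minus the residual-to-total sum-of-squares ratio (published dosing studies sometimes report the squared correlation instead).

        import numpy as np

        # Proportion of variance (R2), mean prediction error (MPE), and the
        # percentage of predictions within 20% of the actual stable dose.
        def dose_prediction_metrics(actual_mg_day, predicted_mg_day):
            a = np.asarray(actual_mg_day, float)
            p = np.asarray(predicted_mg_day, float)
            r2 = 1.0 - np.sum((a - p) ** 2) / np.sum((a - np.mean(a)) ** 2)
            mpe = float(np.mean(p - a))
            within_20 = 100.0 * float(np.mean(np.abs(p - a) <= 0.2 * a))
            return float(r2), mpe, within_20

        print(dose_prediction_metrics([1.5, 2.0, 3.5, 4.0, 2.5], [1.7, 1.8, 3.0, 4.4, 2.6]))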

  1. Performance Evaluation and Modeling Techniques for Parallel Processors. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Dimpsey, Robert Tod

    1992-01-01

    In practice, the performance evaluation of supercomputers is still substantially driven by single-point estimates of metrics (e.g., MFLOPS) obtained by running characteristic benchmarks or workloads. With the rapid increase in the use of time-shared multiprogramming in these systems, such measurements are clearly inadequate. This is because multiprogramming and system overhead, as well as other degradations in performance due to time varying characteristics of workloads, are not taken into account. In multiprogrammed environments, multiple jobs and users can dramatically increase the amount of system overhead and degrade the performance of the machine. Performance techniques, such as benchmarking, which characterize performance on a dedicated machine ignore this major component of true computer performance. Due to the complexity of analysis, there has been little work done in analyzing, modeling, and predicting the performance of applications in multiprogrammed environments. This is especially true for parallel processors, where the costs and benefits of multi-user workloads are exacerbated. While some may claim that the issue of multiprogramming is not a viable one in the supercomputer market, experience shows otherwise. Even in recent massively parallel machines, multiprogramming is a key component. It has even been claimed that a partial cause of the demise of the CM2 was the fact that it did not efficiently support time-sharing. In the same paper, Gordon Bell postulates that multicomputers will evolve to multiprocessors in order to support efficient multiprogramming. Therefore, it is clear that parallel processors of the future will be required to offer the user a time-shared environment with reasonable response times for the applications. In this type of environment, the most important performance metric is the completion (response) time of a given application. However, only a few evaluation efforts have addressed this issue.

  2. Evaluation of the Performance of Smoothing Functions in Generalized Additive Models for Spatial Variation in Disease

    PubMed Central

    Siangphoe, Umaporn; Wheeler, David C.

    2015-01-01

    Generalized additive models (GAMs) with bivariate smoothing functions have been applied to estimate spatial variation in risk for many types of cancers. Only a handful of studies have evaluated the performance of smoothing functions applied in GAMs with regard to different geographical areas of elevated risk and different risk levels. This study evaluates the ability of different smoothing functions to detect overall spatial variation of risk and elevated risk in diverse geographical areas at various risk levels using a simulation study. We created five scenarios with different true risk area shapes (circle, triangle, linear) in a square study region. We applied four different smoothing functions in the GAMs, including two types of thin plate regression splines (TPRS) and two versions of locally weighted scatterplot smoothing (loess). We tested the null hypothesis of constant risk and detected areas of elevated risk using analysis of deviance with permutation methods and assessed the performance of the smoothing methods based on the spatial detection rate, sensitivity, accuracy, precision, power, and false-positive rate. The results showed that all methods had a higher sensitivity and a consistently moderate-to-high accuracy rate when the true disease risk was higher. The models generally performed better in detecting elevated risk areas than detecting overall spatial variation. One of the loess methods had the highest precision in detecting overall spatial variation across scenarios and outperformed the other methods in detecting a linear elevated risk area. The TPRS methods outperformed loess in detecting elevated risk in two circular areas. PMID:25983545

  3. Energy harvesting from the discrete gust response of a piezoaeroelastic wing: Modeling and performance evaluation

    NASA Astrophysics Data System (ADS)

    Xiang, Jinwu; Wu, Yining; Li, Daochun

    2015-05-01

    The objective of this paper is to investigate energy harvesting from the unfavorable gust response of a piezoelectric wing. An aeroelectroelastic model is built for the evaluation and improvement of the harvesting performance. The structural model is built based on the Euler-Bernoulli beam theory. The unsteady aerodynamics, combined with 1-cosine gust load, are obtained from Jones' approximation of the Wagner function. The state-space equation of the aeroelectroelastic model is derived and solved numerically. The energy conversion efficiency and output density are defined to evaluate the harvesting performance. The effects of the sizes and location of the piezoelectric transducers, the load resistance in the external circuit, and the locations of the elastic axis and gravity center axis of the wing are studied, respectively. The results show that, for a given chordwise width of the transducers, there is one transducer thickness that yields the highest conversion efficiency and a smaller optimal thickness for the output density. The conversion efficiency has an approximately linear relationship with the width. When the transducers are placed at the wing root, a maximum conversion efficiency is reached at a certain spanwise length, whereas a smaller length helps reach a larger output density. One optimal resistance is found to maximize the conversion efficiency. The rearward shift of either the elastic axis or the gravity center axis improves the energy output while reducing the conversion efficiency.

  4. Performance Evaluation of Public Hospital Information Systems by the Information System Success Model

    PubMed Central

    Cho, Kyoung Won; Bae, Sung-Kwon; Ryu, Ji-Hye; Kim, Kyeong Na; An, Chang-Ho

    2015-01-01

    Objectives The aim of this study was to evaluate the performance of the newly developed information system (IS) implemented on July 1, 2014 at three public hospitals in Korea. Methods User satisfaction scores of twelve key performance indicators of six IS success factors based on the DeLone and McLean IS Success Model were utilized to evaluate IS performance before and after the newly developed system was introduced. Results All scores increased after system introduction except for the completeness of medical records and impact on the clinical environment. The relationships among the six IS factors were also analyzed to identify the important factors influencing three IS success factors (Intention to Use, User Satisfaction, and Net Benefits). All relationships were significant except for the relationships among Service Quality, Intention to Use, and Net Benefits. Conclusions The results suggest that hospitals should not only focus on system and information quality; rather, they should also continuously improve service quality to improve user satisfaction and eventually reach the full potential of IS performance. PMID:25705557

  5. Integrating Soft Set Theory and Fuzzy Linguistic Model to Evaluate the Performance of Training Simulation Systems.

    PubMed

    Chang, Kuei-Hu; Chang, Yung-Chia; Chain, Kai; Chung, Hsiang-Yu

    2016-01-01

    The advancement of high technologies and the arrival of the information age have caused changes to modern warfare. The military forces of many countries have partially replaced real training drills with training simulation systems to achieve combat readiness. However, a considerable variety of training simulation systems is used in military settings. In addition, differences in system set-up time, functions, the environment, and the competency of system operators, as well as incomplete information, have made it difficult to evaluate the performance of training simulation systems. To address the aforementioned problems, this study integrated the analytic hierarchy process, soft set theory, and the fuzzy linguistic representation model to evaluate the performance of various training simulation systems. Furthermore, importance-performance analysis was adopted to examine the influence of saving costs and training safety of training simulation systems. The findings of this study are expected to facilitate applying military training simulation systems, avoiding the waste of resources (e.g., low utility and idle time), and providing data for subsequent applications and analysis. To verify the method proposed in this study, the numerical examples of the performance evaluation of training simulation systems were adopted and compared with the numerical results of an AHP and a novel AHP-based ranking technique. The results verified that not only could expert-provided questionnaire information be fully considered to lower the repetition rate of performance ranking, but a two-dimensional graph could also be used to help administrators allocate limited resources, thereby enhancing the investment benefits and training effectiveness of a training simulation system. PMID:27598390
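
    Of the techniques combined in the study, the AHP step is the easiest to illustrate compactly. The sketch below derives criterion weights from a pairwise comparison matrix via its principal eigenvector and checks consistency; the criteria and judgments are hypothetical, and the soft-set and fuzzy-linguistic stages are not shown.

        import numpy as np

        RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's RI

        # AHP priority weights from a pairwise comparison matrix, plus the
        # consistency ratio (values below ~0.1 are usually considered acceptable).
        def ahp_weights(pairwise):
            A = np.asarray(pairwise, float)
            n = A.shape[0]
            eigvals, eigvecs = np.linalg.eig(A)
            k = int(np.argmax(eigvals.real))
            w = np.abs(eigvecs[:, k].real)
            w = w / w.sum()
            ci = (eigvals.real[k] - n) / (n - 1)
            return w, ci / RANDOM_INDEX[n]

        # Hypothetical criteria: cost saving, training safety, realism.
        print(ahp_weights([[1, 3, 5], [1 / 3, 1, 2], [1 / 5, 1 / 2, 1]]))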

  8. Modelling and temporal performances evaluation of networked control systems using (max, +) algebra

    NASA Astrophysics Data System (ADS)

    Ammour, R.; Amari, S.

    2015-01-01

    In this paper, we address the problem of temporal performance evaluation of producer/consumer networked control systems. The aim is to develop a formal method for evaluating the response time of this type of control system. Our approach consists of modelling, using Petri net classes, the behaviour of the whole architecture, including the switches that support the multicast communications used by this protocol. The (max, +) algebra formalism is then exploited to obtain analytical formulas for the response time and its maximal and minimal bounds. The main novelty is that our approach takes into account all delays experienced at the different stages of networked automation systems. Finally, we show how to apply the obtained results through an example of a networked control system.
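
    In (max, +) algebra, "addition" is the maximum and "multiplication" is ordinary addition, so event dates of a timed event graph evolve as x(k+1) = A ⊗ x(k). The two-stage matrix below is a made-up example of such a recursion, not the producer/consumer architecture analysed in the paper.

        import numpy as np

        NEG_INF = -np.inf

        # (max, +) matrix-vector product: y_i = max_j (A_ij + x_j).
        def maxplus_matvec(A, x):
            return np.max(A + x[np.newaxis, :], axis=1)

        # Made-up stage delays in milliseconds (e.g., producer processing and
        # switch forwarding); NEG_INF marks absent connections.
        A = np.array([[5.0, NEG_INF],
                      [3.0, 4.0]])
        x = np.array([0.0, 0.0])      # initial event dates
        for k in range(3):
            x = maxplus_matvec(A, x)
            print(f"event dates after cycle {k + 1}: {x}")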

  9. Solid rocket booster performance evaluation model. Volume 3: Sample case. [propellant combustion simulation/internal ballistics

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The solid rocket booster performance evaluation model (SRB-11) is used to predict internal ballistics in a sample motor. This motor contains a five-segment grain. The first segment has a 14-pointed star configuration with a web which wraps partially around the forward dome. The other segments are circular in cross-section and are tapered along the interior burning surface. Two of the segments are inhibited on the forward face. The nozzle is not assumed to be submerged. The performance prediction is broken into two simulation parts: first, the delivered end-item specific impulse and the propellant properties required as inputs for the internal ballistics module are determined; second, the internal ballistics for the entire burn duration of the motor are simulated.

  10. A PERFORMANCE EVALUATION OF THE 2004 RELEASE OF MODELS-3 CMAQ

    EPA Science Inventory

    This performance evaluation compares a full annual simulation (2001) of CMAQ (Version4.4) covering the contiguous United States against monitoring data from four nationwide networks. This effort, which represents one of the most spatially and temporally comprehensive performance...

  11. Apprentice Performance Evaluation.

    ERIC Educational Resources Information Center

    Gast, Clyde W.

    The Granite City (Illinois) Steel apprentices are under a performance evaluation from entry to graduation. Federally approved, the program is guided by joint apprenticeship committees whose monthly meetings include performance evaluation from three information sources: journeymen, supervisors, and instructors. Journeymen's evaluations are made…

  12. Measuring Information Security Performance with 10 by 10 Model for Holistic State Evaluation

    PubMed Central

    2016-01-01

    Organizations should measure their information security performance if they wish to take the right decisions and develop it in line with their security needs. Since the measurement of information security is generally underdeveloped in practice and many organizations find the existing recommendations too complex, the paper presents a solution in the form of a 10 by 10 information security performance measurement model. The model, ISP 10×10M, is composed of ten critical success factors, 100 key performance indicators and 6 performance levels. Its content was devised on the basis of findings presented in current research studies and standards, while its structure results from empirical research conducted among information security professionals from Slovenia. Results of the study show that a high level of information security performance is mostly dependent on measures aimed at managing information risks, employees and information sources, while formal and environmental factors have a lesser impact. Experts believe that information security should evolve systematically, where it’s recommended that beginning steps include technical, logical and physical security controls, while advanced activities should relate predominantly to strategic management activities. By applying the proposed model, organizations are able to determine the actual level of information security performance based on the weighted indexing technique. In this manner they identify the measures they ought to develop in order to improve the current situation. The ISP 10×10M is a useful tool for conducting internal system evaluations and decision-making. It may also be applied to a larger sample of organizations in order to determine the general state-of-play for research purposes. PMID:27655001

  13. Evaluation of blade-strike models for estimating the biological performance of large Kaplan hydro turbines

    SciTech Connect

    Deng, Z.; Carlson, T. J.; Ploskey, G. R.; Richmond, M. C.

    2005-11-01

    Bio-indexing of hydro turbines has been identified as an important means to optimize passage conditions for fish by identifying operations for existing and new design turbines that minimize the probability of injury. Cost-effective implementation of bio-indexing requires the use of tools such as numerical and physical turbine models to generate hypotheses for turbine operations that can be tested at prototype scales using live fish. Blade strike has been proposed as an index variable for the biological performance of turbines. This report reviews an evaluation of the use of numerical blade-strike models as a means to predict the probability of blade strike and injury for juvenile salmon smolt passing through large Kaplan turbines on the mainstem Columbia River.
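
    A commonly used deterministic blade-strike formulation treats the strike probability as the fraction of a blade-passage period occupied by the fish's passage through the runner plane. The sketch below follows that idea with illustrative parameter values; it is not the specific model evaluated in the report.

        import math

        # Deterministic blade-strike sketch: probability ~ blade passages per
        # second times the time the fish needs to clear the runner plane.
        def blade_strike_probability(fish_length_m, n_blades, rpm,
                                     axial_velocity_m_s, angle_to_flow_deg=0.0):
            effective_length = fish_length_m * math.cos(math.radians(angle_to_flow_deg))
            passage_time_s = effective_length / axial_velocity_m_s
            blade_passage_rate = n_blades * rpm / 60.0
            return min(1.0, blade_passage_rate * passage_time_s)

        # A ~0.15 m smolt through a 6-blade Kaplan runner at 90 rpm, 8 m/s axial flow.
        print(blade_strike_probability(0.15, n_blades=6, rpm=90.0, axial_velocity_m_s=8.0))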

  14. EO/IR sensor model for evaluating multispectral imaging system performance

    NASA Astrophysics Data System (ADS)

    Richwine, Robert; Sood, Ashok K.; Balcerak, Raymond S.; Freyvogel, Ken

    2007-04-01

    This paper discusses the capabilities of an EO/IR sensor model developed to provide a robust means for comparative assessments of infrared FPAs and sensors operating in the infrared spectral bands that coincide with the atmospheric windows - SW1 (1.0-1.8μm), sMW (2-2.5μm), MW (3-5μm), and LW (8-12μm). The applications of interest include thermal imaging, threat warning, missile interception, UAV surveillance, forest fire and agricultural crop health assessments, and mine detection. As a true imaging model, it also functions as an assessment tool for single-band and multi-color imagery. The detector model characterizes InGaAs, InSb, HgCdTe, QWIP and microbolometer sensors for spectral response, dark currents and noise. The model places the specified FPA into an optical system, evaluates system performance (NEI, NETD, MRTD, and SNR) and creates two-point corrected imagery complete with 3-D noise image effects. Analyses are possible for both passive and active laser-illuminated scenes for simulated state-of-the-art IR FPAs and Avalanche Photodiode Detector (APD) arrays. Simulated multispectral image comparisons expose various scene components of interest, which are illustrated using the imaging model. This model has been exercised here as a predictive tool for the performance of state-of-the-art detector arrays in optical systems in the five spectral bands (atmospheric windows) from the SW to the LW and as a potential testbed for prototype sensors. Results of the analysis will be presented for various targets for each of the focal plane technologies for a variety of missions.

  15. Evaluating and Improving the Performance of Common Land Model Using FLUXNET Data

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Dai, Y. J.; Dickinson, R. E.

    2015-12-01

    The Common Land Model (CoLM), which combines the best features of LSM, BATS, and IAP94, has been widely applied and has demonstrated good performance. However, land surface processes are crucial for weather and climate model initialization, so it is necessary to constrain land surface model performance using observational data. In our preliminary work, eddy covariance measurements from 20 FLUXNET sites with over 100 site-years were used to evaluate CoLM in simulating energy balance fluxes under different climate conditions and vegetation categories. The results show that CoLM simulates all four energy fluxes well, with sensible heat flux (H) reproduced better than latent heat flux (LE) and net radiation (Rnet) reproduced best. Among the 8 selected land cover types, CoLM performs best over evergreen needle-leaf forest and also shows clear superiority over evergreen broadleaf forest. Although good agreement between simulation and observation is found in the seasonal cycles at the 20 sample sites, the model produces large biases mostly at summer noon, and these biases are not consistent across seasons. This underestimation is associated with weaknesses in simulating soil water in dry seasons and with an incomplete description of photosynthesis; we will therefore first focus on implementing mesophyll diffusion in CoLM to improve the physical representation of photosynthesis.

  16. Performance Metrics for Climate Model Evaluation: Application to CMIP5 Precipitation Simulations

    NASA Astrophysics Data System (ADS)

    Mehran, A.; AghaKouchak, A.; Phillips, T. J.

    2013-12-01

    Validation of gridded climate model simulations is fundamental to future improvements in model developments. Among the metrics, the contingency table, which includes a number of categorical indices, is extensively used in evaluation studies. While the categorical indices offer invaluable information, they do not provide any insight into the volume of the variable detected correctly/incorrectly. In this study, the contingency table categorical metrics are extended to volumetric indices for evaluation of gridded data. The suggested indices include (a) Volumetric Hit Index (VHI): volume of correctly detected simulations relative to the volume of the correctly detected simulations and missed observations; (b) Volumetric False Alarm Ratio (VFAR): volume of false simulations relative to the sum of simulations; (c) Volumetric Miss Index (VMI): volume of missed observations relative to the sum of missed observations and correctly detected simulations; and (d) the Volumetric Critical Success Index (VCSI). The latter provides an overall measure of volumetric performance including volumetric hits, false alarms and misses. Numerous studies have emphasized that climate simulations are subject to various biases and uncertainties. The objective of this study is to cross-validate 34 Coupled Model Inter-comparison Project Phase 5 (CMIP5) historical simulations of precipitation against the Global Precipitation Climatology Project (GPCP) data using the proposed performance metrics, quantifying model pattern discrepancies and biases for both entire data distributions and their upper tails. The results of the Volumetric Hit Index (VHI) analysis of the total monthly precipitation amounts show that most CMIP5 simulations are in good agreement with GPCP patterns in many areas, but their replication of observed precipitation over arid regions and certain sub-continental regions (e.g., northern Eurasia, eastern Russia, central Australia) is problematical. Overall, the VHI of the multi-model
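
    The volumetric indices defined above translate directly into code. The sketch below applies them to a pair of small made-up precipitation arrays with an assumed detection threshold; the study itself works with gridded CMIP5 and GPCP fields.

        import numpy as np

        # Volumetric indices for gridded precipitation with detection threshold t.
        def volumetric_indices(obs, sim, t=1.0):
            obs = np.asarray(obs, float)
            sim = np.asarray(sim, float)
            hit = (sim > t) & (obs > t)
            false = (sim > t) & (obs <= t)
            miss = (sim <= t) & (obs > t)
            v_hit, v_false, v_miss = sim[hit].sum(), sim[false].sum(), obs[miss].sum()
            vhi = v_hit / (v_hit + v_miss)               # Volumetric Hit Index
            vfar = v_false / sim[sim > t].sum()          # Volumetric False Alarm Ratio
            vmi = v_miss / (v_miss + v_hit)              # Volumetric Miss Index
            vcsi = v_hit / (v_hit + v_miss + v_false)    # Volumetric Critical Success Index
            return vhi, vfar, vmi, vcsi

        obs = np.array([0.5, 3.0, 10.0, 25.0, 0.0, 7.0])
        sim = np.array([2.0, 2.5, 12.0, 18.0, 0.2, 0.5])
        print(volumetric_indices(obs, sim, t=1.0))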

  17. Evaluation of odometry algorithm performances using a railway vehicle dynamic model

    NASA Astrophysics Data System (ADS)

    Allotta, B.; Pugi, L.; Ridolfi, A.; Malvezzi, M.; Vettori, G.; Rindi, A.

    2012-05-01

    In modern railway Automatic Train Protection and Automatic Train Control systems, odometry is a safety-relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; a high reliability of the odometry estimate is fundamental, since an error on the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the accuracy of the odometry estimate, data fusion of different inputs coming from a redundant sensor layout may be used. Simplified two-dimensional models of railway vehicles have usually been used for Hardware in the Loop test rig testing of conventional odometry algorithms and of on-board safety-relevant subsystems (like the Wheel Slide Protection braking system) in which the train speed is estimated from the measures of the wheel angular speed. Two-dimensional models are not suitable for developing solutions like inertial-type localisation algorithms (using 3D accelerometers and 3D gyroscopes) or for introducing a Global Positioning System (or similar) receiver or a magnetometer. In order to test these algorithms correctly and improve odometry performance, a three-dimensional multibody model of a railway vehicle has been developed, using Matlab-Simulink™, including an efficient contact model which can simulate degraded adhesion conditions (the development and prototyping of odometry algorithms involve the simulation of realistic environmental conditions). In this paper, the authors show how a 3D railway vehicle model, able to simulate the complex interactions arising between different on-board subsystems, can be useful to evaluate the performance of odometry algorithms and of safety-relevant on-board subsystems.

  18. Performance evaluation and model analysis of BTEX contaminated air in corn-cob biofilter system.

    PubMed

    Rahul; Mathur, Anil Kumar; Balomajumder, Chandrajit

    2013-04-01

    Biofiltration of BTEX with corn-cob packing material has been performed for a period of 68 days in five distinct phases. The overall performance of the biofilter has been evaluated in terms of its elimination capacity by using 3-D mesh techniques. Maximum removal efficiency was found to be more than 99.85% for all four compounds at an EBRT of 3.06 min in phase I, for inlet BTEX concentrations of 0.0970, 0.0978, 0.0971 and 0.0968 g m(-3), respectively. Nearly 100% removal was achieved at an average BTEX loading of 20.257 g m(-3) h(-1) to the biofilter. A maximum elimination capacity (EC) of 20.239 g m(-3) h(-1) of the biofilter was obtained at an inlet BTEX load of 20.391 g m(-3) h(-1). Moreover, the convection-diffusion-reaction (CDR) model applied along the biofilter depth shows good agreement with the experimental values for benzene, toluene and ethyl benzene, but for o-xylene the model results deviated from the experimental values.
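
    The standard biofilter quantities used above (inlet load, elimination capacity, removal efficiency, and empty-bed residence time) follow directly from the inlet and outlet concentrations, the gas flow, and the bed volume. The numbers below are chosen only so that the EBRT comes out near 3 min and do not reproduce the experimental conditions.

        # Biofilter performance quantities from concentrations, flow, and bed volume.
        def biofilter_performance(c_in_g_m3, c_out_g_m3, flow_m3_h, bed_volume_m3):
            load = flow_m3_h * c_in_g_m3 / bed_volume_m3                # g m^-3 h^-1
            ec = flow_m3_h * (c_in_g_m3 - c_out_g_m3) / bed_volume_m3   # g m^-3 h^-1
            re = 100.0 * (c_in_g_m3 - c_out_g_m3) / c_in_g_m3           # percent
            ebrt_min = 60.0 * bed_volume_m3 / flow_m3_h                 # minutes
            return load, ec, re, ebrt_min

        print(biofilter_performance(c_in_g_m3=0.39, c_out_g_m3=0.0006,
                                    flow_m3_h=0.06, bed_volume_m3=0.00306))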

  20. A computer model for the evaluation of the effect of corneal topography on optical performance.

    PubMed

    Camp, J J; Maguire, L J; Cameron, B M; Robb, R A

    1990-04-15

    We developed a method that models the effect of irregular corneal surface topography on corneal optical performance. A computer program mimics the function of an optical bench. The method generates a variety of objects (single point, standard Snellen letters, low contrast Snellen letters, arbitrarily complex objects) in object space. The lens is the corneal surface evaluated by a corneal topography analysis system. The objects are refracted by the cornea by using raytracing analysis to produce an image, which is displayed on a video monitor. Optically degraded images are generated by raytracing analysis of selected irregular corneal surfaces, such as those from patients with keratoconus and those from patients having undergone epikeratophakia for aphakia. PMID:2330940

  1. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    PubMed

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55 % of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6 % of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed

  2. On the current state of the Hydrologic Evaluation of Landfill Performance (HELP) model.

    PubMed

    Berger, Klaus U

    2015-04-01

    The Hydrologic Evaluation of Landfill Performance (HELP) model is the most widely applied model to calculate the water balance of cover and bottom liner systems for landfills. The paper summarizes the 30-year history of the model from HELP version 1 to HELP 3.95 D and includes references to the three current and simultaneously available versions (HELP 3.07, Visual HELP 2.2, and HELP 3.95 D). A sufficient validation is an essential precondition for the use of any model in planning. The paper summarizes validation approaches for HELP 3 focused on cover systems in the literature. Furthermore, measurement results are compared to simulation results of HELP 3.95 D for (1) a test field with a compacted clay liner in the final cover of the landfill Hamburg-Georgswerder from 1988 to 1995 and (2) a test field with a 2.3 m thick so-called water balance layer on the landfill Deetz near Berlin from 2004 to 2011. On the Georgswerder site, actual evapotranspiration was well reproduced by HELP on the yearly average as well as in the seasonal course if precipitation data with 10% systematic measurement errors were used. However, the increase of liner leakage due to the deterioration of the clayey soil liner was not considered by the model. On the landfill Deetz, HELP largely overestimated the percolation through the water balance layer resulting from an extremely wet summer, due to an underestimation of the water storage in the layer and presumably also an underestimation of the actual evapotranspiration. Finally, based on validation results and requests from practice, plans for improving the model towards a future version HELP 4 D are described. PMID:25690410
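
    HELP itself is far more detailed, but the kind of accounting it performs for a cover layer can be illustrated with a single-bucket daily water balance; every parameter below is an invented illustration, not a HELP input.

        # Single-layer daily water balance: storage, actual evapotranspiration (AET),
        # and percolation of whatever exceeds the layer's storage capacity.
        def water_balance(precip_mm, pet_mm, capacity_mm=300.0, initial_mm=150.0):
            storage, results = initial_mm, []
            for p, pet in zip(precip_mm, pet_mm):
                storage += p
                aet = min(pet, storage)          # ET limited by demand and by storage
                storage -= aet
                percolation = max(0.0, storage - capacity_mm)
                storage -= percolation
                results.append({"aet": aet, "percolation": percolation, "storage": storage})
            return results

        # One wet week (mm/day of precipitation and potential ET).
        print(water_balance([0, 25, 40, 5, 0, 60, 10], [2, 1, 1, 2, 3, 1, 2]))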

  4. Performance evaluation of Al-Zahra academic medical center based on Iran balanced scorecard model

    PubMed Central

    Raeisi, Ahmad Reza; Yarmohammadian, Mohammad Hossein; Bakhsh, Roghayeh Mohammadi; Gangi, Hamid

    2012-01-01

    Background: Growth and development in any country's national health system, without an efficient evaluation system, lacks the basic concepts and tools necessary for fulfilling the system's goals. The balanced scorecard (BSC) is a technique widely used to measure the performance of an organization. The basic core of the BSC is guided by the organization's vision and strategies, which are the bases for the formation of the four perspectives of the BSC. The goal of this research is the performance evaluation of Al-Zahra Academic Medical Center in Isfahan University of Medical Sciences, based on the Iran BSC model. Materials and Methods: This is a combined quantitative-qualitative study which was conducted at Al-Zahra Academic Medical Center in Isfahan University of Medical Sciences in 2011. The research population consisted of hospital managers at different levels. The sampling method was purposive sampling, in which key informed personnel participated as BSC team members in focused discussion groups to determine the performance indicators of the hospital. After determining the conceptual elements in the focused discussion groups, the performance objectives (targets) and indicators of the hospital were determined and sorted into perspectives by the group discussion participants. Following that, the performance indicators were calculated by the experts according to the predetermined objectives; then, the score of each indicator and the mean score of each perspective were calculated. Results: Research findings included development of the organizational mission, vision, values, objectives, and strategies. The participants in the focus discussion group agreed upon five strategies: customer satisfaction, continuous quality improvement, development of human resources, supporting innovation, expansion of services, and improving productivity. Research participants also agreed upon four perspectives for the Al-Zahra hospital BSC. In the patients and community

  5. Application of Wavelet Filters in an Evaluation of Photochemical Model Performance

    EPA Science Inventory

    Air quality model evaluation can be enhanced with time-scale specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time scale information in observed ozone is not well captured by deterministic models and its incorporation into model pe...
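
    Time-scale separation of this kind can be sketched with a discrete wavelet decomposition; the example below uses the PyWavelets package on a synthetic hourly ozone series. The wavelet family, decomposition level, and series are assumptions for illustration, not the filter design used in the study.

    import numpy as np
    import pywt

    # Synthetic hourly "ozone" series: a slow weekly cycle plus high-frequency noise.
    rng = np.random.default_rng(0)
    hours = np.arange(24 * 30)
    ozone = 40 + 10 * np.sin(2 * np.pi * hours / (24 * 7)) + rng.normal(0, 5, hours.size)

    # Multilevel discrete wavelet decomposition; wavelet and level are illustrative choices.
    coeffs = pywt.wavedec(ozone, 'db4', level=4)

    # Keep only the approximation (low-frequency) part by zeroing the detail coefficients.
    low_freq = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], 'db4')[:ozone.size]
    high_freq = ozone - low_freq    # hours-to-one-day fluctuations that deterministic models capture poorly
    print(f"high-frequency std = {high_freq.std():.2f}, low-frequency std = {low_freq.std():.2f}")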

  6. Inter-comparison and performance evaluation of chemistry transport models over Indian region

    NASA Astrophysics Data System (ADS)

    Govardhan, Gaurav R.; Nanjundiah, Ravi S.; Satheesh, S. K.; Moorthy, K. Krishna; Takemura, Toshihiko

    2016-01-01

    Aerosol loading over the South Asian region has the potential to affect the monsoon rainfall, Himalayan glaciers and regional air quality, with implications for the billions of people in this region. While field campaigns and network observations provide primary data, they tend to be location/season specific. Numerical models are useful to regionalize such location-specific data. Studies have shown that numerical models underestimate the aerosol loading over the Indian region, mainly due to shortcomings related to meteorology and the emission inventories used. In this context, we have evaluated the performance of two such chemistry-transport models, WRF-Chem and SPRINTARS, over an India-centric domain. The models differ in many aspects, including physical domain, horizontal resolution and meteorological forcing. Despite these differences, both models simulated similar spatial patterns of Black Carbon (BC) mass concentration (with a spatial correlation of 0.9 between them) and reasonable estimates of its magnitude, though both underestimated it relative to the observations. While the emissions are lower (higher) in SPRINTARS (WRF-Chem), overestimation of wind parameters in WRF-Chem caused the concentrations to be similar in the two models. Additionally, we quantified the underestimation of anthropogenic BC emissions in the inventories used by these two models and in three other widely used emission inventories. Our analysis indicates that all these inventories underestimate BC emissions over India by a factor ranging from 1.5 to 2.9. We have also studied the model simulations of aerosol optical depth (AOD) over the Indian region. The models differ significantly in their AOD simulations, with WRF-Chem showing better agreement with satellite observations as far as the spatial pattern is concerned. It is important to note that, in addition to BC, dust can also contribute significantly to AOD. The models differ in simulations of the spatial
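
    The spatial agreement quoted above (a pattern correlation of about 0.9 between the two models' BC fields) amounts to a Pearson correlation over co-located grid cells. A minimal sketch with synthetic gridded fields follows; the arrays, the 0.7 scaling and the noise levels are assumptions, not model output.

    import numpy as np

    # Two hypothetical gridded BC fields on a common lat-lon grid (synthetic data).
    rng = np.random.default_rng(0)
    base = rng.gamma(2.0, 1.0, size=(60, 80))                  # shared spatial structure
    bc_model_a = base + rng.normal(0, 0.3, base.shape)
    bc_model_b = 0.7 * base + rng.normal(0, 0.3, base.shape)   # lower emissions, similar pattern

    pattern_corr = np.corrcoef(bc_model_a.ravel(), bc_model_b.ravel())[0, 1]
    mean_ratio = bc_model_a.mean() / bc_model_b.mean()         # analogous to an emission-underestimation factor
    print(f"spatial pattern correlation = {pattern_corr:.2f}, mean ratio = {mean_ratio:.2f}")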

  7. Undergraduate Engineering Students' Beliefs, Coping Strategies, and Academic Performance: An Evaluation of Theoretical Models

    ERIC Educational Resources Information Center

    Hsieh, Pei-Hsuan; Sullivan, Jeremy R.; Sass, Daniel A.; Guerra, Norma S.

    2012-01-01

    Research has identified factors associated with academic success by evaluating relations among psychological and academic variables, although few studies have examined theoretical models to understand the complex links. This study used structural equation modeling to investigate whether the relation between test anxiety and final course grades was…

  8. Increasing productivity through performance evaluation.

    PubMed

    Lachman, V D

    1984-12-01

    Four components form the base for a performance evaluation system. A discussion of management/organizational shortcomings creating performance problems is followed by a focus on the importance of an ongoing discussion of goals between the manager and the subordinate. Six components that impact performance are identified, and practical suggestions are given to increase motivation. A coaching analysis process, as well as counseling and disciplining models, define the steps for solving performance problems.

  9. Goal Setting and Performance Evaluation with Different Starting Positions: The Modeling Dilemma.

    ERIC Educational Resources Information Center

    Pray, Thomas F.; Gold, Steven

    1991-01-01

    Reviews 10 computerized business simulations used to teach business policy courses, discusses problems with measuring performance, and presents a statistically based approach to assessing performance that permits individual team goal setting as part of the computer model, and allows simulated firms to start with different financial and operating…

  10. Performance Standards and Evaluation in IR Test Collections: Vector-Space and Other Retrieval Models.

    ERIC Educational Resources Information Center

    Shaw, W. M., Jr.; And Others

    1997-01-01

    Describes a study that computed the low performance standards for queries in 17 test collections. Predicted by the hypergeometric distribution, the standards represent the highest level of retrieval effectiveness attributable to chance. Operational levels of performance for vector-space and other retrieval models were compared to the standards.…
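
    The idea of a chance-level performance standard can be sketched with the hypergeometric distribution from SciPy: under random retrieval, the number of relevant documents in a retrieved set follows this distribution. The collection sizes below are made up for illustration, not those of the 17 test collections.

    from scipy.stats import hypergeom

    # Hypothetical collection: N documents in total, R of them relevant to a query,
    # and a retrieved set of n documents drawn purely "by chance".
    N, R, n = 10_000, 50, 20
    rv = hypergeom(N, R, n)        # SciPy order: (population size, number of successes, draws)

    expected_relevant = rv.mean()               # E[relevant retrieved] under random retrieval
    chance_precision = expected_relevant / n
    chance_recall = expected_relevant / R
    p_at_least_3 = rv.sf(2)                     # P(>= 3 relevant retrieved by chance alone)
    print(chance_precision, chance_recall, p_at_least_3)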

  11. Improving Quality and Accountability in Vocational Technological Programs: An Evaluation of Arizona's VTE Model and Performance Standards.

    ERIC Educational Resources Information Center

    Vandegrift, Judith A.; And Others

    A study examined statewide implementation of the Arizona Department of Education's vocational technological education (ADE/VTE) model and the feasibility of using Arizona's performance standards in evaluating processes/outcomes at model sites. Data were collected from a pilot study of 12 sites and a survey of 128 Arizona local education authorities…

  12. Applying the Many-Facet Rasch Model to Evaluate PowerPoint Presentation Performance in Higher Education

    ERIC Educational Resources Information Center

    Basturk, Ramazan

    2008-01-01

    This study investigated the usefulness of the many-facet Rasch model (MFRM) in evaluating the quality of performance related to PowerPoint presentations in higher education. The Rasch Model utilizes item response theory stating that the probability of a correct response to a test item/task depends largely on a single parameter, the ability of the…
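
    The dichotomous Rasch probability the abstract alludes to can be written down directly: the chance of success depends on person ability relative to item difficulty, and the many-facet extension adds further facets such as rater severity. A minimal sketch follows; all parameter values are illustrative assumptions.

    import math

    def rasch_probability(ability, difficulty, rater_severity=0.0):
        """P(success) under a simple many-facet-style Rasch model: logits in, probability out."""
        logit = ability - difficulty - rater_severity
        return 1.0 / (1.0 + math.exp(-logit))

    # A presenter of ability 1.2 logits on a task of difficulty 0.5, judged by a severe rater (+0.4).
    print(round(rasch_probability(1.2, 0.5, 0.4), 3))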

  13. A strategic management model for evaluation of health, safety and environmental performance.

    PubMed

    Abbaspour, Majid; Toutounchian, Solmaz; Roayaei, Emad; Nassiri, Parvin

    2012-05-01

    A strategic health, safety, and environmental management system (HSE-MS) involves systematic and cooperative planning in each phase of a project's lifecycle to ensure that interaction among the industry group, client, contractor, stakeholders, and host community takes place at the highest level of health, safety, and environmental performance standards. It therefore seems necessary to assess the HSE-MS performance of contractor(s) with a comparative strategic management model aimed at continuous improvement. The present Strategic Management Model (SMM) is illustrated by a case study, and the results show that the model is a suitable management tool for decision making in a contract environment, especially in oil and gas fields, based on accepted international standards and framed within the Deming management cycle. To develop the model, a data bank was created containing statistical data obtained by converting qualitative HSE performance data into quantitative values. On this basis, the structure of the model was formed by defining HSE performance indicators according to the HSE-MS model; 178 indicators were selected and grouped into four attributes. The model output provides quantitative measures of HSE-MS performance as a percentage of an ideal level, with a maximum possible score for each attribute. Identifying the strengths and weaknesses of the contractor(s) is another capability of this model. In addition, the model provides a ranking that can be used as the basis for decision making at the contractors' pre-qualification phase or during project execution. PMID:21739281

  15. Signal and image processing systems performance evaluation, simulation, and modeling; Proceedings of the Meeting, Orlando, FL, Apr. 4, 5, 1991

    NASA Astrophysics Data System (ADS)

    Nasr, Hatem N.; Bazakos, Michael E.

    The various aspects of the evaluation and modeling problems in algorithms, sensors, and systems are addressed. Consideration is given to a generic modular imaging IR signal processor, real-time architecture based on the image-processing module family, application of the Proto Ware simulation testbed to the design and evaluation of advanced avionics, development of a fire-and-forget imaging infrared seeker missile simulation, an adaptive morphological filter for image processing, laboratory development of a nonlinear optical tracking filter, a dynamic end-to-end model testbed for IR detection algorithms, wind tunnel model aircraft attitude and motion analysis, an information-theoretic approach to optimal quantization, parametric analysis of target/decoy performance, neural networks for automated target recognition parameters adaptation, performance evaluation of a texture-based segmentation algorithm, evaluation of image tracker algorithms, and multisensor fusion methodologies. (No individual items are abstracted in this volume)

  16. The performance evaluation test for prototype model of Longwave Infrared Imager (LIR) onboard PLANET-C

    NASA Astrophysics Data System (ADS)

    Fukuhara, Tetsuya; Taguchi, Makoto; Imamura, Takeshi

    which are acquired continuously. The vibration test of the UMBA was also carried out, and the results showed that the UMBA survived without any pixel defects or malfunctions. The tolerance to high-energy protons was tested and verified using a commercial camera in which the same type of UMBA is mounted. Based on these results, a flight model is now being manufactured with minor modifications from the prototype. The performance of the flight model will be evaluated during 2008-09, in time for the scheduled launch year of 2010.

  17. Regional climate models performance evaluation for runoff simulation in the mountainous watershed

    NASA Astrophysics Data System (ADS)

    Rahman, Kazi; Etienne, Christophe; Gago da Silva, Ana; Maringanti, Chetan; Beniston, Martin; Lehmann, Anthony

    2013-04-01

    Streamflow forecasting is often done with the help of variables generated by Regional Climate Models (RCMs). The heterogeneity of meteorological variables such as precipitation, temperature, wind speed and solar radiation often limits hydrological model performance. This research assessed the sensitivity of RCM outputs from the PRUDENCE project and their performance in reproducing streamflow. The hydrological model Soil and Water Assessment Tool (SWAT) was used to simulate the streamflow of the Rhone River watershed in south-western Switzerland, with climate variables obtained from four RCMs. We analyzed the differences in magnitude of precipitation and of maximum and minimum air temperature with respect to observed values from meteorological stations using a Taylor diagram. In addition, we focused on the impact of grid resolution on model performance by analyzing grids with resolutions of 50 × 50 km2 and 25 × 25 km2. We found that higher grid resolutions tend to improve model performance. The variability of the meteorological inputs from the various RCMs is quite severe in the studied watershed. Among the four RCMs, the Danish Meteorological Institute (DMI) model provided the best performance when simulating runoff. Although similar hydrograph patterns are reproduced, it is nevertheless recommended to apply a correction factor before using RCM outputs for impact modeling. Since streamflow simulation in the mountainous watershed is strongly driven by temperature through snow and glacier melt processes, our recommendation is to emphasize the temperature lapse rate in bias correction when applying climate model output for impact modeling.
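
    A Taylor diagram summarises model-observation agreement through three related statistics, which can be computed directly as in the minimal sketch below. The synthetic daily precipitation series and the 0.8 scaling are assumptions, not PRUDENCE or station data.

    import numpy as np

    def taylor_statistics(model, obs):
        """Correlation, normalized standard deviation and centred RMSE used in a Taylor diagram."""
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        corr = np.corrcoef(model, obs)[0, 1]
        std_ratio = model.std() / obs.std()
        centred_rmse = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2))
        return corr, std_ratio, centred_rmse

    rng = np.random.default_rng(1)
    obs = rng.gamma(2.0, 3.0, 365)                    # observed daily precipitation (synthetic)
    rcm = 0.8 * obs + rng.normal(0, 2.0, 365)         # one RCM's daily precipitation (synthetic)
    print(taylor_statistics(rcm, obs))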

  18. Performance Evaluation of Models that Describe the Soil Water Retention Curve between Saturation and Oven Dryness

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this work was to evaluate eight closed-form unimodal analytical expressions that describe the soil-water retention curve over the complete range of soil water contents. To meet this objective, the eight models were compared in terms of their accuracy (root mean square error, RMSE), ...

  19. Evaluation of model nesting performance on the Texas-Louisiana continental shelf

    NASA Astrophysics Data System (ADS)

    Marta-Almeida, Martinho; Hetland, Robert D.; Zhang, Xiaoqian

    2013-05-01

    A skill assessment of a model of the Texas-Louisiana shelf, nested in a variety of different parent models, is performed using hydrographic salinity data. The nested models show improved salinity skill compared to the same model using climatological boundary conditions, as well as general skill score improvements over the parent models in the same region. Although a variety of parent models are used and these parent models have widely different skill scores when compared with regional hydrographic data sets, the skill scores for the nested models are generally indistinguishable. This leads to the conclusion that nesting is important for improving model skill, but it does not matter which parent model is used. The model is also used to create a series of ensembles, where the local forcing is varied with identical boundary conditions and where the boundary conditions are varied by nesting within the various parent models. The variance in the ensemble spread shows that there is a significant level of unpredictable, nonlinear noise associated with instabilities along the Mississippi/Atchafalaya plume front. The noise is seasonal and is greatest during summer upwelling conditions and weaker during nonsummer downwelling.

  20. Discrete tyre model application for evaluation of vehicle limit handling performance

    NASA Astrophysics Data System (ADS)

    Siramdasu, Y.; Taheri, S.

    2016-11-01

    The goal of this study is twofold: first, to understand the transient and nonlinear effects of anti-lock braking systems (ABS), road undulations and driving dynamics on the lateral performance of the tyre; and second, to develop objective handling manoeuvres and respective metrics to characterise these effects on vehicle behaviour. For studying the transient and nonlinear handling performance of the vehicle, variations in tyre relaxation length and tyre inertial properties play significant roles [Pacejka HB. Tire and vehicle dynamics. 3rd ed. Butterworth-Heinemann; 2012]. Accurately simulating these nonlinear effects during high-frequency vehicle dynamic manoeuvres requires a high-frequency dynamic tyre model (? Hz). A 6 DOF dynamic tyre model integrated with an enveloping model is developed and validated using fixed-axle high-speed oblique cleat experimental data. The commercially available vehicle dynamics software CarSim® is used for vehicle simulation. The vehicle model was validated by comparing simulation results with experimental sinusoidal steering tests. The validated tyre model is then integrated with the vehicle model and a commercial-grade rule-based ABS model to perform various objective simulations. Two test scenarios are considered: ABS braking in a turn on a smooth road, and accelerating in a turn on uneven and smooth roads. Both test cases reiterated that while the tyre is operating in the nonlinear region of slip or slip angle, any road disturbance or high-frequency brake torque input variation can excite the inertial belt vibrations of the tyre. It is shown that these inertial vibrations can directly affect the developed performance metrics and potentially degrade the handling performance of the vehicle.

  1. Evaluation of Turbulence-Model Performance as Applied to Jet-Noise Prediction

    NASA Technical Reports Server (NTRS)

    Woodruff, S. L.; Seiner, J. M.; Hussaini, M. Y.; Erlebacher, G.

    1998-01-01

    The accurate prediction of jet noise is possible only if the jet flow field can be predicted accurately. Predictions for the mean velocity and turbulence quantities in the jet flowfield are typically the product of a Reynolds-averaged Navier-Stokes solver coupled with a turbulence model. To evaluate the effectiveness of solvers and turbulence models in predicting those quantities most important to jet noise prediction, two CFD codes and several turbulence models were applied to a jet configuration over a range of jet temperatures for which experimental data is available.

  2. Performance evaluation model of a pilot food waste collection system in Suzhou City, China.

    PubMed

    Wen, Zongguo; Wang, Yuanjia; De Clercq, Djavan

    2015-05-01

    This paper analyses the food waste collection and transportation (C&T) system in a pilot project in Suzhou by using a novel performance evaluation method. The method employed to conduct this analysis involves a unified performance evaluation index containing qualitative and quantitative indicators applied to data from Suzhou City. Two major inefficiencies were identified: a) low system efficiency due to insufficient processing capacity of commercial food waste facilities; and b) low waste resource utilization due to low efficiency of manual sorting. The performance evaluation indicated that the pilot project collection system's strong points included strong economics, low environmental impact and low social impact. This study also shows that Suzhou's integrated system has developed a comprehensive body of laws and clarified regulatory responsibilities for each of the various government departments to solve the problems of commercial food waste management. Based on Suzhou's experience, perspectives and lessons can be drawn for other cities and areas where food waste management systems are in the planning stage, or are encountering operational problems. PMID:25733197

  4. An evaluation of the predictive performance of distributional models for flora and fauna in north-east New South Wales.

    PubMed

    Pearce, J; Ferrier, S; Scotts, D

    2001-06-01

    To use models of species distributions effectively in conservation planning, it is important to determine the predictive accuracy of such models. Extensive modelling of the distribution of vascular plant and vertebrate fauna species within north-east New South Wales has been undertaken by linking field survey data to environmental and geographical predictors using logistic regression. These models have been used in the development of a comprehensive and adequate reserve system within the region. We evaluate the predictive accuracy of models for 153 small reptile, arboreal marsupial, diurnal bird and vascular plant species for which independent evaluation data were available. The predictive performance of each model was evaluated using the relative operating characteristic curve to measure discrimination capacity. Good discrimination ability implies that a model's predictions provide an acceptable index of species occurrence. The discrimination capacity of 89% of the models was significantly better than random, with 70% of the models providing high levels of discrimination. Predictions generated by this type of modelling therefore provide a reasonably sound basis for regional conservation planning. The discrimination ability of models was highest for the less mobile biological groups, particularly the vascular plants and small reptiles. In the case of diurnal birds, poor performing models tended to be for species which occur mainly within specific habitats not well sampled by either the model development or evaluation data, highly mobile species, species that are locally nomadic or those that display very broad habitat requirements. Particular care needs to be exercised when employing models for these types of species in conservation planning.
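
    Discrimination of this kind is commonly summarised by the area under the relative (receiver) operating characteristic curve. A minimal sketch using scikit-learn on synthetic presence/absence data follows; the records and the suitability index are invented, not the north-east New South Wales evaluation data, and the authors' threshold for "high" discrimination is not reproduced.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    # Synthetic evaluation data: 1 = species recorded present, 0 = absent,
    # plus the habitat-suitability index predicted for each site by a logistic model.
    presence = rng.integers(0, 2, size=500)
    suitability = np.clip(0.6 * presence + rng.normal(0.3, 0.25, size=500), 0, 1)

    auc = roc_auc_score(presence, suitability)   # 0.5 = random, 1.0 = perfect discrimination
    print(f"ROC AUC = {auc:.2f}")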

  5. The Effects of Modeling, Self-Evaluation, and Self-Listening on Junior High Instrumentalists' Music Performance and Practice Attitude.

    ERIC Educational Resources Information Center

    Hewitt, Michael P.

    2001-01-01

    Examines the effects that modeling, self-evaluation, and self-listening have on the music performance and attitudes about practice of junior high school students in the seventh (n=36), eighth (n=31), or ninth (n=15) grades who play woodwind, brass, and percussion instruments. Includes references. (CMK)

  6. Evaluating the Sensitivity of Agricultural Model Performance to Different Climate Inputs: Supplemental Material

    NASA Technical Reports Server (NTRS)

    Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.

    2015-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis that is bias corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.

  7. Photorefractive two-beam coupling joint transform correlator: modeling and performance evaluation.

    PubMed

    Nehmetallah, G; Khoury, J; Banerjee, P P

    2016-05-20

    The photorefractive two-beam coupling joint transform correlator combines two features. The first is embedded semi-adaptive optimality, which weighs the correlation against clutter and noise in the input; the second is an intrinsic dynamic-range-compression nonlinearity, which improves several metrics simultaneously without metric trade-off. Although the two-beam coupling correlator was invented many years ago, its outstanding performance had been recognized only on relatively simple images, and its performance on complicated images and under different figures of merit had not been studied. In this paper, the study is extended to more complicated images. For the first time, to our knowledge, we demonstrate simultaneous improvement in metric performance without metric trade-off. Performance was evaluated against the classical joint transform correlator. A typical experimental result validating the simulation results is also shown. The best-performing operating parameters were identified to guide the experimental work and for future comparison with other well-known optimal correlation filters. PMID:27411127
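
    The classical joint transform correlator used as the baseline can be sketched numerically: the reference and input images are placed side by side, the joint power spectrum is formed, and a second Fourier transform yields the correlation plane. The two-beam-coupling nonlinearity studied in the paper is not modelled here, and the images are synthetic assumptions.

    import numpy as np

    def classical_jtc(reference, scene):
        """Classical joint transform correlation of two equally sized 2-D images."""
        joint = np.concatenate([reference, scene], axis=1)     # images placed side by side
        jps = np.abs(np.fft.fft2(joint)) ** 2                  # joint power spectrum
        correlation_plane = np.abs(np.fft.fft2(jps))           # second transform yields correlation terms
        return np.fft.fftshift(correlation_plane)

    rng = np.random.default_rng(7)
    target = rng.random((64, 64))
    scene = target + 0.1 * rng.normal(size=(64, 64))           # noisy copy of the target
    corr = classical_jtc(target, scene)
    print(corr.max())                                          # strong off-axis peaks indicate a match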

  8. Performance evaluation of selected ionospheric delay models during geomagnetic storm conditions in low-latitude region

    NASA Astrophysics Data System (ADS)

    Venkata Ratnam, D.; Sarma, A. D.; Satya Srinivas, V.; Sreelatha, P.

    2011-06-01

    Investigation of space weather effects on GPS satellite navigation systems is crucial for high-precision positioning applications such as aircraft landing and missile guidance. Geomagnetic storms can drastically affect the total electron content (TEC) of the ionosphere even at low latitudes, which is particularly relevant for the Indian region since it lies in the low-latitude belt. Hence, the performance of three prominent ionospheric models is investigated under adverse ionospheric conditions using data from 17 GPS TEC stations. The models characterized the ionospheric disturbances caused by two magnetic storms well.

  9. Evaluation of Model Results and Measured Performance of Net-Zero Energy Homes in Hawaii: Preprint

    SciTech Connect

    Norton, P.; Kiatreungwattana, K.; Kelly, K. J.

    2013-03-01

    The Kaupuni community consists of 19 affordable net-zero energy homes that were built within the Waianae Valley of Oahu, Hawaii in 2011. The project was developed for the native Hawaiian community led by the Department of Hawaiian Homelands. This paper presents a comparison of the modeled and measured energy performance of the homes. Over the first year of occupancy, the community as a whole performed within 1% of the net-zero energy goals. The data show a range of performance from house to house, with the majority of the homes consistently near or exceeding net-zero, while a few fall short of the predicted net-zero energy performance. The impact of building floor plan, weather, and cooling set point on this comparison is discussed. The project demonstrates the value of building energy simulations as a tool for achieving energy performance goals. Lessons learned from the energy performance monitoring have had immediate benefits in providing feedback to the homeowners and will be used to influence future energy-efficient designs in Hawaii and other tropical climates.

  10. On shrinkage and model extrapolation in the evaluation of clinical center performance

    PubMed Central

    Varewyck, Machteld; Goetghebeur, Els; Eriksson, Marie; Vansteelandt, Stijn

    2014-01-01

    We consider statistical methods for benchmarking clinical centers based on a dichotomous outcome indicator. Borrowing ideas from the causal inference literature, we aim to reveal how the entire study population would have fared under the current care level of each center. To this end, we evaluate direct standardization based on fixed versus random center effects outcome models that incorporate patient-specific baseline covariates to adjust for differential case-mix. We explore fixed effects (FE) regression with Firth correction and normal mixed effects (ME) regression to maintain convergence in the presence of very small centers. Moreover, we study doubly robust FE regression to avoid outcome model extrapolation. Simulation studies show that shrinkage following standard ME modeling can result in substantial power loss relative to the considered alternatives, especially for small centers. Results are consistent with findings in the analysis of 30-day mortality risk following acute stroke across 90 centers in the Swedish Stroke Register. PMID:24812420

  11. Performance evaluation of continuity of care records (CCRs): parsing models in a mobile health management system.

    PubMed

    Chen, Hung-Ming; Liou, Yong-Zan

    2014-10-01

    In a mobile health management system, mobile devices act as the application hosting devices for personal health records (PHRs), and healthcare servers are built to exchange and analyze PHRs. One of the most popular PHR standards is the continuity of care record (CCR), which is expressed in XML format. However, parsing is an expensive operation that can degrade XML processing performance. Hence, the objective of this study was to identify the different operational and performance characteristics of several CCR parsing models: the XML DOM parser, the SAX parser, the PULL parser, and the JSON parser applied to JSON data converted from XML-based CCRs. Developers can thus make sensible choices about how to parse CCRs in their target PHR applications when using mobile devices or servers with different system resources. Furthermore, simulation experiments on four case studies are conducted to compare parsing performance on Android mobile devices and on a server with large quantities of CCR data. PMID:25086611
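
    The DOM-versus-SAX trade-off can be sketched with Python's standard-library parsers on a small synthetic XML document standing in for a CCR. The element names and document size below are assumptions, and absolute timings depend entirely on the device.

    import time
    import xml.dom.minidom
    import xml.sax

    # Small synthetic XML document standing in for a CCR record.
    xml_text = ("<ContinuityOfCareRecord>"
                + "<Result><Value>1.0</Value></Result>" * 5000
                + "</ContinuityOfCareRecord>")

    t0 = time.perf_counter()
    dom = xml.dom.minidom.parseString(xml_text)            # DOM: whole tree held in memory
    n_dom = len(dom.getElementsByTagName("Result"))
    t_dom = time.perf_counter() - t0

    class ResultCounter(xml.sax.ContentHandler):
        def __init__(self):
            super().__init__()
            self.count = 0
        def startElement(self, name, attrs):               # SAX: event-driven, low memory footprint
            if name == "Result":
                self.count += 1

    t0 = time.perf_counter()
    handler = ResultCounter()
    xml.sax.parseString(xml_text.encode(), handler)
    t_sax = time.perf_counter() - t0

    print(n_dom, handler.count, f"DOM {t_dom:.3f}s vs SAX {t_sax:.3f}s")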

  12. A Predictive Performance Model to Evaluate the Contention Cost in Application Servers

    SciTech Connect

    Chen, Shiping; Gorton, Ian

    2002-12-04

    In multi-tier enterprise systems, application servers are key components that implement business logic and provide application services. To support a large number of simultaneous accesses from clients over the Internet and intranet, most application servers use replication and multi-threading to handle concurrent requests. While multiple processes and multiple threads enhance the processing bandwidth of servers, they also increase the contention for resources in application servers. This paper investigates this issue empirically based on a middleware benchmark. A cost model is proposed to estimate the overall performance of application servers, including the contention overhead. This model is then used to determine the optimal degree of the concurrency of application servers for a specific client load. A case study based on CORBA is presented to validate our model and demonstrate its application.
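
    A generic contention-aware throughput model can illustrate how such a cost model is used to pick a concurrency level. The sketch below uses a Universal Scalability Law form and invented coefficients; it is not the cost model proposed in the paper.

    # Generic throughput-vs-concurrency model (Universal Scalability Law form), used only to
    # illustrate choosing an optimal thread count; coefficients are assumed, not measured.
    def throughput(n_threads, base_rate=100.0, contention=0.03, coherency=0.0008):
        return base_rate * n_threads / (1 + contention * (n_threads - 1)
                                        + coherency * n_threads * (n_threads - 1))

    best = max(range(1, 201), key=throughput)
    print(best, round(throughput(best), 1))   # concurrency level with the highest modeled throughput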

  13. Modeling and Evaluating Pilot Performance in NextGen: Review of and Recommendations Regarding Pilot Modeling Efforts, Architectures, and Validation Studies

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Sebok, Angelia; Keller, John; Peters, Steve; Small, Ronald; Hutchins, Shaun; Algarin, Liana; Gore, Brian Francis; Hooey, Becky Lee; Foyle, David C.

    2013-01-01

    NextGen operations are associated with a variety of changes to the national airspace system (NAS) including changes to the allocation of roles and responsibilities among operators and automation, the use of new technologies and automation, additional information presented on the flight deck, and the entire concept of operations (ConOps). In the transition to NextGen airspace, aviation and air operations designers need to consider the implications of design or system changes on human performance and the potential for error. To ensure continued safety of the NAS, it will be necessary for researchers to evaluate design concepts and potential NextGen scenarios well before implementation. One approach for such evaluations is through human performance modeling. Human performance models (HPMs) provide effective tools for predicting and evaluating operator performance in systems. HPMs offer significant advantages over empirical, human-in-the-loop testing in that (1) they allow detailed analyses of systems that have not yet been built, (2) they offer great flexibility for extensive data collection, (3) they do not require experimental participants, and thus can offer cost and time savings. HPMs differ in their ability to predict performance and safety with NextGen procedures, equipment and ConOps. Models also vary in terms of how they approach human performance (e.g., some focus on cognitive processing, others focus on discrete tasks performed by a human, while others consider perceptual processes), and in terms of their associated validation efforts. The objectives of this research effort were to support the Federal Aviation Administration (FAA) in identifying HPMs that are appropriate for predicting pilot performance in NextGen operations, to provide guidance on how to evaluate the quality of different models, and to identify gaps in pilot performance modeling research, that could guide future research opportunities. This research effort is intended to help the FAA

  14. Performance Evaluation and Modeling of Erosion Resistant Turbine Engine Thermal Barrier Coatings

    NASA Technical Reports Server (NTRS)

    Miller, Robert A.; Zhu, Dongming; Kuczmarski, Maria

    2008-01-01

    The erosion resistant turbine thermal barrier coating system is critical to the rotorcraft engine performance and durability. The objective of this work was to determine erosion resistance of advanced thermal barrier coating systems under simulated engine erosion and thermal gradient environments, thus validating a new thermal barrier coating turbine blade technology for future rotorcraft applications. A high velocity burner rig based erosion test approach was established and a new series of rare earth oxide- and TiO2/Ta2O5- alloyed, ZrO2-based low conductivity thermal barrier coatings were designed and processed. The low conductivity thermal barrier coating systems demonstrated significant improvements in the erosion resistance. A comprehensive model based on accumulated strain damage low cycle fatigue is formulated for blade erosion life prediction. The work is currently aiming at the simulated engine erosion testing of advanced thermal barrier coated turbine blades to establish and validate the coating life prediction models.

  15. Evaluation of Round Window Stimulation Performance in Otosclerosis Using Finite Element Modeling.

    PubMed

    Yang, Shanguo; Xu, Dan; Liu, Xiaole

    2016-01-01

    Round window (RW) stimulation is a new type of middle ear implant's application for treating patients with middle ear disease, such as otosclerosis. However, clinical outcomes show a substantial degree of variability. One source of variability is the variation in the material properties of the ear components caused by the disease. To investigate the influence of the otosclerosis on the performance of the RW stimulation, a human ear finite element model including middle ear and cochlea was established based on a set of microcomputerized tomography section images of a human temporal bone. Three characteristic changes of the otosclerosis in the auditory system were simulated in the FE model: stapedial annular ligament stiffness enlargement, stapedial abnormal bone growth, and partial fixation of the malleus. The FE model was verified by comparing the model-predicted results with published experimental measurements. The equivalent sound pressure (ESP) of RW stimulation was calculated via comparing the differential intracochlear pressure produced by the RW stimulation and the normal eardrum sound stimulation. The results show that the increase of stapedial annular ligament and partial fixation of the malleus decreases RW stimulation's ESP prominently at lower frequencies. In contrast, the stapedial abnormal bone growth deteriorates RW stimulation's ESP severely at higher frequencies. PMID:27034709

  17. Performance Evaluation of O-Ring Seals in Model 9975 Packaging Assemblies (U)

    SciTech Connect

    Skidmore, Eric

    1998-12-28

    The Materials Consultation Group of SRTC has completed a review of existing literature and data regarding the useable service life of Viton® GLT fluoroelastomer O-rings currently used in the Model 9975 packaging assemblies. Although the shipping and transportation period is normally limited to 2 years, it is anticipated that these packages will be used for longer-term storage of Pu-bearing materials in KAMS (K-Area Materials Storage) prior to processing or disposition in the APSF (Actinide Packaging and Storage Facility). Based on the service conditions and review of available literature, Materials Consultation concludes that there is sufficient existing data to establish the technical basis for storage of Pu-bearing materials using Parker Seals O-ring compound V835-75 (or equivalent) for up to 10 years following the 2-year shipping period. Although significant physical deterioration of the O-rings and release of product is not expected, definite changes in physical properties will occur. However, due to the complex relationship between elastomer formulation, seal properties, and competing degradation mechanisms, the actual degree of property variation and impact upon seal performance is difficult to predict. Therefore, accelerated aging and/or surveillance programs are recommended to validate the assumptions outlined in this report and to assess the long-term performance of O-ring seals under actual service conditions. Such programs could provide a unique opportunity to develop nonexistent long-term performance data, as well as address storage extension issues if necessary.

  18. Integrated DEA Models and Grey System Theory to Evaluate Past-to-Future Performance: A Case of Indian Electricity Industry

    PubMed Central

    Wang, Chia-Nan; Tran, Thanh-Tuyen

    2015-01-01

    Economic and population growth, together with higher demand for energy, has created many concerns for the Indian electricity industry, whose capacity stands at 211 gigawatts, mostly in coal-fired plants. Due to insufficient fuel supply, India suffers from a shortage of electricity generation, leading to rolling blackouts; thus, performance evaluation and ranking of the industry become significant issues. This study evaluates the rankings of the companies under the control of the Ministry of Power. It also tests whether there are significant differences between two DEA models, Malmquist non-radial and Malmquist radial. One advanced Malmquist productivity index (MPI) model is then chosen to examine these companies' performance in recent years and over the next few years, using forecasting results from Grey system theory. In total, realistic data for 14 companies are considered in this evaluation after strict selection from the whole industry. The results show that none of the companies exhibits many abrupt changes in its scores, and none is consistently good or consistently outstanding, which demonstrates the broad applicability of the integrated methods. This integrated numerical research gives better “past-present-future” insights into performance evaluation in the Indian electricity industry. PMID:25821854
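
    Grey-system forecasting of the kind combined with the DEA models is commonly the GM(1,1) model; a compact sketch is given below. The input series and the two-step horizon are synthetic assumptions; the study's company data are not reproduced.

    import numpy as np

    def gm11_forecast(series, horizon=3):
        """Grey GM(1,1) model: fit a short positive series and forecast `horizon` steps ahead."""
        x0 = np.asarray(series, dtype=float)
        x1 = np.cumsum(x0)                                    # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])                         # background values
        B = np.column_stack([-z1, np.ones(len(z1))])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]      # development coefficient and grey input
        k = np.arange(1, len(x0) + horizon)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a     # accumulated forecasts
        x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))   # inverse accumulation
        return x0_hat[-horizon:]

    print(gm11_forecast([2.87, 3.01, 3.25, 3.40, 3.62], horizon=2))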

  20. A parametric multiclass Bayes error estimator for the multispectral scanner spatial model performance evaluation

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The probability of correct classification of the various populations in the data was defined as the primary performance index. Because the multispectral data are multiclass in nature, a Bayes error estimation procedure was required that depends on a set of class statistics alone. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear, shift-invariant, multiple-port system in which the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial (and hence spectral) correlation matrices through the system, was developed.
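
    For intuition, the N-dimensional integral defining the multiclass Bayes error can be approximated by Monte Carlo sampling when the class-conditional densities are Gaussian. The sketch below makes that assumption; the class means, covariances, and priors are invented, not statistics of the multispectral data.

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    # Three hypothetical classes in a 4-band feature space, equal priors.
    means = [np.zeros(4), np.full(4, 1.5), np.array([3.0, 0.0, 1.0, -1.0])]
    covs = [np.eye(4), 1.5 * np.eye(4), 0.8 * np.eye(4)]
    priors = np.array([1 / 3, 1 / 3, 1 / 3])

    n = 50_000
    samples, labels = [], []
    for c, (m, S) in enumerate(zip(means, covs)):
        k = int(n * priors[c])
        samples.append(rng.multivariate_normal(m, S, size=k))
        labels.append(np.full(k, c))
    X, y = np.vstack(samples), np.concatenate(labels)

    # Bayes rule: assign each sample to the class with the highest prior-weighted density.
    densities = np.column_stack([priors[c] * multivariate_normal(means[c], covs[c]).pdf(X)
                                 for c in range(3)])
    bayes_error = np.mean(densities.argmax(axis=1) != y)
    print(f"Monte Carlo estimate of the Bayes error: {bayes_error:.3f}")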

  1. ATAMM enhancement and multiprocessor performance evaluation

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamoy; Obando, Rodrigo; Malekpour, Mahyar R.; Jones, Robert L., III; Mandala, Brij Mohan V.

    1991-01-01

    ATAMM (Algorithm To Architecture Mapping Model) enhancement and multiprocessor performance evaluation is discussed. The following topics are included: the ATAMM model; ATAMM enhancement; ADM (Advanced Development Model) implementation of ATAMM; and ATAMM support tools.

  2. Evaluation of tests of central nervous system performance after hypoxemia for a model for cognitive impairment.

    PubMed

    van der Post, J; Noordzij, L A W; de Kam, M L; Blauw, G J; Cohen, A F; van Gerven, J M A

    2002-12-01

    The sensitivity of several neurophysiological and cognitive tests to different levels of hypoxia was investigated. Cerebral hypoxia in healthy volunteers may be a disease model for dementia or other forms of brain dysfunction. Twelve healthy subjects were included in a randomized, single-blind, placebo-controlled, three-period cross-over trial. They received three air/N2 gas mixtures via mask breathing [aimed at peripheral oxygen saturation (SPO2) values of > 97% (placebo), 90% and 80%, with normal end-tidal CO2]. Central nervous system effects were tested regularly for 130 min by saccadic and smooth pursuit eye movements, electro-encephalogram, visual analogue scales and cognitive tests. Treatments were well tolerated. Compared to SPO2 90%, SPO2 80% reduced saccadic peak velocity by 16.4 degrees/s [confidence interval (CI) -26.3, -6.4], increased occipital delta power by 14.3% (CI 3.6, 25.1), and significantly increased most cognitive reaction times. SPO2 80% also decreased correct responses for the binary choice task and serial word recognition [-1.3 (-2.2, -0.3) and -3.5 (-6.2, -0.8), respectively] compared to SPO2 90%. Cognitive performance was decreased by SPO2 80% and increased by SPO2 90% compared to placebo. Sensitive effect measurements can be identified for these interventions. The applicability as a model for cognitive impairment should be investigated further. PMID:12503833

  3. Simulation of air quality over Central-Eastern Europe - Performance evaluation of WRF-CAMx modelling system

    NASA Astrophysics Data System (ADS)

    Maciejewska, Katarzyna; Juda-Rezler, Katarzyna; Reizer, Magdalena

    2013-04-01

    The main goal of the presented work is to evaluate the accuracy of modelling atmospheric transport and transformation on the regional scale, performed with 25 km grid spacing. A coupled mesoscale weather model - chemical transport model (CTM) system has been applied for Europe under the European-American AQMEII project (Air Quality Modelling Evaluation International Initiative - http://aqmeii.jrc.ec.europa.eu/). The modelling domain was centered over Denmark (57.00°N, 10.00°E) with 172 x 172 grid points in the x and y directions, using a Lambert conformal map projection. In the applied modelling system, the Comprehensive Air Quality Model with extensions (CAMx) from ENVIRON International Corporation (Novato, California) was coupled off-line to the Weather Research and Forecasting (WRF) model developed by the National Center for Atmospheric Research (NCAR). WRF-CAMx simulations have been carried out for 2006. The anthropogenic emissions database was provided by TNO (Netherlands Organisation for Applied Scientific Research) under the AQMEII initiative. Area and line emissions were processed with the emission model EMIL (Juda-Rezler et al., 2012) [1], while for point sources the EPS3 model (Emission Processor v.3 from ENVIRON) was used to obtain the vertical distribution of emissions. Boundary conditions were obtained by coupling results of the GEMS (Global and regional Earth-system Monitoring using Satellite and in-situ data) modelling system with satellite observations. The modelling system has been evaluated for the area of Central-Eastern Europe with regard to ozone and particulate matter (PM) concentrations. For each pollutant, measured data from rural background AirBase and EMEP stations with more than 75% of daily data have been used. The 'operational' evaluation methodology proposed by Juda-Rezler et al. (2012) was applied. The selected set of metrics consists of 5 groups: bias measures, error measures, correlation measures, measures of model variance and spread, which
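
    An 'operational' evaluation of this kind rests on a handful of standard statistics for paired model-observation series; a minimal sketch of bias, error and correlation measures follows. The specific metric set of Juda-Rezler et al. (2012) is not reproduced, and the PM10 values below are invented.

    import numpy as np

    def operational_metrics(model, obs):
        """Mean bias, normalized mean bias, RMSE and Pearson correlation for paired series."""
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        mb = np.mean(model - obs)
        nmb = mb / np.mean(obs)
        rmse = np.sqrt(np.mean((model - obs) ** 2))
        r = np.corrcoef(model, obs)[0, 1]
        return {"MB": mb, "NMB": nmb, "RMSE": rmse, "r": r}

    # Hypothetical daily PM10 concentrations (ug/m3) at one rural background station.
    obs = np.array([22.0, 35.0, 41.0, 18.0, 27.0, 55.0, 30.0])
    mod = np.array([19.0, 30.0, 36.0, 20.0, 24.0, 44.0, 28.0])
    print(operational_metrics(mod, obs))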

  4. Functional Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Greenisen, Michael C.; Hayes, Judith C.; Siconolfi, Steven F.; Moore, Alan D.

    1999-01-01

    The Extended Duration Orbiter Medical Project (EDOMP) was established to address specific issues associated with optimizing the ability of crews to complete mission tasks deemed essential to entry, landing, and egress for spaceflights lasting up to 16 days. The main objectives of this functional performance evaluation were to investigate the physiological effects of long-duration spaceflight on skeletal muscle strength and endurance, as well as aerobic capacity and orthostatic function. Long-duration exposure to a microgravity environment may produce physiological alterations that affect crew ability to complete critical tasks such as extravehicular activity (EVA), intravehicular activity (IVA), and nominal or emergency egress. Ultimately, this information will be used to develop and verify countermeasures. The answers to three specific functional performance questions were sought: (1) What are the performance decrements resulting from missions of varying durations? (2) What are the physical requirements for successful entry, landing, and emergency egress from the Shuttle? and (3) What combination of preflight fitness training and in-flight countermeasures will minimize in-flight muscle performance decrements? To answer these questions, the Exercise Countermeasures Project looked at physiological changes associated with muscle degradation as well as orthostatic intolerance. A means of ensuring motor coordination was necessary to maintain proficiency in piloting skills, EVA, and IVA tasks. In addition, it was necessary to maintain musculoskeletal strength and function to meet the rigors associated with moderate altitude bailout and with nominal or emergency egress from the landed Orbiter. Eight investigations, referred to as Detailed Supplementary Objectives (DSOs) 475, 476, 477, 606, 608, 617, 618, and 624, were conducted to study muscle degradation and the effects of exercise on exercise capacity and orthostatic function (Table 3-1). This chapter is divided into

  5. Using modeling and simulation to evaluate stability and traction performance of a track-laying robotic vehicle

    NASA Astrophysics Data System (ADS)

    Gunter, D. D.; Bylsma, W. W.; Edgar, K.; Letherwood, M. D.; Gorsich, D. J.

    2005-05-01

    DOD has been involved in the research, development and acquisition of unmanned ground vehicle systems to support troops in the field while minimizing the risks associated with supplying those troops. Engineers and scientists at TARDEC are using computer-based modeling and simulation (M&S) to investigate how modifications to unmanned ground vehicles impact their mobility and stability, and to predict the performance levels attainable for these types of vehicle systems. The objective of this paper is to describe the computer-based modeling, simulation, and limited field testing effort undertaken to investigate the dynamic performance of an unmanned tracked vehicle system through a full matrix of tests designed to evaluate system shock, vibration, dynamic stability and off-road mobility characteristics. We describe the multi-body modeling methodology used as well as the characteristic data incorporated to define the models and their subsystems. The analysis applies M&S to baseline the dynamic performance of the vehicle and compares these results with performance levels recorded for several manned vehicle systems. We identify the virtual test matrix over which the models were executed. Finally, we describe our efforts to visualize the findings through computer-generated animations of the vehicle system negotiating the various virtual automotive tests making up the test matrix.

  6. Validation of alternative models in genetic evaluation of racing performance in North Swedish and Norwegian cold-blooded trotters.

    PubMed

    Olsen, H F; Klemetsdal, G; Odegård, J; Arnason, T

    2012-04-01

    There have been several approaches to the estimation of breeding values of performance in trotters, and the objective of this study was to validate different alternatives for genetic evaluation of racing performance in the North Swedish and Norwegian cold-blooded trotters. The current bivariate approach with the traits racing status (RACE) and earnings (EARN) was compared with a threshold-linear animal model and the univariate alternative with the performance trait only. The models were compared based on cross-validation of standardized earnings, using mean-squared errors of prediction (MSEP) and the correlation between the phenotype (Y) and the estimated breeding value (EBV). Despite possible effects of selection, a rather high estimate of heritability of EARN was found in our univariate analysis. The genetic trend estimate for EARN was clearly higher in the bivariate specification than in the univariate model, as a consequence of the considerable size of the estimated heritability of RACE and its high correlation with EARN (approximately 0.8). RACE is highly influenced by ancestry rather than the on-farm performance of the horse itself. Consequently, the use of RACE in the genetic analysis may inflate the genetic trend of EARN because of a double counting of pedigree information. Nevertheless, because of the higher predictive ability of the bivariate specification, the improved ranking of animals within a year-class, and the inability to discriminate between models on genetic trend, we propose to base prediction of breeding values on the current bivariate model. PMID:22394238

  7. Performance evaluation and modelling studies of gravel–coir fibre–sand multimedia stormwater filter.

    PubMed

    Samuel, Manoj P; Senthilvel, S; Tamilmani, D; Mathew, A C

    2012-09-01

    A horizontal flow multimedia stormwater filter was developed and tested for hydraulic efficiency and pollutant removal efficiency. Gravel, coconut (Cocos nucifera) fibre and sand were selected as the media and filled in 1:1:1 proportion. A fabric screen made of woven sisal hemp was used to separate the media. The adsorption behaviour of coir fibre was determined in a series of column and batch studies and the corresponding isotherms were developed. The hydraulic efficiency of the filter showed a diminishing trend as the sediment level in the inflow increased. The filter exhibited 100% sediment removal at lower sediment concentrations in inflow water (>6 g L(-1)). The filter could remove NO3(-), SO4(2-) and total solids (TS) effectively. Removal percentages of Mg(2+) and Na(+) were also found to be good. Similar results were obtained from a field evaluation study. Studies were also conducted to determine the pattern of silt and sediment deposition inside the filter body. The effects of residence time and rate of flow on removal percentages of NO3(-) and TS were also investigated. In addition, a multiple regression equation that mathematically represents the filtration process was developed. Based on estimated annual costs and returns, all financial viability criteria (internal rate of return, net present value and benefit-cost ratio) were found favourable and affordable to farmers for investment in the developed filtration system. The model MUSIC was calibrated and validated for field conditions with respect to the developed stormwater filter.

  8. RLS Instrument Radiometric Model: Instrument performance theoretical evaluation and experimental checks

    NASA Astrophysics Data System (ADS)

    Quintana, César; Ramos, Gonzalo; Moral, Andoni; Rodriguez, Jose Antonio; Pérez, Carlos; Hutchinson, Ian; INGLEY, Richard; Rull, Fernando

    2016-10-01

    Raman Laser Spectrometer (RLS) is one of the Pasteur payload instruments located on the Rover of the ExoMars mission, within the ESA's Aurora Exploration Programme. RLS will explore the Mars surface composition through the Raman spectroscopy technique. The instrument is divided into several units: a laser for Raman emission stimulation, an internal optical head (iOH) for sample excitation and for Raman emission recovery, a spectrometer with a CCD located at its output (SPU), the optical harness (OH) connecting the units, from the laser to the excitation path of the iOH and from the iOH reception path to the spectrometer, and the corresponding electronics for CCD operation. Due to the variability of the samples to be analyzed on Mars, a radiometric prediction of the instrument performance is of critical importance. In such a framework, and taking into account the SNR (signal-to-noise ratio) required for the achievement of scientifically successful results (proper information about the Mars surface composition), a radiometric model has been developed to provide the requirements for the different units, i.e. the laser irradiance and the iOH, OH, and SPU throughputs, and to establish which samples can be analyzed, in terms of their Raman emission and the ratio of the Raman signal to the fluorescence emission, among others. The radiometric model fundamentals (calculations and approximations), as well as the first results obtained during the breadboard characterization campaign, are reported here.
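
    As a rough illustration of the kind of signal-to-noise budget such a radiometric model chains together (source photon rate times unit throughputs, detector quantum efficiency and integration time, against shot, dark and read noise), here is a schematic calculation; all numerical values and the simplified noise terms are assumptions, not RLS figures.

```python
import math

def raman_snr(photon_rate, t_ioh, t_oh, t_spu, qe, t_int, dark_rate, read_noise):
    """Photon-counting SNR after propagating a Raman photon rate through the
    iOH, OH and SPU throughputs onto the CCD (shot + dark + read noise)."""
    signal = photon_rate * t_ioh * t_oh * t_spu * qe * t_int   # detected electrons
    noise = math.sqrt(signal + dark_rate * t_int + read_noise ** 2)
    return signal / noise

# Illustrative numbers only: photons/s from the sample, unit transmittances, CCD settings.
print(round(raman_snr(photon_rate=2.0e4, t_ioh=0.6, t_oh=0.8, t_spu=0.5,
                      qe=0.4, t_int=10.0, dark_rate=5.0, read_noise=8.0), 1))
```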

  9. Performance evaluation of the solar backscatter ultraviolet radiometer, model 2 (SBUV/2) inflight calibration system

    NASA Technical Reports Server (NTRS)

    Weiss, H.; Cebula, Richard P.; Laamann, K.; Mcpeters, R. D.

    1994-01-01

    The Solar Backscatter Ultraviolet Radiometer, Model 2 (SBUV/2) instruments, as part of their regular operation, deploy ground aluminum reflective diffusers to deflect solar irradiance into the instrument's field of view. Previous SBUV instrument diffusers have shown a tendency to degrade in their reflective efficiencies. This degradation will add a trend to the ozone measurements if left uncorrected. An extensive in-flight calibration system was designed into the SBUV/2 instruments to effectively measure the degradation of the solar diffuser (Ball Aerospace Systems Division 1981). Soon after launch, the NOAA-9 SBUV/2 calibration system was unable to track the diffuser's reflectivity changes due, in part, to design flaws (Frederick et al. 1986). Subsequently, the NOAA-11 SBUV/2 calibration system was redesigned, and an analysis of the first 2 years of data (Weiss et al. 1991) indicated that the NOAA-11 SBUV/2 onboard calibration system's performance exceeded preflight expectations. This paper describes the analysis of the first three years of NOAA-11 SBUV/2 calibration system data.

  10. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  11. Evaluating the catching performance of aerodynamic rain gauges through field comparisons and CFD modelling

    NASA Astrophysics Data System (ADS)

    Pollock, Michael; Colli, Matteo; Stagnaro, Mattia; Lanza, Luca; Quinn, Paul; Dutton, Mark; O'Donnell, Greg; Wilkinson, Mark; Black, Andrew; O'Connell, Enda

    2016-04-01

    Accurate rainfall measurement is a fundamental requirement in a broad range of applications including flood risk and water resource management. The most widely used method of measuring rainfall is the rain gauge, which is often also considered to be the most accurate. In the context of hydrological modelling, measurements from rain gauges are interpolated to produce an areal representation, which forms an important input to drive hydrological models and calibrate rainfall radars. In each stage of this process another layer of uncertainty is introduced. The initial measurement errors are propagated through the chain, compounding the overall uncertainty. This study looks at the fundamental source of error in the rainfall measurement itself, and specifically addresses the largest of these: the systematic 'wind-induced' error. Snowfall is outside the scope of this study. The shape of a precipitation gauge significantly affects its collection efficiency (CE) with respect to a reference measurement. This is due to the airflow around the gauge, which causes a deflection in the trajectories of the raindrops near the gauge orifice. Computational Fluid Dynamics (CFD) simulations are used to evaluate the time-averaged airflows realized around the EML ARG100, EML SBS500 and EML Kalyx-RG rain gauges when impacted by wind. These gauges have a similar aerodynamic profile (a shape comparable to that of a champagne flute) and they are used globally. The funnel diameters of the three gauges are 252 mm, 254 mm and 127 mm, respectively. The SBS500 is used by the UK Met Office and the Scottish Environment Protection Agency. Terms of comparison are provided by the results obtained for standard rain gauge shapes manufactured by Casella and OTT which, respectively, have a uniform and a tapered cylindrical shape. The simulations were executed for five different wind speeds: 2, 5, 7, 10 and 18 m s-1. Results indicate that aerodynamic gauges have a different impact on the time-averaged airflow patterns

  12. Evaluating Economic Performance and Policies.

    ERIC Educational Resources Information Center

    Thurow, Lester C.

    1987-01-01

    Argues that a social welfare approach to evaluating economic performance is inappropriate at the high school level. Provides several historical case studies which could be used to augment instruction aimed at the evaluation of economic performance and policies. (JDH)

  13. More Bias in Performance Evaluation?

    ERIC Educational Resources Information Center

    Gallagher, Michael C.

    1978-01-01

    The results of this study indicate that a single performance evaluation should not be used for different purposes since the stated purpose of the evaluation can affect the actual performance rating. (Author/IRT)

  14. Evaluation models and evaluation use

    PubMed Central

    Contandriopoulos, Damien; Brousselle, Astrid

    2012-01-01

    The use of evaluation results is at the core of evaluation theory and practice. Major debates in the field have emphasized the importance of both the evaluator’s role and the evaluation process itself in fostering evaluation use. A recent systematic review of interventions aimed at influencing policy-making or organizational behavior through knowledge exchange offers a new perspective on evaluation use. We propose here a framework for better understanding the embedded relations between evaluation context, choice of an evaluation model and use of results. The article argues that the evaluation context presents conditions that affect both the appropriateness of the evaluation model implemented and the use of results. PMID:23526460

  15. Data envelopment analysis model for the appraisal and relative performance evaluation of nurses at an intensive care unit.

    PubMed

    Osman, Ibrahim H; Berbary, Lynn N; Sidani, Yusuf; Al-Ayoubi, Baydaa; Emrouznejad, Ali

    2011-10-01

    The appraisal and relative performance evaluation of nurses are very important and beneficial for both nurses and employers in an era of clinical governance, increased accountability and high standards of health care services. They enhance and consolidate the knowledge and practical skills of nurses by identification of training and career development plans as well as improvement in health care quality services, increase in job satisfaction and use of cost-effective resources. In this paper, a data envelopment analysis (DEA) model is proposed for the appraisal and relative performance evaluation of nurses. The model is validated on thirty-two nurses working at an Intensive Care Unit (ICU) at one of the most recognized hospitals in Lebanon. The DEA was able to classify nurses into efficient and inefficient ones. The set of efficient nurses was used to establish an internal best practice benchmark to project career development plans for improving the performance of other inefficient nurses. The DEA result confirmed the ranking of some nurses and highlighted injustice in other cases that were produced by the currently practiced appraisal system. Further, the DEA model is shown to be an effective talent management and motivational tool as it can provide clear managerial plans related to promoting, training and development activities from the perspective of nurses, hence increasing their satisfaction, motivation and acceptance of appraisal results. Due to such features, the model is currently being considered for implementation at the ICU. Finally, the ratio of the number of DEA units to the number of input/output measures is revisited, with new suggested values for its upper and lower limits depending on the type of DEA model and the desired number of efficient units from a managerial perspective.
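
    An input-oriented CCR formulation is one common way to obtain DEA efficiency scores of the kind described above; the sketch below solves it as a small linear program per nurse (decision-making unit) with scipy, using invented input/output data purely for illustration, not the study's measures.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (inputs x units), Y: (outputs x units); efficient units score 1.0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                     # minimize theta
    A_in = np.hstack([-X[:, [o]], X])               # sum_j lam_j*x_ij <= theta*x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y])       # sum_j lam_j*y_rj >= y_ro
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.x[0]

# Hypothetical data: 2 inputs (hours worked, sick days) and 2 outputs
# (patients cared for, quality score) for 5 nurses.
X = np.array([[40, 42, 38, 45, 40],
              [ 2,  5,  1,  8,  4]], dtype=float)
Y = np.array([[12, 14, 11, 13, 15],
              [ 8,  7,  9,  6,  8]], dtype=float)
for o in range(X.shape[1]):
    print(f"nurse {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```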

  17. Evaluating the performance of the Lee-Carter method and its variants in modelling and forecasting Malaysian mortality

    NASA Astrophysics Data System (ADS)

    Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.

    2014-12-01

    This study investigated the performance of the Lee-Carter (LC) method and its variants in modelling and forecasting Malaysian mortality. These include the original LC, the Lee-Miller (LM) variant and the Booth-Maindonald-Smith (BMS) variant. These methods were evaluated using Malaysian mortality data, measured as age-specific death rates (ASDR), for 1971 to 2009 for the overall population, while data for 1980-2009 were used in separate models for the male and female populations. The performance of the variants was examined in terms of the goodness of fit of the models and forecasting accuracy. Comparison was made based on several criteria, namely mean square error (MSE), root mean square error (RMSE), mean absolute deviation (MAD) and mean absolute percentage error (MAPE). The results indicate that the BMS method performed best in in-sample fitting, both for the overall population and when the models were fitted separately for the male and female populations. However, in the case of out-of-sample forecast accuracy, the BMS method was best only when the data were fitted to the overall population. When the data were fitted separately by sex, the original LC method (LCnone) performed better for the male population and the LM method for the female population.
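
    For readers unfamiliar with the method, the original Lee-Carter model expresses log age-specific death rates as log m(x,t) = a(x) + b(x)k(t) and is usually fitted via a singular value decomposition; the sketch below shows that fit together with the four accuracy measures listed above, applied to a synthetic mortality surface rather than the Malaysian data.

```python
import numpy as np

def fit_lee_carter(log_m):
    """log_m: (ages x years) matrix of log age-specific death rates.
    Returns a_x, b_x, k_t under the usual constraints sum(b) = 1, sum(k) = 0."""
    a_x = log_m.mean(axis=1)
    U, S, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
    b_x, k_t = U[:, 0], S[0] * Vt[0]
    scale = b_x.sum()
    return a_x, b_x / scale, k_t * scale

def accuracy(obs, fit):
    err = obs - fit
    return {"MSE": np.mean(err ** 2),
            "RMSE": np.sqrt(np.mean(err ** 2)),
            "MAD": np.mean(np.abs(err)),
            "MAPE": np.mean(np.abs(err / obs)) * 100}

# Synthetic log-mortality surface: 5 age groups x 30 years with a declining trend.
rng = np.random.default_rng(0)
true_a = np.linspace(-6, -2, 5)
true_k = np.linspace(5, -5, 30)
log_m = true_a[:, None] + 0.2 * true_k + rng.normal(0, 0.05, (5, 30))

a_x, b_x, k_t = fit_lee_carter(log_m)
fitted = a_x[:, None] + np.outer(b_x, k_t)
print({name: round(val, 4) for name, val in accuracy(log_m, fitted).items()})
```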

  18. Evaluation of a micro-scale wind model's performance over realistic building clusters using wind tunnel experiments

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi

    2016-08-01

    The simulation performance over complex building clusters of a wind simulation model (Wind Information Field Fast Analysis model, WIFFA) in a micro-scale air pollutant dispersion model system (Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data, including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel experiment data and the NJU-FZ experiment data (Nanjing University-Fang Zhuang neighborhood wind tunnel experiment data). The results show that the wind model can reproduce the vortexes triggered by urban buildings well, and the flow patterns in urban street canyons and building clusters can also be represented. Due to the complex shapes of buildings and their distributions, the simulation deviations from the measurements are usually caused by the simplification of the building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared to traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields of a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min when run on a personal computer.

  19. An evaluation of 1D loss model collections for the off-design performance prediction of automotive turbocharger compressors

    NASA Astrophysics Data System (ADS)

    Harley, P.; Spence, S.; Early, J.; Filsinger, D.; Dietrich, M.

    2013-12-01

    Single-zone modelling is used to assess different collections of impeller 1D loss models. Three collections of loss models have been identified in literature, and the background to each of these collections is discussed. Each collection is evaluated using three modern automotive turbocharger style centrifugal compressors; comparisons of performance for each of the collections are made. An empirical data set taken from standard hot gas stand tests for each turbocharger is used as a baseline for comparison. Compressor range is predicted in this study; impeller diffusion ratio is shown to be a useful method of predicting compressor surge in 1D, and choke is predicted using basic compressible flow theory. The compressor designer can use this as a guide to identify the most compatible collection of losses for turbocharger compressor design applications. The analysis indicates the most appropriate collection for the design of automotive turbocharger centrifugal compressors.

  20. GROUND-WATER MODEL TESTING: SYSTEMATIC EVALUATION AND TESTING OF CODE FUNCTIONALITY AND PERFORMANCE

    EPA Science Inventory

    Effective use of ground-water simulation codes as management decision tools requires the establishment of their functionality, performance characteristics, and applicability to the problem at hand. This is accomplished through application of a systematic code-testing protocol and...

  1. Evaluating Escherichia coli removal performance in stormwater biofilters: a preliminary modelling approach.

    PubMed

    Chandrasena, G I; Deletic, A; McCarthy, D T

    2013-01-01

    Stormwater biofilters are not currently optimised for pathogen removal since the behaviour of these pollutants within the stormwater biofilters is poorly understood. Modelling is a common way of optimising these systems, which also provides a better understanding of the major processes that govern the pathogen removal. This paper provides an overview of a laboratory-scale study that investigated how different design and operational conditions impact pathogen removal in stormwater biofilters. These data were then used to develop a modelling tool that can be used to optimise the design and operation of stormwater biofilters. The model uses continuous simulation, in which adsorption and desorption are dominant during wet weather periods and first-order die-off kinetics are significant in the dry periods between wet weather events. Relatively high Nash-Sutcliffe efficiencies (>0.5) indicate that the calibrated model is in good agreement with observed data, and the optimised model parameters were comparable with values reported in the literature. The model's sensitivity is highest towards the adsorption process parameter, followed by the die-off and desorption rate parameters, which implies that adsorption is the governing process of the model. Vegetation is found to have an impact on the wet weather processes since the adsorption and desorption parameters vary significantly with the different plant configurations. The model is yet to be tested against field data and needs to be improved to represent the effect of some other biofilter design configurations, such as the inclusion of a submerged zone.
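
    The Nash-Sutcliffe efficiency used as the goodness-of-fit criterion above is a one-line calculation; the sketch below shows it on hypothetical observed and modelled E. coli concentrations (not the study's data).

```python
import numpy as np

def nash_sutcliffe(observed, modelled):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit
    and values above 0.5 are commonly read as acceptable agreement."""
    observed = np.asarray(observed, float)
    modelled = np.asarray(modelled, float)
    return 1.0 - np.sum((observed - modelled) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical outflow E. coli concentrations (MPN/100 mL) for five storm events.
obs = [1200.0, 800.0, 1500.0, 400.0, 950.0]
sim = [1100.0, 900.0, 1400.0, 500.0, 1000.0]
print(round(nash_sutcliffe(obs, sim), 3))
```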

  2. BPACK -- A computer model package for boiler reburning/co-firing performance evaluations. User's manual, Volume 1

    SciTech Connect

    Wu, K.T.; Li, B.; Payne, R.

    1992-06-01

    This manual presents and describes a package of computer models uniquely developed for boiler thermal performance and emissions evaluations by the Energy and Environmental Research Corporation. The model package permits boiler heat transfer, fuels combustion, and pollutant emissions predictions related to a number of practical boiler operations such as fuel-switching, fuels co-firing, and reburning NOx reductions. The models are adaptable to most boiler/combustor designs and can handle burner fuels in solid, liquid, gaseous, and slurried forms. The models are also capable of performing predictions for combustion applications involving gaseous-fuel reburning, and co-firing of solid/gas, liquid/gas, gas/gas, and slurry/gas fuels. The model package is named BPACK (Boiler Package) and consists of six computer codes, of which three are main computational codes and the other three are input codes. The three main codes are: (a) a two-dimensional furnace heat-transfer and combustion code; (b) a detailed chemical-kinetics code; and (c) a boiler convective passage code. This user's manual presents the computer model package in two volumes. Volume 1 describes in detail a number of topics of general user interest, including the physical and chemical basis of the models, a complete description of the model applicability, options, input/output, and the default inputs. Volume 2 contains a detailed record of worked examples to assist users in applying the models and to illustrate the versatility of the codes.

  3. Evaluating the performance of a glacier erosion model applied to Peyto Glacier, Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Vogt, R.; Mlynowski, T. J.; Menounos, B.

    2013-12-01

    Glaciers are effective agents of erosion for many mountainous regions, but primary rates of erosion are difficult to quantify due to unknown conditions at the glacier bed. We develop a numerical model of subglacial erosion and passively couple it to a vertically integrated ice flow model (UBC regional glaciation model). The model accounts for seasonal changes in water pressure at the glacier bed which affect rates of abrasion and quarrying. We apply our erosion model to Peyto Glacier, and compare estimates of glacier erosion to the mass of fine sediment contained in a lake immediately down valley from the glacier. A series of experiments with our model and ones based on subglacial sliding rates are run to explore model sensitivity to bedrock hardness, seasonal hydrology, changes in mass balance, and longer-term dimensional changes of the glacier. Our experiments show that, as expected, erosion rates are most sensitive to bedrock hardness and changes in glacier mass balance. Silt and clay contained in Peyto Lake primarily originate from the glacier, and represent sediments derived from abrasion and comminution of material produced by quarrying. Average specific sediment yield during the period AD 1917-1970 from the lake is 467±190 Mg km-2 yr-1 and reaches a maximum of 928 Mg km-2 yr-1 in AD 1941. Converting to a specific sediment yield, modelled average abrasion and quarrying rates during the comparative period are 142±44 Mg km-2 yr-1 and 1167±213 Mg km-2 yr-1 respectively. Modelled quarrying accounts for approximately 85-95% of the erosion occurring beneath the glacier. The basal sliding model estimates combined abrasion and quarrying. During the comparative period, estimated yields average 427±136 Mg km-2 yr-1, lower than the combined abrasion and quarrying models. Both models predict maximum sediment yield when Peyto Glacier reached its maximum extent. The simplistic erosion model shows higher sensitivity to climate, as seen by accentuated sediment yield peaks

  4. Distributed Space Mission Design for Earth Observation Using Model-Based Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Nag, Sreeja; LeMoigne-Stewart, Jacqueline; Cervantes, Ben; DeWeck, Oliver

    2015-01-01

    Distributed Space Missions (DSMs) are gaining momentum in their application to earth observation missions owing to their unique ability to increase observation sampling in multiple dimensions. DSM design is a complex problem with many design variables, multiple objectives determining performance and cost, and emergent, often unexpected, behaviors. There are very few open-access tools available to explore the tradespace of variables, minimize cost and maximize performance for pre-defined science goals, and thereby select the most optimal design. This paper presents a software tool that can generate multiple DSM architectures based on pre-defined design variable ranges and size those architectures in terms of predefined science and cost metrics. The tool will help a user select Pareto-optimal DSM designs based on design of experiments techniques. The tool will be applied to some earth observation examples to demonstrate its applicability in making key decisions between different performance metrics and cost metrics early in the design lifecycle.
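
    Selecting Pareto-optimal architectures from an enumerated tradespace amounts to a non-dominated filter over (performance, cost) pairs; the sketch below is a generic implementation with invented metrics, not the tool's actual code.

```python
def pareto_front(designs):
    """designs: list of (name, performance, cost); higher performance and lower
    cost are better. Returns the non-dominated subset."""
    front = []
    for name, perf, cost in designs:
        dominated = any(p >= perf and c <= cost and (p > perf or c < cost)
                        for _, p, c in designs)
        if not dominated:
            front.append((name, perf, cost))
    return front

# Hypothetical DSM architectures: (label, science performance score, cost in $M).
candidates = [("A", 0.70, 120), ("B", 0.85, 200), ("C", 0.60, 90),
              ("D", 0.85, 150), ("E", 0.75, 160)]
print(pareto_front(candidates))   # -> A, C and D survive the filter
```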

  5. Surface characteristics modeling and performance evaluation of urban building materials using LiDAR data.

    PubMed

    Li, Xiaolu; Liang, Yu

    2015-05-20

    Analysis of light detection and ranging (LiDAR) intensity data to extract surface features is of great interest in remote sensing research. One potential application of LiDAR intensity data is target classification. A new bidirectional reflectance distribution function (BRDF) model is derived for target characterization of rough and smooth surfaces. Based on the geometry of our coaxial full-waveform LiDAR system, the integration method is improved through coordinate transformation to establish the relationship between the BRDF model and intensity data of LiDAR. A series of experiments using typical urban building materials are implemented to validate the proposed BRDF model and integration method. The fitting results show that three parameters extracted from the proposed BRDF model can distinguish the urban building materials from perspectives of roughness, specular reflectance, and diffuse reflectance. A comprehensive analysis of these parameters will help characterize surface features in a physically rigorous manner.

  6. Artificial intelligence modeling to evaluate field performance of photocatalytic asphalt pavement for ambient air purification.

    PubMed

    Asadi, Somayeh; Hassan, Marwa; Nadiri, Ataallah; Dylla, Heather

    2014-01-01

    In recent years, the application of titanium dioxide (TiO₂) as a photocatalyst in asphalt pavement has received considerable attention for purifying ambient air from traffic-emitted pollutants via photocatalytic processes. In order to control the increasing deterioration of ambient air quality, urgent and proper risk assessment tools are deemed necessary. However, in practice, monitoring all process parameters for various operating conditions is difficult due to the complex and non-linear nature of air pollution-based problems. Therefore, the development of models to predict air pollutant concentrations is very useful because it can provide early warnings to the population and also reduce the number of measuring sites. This study used artificial neural network (ANN) and neuro-fuzzy (NF) models to predict NOx concentration in the air as a function of traffic count (Tr) and climatic conditions including humidity (H), temperature (T), solar radiation (S), and wind speed (W) before and after the application of TiO₂ on the pavement surface. These models are well suited to this task because they can be trained on historical data and can capture highly non-linear relationships. To build these models, data were collected from a field study where an aqueous nano-TiO₂ solution was sprayed on a 0.2-mile section of asphalt pavement in Baton Rouge, LA. Results of this study showed that the NF model provided a better fit to NOx measurements than the ANN model in the training, validation, and test steps. Results of a parametric study indicated that traffic level, relative humidity, and solar radiation had the most influence on photocatalytic efficiency.
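
    As a minimal illustration of the ANN side of such a study, the sketch below regresses a NOx concentration on traffic count and weather covariates with a small multilayer perceptron; the synthetic data and network size are assumptions and do not reproduce the paper's ANN or neuro-fuzzy configurations.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 500
# Synthetic predictors: traffic count, humidity (%), temperature (C), solar radiation (W/m2), wind speed (m/s).
X = np.column_stack([rng.integers(50, 500, n), rng.uniform(20, 90, n),
                     rng.uniform(5, 35, n), rng.uniform(0, 1000, n), rng.uniform(0, 10, n)])
# Synthetic NOx response (ppb): rises with traffic, falls with solar radiation and wind.
y = 40 + 0.08 * X[:, 0] - 0.01 * X[:, 3] - 1.5 * X[:, 4] + rng.normal(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)
print("R^2 on held-out data:", round(model.score(scaler.transform(X_te), y_te), 3))
```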

  7. Performance Standards and Evaluations in IR Test Collections: Cluster-Based Retrieval Models.

    ERIC Educational Resources Information Center

    Shaw, W. M., Jr.; And Others

    1997-01-01

    Describes a study that computed low performance standards for the group of queries in 13 information retrieval (IR) test collections. Derived from the random graph hypothesis, these standards represent the highest levels of retrieval effectiveness that can be obtained from meaningless clustering structures. (Author/LRW)

  8. Development and Evaluation of a Performance Modeling Flight Test Approach Based on Quasi Steady-State Maneuvers

    NASA Technical Reports Server (NTRS)

    Yechout, T. R.; Braman, K. B.

    1984-01-01

    This report describes the development, implementation and flight test evaluation of a performance modeling technique which required a limited amount of quasi-steady-state flight test data to predict the overall one-g performance characteristics of an aircraft. The concept definition phase of the program included development of: (1) the relationships for defining aerodynamic characteristics from quasi-steady-state maneuvers; (2) a simplified in-flight thrust and airflow prediction technique; (3) a flight test maneuvering sequence which efficiently provided definition of baseline aerodynamic and engine characteristics, including power effects on lift and drag; and (4) the algorithms necessary for cruise and flight trajectory predictions. Implementation of the concept included design of the overall flight test data flow, definition of instrumentation system and ground test requirements, development and verification of all applicable software, and consolidation of the overall requirements in a flight test plan.

  9. Wireless sensors in complex networks: study and performance evaluation of a new hybrid model

    NASA Astrophysics Data System (ADS)

    Curia, Vincenzo; Santamaria, Amilcare Francesco; Sottile, Cesare; Voznak, Miroslav

    2014-05-01

    Many recent research efforts have confirmed that, given the natural evolution of telecommunication systems, they can be approached by a new modeling technique that is not based on the traditional approach of graph theory. The branch of complex networking, although young, introduces a new and powerful way of modeling networks, whether they are social, telecommunication or friendship networks. In this paper we propose a new modeling technique applied to Wireless Sensor Networks (WSNs). The purpose of the modeling is to improve distributed communication, quantified in terms of the clustering coefficient and the average diameter of the entire network. The main idea consists in the introduction of hybrid Data Mules, able to enhance the connectivity of the entire network. The degree distribution of individual nodes in the network follows a logarithmic trend, meaning that most of the nodes are not necessarily adjacent but, for each pair of them, there exists a relatively short path that connects them. The effectiveness of the proposed idea has been validated through an extensive simulation campaign, also demonstrating the power of complex and small-world networks.
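
    The two quantities used above to judge the hybrid topology, the clustering coefficient and the average path length (a proxy for the network diameter), can be computed directly with networkx; the small-world graph and the extra "mule" node below are only stand-ins for the simulated WSN.

```python
import networkx as nx

# Stand-in for the sensor network: a connected small-world (Watts-Strogatz) topology.
G = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=1)

print("average clustering coefficient:", round(nx.average_clustering(G), 3))
print("average shortest path length:", round(nx.average_shortest_path_length(G), 3))
print("diameter:", nx.diameter(G))

# A hybrid data mule can be mimicked by a node wired to a few distant regions.
G.add_node("mule")
for anchor in (0, 25, 50, 75):
    G.add_edge("mule", anchor)
print("path length with mule:", round(nx.average_shortest_path_length(G), 3))
```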

  10. OBJECTIVE REDUCTION OF THE SPACE-TIME DOMAIN DIMENSIONALITY FOR EVALUATING MODEL PERFORMANCE

    EPA Science Inventory

    In the United States, photochemical air quality models are the principal tools used by governmental agencies to develop emission reduction strategies aimed at achieving National Ambient Air Quality Standards (NAAQS). Before they can be applied with confidence in a regulatory sett...

  11. Evaluation of the Logistic Model for GAC Performance in Water Treatment

    EPA Science Inventory

    Full-scale field measurement and rapid small-scale column test data from the Greater Cincinnati (Ohio) Water Works (GCWW) were used to calibrate and investigate the application of the logistic model for simulating breakthrough of total organic carbon (TOC) in granular activated c...

  12. Proposal for a Conceptual Model for Evaluating Lean Product Development Performance: A Study of LPD Enablers in Manufacturing Companies

    NASA Astrophysics Data System (ADS)

    Osezua Aikhuele, Daniel; Mohd Turan, Faiz

    2016-02-01

    The instability in today's market and customers' emerging demands for mass-customized products are driving companies to seek cost-effective and time-efficient improvements in their production systems, and this has created real pressure to adopt new development architectures and operational parameters in order to remain competitive in the market. Among the architectures adopted is the integration of lean thinking in the product development process. However, due to the lack of a clear understanding of lean performance and its measurement, many companies are unable to implement and fully integrate the lean principle into their product development process. Without proper performance measurement, the performance level of the organizational value stream remains unknown and the specific areas of improvement related to the LPD program cannot be tracked, resulting in poor decision making in the LPD implementation. This paper therefore presents a conceptual model for evaluating LPD performance by identifying and analysing the core existing LPD enablers (Chief Engineer, Cross-functional teams, Set-based engineering, Poka-yoke (mistake-proofing), Knowledge-based environment, Value-focused planning and development, Top management support, Technology, Supplier integration, Workforce commitment and Continuous improvement culture).

  13. Rank and order: evaluating the performance of SNPs for individual assignment in a non-model organism.

    PubMed

    Storer, Caroline G; Pascal, Carita E; Roberts, Steven B; Templin, William D; Seeb, Lisa W; Seeb, James E

    2012-01-01

    Single nucleotide polymorphisms (SNPs) are valuable tools for ecological and evolutionary studies. In non-model species, the use of SNPs has been limited by the number of markers available. However, new technologies and decreasing technology costs have facilitated the discovery of a constantly increasing number of SNPs. With hundreds or thousands of SNPs potentially available, there is interest in comparing and developing methods for evaluating SNPs to create panels of high-throughput assays that are customized for performance, research questions, and resources. Here we use five different methods to rank 43 new SNPs and 71 previously published SNPs for sockeye salmon: F(ST), informativeness (I(n)), average contribution to principal components (LC), and the locus-ranking programs BELS and WHICHLOCI. We then tested the performance of these different ranking methods by creating 48- and 96-SNP panels of the top-ranked loci for each method and used empirical and simulated data to obtain the probability of assigning individuals to the correct population using each panel. All 96-SNP panels performed similarly and better than the 48-SNP panels except for the 96-SNP BELS panel. Among the 48-SNP panels, panels created from F(ST), I(n), and LC ranks performed better than panels formed using the top-ranked loci from the programs BELS and WHICHLOCI. The application of ranking methods to optimize panel performance will become more important as more high-throughput assays become available. PMID:23185290

  14. Rank and Order: Evaluating the Performance of SNPs for Individual Assignment in a Non-Model Organism

    PubMed Central

    Storer, Caroline G.; Pascal, Carita E.; Roberts, Steven B.; Templin, William D.; Seeb, Lisa W.; Seeb, James E.

    2012-01-01

    Single nucleotide polymorphisms (SNPs) are valuable tools for ecological and evolutionary studies. In non-model species, the use of SNPs has been limited by the number of markers available. However, new technologies and decreasing technology costs have facilitated the discovery of a constantly increasing number of SNPs. With hundreds or thousands of SNPs potentially available, there is interest in comparing and developing methods for evaluating SNPs to create panels of high-throughput assays that are customized for performance, research questions, and resources. Here we use five different methods to rank 43 new SNPs and 71 previously published SNPs for sockeye salmon: FST, informativeness (In), average contribution to principal components (LC), and the locus-ranking programs BELS and WHICHLOCI. We then tested the performance of these different ranking methods by creating 48- and 96-SNP panels of the top-ranked loci for each method and used empirical and simulated data to obtain the probability of assigning individuals to the correct population using each panel. All 96-SNP panels performed similarly and better than the 48-SNP panels except for the 96-SNP BELS panel. Among the 48-SNP panels, panels created from FST, In, and LC ranks performed better than panels formed using the top-ranked loci from the programs BELS and WHICHLOCI. The application of ranking methods to optimize panel performance will become more important as more high-throughput assays become available. PMID:23185290

  15. Performance evaluation of a web-based system to exchange Electronic Health Records using Queueing model (M/M/1).

    PubMed

    de la Torre, Isabel; Díaz, Francisco Javier; Antón, Míriam; Martínez, Mario; Díez, José Fernando; Boto, Daniel; López, Miguel; Hornero, Roberto; López, María Isabel

    2012-04-01

    Response time measurement of a web-based system is essential to evaluate its performance. This paper presents a comparison of the response times of a web-based system for Ophthalmologic Electronic Health Records (EHRs), TeleOftalWeb, when it uses different database models: Oracle 10g, dbXML 2.0, Xindice 1.2, and eXist 1.1.1. Modelling the system with tandem queueing networks allows us to estimate the service times of the different components of the system (CPU, network and databases). In order to calculate the times associated with the different databases, benchmarking techniques are used. The final objective of the comparison is to choose the database system resulting in the lowest response time for TeleOftalWeb and to compare the results obtained using a new benchmark.
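
    In an M/M/1 queue the mean response time follows directly from the arrival rate and the service rate, W = 1/(mu - lambda) for lambda < mu, which is how estimated service times translate into predicted response times in such a model; the rates below are illustrative only, not TeleOftalWeb measurements.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 results: utilization, mean number in system (L) and
    mean response time (W). Requires arrival_rate < service_rate."""
    if arrival_rate >= service_rate:
        raise ValueError("Unstable queue: arrival rate must be below service rate.")
    rho = arrival_rate / service_rate          # utilization
    L = rho / (1.0 - rho)                      # mean number in system
    W = 1.0 / (service_rate - arrival_rate)    # mean response time (L = lambda * W)
    return rho, L, W

# Illustrative comparison of two hypothetical database back-ends serving EHR requests.
for name, mu in [("relational", 12.0), ("native-XML", 9.0)]:   # service rates in requests/s
    rho, L, W = mm1_metrics(arrival_rate=8.0, service_rate=mu)
    print(f"{name}: utilization={rho:.2f}, L={L:.2f}, W={W * 1000:.0f} ms")
```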

  16. Evaluating Student Clinical Performance.

    ERIC Educational Resources Information Center

    Foster, Danny T.

    When the University of Iowa's athletic training education department developed evaluation criteria and methods to be used with students, attention was paid to validity, consistency, observation, and behaviors. The observations of student behaviors reflect three types of learning outcomes important to clinical education: cognitive, psychomotor, and…

  17. Instrument performance evaluation

    SciTech Connect

    Swinth, K.L.

    1993-03-01

    Deficiencies exist in both the performance and the quality of health physics instruments. Recognizing the implications of such deficiencies for the protection of workers and the public, in the early 1980s the DOE and the NRC encouraged the development of a performance standard and established a program to test a series of instruments against criteria in the standard. The purpose of the testing was to establish the practicality of the criteria in the standard, to determine the performance of a cross section of available instruments, and to establish a testing capability. Over 100 instruments were tested, resulting in a practical standard and an understanding of the deficiencies in available instruments. In parallel with the instrument testing, a value-impact study clearly established the benefits of implementing a formal testing program. An ad hoc committee also met several times to establish recommendations for the voluntary implementation of a testing program based on the studies and the performance standard. For several reasons, a formal program did not materialize. Ongoing tests and studies have supported the development of specific instruments and have helped specific clients understand the performance of their instruments. The purpose of this presentation is to trace the history of instrument testing to date and suggest the benefits of a centralized formal program.

  18. School Improvement Model--A Total Systems Approach to Teacher, Administrator, and Student Performance Evaluation. Occasional Paper No. 84-1.

    ERIC Educational Resources Information Center

    Broughton, Valerie J.

    The purpose of the School Improvement Model (SIM) Project is to investigate the links between systematically developed teacher performance evaluation, administrator performance evaluation, staff development interventions, and the quality of education as measured by student achievement results. Following the 4th year of the 5-year study,…

  19. Combustion modeling and performance evaluation in a full-scale rotary kiln incinerator.

    PubMed

    Chen, K S; Hsu, W T; Lin, Y C; Ho, Y T; Wu, C H

    2001-06-01

    This work summarizes the results of numerical investigations and in situ measurements for turbulent combustion in a full-scale rotary kiln incinerator (RKI). The three-dimensional (3D) governing equations for mass, momentum, energy, and species, together with the k-ε turbulence model, are formulated and solved using a finite volume method. Volatile gases from solid waste were simulated by gaseous CH4 distributed nonuniformly along the kiln bed. The combustion process was considered to be a two-step stoichiometric reaction for primary air mixed with CH4 gas in the combustion chamber. The mixing-controlled eddy-dissipation model (EDM) was employed to predict the conversion rates of CH4, O2, CO2, and CO. The results of the prediction show that reverse flows occur near the entrance of the first combustion chamber (FCC) and the turning point at the entrance to the second combustion chamber (SCC). Temperature and species are nonuniform and are vertically stratified. Meanwhile, additional mixing in the SCC enhances postflame oxidation. A combustion efficiency of up to 99.96% can be achieved at approximately 150% excess air and 20-30% secondary air. Reasonable agreement is achieved between numerical predictions and in situ measurements.

  20. Performance Evaluation of Missing-Value Imputation Clustering Based on a Multivariate Gaussian Mixture Model

    PubMed Central

    Wu, Chuanli; Gao, Yuexia; Hua, Tianqi; Xu, Chenwu

    2016-01-01

    Background It is challenging to deal with mixture models when missing values occur in clustering datasets. Methods and Results We propose a dynamic clustering algorithm based on a multivariate Gaussian mixture model that efficiently imputes missing values to generate a “pseudo-complete” dataset. Parameters from different clusters and missing values are estimated according to the maximum likelihood implemented with an expectation-maximization algorithm, and multivariate individuals are clustered with Bayesian posterior probability. A simulation showed that our proposed method has a fast convergence speed and it accurately estimates missing values. Our proposed algorithm was further validated with Fisher’s Iris dataset, the Yeast Cell-cycle Gene-expression dataset, and the CIFAR-10 images dataset. The results indicate that our algorithm offers highly accurate clustering, comparable to that using a complete dataset without missing values. Furthermore, our algorithm resulted in a lower misjudgment rate than both clustering algorithms with missing data deleted and with missing-value imputation by mean replacement. Conclusion We demonstrate that our missing-value imputation clustering algorithm is feasible and superior to both of these other clustering algorithms in certain situations. PMID:27552203
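
    The core of each imputation pass is the conditional expectation of the missing coordinates given the observed ones under a Gaussian; the sketch below shows that calculation for a single multivariate Gaussian refitted iteratively, leaving out the mixture weighting over clusters that the published algorithm adds on top.

```python
import numpy as np

def impute_gaussian(X, n_iter=50):
    """Iteratively fill missing entries (NaN) with the conditional mean of a single
    multivariate Gaussian refitted at each pass; the full algorithm would instead
    weight such conditional means over mixture components."""
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    X[missing] = np.take(np.nanmean(X, axis=0), np.where(missing)[1])  # start from column means
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        for i in np.where(missing.any(axis=1))[0]:
            m, o = missing[i], ~missing[i]
            # E[x_m | x_o] = mu_m + Sigma_mo Sigma_oo^{-1} (x_o - mu_o)
            X[i, m] = mu[m] + cov[np.ix_(m, o)] @ np.linalg.solve(cov[np.ix_(o, o)], X[i, o] - mu[o])
    return X

# Toy data: a correlated 3-variable Gaussian with some entries knocked out in column 0.
rng = np.random.default_rng(3)
full = rng.multivariate_normal([0, 0, 0], [[1, .8, .5], [.8, 1, .6], [.5, .6, 1]], size=200)
X = full.copy()
X[rng.choice(200, size=20, replace=False), 0] = np.nan
imputed = impute_gaussian(X)
print("mean absolute imputation error:",
      round(np.mean(np.abs(imputed[np.isnan(X)] - full[np.isnan(X)])), 3))
```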

  1. Performance evaluation in color face hallucination with error regression model in MPCA subspace method

    NASA Astrophysics Data System (ADS)

    Asavaskulkiet, Krissada

    2014-01-01

    This paper proposes a novel face super-resolution reconstruction (hallucination) technique for the YCbCr color space. The underlying idea is to learn with an error regression model and multi-linear principal component analysis (MPCA). Within the hallucination framework, many color face images are represented in YCbCr space. To reduce the time complexity of color face hallucination, the color face images can be naturally described as tensors or multi-linear arrays. In addition, error regression analysis is used to estimate the error from the existing low-resolution (LR) images in tensor space. The learning process starts from the mistakes made when reconstructing the face images of the training dataset with MPCA, and then finds the relationship between input and error by regression analysis. The hallucination process uses the standard MPCA back-projection method, after which the result is corrected with the error estimate. In this contribution we show that our hallucination technique is suitable for color face images in both RGB and YCbCr space. By using the MPCA subspace with an error regression model, we can generate photorealistic color face images. Our approach is demonstrated by extensive experiments with high-quality hallucinated color faces. Comparison with existing algorithms shows the effectiveness of the proposed method.

  2. Evaluating the performance of a climate-driven mortality model during heat waves and cold spells in Europe.

    PubMed

    Lowe, Rachel; Ballester, Joan; Creswick, James; Robine, Jean-Marie; Herrmann, François R; Rodó, Xavier

    2015-01-23

    The impact of climate change on human health is a serious concern. In particular, changes in the frequency and intensity of heat waves and cold spells are of high relevance in terms of mortality and morbidity. This demonstrates the urgent need for reliable early-warning systems to help authorities prepare and respond to emergency situations. In this study, we evaluate the performance of a climate-driven mortality model to provide probabilistic predictions of exceeding emergency mortality thresholds for heat wave and cold spell scenarios. Daily mortality data corresponding to 187 NUTS2 regions across 16 countries in Europe were obtained from 1998-2003. Data were aggregated to 54 larger regions in Europe, defined according to similarities in population structure and climate. Location-specific average mortality rates, at given temperature intervals over the time period, were modelled to account for the increased mortality observed during both high and low temperature extremes and differing comfort temperatures between regions. Model parameters were estimated in a Bayesian framework, in order to generate probabilistic simulations of mortality across Europe for time periods of interest. For the heat wave scenario (1-15 August 2003), the model was successfully able to anticipate the occurrence or non-occurrence of mortality rates exceeding the emergency threshold (75th percentile of the mortality distribution) for 89% of the 54 regions, given a probability decision threshold of 70%. For the cold spell scenario (1-15 January 2003), mortality events in 69% of the regions were correctly anticipated with a probability decision threshold of 70%. By using a more conservative decision threshold of 30%, this proportion increased to 87%. Overall, the model performed better for the heat wave scenario. By replacing observed temperature data in the model with forecast temperature, from state-of-the-art European forecasting systems, probabilistic mortality predictions could
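
    The decision rule described above, issue a warning when the probability of exceeding the emergency mortality threshold passes a chosen cut-off, takes only a few lines once posterior simulations are available; the simulated draws below are placeholders for the Bayesian model output, not the study's data.

```python
import numpy as np

def warning_issued(posterior_draws, emergency_threshold, decision_threshold=0.7):
    """Return (exceedance probability, warn?) for one region and period, where
    posterior_draws are simulated mortality rates from the fitted model."""
    prob = float(np.mean(np.asarray(posterior_draws) > emergency_threshold))
    return prob, prob >= decision_threshold

# Placeholder posterior simulations of a regional mortality rate (deaths per 100,000).
rng = np.random.default_rng(7)
draws = rng.gamma(shape=40, scale=0.3, size=5000)        # stand-in for model output
baseline = rng.gamma(shape=35, scale=0.3, size=5000)     # stand-in historical rates
threshold = np.quantile(baseline, 0.75)                  # 75th-percentile emergency threshold
prob, warn = warning_issued(draws, threshold)
print(f"P(exceed threshold) = {prob:.2f}, warning issued: {warn}")
```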

  3. Evaluating the Performance of a Climate-Driven Mortality Model during Heat Waves and Cold Spells in Europe

    PubMed Central

    Lowe, Rachel; Ballester, Joan; Creswick, James; Robine, Jean-Marie; Herrmann, François R.; Rodó, Xavier

    2015-01-01

    The impact of climate change on human health is a serious concern. In particular, changes in the frequency and intensity of heat waves and cold spells are of high relevance in terms of mortality and morbidity. This demonstrates the urgent need for reliable early-warning systems to help authorities prepare and respond to emergency situations. In this study, we evaluate the performance of a climate-driven mortality model to provide probabilistic predictions of exceeding emergency mortality thresholds for heat wave and cold spell scenarios. Daily mortality data corresponding to 187 NUTS2 regions across 16 countries in Europe were obtained from 1998–2003. Data were aggregated to 54 larger regions in Europe, defined according to similarities in population structure and climate. Location-specific average mortality rates, at given temperature intervals over the time period, were modelled to account for the increased mortality observed during both high and low temperature extremes and differing comfort temperatures between regions. Model parameters were estimated in a Bayesian framework, in order to generate probabilistic simulations of mortality across Europe for time periods of interest. For the heat wave scenario (1–15 August 2003), the model was successfully able to anticipate the occurrence or non-occurrence of mortality rates exceeding the emergency threshold (75th percentile of the mortality distribution) for 89% of the 54 regions, given a probability decision threshold of 70%. For the cold spell scenario (1–15 January 2003), mortality events in 69% of the regions were correctly anticipated with a probability decision threshold of 70%. By using a more conservative decision threshold of 30%, this proportion increased to 87%. Overall, the model performed better for the heat wave scenario. By replacing observed temperature data in the model with forecast temperature, from state-of-the-art European forecasting systems, probabilistic mortality predictions could

  4. IR DIAL performance modeling

    SciTech Connect

    Sharlemann, E.T.

    1994-07-01

    We are developing a DIAL performance model for CALIOPE at LLNL. The intent of the model is to provide quick and interactive parameter sensitivity calculations with immediate graphical output. A brief overview of the features of the performance model is given, along with an example of performance calculations for a non-CALIOPE application.

  5. Modelling Performance: Opening Pandora's Box.

    ERIC Educational Resources Information Center

    McNamara, T. F.

    1995-01-01

    This paper argues that it is necessary for researchers and test developers in the area of language performance testing to have a clear understanding of the role of underlying performance capacities in second language performance. It critically evaluates the models proposed by Hymes, Canale and Swain, and Bachman. (71 references) (MDM)

  6. Performance evaluation of the Enraf-Nonius Model 872 radar gage

    SciTech Connect

    Peters, T.J.; Park, W.R.

    1992-12-01

    There are indications that the Enraf-Nonius radar gage installed in Tank 241-SY-101 may not be providing an accurate reading of the true surface level in the waste tank. The Pacific Northwest Laboratory (PNL) performed an initial study to determine the effect of the following items on the distance read by the gage: the tank riser; material permittivity and conductivity; foam; the proportion of supernatant to solid material in the field of view of the instrument; the physical geometry of the supernatant and solid material changing in the field of view with respect to time; and varying water content in the solid material. The results of the tests indicate that the distance measured by the radar gage is affected by the permittivity, conductivity, and angle of the target surface. These parameters affect the complex input impedance of the signal received by the radar gage to measure the distance to the target. In Tank 101-SY, the radar gage is placed on top of a 12 in. diameter riser. The riser affects the field of view of the instrument, and a much smaller target surface is detected when the radar beam propagates through a riser. In addition, the riser acts as a waveguide, and standing waves are enhanced between the target surface and the radar gage. The result is a change in the level measured by the radar gage due to changing properties of the target surface even when the distance to the target does not change. The test results indicate that the radar will not detect dry crust or foam. However, if the crust or foam is stirred so that it becomes wet, then the crust or foam becomes detectable.

  7. Formative evaluation of a telemedicine model for delivering clinical neurophysiology services part I: Utility, technical performance and service provider perspective

    PubMed Central

    2010-01-01

    Background Formative evaluation is conducted in the early stages of system implementation to assess how it works in practice and to identify opportunities for improving technical and process performance. A formative evaluation of a teleneurophysiology service was conducted to examine its technical and sociological dimensions. Methods A teleneurophysiology service providing routine EEG investigation was established. Service use, technical performance and satisfaction of clinical neurophysiology personnel were assessed qualitatively and quantitatively. These were contrasted with a previously reported analysis of the need for teleneurophysiology, and examination of expectation and satisfaction with clinical neurophysiology services in Ireland. A preliminary cost-benefit analysis was also conducted. Results Over the course of 40 clinical sessions during 20 weeks, 142 EEG investigations were recorded and stored on a file server at a satellite centre which was 130 miles away from the host clinical neurophysiology department. Using a virtual private network, the EEGs were accessed by a consultant neurophysiologist at the host centre for interpretation. The model resulted in a 5-fold increase in access to EEG services as well as reducing average waiting times for investigation by a half. Technically the model worked well, although a temporary loss of virtual private network connectivity highlighted the need for clarity in terms of responsibility for troubleshooting and repair of equipment problems. Referral quality, communication between host and satellite centres, quality of EEG recordings, and ease of EEG review and reporting indicated that appropriate organisational processes were adopted by the service. Compared to traditional CN service delivery, the teleneurophysiology model resulted in a comparable unit cost per EEG. Conclusion Observations suggest that when traditional organisational boundaries are crossed challenges associated with the social dimension of service

  8. CMIP5 Global Climate Model Performance Evaluation and Climate Scenario Development over the South-Central United States

    NASA Astrophysics Data System (ADS)

    Rosendahl, D. H.; Rupp, D. E.; Mcpherson, R. A.; Moore, B., III

    2015-12-01

    Future climate change projections from Global Climate Models (GCMs) are the primary drivers of regional downscaling and impacts research - from which relevant information for stakeholders is generated at the regional and local levels. Therefore understanding uncertainties in GCMs is a fundamental necessity if the scientific community is to provide useful and reliable future climate change information that can be utilized by end users and decision makers. Two different assessments of the Coupled Model Intercomparison Project Phase 5 (CMIP5) GCM ensemble were conducted for the south-central United States. The first was a performance evaluation over the historical period for metrics of near surface meteorological variables (e.g., temperature, precipitation) and system-based phenomena, which include large-scale processes that can influence the region (e.g., low-level jet, ENSO). These metrics were used to identify a subset of models of higher performance across the region which were then used to constrain future climate change projections. A second assessment explored climate scenario development where all model climate change projections were assumed equally likely and future projections with the highest impact were identified (e.g., temperature and precipitation combinations of hottest/driest, hottest/wettest, and highest variability). Each of these assessments identify a subset of models that may prove useful to regional downscaling and impacts researchers who may be restricted by the total number of GCMs they can utilize. Results from these assessments will be provided as well as a discussion on when each would be useful and appropriate to use.

  9. Performance Evaluation: A Deadly Disease?

    ERIC Educational Resources Information Center

    Aluri, Rao; Reichel, Mary

    1994-01-01

    W. Edwards Deming condemned performance evaluations as a deadly disease afflicting American management. He argued that performance evaluations nourish fear, encourage short-term thinking, stifle teamwork, and are no better than lotteries. This article examines library literature from Deming's perspective. Although that literature accepts…

  10. IrBurst Modeling and Performance Evaluation for Large Data Block Exchange over High-Speed IrDA Links

    NASA Astrophysics Data System (ADS)

    Alam, Mohammad Shah; Shawkat, Shamim Ara; Kitazumi, Gontaro; Matsumoto, Mitsuji

    IrBurst, recently proposed by IrDA, is a high-speed information transmission protocol. In this paper, a mathematical model is developed which leads to derivation of the IrBurst throughput over the IrDA protocol stack. Based on this model, we compare the performance of IrBurst and the existing OBEX protocol in order to investigate the suitability of the IrBurst protocol for the exchange of large data blocks over high-speed IrDA links. Furthermore, the model allows the evaluation of the impact of link layer parameters, such as window size and frame length, and physical layer parameters, such as minimum turnaround time, on system throughput for high-speed IrDA links in the presence of transmission errors. Consequently, an effective Automatic Repeat Request (ARQ) scheme is proposed at the link layer to maximize the throughput efficiency for the IrBurst protocol as well as for next generation high-speed IrDA links. Simulation results indicate that employment of our proposed ARQ scheme results in significant improvement of IrBurst throughput efficiency at high bit error rates.
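
    The kind of link-layer trade-off such a model captures can be illustrated with a generic sliding-window goodput estimate in which window size, frame length, bit error rate and the minimum turnaround time interact; this is a simplified stand-in for illustration, not the IrBurst/IrLAP model derived in the paper.

```python
def window_goodput(data_rate_bps, frame_bytes, overhead_bytes, window_size, turnaround_s, ber):
    """Approximate goodput of a windowed link: per window, send `window_size`
    frames, then pay the minimum turnaround time; each frame survives with
    probability (1 - BER)**bits and only surviving payload counts."""
    frame_bits = 8 * (frame_bytes + overhead_bytes)
    p_frame_ok = (1.0 - ber) ** frame_bits
    window_time = window_size * frame_bits / data_rate_bps + turnaround_s
    return window_size * p_frame_ok * 8 * frame_bytes / window_time

# Illustrative parameters loosely in the range of high-speed IrDA links.
for ber in (1e-7, 1e-5):
    for window in (1, 7, 127):
        g = window_goodput(data_rate_bps=16e6, frame_bytes=2048, overhead_bytes=8,
                           window_size=window, turnaround_s=1e-4, ber=ber)
        print(f"BER={ber:.0e}, window={window}: goodput ~ {g / 1e6:.2f} Mbit/s")
```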

  11. Family incivility and job performance: a moderated mediation model of psychological distress and core self-evaluation.

    PubMed

    Lim, Sandy; Tai, Kenneth

    2014-03-01

    This study extends the stress literature by exploring the relationship between family incivility and job performance. We examine whether psychological distress mediates the link between family incivility and job performance. We also investigate how core self-evaluation might moderate this mediated relationship. Data from a 2-wave study indicate that psychological distress mediates the relationship between family incivility and job performance. In addition, core self-evaluation moderates the relationship between family incivility and psychological distress but not the relationship between psychological distress and job performance. The results hold while controlling for general job stress, family-to-work conflict, and work-to-family conflict. The findings suggest that family incivility is linked to poor performance at work, and psychological distress and core self-evaluation are key mechanisms in the relationship.

  12. Use of Mathematical Models in the Design and Performance Evaluation of a Surfactant Flushing Demonstration at the Bachman Road Site

    NASA Astrophysics Data System (ADS)

    Abriola, L. M.; Drummond, C. D.; Lemke, L. D.; Rathfelder, K. M.; Pennell, K. D.

    2001-05-01

    This presentation provides an overview of the design and performance evaluation of a surfactant enhanced remediation pilot demonstration conducted in the summer of 2000 at a former dry cleaning facility in Oscoda, Michigan, USA. The unconfined contaminated formation is composed of relatively homogeneous glacial outwash sands, underlain by a thick clay layer. Core samples have revealed the presence of a reasonably persistent coarse sand and gravel layer at a depth of 11-16 feet and a sand/silt/clay transition zone at the base of the aquifer. A narrow tetrachloroethylene (PCE) plume emanates from the suspected source area, beneath the former dry cleaning building, and discharges into Lake Huron, approximately 700 feet down gradient. There is little evidence of microbial plume attenuation at the site. Aqueous samples from multilevel piezometers installed beneath the building have confirmed the presence of residual PCE within the coarse sand and gravel layer and have detected consistently high PCE concentrations at the base of the aquifer. The actual distribution and volume of entrapped PCE, however, is unknown. A surfactant injection and recovery scheme was designed and implemented to effectively flush the identified source area beneath the building. In this scheme, a line of water injection wells was installed behind the surfactant injection points to control surfactant delivery and maximize solubilized plume capture. Prior to surfactant injection, conservative and partitioning tracer tests were also conducted to confirm sweep and estimate source zone mass. Mass recovery calculations indicate that more than 94% of the injected surfactant and approximately 19 liters of PCE were recovered during the test. This volume of DNAPL is consistent with estimated low saturations within the swept zone. Single and multiphase transport models were employed to aid in remedial design and predict system performance. For the model simulations, input parameters were determined from

  13. Multiprocessor performance modeling with ADAS

    NASA Technical Reports Server (NTRS)

    Hayes, Paul J.; Andrews, Asa M.

    1989-01-01

    A graph managing strategy referred to as the Algorithm to Architecture Mapping Model (ATAMM) appears useful for the time-optimized execution of application algorithm graphs in embedded multiprocessors and for the performance prediction of graph designs. This paper reports the modeling of ATAMM in the Architecture Design and Assessment System (ADAS) to make an independent verification of ATAMM's performance prediction capability and to provide a user framework for the evaluation of arbitrary algorithm graphs. Following an overview of ATAMM and its major functional rules are descriptions of the ADAS model of ATAMM, methods to enter an arbitrary graph into the model, and techniques to analyze the simulation results. The performance of a 7-node graph example is evaluated using the ADAS model and verifies the ATAMM concept by substantiating previously published performance results.

  14. Photovoltaic array performance model.

    SciTech Connect

    Kratochvil, Jay A.; Boyson, William Earl; King, David L.

    2004-08-01

    This document summarizes the equations and applications associated with the photovoltaic array performance model developed at Sandia National Laboratories over the last twelve years. Electrical, thermal, and optical characteristics for photovoltaic modules are included in the model, and the model is designed to use hourly solar resource and meteorological data. The versatility and accuracy of the model have been validated for flat-plate modules (all technologies) and for concentrator modules, as well as for large arrays of modules. Applications include system design and sizing, 'translation' of field performance measurements to standard reporting conditions, system performance optimization, and real-time comparison of measured versus expected system performance.
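
    For orientation, the sketch below shows how hourly irradiance and cell temperature data can be translated into an expected DC array power. It uses a generic temperature-corrected power model as a simplified stand-in, not the full Sandia performance equations; the module rating, temperature coefficient, and array size are assumed values.

      # Simplified sketch of translating hourly weather data into expected DC array
      # power. This uses a generic temperature-corrected power model, not the full
      # Sandia array performance equations; module parameters are assumed.
      def array_dc_power(poa_irradiance, cell_temp_c,
                         p_mp_ref=300.0,   # module rating at reference conditions [W]
                         gamma=-0.0045,    # power temperature coefficient [1/degC]
                         modules=20):
          g_ref, t_ref = 1000.0, 25.0      # reference irradiance [W/m2] and temperature [degC]
          p_module = p_mp_ref * (poa_irradiance / g_ref) * (1 + gamma * (cell_temp_c - t_ref))
          return max(p_module, 0.0) * modules

      # Hourly example: plane-of-array irradiance and cell temperature pairs.
      hours = [(200.0, 15.0), (650.0, 30.0), (950.0, 45.0)]
      for g, t in hours:
          print(f"G={g:6.1f} W/m2  Tcell={t:4.1f} C  ->  {array_dc_power(g, t)/1000:5.2f} kW")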

  15. Quantification of leachate discharged to groundwater using the water balance method and the hydrologic evaluation of landfill performance (HELP) model.

    PubMed

    Alslaibi, Tamer M; Abustan, Ismail; Mogheir, Yunes K; Afifi, Samir

    2013-01-01

    Landfills are a source of groundwater pollution in the Gaza Strip. This study focused on the Deir Al Balah landfill, which is a unique sanitary landfill site in the Gaza Strip (i.e., it has a lining system and a leachate recirculation system). The objective of this article is to assess the generated leachate quantity and its percolation to the groundwater aquifer at a specific site, using (i) the hydrologic evaluation of landfill performance (HELP) model and (ii) the water balance method (WBM). The results show that, using the HELP model, the average volume of leachate discharged from the Deir Al Balah landfill during the period 1997 to 2007 was around 6800 m3/year, while the average volume of leachate percolated through the clay layer was 550 m3/year, which represents around 8% of the generated leachate. The WBM indicated that the average volume of leachate discharged from the landfill during the same period was around 7660 m3/year, about half of which comes from the moisture content of the waste, while the remainder comes from the infiltration of precipitation and recirculated leachate. The quantities of leachate estimated by the two methods were therefore very close. However, compared with the measured leachate quantity, these estimates were overestimated and indicated a dangerous threat to the groundwater aquifer, as there was no separation between municipal, hazardous and industrial wastes in the area. PMID:23148014
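
    A minimal sketch of the water balance method for annual leachate generation follows: infiltration (precipitation minus runoff, evapotranspiration and storage change) plus recirculated leachate and waste moisture. The coefficients and inputs are illustrative assumptions, not the HELP or WBM parameterization used for the Deir Al Balah site.

      # Minimal sketch of the water balance method (WBM) for annual leachate
      # generation at a landfill cell. The coefficients and inputs below are
      # illustrative assumptions, not the values used for the Deir Al Balah site.
      def annual_leachate_m3(precip_mm, runoff_coeff, et_mm, recirculated_m3,
                             waste_moisture_m3, area_m2, d_storage_mm=0.0):
          infiltration_mm = precip_mm * (1.0 - runoff_coeff) - et_mm - d_storage_mm
          infiltration_m3 = max(infiltration_mm, 0.0) / 1000.0 * area_m2
          return infiltration_m3 + recirculated_m3 + waste_moisture_m3

      # Example with assumed inputs: 300 mm/yr rain, 25,000 m2 cell area.
      leachate = annual_leachate_m3(precip_mm=300.0, runoff_coeff=0.2, et_mm=180.0,
                                    recirculated_m3=1500.0, waste_moisture_m3=3500.0,
                                    area_m2=25000.0)
      print(f"estimated leachate: {leachate:.0f} m3/year")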

  16. Evaluating Model Performance of an Ensemble-based Chemical Data Assimilation System During INTEX-B Field Mission

    NASA Technical Reports Server (NTRS)

    Arellano, A. F., Jr.; Raeder, K.; Anderson, J. L.; Hess, P. G.; Emmons, L. K.; Edwards, D. P.; Pfister, G. G.; Campos, T. L.; Sachse, G. W.

    2007-01-01

    We present a global chemical data assimilation system using a global atmosphere model, the Community Atmosphere Model (CAM3) with simplified chemistry and the Data Assimilation Research Testbed (DART) assimilation package. DART is a community software facility for assimilation studies using the ensemble Kalman filter approach. Here, we apply the assimilation system to constrain global tropospheric carbon monoxide (CO) by assimilating meteorological observations of temperature and horizontal wind velocity and satellite CO retrievals from the Measurement of Pollution in the Troposphere (MOPITT) satellite instrument. We verify the system performance using independent CO observations taken on board the NSF/NCAR C-130 and NASA DC-8 aircraft during the April 2006 part of the Intercontinental Chemical Transport Experiment (INTEX-B). Our evaluations show that MOPITT data assimilation provides significant improvements in terms of capturing the observed CO variability relative to no MOPITT assimilation (i.e. the correlation improves from 0.62 to 0.71, significant at 99% confidence). The assimilation provides evidence of median CO loading of about 150 ppbv at 700 hPa over the NE Pacific during April 2006. This is marginally higher than the modeled CO with no MOPITT assimilation (~140 ppbv). Our ensemble-based estimates of model uncertainty also show model overprediction over the source region (i.e. China) and underprediction over the NE Pacific, suggesting model errors that cannot be readily explained by emissions alone. These results have important implications for improving regional chemical forecasts and for inverse modeling of CO sources and further demonstrate the utility of the assimilation system in comparing non-coincident measurements, e.g. comparing satellite retrievals of CO with in-situ aircraft measurements. The work described above also brought to light several shortcomings of the data assimilation approach for CO profiles. Because of the limited vertical
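
    To make the assimilation step concrete, here is a minimal ensemble Kalman filter analysis sketch in observation space, of the general kind an ensemble system such as DART performs when assimilating a retrieval like MOPITT CO. The dimensions, error variances, and observation operator are illustrative assumptions; a real system adds localization, inflation, and a far larger model state.

      # Minimal sketch of an ensemble Kalman filter analysis step. Dimensions, error
      # variances, and the observation operator are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      n_ens, n_state = 40, 100                    # ensemble size, state vector length
      X = 120.0 + 15.0 * rng.standard_normal((n_ens, n_state))   # prior CO ensemble [ppbv]

      H = np.zeros(n_state); H[10] = 1.0          # observe one grid point (assumed operator)
      y_obs, obs_err_var = 150.0, 10.0**2         # retrieval value and error variance

      Hx = X @ H                                  # prior mapped to observation space
      x_mean, hx_mean = X.mean(axis=0), Hx.mean()
      P_xh = ((X - x_mean).T @ (Hx - hx_mean)) / (n_ens - 1)   # state-observation covariance
      P_hh = np.var(Hx, ddof=1)
      K = P_xh / (P_hh + obs_err_var)             # Kalman gain (vector over the state)

      # Perturbed-observation update of each ensemble member.
      y_pert = y_obs + np.sqrt(obs_err_var) * rng.standard_normal(n_ens)
      X_a = X + np.outer(y_pert - Hx, K)

      print("prior mean at obs point   :", round(x_mean[10], 1))
      print("analysis mean at obs point:", round(X_a.mean(axis=0)[10], 1))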

  17. Room for Improvement: Performance Evaluations.

    ERIC Educational Resources Information Center

    Webb, Gisela

    1989-01-01

    Describes a performance management approach to library personnel management that stresses communication, clarification of goals, and reinforcement of new practices and behaviors. Each phase of the evaluation process (preparation, rating, administrative review, appraisal interview, and follow-up) and special evaluations to be used in cases of…

  18. Evaluating Administrative/Supervisory Performance.

    ERIC Educational Resources Information Center

    Educational Research Service, Arlington, VA.

    This is a report on the third survey conducted on procedures for evaluating the performance of administrators and supervisors in local school systems. A questionnaire was sent to school systems enrolling 25,000 or more pupils, and results indicated that 84 of the 154 responding systems have formal evaluation procedures. Tables and discussions of…

  19. Performance and race in evaluating minority mayors.

    PubMed

    Howell, S E

    2001-01-01

    This research compares a performance model to a racial model in explaining approval of a black mayor. The performance model emphasizes citizen evaluations of conditions in the city and the mayor's perceived effectiveness in dealing with urban problems. The racial model stipulates that approval of a black mayor is based primarily on racial identification or racism. A model of mayoral approval is tested with two surveys, conducted in different years, of citizens in a city that has had 20 years' experience with black mayors. Findings indicate that performance matters when evaluating black mayors, suggesting that the national performance models of presidential approval generalize to local settings with black executives. Implications for black officeholders are discussed. However, the racial model is alive and well, as indicated by its impact on approval and the finding that, in this context, performance matters more to white voters than to black voters. A final, highly tentative conclusion is offered that context conditions the relative power of these models. The performance model may explain more variation in approval of the black mayor than the racial model in a context of rapidly changing city conditions that focuses citizen attention on performance, but during a period of relative stability the two models are evenly matched.

  20. Design and performance evaluation of a simplified dynamic model for combined sewer overflows in pumped sewer systems

    NASA Astrophysics Data System (ADS)

    van Daal-Rombouts, Petra; Sun, Siao; Langeveld, Jeroen; Bertrand-Krajewski, Jean-Luc; Clemens, François

    2016-07-01

    Optimisation or real time control (RTC) studies in wastewater systems increasingly require rapid simulations of sewer systems in extensive catchments. To reduce the simulation time, calibrated simplified models are applied, with their performance generally judged by the goodness of fit of the calibration. In this research, the performance of three simplified models and a full hydrodynamic (FH) model for two catchments is compared based on the correct determination of CSO event occurrences and of the total volumes discharged to the surface water. Simplified model M1 consists of a rainfall runoff outflow (RRO) model only. M2 combines the RRO model with a static reservoir model for the sewer behaviour. M3 comprises the RRO model and a dynamic reservoir model. The dynamic reservoir characteristics were derived from FH model simulations. It was found that M2 and M3 are able to describe the sewer behaviour of the catchments, contrary to M1. The preferred model structure depends on the quality of the information (geometrical database and monitoring data) available for the design and calibration of the model. Finally, calibrated simplified models are shown to be preferable to uncalibrated FH models when performing optimisation or RTC studies.
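
    The static reservoir idea behind a model of the M2 type can be sketched in a few lines: rainfall runoff fills a single storage that is emptied at a fixed pumping capacity, and any excess above the storage capacity is counted as CSO volume. The catchment area, runoff coefficient, storage, and pump rate below are assumed placeholders, not values from the studied catchments.

      # Minimal sketch of a static reservoir model of the M2 type. Catchment area,
      # runoff coefficient, storage capacity, and pump rate are assumed values.
      def simulate_cso(rain_mm_per_step, dt_s=600.0, area_m2=5e5, runoff_coeff=0.7,
                       storage_cap_m3=2000.0, pump_m3_s=0.5):
          storage, cso_volume, cso_steps = 0.0, 0.0, 0
          for rain in rain_mm_per_step:
              inflow = rain / 1000.0 * area_m2 * runoff_coeff   # runoff volume this step
              storage += inflow - pump_m3_s * dt_s              # fill minus pumped-out volume
              storage = max(storage, 0.0)
              if storage > storage_cap_m3:                      # spill above capacity = CSO
                  cso_volume += storage - storage_cap_m3
                  cso_steps += 1
                  storage = storage_cap_m3
          return cso_volume, cso_steps

      # Example: a short storm expressed as mm of rain per 10-minute step.
      storm = [0, 1, 4, 8, 6, 3, 1, 0, 0, 0]
      volume, steps = simulate_cso(storm)
      print(f"CSO volume: {volume:.0f} m3 over {steps} time steps")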

  1. The use of multilevel models to evaluate sources of variation in reproductive performance in dairy cattle in Reunion Island.

    PubMed

    Dohoo, I R; Tillard, E; Stryhn, H; Faye, B

    2001-07-19

    Sources of variation in measures of reproductive performance in dairy cattle were evaluated using data collected from 3207 lactations in 1570 cows in 50 herds from five geographic regions of Reunion Island (located off the east coast of Madagascar). Three continuously distributed reproductive parameters (intervals from calving-to-conception, calving-to-first-service and first-service-to-conception) were considered, along with one Binomial outcome (first-service-conception risk). Multilevel models, which take into account the hierarchical nature of the data, were used for all analyses. For the overall measure of calving-to-conception interval, 86% of the variation resided at the lactation level, with only 7, 6 and 2% at the cow, herd and regional levels, respectively. The proportions of variance at the herd and cow levels were slightly higher for the calving-to-first-service interval (12 and 9%, respectively) - but for the other two parameters (first-service-conception risk and first-service-to-conception interval), >90% of the variation resided at the lactation level. For the three continuous dependent variables, comparison of results between models based on log-transformed data and Box-Cox-transformed data suggested that minor departures from the assumption of normality did not have a substantial effect on the variance estimates. For the Binomial dependent variable, five different estimation procedures (penalised quasi-likelihood, Markov-Chain Monte Carlo, parametric and non-parametric bootstrap estimates and maximum-likelihood) yielded substantially different results for the estimate of the cow-level variance.
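
    The variance partitioning reported above amounts to expressing each fitted variance component as a share of the total. The short sketch below shows that bookkeeping; the numeric components are placeholders, not the Reunion Island estimates.

      # Sketch of partitioning total variance across hierarchical levels from fitted
      # variance components. The component values are placeholders, not the study's.
      def variance_proportions(components):
          total = sum(components.values())
          return {level: 100.0 * v / total for level, v in components.items()}

      # Hypothetical variance components for a calving-to-conception interval model.
      components = {"region": 0.02, "herd": 0.06, "cow": 0.07, "lactation": 0.85}
      for level, pct in variance_proportions(components).items():
          print(f"{level:9s}: {pct:4.1f}% of total variance")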

  2. EVALUATING THE PERFORMANCE OF REGIONAL-SCALE PHOTOCHEMICAL MODELING SYSTEMS: PART II--OZONE PREDICTIONS. (R825260)

    EPA Science Inventory

    In this paper, the concept of scale analysis is applied to evaluate ozone predictions from two regional-scale air quality models. To this end, seasonal time series of observations and predictions from the RAMS3b/UAM-V and MM5/MAQSIP (SMRAQ) modeling systems for ozone were spectra...

  3. EVALUATING THE PERFORMANCE OF REGIONAL-SCALE PHOTOCHEMICAL MODELING SYSTEMS: PART I--METEOROLOGICAL PREDICTIONS. (R825260)

    EPA Science Inventory

    In this study, the concept of scale analysis is applied to evaluate two state-of-science meteorological models, namely MM5 and RAMS3b, currently being used to drive regional-scale air quality models. To this end, seasonal time series of observations and predictions for temperatur...

  4. Evaluation of performance of a BLSS model in long-term operation in dynamic and steady states

    NASA Astrophysics Data System (ADS)

    Gros, Jean-Bernard; Tikhomirov, Alex; Ushakova, Sofya; Velitchko, Vladimir; Tikhomirova, Natalia; Lasseur, Christophe

    Evaluation of the performance of a BLSS model, including higher plants for food production and biodegradation of human waste, in long-term operation in dynamic and steady states was performed. The model system was conceived for supplying vegetarian food and oxygen to 0.07 human. The following data were obtained in steady-state operating conditions. Average rates of wheat, chufa, radish, lettuce and Salicornia edible biomass accumulation were 8.7, 5.5, 0.6, 0.6 and 2.5 g per day, respectively. Thus, to mimic the vegetarian edible biomass consumption by a human, it was necessary to withdraw 17.9 g/d from total mass exchange. Simultaneously, human mineralized exometabolites (artificial mineralized urine, AMU), in the amount of approximately 7% of a daily norm, were introduced into the nutrient solution for irrigation of the plants cultivated on a neutral substrate (expanded clay aggregate). The estimated value of 5.8 g/d of wheat and Salicornia inedible biomass was introduced into the soil-like substrate (SLS) to fully meet the plants' need for nitrogen. The rest of the wheat and Salicornia inedible biomass, 5.7 g/d, was stored. Thus, in all, 23.6 g of vegetarian dry matter had been stored. Assuming the edible biomass is eaten by the human, the closure coefficient of vegetarian biomass inclusion into matter recycling amounted to 88%. The analysis of the long-term model operation showed that the main factors limiting the increase of recycling processes were the following: a) Partly unbalanced mineral composition of daily human waste with respect to the daily needs of the plants cultivated in the system. Thus, when fully satisfied with respect to nitrogen, the plants experienced a lack of macro elements such as P, Mg and Ca by more than 50%; b) Partly unbalanced mineral composition of edible biomass of the plants cultivated in the SLS with that of inedible biomass of the plants cultivated by the hydroponic method on neutral substrate introduced into the SLS; c) Accumulation of

  5. Hydrologic Evaluation of Landfill Performance (HELP) Model: B (Set Includes, A- User's Guide for Version 3 w/disks, B-Engineering Documentation for Version 3

    EPA Science Inventory

    The Hydrologic Evaluation of Landfill Performance (HELP) computer program is a quasi-two-dimensional hydrologic model of water movement across, into, through and out of landfills. The model accepts weather, soil and design data. Landfill systems including various combinations o...

  6. Performance evaluation of air quality models for predicting PM10 and PM2.5 concentrations at urban traffic intersection during winter period.

    PubMed

    Gokhale, Sharad; Raokhande, Namita

    2008-05-01

    There are several models that can be used to evaluate roadside air quality. A comparison of the operational performance of different models under local conditions is desirable so that the best-performing model can be identified. Three air quality models, namely the 'modified General Finite Line Source Model' (M-GFLSM) for particulates, the 'California Line Source' (CALINE3) model, and the 'California Line Source for Queuing & Hot Spot Calculations' (CAL3QHC) model, were identified for evaluating the air quality at one of the busiest traffic intersections in the city of Guwahati. These models were evaluated statistically against vehicle-derived airborne particulate mass emissions in two size classes, PM10 and PM2.5, the prevailing meteorology, and the temporal distribution of the measured daily average PM10 and PM2.5 concentrations in wintertime. The study showed that the CAL3QHC model makes better predictions than the other models across varied meteorological and traffic conditions. The detailed analysis reveals that the agreement between the measured and modeled PM10 and PM2.5 concentrations was reasonably good for the CALINE3 and CAL3QHC models, with CAL3QHC performing better than CALINE3; the monthly performance measures led to similar results. These two models also outperformed M-GFLSM for all wind speed classes except low winds (<1 m s-1), for which the M-GFLSM model showed a tendency toward better performance for PM10. Nevertheless, the CAL3QHC model outperformed the others for both particulate sizes and all wind classes, and can therefore be an option for air quality assessment at urban traffic intersections. PMID:18289641
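
    Operational evaluations of this kind typically rest on a small set of paired-sample statistics. The sketch below computes three common ones (fractional bias, normalized mean square error, and the fraction of predictions within a factor of two) for a placeholder set of measured and modeled concentrations; it is not the Guwahati dataset.

      # Sketch of common statistical performance measures used to compare measured
      # and modeled roadside particulate concentrations. The paired values below
      # are placeholders, not the Guwahati winter dataset.
      import numpy as np

      obs = np.array([85.0, 120.0, 140.0, 95.0, 110.0])   # measured daily PM10 [ug/m3]
      mod = np.array([70.0, 135.0, 120.0, 90.0, 100.0])   # modeled daily PM10 [ug/m3]

      fb   = 2.0 * (obs.mean() - mod.mean()) / (obs.mean() + mod.mean())   # fractional bias
      nmse = np.mean((obs - mod) ** 2) / (obs.mean() * mod.mean())         # normalized mean square error
      fac2 = np.mean((mod / obs >= 0.5) & (mod / obs <= 2.0))              # fraction within a factor of 2

      print(f"FB = {fb:+.2f}  NMSE = {nmse:.3f}  FAC2 = {fac2:.2f}")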

  7. The Effect of Social Roles and Performance Cues on Self-Evaluations: Evidence for an Interpersonal Model of Loneliness.

    ERIC Educational Resources Information Center

    Vitkus, John

    Vitkus and Horowitz (1987) found that lonely people demonstrated adequate social behavior when they were assigned to controlling interpersonal roles. Despite this successful performance, they evaluated themselves and their behavior negatively. Study 1 replicated these findings and extended them to naturalistic interactions. In a hypothetical…

  8. Energy performance evaluation of AAC

    NASA Astrophysics Data System (ADS)

    Aybek, Hulya

    The U.S. building industry constitutes the largest consumer of energy (i.e., electricity, natural gas, petroleum) in the world. The building sector uses almost 41 percent of the primary energy and approximately 72 percent of the available electricity in the United States. As global energy-generating resources are being depleted at exponential rates, the amount of energy consumed and wasted cannot be ignored. Professionals concerned about the environment have placed a high priority on finding solutions that reduce energy consumption while maintaining occupant comfort. Sustainable design and the judicious combination of building materials comprise one solution to this problem. A future including sustainable energy may result from using energy simulation software to accurately estimate energy consumption and from applying building materials that achieve the potential results derived through simulation analysis. Energy-modeling tools assist professionals with making informed decisions about energy performance during the early planning phases of a design project, such as determining the most advantageous combination of building materials, choosing mechanical systems, and determining building orientation on the site. By implementing energy simulation software to estimate the effect of these factors on the energy consumption of a building, designers can make adjustments to their designs during the design phase when the effect on cost is minimal. The primary objective of this research consisted of identifying a method with which to properly select energy-efficient building materials and involved evaluating the potential of these materials to earn LEED credits when properly applied to a structure. In addition, this objective included establishing a framework that provides suggestions for improvements to currently available simulation software that enhance the viability of the estimates concerning energy efficiency and the achievements of LEED credits. The primary objective

  9. Repository Integration Program: RIP performance assessment and strategy evaluation model theory manual and user's guide

    SciTech Connect

    1995-11-01

    This report describes the theory and capabilities of RIP (Repository Integration Program). RIP is a powerful and flexible computational tool for carrying out probabilistic integrated total system performance assessments for geologic repositories. The primary purpose of RIP is to provide a management tool for guiding system design and site characterization. In addition, the performance assessment model (and the process of eliciting model input) can act as a mechanism for integrating the large amount of available information into a meaningful whole (in a sense, allowing one to keep the 'big picture' and the ultimate aims of the project clearly in focus). Such an integration is useful both for project managers and project scientists. RIP is based on a 'top down' approach to performance assessment that concentrates on the integration of the entire system and utilizes relatively high-level descriptive models and parameters. The key point in the application of such a 'top down' approach is that the simplified models and associated high-level parameters must incorporate an accurate representation of their uncertainty. RIP is designed in a very flexible manner such that details can be readily added to various components of the model without modifying the computer code. Uncertainty is also handled in a very flexible manner, and both parameter and model (process) uncertainty can be explicitly considered. Uncertainty is propagated through the integrated PA model using an enhanced Monte Carlo method. RIP must rely heavily on subjective assessment (expert opinion) for much of its input. The process of eliciting the high-level input parameters required for RIP is critical to its successful application. As a result, in order for any project to successfully apply a tool such as RIP, an enormous amount of communication and cooperation must exist between the data collectors, the process modelers, and the performance assessment modelers.

  10. GUIDANCE FOR THE PERFORMANCE EVALUATION OF THREE-DIMENSIONAL AIR QUALITY MODELING SYSTEMS FOR PARTICULATE MATTER AND VISIBILITY

    EPA Science Inventory

    The National Ambient Air Quality Standards for particulate matter (PM) and the federal regional haze regulations place some emphasis on the assessment of fine particle (PM2.5) concentrations. Current air quality models need to be improved and evaluated against observations to a...

  11. ESMValTool (v1.0) - a community diagnostic and performance metrics tool for routine evaluation of Earth system models in CMIP

    NASA Astrophysics Data System (ADS)

    Eyring, Veronika; Righi, Mattia; Lauer, Axel; Evaldsson, Martin; Wenzel, Sabrina; Jones, Colin; Anav, Alessandro; Andrews, Oliver; Cionni, Irene; Davin, Edouard L.; Deser, Clara; Ehbrecht, Carsten; Friedlingstein, Pierre; Gleckler, Peter; Gottschaldt, Klaus-Dirk; Hagemann, Stefan; Juckes, Martin; Kindermann, Stephan; Krasting, John; Kunert, Dominik; Levine, Richard; Loew, Alexander; Mäkelä, Jarmo; Martin, Gill; Mason, Erik; Phillips, Adam S.; Read, Simon; Rio, Catherine; Roehrig, Romain; Senftleben, Daniel; Sterl, Andreas; van Ulft, Lambertus H.; Walton, Jeremy; Wang, Shiyu; Williams, Keith D.

    2016-05-01

    A community diagnostics and performance metrics tool for the evaluation of Earth system models (ESMs) has been developed that allows for routine comparison of single or multiple models, either against predecessor versions or against observations. The priority of the effort so far has been to target specific scientific themes focusing on selected essential climate variables (ECVs), a range of known systematic biases common to ESMs, such as coupled tropical climate variability, monsoons, Southern Ocean processes, continental dry biases, and soil hydrology-climate interactions, as well as atmospheric CO2 budgets, tropospheric and stratospheric ozone, and tropospheric aerosols. The tool is being developed in such a way that additional analyses can easily be added. A set of standard namelists for each scientific topic reproduces specific sets of diagnostics or performance metrics that have demonstrated their importance in ESM evaluation in the peer-reviewed literature. The Earth System Model Evaluation Tool (ESMValTool) is a community effort open to both users and developers encouraging open exchange of diagnostic source code and evaluation results from the Coupled Model Intercomparison Project (CMIP) ensemble. This will facilitate and improve ESM evaluation beyond the state-of-the-art and aims at supporting such activities within CMIP and at individual modelling centres. Ultimately, we envisage running the ESMValTool alongside the Earth System Grid Federation (ESGF) as part of a more routine evaluation of CMIP model simulations while utilizing observations available in standard formats (obs4MIPs) or provided by the user.

  12. ESMValTool (v1.0) - a community diagnostic and performance metrics tool for routine evaluation of Earth System Models in CMIP

    NASA Astrophysics Data System (ADS)

    Eyring, V.; Righi, M.; Evaldsson, M.; Lauer, A.; Wenzel, S.; Jones, C.; Anav, A.; Andrews, O.; Cionni, I.; Davin, E. L.; Deser, C.; Ehbrecht, C.; Friedlingstein, P.; Gleckler, P.; Gottschaldt, K.-D.; Hagemann, S.; Juckes, M.; Kindermann, S.; Krasting, J.; Kunert, D.; Levine, R.; Loew, A.; Mäkelä, J.; Martin, G.; Mason, E.; Phillips, A.; Read, S.; Rio, C.; Roehrig, R.; Senftleben, D.; Sterl, A.; van Ulft, L. H.; Walton, J.; Wang, S.; Williams, K. D.

    2015-09-01

    A community diagnostics and performance metrics tool for the evaluation of Earth System Models (ESMs) has been developed that allows for routine comparison of single or multiple models, either against predecessor versions or against observations. The priority of the effort so far has been to target specific scientific themes focusing on selected Essential Climate Variables (ECVs), a range of known systematic biases common to ESMs, such as coupled tropical climate variability, monsoons, Southern Ocean processes, continental dry biases and soil hydrology-climate interactions, as well as atmospheric CO2 budgets, tropospheric and stratospheric ozone, and tropospheric aerosols. The tool is being developed in such a way that additional analyses can easily be added. A set of standard namelists for each scientific topic reproduces specific sets of diagnostics or performance metrics that have demonstrated their importance in ESM evaluation in the peer-reviewed literature. The Earth System Model Evaluation Tool (ESMValTool) is a community effort open to both users and developers encouraging open exchange of diagnostic source code and evaluation results from the CMIP ensemble. This will facilitate and improve ESM evaluation beyond the state-of-the-art and aims at supporting such activities within the Coupled Model Intercomparison Project (CMIP) and at individual modelling centres. Ultimately, we envisage running the ESMValTool alongside the Earth System Grid Federation (ESGF) as part of a more routine evaluation of CMIP model simulations while utilizing observations available in standard formats (obs4MIPs) or provided by the user.

  13. Using multi-year data to evaluate performance of one-layer and multi-layer models in snow hydrology: an example from Col De Porte

    NASA Astrophysics Data System (ADS)

    Avanzi, Francesco; De Michele, Carlo; Morin, Samuel; Carmagnola, Carlo Maria; Ghezzi, Antonio; Lejeune, Yves

    2016-04-01

    Snow mass dynamics prediction represents an important task for snow hydrologists, since snow on the ground influences local and global water availability and streamflow timing and amount. Different modeling tools have been formulated for decades to predict snowmelt runoff dynamics and therefore to integrate snow mass dynamics into watershed hydrology modeling. Typical variables of interest include snow depth, snow bulk density, snow water equivalent (SWE) and snowmelt runoff. All these variables have been monitored at several locations worldwide for several decades in order to evaluate model performance. As a result, several multi-year datasets are now available to perform extensive evaluation tests. In this presentation, we report an example of these evaluations by discussing the performance of two models of different complexity in reproducing observed snow dynamics at a site in the French Alps (Col de Porte, 1325 m AMSL), where 18 continuous years of observations are available. We consider Crocus as an example of multi-layer physically based complex models and HyS (De Michele et al. 2013) as an example of a one-layer temperature-index model. Using multi-year data allows us to compare model performance over long periods of time, thus considering different climatic and snow conditions. Moreover, the use of continuous-time data allows us to evaluate model performance at different temporal resolutions. De Michele, C., Avanzi, F., Ghezzi, A., and Jommi, C.: Investigating the dynamics of bulk snow density in dry and wet conditions using a one-dimensional model, The Cryosphere, 7, 433-444, doi:10.5194/tc-7-433-2013, 2013.
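
    A minimal temperature-index (degree-day) snowpack sketch, of the general kind a one-layer model builds on, is given below. The degree-day factor and threshold temperature are assumed values, and the real HyS model also evolves bulk snow density, which this sketch omits.

      # Minimal sketch of a temperature-index (degree-day) snowpack model. The
      # degree-day factor and threshold temperature are assumed values; HyS itself
      # also evolves bulk snow density, which is omitted here.
      def degree_day_swe(precip_mm, temp_c, ddf=3.0, t_thresh=0.0):
          """Daily SWE series [mm] from precipitation [mm/day] and air temperature [degC]."""
          swe, series = 0.0, []
          for p, t in zip(precip_mm, temp_c):
              if t <= t_thresh:
                  swe += p                              # precipitation falls as snow
              melt = max(ddf * (t - t_thresh), 0.0)     # potential melt [mm/day]
              swe = max(swe - melt, 0.0)
              series.append(swe)
          return series

      # Example winter-to-spring sequence (assumed forcing).
      precip = [10, 0, 5, 0, 0, 8, 0, 0, 0, 0]
      temp   = [-4, -2, -1, 1, 3, -2, 2, 4, 6, 8]
      print([round(s, 1) for s in degree_day_swe(precip, temp)])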

  14. Evaluation of the performance of four chemical transport models in predicting the aerosol chemical composition in Europe in 2005

    NASA Astrophysics Data System (ADS)

    Prank, Marje; Sofiev, Mikhail; Tsyro, Svetlana; Hendriks, Carlijn; Semeena, Valiyaveetil; Vazhappilly Francis, Xavier; Butler, Tim; Denier van der Gon, Hugo; Friedrich, Rainer; Hendricks, Johannes; Kong, Xin; Lawrence, Mark; Righi, Mattia; Samaras, Zissis; Sausen, Robert; Kukkonen, Jaakko; Sokhi, Ranjeet

    2016-05-01

    Four regional chemistry transport models were applied to simulate the concentration and composition of particulate matter (PM) in Europe for 2005 at a horizontal resolution of ~ 20 km. The modelled concentrations were compared with measurements of PM chemical composition by the European Monitoring and Evaluation Programme (EMEP) monitoring network. All models systematically underestimated PM10 and PM2.5 by 10-60 %, depending on the model and the season of the year, when the calculated dry PM mass was compared with the measurements. The average water content at laboratory conditions was estimated at between 5 and 20 % for PM2.5 and between 10 and 25 % for PM10. For the majority of the PM chemical components, the relative underestimation was smaller than for total PM, the exceptions being carbonaceous particles and mineral dust. Some species, such as sea salt and NO3-, were overpredicted by the models. There were notable differences between the models' predictions of the seasonal variations of PM, mainly attributable to different treatments or omission of some source categories and aerosol processes. Benzo(a)pyrene concentrations were overestimated by all the models over the whole year. The study stresses the importance of improving the models' skill in simulating mineral dust and carbonaceous compounds, the necessity of high-quality emissions for wildland fires, as well as the need for an explicit consideration of aerosol water content in model-measurement comparisons.

  15. USE OF A STOCHASTIC MODEL TO EVALUATE UNCERTAINTY IN A PERFORMANCE ASSESSMENT AT THE SAVANNAH RIVER SITE - 8120

    SciTech Connect

    Hiergesell, R; Glenn Taylor, G

    2008-01-21

    A significant effort has recently been initiated to address probabilistic issues within radiological Performance Assessments (PAs) conducted at the Savannah River Site (SRS). This effort is considered to be part of a continual process, as is the program of PA analysis and maintenance across the Department of Energy (DOE) complex. At SRS, findings in the initial probabilistic analysis of the Slit Trenches in the E-Area PA were built upon and improved in the later development of the probabilistic model for the F-Area Tank Farm. Within the PA studies conducted at SRS, the initial uncertainty analysis effort was focused on the Slit Trenches as part of the E-Area PA. Specifically, a probabilistic model was developed for Slit Trench 5 within the E-Area. This model was utilized in deterministic mode to compare its results against those of the 2- and 3-D deterministic models. Then, utilizing the PDFs, the model was used to perform multiple realizations and produce probabilistic results. Later, a second probabilistic sensitivity and uncertainty analysis was undertaken for the F-Area Tank Farm PA; this effort is currently underway. Many improvements were made in how the flow and transport processes were incorporated within this model.

  16. Ion thruster performance model

    NASA Technical Reports Server (NTRS)

    Brophy, J. R.

    1984-01-01

    A model of ion thruster performance is developed for high flux density, cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature. The model and experiments indicate that thruster performance may be described in terms of only four thruster configuration dependent parameters and two operating parameters. The model also suggests that improved performance should be exhibited by thruster designs which extract a large fraction of the ions produced in the discharge chamber, which have good primary electron and neutral atom containment and which operate at high propellant flow rates.
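
    As an illustration of the bookkeeping the abstract describes, the sketch below relates a plasma ion energy cost and an extracted-ion fraction to a beam ion energy cost and discharge power. The values are assumed, and this is not Brophy's full algebraic model, which also resolves primary-electron losses and neutral containment.

      # Sketch of discharge-chamber performance bookkeeping: the beam ion energy
      # cost follows from the plasma ion energy cost and the fraction of ions
      # extracted into the beam. Values are assumed; this is not the paper's full
      # algebraic model, which also resolves primary-electron losses.
      def beam_ion_energy_cost(plasma_ion_cost_ev, extracted_fraction):
          return plasma_ion_cost_ev / extracted_fraction     # eV per beam ion

      def discharge_power_w(beam_current_a, plasma_ion_cost_ev, extracted_fraction):
          return beam_current_a * beam_ion_energy_cost(plasma_ion_cost_ev, extracted_fraction)

      # Example: 150 eV per plasma ion, 45% of ions extracted, 2 A beam current.
      eps_b = beam_ion_energy_cost(150.0, 0.45)
      print(f"beam ion energy cost: {eps_b:.0f} eV/ion")
      print(f"discharge power: {discharge_power_w(2.0, 150.0, 0.45):.0f} W")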

  17. A novel hybrid MCDM model for performance evaluation of research and technology organizations based on BSC approach.

    PubMed

    Varmazyar, Mohsen; Dehghanbaghi, Maryam; Afkhami, Mehdi

    2016-10-01

    Balanced Scorecard (BSC) is a strategic evaluation tool that uses both financial and non-financial indicators to determine the business performance of organizations or companies. In this paper, a new integrated approach based on the Balanced Scorecard (BSC) and multi-criteria decision making (MCDM) methods is proposed to evaluate the performance of the research centers of a research and technology organization (RTO) in Iran. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method is employed to reflect the interdependencies among BSC perspectives. Then, the Analytic Network Process (ANP) is utilized to weight the indices influencing the considered problem. In the next step, we apply four MCDM methods, including Additive Ratio Assessment (ARAS), Complex Proportional Assessment (COPRAS), Multi-Objective Optimization by Ratio Analysis (MOORA), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), for ranking the alternatives. Finally, the utility interval technique is applied to combine the ranking results of the MCDM methods. Weighted utility intervals are computed by constructing a correlation matrix between the ranking methods. A real case is presented to show the efficacy of the proposed approach.
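
    A compact sketch of the TOPSIS step in such an MCDM pipeline is shown below: vector-normalize the decision matrix, apply criterion weights (e.g., obtained from ANP), and rank alternatives by closeness to the ideal solution. The decision matrix and weights are placeholders, not the paper's data, and the other ranking methods and the utility-interval combination are not reproduced.

      # Compact sketch of the TOPSIS ranking step. The decision matrix and weights
      # below are placeholders, not the paper's data.
      import numpy as np

      X = np.array([[7.0, 0.60, 8.0],      # research centers (rows) x BSC indices (cols)
                    [6.0, 0.80, 6.5],
                    [8.5, 0.55, 7.0]])
      weights = np.array([0.5, 0.3, 0.2])     # criterion weights (assumed, e.g. from ANP)
      benefit = np.array([True, True, True])  # all criteria treated as benefit-type here

      V = weights * X / np.linalg.norm(X, axis=0)            # weighted, vector-normalized matrix
      ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
      anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
      d_plus  = np.linalg.norm(V - ideal, axis=1)
      d_minus = np.linalg.norm(V - anti, axis=1)
      closeness = d_minus / (d_plus + d_minus)               # 1 = best possible

      for i in np.argsort(-closeness):
          print(f"center {i + 1}: closeness = {closeness[i]:.3f}")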

  18. A novel hybrid MCDM model for performance evaluation of research and technology organizations based on BSC approach.

    PubMed

    Varmazyar, Mohsen; Dehghanbaghi, Maryam; Afkhami, Mehdi

    2016-10-01

    Balanced Scorecard (BSC) is a strategic evaluation tool that uses both financial and non-financial indicators to determine the business performance of organizations or companies. In this paper, a new integrated approach based on the Balanced Scorecard (BSC) and multi-criteria decision making (MCDM) methods is proposed to evaluate the performance of the research centers of a research and technology organization (RTO) in Iran. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method is employed to reflect the interdependencies among BSC perspectives. Then, the Analytic Network Process (ANP) is utilized to weight the indices influencing the considered problem. In the next step, we apply four MCDM methods, including Additive Ratio Assessment (ARAS), Complex Proportional Assessment (COPRAS), Multi-Objective Optimization by Ratio Analysis (MOORA), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), for ranking the alternatives. Finally, the utility interval technique is applied to combine the ranking results of the MCDM methods. Weighted utility intervals are computed by constructing a correlation matrix between the ranking methods. A real case is presented to show the efficacy of the proposed approach. PMID:27371786

  19. Performance Criteria and Evaluation System

    1992-06-18

    The Performance Criteria and Evaluation System (PCES) was developed in order to make a data base of criteria accessible to radiation safety staff. The criteria included in the package are applicable to occupational radiation safety at DOE reactor and nonreactor nuclear facilities, but any data base of criteria may be created using the Criterion Data Base Utility (CDU). PCES assists personnel in carrying out oversight, line, and support activities.

  20. Propagation modeling and evaluation of communication system performance in nuclear environments. Final report 11 Nov 76-29 Feb 80

    SciTech Connect

    Rino, C.L.

    1980-02-29

    This report summarizes propagation modeling work for predicting communication-system performance in disturbed nuclear environments. Simple formulas are developed that characterize the onset of scintillation, the coherence time of the scintillation, the coherence bandwidth loss and associated delay jitter, plus the angle-of-arrival scintillation for radar applications. The calculations are based on a power-law phase-screen model, and they fully accommodate a varying spectral index and arbitrary propagation angles relative to the principal irregularity axis. In a power-law environment, the signal structure is critically dependent upon the power-law index, particularly under strong-scatter conditions.

  1. EVALUATION OF THE COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODEL VERSION 4.5: UNCERTAINTIES AND SENSITIVITIES IMPACTING MODEL PERFORMANCE: PART II - PARTICULATE MATTER

    EPA Science Inventory

    This paper presents an analysis of the CMAQ v4.5 model performance for particulate matter and its chemical components for the simulated year 2001. This is part two of a two-part series of papers that examines the model performance of CMAQ v4.5.

  2. Application of 2D numerical model to unsteady performance evaluation of vertical-axis tidal current turbine

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Qu, Hengliang; Shi, Hongda; Hu, Gexing; Hyun, Beom-Soo

    2016-09-01

    Tidal current energy is renewable and sustainable, and is a promising alternative energy resource for the future electricity supply. The straight-bladed vertical-axis turbine is regarded as a useful tool to capture tidal current energy, especially under low-speed conditions. A 2D unsteady numerical model based on Ansys-Fluent 12.0 is established to conduct the numerical simulation, which is validated by the corresponding experimental data. For the unsteady calculations, the SST model, 2×10^5 and 0.01 s are selected as the proper turbulence model, mesh number, and time step, respectively. Detailed contours of the velocity distributions around the rotor blade foils have been provided for a flow field analysis. The tip speed ratio (TSR) determines the azimuth angle of the appearance of the torque peak, which occurs once for a blade in a single revolution. It is also found that simply increasing the incident flow velocity could not improve the turbine performance accordingly. The peaks of the averaged power and torque coefficients appear at TSRs of 2.1 and 1.8, respectively. Furthermore, several shapes of duct augmentation are proposed to improve the turbine performance by contracting the flow path gradually from the open mouth of the duct to the rotor. The duct augmentation can significantly enhance the power and torque output, and the elliptic shape enables the best performance of the turbine. The numerical results prove the capability of the present 2D model for unsteady hydrodynamics and an operating performance analysis of the vertical tidal stream turbine.
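
    The performance quantities reported above relate through a few simple definitions: tip speed ratio, rotor power, and power coefficient for a straight-bladed vertical-axis rotor. The sketch below shows those relations with assumed rotor dimensions, flow speed, and torque; it is not derived from the paper's CFD results.

      # Short sketch relating performance quantities for a vertical-axis tidal
      # turbine: tip speed ratio (TSR) and power coefficient Cp. Rotor dimensions,
      # flow speed, and torque are assumed values.
      RHO = 1025.0                      # seawater density [kg/m3]

      def tsr(omega, radius, v_flow):
          return omega * radius / v_flow

      def power_coefficient(power_w, radius, height, v_flow):
          area = 2.0 * radius * height                  # swept area of a straight-bladed VAT
          return power_w / (0.5 * RHO * area * v_flow ** 3)

      # Example: 1.0 m radius, 1.2 m blade span, 1.5 m/s current, 3.15 rad/s, 400 N m torque.
      omega, torque = 3.15, 400.0
      power = torque * omega
      lam = tsr(omega, 1.0, 1.5)
      cp = power_coefficient(power, 1.0, 1.2, 1.5)
      print(f"TSR = {lam:.2f}  P = {power:.0f} W  Cp = {cp:.2f}")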

  3. Performance Evaluation of K-DEMO Cable-in-conduit Conductors Using the Florida Electro-Mechanical Cable Model

    SciTech Connect

    Zhai, Yuhu

    2013-07-16

    The United States ITER Project Office (USIPO) is responsible for the design of the Toroidal Field (TF) insert coil, which will allow validation of the performance of significant lengths of the conductors to be used in the full-scale TF coils under relevant conditions of field, current density and mechanical strain. The Japan Atomic Energy Agency (JAEA) will build the TF insert, which will be tested at the Central Solenoid Model Coil (CSMC) Test Facility at JAEA, Naka, Japan. A three-dimensional mathematical model of the TF insert was created based on the initial design geometry data and included the following features: orthotropic material properties of the superconductor material and insulation; external magnetic field from the CSMC; temperature-dependent properties of the materials; and pre-compression and plastic deformation in the lap joint. Major geometrical characteristics of the design were preserved, including the cable jacket and insulation shape, mandrel outline, and support clamps and spacers. The model is capable of performing coupled structural, thermal, and electromagnetic analysis using ANSYS. Numerical simulations were performed for room-temperature conditions, cooldown to 4 K, and the operating regime with 68 kA current at 11.8 T background field. The simulations led to the final design of the coil, producing the required strain levels on the cable while simultaneously satisfying the ITER magnet structural design criteria.

  4. Airlift column photobioreactors for Porphyridium sp. culturing: Part II. verification of dynamic growth rate model for reactor performance evaluation.

    PubMed

    Luo, Hu-Ping; Al-Dahhan, Muthanna H

    2012-04-01

    A dynamic growth rate model has been developed to quantify the impact of hydrodynamics on the growth of photosynthetic microorganisms and to predict photobioreactor performance. Rigorous verification of such reactor models, however, is rare in the literature. In this part of the work, verification of the dynamic growth rate model developed in Luo and Al-Dahhan (2004) [Biotech Bioeng 85(4): 382-393] was attempted using the experimental results reported in Part I of this work and results from the literature. The irradiance distribution inside the studied reactor was also measured at different optical densities and successfully correlated by the Lambert-Beer law. When reliable hydrodynamic data were used, the dynamic growth rate model successfully predicted the algae's growth rate obtained in the experiments in both the low and high irradiance regimes, indicating the robustness of this model. The simulation results also indicate that the hydrodynamics differ significantly between the real algae culturing system and an air-water system, which signifies the importance of using reliable data input for the growth rate model.
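
    The Lambert-Beer correlation mentioned above describes how irradiance decays with depth into the culture at a given biomass concentration. A minimal sketch follows; the extinction coefficient, biomass concentrations, and depths are placeholder values, not the coefficients fitted in the study.

      # Sketch of the Lambert-Beer law used to describe the irradiance distribution
      # inside a photobioreactor at a given biomass concentration. The extinction
      # coefficient, biomass concentrations, and depths are placeholder values.
      import math

      def local_irradiance(i0, biomass_g_l, depth_m, k_ext=150.0):
          """Irradiance after passing through 'depth_m' of culture.

          k_ext is a biomass extinction coefficient [m2 kg-1] (assumed value);
          biomass in g/L is numerically equal to kg/m3.
          """
          return i0 * math.exp(-k_ext * biomass_g_l * depth_m)

      i0 = 250.0                             # incident irradiance at the column surface
      for x in (0.5, 1.0, 2.0):              # biomass concentration in g/L
          profile = [round(local_irradiance(i0, x, d), 1) for d in (0.0, 0.01, 0.02, 0.04)]
          print(f"X = {x} g/L -> I at 0/1/2/4 cm depth:", profile)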

  5. ATR performance modeling concepts

    NASA Astrophysics Data System (ADS)

    Ross, Timothy D.; Baker, Hyatt B.; Nolan, Adam R.; McGinnis, Ryan E.; Paulson, Christopher R.

    2016-05-01

    Performance models are needed for automatic target recognition (ATR) development and use. ATRs consume sensor data and produce decisions about the scene observed. ATR performance models (APMs) on the other hand consume operating conditions (OCs) and produce probabilities about what the ATR will produce. APMs are needed for many modeling roles of many kinds of ATRs (each with different sensing modality and exploitation functionality combinations); moreover, there are different approaches to constructing the APMs. Therefore, although many APMs have been developed, there is rarely one that fits a particular need. Clarified APM concepts may allow us to recognize new uses of existing APMs and identify new APM technologies and components that better support coverage of the needed APMs. The concepts begin with thinking of ATRs as mapping OCs of the real scene (including the sensor data) to reports. An APM is then a mapping from explicit quantized OCs (represented with less resolution than the real OCs) and latent OC distributions to report distributions. The roles of APMs can be distinguished by the explicit OCs they consume. APMs used in simulations consume the true state that the ATR is attempting to report. APMs used online with the exploitation consume the sensor signal and derivatives, such as match scores. APMs used in sensor management consume neither of those, but estimate performance from other OCs. This paper will summarize the major building blocks for APMs, including knowledge sources, OC models, look-up tables, analytical and learned mappings, and tools for signal synthesis and exploitation.

  6. A new performance evaluation tool

    SciTech Connect

    Kindl, F.H.

    1996-12-31

    The paper describes a Steam Cycle Diagnostic Program (SCDP) that has been specifically designed to respond to the increasing need of electric power generators for periodic performance monitoring and quick identification of the causes of any observed increase in fuel consumption. There is a description of program objectives, modeling and test data inputs, results, underlying program logic, validation of program accuracy by comparison with acceptance-test-quality data, and examples of program usage.

  7. METAPHOR (version 1): Users guide. [performability modeling

    NASA Technical Reports Server (NTRS)

    Furchtgott, D. G.

    1979-01-01

    General information concerning METAPHOR, an interactive software package to facilitate performability modeling and evaluation, is presented. Example systems are studied and their performabilities are calculated. Each available METAPHOR command and array generator is described. Complete METAPHOR sessions are included.

  8. Small wind turbine performance evaluation using field test data and a coupled aero-electro-mechanical model

    NASA Astrophysics Data System (ADS)

    Wallace, Brian D.

    A series of field tests and theoretical analyses were performed on various wind turbine rotor designs at two Penn State residential-scale wind-electric facilities. This work involved the prediction and experimental measurement of the electrical and aerodynamic performance of three wind turbines: a 3 kW-rated Whisper 175, a 2.4 kW-rated Skystream 3.7, and the Penn State designed Carolus wind turbine. Both the Skystream and Whisper 175 wind turbines are OEM blades which were originally installed at the facilities. The Carolus rotor is a carbon-fiber composite 2-bladed machine, designed and assembled at Penn State, with the intent of replacing the Whisper 175 rotor at the off-grid system. Rotor aerodynamic performance is modeled using WT_Perf, a National Renewable Energy Laboratory developed Blade Element Momentum theory based performance prediction code. Steady-state power curves are predicted by coupling experimentally determined electrical characteristics with the aerodynamic performance of the rotor simulated with WT_Perf. A dynamometer test stand is used to establish the electromechanical efficiencies of the wind-electric system generator. Through the coupling of WT_Perf and dynamometer test results, an aero-electro-mechanical analysis procedure is developed and provides accurate predictions of wind system performance. The analysis of three different wind turbines gives a comprehensive assessment of the capability of the field test facilities and the accuracy of aero-electro-mechanical analysis procedures. Results from this study show that the Carolus and Whisper 175 rotors are running at higher tip-speed ratios than are optimum for power production. The aero-electro-mechanical analysis predicted the high operating tip-speed ratios of the rotors and was accurate at predicting output power for the systems. It is shown that the wind turbines operate at high tip speeds because of a mismatch between the aerodynamic drive torque and the operating torque of the wind
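
    The coupling step described above can be sketched generically: combine a rotor Cp(TSR) characteristic, of the kind a BEM code such as WT_Perf would predict, with a generator efficiency curve of the kind dynamometer tests provide, to estimate a steady-state electrical power curve. Both curves and all turbine parameters below are assumed placeholders, not the tested rotors' data.

      # Sketch of the aero-electro-mechanical coupling step: a placeholder rotor
      # Cp(TSR) curve combined with a placeholder generator efficiency curve to
      # estimate an electrical power curve. All values are assumed.
      import math

      RHO, RADIUS = 1.225, 2.1                 # air density [kg/m3], rotor radius [m] (assumed)
      AREA = math.pi * RADIUS ** 2

      def cp_rotor(tsr):
          """Placeholder aerodynamic power coefficient, peaking near TSR = 6."""
          return max(0.45 - 0.012 * (tsr - 6.0) ** 2, 0.0)

      def generator_efficiency(shaft_power_w):
          """Placeholder electromechanical efficiency of dynamometer-curve form."""
          return 0.55 + 0.30 * min(shaft_power_w / 2500.0, 1.0)

      def electrical_power(v_wind, tsr_operating=7.5):
          shaft = 0.5 * RHO * AREA * cp_rotor(tsr_operating) * v_wind ** 3
          return shaft * generator_efficiency(shaft)

      for v in (4.0, 6.0, 8.0, 10.0):
          print(f"{v:4.1f} m/s -> {electrical_power(v):7.0f} W electrical")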

  9. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type...

  10. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type...

  11. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type...

  12. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type...

  13. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type...

  14. Towards Fully Coupled Atmosphere-Hydrology Model Systems: Recent Developments and Performance Evaluation For Different Climate Regions

    NASA Astrophysics Data System (ADS)

    Kunstmann, Harald; Fersch, Benjamin; Rummler, Thomas; Wagner, Sven; Arnault, Joel; Senatore, Alfonso; Gochis, David

    2015-04-01

    Limitations in the adequate representation of the terrestrial hydrologic processes controlling land-atmosphere coupling are assumed to be a significant factor currently limiting the prediction skill of regional atmospheric models. The necessity for more comprehensive process descriptions accounting for the interdependencies between water and energy fluxes at the compartmental interfaces is driving recent developments in hydrometeorological modeling towards more sophisticated treatment of terrestrial hydrologic processes. It is particularly the lateral surface and subsurface water fluxes that are neglected in standard regional atmospheric models. Current developments in enhanced lateral hydrological process descriptions in the WRF model system will be presented. Based on WRF and WRF-Hydro, new modules and concepts for integrating the saturated zone via a two-dimensional groundwater scheme, and approaches for coupling it to the unsaturated zone, will be presented. The fully coupled model system allows modeling of the complete regional water cycle, from the top of the atmosphere, via the boundary layer, the land surface, the unsaturated zone and the saturated zone, to the flow in the river beds. With this increasing complexity, which also allows description of the complex interactions of the regional water cycle on different spatial and temporal scales, the reliability and predictability of model simulations can only be shown if performance is tested for a variety of hydrological variables in different climatological environments. We will show results of fully coupled simulations for the regions of perennially humid Southern Bavaria, Germany (rivers Isar and Ammer) and semiarid to subhumid West Africa (river Sissilli). In both regions, in addition to streamflow measurements, the validation of heat fluxes is also possible via eddy-covariance stations within hydrometeorological testbeds. In the German Isar/Ammer region, e.g., we apply the extended WRF-Hydro modeling system on a 3 km atmospheric- grid

  15. Numerical Prediction of Cold Season Fog Events over Complex Terrain: the Performance of the WRF Model During MATERHORN-Fog and Early Evaluation

    NASA Astrophysics Data System (ADS)

    Pu, Zhaoxia; Chachere, Catherine N.; Hoch, Sebastian W.; Pardyjak, Eric; Gultepe, Ismail

    2016-08-01

    A field campaign to study cold season fog in complex terrain was conducted as a component of the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program from 07 January to 01 February 2015 in Salt Lake City and Heber City, Utah, United States. To support the field campaign, an advanced research version of the Weather Research and Forecasting (WRF) model was used to produce real-time forecasts and model evaluation. This paper summarizes the model performance and preliminary evaluation of the model against the observations. Results indicate that accurately forecasting fog is challenging for the WRF model, which produces large errors in the near-surface variables, such as relative humidity, temperature, and wind fields in the model forecasts. Specifically, compared with observations, the WRF model overpredicted fog events with extended duration in Salt Lake City because it produced higher moisture, lower wind speeds, and colder temperatures near the surface. In contrast, the WRF model missed all fog events in Heber City, as it reproduced lower moisture, higher wind speeds, and warmer temperatures against observations at the near-surface level. The inability of the model to produce proper levels of near-surface atmospheric conditions under fog conditions reflects uncertainties in model physical parameterizations, such as the surface layer, boundary layer, and microphysical schemes.

  17. Prospective safety performance evaluation on construction sites.

    PubMed

    Wu, Xianguo; Liu, Qian; Zhang, Limao; Skibniewski, Miroslaw J; Wang, Yanhong

    2015-05-01

    This paper presents a systematic Structural Equation Modeling (SEM) based approach for Prospective Safety Performance Evaluation (PSPE) on construction sites, with causal relationships and interactions between enablers and the goals of PSPE taken into account. Based on a sample of 450 valid questionnaire surveys from 30 Chinese construction enterprises, a SEM model with 26 items for PSPE in the context of the Chinese construction industry is established and then verified through a goodness-of-fit test. Three typical types of construction enterprises, namely a state-owned enterprise, a private enterprise and a Sino-foreign joint venture, are selected as samples to measure the level of safety performance, given that enterprise scale, ownership and business strategy differ. Results provide a full understanding of safety performance practice in the construction industry, and indicate that overall safety performance on working sites is rated at level III (Fair) or above. This can be explained by the gradual maturing of industry norms and by the pressure on construction enterprises to improve safety performance so as not to be eliminated from the government-led construction industry. The differences in safety performance practice among the construction enterprise categories are compared and analyzed according to the evaluation results. This research provides insights into cause-effect relationships among safety performance factors and goals, which, in turn, can facilitate the improvement of safety performance in the construction industry.

  19. How do current irrigation practices perform? Evaluation of different irrigation scheduling approaches based on experiments and crop model simulations

    NASA Astrophysics Data System (ADS)

    Seidel, Sabine J.; Werisch, Stefan; Barfus, Klemens; Wagner, Michael; Schütze, Niels; Laber, Hermann

    2014-05-01

    The increasing worldwide water scarcity, together with the costs and negative off-site effects of irrigation, makes it necessary to develop irrigation methods that increase water productivity. Various approaches are available for irrigation scheduling. Traditionally, schedules are calculated from soil water balance (SWB) calculations using some measure of reference evaporation and empirical crop coefficients. These crop-specific coefficients are provided by the FAO but are also available for different regions (e.g. Germany). The approach is simple, but there are several inaccuracies due to simplifications and limitations such as poor transferability. Crop growth models - which simulate the main physiological plant processes through a set of assumptions and calibration parameters - are widely used to support decision making, but also for yield gap or scenario analyses. One major advantage of mechanistic models compared to empirical approaches is their spatial and temporal transferability. Irrigation scheduling can also be based on measurements of soil water tension, which is closely related to plant stress. Such measurements are precise, simple and can be automated, but it is difficult to decide where to place the probes, especially in heterogeneous soils. In this study, a two-year field experiment was used to extensively evaluate the three irrigation scheduling approaches mentioned above with regard to their efficiency of irrigation water application, with the aim of promoting better agronomic practices in irrigated horticulture. To evaluate the tested irrigation scheduling approaches, an extensive plant and soil water data collection was used to precisely calibrate the mechanistic crop model Daisy. The experiment was conducted with white cabbage (Brassica oleracea L.) on a sandy loam field in 2012/13 near Dresden, Germany. Three irrigation scheduling approaches were tested: (i) two schedules were estimated based on SWB calculations using different crop
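
    As a rough illustration of the SWB scheduling idea described above, the sketch below triggers irrigation once an assumed management-allowed depletion of the root zone is exceeded. The crop coefficients, soil parameters and irrigation depth are placeholders, not values from the study.

      # Illustrative soil-water-balance (SWB) irrigation scheduling sketch.
      # Kc values, soil parameters and the trigger threshold are assumptions
      # for demonstration only; they are not taken from the study above.

      def swb_schedule(et0, rain, kc, taw=60.0, allowed_depletion=0.5, net_irrigation=20.0):
          """Return irrigation events (day, mm) from daily ET0/rain/Kc series (mm, mm, -)."""
          depletion = 0.0          # current root-zone depletion (mm)
          events = []
          for day, (et0_d, rain_d, kc_d) in enumerate(zip(et0, rain, kc)):
              etc = kc_d * et0_d                        # crop evapotranspiration
              depletion = max(0.0, depletion + etc - rain_d)
              if depletion > allowed_depletion * taw:   # trigger irrigation
                  events.append((day, net_irrigation))
                  depletion = max(0.0, depletion - net_irrigation)
              depletion = min(depletion, taw)           # cannot exceed total available water
          return events

      # Example: 10 dry days with constant ET0 and Kc
      print(swb_schedule([5.0] * 10, [0.0] * 10, [1.05] * 10))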

  20. A Discrete Event Simulation Model for Evaluating the Performances of an M/G/C/C State Dependent Queuing System

    PubMed Central

    Khalid, Ruzelan; M. Nawawi, Mohd Kamal; Kawsar, Luthful A.; Ghani, Noraida A.; Kamil, Anton A.; Mustafa, Adli

    2013-01-01

    M/G/C/C state-dependent queuing networks consider service rates as a function of the number of residing entities (e.g., pedestrians, vehicles, and products). However, modeling such dynamic rates is not supported in modern discrete event simulation (DES) software. We designed an approach to address this limitation and used it to construct an M/G/C/C state-dependent queuing model in Arena software. Using the model, we evaluated and analyzed the impacts of various arrival rates on the throughput, the blocking probability, the expected service time and the expected number of entities in a complex network topology. Results indicated that for each network there is a range of arrival rates in which the simulation results fluctuate drastically across replications, causing the simulation results and the analytical results to exhibit discrepancies. Detailed results showing how closely the simulation results tally with the analytical results, in both abstract and graphical forms, together with scientific justifications, are documented and discussed. PMID:23560037
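
    The following minimal sketch illustrates the state-dependent idea behind an M/G/C/C loss system: the per-entity service rate is a function of the number of entities currently in the system, and arrivals are blocked when capacity is reached. For simplicity it uses exponential service (a birth-death approximation rather than a general service distribution), and the arrival rate, capacity and rate function are hypothetical, not taken from the paper or from Arena.

      import random

      def simulate_mgcc(lam, mu_of_n, capacity, horizon, seed=0):
          """Birth-death simulation of a state-dependent M/M(n)/C/C loss system.
          mu_of_n(n) is the per-entity service rate when n entities are present."""
          rng = random.Random(seed)
          t, n = 0.0, 0
          arrivals = blocked = departures = 0
          while t < horizon:
              dep_rate = n * mu_of_n(n) if n > 0 else 0.0
              total = lam + dep_rate
              t += rng.expovariate(total)
              if rng.random() < lam / total:        # next event is an arrival
                  arrivals += 1
                  if n < capacity:
                      n += 1
                  else:
                      blocked += 1                  # lost (blocked) arrival
              else:                                 # next event is a departure
                  n -= 1
                  departures += 1
          return departures / horizon, blocked / arrivals   # throughput, blocking prob.

      # Hypothetical congestion effect: per-entity rate drops as the corridor fills.
      rate = lambda n: 1.0 * (1.0 - 0.5 * n / 50)
      print(simulate_mgcc(lam=30.0, mu_of_n=rate, capacity=50, horizon=1000.0))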

  1. Evaluating Performance Portability of OpenACC

    SciTech Connect

    Sabne, Amit J; Sakdhnagool, Putt; Lee, Seyong; Vetter, Jeffrey S

    2015-01-01

    Accelerator-based heterogeneous computing is gaining momentum in the High Performance Computing arena. However, the increased complexity of accelerator architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. While the abstraction provided by OpenACC offers productivity, it raises questions about its portability. This paper evaluates the performance portability obtained by OpenACC on twelve OpenACC programs on NVIDIA CUDA, AMD GCN, and Intel MIC architectures. We study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  2. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., DEPARTMENT OF DEFENSE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 236.604 Performance evaluation. Prepare a separate performance evaluation after... familiar with the architect-engineer contractor's performance....

  3. Evaluation of the predictive performance of a pharmacokinetic model for propofol in Japanese macaques (Macaca fuscata fuscata).

    PubMed

    Miyabe-Nishiwaki, T; Masui, K; Kaneko, A; Nishiwaki, K; Nishio, T; Kanazawa, H

    2013-04-01

    Propofol is a short-acting intravenous anesthetic used for induction and maintenance of anesthesia. The objective of this study was to assess a population pharmacokinetic (PPK) model for Japanese macaques during a step-down infusion of propofol. Five male Japanese macaques were immobilized with ketamine (10 mg/kg) and atropine (0.02 mg/kg). A bolus dose of propofol (5 mg/kg) was administered intravenously (360 mg/kg/h) followed by step-down infusion at 40 mg/kg/h for 10 min, 20 mg/kg/h for 10 min, and then 15 mg/kg/h for 100 min. Venous blood samples were collected repeatedly following the administration. The plasma concentration of propofol (Cp) was measured by high-speed LC-FL. PPK analyses were performed using NONMEM VII. The median absolute prediction error (MDAPE) and median prediction error (MDPE), the indices of prediction inaccuracy and bias, respectively, were calculated, and PE minus individual MDPE vs. time was plotted to show the variability of prediction errors. In addition, we developed another population pharmacokinetic model using the previous and current datasets. The previous PK model achieved stable prediction of propofol Cp throughout the study period, although it underestimated Cp. The step-down infusion regimen described in this study would be feasible in macaques during noninvasive procedures.
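
    For readers unfamiliar with these indices, the sketch below shows how Varvel-style prediction errors are commonly computed from measured and model-predicted concentrations; the concentration values are invented for illustration, and the exact pooling across individuals used in the study may differ.

      import statistics

      def prediction_errors(observed, predicted):
          """Percentage performance errors PE = (Cobs - Cpred) / Cpred * 100."""
          return [(o - p) / p * 100.0 for o, p in zip(observed, predicted)]

      def mdpe(observed, predicted):
          """Median prediction error: index of bias."""
          return statistics.median(prediction_errors(observed, predicted))

      def mdape(observed, predicted):
          """Median absolute prediction error: index of inaccuracy."""
          return statistics.median(abs(pe) for pe in prediction_errors(observed, predicted))

      cobs = [3.1, 2.8, 2.2, 1.9]   # measured propofol Cp (hypothetical values)
      cpred = [3.5, 3.0, 2.5, 2.0]  # model-predicted Cp (hypothetical values)
      print(mdpe(cobs, cpred), mdape(cobs, cpred))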

  4. Presentation of the EURODELTA III intercomparison exercise - evaluation of the chemistry transport models' performance on criteria pollutants and joint analysis with meteorology

    NASA Astrophysics Data System (ADS)

    Bessagnet, Bertrand; Pirovano, Guido; Mircea, Mihaela; Cuvelier, Cornelius; Aulinger, Armin; Calori, Giuseppe; Ciarelli, Giancarlo; Manders, Astrid; Stern, Rainer; Tsyro, Svetlana; García Vivanco, Marta; Thunis, Philippe; Pay, Maria-Teresa; Colette, Augustin; Couvidat, Florian; Meleux, Frédérik; Rouïl, Laurence; Ung, Anthony; Aksoyoglu, Sebnem; María Baldasano, José; Bieser, Johannes; Briganti, Gino; Cappelletti, Andrea; D'Isidoro, Massimo; Finardi, Sandro; Kranenburg, Richard; Silibello, Camillo; Carnevale, Claudio; Aas, Wenche; Dupont, Jean-Charles; Fagerli, Hilde; Gonzalez, Lucia; Menut, Laurent; Prévôt, André S. H.; Roberts, Pete; White, Les

    2016-10-01

    The EURODELTA III exercise has facilitated a comprehensive intercomparison and evaluation of chemistry transport model performance. Participating models performed calculations for four 1-month periods in different seasons in the years 2006 to 2009, allowing the influence of different meteorological conditions on model performance to be evaluated. The exercise was performed with strict requirements for the input data, with few exceptions. As a consequence, most of the differences in the outputs can be attributed to differences in the model formulations of chemical and physical processes. The models were evaluated mainly against background rural stations in Europe. Performance was assessed in terms of bias, root mean square error and correlation with respect to the concentrations of air pollutants (NO2, O3, SO2, PM10 and PM2.5), as well as key meteorological variables. Although most meteorological parameters were prescribed, some variables such as the planetary boundary layer (PBL) height and the vertical diffusion coefficient were derived in the model preprocessors and can partly explain the spread in model results. In general, the daytime PBL height is underestimated by all models. The largest variability of predicted PBL height is observed over the ocean and seas. For ozone, this study shows the importance of proper boundary conditions for accurate model calculations, and hence for the regime of the gas and particle chemistry. The models show similar and quite good performance for nitrogen dioxide, whereas they struggle to accurately reproduce measured sulfur dioxide concentrations (for which the agreement with observations is the poorest). In general, the models provide a close-to-observations map of particulate matter (PM2.5 and PM10) concentrations over Europe, with correlations in the range 0.4-0.7 and a systematic underestimation reaching -10 µg m-3 for PM10. The highest concentrations are much more strongly underestimated, particularly in wintertime. Further evaluation of

  5. MSAD actuator solenoid, performance evaluation and modification

    SciTech Connect

    North, G.

    1983-04-19

    A small conical-faced solenoid actuator is tested in order to develop design criteria for improved performance including increased pull sensitivity. In addition to increased pull for the normal electrical inputs, a reduction in pull response to short duration electrical noise pulses is also required. Along with dynamic testing of the solenoid, a linear circuit model is developed. This model permits calculation of the dynamic forces and currents which can be expected with various electrical inputs. The model parameters are related to the actual solenoid and allow the effects of winding density and shading rings to be evaluated.

  6. SEASAT SAR performance evaluation study

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The performance of the SEASAT synthetic aperture radar (SAR) sensor was evaluated using data processed by the MDA digital processor. Two particular aspects are considered: the location accuracy of image data, and the calibration of the measured backscatter amplitude of a set of corner reflectors. The image location accuracy was assessed by selecting identifiable targets in several scenes, converting their image locations to UTM coordinates, and comparing the results to map sheets. The error standard deviation is measured to be approximately 30 meters. The amplitude was calibrated by measuring the responses of the Goldstone corner reflector array and comparing the results to theoretical values. A linear regression of the measured against theoretical values results in a slope of 0.954 with a correlation coefficient of 0.970.
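
    A minimal sketch of the amplitude-calibration comparison, assuming the measured and theoretical corner-reflector responses are available as matched arrays; the values below are invented, and only the procedure (regression slope and correlation coefficient) mirrors the abstract.

      import numpy as np

      theoretical = np.array([10.0, 12.5, 15.0, 17.5, 20.0])   # hypothetical response values
      measured    = np.array([ 9.6, 12.1, 14.2, 16.9, 18.8])   # hypothetical measurements

      slope, intercept = np.polyfit(theoretical, measured, 1)  # linear regression
      r = np.corrcoef(theoretical, measured)[0, 1]             # correlation coefficient
      print(f"slope={slope:.3f}, r={r:.3f}")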

  7. Applying Human-performance Models to Designing and Evaluating Nuclear Power Plants: Review Guidance and Technical Basis

    SciTech Connect

    O'Hara, J.M.

    2009-11-30

    Human performance models (HPMs) are simulations of human behavior with which we can predict human performance. Designers use them to support their human factors engineering (HFE) programs for a wide range of complex systems, including commercial nuclear power plants. Applicants to U.S. Nuclear Regulatory Commission (NRC) can use HPMs for design certifications, operating licenses, and license amendments. In the context of nuclear-plant safety, it is important to assure that HPMs are verified and validated, and their usage is consistent with their intended purpose. Using HPMs improperly may generate misleading or incorrect information, entailing safety concerns. The objective of this research was to develop guidance to support the NRC staff's reviews of an applicant's use of HPMs in an HFE program. The guidance is divided into three topical areas: (1) HPM Verification, (2) HPM Validation, and (3) User Interface Verification. Following this guidance will help ensure the benefits of HPMs are achieved in a technically sound, defensible manner. During the course of developing this guidance, I identified several issues that could not be addressed; they also are discussed.

  8. 13 CFR 304.4 - Performance evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Performance evaluations. 304.4... ECONOMIC DEVELOPMENT DISTRICTS § 304.4 Performance evaluations. (a) EDA shall evaluate the management... of at least one (1) other District Organization in the performance evaluation on a...

  9. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor... reports must be made using Standard Form 1421, Performance Evaluation (Architect-Engineer) as...

  10. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor... reports must be made using Standard Form 1421, Performance Evaluation (Architect-Engineer) as...

  11. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor... reports must be made using Standard Form 1421, Performance Evaluation (Architect-Engineer) as...

  12. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor... reports must be made using Standard Form 1421, Performance Evaluation (Architect-Engineer) as...

  13. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Performance evaluation... Architect-Engineer Services 236.604 Performance evaluation. (a) Preparation of performance reports. Use DD Form 2631, Performance Evaluation (Architect-Engineer), instead of SF 1421. (2) Prepare a...

  14. Performance evaluation of the SITE® model to estimate energy flux in a tropical semi-deciduous forest of the southern Amazon Basin.

    PubMed

    Sanches, Luciana; de Andrade, Nara Luísa Reis; Costa, Marcos Heil; Alves, Marcelo de Carvalho; Gaio, Denilton

    2011-05-01

    The SITE® model was originally developed to study the response of tropical ecosystems to varying environmental conditions. The present study evaluated the applicability of the SITE model to the simulation of energy fluxes in a tropical semi-deciduous forest of the southern Amazon Basin. The model was run with data representing the wet and dry seasons, and was calibrated for each season. The output data of the calibrated model [net radiation (Rn), latent heat flux (LE) and sensible heat flux (H)] were compared with data observed in the field for validation. Considering the changes in parameter calibration for a 30-min time-step simulation, the magnitude of the variation in temporal flux was satisfactory when compared to the field observations. There was a tendency to underestimate LE and to overestimate H. Of all the calibration parameters, the soil moisture parameter presented the highest variation over the seasons, thus influencing SITE model performance.

  15. Using a shared governance structure to evaluate the implementation of a new model of care: the shared experience of a performance improvement committee.

    PubMed

    Myers, Mary; Parchen, Debra; Geraci, Marilla; Brenholtz, Roger; Knisely-Carrigan, Denise; Hastings, Clare

    2013-10-01

    Sustaining change in the behaviors and habits of experienced practicing nurses can be frustrating and daunting, even when changes are based on evidence. Partnering with an active shared governance structure to communicate change and elicit feedback is an established method to foster partnership, equity, accountability, and ownership. Few recent exemplars in the literature link shared governance, change management, and evidence-based practice to transitions in care models. This article describes an innovative staff-driven approach used by nurses in a shared governance performance improvement committee to use evidence-based practice in determining the best methods to evaluate the implementation of a new model of care.

  16. A system-level mathematical model for evaluation of power train performance of load-leveled electric-vehicles

    NASA Technical Reports Server (NTRS)

    Purohit, G. P.; Leising, C. J.

    1984-01-01

    The power train performance of load-leveled electric vehicles can be compared with that of non-load-leveled systems by use of a simple mathematical model. This method of measurement involves a number of parameters, including the degree of load leveling and regeneration, the flywheel mechanical-to-electrical energy fraction, and the efficiencies of the motor, generator, flywheel, and transmission. Basic efficiency terms are defined and representative comparisons of a variety of systems are presented. Results of the study indicate that mechanical transfer of energy into and out of the flywheel is more advantageous than electrical transfer. An optimum degree of load leveling may be achieved in terms of the driving cycle, battery characteristics, mode of mechanization, and the efficiency of the components. For state-of-the-art mechanically coupled flywheel systems, load-leveling losses can be held to a reasonable 10%; electrically coupled systems can have losses that are up to six times larger. Propulsion system efficiencies for mechanically coupled flywheel systems are predicted to be approximately the 60% achieved on conventional non-load-leveled systems.
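
    A toy calculation in the spirit of the system-level efficiency bookkeeping described above: the overall path efficiency is built from the fraction of energy routed through the flywheel and the efficiencies of the components along each path. All efficiency values and the flywheel fraction are assumptions for illustration, not the figures from the cited model.

      def path_efficiency(direct_eff, flywheel_path_eff, flywheel_fraction):
          """Overall efficiency when a fraction of the energy passes via the flywheel."""
          return (1 - flywheel_fraction) * direct_eff + flywheel_fraction * flywheel_path_eff

      # Hypothetical component efficiencies: mechanical vs electrical flywheel coupling.
      mech = path_efficiency(direct_eff=0.90,
                             flywheel_path_eff=0.90 * 0.95 * 0.95,  # transmission, flywheel in/out
                             flywheel_fraction=0.5)
      elec = path_efficiency(direct_eff=0.90,
                             flywheel_path_eff=0.90 * 0.85 * 0.85,  # motor/generator conversions
                             flywheel_fraction=0.5)
      print(f"mechanical coupling: {mech:.2f}, electrical coupling: {elec:.2f}")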

  17. Performance evaluation of an automotive thermoelectric generator

    NASA Astrophysics Data System (ADS)

    Dubitsky, Andrei O.

    Around 40% of the total fuel energy in typical internal combustion engines (ICEs) is rejected to the environment in the form of exhaust gas waste heat. Efficient recovery of this waste heat in automobiles can promise a fuel economy improvement of 5%. The thermal energy can be harvested through thermoelectric generators (TEGs) utilizing the Seebeck effect. In the present work, a versatile test bench has been designed and built in order to simulate conditions found on test vehicles. This allows experimental performance evaluation and model validation of automotive thermoelectric generators. An electrically heated exhaust gas circuit and a circulator based coolant loop enable integrated system testing of hot and cold side heat exchangers, thermoelectric modules (TEMs), and thermal interface materials at various scales. A transient thermal model of the coolant loop was created in order to design a system which can maintain constant coolant temperature under variable heat input. Additionally, as electrical heaters cannot match the transient response of an ICE, modelling was completed in order to design a relaxed exhaust flow and temperature history utilizing the system thermal lag. This profile reduced required heating power and gas flow rates by over 50%. The test bench was used to evaluate a DOE/GM initial prototype automotive TEG and validate analytical performance models. The maximum electrical power generation was found to be 54 W with a thermal conversion efficiency of 1.8%. It has been found that thermal interface management is critical for achieving maximum system performance, with novel designs being considered for further improvement.

  18. Hydrological evaluation of landfill performance (HELP) model assessment of the geology at Los Alamos National Laboratory, Technical Area 54, Material Disposal Area J

    SciTech Connect

    Vigil-Holterman, L.

    2002-01-01

    The purposes of this paper are to: (1) conduct HELP model runs with variations in weather data, profile characteristics, and hydraulic conductivities for the major rock units; (2) compare and contrast the results of the simulations; (3) obtain an estimate of leakage through the landfill from the surface to the aquifer; and (4) evaluate contaminant transport to the aquifer utilizing the leakage estimate. The conclusions of this paper are: (1) the HELP model is useful for assessing landfill design alternatives or the performance of a pre-existing landfill; (2) model results using site-specific data incorporated into the Weather Generator (Trial 4) varied significantly from the generalized runs (Trials 1-3); consequently, models that lack site-specific data should be used cautiously; and (3) data from this study suggest that there will not be significant downward percolation of leachate from the surface of the landfill cap to the aquifer; leachate transport rates have been calculated to be slow.

  19. Performance Evaluation of a Data Validation System

    NASA Technical Reports Server (NTRS)

    Wong, Edmond (Technical Monitor); Sowers, T. Shane; Santi, L. Michael; Bickford, Randall L.

    2005-01-01

    Online data validation is a performance-enhancing component of modern control and health management systems. It is essential that performance of the data validation system be verified prior to its use in a control and health management system. A new Data Qualification and Validation (DQV) Test-bed application was developed to provide a systematic test environment for this performance verification. The DQV Test-bed was used to evaluate a model-based data validation package known as the Data Quality Validation Studio (DQVS). DQVS was employed as the primary data validation component of a rocket engine health management (EHM) system developed under NASA's NGLT (Next Generation Launch Technology) program. In this paper, the DQVS and DQV Test-bed software applications are described, and the DQV Test-bed verification procedure for this EHM system application is presented. Test-bed results are summarized and implications for EHM system performance improvements are discussed.

  20. Modeling the fate of atmospheric reduced nitrogen during the Rocky Mountain Atmospheric Nitrogen and Sulfur Study (RoMANS): Performance evaluation and diagnosis using integrated processes rate analysis

    NASA Astrophysics Data System (ADS)

    Rodriguez, Marco A.; Barna, Michael G.; Gebhart, Kristi A.; Hand, Jennifer L.; Adelman, Zachariah E.; Schichtel, Bret A.; Collett, Jeffrey L., Jr.; Malm, William C.

    2011-01-01

    Excess wet and dry deposition of nitrogen-containing compounds is a concern at a number of national parks. The Rocky Mountain Atmospheric Nitrogen and Sulfur Study (RoMANS) was conducted during the spring and summer of 2006 to identify the overall mix of ambient and deposited sulfur and nitrogen at Rocky Mountain National Park (RMNP), in north-central Colorado. The Comprehensive Air Quality Model with extensions (CAMx) was used to simulate the fate of gaseous and particulate species subjected to multiple chemical and physical processes during RoMANS. This study presents an operational evaluation with a special emphasis on the model performance of reduced nitrogen species. The evaluation showed large negative biases and errors at RMNP and the entire domain for ammonia; therefore the model was considered inadequate for future source apportionment applications. The CAMx Integrated Processes Rate (IPR) analysis tool was used to elucidate the potential causes behind the poor model performance. IPR served as a tool to diagnose the relative contributions of individual physical and chemical processes to the final concentrations of reduced nitrogen species. The IPR analysis revealed that dry deposition is the largest sink of ammonia in the model, with some cells losing almost 100% of the available mass. Closer examination of the ammonia dry deposition velocities in CAMx found that they were up to a factor of 10 larger than those reported in the literature. A series of sensitivity simulations were then performed by changing the original deposition velocities with a simple multiplicative scaling factor. These simulations showed that even when the dry deposition values were altered to reduce their influence, the model was still unable to replicate the observed time series; i.e., it fixed the average bias, but it did not improve the precision.

  1. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations...

  2. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations...

  3. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations...

  4. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Performance evaluation. 236.604 Section 236.604 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM... Architect-Engineer Services 236.604 Performance evaluation. Prepare a separate performance evaluation...

  5. 13 CFR 304.4 - Performance evaluations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Performance evaluations. 304.4... ECONOMIC DEVELOPMENT DISTRICTS § 304.4 Performance evaluations. (a) EDA shall evaluate the management... the District Organization continues to receive Investment Assistance. EDA's evaluation shall...

  6. Evaluating GC/MS Performance

    SciTech Connect

    Alcaraz, A; Dougan, A

    2006-11-26

    'Air and Water Check': Select View - Diagnostics/Vacuum Control - Vacuum - Air and Water Check. A Yes/No dialogue box will appear; select No (use current values). It is very important to select No! Otherwise the tune values are drastically altered. The software program will generate a water/air report similar to Figure 3. Evaluating the GC/MS system with a performance standard: This procedure should allow the analyst to verify that the chromatographic column and associated components are working adequately to separate the various classes of chemical compounds (e.g., hydrocarbons, alcohols, fatty acids, aromatics, etc.). Use the same GC/MS conditions used to collect the system background and solvent check (part 1 of this document). Figure 5 is an example of a commercial GC/MS column test mixture used to evaluate the GC/MS prior to analysis.

  7. PERFORMANCE EVALUATION OF TYPE I MARINE SANITATION DEVICES

    EPA Science Inventory

    This performance test was designed to evaluate the effectiveness of two Type I Marine Sanitation Devices (MSDs): the Electro Scan Model EST 12, manufactured by Raritan Engineering Company, Inc., and the Thermopure-2, manufactured by Gross Mechanical Laboratories, Inc. Performance...

  8. Behavioral patterns of environmental performance evaluation programs.

    PubMed

    Li, Wanxin; Mauerhofer, Volker

    2016-11-01

    During the past decades numerous environmental performance evaluation programs have been developed and implemented on different geographic scales. This paper develops a taxonomy of environmental management behavioral patterns in order to provide a practical comparison tool for environmental performance evaluation programs. Ten such programs, purposively selected, are mapped against the four identified behavioral patterns: diagnosis, negotiation, learning, and socialization and learning. Overall, we found that schemes which serve to diagnose environmental abnormalities are mainly externally imposed and have been developed as a result of technical debates concerning data sources, methodology and ranking criteria. Learning-oriented schemes are characterized by processes through which free exchange of ideas and mutual, adaptive learning can occur. Schemes developed by a higher authority to influence the behavior of lower levels of government have been adopted by the evaluated parties to signal their excellent environmental performance. The evaluation schemes classified as socializing and learning have incorporated dialogue, participation, and capacity building in their program design. In conclusion, we consider the 'fitness for purpose' of the various schemes, the merits of our analytical model and the future possibilities of fostering capacity building in the realm of wicked environmental challenges. PMID:27513220

  10. Error Reduction Program. [combustor performance evaluation codes

    NASA Technical Reports Server (NTRS)

    Syed, S. A.; Chiappetta, L. M.; Gosman, A. D.

    1985-01-01

    The details of a study to select, incorporate and evaluate the best available finite difference scheme to reduce numerical error in combustor performance evaluation codes are described. The combustor performance computer programs chosen were the two-dimensional and three-dimensional versions of Pratt & Whitney's TEACH code. The criteria used to select schemes required that the difference equations mirror the properties of the governing differential equation, be more accurate than the current hybrid difference scheme, be stable and economical, be compatible with TEACH codes, use only modest amounts of additional storage, and be relatively simple. The methods of assessment used in the selection process consisted of examination of the difference equation, evaluation of the properties of the coefficient matrix, Taylor series analysis, and performance on model problems. Five schemes from the literature and three schemes developed during the course of the study were evaluated. This effort resulted in the incorporation of a scheme in 3D-TEACH which is usually more accurate than the hybrid differencing method and never less accurate.

  11. Performance evaluation of two OCR systems

    SciTech Connect

    Chen, S.; Subramaniam, S.; Haralick, R.M.; Phillips, I.T.

    1994-12-31

    An experimental protocol for the performance evaluation of Optical Character Recognition (OCR) algorithms is described. The protocol is intended to serve as a model for using the University of Washington English Document Image Database-I to evaluate OCR systems. The plain text zones (without special symbols) in this database have over 2,300,000 characters. The performances of two UNIX-based OCR systems, namely Caere OCR v109a and Xerox ScanWorX v2.0, are measured. The results suggest that Caere OCR outperforms ScanWorX in terms of recognition accuracy; however, ScanWorX is more robust in the presence of image flaws.
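
    OCR comparisons of this kind typically rest on a character-error metric. Below is a minimal sketch, assuming accuracy is defined from the Levenshtein edit distance between the ground-truth text and the OCR output; the metric shown is an assumption for illustration (the UW protocol may define accuracy differently), and the strings are hypothetical.

      def edit_distance(a, b):
          """Levenshtein distance between strings a and b (dynamic programming)."""
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              curr = [i]
              for j, cb in enumerate(b, 1):
                  curr.append(min(prev[j] + 1,                 # deletion
                                  curr[j - 1] + 1,             # insertion
                                  prev[j - 1] + (ca != cb)))   # substitution
              prev = curr
          return prev[-1]

      def char_accuracy(ground_truth, ocr_output):
          """Character accuracy = (N - edit distance) / N, N = ground-truth length."""
          n = len(ground_truth)
          return (n - edit_distance(ground_truth, ocr_output)) / n

      print(char_accuracy("performance evaluation", "perf0rmance evaluatlon"))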

  12. How To Evaluate Teacher Performance.

    ERIC Educational Resources Information Center

    Wilson, Laval S.

    Teacher evaluations tend to be like clothes. Whatever is in vogue at the time is utilized extensively by those who are attempting to remain modern and current. If you stay around long enough, the "hot" methods of today will probably recycle to be the new discovery of the future. In the end, each school district develops an evaluation process that…

  13. Performance evaluation of carbon dioxide-alkanolamine- water system by equation of state/excess Gibbs energy models

    NASA Astrophysics Data System (ADS)

    Suleman, H.; Maulud, A. S.; Man, Z.

    2016-06-01

    Numerous thermodynamic techniques have been applied to correlate carbon dioxide-alkanolamine-water systems, with varying accuracy and complexity. With the advent of high-pressure carbon dioxide absorption in industry, the development of high-pressure thermodynamic models has become a necessity. Equation of state/excess Gibbs energy models promise a substantial improvement in this field. Many researchers have shown the application of these models to the high-pressure vapour-liquid equilibria of the said system with good correlation. However, no study shows the range of application of these models in the presence of other competitive techniques. Therefore, this study quantitatively describes the range of application of equation of state/excess Gibbs energy models to carbon dioxide-alkanolamine systems. The model uses the Linear Combination of Vidal and Michelsen mixing rule for the correlation of carbon dioxide absorption in single aqueous monoethanolamine, diethanolamine and methyldiethanolamine mixtures. The results show that the correlations of the equation of state/excess Gibbs energy models exhibit a transient change at a carbon dioxide loading of 0.8. Therefore, these models are applicable to the above-mentioned system for carbon dioxide loadings of 0.8 mol/mol and higher. The observations are similar in behaviour for all tested alkanolamines and are therefore generalized for the system.

  14. Performance evaluation of traveling wave ultrasonic motor based on a model with visco-elastic friction layer on stator.

    PubMed

    Qu, Jianjun; Sun, Fengyan; Zhao, Chunsheng

    2006-12-01

    A new visco-elastic contact model of the traveling wave ultrasonic motor (TWUSM) is proposed. In this model, the rotor is assumed to be a rigid body and the friction material on the stator teeth surface is assumed to be a visco-elastic body. Both the load characteristics of the TWUSM, such as rotation speed, torque and efficiency, and the effects of the interface parameters between stator and rotor on the output characteristics of the TWUSM can be calculated and simulated numerically in MATLAB based on this model. The model is compared with a model of a compliant slider and a rigid stator. The results show that the proposed model yields a larger stall torque. The simulated results are compared with test results, and it is found that the load characteristics are in good agreement.

  15. Method for evaluating performance of clinical pharmacists.

    PubMed

    Schumock, G T; Leister, K A; Edwards, D; Wareham, P S; Burkhart, V D

    1990-01-01

    A performance-evaluation process that satisfies Joint Commission on Accreditation of Healthcare Organizations criteria and state policies is described. A three-part, criteria-based, weighted performance-evaluation tool specific for clinical pharmacists was designed for use in two institutions affiliated with the University of Washington. The three parts are self-appraisal and goal setting, peer evaluation, and supervisory evaluation. Objective criteria within each section were weighted to reflect the relative importance of that characteristic to the job that the clinical pharmacist performs. The performance score for each criterion is multiplied by the weighted value to produce an outcome score. The peer evaluation and self-appraisal/goal-setting parts of the evaluation are completed before the formal performance-evaluation interview. The supervisory evaluation is completed during the interview. For this evaluation, supervisors use both the standard university employee performance evaluation form and a set of specific criteria applicable to the clinical pharmacists in these institutions. The first performance evaluations done under this new system were conducted in May 1989. Pharmacists believed that the new system was more objective and allowed more interchange between the manager and the pharmacist. The peer-evaluation part of the system was seen as extremely constructive. This three-part, criteria-based system for evaluation of the job performance of clinical pharmacists could easily be adopted by other pharmacy departments.
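
    A minimal sketch of the weighted, criteria-based scoring described above: each criterion's performance score is multiplied by its weight to give an outcome score, and the outcome scores are summed. The criteria, weights and scores below are hypothetical, not the instrument used at the two institutions.

      # Hypothetical criteria, weights and performance scores for illustration only.
      criteria = {
          "clinical interventions": {"weight": 0.30, "score": 4},
          "drug information":       {"weight": 0.25, "score": 3},
          "documentation":          {"weight": 0.20, "score": 5},
          "teamwork":               {"weight": 0.25, "score": 4},
      }

      # Outcome score per criterion = performance score * weight; overall = sum.
      outcome_scores = {name: c["weight"] * c["score"] for name, c in criteria.items()}
      total = sum(outcome_scores.values())
      print(outcome_scores, total)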

  16. VPPA weld model evaluation

    NASA Astrophysics Data System (ADS)

    McCutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.

    1992-07-01

    NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970's but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments was used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.

  17. VPPA weld model evaluation

    NASA Technical Reports Server (NTRS)

    Mccutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.

    1992-01-01

    NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970's but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments was used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.

  18. Predictive performance models and multiple task performance

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  19. A Model Performance

    ERIC Educational Resources Information Center

    Thornton, Bradley D.; Smalley, Robert A.

    2008-01-01

    Building information modeling (BIM) uses three-dimensional modeling concepts, information technology and interoperable software to design, construct and operate a facility. However, BIM can be more than a tool for virtual modeling--it can provide schools with a 3-D walkthrough of a project while it still is on the electronic drawing board. BIM can…

  20. Suitability of Hydrologic Evaluation of Landfill Performance (HELP) model of the US Environmental Protection Agency for the simulation of the water balance of landfill cover systems

    NASA Astrophysics Data System (ADS)

    Berger, K.; Melchior, S.; Miehlich, G.

    1996-12-01

    Cover systems are widely used to safeguard landfills and contaminated sites. The evaluation of the water balance is crucial for the design of landfill covers. The Hydrologic Evaluation of Landfill Performance (HELP) model of the US Environmental Protection Agency was developed for this purpose. This paper discusses some limitations of version 2 of this model, which was developed for the United States, and some operational difficulties in using it in Germany. The model results are tested against field data of the water balance, measured on test fields on the Georgswerder landfill in Hamburg. Theoretically, HELP considers only gravitational forces as driving forces of water flow. Therefore capillary barriers cannot be simulated. Furthermore, the formation of and the flow through macropores are not considered, one of the main critical processes that diminish the effectiveness of compacted soil liners. In the output comparison, the match between measured and simulated data is quite good for lateral drainage, but poor for surface runoff and for liner leakage through compacted soil liners. A further validation study is planned for HELP version 3 using a broader range of test field data.

  1. On the use of the post-closure methods uncertainty band to evaluate the performance of land surface models against eddy covariance flux data

    NASA Astrophysics Data System (ADS)

    Ingwersen, J.; Imukova, K.; Högy, P.; Streck, T.

    2015-04-01

    The energy balance of eddy covariance (EC) flux data is normally not closed. Therefore, at least if used for modelling, EC flux data are usually post-closed, i.e. the measured turbulent fluxes are adjusted so as to close the energy balance. At the current state of knowledge, however, it is not clear how to partition the missing energy in the right way. Eddy flux data therefore contain some uncertainty due to the unknown nature of the energy balance gap, which should be considered in model evaluation and the interpretation of simulation results. We propose to construct the post-closure methods uncertainty band (PUB), which essentially designates the differences between non-adjusted flux data and flux data adjusted with the three post-closure methods (Bowen ratio, latent heat flux (LE) and sensible heat flux (H) method). To demonstrate this approach, simulations with the NOAH-MP land surface model were evaluated based on EC measurements conducted at a winter wheat stand in southwest Germany in 2011, and the performance of the Jarvis and Ball-Berry stomatal resistance scheme was compared. The width of the PUB of the LE was up to 110 W m-2 (21% of net radiation). Our study shows that it is crucial to account for the uncertainty in EC flux data originating from lacking energy balance closure. Working with only a single post-closing method might result in severe misinterpretations in model-data comparisons.
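
    A small sketch of how such a band could be constructed for LE from a single averaging interval, assuming the three standard adjustments named in the abstract: Bowen-ratio closure (the residual is distributed so that H/LE is preserved), the LE method (the whole residual is added to LE) and the H method (the residual goes to H, leaving LE unchanged). The flux values are invented half-hourly averages in W m-2.

      def le_band(rn, g, h, le):
          """Return (min, max) of LE over the unadjusted and post-closed variants."""
          residual = rn - g - h - le             # energy balance gap
          le_bowen = le * (rn - g) / (h + le)    # Bowen-ratio method
          le_le    = le + residual               # LE method: residual added to LE
          le_h     = le                          # H method: LE left unchanged
          candidates = [le, le_bowen, le_le, le_h]
          return min(candidates), max(candidates)

      lo, hi = le_band(rn=450.0, g=40.0, h=120.0, le=210.0)
      print(f"LE uncertainty band: {lo:.1f} .. {hi:.1f} W m-2 (width {hi - lo:.1f})")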

  2. On the use of the post-closure method uncertainty band to evaluate the performance of land surface models against eddy covariance flux data

    NASA Astrophysics Data System (ADS)

    Ingwersen, J.; Imukova, K.; Högy, P.; Streck, T.

    2014-12-01

    The energy balance of eddy covariance (EC) flux data is normally not closed. Therefore, at least if used for modeling, EC flux data are usually post-closed, i.e. the measured turbulent fluxes are adjusted so as to close the energy balance. At the current state of knowledge, however, it is not clear how to partition the missing energy in the right way. Eddy flux data therefore contain some uncertainty due to the unknown nature of the energy balance gap, which should be considered in model evaluation and the interpretation of simulation results. We propose to construct the post-closure method uncertainty band (PUB), which essentially designates the differences between non-adjusted flux data and flux data adjusted with the three post-closure methods (Bowen ratio, latent heat flux (LE) and sensible heat flux (H) method). To demonstrate this approach, simulations with the NOAH-MP land surface model were evaluated based on EC measurements conducted at a winter wheat stand in Southwest Germany in 2011, and the performance of the Jarvis and Ball-Berry stomatal resistance scheme was compared. The width of the PUB of the LE was up to 110 W m-2 (21% of net radiation). Our study shows that it is crucial to account for the uncertainty of EC flux data originating from lacking energy balance closure. Working with only a single post-closing method might result in severe misinterpretations in model-data comparisons.

  3. Seismic Performance Evaluation of Concentrically Braced Frames

    NASA Astrophysics Data System (ADS)

    Hsiao, Po-Chien

    Concentrically braced frames (CBFs) are broadly used as lateral-load resisting systems in buildings throughout the US. In high seismic regions, special concentrically braced frames (SCBFs) are used where ductility under seismic loading is necessary. Their large elastic stiffness and strength efficiently sustain the seismic demands during smaller, more frequent earthquakes. During large, infrequent earthquakes, SCBFs exhibit highly nonlinear behavior due to brace buckling and yielding and the inelastic behavior induced by secondary deformation of the framing system. These response modes reduce the system demands relative to an elastic system without supplemental damping. In design, the reduced demands are estimated using a response modification coefficient, commonly termed the R factor. The R factor values are important to the seismic performance of a building. Procedures put forth in FEMA P695 were developed to establish R factors through a formalized procedure with the objective of a consistent level of collapse potential for all building types. The primary objective of this research was to evaluate the seismic performance of SCBFs. To achieve this goal, an improved model for SCBFs, including a proposed gusset plate connection model, that permits accurate simulation of inelastic deformations of the braces, gusset plate connections, beams and columns, as well as brace fracture, was developed and validated using a large number of experiments. Response history analyses were conducted using the validated model. A series of SCBF buildings of different story heights were designed and evaluated. The FEMA P695 method and an alternate procedure were applied to SCBFs and NCBFs, where NCBFs are braced frames designed without ductile detailing. The evaluation using the P695 method shows results contrary to the alternate evaluation procedure and to current knowledge, in which short-story SCBF structures are more vulnerable than their taller counterparts and NCBFs are more vulnerable than SCBFs.

  4. A Method for Missile Autopilot Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Eguchi, Hirofumi

    The essential benefit of HardWare-In-the-Loop (HWIL) simulation is that the performance of the autopilot system is evaluated realistically, without modeling error, by using actual hardware such as seeker systems, autopilot systems and servo equipment. HWIL simulation, however, requires very expensive facilities, in which the target model generator is the indispensable subsystem. In this paper, an example of an HWIL simulation facility with a target model generator for RF seeker systems is introduced first. Because this generator, like most other generators, has a functional limitation on the line-of-sight angle, a test method to overcome the line-of-sight angle limitation is then proposed.

  5. Infrasound Sensor Models and Evaluations

    SciTech Connect

    KROMER,RICHARD P.; MCDONALD,TIMOTHY S.

    2000-07-31

    Sandia National Laboratories has continued to evaluate the performance of infrasound sensors that are candidates for use by the International Monitoring System (IMS) for the Comprehensive Nuclear-Test-Ban Treaty Organization. The performance criteria against which these sensors are assessed are specified in ``Operational Manual for Infra-sound Monitoring and the International Exchange of Infrasound Data''. This presentation includes the results of efforts concerning two of these sensors: (1) Chaparral Physics Model 5; and (2) CEA MB2000. Sandia is working with Chaparral Physics in order to improve the capability of the Model 5 (a prototype sensor) to be calibrated and evaluated. With the assistance of the Scripps Institution of Oceanography, Sandia is also conducting tests to evaluate the performance of the CEA MB2000. Sensor models based on theoretical transfer functions and manufacturer specifications for these two devices have been developed. This presentation will feature the results of coherence-based data analysis of signals from a huddle test, utilizing several sensors of both types, in order to verify the sensor performance.
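    Coherence-based huddle-test analysis of the kind mentioned above compares the outputs of co-located sensors exposed to the same pressure field: frequency bands with near-unity coherence are dominated by the common signal, while the incoherent remainder reflects sensor self-noise. The snippet below is a generic sketch using synthetic records, not the Sandia analysis; the sampling rate, record length and noise level are assumptions.

```python
# Illustrative coherence check between two co-located (huddle-test) records.
import numpy as np
from scipy.signal import coherence

fs = 20.0                                   # assumed sample rate, Hz
t = np.arange(0, 600, 1 / fs)
common = np.sin(2 * np.pi * 0.5 * t)        # shared infrasound-like signal
x = common + 0.1 * np.random.randn(t.size)  # sensor A = signal + self-noise
y = common + 0.1 * np.random.randn(t.size)  # sensor B = signal + self-noise

f, Cxy = coherence(x, y, fs=fs, nperseg=2048)
# High coherence at signal frequencies indicates both sensors reproduce the
# same input; the incoherent remainder bounds the sensor self-noise.
print(f[np.argmax(Cxy)], Cxy.max())
```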

  6. Evaluating Service Organization Models

    PubMed Central

    TOUATI, NASSERA; PINEAULT, RAYNALD; CHAMPAGNE, FRANÇOIS; DENIS, JEAN-LOUIS; BROUSSELLE, ASTRID; CONTANDRIOPOULOS, ANDRÉ-PIERRE; GENEAU, ROBERT

    2016-01-01

    Based on the example of the evaluation of service organization models, this article shows how a configurational approach overcomes the limits of traditional methods, which for the most part have studied the individual components of various models considered independently of one another. These traditional methods have led to results (observed effects) that are difficult to interpret. The configurational approach, in contrast, is based on the hypothesis that effects are associated with a set of internally coherent model features that form various configurations. These configurations, like their effects, are context-dependent. We explore the theoretical basis of the configurational approach in order to emphasize its relevance, and discuss the methodological challenges inherent in the application of this approach through an in-depth analysis of the scientific literature. We also propose methodological solutions to these challenges. We illustrate with an example how a configurational approach has been used to evaluate primary care models. Finally, we begin a discussion on the implications of this new evaluation approach for the scientific and decision-making communities. PMID:27274682

  7. EVALUATING THE PERFORMANCE OF REGIONAL-SCALE PHOTOCHEMICAL MODELING SYSTEMS. PART III-PRECURSOR PREDICTIONS. (R825260)

    EPA Science Inventory

    Abstract

    Two regional-scale photochemical modeling systems, RAMS/UAM-V and MM5/MAQSIP, are used to simulate precursor concentrations for the 4 June–31 August 1995 period. The time series of simulated and observed precursor concentrations are spectrally deco...

  8. What makes a top research medical school? A call for a new model to evaluate academic physicians and medical school performance.

    PubMed

    Goldstein, Matthew J; Lunn, Mitchell R; Peng, Lily

    2015-05-01

    Since the publication of the Flexner Report in 1910, the medical education enterprise has undergone many changes to ensure that medical schools meet a minimum standard for the curricula and clinical training they offer students. Although the efforts of the licensing and accrediting bodies have raised the quality of medical education, the educational processes that produce the physicians who provide the best patient care and conduct the best biomedical research have not been identified. Comparative analyses are powerful tools to understand the differences between institutions, but they are challenging to carry out. As a result, the analysis performed by U.S. News & World Report (USN&WR) has become the default tool to compare U.S. medical schools. Medical educators must explore more rigorous and equitable approaches to analyze and understand the performance of medical schools. In particular, a better understanding and more thorough evaluation of the most successful institutions in producing academic physicians with biomedical research careers are needed. In this Perspective, the authors present a new model to evaluate medical schools' production of academic physicians who advance medicine through basic, clinical, translational, and implementation science research. This model is based on relevant and accessible objective criteria that should replace the subjective criteria used in the current USN&WR rankings system. By fostering a national discussion about the most meaningful criteria that should be measured and reported, the authors hope to increase transparency of assessment standards and ultimately improve educational quality.

  9. What makes a top research medical school? A call for a new model to evaluate academic physicians and medical school performance.

    PubMed

    Goldstein, Matthew J; Lunn, Mitchell R; Peng, Lily

    2015-05-01

    Since the publication of the Flexner Report in 1910, the medical education enterprise has undergone many changes to ensure that medical schools meet a minimum standard for the curricula and clinical training they offer students. Although the efforts of the licensing and accrediting bodies have raised the quality of medical education, the educational processes that produce the physicians who provide the best patient care and conduct the best biomedical research have not been identified. Comparative analyses are powerful tools to understand the differences between institutions, but they are challenging to carry out. As a result, the analysis performed by U.S. News & World Report (USN&WR) has become the default tool to compare U.S. medical schools. Medical educators must explore more rigorous and equitable approaches to analyze and understand the performance of medical schools. In particular, a better understanding and more thorough evaluation of the most successful institutions in producing academic physicians with biomedical research careers are needed. In this Perspective, the authors present a new model to evaluate medical schools' production of academic physicians who advance medicine through basic, clinical, translational, and implementation science research. This model is based on relevant and accessible objective criteria that should replace the subjective criteria used in the current USN&WR rankings system. By fostering a national discussion about the most meaningful criteria that should be measured and reported, the authors hope to increase transparency of assessment standards and ultimately improve educational quality. PMID:25607941

  10. Evaluating the performance of the Community Land Model (CLM4.5) for a western US coniferous forest under annual drought stress

    NASA Astrophysics Data System (ADS)

    Duarte, H.; Lin, J. C.; Ehleringer, J. R.

    2014-12-01

    The Community Land Model (CLM) is the land model of NCAR's Community Earth System Model (CESM), encompassing land biogeophysics, biogeochemistry, hydrology, and ecosystem dynamics components. Several modifications were implemented in its most recent release (CLM4.5), including a revised photosynthesis scheme and improved hydrology, among an extensive list of updates. Since version 4.0, CLM also includes parameterizations related to photosynthetic carbon isotope discrimination. In this study we evaluate the performance of CLM4.5 at the Wind River Field Station AmeriFlux site (US-Wrc), with particular attention to its parameterization of ecosystem drought response. US-Wrc is located near the WA/OR border in a coniferous forest (Douglas-fir/western hemlock), in a region characterized by a strongly seasonal climate and summer drought. Long-term meteorological/biological data are available through the AmeriFlux repository (almost a decade of L4 (gap-filled) data available, starting in 1998). Another factor that makes the site unique is the availability of a decade-long record of carbon isotope ratios (δ13C). Here we run CLM in offline mode, forced by the observed meteorological data, and then compare modeled surface fluxes (CO2, sensible heat, and latent heat) against observed eddy-covariance fluxes. We also use the observed δ13C values to assess the parameterizations of carbon isotope discrimination in the model. We will present the results of the analysis and discuss possible improvements in the model.

  11. Toward More Performance Evaluation in Chemistry

    NASA Astrophysics Data System (ADS)

    Rasp, Sharon L.

    1998-01-01

    The history of the author's experiences in testing and changes in evaluation philosophy are chronicled. Tests in her classroom have moved from solely paper-pencil, multiple-choice/objective formats to also include lab performance evaluations. Examples of performance evaluations in both a traditional chemistry course and a consumer-level chemistry course are given. Analysis of students' test results indicates the need to continue to include a variety of methods in evaluating student performance in science.

  12. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    NASA Astrophysics Data System (ADS)

    Montes-Hugo, M.; Bouakba, H.; Arnone, R.

    2014-06-01

    The understanding of phytoplankton dynamics in the Gulf of the Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC, Lee's quasi-analytical, QAA, and Garver-Siegel-Maritorena semi-empirical, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated based on SeaWiFS images and shipboard measurements obtained during May of 2000 and April 2001. In general, aph(443) estimates derived from coupling KU and QAA models presented the smallest differences with respect to in situ determinations as measured by high-pressure liquid chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary and represented an additive error of up to 24.3%. By using SeaDAS (SeaWiFS Data Analysis System) default values for the optical cross section of phytoplankton (i.e., aph*(443) = aph(443)/chl = 0.056 m2 mg-1), the median relative bias of our chl estimates as derived from the most accurate spaceborne aph(443) retrievals and with respect to in situ determinations increased up to 29%.
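    The final chlorophyll step described above is a simple scaling of the retrieved phytoplankton absorption by a fixed specific absorption cross section, followed by a bias statistic against in situ values. The sketch below illustrates that calculation; the SeaDAS default cross section of 0.056 m2 mg-1 is taken from the abstract, while the data arrays are invented for illustration.

```python
# Sketch: chlorophyll from a satellite aph(443) retrieval, plus a bias metric.
import numpy as np

APH_STAR_443 = 0.056                        # m2 mg-1, SeaDAS default (assumed constant)

def chl_from_aph(aph_443):
    return aph_443 / APH_STAR_443           # mg m-3

aph_sat = np.array([0.020, 0.035, 0.050])   # hypothetical QAA retrievals
chl_insitu = np.array([0.40, 0.70, 0.85])   # hypothetical in situ chlorophyll

chl_sat = chl_from_aph(aph_sat)
rel_bias = np.median((chl_sat - chl_insitu) / chl_insitu) * 100.0
print(chl_sat, f"median relative bias = {rel_bias:.1f}%")
```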

  13. Pragmatic geometric model evaluation

    NASA Astrophysics Data System (ADS)

    Pamer, Robert

    2015-04-01

    Quantification of subsurface model reliability is mathematically and technically demanding as there are many different sources of uncertainty and some of the factors can be assessed merely in a subjective way. For many practical applications in industry or risk assessment (e. g. geothermal drilling) a quantitative estimation of possible geometric variations in depth unit is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data like geological maps, borehole data and conceptually driven construction of subsurface elements (e. g. fault network). Within the context of the trans-European project "GeoMol" uncertainty analysis has to be very pragmatic also because of different data rights, data policies and modelling software between the project partners. In a case study a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space and as a consequence areas of geometric complexity are identified. These areas are usually very data sensitive hence geometric variability in between individual data points in these areas is higher than in areas of low structural complexity. Instead of calculating a multitude of different models by varying some input data or parameters as it is done by Monte-Carlo-simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to

  14. INTEGRATED WATER TREATMENT SYSTEM PERFORMANCE EVALUATION

    SciTech Connect

    SEXTON RA; MEEUWSEN WE

    2009-03-12

    This document describes the results of an evaluation of the current Integrated Water Treatment System (IWTS) operation against design performance and a determination of short term and long term actions recommended to sustain IWTS performance.

  15. Performance Evaluation of PBL Schemes of ARW Model in Simulating Thermo-Dynamical Structure of Pre-Monsoon Convective Episodes over Kharagpur Using STORM Data Sets

    NASA Astrophysics Data System (ADS)

    Madala, Srikanth; Satyanarayana, A. N. V.; Srinivas, C. V.; Tyagi, Bhishma

    2016-05-01

    In the present study, the advanced research WRF (ARW) model is employed to simulate convective thunderstorm episodes over the Kharagpur (22°30'N, 87°20'E) region of Gangetic West Bengal, India. High-resolution simulations are conducted using 1 × 1 degree NCEP final analysis meteorological fields as initial and boundary conditions for the events. The performance of two non-local [Yonsei University (YSU), Asymmetric Convective Model version 2 (ACM2)] and two local turbulence kinetic energy closures [Mellor-Yamada-Janjic (MYJ), Bougeault-Lacarrere (BouLac)] is evaluated in simulating planetary boundary layer (PBL) parameters and the thermodynamic structure of the atmosphere. The model-simulated parameters are validated with available in situ meteorological observations obtained from a micro-meteorological tower as well as high-resolution DigiCORA radiosonde ascents during the STORM-2007 field experiment at the study location and Doppler Weather Radar (DWR) imageries. It has been found that the PBL structure simulated with the TKE closures MYJ and BouLac is in better agreement with observations than that from the non-local closures. The model simulations with these schemes also captured the reflectivity, surface pressure patterns such as wake-low, meso-high, pre-squall low and the convective updrafts and downdrafts reasonably well. Qualitative and quantitative comparisons reveal that the MYJ followed by BouLac schemes better simulated various features of the thunderstorm events over the Kharagpur region. The better performance of MYJ followed by BouLac is evident in the smaller mean bias, mean absolute error, root mean square error and good correlation coefficient for various surface meteorological variables as well as the thermo-dynamical structure of the atmosphere relative to other PBL schemes. The better performance of the TKE closures may be attributed to their higher mixing efficiency, larger convective energy and better simulation of humidity promoting moist convection relative to non
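    The skill measures cited above (mean bias, mean absolute error, root mean square error and correlation coefficient) are standard and straightforward to reproduce; the sketch below shows one way to compute them for any simulated/observed pair. The example values are placeholders, not STORM-2007 data.

```python
# Generic model-vs-observation skill scores: bias, MAE, RMSE, correlation.
import numpy as np

def skill_scores(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    bias = np.mean(sim - obs)
    mae = np.mean(np.abs(sim - obs))
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    r = np.corrcoef(sim, obs)[0, 1]
    return {"bias": bias, "mae": mae, "rmse": rmse, "r": r}

obs = [301.2, 302.5, 304.1, 303.0]          # e.g. observed 2-m temperature (K)
sim = [300.8, 303.1, 304.9, 302.2]          # e.g. simulated 2-m temperature (K)
print(skill_scores(sim, obs))
```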

  16. Evaluation of the performance of SiBcrop model in predicting carbon fluxes and crop yields in the croplands of the US mid continental region

    NASA Astrophysics Data System (ADS)

    Lokupitiya, E.; Denning, S.; Paustian, K.; Corbin, K.; Baker, I.; Schaefer, K.

    2008-12-01

    The accurate representation of phenology, physiology, and major crop variables is important in the land-atmosphere carbon models being used to predict carbon and other exchanges of man-made cropland ecosystems. We evaluated the performance of the SiBcrop model (the Simple Biosphere model (SiB) with a new scheme for crop phenology and physiology) in predicting carbon exchanges of the US mid-continental region, which has several major crops. The use of the new phenology scheme within SiB remarkably improved the prediction of LAI and carbon fluxes for corn, soybean, and wheat crops as compared with the observed data at several Ameriflux eddy covariance flux tower sites with those crops. SiBcrop better predicted the onset and end of the growing season, harvest, interannual variability associated with crop rotation, daytime carbon drawdown, and day-to-day variability in the carbon exchanges. The model has been coupled with RAMS, the regional Atmospheric Modeling System (developed at Colorado State University), and the coupled SiBcrop-RAMS predicted carbon and other fluxes better than the original SiB-RAMS. SiBcrop also predicted daily variation in biomass in different plant pools (i.e. roots, leaves, stems, and products). In this study, we further evaluated the performance of SiBcrop by comparing the yield estimates, based on the grain/seed biomass at harvest predicted by SiBcrop for the relevant major crops, against the county-level crop yields reported by the US National Agricultural Statistics Service (NASS). Initially, the model runs were based on crop maps scaled at 40 km resolution; the maps were used to derive the fraction of corn, soybean, and wheat at each grid cell across the US Mid Continental Intensive (MCI) region under the North American Carbon Program (NACP). The yield biomass carbon values (at harvest) predicted for each grid cell by SiBcrop were extrapolated to derive the county-level yield biomass carbon values, which were then

  17. Performance evaluation of an on-site volume reduction system with synthetic urine using a water transport model.

    PubMed

    Pahore, Muhammad Masoom; Ito, Ryusei; Funamizu, Naoyuki

    2011-07-01

    The parameters of a model of the transport of water from a wet cloth sheet to the air, developed for deionized water, to establish design procedures of an on-site volume reduction system, were identified for the high salt concentrations present in synthetic urine. The results showed that the water penetration was affected neither by the salts, urea or creatinine present in the synthetic urine nor by the salts accumulated on the surface of the vertical gauze sheet. However, the saturated vapour pressure decreased, leading to a reduction in the evaporation rate, which occurred as a result of the salts accumulating on the surface of the vertical gauze sheet. Furthermore, a steady-state evaporation condition was established, illustrating salts falling back to the tank from the vertical gauze sheet. Accordingly, the existing design procedure was amended by incorporating the calculation procedure for the saturated vapour pressure using Raoult's law. Subsequently, the effective evaporation area of the vertical gauze sheet was estimated using the amended design procedures to assess feasibility. This estimation showed that arid, tropical, temperate and cold climates are suitable for the operation of this system, which requires only a small space at the household level for 80% volume reduction of 10 L of urine per day over 12 hours' operation in the daytime. PMID:21882549
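    The amendment described above, replacing the pure-water saturated vapour pressure with a Raoult's-law value for the salt solution, can be illustrated as follows. The mole fraction, temperature and ambient vapour pressure in the sketch are assumed example numbers, not values from the study.

```python
# Sketch: Raoult's law lowers the saturated vapour pressure over the salty
# gauze sheet, which lowers the evaporation driving force.
def raoult_vapour_pressure(p_sat_pure, x_water):
    """Saturated vapour pressure over an ideal solution (Pa)."""
    return x_water * p_sat_pure

p_sat_pure = 3169.0        # Pa, pure water at ~25 degC
x_water = 0.95             # assumed water mole fraction in concentrated urine
p_air = 1500.0             # Pa, assumed ambient vapour pressure

p_sat_solution = raoult_vapour_pressure(p_sat_pure, x_water)
# The evaporation rate scales with the vapour-pressure deficit at the sheet.
deficit_ratio = (p_sat_solution - p_air) / (p_sat_pure - p_air)
print(p_sat_solution, deficit_ratio)
```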

  18. S-191 sensor performance evaluation

    NASA Technical Reports Server (NTRS)

    Hughes, C. L.

    1975-01-01

    A final analysis was performed on the Skylab S-191 spectrometer data received from missions SL-2, SL-3, and SL-4. The repeatability and accuracy of the S-191 spectroradiometric internal calibration was determined by correlation to the output obtained from well-defined external targets. These included targets on the moon and earth as well as deep space. In addition, the accuracy of the S-191 short wavelength autocalibration was flight checked by correlation of the earth resources experimental package S-191 outputs and the Backup Unit S-191 outputs after viewing selected targets on the moon.

  19. Perspectives on human performance modelling

    SciTech Connect

    Pew, R.W.; Baron, S.

    1983-11-01

    A combination of psychologically-based and control-theoretic approaches to human performance modelling results in other models which have the potential for unifying related works in psychology, artificial intelligence, and system-oriented supervisory control. 33 references.

  20. Evaluating iterative reconstruction performance in computed tomography

    SciTech Connect

    Chen, Baiyu Solomon, Justin; Ramirez Giraldo, Juan Carlos; Samei, Ehsan

    2014-12-15

    Purpose: Iterative reconstruction (IR) offers notable advantages in computed tomography (CT). However, its performance characterization is complicated by its potentially nonlinear behavior, impacting performance in terms of specific tasks. This study aimed to evaluate the performance of IR with both task-specific and task-generic strategies. Methods: The performance of IR in CT was mathematically assessed with an observer model that predicted the detection accuracy in terms of the detectability index (d′). d′ was calculated based on the properties of the image noise and resolution, the observer, and the detection task. The characterizations of image noise and resolution were extended to accommodate the nonlinearity of IR. A library of tasks was mathematically modeled at a range of sizes (radius 1–4 mm), contrast levels (10–100 HU), and edge profiles (sharp and soft). Unique d′ values were calculated for each task with respect to five radiation exposure levels (volume CT dose index, CTDIvol: 3.4–64.8 mGy) and four reconstruction algorithms (filtered backprojection reconstruction, FBP; iterative reconstruction in image space, IRIS; and sinogram affirmed iterative reconstruction with strengths of 3 and 5, SAFIRE3 and SAFIRE5; all provided by Siemens Healthcare, Forchheim, Germany). The d′ values were translated into the areas under the receiver operating characteristic curve (AUC) to represent human observer performance. For each task and reconstruction algorithm, a threshold dose was derived as the minimum dose required to achieve a threshold AUC of 0.9. A task-specific dose reduction potential of IR was calculated as the difference between the threshold doses for IR and FBP. A task-generic comparison was further made between IR and FBP in terms of the percent of all tasks yielding an AUC higher than the threshold. Results: IR required less dose than FBP to achieve the threshold AUC. In general, SAFIRE5 showed the most significant dose reduction
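    Two of the steps described above, translating d′ into an AUC and locating the threshold dose at AUC = 0.9, can be sketched as below. The conversion assumes an equal-variance Gaussian observer (AUC = Φ(d′/√2)); the d′-versus-dose values are invented for illustration and are not the study's results.

```python
# Sketch: d' -> AUC conversion and interpolation of the threshold dose.
import numpy as np
from scipy.stats import norm

def auc_from_dprime(dprime):
    return norm.cdf(np.asarray(dprime) / np.sqrt(2.0))

dose = np.array([3.4, 8.1, 16.2, 32.4, 64.8])     # CTDIvol, mGy
dprime = np.array([0.9, 1.4, 2.0, 2.9, 4.1])      # hypothetical task d'
auc = auc_from_dprime(dprime)

# Threshold dose: lowest dose whose AUC reaches 0.9 (linear interpolation).
threshold_dose = np.interp(0.9, auc, dose)
print(auc.round(3), threshold_dose)
```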

  1. Evaluating Economic Performance and Policies: A Comment.

    ERIC Educational Resources Information Center

    Schur, Leon M.

    1987-01-01

    Offers a critique of Thurow's paper on the evaluation of economic performance (see SO516719). Concludes that the alternative offered by Thurow is inadequate, and states that the standards developed by the "Framework" are adequate for evaluating economic performance and policies. (JDH)

  2. Colorimetric evaluation of display performance

    NASA Astrophysics Data System (ADS)

    Kosmowski, Bogdan B.

    2001-08-01

    The development of information techniques, using new technologies, physical phenomena and coding schemes, enables new application areas to benefit from the introduction of displays. The full utilization of the visual perception of a human operator requires the color coding process to be implemented. The evolution of displays, from achromatic (B&W) and monochromatic to multicolor and full-color, enhances the possibilities of information coding, creating, however, a need for quantitative methods of display parameter assessment. Quantitative assessment of color displays restricted to photometric measurements of their parameters is an estimate leading to considerable errors. Therefore, the measurements of a display's color properties have to be based on spectral measurements of the display and its elements. The quantitative assessment of the display system parameters should be made using colorimetric systems like CIE1931, CIE1976 LAB or LUV. In the paper, the constraints on the measurement method selection for color display evaluation are discussed and the relations between their qualitative assessment and the ergonomic conditions of their application are also presented. The paper presents examples of using the LUV colorimetric system and the color difference ΔE in the optimization of color liquid crystal displays.
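    The CIE1976 colour difference mentioned above is the Euclidean distance between two colours in L*a*b* or L*u*v* coordinates. A minimal sketch, with arbitrary example triplets rather than measured display data:

```python
# CIE1976 colour difference between two colour coordinates.
import numpy as np

def delta_e_1976(c1, c2):
    """Euclidean colour difference in CIE1976 L*a*b* or L*u*v* space."""
    return float(np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float)))

target = (53.0, 175.0, 38.0)      # e.g. nominal display primary in L*u*v*
measured = (51.5, 171.0, 41.0)    # e.g. measured patch on the display
print(delta_e_1976(target, measured))   # ~5.2; a few units is typically perceptible
```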

  3. Using Business Performance To Evaluate Multimedia Training in Manufacturing.

    ERIC Educational Resources Information Center

    Lachenmaier, Lynn S.; Moor, William C.

    1997-01-01

    Discusses training evaluation and shows how an abbreviated form of Kirkpatrick's four-level evaluation model can be used effectively to evaluate multimedia-based manufacturing training. Topics include trends in manufacturing training, quantifying performance improvement, and statistical comparisons using the Mann-Whitney test and the Tukey Quick…
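    The Mann-Whitney comparison referenced above tests whether two independent groups' scores come from the same distribution without assuming normality. A minimal sketch with fabricated score lists (not the study's data):

```python
# Nonparametric comparison of two training groups' performance scores.
from scipy.stats import mannwhitneyu

traditional = [62, 70, 68, 74, 66, 71]    # hypothetical post-training scores
multimedia = [75, 80, 72, 85, 78, 83]

stat, p_value = mannwhitneyu(traditional, multimedia, alternative="two-sided")
print(stat, p_value)   # a small p suggests the score distributions differ
```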

  4. Using the Many-Facet Rasch Model to Evaluate Standard-Setting Judgments: Setting Performance Standards for Advanced Placement® Examinations

    ERIC Educational Resources Information Center

    Kaliski, Pamela; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna; Plake, Barbara; Reshetar, Rosemary

    2012-01-01

    The Many-Facet Rasch (MFR) Model is traditionally used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR Model by examining the quality of ratings obtained from a…

  5. Theory and Practice on Teacher Performance Evaluation

    ERIC Educational Resources Information Center

    Yonghong, Cai; Chongde, Lin

    2006-01-01

    Teacher performance evaluation plays a key role in educational personnel reform, so it has been an important yet difficult issue in educational reform. Previous evaluations of teachers failed to make a strict distinction among the three dominant types of evaluation, namely capability, achievement, and effectiveness. Moreover, teacher performance…

  6. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  7. Social Program Evaluation: Six Models.

    ERIC Educational Resources Information Center

    New Directions for Program Evaluation, 1980

    1980-01-01

    Representative models of program evaluation are described by their approach to values, and categorized by empirical style: positivism versus humanism. The models are: social process audit; experimental/quasi-experimental research design; goal-free evaluation; systems evaluation; cost-benefit analysis; and accountability program evaluation. (CP)

  8. Evaluation Theory, Models, and Applications

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.; Shinkfield, Anthony J.

    2007-01-01

    "Evaluation Theory, Models, and Applications" is designed for evaluators and students who need to develop a commanding knowledge of the evaluation field: its history, theory and standards, models and approaches, procedures, and inclusion of personnel as well as program evaluation. This important book shows how to choose from a growing array of…

  9. Conductor gestures influence evaluations of ensemble performance

    PubMed Central

    Morrison, Steven J.; Price, Harry E.; Smedley, Eric M.; Meals, Cory D.

    2014-01-01

    Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor’s gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance: articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and non-majors (N = 285) viewed sixteen 30 s performances and evaluated the quality of the ensemble’s articulation, dynamics, technique, and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble’s performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity. PMID:25104944

  10. Conductor gestures influence evaluations of ensemble performance.

    PubMed

    Morrison, Steven J; Price, Harry E; Smedley, Eric M; Meals, Cory D

    2014-01-01

    Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor's gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance: articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and non-majors (N = 285) viewed sixteen 30 s performances and evaluated the quality of the ensemble's articulation, dynamics, technique, and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble's performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity. PMID:25104944

  11. LANDSAT-4 horizon scanner performance evaluation

    NASA Technical Reports Server (NTRS)

    Bilanow, S.; Chen, L. C.; Davis, W. M.; Stanley, J. P.

    1984-01-01

    Representative data spans covering a little more than a year since the LANDSAT-4 launch were analyzed to evaluate the flight performance of the satellite's horizon scanner. High frequency noise was filtered out by 128-point averaging. The effects of Earth oblateness and spacecraft altitude variations are modeled, and residual systematic errors are analyzed. A model for the predicted radiance effects is compared with the flight data and deficiencies in the radiance effects modeling are noted. Correction coefficients are provided for a finite Fourier series representation of the systematic errors in the data. Analysis of the seasonal dependence of the coefficients indicates the effects of some early mission problems with the reference attitudes which were computed by the onboard computer using star trackers and gyro data. The effects of sun and moon interference, unexplained anomalies in the data, and sensor noise characteristics and their power spectrum are described. The variability of full orbit data averages is shown. Plots of the sensor data for all the available data spans are included.
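    The finite Fourier series representation of the systematic errors mentioned above amounts to a linear least-squares fit of a few harmonics in orbit phase. The sketch below uses synthetic residuals and an assumed number of harmonics; it is not the flight-data processing code.

```python
# Fit a finite Fourier series (in orbit phase) to systematic error residuals.
import numpy as np

def fourier_design_matrix(phase, n_harmonics):
    """Columns: 1, cos(k*phase), sin(k*phase) for k = 1..n_harmonics."""
    cols = [np.ones_like(phase)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(k * phase))
        cols.append(np.sin(k * phase))
    return np.column_stack(cols)

phase = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)    # orbit angle
residual = 0.05 * np.cos(phase) + 0.02 * np.sin(2 * phase)    # synthetic errors
residual += 0.005 * np.random.randn(phase.size)               # sensor noise

A = fourier_design_matrix(phase, n_harmonics=3)
coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)
corrected = residual - A @ coeffs            # systematic part removed
print(coeffs.round(3))
```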

  12. Performance Evaluation of Undulator Radiation at CEBAF

    SciTech Connect

    Chuyu Liu, Geoffrey Krafft, Guimei Wang

    2010-05-01

    The performance of undulator radiation (UR) at CEBAF with a 3.5 m helical undulator is evaluated and compared with APS undulator-A radiation in terms of brilliance, peak brilliance, spectral flux, flux density and intensity distribution.

  13. Actinide Sorption in a Brine/Dolomite Rock System: Evaluating the Degree of Conservatism in Kd Ranges used in Performance Assessment Modeling for the WIPP Nuclear Waste Repository

    NASA Astrophysics Data System (ADS)

    Dittrich, T. M.; Reed, D. T.

    2015-12-01

    The Waste Isolation Pilot Plant (WIPP) near Carlsbad, NM is the only operating nuclear waste repository in the US and has been accepting transuranic (TRU) waste since 1999. The WIPP is located in a salt deposit approximately 650 m below the surface and performance assessment (PA) modeling for a 10,000 year period is required to recertify the operating license with the US EPA every five years. The main pathway of concern for environmental release of radioactivity is a human intrusion caused by drilling into a pressurized brine reservoir below the repository. This could result in the flooding of the repository and subsequent transport in the high transmissivity layer (dolomite-rich Culebra formation) above the waste disposal rooms. We evaluate the degree of conservatism in the estimated sorption partition coefficients (Kds) ranges used in the PA based on an approach developed with granite rock and actinides (Dittrich and Reimus, 2015; Dittrich et al., 2015). Sorption onto the waste storage material (Fe drums) may also play a role in mobile actinide concentrations. We will present (1) a conceptual overview of how Kds are used in the PA model, (2) technical background of the evolution of the ranges and (3) results from batch and column experiments and model predictions for Kds with WIPP dolomite and clays, brine with various actinides, and ligands (e.g., acetate, citrate, EDTA) that could promote transport. The current Kd ranges used in performance models are based on oxidation state and are 5-400, 0.5-10,000, 0.03-200, and 0.03-20 mL g-1 for elements with oxidation states of III, IV, V, and VI, respectively. Based on redox conditions predicted in the brines, possible actinide species include Pu(III), Pu(IV), U(IV), U(VI), Np(IV), Np(V), Am(III), and Th(IV). We will also discuss the challenges of upscaling from lab experiments to field scale predictions, the role of colloids, and the effect of engineered barrier materials (e.g., MgO) on transport conditions. Dittrich
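    One common way a Kd range enters transport calculations is through the retardation factor R = 1 + (ρb/ne)·Kd, which indicates how much more slowly a sorbing actinide moves than the brine itself. The sketch below only illustrates that sensitivity across the quoted Kd span; the bulk density and porosity are assumed illustrative values, not WIPP PA parameters.

```python
# Retardation factor from a sorption partition coefficient Kd.
def retardation(kd_ml_per_g, bulk_density=2.3, porosity=0.15):
    """Kd in mL/g (= cm3/g), bulk density in g/cm3, porosity dimensionless."""
    return 1.0 + (bulk_density / porosity) * kd_ml_per_g

for kd in (0.03, 0.5, 5.0, 400.0):          # values spanning the quoted ranges
    print(kd, round(retardation(kd), 1))    # larger Kd -> much slower transport
```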

  14. TPF-C Performance Modeling

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart

    2008-01-01

    This slide presentation reviews the performance modeling of the Terrestrial Planet Finder Coronagraph (TPF-C). Included is a chart of the Error Budget Models, definitions of the static and dynamic terms, a chart showing the aberration sensitivity at 2 lambda/D, charts showing the thermal performance models and analysis, surface requirements, high-level requirements, and calculations for the beam walk model. Also included is a description of the control systems, and a flow for the iterative design and analysis cycle.

  15. Evaluation of Infiltration Basin Performance in Florida

    NASA Astrophysics Data System (ADS)

    Bean, E.

    2012-12-01

    design volume within 72 h. Only one basin was expected not to function as designed and was confirmed by monitoring data. The remaining five basins expected to not function as designed, however, met their design criteria based on monitoring data. When groundwater mounding occurred above the surface of the ten functioning basins, drawdown rates were at least an order of magnitude below the design infiltration rates and affected recovery times for subsequent events, although this only resulted from individual events or successive events exceeding the design storm for functioning basins. Results of this study indicate that basins with coarser soils and FDOT basins were more likely to exceed their design performance based on DRI evaluations. However, using the DRI underestimated basin performance leading to an incorrect result for five basins and was overly conservative. Extended ponding resulting from extreme or successive events suggests that basin designs may benefit from using continuous simulation modeling, rather than single event simulations, for sizing.

  16. Improvement of Automotive Part Supplier Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Kongmunee, Chalermkwan; Chutima, Parames

    2016-05-01

    This research investigates the problem of part supplier performance evaluation in a major Japanese automotive plant in Thailand. Its current evaluation scheme is based on the experience and self-opinion of the evaluators. As a result, many poorly performing suppliers are still considered good suppliers and are allowed to supply parts to the plant without any further improvement obligation. To alleviate this problem, brainstorming sessions among stakeholders and evaluators were formally conducted, resulting in an appropriate set of evaluation criteria and sub-criteria. The analytical hierarchy process was also used to find suitable weights for each criterion and sub-criterion. The results show that the newly developed evaluation method is significantly better than the previous one in segregating good from poor suppliers.
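    The analytical hierarchy process step mentioned above derives criterion weights from a pairwise comparison matrix, typically via its principal eigenvector. The sketch below uses an invented 3x3 comparison (quality, delivery, cost), not the plant's actual judgments.

```python
# AHP criterion weights from the principal eigenvector of a pairwise matrix.
import numpy as np

pairwise = np.array([
    [1.0,   3.0,   5.0],   # quality vs (quality, delivery, cost)
    [1/3.,  1.0,   3.0],   # delivery
    [1/5.,  1/3.,  1.0],   # cost
])

eigvals, eigvecs = np.linalg.eig(pairwise)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

n = pairwise.shape[0]
consistency_index = (eigvals.real[principal] - n) / (n - 1)   # compare to random index
print(weights.round(3), round(consistency_index, 3))
```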

  17. Evaluation of high-performance computing software

    SciTech Connect

    Browne, S.; Dongarra, J.; Rowan, T.

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high-performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations that are difficult to find elsewhere.

  18. Performance-Based Evaluation and School Librarians

    ERIC Educational Resources Information Center

    Church, Audrey P.

    2015-01-01

    Evaluation of instructional personnel is standard procedure in our Pre-K-12 public schools, and its purpose is to document educator effectiveness. With Race to the Top and No Child Left Behind waivers, states are required to implement performance-based evaluations that demonstrate student academic progress. This three-year study describes the…

  19. Building Leadership Talent through Performance Evaluation

    ERIC Educational Resources Information Center

    Clifford, Matthew

    2015-01-01

    Most states and districts scramble to provide professional development to support principals, but "principal evaluation" is often lost amid competing priorities. Evaluation is an important method for supporting principal growth, communicating performance expectations to principals, and improving leadership practice. It provides leaders…

  20. Reference Service Standards, Performance Criteria, and Evaluation.

    ERIC Educational Resources Information Center

    Schwartz, Diane G.; Eakin, Dottie

    1986-01-01

    Describes process by which reference service standards were developed at a university medical library and their impact on the evaluation of work of librarians. Highlights include establishment of preliminary criteria, literature review, reference service standards, performance evaluation, peer review, and staff development. Checklist of reference…

  1. Assessment beyond Performance: Phenomenography in Educational Evaluation

    ERIC Educational Resources Information Center

    Micari, Marina; Light, Gregory; Calkins, Susanna; Streitwieser, Bernhard

    2007-01-01

    Increasing calls for accountability in education have promoted improvements in quantitative evaluation approaches that measure student performance; however, this has often been to the detriment of qualitative approaches, reducing the richness of educational evaluation as an enterprise. In this article the authors assert that it is not merely…

  2. Evaluating Economic Performance and Policies: A Comment.

    ERIC Educational Resources Information Center

    Walstad, William B.

    1987-01-01

    Critiques Thurow's paper on the evaluation of economic performance (see SO516719). Concludes that the Joint Council's "Framework" offers a solid foundation for teaching about economic performance if the Joint Council can persuade high school economics teachers to use it. (JDH)

  3. EVALUATION OF THE COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODEL VERSION 4.5: UNCERTAINTIES AND SENSITIVITIES IMPACTING MODEL PERFORMANCE: PART I - OZONE

    EPA Science Inventory

    This study examines ozone (O3) predictions from the Community Multiscale Air Quality (CMAQ) model version 4.5 and discusses potential factors influencing the model results. Daily maximum 8-hr average O3 levels are largely underpredicted when observed O...

  4. AMTEC RC-10 Performance Evaluation Test Program

    NASA Astrophysics Data System (ADS)

    Schuller, Michael; Reiners, Elinor; Lemire, Robert; Sievers, Robert

    1994-07-01

    The Phillips Laboratory Power and Thermal Management Division (PL/VTP), in conjunction with ORION International Technologies, initiated the Alkali Metal Thermal to Electric Conversion (AMTEC), Remote Condensed-10% efficient (RC-10) Performance Evaluation Test Program to investigate cell design variations intended to increase efficiency in AMTEC cells. The RC-10 cell, fabricated by Advanced Modular Power Systems, uses a remote condensing region to reduce radiative heat losses from the electrode. The cell has operated at 10% efficiency. PL/VTP tested the RC-10 to evaluate its performance and efficiency. The impact of temperature variations along the length of the cell wall on performance were evaluated. Testing was performed in air, with a ``guard heater'' surrounding the cell to simulate the system environment of the cell.

  5. Nickel cadmium battery performance modelling

    NASA Technical Reports Server (NTRS)

    Clark, K.; Halpert, G.; Timmerman, P.

    1989-01-01

    The development of a model to predict cell/battery behavior given databases of temperature is described. The model accommodates batteries of various structural as well as thermal designs. Cell internal design modifications can be accommodated as long as the databases reflect the cell's performance characteristics. Operational parameters can be varied to simulate any number of charge or discharge methods under any orbital regime. The flexibility of the model stems from the broad scope of input variables and allows the prediction of battery performance under simulated mission or test conditions.

  6. Simulated and reconstructed climate in Europe during the last five centuries: joint evaluation of climate models performance and the dynamical consistency of gridded reconstructions

    NASA Astrophysics Data System (ADS)

    José Gómez-Navarro, Juan; Bothe, Oliver; Wagner, Sebastian; Zorita, Eduardo; Werner, Johannes P.; Luterbacher, Jürg; Raible, Christoph C.; Montávez, Juan Pedro

    2015-04-01

    This study jointly analyses European winter and summer temperature and precipitation gridded climate reconstructions and a regional climate simulation reaching a resolution of 45 km over the period 1501-1990. In a first step, the simulation is compared to observational records to establish the model performance and to identify the most prominent caveats. It is found that the regional simulation is able to add value to the driving global simulation, allowing it to reproduce accurately the most prominent characteristics of the European climate, although remarkable biases can also be identified. In a second step, the simulation is compared to a set of independent reconstructions. The high resolution of the simulation and the reconstructions allows the European area to be analysed in nine sub-areas. An overall good agreement is found between the reconstructed and simulated climate variability across the different areas, supporting the consistency of both products and the proper calibration of the reconstructions. However, biases appear between the two datasets that, thanks to the evaluation of model performance carried out beforehand, can be attributed to deficiencies in the simulation. Although the simulation responds to external forcing, it largely differs from the reconstructions in its estimates of the past climate evolution for European sub-regions. In particular, there are deviations between simulated and reconstructed anomalies during the Maunder and Dalton minima, i.e. the simulated response is much stronger than the reconstructed one. This disagreement is to some extent expected given the prominent role of internal variability in the regional evolution of temperature and precipitation. However, the inability of the model to reproduce any warm period similar to that recorded around 1740 in the reconstructions indicates fundamental limitations in the simulation that preclude reproducing exceptionally anomalous conditions. Despite these limitations, the simulated climate is a

  7. Air Conditioner Compressor Performance Model

    SciTech Connect

    Lu, Ning; Xie, YuLong; Huang, Zhenyu

    2008-09-05

    During the past three years, the Western Electricity Coordinating Council (WECC) Load Modeling Task Force (LMTF) has led the effort to develop the new modeling approach. As part of this effort, the Bonneville Power Administration (BPA), Southern California Edison (SCE), and Electric Power Research Institute (EPRI) Solutions tested 27 residential air-conditioning units to assess their response to delayed voltage recovery transients. After these tests were completed, different modeling approaches were proposed; among them, a performance modeling approach proved to be one of the three favored for its simplicity and ability to recreate different SVR events satisfactorily. Funded by the California Energy Commission (CEC) under its load modeling project, researchers at Pacific Northwest National Laboratory (PNNL) led the follow-on task to analyze the motor testing data to derive the parameters needed to develop a performance model for the single-phase air-conditioning (SPAC) unit. To derive the performance model, PNNL researchers first used the motor voltage and frequency ramping test data to obtain the real (P) and reactive (Q) power versus voltage (V) and frequency (f) curves. Then, curve fitting was used to develop the P-V, Q-V, P-f, and Q-f relationships for the motor running and stalling states. The resulting performance model ignores the dynamic response of the air-conditioning motor. Because the inertia of the air-conditioning motor is very small (H<0.05), the motor moves from one steady state to another in a few cycles. The performance model is therefore a fair representation of the motor's behavior in both running and stalling states.
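    The curve-fitting step described above can be sketched as fitting low-order polynomials of voltage (and, analogously, frequency) to the measured real and reactive power for the running state. The data points and polynomial order below are assumptions for illustration, not the PNNL test data.

```python
# Fit quadratic P(V) and Q(V) relationships to ramping-test data (running state).
import numpy as np

voltage = np.array([0.85, 0.90, 0.95, 1.00, 1.05])   # per-unit test voltages
p_meas = np.array([0.97, 0.98, 0.99, 1.00, 1.02])    # per-unit real power
q_meas = np.array([0.46, 0.42, 0.40, 0.40, 0.43])    # per-unit reactive power

p_coeffs = np.polyfit(voltage, p_meas, deg=2)         # P-V polynomial
q_coeffs = np.polyfit(voltage, q_meas, deg=2)         # Q-V polynomial

def pq_running(v):
    """Evaluate the fitted running-state P and Q at voltage v (per unit)."""
    return np.polyval(p_coeffs, v), np.polyval(q_coeffs, v)

print(pq_running(0.92))
```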

  8. Effects of Performers' External Characteristics on Performance Evaluations.

    ERIC Educational Resources Information Center

    Bermingham, Gudrun A.

    2000-01-01

    States that fairness has been a major concern in the field of music adjudication. Reviews the research literature to reveal information about three external characteristics (race, gender, and physical attractiveness) that may affect judges' performance evaluations and influence fairness of music adjudication. Includes references. (CMK)

  9. Modeling road-cycling performance.

    PubMed

    Olds, T S; Norton, K I; Lowe, E L; Olive, S; Reay, F; Ly, S

    1995-04-01

    This paper presents a complete set of equations for a "first principles" mathematical model of road-cycling performance, including corrections for the effect of winds, tire pressure and wheel radius, altitude, relative humidity, rotational kinetic energy, drafting, and changed drag. The relevant physiological, biophysical, and environmental variables were measured in 41 experienced cyclists completing a 26-km road time trial. The correlation between actual and predicted times was 0.89 (P < or = 0.0001), with a mean difference of 0.74 min (1.73% of mean performance time) and a mean absolute difference of 1.65 min (3.87%). Multiple simulations were performed where model inputs were randomly varied using a normal distribution about the measured values with a SD equivalent to the estimated day-to-day variability or technical error of measurement in each of the inputs. This analysis yielded 95% confidence limits for the predicted times. The model suggests that the main physiological factors contributing to road-cycling performance are maximal O2 consumption, fractional utilization of maximal O2 consumption, mechanical efficiency, and projected frontal area. The model is then applied to some practical problems in road cycling: the effect of drafting, the advantage of using smaller front wheels, the effects of added mass, the importance of rotational kinetic energy, the effect of changes in drag due to changes in bicycle configuration, the normalization of performances under different conditions, and the limits of human performance. PMID:7615475
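    A stripped-down, flat-road version of such a first-principles power balance, rolling resistance plus aerodynamic drag (with headwind) acting at ground speed, is sketched below. The parameter values are typical illustrative numbers, not the paper's measured inputs, and the full model adds the further corrections listed in the abstract.

```python
# Simplified road-cycling power balance: rolling resistance + grade + drag.
import math

def cycling_power(v, headwind=0.0, mass=80.0, crr=0.004,
                  cda=0.36, rho=1.20, grade=0.0, g=9.81):
    """Mechanical power (W) needed to hold ground speed v (m/s)."""
    f_roll = crr * mass * g * math.cos(math.atan(grade))
    f_grade = mass * g * math.sin(math.atan(grade))
    f_aero = 0.5 * rho * cda * (v + headwind) ** 2   # relative air speed
    return (f_roll + f_grade + f_aero) * v

v = 26e3 / (40 * 60)                    # 26 km in 40 min -> ~10.8 m/s
print(round(cycling_power(v), 1))       # a few hundred watts, as expected
```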

  10. Impact of Full-Day Head Start Prekindergarten Class Model on Student Academic Performance, Cognitive Skills, and Learning Behaviors by the End of Grade 2. Evaluation Brief

    ERIC Educational Resources Information Center

    Zhao, Huafang; Modarresi, Shahpar

    2013-01-01

    This brief describes the impact of the Montgomery County (Maryland) Public Schools (MCPS) 2007-2008 full-day Head Start prekindergarten (pre-K) class model on student academic performance, cognitive skills, and learning behaviors by the end of Grade 2. This is the fourth impact study of the MCPS full-day Head Start pre-K class model. The following…

  11. Evaluation of the performance of a meso-scale NWP model to forecast solar irradiance on Reunion Island for photovoltaic power applications

    NASA Astrophysics Data System (ADS)

    Kalecinski, Natacha; Haeffelin, Martial; Badosa, Jordi; Periard, Christophe

    2013-04-01

    Solar photovoltaic power is a predominant source of electrical power on Reunion Island, regularly providing near 30% of electrical power demand for a few hours per day. However solar power on Reunion Island is strongly modulated by clouds in small temporal and spatial scales. Today regional regulations require that new solar photovoltaic plants be combined with storage systems to reduce electrical power fluctuations on the grid. Hence cloud and solar irradiance forecasting becomes an important tool to help optimize the operation of new solar photovoltaic plants on Reunion Island. Reunion Island, located in the South West of the Indian Ocean, is exposed to persistent trade winds, most of all in winter. In summer, the southward motion of the ITCZ brings atmospheric instabilities on the island and weakens trade winds. This context together with the complex topography of Reunion Island, which is about 60 km wide, with two high summits (3070 and 2512 m) connected by a 1500 m plateau, makes cloudiness very heterogeneous. High cloudiness variability is found between mountain and coastal areas and between the windward, leeward and lateral regions defined with respect to the synoptic wind direction. A detailed study of local dynamics variability is necessary to better understand cloud life cycles around the island. In the presented work, our approach to explore the short-term solar irradiance forecast at local scales is to use the deterministic output from a meso-scale numerical weather prediction (NWP) model, AROME, developed by Meteo France. To start we evaluate the performance of the deterministic forecast from AROME by using meteorological measurements from 21 meteorological ground stations widely spread around the island (and with altitudes from 8 to 2245 m). Ground measurements include solar irradiation, wind speed and direction, relative humidity, air temperature, precipitation and pressure. Secondly we study in the model the local dynamics and thermodynamics that

  12. Models for Automated Tube Performance Calculations

    SciTech Connect

    C. Brunkhorst

    2002-12-12

    High power radio-frequency systems, as typically used in fusion research devices, utilize vacuum tubes. Evaluation of vacuum tube performance involves data taken from tube operating curves. The acquisition of data from such graphical sources is a tedious process. A simple modeling method is presented that will provide values of tube currents for a given set of element voltages. These models may be used as subroutines in iterative solutions of amplifier operating conditions for a specific loading impedance.

  13. Smith Newton Vehicle Performance Evaluation (Brochure)

    SciTech Connect

    Not Available

    2012-08-01

    The Fleet Test and Evaluation Team at the U.S. Department of Energy's National Renewable Energy Laboratory is evaluating and documenting the performance of electric and plug-in hybrid electric drive systems in medium-duty trucks across the nation. Through this project, Smith Electric Vehicles will build and deploy 500 all-electric medium-duty trucks. The trucks will be deployed in diverse climates across the country.

  14. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    2009-09-01

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents

  15. Performance and Evaluation of LISP Systems

    SciTech Connect

    Gabriel, R.P.

    1985-01-01

    The final report of the Stanford Lisp Performance Study, Performance and Evaluation of Lisp Systems, is the first book to present descriptions of Lisp implementation techniques actually in use. It provides performance information, using the tools of benchmarking to measure the various Lisp systems, and provides an understanding of the technical tradeoffs made during the implementation of a Lisp system. The study is divided into three parts. The first provides the theoretical background, outlining the factors that go into evaluating the performance of a Lisp system. The second part presents the Lisp implementations: MacLisp, MIT CADR, LMI Lambda, S-1 Lisp, Franz Lisp, NIL, Spice Lisp, Vax Common Lisp, Portable Standard Lisp, and Xerox D-Machine. A final part describes the benchmark suite that was used during the major portion of the study and the results themselves.

  16. DRACS thermal performance evaluation for FHR

    SciTech Connect

    Lv, Q.; Lin, H. C.; Kim, I. H.; Sun, X.; Christensen, R. N.; Blue, T. E.; Yoder, G. L.; Wilson, D. F.; Sabharwall, P.

    2015-03-01

    Direct Reactor Auxiliary Cooling System (DRACS) is a passive decay heat removal system proposed for the Fluoride-salt-cooled High-temperature Reactor (FHR) that combines coated particle fuel and a graphite moderator with a liquid fluoride salt as the coolant. The DRACS features three coupled natural circulation/convection loops, relying completely on buoyancy as the driving force. These loops are coupled through two heat exchangers, namely, the DRACS Heat Exchanger and the Natural Draft Heat Exchanger. In addition, a fluidic diode is employed to minimize the parasitic flow into the DRACS primary loop and correspondingly the heat loss to the DRACS during normal operation of the reactor, and to keep the DRACS ready for activation, if needed, during accidents. To help with the design and thermal performance evaluation of the DRACS, a computer code using MATLAB has been developed. This code is based on a one-dimensional formulation and its principle is to solve the energy balance and integral momentum equations. By discretizing the DRACS system in the axial direction, a bulk mean temperature is assumed for each mesh cell. The temperatures of all the cells, as well as the mass flow rates in the DRACS loops, are predicted by solving the governing equations that are obtained by integrating the energy conservation equation over each cell and integrating the momentum conservation equation over each of the DRACS loops. In addition, an intermediate heat transfer loop equipped with a pump has also been modeled in the code. This enables the study of flow reversal phenomenon in the DRACS primary loop, associated with the pump trip process. Experimental data from a High-Temperature DRACS Test Facility (HTDF) are not available yet to benchmark the code. A preliminary code validation is performed by using natural circulation experimental data available in the literature, which are as closely relevant as possible. The code is subsequently applied to the HTDF that is under
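
    A minimal steady-state sketch (not the authors' MATLAB code) of the balance that drives such a natural-circulation loop: the buoyancy head produced by the loop temperature rise is equated with the friction and form losses, while the temperature rise itself follows from an energy balance. The geometry, coolant properties, and loss coefficient below are assumed placeholders; the actual DRACS model discretizes three coupled loops and solves the full energy and integral momentum equations.

```python
# Minimal sketch (not the authors' code): steady-state balance for a single
# natural-circulation loop with assumed coolant properties and loop geometry.
from scipy.optimize import brentq

# Assumed, illustrative parameters
Q = 200e3          # decay heat removed by the loop, W
cp = 2386.0        # coolant specific heat, J/(kg K)
rho = 1940.0       # coolant density, kg/m^3
beta = 2.0e-4      # thermal expansion coefficient, 1/K
g = 9.81           # gravitational acceleration, m/s^2
H = 3.0            # elevation difference between thermal centres, m
A = 0.01           # flow area, m^2
fLD_plus_K = 40.0  # lumped friction + form-loss coefficient (f*L/D + sum of K)

def residual(m_dot):
    """Buoyancy driving head minus friction pressure drop (Pa)."""
    dT = Q / (m_dot * cp)                         # energy balance over the heated leg
    dp_buoyancy = rho * beta * dT * g * H         # Boussinesq buoyancy head
    dp_friction = fLD_plus_K * m_dot**2 / (2.0 * rho * A**2)
    return dp_buoyancy - dp_friction

m_dot = brentq(residual, 1e-3, 100.0)             # natural-circulation mass flow, kg/s
print(f"mass flow = {m_dot:.2f} kg/s, loop dT = {Q / (m_dot * cp):.1f} K")
```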

  17. Metrics for Evaluation of Student Models

    ERIC Educational Resources Information Center

    Pelanek, Radek

    2015-01-01

    Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…

  18. Smith Newton Vehicle Performance Evaluation - Cumulative (Brochure)

    SciTech Connect

    Not Available

    2014-08-01

    The Fleet Test and Evaluation Team at the U.S. Department of Energy's National Renewable Energy Laboratory is evaluating and documenting the performance of electric and plug-in hybrid electric drive systems in medium-duty trucks across the nation. U.S. companies participating in this evaluation project received funding from the American Recovery and Reinvestment Act to cover part of the cost of purchasing these vehicles. Through this project, Smith Electric Vehicles is building and deploying 500 all-electric medium-duty trucks that will be deployed by a variety of companies in diverse climates across the country.

  19. Hypersonic Interceptor Performance Evaluation Center aero-optics performance predictions

    NASA Astrophysics Data System (ADS)

    Sutton, George W.; Pond, John E.; Snow, Ronald; Hwang, Yanfang

    1993-06-01

    This paper describes the Hypersonic Interceptor Performance Evaluation Center's (HIPEC) aero-optics performance prediction capability. It includes code results for three-dimensional shapes and comparisons to initial experiments. HIPEC consists of a collection of aerothermal and aerodynamic computational codes that cover the entire flight regime from subsonic to hypersonic flow and include chemical reactions and turbulence. Heat transfer to the various surfaces is calculated as an input to cooling and ablation processes. HIPEC also has aero-optics codes to determine the effect of the mean flowfield and turbulence on the tracking and imaging capability of on-board optical sensors. The paper concentrates on the latter aspects.

  20. Accuracy of TCP performance models

    NASA Astrophysics Data System (ADS)

    Schwefel, Hans Peter; Jobmann, Manfred; Hoellisch, Daniel; Heyman, Daniel P.

    2001-07-01

    Despite the fact that most of today's Internet traffic is transmitted via the TCP protocol, the performance behavior of networks with TCP traffic is still not well understood. Recent research activities have led to a number of performance models for TCP traffic, but the degree of accuracy of these models in realistic scenarios is still questionable. This paper provides a comparison of the results (in terms of average throughput per connection) of three different `analytic' TCP models: I. the throughput formula in [Padhye et al. 98], II. the modified Engset model of [Heyman et al. 97], and III. the analytic TCP queueing model of [Schwefel 01], which is a packet-based extension of (II). Results for all three models are computed for a scenario of N identical TCP sources that transmit data in individual TCP connections of stochastically varying size. The results for the average throughput per connection in the analytic models are compared with simulations of detailed TCP behavior. All of the analytic models are expected to show deficiencies in certain scenarios, since they neglect highly influential parameters of the detailed simulation model: the approach of Models (I) and (II) only indirectly considers queueing in bottleneck routers, and in certain scenarios those models are not able to adequately describe the impact of buffer space, either qualitatively or quantitatively. Furthermore, (II) is insensitive to the actual distribution of the connection sizes. As a consequence, its predictions are also insensitive to so-called long-range dependent (LRD) properties in the traffic that are caused by heavy-tailed connection size distributions. The simulation results show that such properties cannot be neglected for certain network topologies: LRD properties can even have a counter-intuitive impact on the average goodput, namely the goodput can be higher for small buffer sizes.
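
    For reference, the sketch below implements one commonly quoted form of the steady-state TCP throughput approximation attributed to Padhye et al. (1998), i.e. the kind of closed-form expression used as Model (I) above. The exact formula and parameterization in the cited paper may differ in detail, and the parameter values here are illustrative only.

```python
# One commonly quoted form of the Padhye et al. (1998) steady-state TCP throughput
# approximation; parameter values below are examples, not taken from the paper.
import math

def tcp_throughput(p, rtt, t0, b=2, w_max=None):
    """Approximate steady-state throughput in packets per second.

    p     -- packet loss probability
    rtt   -- round-trip time (s)
    t0    -- initial retransmission timeout (s)
    b     -- packets acknowledged per ACK (2 with delayed ACKs)
    w_max -- optional receiver-window limit in packets
    """
    denom = (rtt * math.sqrt(2.0 * b * p / 3.0)
             + t0 * min(1.0, 3.0 * math.sqrt(3.0 * b * p / 8.0)) * p * (1.0 + 32.0 * p ** 2))
    rate = 1.0 / denom
    if w_max is not None:
        rate = min(w_max / rtt, rate)  # cap at the receiver-window-limited rate
    return rate

print(tcp_throughput(p=0.01, rtt=0.1, t0=1.0))  # e.g. 1% loss, 100 ms RTT, 1 s RTO
```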

  1. Evaluating the performance of a new model for predicting the growth of Clostridium perfringens in cooked, uncured meat and poultry products under isothermal, heating, and dynamically cooling conditions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Clostridium perfringens Type A is a significant public health threat and may germinate, outgrow, and multiply during cooling of cooked meats. This study evaluates a new C. perfringens growth model in IPMP Dynamic Prediction using the same criteria and cooling data in Mohr and others (2015), but inc...

  2. Performance evaluation of an air solar collector

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Indoor tests of a single-glazed flat-plate collector are described in the report. The Marshall Space Flight Center solar simulator was used to conduct the tests, which included evaluation of thermal performance under various combinations of flow rate, incident flux, inlet temperature, and wind speed. Results are presented in graph and table form.

  3. ASBESTOS IN DRINKING WATER PERFORMANCE EVALUATION STUDIES

    EPA Science Inventory

    Performance evaluations of laboratories testing for asbestos in drinking water according to USEPA Test Method 100.1 or 100.2 are complicated by the difficulty of providing stable sample dispersions of asbestos in water. Reference samples of a graduated series of chrysotile asbes...

  4. ASBESTOS IN DRINKING WATER PERFORMANCE EVALUATION STUDIES

    EPA Science Inventory

    Performance evaluations of laboratories testing for asbestos in drinking water according to USEPA Test Method 100.1 or 100.2 are complicated by the difficulty of providing stable sample dispersions of asbestos in water. Reference samples of a graduated series of chrysotile asbest...

  5. A New Approach to Evaluating Performance.

    PubMed

    Bleich, Michael R

    2016-09-01

    A leadership task is evaluating the performance of individuals for organizational fit. Traditional approaches have included leader-subordinate reviews, self-review, and peer review. A new approach is evolving in team-based organizations, introduced in this article. J Contin Educ Nurs. 2016;47(9):393-394. PMID:27580504

  6. An Evaluation of a Performance Contract.

    ERIC Educational Resources Information Center

    Dembo, Myron H.; Wilson, Donald E.

    This paper reports an evaluation of a performance contract in reading with 2,500 seventh-grade students. Seventy-five percent of the students were to increase their reading speed five times over their beginning level with ten percent more comprehension after three months of instruction. Results indicated that only thirteen percent of the students…

  7. GENERAL METHODS FOR REMEDIAL PERFORMANCE EVALUATIONS

    EPA Science Inventory

    This document was developed by an EPA-funded project to explain the technical considerations and principles necessary to evaluate the performance of ground-water contamination remediations at hazardous waste sites. This is neither a "cookbook", nor an encyclopedia of recommended fi...

  8. EVALUATION OF CONFOCAL MICROSCOPY SYSTEM PERFORMANCE

    EPA Science Inventory

    BACKGROUND. The confocal laser scanning microscope (CLSM) has enormous potential in many biological fields. Currently there is a subjective nature in the assessment of a confocal microscope's performance by primarily evaluating the system with a specific test slide provided by ea...

  9. Evaluating Causal Models.

    ERIC Educational Resources Information Center

    Watt, James H., Jr.

    Pointing out that linear causal models can organize the interrelationships of a large number of variables, this paper contends that such models are particularly useful to mass communication research, which must by necessity deal with complex systems of variables. The paper first outlines briefly the philosophical requirements for establishing a…

  10. Evaluating modeling tools for the EDOS

    NASA Technical Reports Server (NTRS)

    Knoble, Gordon; Mccaleb, Frederick; Aslam, Tanweer; Nester, Paul

    1994-01-01

    The Earth Observing System (EOS) Data and Operations System (EDOS) Project is developing a functional, system performance model to support the system implementation phase of the EDOS, which is being designed and built by the Goddard Space Flight Center (GSFC). The EDOS Project will use modeling to meet two key objectives: (1) manage system design impacts introduced by unplanned changes in mission requirements; and (2) evaluate evolutionary technology insertions throughout the development of the EDOS. To select a suitable modeling tool, the EDOS modeling team developed an approach for evaluating modeling tools and languages by deriving evaluation criteria from both the EDOS modeling requirements and the development plan. Essential and optional features for an appropriate modeling tool were identified and compared with known capabilities of several modeling tools. Vendors were also provided the opportunity to model a representative EDOS processing function to demonstrate the applicability of their modeling tool to the EDOS modeling requirements. This paper emphasizes the importance of using a well-defined approach for evaluating tools to model complex systems like the EDOS. The results of this evaluation study do not in any way signify the superiority of any one modeling tool since the results will vary with the specific modeling requirements of each project.

  11. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Pollutants: Reinforced Plastic Composites Production Testing and Initial Compliance Requirements § 63.5850... performance test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies....

  12. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Pollutants: Reinforced Plastic Composites Production Testing and Initial Compliance Requirements § 63.5850... performance test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies....

  13. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... performance test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies. (c... and under the specific conditions that 40 CFR part 63, subpart SS, specifies. (d) You may not...

  14. Evaluation of Advocacy Models.

    ERIC Educational Resources Information Center

    Bradley, Valerie J.

    The paper describes approaches and findings of an evaluation of 10 advocacy projects providing services to developmentally disabled and mentally ill persons across the country. The projects included internal rights protection organizations, independent legal advocacy mechanisms, self-advocacy training centers, and legal advocacy providers in…

  15. Performance Analysis of GYRO: A Tool Evaluation

    SciTech Connect

    Worley, P.; Roth, P.; Candy, J.; Shan, Hongzhang; Mahinthakumar,G.; Sreepathi, S.; Carrington, L.; Kaiser, T.; Snavely, A.; Reed, D.; Zhang, Y.; Huck, K.; Malony, A.; Shende, S.; Moore, S.; Wolf, F.

    2005-06-26

    The performance of the Eulerian gyrokinetic-Maxwell solver code GYRO is analyzed on five high performance computing systems. First, a manual approach is taken, using custom scripts to analyze the output of embedded wall clock timers, floating point operation counts collected using hardware performance counters, and traces of user and communication events collected using the profiling interface to Message Passing Interface (MPI) libraries. Parts of the analysis are then repeated or extended using a number of sophisticated performance analysis tools: IPM, KOJAK, SvPablo, TAU, and the PMaC modeling tool suite. The paper briefly discusses what has been discovered via this manual analysis process, what performance analyses are inconvenient or infeasible to attempt manually, and to what extent the tools show promise in accelerating or significantly extending the manual performance analyses.

  16. Visual Performance Prediction Using Schematic Eye Models

    NASA Astrophysics Data System (ADS)

    Schwiegerling, James Theodore

    The goal of visual modeling is to predict the visual performance or a change in performance of an individual from a model of the human visual system. In designing a model of the human visual system, two distinct functions are considered. The first is the production of an image incident on the retina by the optical system of the eye, and the second is the conversion of this image into a perceived image by the retina and brain. The eye optics are evaluated using raytracing techniques familiar to the optical engineer. The effects of retinal and brain function are combined with the raytracing results by analyzing the modulation of the retinal image. Each of these processes is important for evaluating the performance of the entire visual system. Techniques for converting the abstract system performance measures used by optical engineers into clinically applicable measures such as visual acuity and contrast sensitivity are developed in this dissertation. Furthermore, a methodology for applying videokeratoscopic height data to the visual model is outlined. These tools are useful in modeling the visual effects of corrective lenses, ocular maladies and refractive surgeries. The modeling techniques are applied to examples of soft contact lenses, keratoconus, radial keratotomy, photorefractive keratectomy and automated lamellar keratoplasty. The modeling tools developed in this dissertation are meant to be general and modular. As improvements to the measurements of the properties and functionality of the various visual components are made, the new information can be incorporated into the visual system model. Furthermore, the examples discussed here represent only a small subset of the applications of the visual model. Additional ocular maladies and emerging refractive surgeries can be modeled as well.

  17. Nuclear models relevant to evaluation

    SciTech Connect

    Arthur, E.D.; Chadwick, M.B.; Hale, G.M.; Young, P.G.

    1991-01-01

    The widespread use of nuclear models continues in the creation of data evaluations. The reasons include extension of data evaluations to higher energies, creation of data libraries for isotopic components of natural materials, and production of evaluations for radiative target species. In these cases, experimental data are often sparse or nonexistent. As this trend continues, the nuclear models employed in evaluation work move towards more microscopically-based theoretical methods, prompted in part by the availability of increasingly powerful computational resources. Advances in nuclear models applicable to evaluation will be reviewed. These include advances in optical model theory, microscopic and phenomenological state and level density theory, unified models that consistently describe both equilibrium and nonequilibrium reaction mechanisms, and improved methodologies for calculation of prompt radiation from fission. 84 refs., 8 figs.

  18. Using hybrid method to evaluate the green performance in uncertainty.

    PubMed

    Tseng, Ming-Lang; Lan, Lawrence W; Wang, Ray; Chiu, Anthony; Cheng, Hui-Ping

    2011-04-01

    Green performance measurement is vital for enterprises making continuous improvements to maintain sustainable competitive advantages. Evaluation of green performance, however, is a challenging task due to the complexity of dependence among the aspects and criteria and the linguistic vagueness of some qualitative information and quantitative data. To deal with this issue, this study proposes a novel approach to evaluate the dependence aspects and criteria of a firm's green performance. The rationale of the proposed approach, namely the green network balanced scorecard, is to use the balanced scorecard to combine fuzzy set theory with the analytical network process (ANP) and importance-performance analysis (IPA) methods, wherein fuzzy set theory accounts for the linguistic vagueness of qualitative criteria and ANP converts the relations among the dependence aspects and criteria into an intelligible structural model that is then used in IPA. For the empirical case study, four dependence aspects and 34 green performance criteria for PCB firms in Taiwan were evaluated. The managerial implications are discussed. PMID:20571885

  19. Attributing spatial patterns of hydrological model performance

    NASA Astrophysics Data System (ADS)

    Eisner, S.; Malsy, M.; Flörke, M.

    2013-12-01

    Global hydrological models and land surface models are used to understand and simulate the global terrestrial water cycle. They are, in particular, applied to assess the current state of global water resources, to identify anthropogenic pressures on the global water system, and to assess impacts of global and climate change on water resources. Especially in data-scarce regions, the growing availability of remote sensing products, e.g. GRACE estimates of changes in terrestrial water storage, evaporation or soil moisture estimates, has added valuable information to force and constrain these models as they facilitate the calibration and validation of simulated states and fluxes other than stream flow at large spatial scales. Nevertheless, observed discharge records provide important evidence to evaluate the quality of water availability estimates and to quantify the uncertainty associated with these estimates. Most large scale modelling approaches are constrained by simplified physical process representations, and they implicitly rely on the assumption that the same model structure is valid and can be applied globally. It is therefore important to understand why large scale hydrological models perform well or poorly in reproducing observed runoff and discharge fields in certain regions, and to explore and explain spatial patterns of model performance. We present an extensive evaluation of the global water model WaterGAP (Water - Global Assessment and Prognosis) to simulate 20th century discharges. The WaterGAP modeling framework comprises a hydrology model and several water use models and operates, in its current version WaterGAP3, on a 5 arc minute global grid. Runoff generated on the individual grid cells is routed along a global drainage direction map taking into account retention in natural surface water bodies, i.e. lakes and wetlands, as well as anthropogenic impacts, i.e. flow regulation and water abstraction for agriculture, industry and domestic purposes as

  20. Comprehensive system models: Strategies for evaluation

    NASA Technical Reports Server (NTRS)

    Field, Christopher; Kutzbach, John E.; Ramanathan, V.; Maccracken, Michael C.

    1992-01-01

    The task of evaluating comprehensive earth system models is vast involving validations of every model component at every scale of organization, as well as tests of all the individual linkages. Even the most detailed evaluation of each of the component processes and the individual links among them should not, however, engender confidence in the performance of the whole. The integrated earth system is so rich with complex feedback loops, often involving components of the atmosphere, oceans, biosphere, and cryosphere, that it is certain to exhibit emergent properties very difficult to predict from the perspective of a narrow focus on any individual component of the system. Therefore, a substantial share of the task of evaluating comprehensive earth system models must reside at the level of whole system evaluations. Since complete, integrated atmosphere/ ocean/ biosphere/ hydrology models are not yet operational, questions of evaluation must be addressed at the level of the kinds of earth system processes that the models should be competent to simulate, rather than at the level of specific performance criteria. Here, we have tried to identify examples of earth system processes that are difficult to simulate with existing models and that involve a rich enough suite of feedbacks that they are unlikely to be satisfactorily described by highly simplified or toy models. Our purpose is not to specify a checklist of evaluation criteria but to introduce characteristics of the earth system that may present useful opportunities for model testing and, of course, improvement.

  1. Evaluating the Performance of Wavelet-based Data-driven Models for Multistep-ahead Flood Forecasting in an Urbanized Watershed

    NASA Astrophysics Data System (ADS)

    Kasaee Roodsari, B.; Chandler, D. G.

    2015-12-01

    A real-time flood forecast system is presented to provide emergency management authorities with sufficient lead time to execute plans for evacuation and asset protection in urban watersheds. This study investigates the performance of two hybrid models for real-time flood forecasting at different subcatchments of the Ley Creek watershed, a heavily urbanized watershed in the vicinity of Syracuse, New York. The hybrid models are the Wavelet-Based Artificial Neural Network (WANN) and the Wavelet-Based Adaptive Neuro-Fuzzy Inference System (WANFIS). Both models are developed on the basis of real-time stream network sensing. The wavelet approach is applied to decompose the collected water depth time series into approximation and detail components. The approximation component is then used as an input to ANN and ANFIS models to forecast water level at lead times of 1 to 10 hours. The performance of the WANN and WANFIS models is compared to that of ANN and ANFIS models for different lead times. Initial results demonstrate the greater predictive power of the hybrid models.
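
    A hedged sketch of the general wavelet-plus-ANN idea described above (not the authors' WANN implementation): the water-level series is decomposed with a discrete wavelet transform, the approximation component is reconstructed, and an MLP is trained on lagged values of that component to forecast a future level. The synthetic gauge series, the 'db4' wavelet, and the lag/lead settings are assumptions for illustration.

```python
# Illustrative wavelet + ANN forecasting sketch; data and settings are assumptions.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(2000)
stage = 2.0 + np.sin(2 * np.pi * t / 240) + 0.3 * rng.standard_normal(t.size)  # synthetic water level

# Discrete wavelet decomposition; reconstruct the approximation component only.
coeffs = pywt.wavedec(stage, "db4", level=3)
approx = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")[: stage.size]

lags, lead = 6, 3                                  # 6 past values predict 3 steps ahead
n = stage.size - lags - lead + 1
X = np.column_stack([approx[i : i + n] for i in range(lags)])
y = stage[lags + lead - 1 :]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("test R^2:", model.score(X[split:], y[split:]))
```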

  2. Performance evaluation of fingerprint verification systems.

    PubMed

    Cappelli, Raffaele; Maio, Dario; Maltoni, Davide; Wayman, James L; Jain, Anil K

    2006-01-01

    This paper is concerned with the performance evaluation of fingerprint verification systems. After an initial classification of biometric testing initiatives, we explore both the theoretical and practical issues related to performance evaluation by presenting the outcome of the recent Fingerprint Verification Competition (FVC2004). FVC2004 was organized by the authors of this work for the purpose of assessing the state-of-the-art in this challenging pattern recognition application and making available a new common benchmark for an unambiguous comparison of fingerprint-based biometric systems. FVC2004 is an independent, strongly supervised evaluation performed at the evaluators' site on evaluators' hardware. This allowed the test to be completely controlled and the computation times of different algorithms to be fairly compared. The experience and feedback received from previous, similar competitions (FVC2000 and FVC2002) allowed us to improve the organization and methodology of FVC2004 and to capture the attention of a significantly higher number of academic and commercial organizations (67 algorithms were submitted for FVC2004). A new, "Light" competition category was included to estimate the loss of matching performance caused by imposing computational constraints. This paper discusses data collection and testing protocols, and includes a detailed analysis of the results. We introduce a simple but effective method for comparing algorithms at the score level, allowing us to isolate difficult cases (images) and to study error correlations and algorithm "fusion." The huge amount of information obtained, including a structured classification of the submitted algorithms on the basis of their features, makes it possible to better understand how current fingerprint recognition systems work and to delineate useful research directions for the future.

  3. OMPS SDR Status and Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Pan, S.; Weng, F.; Wu, X.; Flynn, L. E.; Jaross, G.; Buss, R. H.; Niu, J.; Seftor, C. J.

    2012-12-01

    Launched on October 28, 2011, OMPS has successfully passed different operational phases from the Early Observation and Activation (LEO&A) to Early Orbit Checkout (EOC), and is currently in the Intensive CAL/Val (ICV) phase. OMPS data gathered during the on-orbit calibration and validation activities allow us to evaluate the instrument on-orbit performance and validate Sensor Data Records (SDRs). Detector performance shows that offset, gain, and dark current rate trends remain within 0.2% of the pre-launch values with significant margin below sensor requirements. Detector gain and offset performance trends are generally stable and observed solar irradiance is within an average of 2% of predicted values. This presentation will update the status of the OMPS SDRs with newly established calibration measurements. Examples of analysis of dark calibration, linearity performance, solar irradiance validation, sensor noise and wavelength change are provided.

  4. Stormwater quality models: performance and sensitivity analysis.

    PubMed

    Dotto, C B S; Kleidorfer, M; Deletic, A; Fletcher, T D; McCarthy, D T; Rauch, W

    2010-01-01

    The complex nature of pollutant accumulation and washoff, along with high temporal and spatial variations, pose challenges for the development and establishment of accurate and reliable models of the pollution generation process in urban environments. Therefore, the search for reliable stormwater quality models remains an important area of research. Model calibration and sensitivity analysis of such models are essential in order to evaluate model performance; it is very unlikely that non-calibrated models will lead to reasonable results. This paper reports on the testing of three models which aim to represent pollutant generation from urban catchments. Assessment of the models was undertaken using a simplified Monte Carlo Markov Chain (MCMC) method. Results are presented in terms of performance, sensitivity to the parameters and correlation between these parameters. In general, it was suggested that the tested models poorly represent reality and result in a high level of uncertainty. The conclusions provide useful information for the improvement of existing models and insights for the development of new model formulations.
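
    To make the calibration step concrete, the sketch below runs a minimal Metropolis sampler on a toy exponential-washoff model with synthetic "observations". It only illustrates the kind of simplified MCMC assessment described above; the model form, prior, likelihood, and data are all assumptions, not the authors' formulations.

```python
# Minimal Metropolis sketch for calibrating a toy exponential-washoff parameter.
# Model form, prior, likelihood, and synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 2.0, size=200)                      # synthetic rainfall intensities
true_k = 0.15
obs = 50.0 * (1 - np.exp(-true_k * rain)) + rng.normal(0, 2.0, rain.size)  # synthetic loads

def log_post(k, sigma=2.0):
    if not 0.0 < k < 1.0:                                 # uniform prior on (0, 1)
        return -np.inf
    pred = 50.0 * (1 - np.exp(-k * rain))
    return -0.5 * np.sum((obs - pred) ** 2) / sigma**2    # Gaussian log-likelihood (up to a constant)

k, lp = 0.5, log_post(0.5)
samples = []
for _ in range(20000):
    k_new = k + rng.normal(0, 0.02)                       # random-walk proposal
    lp_new = log_post(k_new)
    if np.log(rng.uniform()) < lp_new - lp:               # Metropolis acceptance step
        k, lp = k_new, lp_new
    samples.append(k)

burned = np.array(samples[5000:])                         # discard burn-in
print(f"posterior mean k = {burned.mean():.3f}  (true value {true_k})")
```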

  5. Data management system performance modeling

    NASA Technical Reports Server (NTRS)

    Kiser, Larry M.

    1993-01-01

    This paper discusses analytical techniques that have been used to gain a better understanding of Space Station Freedom's (SSF's) Data Management System (DMS). The DMS is a complex, distributed, real-time computer system that has been redesigned numerous times, and the implications of these redesigns have not been fully analyzed. This paper discusses the advantages and disadvantages of static analytical techniques such as Rate Monotonic Analysis (RMA) and also provides a rationale for dynamic modeling. Factors such as system architecture, processor utilization, bus architecture, and queuing are well suited for analysis with a dynamic model. The significance of performance measures for a real-time system is discussed.
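
    As an illustration of the static technique mentioned above, the sketch below applies the standard Liu-Layland rate monotonic schedulability bound, U <= n(2^(1/n) - 1), to a hypothetical periodic task set; the task parameters are placeholders, not a description of the DMS workload.

```python
# Standard Liu-Layland rate-monotonic schedulability test (sufficient condition).
# The task set below is illustrative only.
def rma_schedulable(tasks):
    """tasks: list of (worst_case_execution_time, period) pairs in the same time units."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)          # Liu-Layland bound: n(2^(1/n) - 1)
    return utilization, bound, utilization <= bound

u, b, ok = rma_schedulable([(10, 50), (15, 80), (5, 40)])
print(f"U = {u:.3f}, bound = {b:.3f}, schedulable by the sufficient test: {ok}")
```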

  6. Performance evaluation soil samples utilizing encapsulation technology

    DOEpatents

    Dahlgran, James R.

    1999-01-01

    Performance evaluation soil samples and method of their preparation using encapsulation technology to encapsulate analytes which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration.

  7. Performance evaluation soil samples utilizing encapsulation technology

    DOEpatents

    Dahlgran, J.R.

    1999-08-17

    Performance evaluation soil samples and method of their preparation uses encapsulation technology to encapsulate analytes which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration. 1 fig.

  8. The Discrepancy Evaluation Model. I. Basic Tenets of the Model.

    ERIC Educational Resources Information Center

    Steinmetz, Andres

    1976-01-01

    The basic principles of the discrepancy evaluation model (DEM), developed by Malcolm Provus, are presented. The three concepts which are essential to DEM are defined: (1) the standard is a description of how something should be; (2) performance measures are used to find out the actual characteristics of the object being evaluated; and (3) the…

  9. Evaluation of an Upslope Precipitation Model

    NASA Astrophysics Data System (ADS)

    Barstad, I.; Smith, R. B.

    2002-12-01

    A linear orographic precipitation model applicable to complex terrain for an arbitrary wind direction has been developed. The model includes mountain wave dynamics as well as condensed water advection and two micro-physical time delay mechanisms. Atmospheric input variables in the model are wind speed and direction, specific humidity, wet static stability, and two conversion factors for the micro-physics. In addition, the underlying terrain is needed. Various closed-form solutions for the precipitation behavior over ideal mountains have been derived and verified with numerical mesoscale models. The model is tested in real terrain against observations. Several locations are used to evaluate the model performance (southern Norway, the Alps, and the Wasatch mountains in Utah). The model results are of the same magnitude as the observations, which indicates that the fundamental physics is included in the model. The ratio of condensate that is carried over the mountain crest to the amount that is left as precipitation is crucial, and the model seems to reproduce this well. When the model results are evaluated against observations with statistical measures such as the correlation coefficient, it performs well overall. This requires that detailed input information such as wind direction and stability be provided and that the observations be taken frequently. Traditional observation sampling is normally unevenly distributed between valleys and mountain tops, which causes a bias in objective analysis. Such an analysis cannot, in this case, be held directly against model results. For the same reason, if a model performs well on mountain tops but poorly in valleys, observations will give a wrong impression of the model performance. In our tests, the model performs well in smaller regions where the input variables are representative of the whole area. Some model deficiencies are also discovered. The model performance seems to improve with slightly smoothed terrain which

  10. Performance Evaluation Methods for Assistive Robotic Technology

    NASA Astrophysics Data System (ADS)

    Tsui, Katherine M.; Feil-Seifer, David J.; Matarić, Maja J.; Yanco, Holly A.

    Robots have been developed for several assistive technology domains, including intervention for Autism Spectrum Disorders, eldercare, and post-stroke rehabilitation. Assistive robots have also been used to promote independent living through the use of devices such as intelligent wheelchairs, assistive robotic arms, and external limb prostheses. Work in the broad field of assistive robotic technology can be divided into two major research phases: technology development, in which new devices, software, and interfaces are created; and clinical, in which assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots poses a related set of challenges. In this paper, we survey several areas of assistive robotic technology in order to derive and demonstrate domain-specific means for evaluating the performance of such systems. We also present two case studies of applied performance measures and a discussion regarding the ubiquity of functional performance measures across the sampled domains. Finally, we present guidelines for incorporating human performance metrics into end-user evaluations of assistive robotic technologies.

  11. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  12. Performance modeling for large database systems

    NASA Astrophysics Data System (ADS)

    Schaar, Stephen; Hum, Frank; Romano, Joe

    1997-02-01

    One of the unique approaches Science Applications International Corporation took to meet performance requirements was to start the modeling effort during the proposal phase of the Interstate Identification Index/Federal Bureau of Investigations (III/FBI) project. The III/FBI Performance Model uses analytical modeling techniques to represent the III/FBI system. Inputs to the model include workloads for each transaction type, record size for each record type, number of records for each file, hardware envelope characteristics, engineering margins and estimates for software instructions, memory, and I/O for each transaction type. The model uses queuing theory to calculate the average transaction queue length. The model calculates a response time and the resources needed for each transaction type. Outputs of the model include the total resources needed for the system, a hardware configuration, and projected inherent and operational availability. The III/FBI Performance Model is used to evaluate what-if scenarios and allows a rapid response to engineering change proposals and technical enhancements.
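
    A hedged illustration of the queuing-theory step described above: if each transaction type is approximated as an M/M/1 queue, utilization, mean queue length, and mean response time follow directly from the arrival and service rates. The rates below are placeholders, not III/FBI workload figures, and the actual model likely used richer queueing formulations.

```python
# M/M/1 approximation per transaction type; arrival/service rates are placeholders.
def mm1_metrics(arrival_rate, service_rate):
    """Return (utilization, mean number in system, mean response time in seconds)."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization >= 1")
    L = rho / (1.0 - rho)                 # mean number in system
    W = L / arrival_rate                  # mean response time via Little's law
    return rho, L, W

for name, lam, mu in [("search", 40.0, 50.0), ("update", 5.0, 20.0)]:
    rho, L, W = mm1_metrics(lam, mu)
    print(f"{name}: utilization={rho:.2f}, queue length={L:.2f}, response time={W * 1000:.1f} ms")
```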

  13. Hierarchical Model Validation of Symbolic Performance Models of Scientific Kernels

    SciTech Connect

    Alam, Sadaf R; Vetter, Jeffrey S

    2006-08-01

    Multi-resolution validation of hierarchical performance models of scientific applications is critical primarily for two reasons. First, the step-by-step validation determines the correctness of all essential components or phases in a science simulation. Second, a model that is validated at multiple resolution levels is the very first step toward generating predictive performance models, not only for existing systems but also for emerging systems and future problem sizes. We present the design and validation of hierarchical performance models of two scientific benchmarks using a new technique called modeling assertions (MA). Our MA prototype framework generates symbolic performance models that can be evaluated efficiently by generating the equivalent model representations in Octave and MATLAB. The multi-resolution modeling and validation is conducted on two contemporary, massively parallel systems, the XT3 and the Blue Gene/L. The workload distribution and growth rate predictions generated by the MA models are confirmed by the experimental data collected on the MPP platforms. In addition, the physical memory requirements generated by the MA models are verified against the runtime values on the Blue Gene/L system, which has 512 MBytes and 256 MBytes of physical memory capacity in its two unique execution modes.

  14. A performance evaluation of biometric identification devices

    SciTech Connect

    Holmes, J.P.; Maxwell, R.L.; Wright, L.J.

    1990-06-01

    A biometric identification device is an automatic device that can verify a person's identity from a measurement of a physical feature or repeatable action of the individual. A reference measurement of the biometric is obtained when the individual is enrolled on the device. Subsequent verifications are made by comparing the submitted biometric feature against the reference sample. Sandia Laboratories has been evaluating the relative performance of several identity verifiers, using volunteer test subjects. Sandia testing methods and results are discussed.

  15. Automated Laser Seeker Performance Evaluation System (ALSPES)

    NASA Astrophysics Data System (ADS)

    Martin, Randal G.; Robinson, Elisa L.

    1988-01-01

    The Automated Laser Seeker Performance Evaluation System (ALSPES), which supports the Hellfire missile and Copperhead projectile laser seekers, is discussed. The ALSPES capabilities in manual and automatic operation are described, and the ALSPES test hardware is examined, including the computer system, the laser/attenuator, optics systems, seeker test fixture, and the measurement and test equipment. The calibration of laser energy and test signals in ALSPES is considered.

  16. Metrics for Offline Evaluation of Prognostic Performance

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2010-01-01

    Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements for different applications, time scales, available information, domain dynamics, etc. to name a few. The research community has used a variety of metrics largely based on convenience and their respective requirements. Very little attention has been focused on establishing a standardized approach to compare different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. These metrics have the capability of incorporating probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications. Guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed followed by a formal notational framework to help standardize subsequent developments.
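
    As one concrete example of a prognostics-specific metric of the kind discussed above, the sketch below implements a simple alpha-lambda style check: at each prediction time it tests whether the predicted remaining useful life (RUL) falls within plus or minus alpha (as a fraction of the true RUL) of the ground truth. Both the metric form and the RUL series are illustrative assumptions, not the paper's definitions or data.

```python
# Simple alpha-lambda style accuracy check; metric form and data are illustrative.
def alpha_lambda_pass(times, predicted_rul, end_of_life, alpha=0.2):
    """Return a list of (time, within_bounds) pairs."""
    results = []
    for t, rul_hat in zip(times, predicted_rul):
        true_rul = end_of_life - t
        lower, upper = (1 - alpha) * true_rul, (1 + alpha) * true_rul
        results.append((t, lower <= rul_hat <= upper))
    return results

checks = alpha_lambda_pass(times=[10, 20, 30, 40],
                           predicted_rul=[95, 75, 50, 80],
                           end_of_life=100)
print(checks)   # [(10, True), (20, True), (30, False), (40, False)]
```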

  17. Strapdown system performance optimization test evaluations (SPOT), volume 1

    NASA Technical Reports Server (NTRS)

    Blaha, R. J.; Gilmore, J. P.

    1973-01-01

    A three axis inertial system was packaged in an Apollo gimbal fixture for fine grain evaluation of strapdown system performance in dynamic environments. These evaluations have provided information to assess the effectiveness of real-time compensation techniques and to study system performance tradeoffs to factors such as quantization and iteration rate. The strapdown performance and tradeoff studies conducted include: (1) Compensation models and techniques for the inertial instrument first-order error terms were developed and compensation effectivity was demonstrated in four basic environments; single and multi-axis slew, and single and multi-axis oscillatory. (2) The theoretical coning bandwidth for the first-order quaternion algorithm expansion was verified. (3) Gyro loop quantization was identified to affect proportionally the system attitude uncertainty. (4) Land navigation evaluations identified the requirement for accurate initialization alignment in order to pursue fine grain navigation evaluations.

  18. Group 3: Performance evaluation and assessment

    NASA Technical Reports Server (NTRS)

    Frink, A.

    1981-01-01

    Line-oriented flight training provides a unique learning experience and an opportunity to look at aspects of performance that other types of training do not provide. Areas such as crew coordination, resource management, leadership, and so forth can be readily evaluated in such a format. While individual performance is of the utmost importance, crew performance deserves equal emphasis; therefore, these areas should be carefully observed by the instructors as an area for discussion, in the same way that individual performance is observed. To be effective, it must be accepted by the crew members and administered by the instructors as pure training: learning through experience. To keep open minds and to benefit most from the experience, both in the doing and in the follow-on discussion, it is essential that it be entered into with a feeling of freedom, openness, and enthusiasm. Reserve or defensiveness arising from concern about failure must not be allowed to inhibit participation.

  19. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms, namely Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and in their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms in a different manner, and, depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable to prognostics may be designed so that the evaluation procedure can be standardized.

  20. [Evaluation model for municipal health planning management].

    PubMed

    Berretta, Isabel Quint; Lacerda, Josimari Telino de; Calvo, Maria Cristina Marino

    2011-11-01

    This article presents an evaluation model for municipal health planning management. The basis was a methodological study using the health planning theoretical framework to construct the evaluation matrix, in addition to an understanding of the organization and functioning designed by the Planning System of the Unified National Health System (PlanejaSUS) and definition of responsibilities for the municipal level under the Health Management Pact. The indicators and measures were validated using the consensus technique with specialists in planning and evaluation. The applicability was tested in 271 municipalities (counties) in the State of Santa Catarina, Brazil, based on population size. The proposed model features two evaluative dimensions which reflect the municipal health administrator's commitment to planning: the guarantee of resources and the internal and external relations needed for developing the activities. The data were analyzed using indicators, sub-dimensions, and dimensions. The study concludes that the model is feasible and appropriate for evaluating municipal performance in health planning management.

  1. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1992-01-01

    Analytical, numerical, and experimental studies were performed on two classes of high temperature materials processing sources for their potential use as directional solidification furnaces. The research concentrated on a commercially available high temperature furnace using a zirconia ceramic tube as the heating element and an Arc Furnace based on a tube welder. The first objective was to assemble the zirconia furnace and construct parts needed to successfully perform experiments. The 2nd objective was to evaluate the zirconia furnace performance as a directional solidification furnace element. The 3rd objective was to establish a data base on materials used in the furnace construction, with particular emphasis on emissivities, transmissivities, and absorptivities as functions of wavelength and temperature. A 1-D and 2-D spectral radiation heat transfer model was developed for comparison with standard modeling techniques, and were used to predict wall and crucible temperatures. The 4th objective addressed the development of a SINDA model for the Arc Furnace and was used to design sample holders and to estimate cooling media temperatures for the steady state operation of the furnace. And, the 5th objective addressed the initial performance evaluation of the Arc Furnace and associated equipment for directional solidification. Results of these objectives are presented.

  2. A detailed evaluation of the Eta-CMAQ forecast model performance for O3, its related precursors, and meteorological parameters during the 2004 ICARTT study

    NASA Astrophysics Data System (ADS)

    Yu, Shaocai; Mathur, Rohit; Schere, Kenneth; Kang, Daiwen; Pleim, Jonathan; Otte, Tanya L.

    2007-06-01

    The Eta-Community Multiscale Air Quality (CMAQ) model's forecast performance for ozone (O3), its precursors, and meteorological parameters has been assessed over the eastern United States with the observations obtained by aircraft, ship, ozonesonde, and lidar and two surface networks (AIRNOW and AIRMAP) during the 2004 International Consortium for Atmospheric Research on Transport and Transformation (ICARTT) study. The results at the AIRNOW sites show that the model was able to reproduce the day-to-day variations of observed daily maximum 8-hour O3 and captured the majority (73%) of observed daily maximum 8-hour O3 within a factor of 1.5 with normalized mean bias of 22%. The model in general reproduced O3 vertical distributions on most of the days at low altitudes, but consistent overestimations above ˜6 km are evident because of a combination of effects related to the specifications of lateral boundary conditions from the Global Forecast System (GFS) as well as the model's coarse vertical resolution in the upper free troposphere. The model captured the vertical variation patterns of the observed values for other parameters (HNO3, SO2, NO2, HCHO, and NOy_sum (NOy_sum = NO + NO2 + HNO3 + PAN)) with some exceptions, depending on the studied areas and air mass characteristics. The consistent underestimation of CO by ˜30% from surface to high altitudes is partly attributed to the inadequate representation of the transport of pollution associated with Alaska forest fires from outside the domain. The model exhibited good performance for marine or continental clear airflows from the east/north/northwest/south and southwest flows influenced only by Boston city plumes but overestimation for southeast flows influenced by the long-range transport of urban plumes from both New York City and Boston.
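
    For readers unfamiliar with the summary statistics quoted above, the sketch below computes the normalized mean bias (NMB) and the fraction of model values falling within a factor of 1.5 of the observations for a set of paired values. The paired values are placeholders, not ICARTT data.

```python
# Normalized mean bias and factor-of-1.5 fraction; paired values are placeholders.
import numpy as np

obs = np.array([55.0, 62.0, 70.0, 48.0, 80.0])      # observed daily max 8-h O3, ppb
mod = np.array([66.0, 75.0, 85.0, 60.0, 95.0])      # modeled values, ppb

nmb = 100.0 * np.sum(mod - obs) / np.sum(obs)       # NMB = 100 * sum(M - O) / sum(O)
ratio = mod / obs
within_1p5 = np.mean((ratio >= 1.0 / 1.5) & (ratio <= 1.5)) * 100.0

print(f"NMB = {nmb:.1f}%, within a factor of 1.5: {within_1p5:.0f}%")
```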

  3. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...: Reinforced Plastic Composites Production Testing and Initial Compliance Requirements § 63.5850 How do I... test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to you... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies....

  4. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to you... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies. (c... and under the specific conditions that 40 CFR part 63, subpart SS, specifies. (d) You may not...

  5. Performance Evaluation of Triangulation Based Range Sensors

    PubMed Central

    Guidi, Gabriele; Russo, Michele; Magrassi, Grazia; Bordegoni, Monica

    2010-01-01

    The performance of 2D digital imaging systems depends on several factors related to both optical and electronic processing. These concepts have given rise to standards, conceived for photographic equipment and two-dimensional scanning systems, aimed at estimating parameters such as resolution, noise, or dynamic range. Conversely, no standard test protocols currently exist for evaluating the corresponding performance of 3D imaging systems such as laser scanners or pattern projection range cameras. This paper focuses on investigating experimental processes for evaluating some critical parameters of 3D equipment by extending the concepts defined by the ISO standards to the 3D domain. The experimental part of this work concerns the characterization of different range sensors through the extraction of their resolution, accuracy, and uncertainty from sets of 3D data acquisitions of specifically designed test objects whose geometrical characteristics are known in advance. The major objective of this contribution is to suggest an easy characterization process for generating a reliable comparison between the performance of different range sensors and to check whether a specific piece of equipment is compliant with the expected characteristics. PMID:22163599

  6. ATAMM enhancement and multiprocessing performance evaluation

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.

    1994-01-01

    The Algorithm To Architecture Mapping Model (ATAMM) is a Petri net based model which provides a strategy for periodic execution of a class of real-time algorithms on a multicomputer dataflow architecture. The execution of large-grained, decision-free algorithms on homogeneous processing elements is studied. The ATAMM provides an analytical basis for calculating performance bounds on throughput characteristics. Extension of the ATAMM as a strategy for cyclo-static scheduling provides for a truly distributed ATAMM multicomputer operating system. An ATAMM testbed consisting of a centralized graph manager and three processors is described, using embedded firmware on 68HC11 microcontrollers.

  7. A performance evaluation of personnel identity verifiers

    SciTech Connect

    Maxwell, R.L.; Wright, L.J.

    1987-01-01

    Personnel identity verification devices, which are based on the examination and assessment of a body feature or a unique repeatable personal action, are steadily improving. These biometric devices are becoming more practical with respect to accuracy, speed, user compatibility, reliability and cost, but more development is necessary to satisfy the varied and sometimes ill-defined future requirements of the security industry. In an attempt to maintain an awareness of the availability and the capabilities of identity verifiers for the DOE security community, Sandia Laboratories continues to comparatively evaluate the capabilities and improvements of developing devices. An evaluation of several recently available verifiers is discussed in this paper. Operating environments and procedures more typical of physical access control use can reveal performance substantially different from the basic laboratory tests.

  8. A performance evaluation of personnel identity verifiers

    SciTech Connect

    Maxwell, R.L.; Wright, L.J.

    1987-07-01

    Personnel identity verification devices, which are based on the examination and assessment of a body feature or a unique repeatable personal action, are steadily improving. These biometric devices are becoming more practical with respect to accuracy, speed, user compatibility, reliability and cost, but more development is necessary to satisfy the varied and sometimes ill-defined future requirements of the security industry. In an attempt to maintain an awareness of the availability and the capabilities of identity verifiers for the DOE security community, Sandia Laboratories continues to comparatively evaluate the capabilities and improvements of developing devices. An evaluation of several recently available verifiers is discussed in this paper. Operating environments and procedures more typical of physical access control use can reveal performance substantially different from the basic laboratory tests.

  9. Application performance evaluation of the HTMT architecture.

    SciTech Connect

    Hereld, M.; Judson, I. R.; Stevens, R.

    2004-02-23

    In this report we summarize findings from a study of the predicted performance of a suite of application codes taken from the research environment and analyzed against a modeling framework for the HTMT architecture. We find that the inward bandwidth of the data vortex may be a limiting factor for some applications. We also find that available memory in the cryogenic layer is a constraining factor in the partitioning of applications into parcels. The architecture in several examples may be inadequately exploited; in particular, applications typically did not capitalize well on the available computational power or data organizational capability in the PIM layers. The application suite provided significant examples of wide excursions from the accepted (if simplified) program execution model--in particular, by requiring complex in-SPELL synchronization between parcels. The availability of the HTMT-C emulation environment did not contribute significantly to the ability to analyze applications, because of the large gap between the available hardware descriptions and parameters in the modeling framework and the types of data that could be collected via HTMT-C emulation runs. Detailed analysis of application performance, and indeed further credible development of the HTMT-inspired program execution model and system architecture, requires development of much better tools. Chief among them are cycle-accurate simulation tools for computational, network, and memory components. Additionally, there is a critical need for a whole system simulation tool to allow detailed programming exercises and performance tests to be developed. We address three issues in this report: (1) The landscape for applications of petaflops computing; (2) The performance of applications on the HTMT architecture; and (3) The effectiveness of HTMT-C as a tool for studying and developing the HTMT architecture. We set the scene with observations about the course of application development as petaflops

  10. Performance evaluation of swimmers: scientific tools.

    PubMed

    Smith, David J; Norris, Stephen R; Hogg, John M

    2002-01-01

    The purpose of this article is to provide a critical commentary of the physiological and psychological tools used in the evaluation of swimmers. The first-level evaluation should be the competitive performance itself, since it is at this juncture that all elements interplay and provide the 'highest form' of assessment. Competition video analysis of major swimming events has progressed to the point where it has become an indispensable tool for coaches, athletes, sport scientists, equipment manufacturers, and even the media. The breakdown of each swimming performance at the individual level to its constituent parts allows for comparison with the predicted or sought after execution, as well as allowing for comparison with identified world competition levels. The use of other 'on-going' monitoring protocols to evaluate training efficacy typically involves criterion 'effort' swims and specific training sets where certain aspects are scrutinised in depth. Physiological parameters that are often examined alongside swimming speed and technical aspects include oxygen uptake, heart rate, blood lactate concentration, blood lactate accumulation and clearance rates. Simple and more complex procedures are available for in-training examination of technical issues. Strength and power may be quantified via several modalities although, typically, tethered swimming and dry-land isokinetic devices are used. The availability of a 'swimming flume' does afford coaches and sport scientists a higher degree of flexibility in the type of monitoring and evaluation that can be undertaken. There is convincing evidence that athletes can be distinguished on the basis of their psychological skills and emotional competencies and that these differences become further accentuated as the athlete improves. No matter what test format is used (physiological, biomechanical or psychological), similar criteria of validity must be ensured so that the test provides useful and associative information

  12. Comparative Evaluation of Software Features and Performances.

    PubMed

    Cecconi, Daniela

    2016-01-01

    Analysis of two-dimensional gel images is a crucial step for the determination of changes in the protein expression, but at present, it still represents one of the bottlenecks in 2-DE studies. Over the years, different commercial and academic software packages have been developed for the analysis of 2-DE images. Each of these shows different advantageous characteristics in terms of quality of analysis. In this chapter, the characteristics of the different commercial software packages are compared in order to evaluate their main features and performances.

  13. Performance evaluation of TCP over ABT protocols

    NASA Astrophysics Data System (ADS)

    Ata, Shingo; Murata, Masayuki; Miyahara, Hideo

    1998-10-01

    ABT is promising for effectively transferring highly bursty data traffic in ATM networks. Most past studies focused on the data transfer capability of ABT within the ATM layer. In practice, however, the upper-layer transport protocol must also be considered, since the transport layer also provides a network congestion control mechanism. One such example is TCP, which is now widely used in the Internet. In this paper, we evaluate the performance of TCP over ABT protocols. Simulation results show that the retransmission mechanism of ABT can effectively overlay the TCP congestion control mechanism, so that TCP operates in a stable fashion and works well, serving only as an error recovery mechanism.

  14. Sandia solar dryer: preliminary performance evaluation

    SciTech Connect

    Glass, J.S.; Holm-Hansen, T.; Tills, J.; Pierce, J.D.

    1986-01-01

    Preliminary performance evaluations were conducted with the prototype modular solar dryer for wastewater sludge at Sandia National Laboratories. Operational parameters which appeared to influence sludge drying efficiency included condensation system capacity and air turbulence at the sludge surface. Sludge heating profiles showed dependencies on sludge moisture content, sludge depth and seasonal variability in available solar energy. Heat-pasteurization of sludge in the module was demonstrated in two dynamic-processing experiments. Through balanced utilization of drying and heating functions, the facility has the potential for year-round sludge treatment application.

  15. Evaluation of impact limiter performance during end-on and slapdown drop tests of a one-third scale model storage/transport cask system

    SciTech Connect

    Yoshimura, H.R.; Bronowski, D.R.; Uncapher, W.L.; Attaway, S.W.; Bateman, V.I.; Carne, T.G.; Gregory, D.L. ); Huerta, M. )

    1990-12-01

    This report describes drop testing of a one-third scale model shipping cask system. Two casks were designed and fabricated by Transnuclear, Inc., to ship spent fuel from the former Nuclear Fuel Services West Valley reprocessing facility in New York to the Idaho National Engineering Laboratory for a long-term spent fuel dry storage demonstration project. As part of the NRC's regulatory certification process, one-third scale model tests were performed to obtain experimental data on impact limiter performance during impact testing. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact limiter retention system, and (4) examine the crush behavior of the limiters. Two 30-ft (9-m) drop tests were conducted on a mass model of the cask body and scaled balsa and redwood-filled impact limiters. This report describes the results of both tests in terms of measured decelerations, posttest deformation measurements, and the general structural response of the system. 3 refs., 32 figs.

  16. Performance Evaluation of Phasor Measurement Systems

    SciTech Connect

    Huang, Zhenyu; Kasztenny, Bogdan; Madani, Vahid; Martin, Kenneth E.; Meliopoulos, Sakis; Novosel, Damir; Stenbakken, Jerry

    2008-07-20

    After two decades of phasor network deployment, phasor measurements are now available at many major substations and power plants. The North American SynchroPhasor Initiative (NASPI), supported by both the US Department of Energy and the North American Electricity Reliability Council (NERC), provides a forum to facilitate the efforts in phasor technology in North America. Phasor applications have been explored and some are in today’s utility practice. IEEE C37.118 Standard is a milestone in standardizing phasor measurements and defining performance requirements. To comply with IEEE C37.118 and to better understand the impact of phasor quality on applications, the NASPI Performance and Standards Task Team (PSTT) initiated and accomplished the development of two important documents to address characterization of PMUs and instrumentation channels, which leverage prior work (esp. in WECC) and international experience. This paper summarizes the accomplished PSTT work and presents the methods for phasor measurement evaluation.

  17. Generic hypersonic vehicle performance model

    NASA Technical Reports Server (NTRS)

    Chavez, Frank R.; Schmidt, David K.

    1993-01-01

    An integrated computational model of a generic hypersonic vehicle was developed for the purpose of determining the vehicle's performance characteristics, which include the lift, drag, thrust, and moment acting on the vehicle at specified altitude, flight condition, and vehicular configuration. The lift, drag, thrust, and moment are developed for the body fixed coordinate system. These forces and moments arise from both aerodynamic and propulsive sources. SCRAMjet engine performance characteristics, such as fuel flow rate, can also be determined. The vehicle is assumed to be a lifting body with a single aerodynamic control surface. The body shape and control surface location are arbitrary and must be defined. The aerodynamics are calculated using either 2-dimensional Newtonian or modified Newtonian theory and approximate high-Mach-number Prandtl-Meyer expansion theory. Skin-friction drag was also accounted for. The skin-friction drag coefficient is a function of the freestream Mach number. The data for the skin-friction drag coefficient values were taken from NASA Technical Memorandum 102610. The modeling of the vehicle's SCRAMjet engine is based on quasi 1-dimensional gas dynamics for the engine diffuser, nozzle, and the combustor with heat addition. The engine has three variable inputs for control: the engine inlet diffuser area ratio, the total temperature rise through the combustor due to combustion of the fuel, and the engine internal expansion nozzle area ratio. The pressure distribution over the vehicle's lower aft body surface, which acts as an external nozzle, is calculated using a combination of quasi 1-dimensional gas dynamic theory and Newtonian or modified Newtonian theory. The exhaust plume shape is determined by matching the pressure inside the plume, calculated from the gas dynamic equations, with the freestream pressure, calculated from Newtonian or Modified Newtonian theory. In this manner, the pressure distribution along the vehicle after body
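
    As a hedged sketch of the Newtonian and modified Newtonian surface-pressure estimates named above, the snippet below computes a panel pressure coefficient Cp = Cp_max * sin^2(theta), with Cp_max taken from the Rayleigh pitot formula in the modified Newtonian case; the geometry and flight condition are assumed for illustration and are not taken from the NASA model.

```python
import numpy as np

def cp_max_modified_newtonian(mach, gamma=1.4):
    """Stagnation-point pressure coefficient from the Rayleigh pitot formula."""
    m2 = mach ** 2
    pt2_over_pinf = (((gamma + 1) ** 2 * m2 / (4 * gamma * m2 - 2 * (gamma - 1)))
                     ** (gamma / (gamma - 1))) * ((1 - gamma + 2 * gamma * m2) / (gamma + 1))
    return (pt2_over_pinf - 1.0) / (0.5 * gamma * m2)

def panel_cp(mach, theta_rad, modified=True, gamma=1.4):
    """Newtonian / modified Newtonian Cp for a panel inclined theta_rad to the freestream.
    Shadowed panels (theta <= 0) are assigned Cp = 0."""
    if theta_rad <= 0.0:
        return 0.0
    cp_max = cp_max_modified_newtonian(mach, gamma) if modified else 2.0
    return cp_max * np.sin(theta_rad) ** 2

# Hypothetical lower-surface panel at 10 deg incidence in a Mach 8 freestream
print(f"Cp = {panel_cp(mach=8.0, theta_rad=np.radians(10.0)):.4f}")
```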

  18. Evaluating cryostat performance for naval applications

    NASA Astrophysics Data System (ADS)

    Knoll, David; Willen, Dag; Fesmire, James; Johnson, Wesley; Smith, Jonathan; Meneghelli, Barry; Demko, Jonathan; George, Daniel; Fowler, Brian; Huber, Patti

    2012-06-01

    The Navy intends to use High Temperature Superconducting Degaussing (HTSDG) coil systems on future Navy platforms. The Navy Metalworking Center (NMC) is leading a team that is addressing cryostat configuration and manufacturing issues associated with fabricating long lengths of flexible, vacuum-jacketed cryostats that meet Navy shipboard performance requirements. The project includes provisions to evaluate the reliability performance, as well as proofing of fabrication techniques. Navy cryostat performance specifications include less than 1 Wm-1 heat loss, 2 MPa working pressure, and a 25-year vacuum life. Cryostat multilayer insulation (MLI) systems developed on the project have been validated using a standardized cryogenic test facility and implemented on 5-meterlong test samples. Performance data from these test samples, which were characterized using both LN2 boiloff and flow-through measurement techniques, will be presented. NMC is working with an Integrated Project Team consisting of Naval Sea Systems Command, Naval Surface Warfare Center-Carderock Division, Southwire Company, nkt cables, Oak Ridge National Laboratory (ORNL), ASRC Aerospace, and NASA Kennedy Space Center (NASA-KSC) to complete these efforts. Approved for public release; distribution is unlimited. This material is submitted with the understanding that right of reproduction for governmental purposes is reserved for the Office of Naval Research, Arlington, Virginia 22203-1995.

  19. Performance evaluation of bound diamond ring tools

    SciTech Connect

    Piscotty, M.A.; Taylor, J.S.; Blaedel, K.L.

    1995-07-14

    LLNL is collaborating with the Center for Optics Manufacturing (COM) and the American Precision Optics Manufacturers Association (APOMA) to optimize bound diamond ring tools for the spherical generation of high quality optical surfaces. An important element of this work is establishing an experimentally-verified link between tooling properties and workpiece quality indicators such as roughness, subsurface damage and removal rate. In this paper, we report on a standardized methodology for assessing ring tool performance and its preliminary application to a set of commercially-available wheels. Our goals are to (1) assist optics manufacturers (users of the ring tools) in evaluating tools and in assessing their applicability for a given operation, and (2) provide performance feedback to wheel manufacturers to help optimize tooling for the optics industry. Our paper includes measurements of wheel performance for three 2-4 micron diamond bronze-bond wheels that were supplied by different manufacturers to nominally-identical specifications. Preliminary data suggest that the differences in performance levels among the wheels were small.

  20. 40 CFR 35.9055 - Evaluation of recipient performance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Evaluation of recipient performance. 35... Evaluation of recipient performance. The Regional Administrator will oversee each recipient's performance... schedule for evaluation in the assistance agreement and will evaluate recipient performance and...

  1. 48 CFR 436.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Construction 436.201 Evaluation of contractor performance. Preparation of performance evaluation reports. In addition to the requirements of FAR 36.201, performance evaluation reports shall be prepared for indefinite... of services to be ordered exceeds $500,000.00. For these contracts, performance evaluation...

  2. Behavior model for performance assessment.

    SciTech Connect

    Brown-VanHoozer, S. A.

    1999-07-23

    Every individual channels information differently based on a preference for the sensory modality or representational system (visual, auditory, or kinesthetic) they tend to favor most (their primary representational system, or PRS). Therefore, some of us access and store our information primarily visually first, some auditorily, and others kinesthetically (through feel and touch), which in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task--the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive, and motor systems stimulated and influenced by the three sensory modalities: visual, auditory, and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place and how to ask questions is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, which may be summed and averaged for a generalization of behavior(s). However, without a basic understanding of how the behavior was generated through a decision-making strategy process, predictive models are overall inefficient in their analysis of the means by which behavior was produced. What is seen is the end result.

  3. Performance Evaluation and Parameter Identification on DROID III

    NASA Technical Reports Server (NTRS)

    Plumb, Julianna J.

    2011-01-01

    The DROID III project consisted of two main parts. The former, performance evaluation, focused on the performance characteristics of the aircraft such as lift-to-drag ratio, thrust required for level flight, and rate of climb. The latter, parameter identification, focused on finding the aerodynamic coefficients for the aircraft using a system that creates a mathematical model to match the flight data of doublet maneuvers and the aircraft's response. Both portions of the project called for flight testing, and that data is now available as a result of this project. The conclusion of the project is that the performance evaluation data are well within desired standards but could be improved with a thrust model, and that parameter identification is still in need of more data processing but seems to produce reasonable results thus far.

  4. Evaluation of stroke performance in tennis.

    PubMed

    Vergauwen, L; Spaepen, A J; Lefevre, J; Hespel, P

    1998-08-01

    In the present studies, the Leuven Tennis Performance Test (LTPT), a newly developed test procedure to measure stroke performance in match-like conditions in elite tennis players, was evaluated as to its value for research purposes. The LTPT is enacted on a regular tennis court. It consists of first and second services, and of returning balls projected by a machine to target zones indicated by a lighted sign. Neutral, defensive, and offensive tactical situations are elicited by appropriately programming the machine. Stroke quality is determined from simultaneous measurements of error rate, ball velocity, and precision of ball placement. A velocity/precision (VP) and a velocity/precision/error (VPE) index are also calculated. The validity and sensitivity of the LTPT were determined by verifying whether LTPT scores reflect minor differences in tennis ranking on the one hand and the effects of fatigue on the other. Compared with lower ranked players, higher ones made fewer errors (P < 0.05). In addition, stroke velocity was higher (P < 0.05), and lateral stroke precision, VP, and VPE scores were better (P < 0.05) in the latter. Furthermore, fatigue induced by a prolonged tennis load increased (P < 0.05) error rate and decreased (P < 0.05) stroke velocity and the VP and VPE indices. It is concluded that the LTPT is an accurate, reliable, and valid instrument for the evaluation of stroke quality in high-level tennis players. PMID:9710870

  5. Evaluating performance of container terminal operation using simulation

    NASA Astrophysics Data System (ADS)

    Nawawi, Mohd Kamal Mohd; Jamil, Fadhilah Che; Hamzah, Firdaus Mohamad

    2015-05-01

    A container terminal is a facility where containers are transshipped from one mode of transport to another. Congestion problems lead to a decrease in the customers' level of satisfaction. This study presents the application of a simulation technique, with the main objective of developing a model of current operations and evaluating the performance of the container terminal. The performance measures used in this study to evaluate the container terminal model are the average waiting time in queue, the average process time at berth, the number of vessels entering the berth, and resource utilization. Simulation was found to be a suitable technique for this study, and the results from the simulation model helped address the congestion problem at the container terminal.

  6. Ground truth and benchmarks for performance evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

    Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The fundamental problems include a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Positioning System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  7. Measuring the Performance of Neural Models.

    PubMed

    Schoppe, Oliver; Harper, Nicol S; Willmore, Ben D B; King, Andrew J; Schnupp, Jan W H

    2016-01-01

    Good metrics of the performance of a statistical or computational model are essential for model comparison and selection. Here, we address the design of performance metrics for models that aim to predict neural responses to sensory inputs. This is particularly difficult because the responses of sensory neurons are inherently variable, even in response to repeated presentations of identical stimuli. In this situation, standard metrics (such as the correlation coefficient) fail because they do not distinguish between explainable variance (the part of the neural response that is systematically dependent on the stimulus) and response variability (the part of the neural response that is not systematically dependent on the stimulus, and cannot be explained by modeling the stimulus-response relationship). As a result, models which perfectly describe the systematic stimulus-response relationship may appear to perform poorly. Two metrics have previously been proposed which account for this inherent variability: Signal Power Explained (SPE; Sahani and Linden, 2003), and the normalized correlation coefficient (CCnorm; Hsu et al., 2004). Here, we analyze these metrics, and show that they are intimately related. However, SPE has no lower bound, and we show that, even for good models, SPE can yield negative values that are difficult to interpret. CCnorm is better behaved in that it is effectively bounded between -1 and 1, and values below zero are very rare in practice and easy to interpret. However, it was hitherto not possible to calculate CCnorm directly; instead, it was estimated using imprecise and laborious resampling techniques. Here, we identify a new approach that can calculate CCnorm quickly and accurately. As a result, we argue that it is now a better choice of metric than SPE to accurately evaluate the performance of neural models. PMID:26903851
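
    A minimal sketch of the CCnorm calculation discussed above is given below, using the Sahani-Linden signal-power estimate from repeated trials; the data are synthetic and the exact estimator in the paper may differ in detail.

```python
import numpy as np

def signal_power(trials):
    """Sahani-Linden signal-power estimate from an (n_trials, n_bins) response matrix."""
    trials = np.asarray(trials, float)
    n = trials.shape[0]
    var_of_sum = np.var(trials.sum(axis=0))
    sum_of_var = np.var(trials, axis=1).sum()
    return (var_of_sum - sum_of_var) / (n * (n - 1))

def cc_norm(trials, prediction):
    """Correlation between prediction and trial-averaged response, normalized by the
    best correlation achievable given trial-to-trial variability."""
    trials = np.asarray(trials, float)
    prediction = np.asarray(prediction, float)
    mean_resp = trials.mean(axis=0)
    cov = np.mean((mean_resp - mean_resp.mean()) * (prediction - prediction.mean()))
    return cov / np.sqrt(np.var(prediction) * signal_power(trials))

# Synthetic example: 10 noisy repeats of one underlying signal, plus a good model prediction
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 6 * np.pi, 200))
trials = signal + rng.normal(0, 0.8, size=(10, 200))
prediction = 0.9 * signal
print(f"CCnorm = {cc_norm(trials, prediction):.2f}")
```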

  8. Performance evaluation of SAR/GMTI algorithms

    NASA Astrophysics Data System (ADS)

    Garber, Wendy; Pierson, William; Mcginnis, Ryan; Majumder, Uttam; Minardi, Michael; Sobota, David

    2016-05-01

    There is a history and understanding of exploiting moving targets within ground moving target indicator (GMTI) data, including methods for modeling performance. However, many assumptions valid for GMTI processing are invalid for synthetic aperture radar (SAR) data. For example, traditional GMTI processing assumes targets are exo-clutter and a system that uses a GMTI waveform, i.e. low bandwidth (BW) and low pulse repetition frequency (PRF). Conversely, SAR imagery is typically formed to focus data at zero Doppler and requires high BW and high PRF. Therefore, many of the techniques used in performance estimation of GMTI systems are not valid for SAR data. However, as demonstrated by papers in the recent literature [1-11], there is interest in exploiting moving targets within SAR data. The techniques employed vary widely, including filter banks to form images at multiple Dopplers, performing smear detection, and attempting to address the issue through waveform design. The above work validates the need for moving target exploitation in SAR data, but it does not represent a theory allowing for the prediction or bounding of performance. This work develops an approach to estimate and/or bound performance for moving target exploitation specific to SAR data. Synthetic SAR data is generated across a range of sensor, environment, and target parameters to test the exploitation algorithms under specific conditions. This provides a design tool allowing radar systems to be tuned for specific moving target exploitation applications. In summary, we derive a set of rules that bound the performance of specific moving target exploitation algorithms under variable operating conditions.

  9. Lithographic performance evaluation of a contaminated EUV mask after cleaning

    SciTech Connect

    George, Simi; Naulleau, Patrick; Okoroanyanwu, Uzodinma; Dittmar, Kornelia; Holfeld, Christian; Wuest, Andrea

    2009-11-16

    The effect of surface contamination and subsequent mask surface cleaning on the lithographic performance of an EUV mask is investigated. Patterns of 40 nm and 50 nm lines and spaces (L/S) printed with SEMATECH's Berkeley micro-field exposure tool (MET) are evaluated to compare the performance of a contaminated and cleaned mask to an uncontaminated mask. Since the two EUV masks have distinct absorber architectures, optical imaging models and aerial image calculations were completed to determine any expected differences in performance. Measured and calculated Bossung curves, process windows, and exposure latitudes for the two sets of L/S patterns are compared to determine how the contamination and cleaning impact the lithographic performance of EUV masks. The observed differences in mask performance are shown to be insignificant, indicating that the cleaning process did not appreciably affect mask performance.

  10. Space Shuttle Underside Astronaut Communications Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Hwu, Shian U.; Dobbins, Justin A.; Loh, Yin-Chung; Kroll, Quin D.; Sham, Catherine C.

    2005-01-01

    The Space Shuttle Ultra High Frequency (UHF) communications system is planned to provide Radio Frequency (RF) coverage for astronauts working on the underside of the Space Shuttle Orbiter (SSO) for thermal tile inspection and repair. This study assesses the Space Shuttle UHF communication performance for astronauts in the shadow region without line-of-sight (LOS) to the Space Shuttle and Space Station UHF antennas. To ensure RF coverage performance at anticipated astronaut worksites, the link margin between the UHF antennas and Extravehicular Activity (EVA) astronauts with significant vehicle structure blockage was analyzed. A series of near-field measurements was performed using the NASA/JSC Anechoic Chamber antenna test facilities. Computational investigations were also performed using electromagnetic modeling techniques. A computer simulation tool based on the Geometrical Theory of Diffraction (GTD) was used to compute the signal strengths. The signal strength was obtained by computing the reflected and diffracted fields along the propagation paths between the transmitting and receiving antennas. Based on the results obtained in this study, RF coverage for UHF communication links was determined for the anticipated astronaut worksite in the shadow region underneath the Space Shuttle.
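
    For orientation, the sketch below shows a bare free-space link-budget calculation (transmit power plus antenna gains minus path loss, compared against receiver sensitivity); it ignores the reflected and diffracted fields that the GTD analysis above actually accounts for, and all numerical values are placeholders rather than Shuttle parameters.

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d/lambda)."""
    wavelength = 3.0e8 / freq_hz
    return 20.0 * math.log10(4.0 * math.pi * distance_m / wavelength)

def link_margin_db(tx_power_dbm, tx_gain_dbi, rx_gain_dbi,
                   distance_m, freq_hz, rx_sensitivity_dbm, extra_loss_db=0.0):
    """Received power minus receiver sensitivity; a positive margin means the link closes."""
    rx_power = (tx_power_dbm + tx_gain_dbi + rx_gain_dbi
                - free_space_path_loss_db(distance_m, freq_hz) - extra_loss_db)
    return rx_power - rx_sensitivity_dbm

# Placeholder numbers for a ~400 MHz UHF link over 60 m with 20 dB of assumed blockage loss
margin = link_margin_db(30.0, 3.0, 0.0, 60.0, 400e6, -100.0, extra_loss_db=20.0)
print(f"link margin = {margin:.1f} dB")
```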

  11. An empirical evaluation of spatial regression models

    NASA Astrophysics Data System (ADS)

    Gao, Xiaolu; Asami, Yasushi; Chung, Chang-Jo F.

    2006-10-01

    Conventional statistical methods are often ineffective for evaluating spatial regression models. One reason is that spatial regression models usually have more parameters or smaller sample sizes than a simple model, so their degrees of freedom are reduced. Thus, it is often impractical to evaluate them with traditional tests. Another reason, which is theoretically associated with statistical methods, is that statistical criteria are crucially dependent on such assumptions as normality, independence, and homogeneity. This may create problems because the assumptions are themselves open to question. In view of these problems, this paper proposes an alternative empirical evaluation method. To illustrate the idea, a few hedonic regression models for a house and land price data set are evaluated, including a simple, ordinary linear regression model and three spatial models. Their performance as to how well the price of the house and land can be predicted is examined. With a cross-validation technique, the price at each sample point is predicted with a model estimated from the samples excluding the one concerned. Then, empirical criteria are established whereby the predicted prices are compared with the real, observed prices. The proposed method provides objective guidance for the selection of a suitable model specification for a data set. Moreover, the method is seen as an alternative way to test the significance of the spatial relationships considered in spatial regression models.
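
    The cross-validation idea described above can be sketched as follows, with ordinary least squares standing in for the hedonic and spatial models and synthetic house/land data standing in for the real data set.

```python
import numpy as np

def loo_predictions(X, y):
    """Leave-one-out predictions from an ordinary least squares model with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])  # add intercept column
    preds = np.empty(len(y))
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        preds[i] = X[i] @ beta
    return preds

# Synthetic house/land data: floor area (m^2) and distance to station (km) -> price
rng = np.random.default_rng(1)
area = rng.uniform(50, 200, 40)
dist = rng.uniform(0.2, 5.0, 40)
price = 0.8 * area - 12.0 * dist + rng.normal(0, 10, 40)

pred = loo_predictions(np.column_stack([area, dist]), price)
rmse = np.sqrt(np.mean((pred - price) ** 2))
mae = np.mean(np.abs(pred - price))
print(f"LOO RMSE = {rmse:.1f}, LOO MAE = {mae:.1f}")
```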

  12. Advocacy Evaluation: A Model for Internal Evaluation Offices.

    ERIC Educational Resources Information Center

    Sonnichsen, Richard C.

    1988-01-01

    As evaluations are more often implemented by internal staff, internal evaluators must begin to assume decision-making and advocacy tasks. This advocacy evaluation concept is described using the Federal Bureau of Investigation evaluation staff as a model. (TJH)

  13. Polypyrrole actuators: modeling and performance

    NASA Astrophysics Data System (ADS)

    Madden, John D.; Madden, Peter G.; Hunter, Ian W.

    2001-07-01

    Conducting polymer actuators generate forces that exceed those of mammalian skeletal muscle by up to two orders of magnitude for a given cross-sectional area, require only a few volts to operate, and are low in cost. However, application of conducting polymer actuators is hampered by the lack of a full description of the relationship between load, displacement, voltage and current. In an effort to provide such a model, system identification techniques are employed. Stress-strain tests are performed at constant applied potential to determine polypyrrole stiffness. The admittance transfer function of polypyrrole and the associated electrolyte is measured over the potential range in which polypyrrole is highly conductive. The admittance is well described by treating the polymer as a volumetric capacitance of 8×10^7 F m^-3 whose charging rate is limited by the electrolyte resistance and by diffusion within polypyrrole. The relationship between strain and charge is investigated, showing that strain is directly proportional to charge via the strain-to-charge-density ratio, α = 1×10^-10 m^3 C^-1, at loads of up to 4 MPa. Beyond 4 MPa the strain to charge ratio is time dependent. The admittance models, stress/strain relation and strain to charge relationship are combined to form a full description of polypyrrole electromechanical response. This description predicts that large increases in strain rate and power are obtained through miniaturization, yielding bandwidths in excess of 10 kHz. The model also enables motor designers to optimize polypyrrole actuator geometries for their applications.
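
    Combining the two constants reported above (a volumetric capacitance of about 8×10^7 F m^-3 and a strain-to-charge ratio α of about 1×10^-10 m^3 C^-1) gives a rough quasi-static strain estimate; the sketch below does this while ignoring the charging dynamics and the load dependence above 4 MPa, so it is an illustration rather than the authors' full model.

```python
# Quasi-static strain estimate for a polypyrrole actuator, using the constants quoted
# in the abstract and neglecting charging dynamics and load effects (an assumption).
C_V   = 8e7     # volumetric capacitance, F/m^3
ALPHA = 1e-10   # strain-to-charge-density ratio, m^3/C

def strain_for_potential_step(delta_v):
    """Strain after full charging following a potential step of delta_v volts."""
    charge_density = C_V * delta_v        # C/m^3 stored in the polymer volume
    return ALPHA * charge_density         # dimensionless strain

for dv in (0.2, 0.5, 1.0):
    print(f"delta V = {dv:.1f} V -> strain = {100 * strain_for_potential_step(dv):.2f} %")
```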

  14. Traction contact performance evaluation at high speeds

    NASA Technical Reports Server (NTRS)

    Tevaarwerk, J. L.

    1981-01-01

    The results of traction tests performed on two fluids are presented. These tests covered a pressure range of 1.0 to 2.5 GPa, an inlet temperature range of 30 °C to 70 °C, a speed range of 10 to 80 m/sec, aspect ratios of 0.5 to 5, and spin from 0 to 2.1 percent. The test results are presented in the form of two dimensionless parameters, the initial traction slope and the maximum traction peak. With the use of a suitable rheological fluid model the actual traction curves measured can now be reconstituted from the two fluid parameters. More importantly, knowledge of these parameters, together with the fluid rheological model, allows the prediction of traction under conditions of spin, slip, and any combination thereof. Comparison between the traction theoretically predicted under these conditions and that measured in actual traction tests shows that this method gives good results.

  15. Manipulator Performance Evaluation Using Fitts' Taping Task

    SciTech Connect

    Draper, J.V.; Jared, B.C.; Noakes, M.W.

    1999-04-25

    Metaphorically, a teleoperator with master controllers projects the user's arms and hands into a remote area. Therefore, human users interact with teleoperators at a more fundamental level than they do with most human-machine systems. Instead of inputting decisions about how the system should function, teleoperator users input the movements they might make if they were truly in the remote area, and the remote machine must recreate their trajectories and impedance. This intense human-machine interaction requires displays and controls more carefully attuned to human motor capabilities than is necessary with most systems. It is important for teleoperated manipulators to be able to recreate human trajectories and impedance in real time. One method for assessing manipulator performance is to observe how well a system behaves while a human user completes human dexterity tasks with it. Fitts' tapping task has been used many times in the past for this purpose. This report describes such a performance assessment. The International Submarine Engineering (ISE) Autonomous/Teleoperated Operations Manipulator (ATOM) servomanipulator system was evaluated using a generic positioning accuracy task. The task is a simple one but has the merits of (1) producing a performance function estimate rather than a point estimate and (2) being widely used in the past for human and servomanipulator dexterity tests. Results of testing using this task may, therefore, allow comparison with other manipulators, and the task is generically representative of a broad class of tasks. Results of the testing indicate that the ATOM manipulator is capable of performing the task. Force reflection had a negative impact on task efficiency in these data. This was most likely caused by the high resistance to movement the master controller exhibited with the force reflection engaged. Measurements of exerted forces were not made, so it is not possible to say whether the force reflection helped participants
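
    For readers unfamiliar with the task, a Fitts-type analysis typically reduces tapping data to an index of difficulty and a linear movement-time fit; the sketch below uses the common Shannon formulation ID = log2(D/W + 1) and MT = a + b*ID with made-up numbers, which may differ from the exact formulation used in the report.

```python
import numpy as np

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return np.log2(np.asarray(distance, float) / np.asarray(width, float) + 1.0)

# Hypothetical movement times (s) for tapping targets of various distances/widths (mm)
distance = np.array([100, 100, 200, 200, 400, 400], float)
width    = np.array([ 40,  10,  40,  10,  40,  10], float)
mt       = np.array([0.55, 0.82, 0.68, 0.95, 0.80, 1.10])

idx = index_of_difficulty(distance, width)
b, a = np.polyfit(idx, mt, 1)   # fit MT = a + b * ID
print(f"a = {a:.2f} s, b = {b:.2f} s/bit, throughput ~ {1.0 / b:.1f} bits/s")
```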

  16. Critical evaluation of laser-induced interstitial thermotherapy (LITT) performed on in-vitro, in-vivo, and ex-vivo models

    NASA Astrophysics Data System (ADS)

    Henkel, Thomas O.; Niedergethmann, M.; Alken, Peter

    1996-01-01

    Thermal ablation techniques are experiencing application in many different fields of medicine. Recently, experimental studies have been performed by various authors concerned with dosimetry and laser-tissue interaction. In order to study the effects of interstitial laser energy on biological tissue, we examined different tissue models which compared important parameters during laser application. We have performed the following in vitro, in vivo and ex vivo studies by comparing a neodymium: YAG (1064 nm) and diode laser (830 nm) equipped with interstitial laser fibers. In vitro studies which examined the influence of changes in power and time duration of application were performed on potato, muscle, liver and kidney. In vivo studies (porcine model) also examined different power settings at designated time intervals. Ex vivo studies with isolated perfused kidney (IPK) investigated the effects of power, application time, perfusion pressure and different perfusion mediums (saline solution, anticoagulated blood). In vitro studies revealed necrotic lesions in all tissues. Although no power threshold could be obtained for liver tissue (early onset fiber damage), potato, kidney and muscle tissue demonstrated their own respective power threshold. Furthermore, when using the Nd:YAG laser, we observed that higher power settings had permitted a quicker necrosis induction, however within its own treatment power spectrum, the diode laser was capable of inducing larger lesions. In vivo studies demonstrated that early onset diffuser tip damage would prevent exact documentation of laser-tissue interaction at higher power levels. Results obtained with our standardized ex vivo model (IPK) revealed smaller necrotic lesions with saline than with blood perfusion and also demonstrated the important role which perfusion rate plays during laser-tissue interaction. We found that pigmented, well vascularized parenchymal organs with low stromal content (kidney, liver) and a higher absorption

  17. Evaluation of help model replacement codes

    SciTech Connect

    Whiteside, Tad; Hang, Thong; Flach, Gregory

    2009-07-01

    This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure-cap and into the waste containment zone at the Department of Energy closure sites. This work compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards’ equation). It provides a literature review of the HELP model and the proposed codes, which result in two recommended codes for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing actual simulations on a simple model and comparing the results of those simulations to those obtained with the HELP code and the field data. From the results of this work, we conclude that the new codes perform nearly the same, although moving forward, we recommend HYDRUS-2D3D.

  18. The MSAD actuator solenoid, performance evaluation and modification

    NASA Astrophysics Data System (ADS)

    Worth, G.

    1983-04-01

    A small conical-faced solenoid actuator is tested in order to develop design criteria for improved performance including increased pull sensitivity. In addition to increased pull for the normal electrical inputs, a reduction in pull response to short duration electrical noise pulses is also required. Along with dynamic testing of the solenoid, a linear circuit model is developed. This model permits calculation of the dynamic forces and currents which can be expected with various electrical inputs. The model parameters are related to the actual solenoid and allow the effects of winding density and shading rings to be evaluated.

  19. Performance Evaluations of Ceramic Wafer Seals

    NASA Technical Reports Server (NTRS)

    Dunlap, Patrick H., Jr.; DeMange, Jeffrey J.; Steinetz, Bruce M.

    2006-01-01

    Future hypersonic vehicles will require high temperature, dynamic seals in advanced ramjet/scramjet engines and on the vehicle airframe to seal the perimeters of movable panels, flaps, and doors. Seal temperatures in these locations can exceed 2000 F, especially when the seals are in contact with hot ceramic matrix composite sealing surfaces. NASA Glenn Research Center is developing advanced ceramic wafer seals to meet the needs of these applications. High temperature scrub tests performed between silicon nitride wafers and carbon-silicon carbide rub surfaces revealed high friction forces and evidence of material transfer from the rub surfaces to the wafer seals. Stickage between adjacent wafers was also observed after testing. Several design changes to the wafer seals were evaluated as possible solutions to these concerns. Wafers with recessed sides were evaluated as a potential means of reducing friction between adjacent wafers. Alternative wafer materials are also being considered as a means of reducing friction between the seals and their sealing surfaces and because the baseline silicon nitride wafer material (AS800) is no longer commercially available.

  20. Advanced fuels modeling: Evaluating the steady-state performance of carbide fuel in helium-cooled reactors using FRAPCON 3.4

    NASA Astrophysics Data System (ADS)

    Hallman, Luther, Jr.

    Uranium carbide (UC) has long been considered a potential alternative to uranium dioxide (UO2) fuel, especially in the context of Gen IV gas-cooled reactors. It has shown promise because of its high uranium density, good irradiation stability, and especially high thermal conductivity. Despite its many benefits, UC is known to swell at a rate twice that of UO2. However, the swelling phenomenon is not well understood, and we are limited to a weak empirical understanding of the swelling mechanism. One suggested cladding for UC is silicon carbide (SiC), a ceramic that demonstrates a number of desirable properties. Among them are an increased corrosion resistance, high mechanical strength, and irradiation stability. However, with increased temperatures, SiC exhibits an extremely brittle nature. The brittle behavior of SiC is not fully understood and thus it is unknown how SiC would respond to the added stress of a swelling UC fuel. To better understand the interaction between these advanced materials, each has been implemented into FRAPCON, the preferred fuel performance code of the Nuclear Regulatory Commission (NRC); additionally, the material properties for a helium coolant have been incorporated. The implementation of UC within FRAPCON required the development of material models that described not only the thermophysical properties of UC, such as thermal conductivity and thermal expansion, but also models for the swelling, densification, and fission gas release associated with the fuel's irradiation behavior. This research is intended to supplement ongoing analysis of the performance and behavior of uranium carbide and silicon carbide in a helium-cooled reactor.

  1. 48 CFR 236.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CONTRACTS Special Aspects of Contracting for Construction 236.201 Evaluation of contractor performance. (a) Preparation of performance evaluation reports. Use DD Form 2626, Performance Evaluation (Construction... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Evaluation of...

  2. Performance and Perception in the Flipped Learning Model: An Initial Approach to Evaluate the Effectiveness of a New Teaching Methodology in a General Science Classroom

    NASA Astrophysics Data System (ADS)

    González-Gómez, David; Jeong, Jin Su; Airado Rodríguez, Diego; Cañada-Cañada, Florentina

    2016-06-01

    "Flipped classroom" teaching methodology is a type of blended learning in which the traditional class setting is inverted. Lecture is shifted outside of class, while the classroom time is employed to solve problems or doing practical works through the discussion/peer collaboration of students and instructors. This relatively new instructional methodology claims that flipping your classroom engages more effectively students with the learning process, achieving better teaching results. Thus, this research aimed to evaluate the effects of the flipped classroom on the students' performance and perception of this new methodology. This study was conducted in a general science course, sophomore of the Primary Education bachelor degree in the Training Teaching School of the University of Extremadura (Spain) during the course 2014/2015. In order to assess the suitability of the proposed methodology, the class was divided in two groups. For the first group, a traditional methodology was followed, and it was used as control. On the other hand, the "flipped classroom" methodology was used in the second group, where the students were given diverse materials, such as video lessons and reading materials, before the class to be revised at home by them. Online questionnaires were as well provided to assess the progress of the students before the class. Finally, the results were compared in terms of students' achievements and a post-task survey was also conducted to know the students' perceptions. A statistically significant difference was found on all assessments with the flipped class students performing higher on average. In addition, most students had a favorable perception about the flipped classroom noting the ability to pause, rewind and review lectures, as well as increased individualized learning and increased teacher availability.

  3. A Model for Curriculum Evaluation

    ERIC Educational Resources Information Center

    Crane, Peter; Abt, Clark C.

    1969-01-01

    Describes in some detail the Curriculum Evaluation Model, "a technique for calculating the cost-effectiveness of alternative curriculum materials by a detailed breakdown and analysis of their components, quality, and cost. Coverage, appropriateness, motivational effectiveness, and cost are the four major categories in terms of which the…

  4. Solar power plant performance evaluation: simulation and experimental validation

    NASA Astrophysics Data System (ADS)

    Natsheh, E. M.; Albarbar, A.

    2012-05-01

    In this work the performance of a solar power plant is evaluated based on a developed model comprising a photovoltaic array, battery storage, a controller, and converters. The model is implemented using the MATLAB/SIMULINK software package. A perturb and observe (P&O) algorithm is used for maximizing the generated power based on a maximum power point tracker (MPPT) implementation. The outcomes of the developed model are validated and supported by a case study carried out using an operational 28.8 kW grid-connected solar power plant located in central Manchester. Measurements were taken over a 21-month period, using hourly average irradiance and cell temperature. It was found that system degradation could be clearly monitored by determining the residual (the difference) between the output power predicted by the model and the actual measured power. The residual exceeded the healthy threshold, 1.7 kW, due to heavy snow in Manchester last winter. More importantly, the developed performance evaluation technique could be adopted to detect other factors that may degrade the performance of the PV panels, such as shading and dirt. Repeatability and reliability of the developed system performance were validated during this period. Good agreement was achieved between the theoretical simulation and the real-time measurements taken from the online grid-connected solar power plant.
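
    The residual-based degradation check described above can be sketched as follows; the threshold of 1.7 kW is taken from the abstract, while the hourly values are placeholders.

```python
import numpy as np

def flag_degradation(predicted_kw, measured_kw, threshold_kw=1.7):
    """Return residuals (predicted - measured) and a boolean mask of suspect hours."""
    residual = np.asarray(predicted_kw, float) - np.asarray(measured_kw, float)
    return residual, residual > threshold_kw

# Placeholder hourly averages (kW) for a 28.8 kW plant
predicted = np.array([12.4, 18.9, 22.1, 20.5, 15.0])
measured  = np.array([12.1, 18.5, 19.8, 20.2, 12.9])   # third and fifth hours underperform
residual, suspect = flag_degradation(predicted, measured)
for r, s in zip(residual, suspect):
    print(f"residual = {r:+.1f} kW {'<-- check panels' if s else ''}")
```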

  5. 48 CFR 1252.216-72 - Performance evaluation plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....216-72 Performance evaluation plan. As prescribed in (TAR) 48 CFR 1216.406(b), insert the following clause: Performance Evaluation Plan (OCT 1994) (a) A Performance Evaluation Plan shall be unilaterally... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Performance...

  6. Analysis of Photovoltaic System Energy Performance Evaluation Method

    SciTech Connect

    Kurtz, S.; Newmiller, J.; Kimber, A.; Flottemesch, R.; Riley, E.; Dierauf, T.; McKee, J.; Krishnani, P.

    2013-11-01

    Documentation of the energy yield of a large photovoltaic (PV) system over a substantial period can be useful to measure a performance guarantee, as an assessment of the health of the system, for verification of a performance model to then be applied to a new system, or for a variety of other purposes. Although the measurement of this performance metric might appear to be straightforward, there are a number of subtleties associated with variations in weather and imperfect data collection that complicate the determination and data analysis. A performance assessment is most valuable when it is completed with a very low uncertainty and when the subtleties are systematically addressed, yet currently no standard exists to guide this process. This report summarizes a draft methodology for an Energy Performance Evaluation Method, the philosophy behind the draft method, and the lessons that were learned by implementing the method.
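
    The draft method itself is not reproduced in the abstract, so as a hedged illustration the sketch below computes one widely used energy-performance metric, the performance ratio, from assumed monthly totals; it is not necessarily the metric defined in the report.

```python
def performance_ratio(energy_kwh, poa_irradiation_kwh_m2, dc_rating_kw, g_stc_kw_m2=1.0):
    """PR = measured energy / (nameplate rating * plane-of-array irradiation / G_STC)."""
    reference_yield = poa_irradiation_kwh_m2 / g_stc_kw_m2     # equivalent full-sun hours
    expected_energy = dc_rating_kw * reference_yield
    return energy_kwh / expected_energy

# Placeholder monthly totals for an assumed 500 kW(dc) array
pr = performance_ratio(energy_kwh=62000.0, poa_irradiation_kwh_m2=155.0, dc_rating_kw=500.0)
print(f"PR = {pr:.2f}")
```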

  7. Evaluating the Performance of a New Model for Predicting the Growth of Clostridium perfringens in Cooked, Uncured Meat and Poultry Products under Isothermal, Heating, and Dynamically Cooling Conditions.

    PubMed

    Huang, Lihan

    2016-07-01

    Clostridium perfringens type A is a significant public health threat and its spores may germinate, outgrow, and multiply during cooling of cooked meats. This study applies a new C. perfringens growth model in the USDA Integrated Pathogen Modeling Program-Dynamic Prediction (IPMP Dynamic Prediction) to predict the growth from spores of C. perfringens in cooked uncured meat and poultry products using isothermal, dynamic heating, and cooling data reported in the literature. The residual errors of predictions (observation-prediction) are analyzed, and the root-mean-square error (RMSE) calculated. For isothermal and heating profiles, each data point in growth curves is compared. The mean residual errors (MRE) of predictions range from -0.40 to 0.02 Log colony forming units (CFU)/g, with an RMSE of approximately 0.6 Log CFU/g. For cooling, the end point predictions are conservative in nature, with an MRE of -1.16 Log CFU/g for single-rate cooling and -0.66 Log CFU/g for dual-rate cooling. The RMSE is between 0.6 and 0.7 Log CFU/g. Compared with other models reported in the literature, this model makes more accurate and fail-safe predictions. For cooling, the percentage for accurate and fail-safe predictions is between 97.6% and 100%. Under criterion 1, the percentage of accurate predictions is 47.5% for single-rate cooling and 66.7% for dual-rate cooling, while the fail-dangerous predictions are between 0% and 2.4%. This study demonstrates that IPMP Dynamic Prediction can be used by food processors and regulatory agencies as a tool to predict the growth of C. perfringens in uncured cooked meats and evaluate the safety of cooked or heat-treated uncured meat and poultry products exposed to cooling deviations or to develop customized cooling schedules. This study also demonstrates the need for more accurate data collection during cooling.
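
    The residual summary used above (residual = observation - prediction in log CFU/g, reported as mean residual error and RMSE) can be computed with a short sketch such as the following; the values are placeholders, not the study's data.

```python
import numpy as np

def residual_summary(observed_log, predicted_log):
    """Mean residual error and RMSE of (observation - prediction), both in log CFU/g.
    A negative MRE means the model over-predicts growth, i.e. is fail-safe on average."""
    resid = np.asarray(observed_log, float) - np.asarray(predicted_log, float)
    return resid.mean(), np.sqrt(np.mean(resid ** 2))

# Placeholder end-point populations (log CFU/g) for a set of cooling scenarios
observed  = [2.1, 3.4, 1.8, 2.9, 4.0]
predicted = [2.9, 4.1, 2.6, 3.3, 4.9]
mre, rmse = residual_summary(observed, predicted)
print(f"MRE = {mre:+.2f} log CFU/g, RMSE = {rmse:.2f} log CFU/g")
```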

  8. Using Weibull Distribution Analysis to Evaluate ALARA Performance

    SciTech Connect

    E. L. Frome, J. P. Watkins, and D. A. Hagemeyer

    2009-10-01

    As Low as Reasonably Achievable (ALARA) is the underlying principle for protecting nuclear workers from potential health outcomes related to occupational radiation exposure. Radiation protection performance is currently evaluated by measures such as collective dose and average measurable dose, which do not indicate ALARA performance. The purpose of this work is to show how statistical modeling of individual doses using the Weibull distribution can provide objective supplemental performance indicators for comparing ALARA implementation among sites and for insights into ALARA practices within a site. Maximum likelihood methods were employed to estimate the Weibull shape and scale parameters used for performance indicators. The shape parameter reflects the effectiveness of maximizing the number of workers receiving lower doses and is represented as the slope of the fitted line on a Weibull probability plot. Additional performance indicators derived from the model parameters include the 99th percentile and the exceedance fraction. When grouping sites by collective total effective dose equivalent (TEDE) and ranking by 99th percentile with confidence intervals, differences in performance among sites can be readily identified. Applying this methodology will enable more efficient and complete evaluation of the effectiveness of ALARA implementation.
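
    The indicators described here can be illustrated with a short fit; the sketch below is an assumption-laden example (synthetic doses, a two-parameter Weibull with location fixed at zero, and an arbitrary 5 mSv exceedance threshold), not the authors' implementation.

```python
# Sketch of Weibull-based ALARA indicators: fit shape and scale to individual
# measurable doses by maximum likelihood, then derive the 99th percentile and
# the exceedance fraction above a chosen dose.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
doses_msv = stats.weibull_min.rvs(0.8, scale=1.5, size=500, random_state=rng)

# Maximum-likelihood fit with the location fixed at zero (two-parameter Weibull).
shape, _, scale = stats.weibull_min.fit(doses_msv, floc=0)

p99 = stats.weibull_min.ppf(0.99, shape, scale=scale)        # 99th percentile dose
exceed_5msv = stats.weibull_min.sf(5.0, shape, scale=scale)  # exceedance fraction

print(f"shape={shape:.2f} scale={scale:.2f} mSv, "
      f"99th percentile={p99:.2f} mSv, P(dose>5 mSv)={exceed_5msv:.3f}")
```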

  9. A model evaluation checklist for process-based environmental models

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against an independent dataset to that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. (3) Model structural inadequacies, whereby model structure may inadequately represent
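
    For reference, the Nash-Sutcliffe statistic criticized in point (1) is a one-line calculation; values near 1 indicate the simulation beats the mean-of-observations benchmark, yet can still mask unrealistic dynamics, which is why the checklist recommends supplementary statistics. The data below are placeholders, not values from the INCA-P study.

```python
# Nash-Sutcliffe efficiency: NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
import numpy as np

def nash_sutcliffe(obs, sim):
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([0.10, 0.12, 0.90, 0.35, 0.20, 0.15])   # e.g. daily P load, placeholder
sim = np.array([0.12, 0.10, 0.70, 0.40, 0.22, 0.18])
print(round(nash_sutcliffe(obs, sim), 3))
```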

  10. Market behavior and performance of different strategy evaluation schemes

    NASA Astrophysics Data System (ADS)

    Baek, Yongjoo; Lee, Sang Hoon; Jeong, Hawoong

    2010-08-01

    Strategy evaluation schemes are a crucial factor in any agent-based market model, as they determine the agents’ strategy preferences and consequently their behavioral pattern. This study investigates how the strategy evaluation schemes adopted by agents affect their performance in conjunction with the market circumstances. We observe the performance of three strategy evaluation schemes, the history-dependent wealth game, the trend-opposing minority game, and the trend-following majority game, in a stock market where the price is exogenously determined. The price is either directly adopted from the real stock market indices or generated with a Markov chain of order ≤ 2. Each scheme’s success is quantified by the average wealth accumulated by the traders equipped with the scheme. The wealth game, as it learns from the history, shows relatively good performance unless the market is highly unpredictable. The majority game is successful in a trendy market dominated by long periods of sustained price increase or decrease. On the other hand, the minority game is suitable for a market with persistent zigzag price patterns. We also discuss the consequence of implementing finite memory in the scoring processes of strategies. Our findings suggest under which market circumstances each evaluation scheme is appropriate for modeling the behavior of real market traders.

  11. High performance APCS conceptual design and evaluation scoping study

    SciTech Connect

    Soelberg, N.; Liekhus, K.; Chambers, A.; Anderson, G.

    1998-02-01

    This Air Pollution Control System (APCS) Conceptual Design and Evaluation study was conducted to evaluate a high-performance APC system for minimizing air emissions from mixed waste thermal treatment systems. Seven variations of high-performance APCS designs were conceptualized using several design objectives. One of the system designs was selected for detailed process simulation using ASPEN PLUS to determine material and energy balances and evaluate performance. Installed system capital costs were also estimated. Sensitivity studies were conducted to evaluate the incremental cost and benefit of added carbon adsorber beds for mercury control, selective catalytic reduction for NOx control, and offgas retention tanks for holding the offgas until sample analysis is conducted to verify that the offgas meets emission limits. Results show that the high-performance dry-wet APCS can easily meet all expected emission limits except for possibly mercury. The capability to achieve high levels of mercury control (potentially necessary for thermally treating some DOE mixed streams) could not be validated using current performance data for mercury control technologies. The engineering approach and ASPEN PLUS modeling tool developed and used in this study identified APC equipment and system performance, size, cost, and other issues that are not yet resolved. These issues need to be addressed in feasibility studies and conceptual designs for new facilities or for determining how to modify existing facilities to meet expected emission limits. The ASPEN PLUS process simulation with current and refined input assumptions and calculations can be used to provide system performance information for decision-making, identifying best options, estimating costs, reducing the potential for emission violations, providing information needed for waste flow analysis, incorporating new APCS technologies in existing designs, or performing facility design and permitting activities.

  12. Evaluating the influence of physical, economic and managerial factors on sheet erosion in rangelands of SW Spain by performing a sensitivity analysis on an integrated dynamic model.

    PubMed

    Ibáñez, J; Lavado Contador, J F; Schnabel, S; Martínez Valderrama, J

    2016-02-15

    An integrated dynamic model was used to evaluate the influence of climatic, soil, pastoral, economic and managerial factors on sheet erosion in rangelands of SW Spain (dehesas). This was achieved by means of a variance-based sensitivity analysis. Topsoil erodibility, climate change and a combined factor related to soil water storage capacity and the pasture production function were the factors which influenced water erosion the most. Of them, climate change is the main source of uncertainty, though in this study it caused a reduction in the mean and the variance of long-term erosion rates. The economic and managerial factors showed scant influence on soil erosion, meaning that it is unlikely to find such influence in the study area for the time being. This is because the low profitability of the livestock business maintains stocking rates at low levels. However, the potential impact of livestock, through which economic and managerial factors affect soil erosion, proved to be greater in absolute value than the impact of climate change. Therefore, if changes in some economic or managerial factors led to higher stocking rates in the future, significant increases in erosion rates would be expected.

  13. High-Performance Monopropellants and Catalysts Evaluated

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    The NASA Glenn Research Center is sponsoring efforts to develop advanced monopropellant technology. The focus has been on monopropellant formulations composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants do not have a toxic vapor and do not need the extraordinary procedures for storage, handling, and disposal required of hydrazine (N2H4). Generically, HAN-based monopropellants are denser and have lower freezing points than N2H4. The performance of HAN-based monopropellants depends on the selection of fuel, the HAN-to-fuel ratio, and the amount of water in the formulation. HAN-based monopropellants are not seen as a replacement for N2H4 per se, but rather as a propulsion option in their own right. For example, HAN-based monopropellants would prove beneficial to the orbit insertion of small, power-limited satellites because of this propellant's high performance (reduced system mass), high density (reduced system volume), and low freezing point (elimination of tank and line heaters). Under a Glenn-contracted effort, Aerojet Redmond Rocket Center conducted testing to provide the foundation for the development of monopropellant thrusters with an Isp goal of 250 sec. A modular, workhorse reactor (representative of a 1-lbf thruster) was used to evaluate HAN formulations with catalyst materials. Stoichiometric, oxygen-rich, and fuel-rich formulations of HAN-methanol and HAN-tris(aminoethyl)amine trinitrate were tested to investigate the effects of stoichiometry on combustion behavior. Aerojet found that fuel-rich formulations degrade the catalyst and reactor faster than oxygen-rich and stoichiometric formulations do. A HAN-methanol formulation with a theoretical Isp of 269 sec (designated HAN269MEO) was selected as the baseline. With a combustion efficiency of at least 93 percent demonstrated for HAN-based monopropellants, HAN269MEO will meet the 250-sec Isp goal.

  14. Coherent lidar airborne windshear sensor: performance evaluation.

    PubMed

    Targ, R; Kavaya, M J; Huffaker, R M; Bowles, R L

    1991-05-20

    National attention has focused on the critical problem of detecting and avoiding windshear since the crash on 2 Aug. 1985 of a Lockheed L-1011 at Dallas/Fort Worth International Airport. As part of the NASA/FAA National Integrated Windshear Program, we have defined a measurable windshear hazard index that can be remotely sensed from an aircraft, to give the pilot information about the wind conditions he will experience at some later time if he continues along the present flight path. A technology analysis and end-to-end performance simulation measuring signal-to-noise ratios and resulting wind velocity errors for competing coherent laser radar (lidar) systems have been carried out. The results show that a Ho:YAG lidar at a wavelength of 2.1 microm and a CO(2) lidar at 10.6 microm can give the pilot information about the line-of-sight component of a windshear threat from his present position to a region extending 2-4 km in front of the aircraft. This constitutes a warning time of 20-40 s, even in conditions of moderately heavy precipitation. Using these results, a Coherent Lidar Airborne Shear Sensor (CLASS) that uses a Q-switched CO(2) laser at 10.6 microm is being designed and developed for flight evaluation in the fall of 1991.
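
    The quoted warning time follows directly from the look-ahead range; the worked relation below assumes an approach ground speed of roughly 100 m/s, which is not stated in the abstract but is consistent with the 20-40 s figure.

```latex
% Warning time from look-ahead range R at ground speed v (v ~ 100 m/s assumed):
t_{\mathrm{warn}} = \frac{R}{v}, \qquad
\frac{2\ \mathrm{km}}{100\ \mathrm{m/s}} = 20\ \mathrm{s}, \qquad
\frac{4\ \mathrm{km}}{100\ \mathrm{m/s}} = 40\ \mathrm{s}.
```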

  15. Performance evaluation of an infrared thermocouple.

    PubMed

    Chen, Chiachung; Weng, Yu-Kai; Shen, Te-Ching

    2010-01-01

    The measurement of the leaf temperature of forests or agricultural plants is an important technique for the monitoring of the physiological state of crops. The infrared thermometer is a convenient device due to its fast response and nondestructive measurement technique. Nowadays, a novel infrared thermocouple, developed with the same measurement principle of the infrared thermometer but using a different detector, has been commercialized for non-contact temperature measurement. The performance of two kinds of infrared thermocouples was evaluated in this study. The standard temperature was maintained by a temperature calibrator and a special black cavity device. The results indicated that both types of infrared thermocouples had good precision. The error distribution ranged from -1.8 °C to 18 °C when the reading values were taken as the true values. Within the range from 13 °C to 37 °C, the adequate calibration equations were high-order polynomial equations. Within the narrower range from 20 °C to 35 °C, the adequate equation was a linear equation for one sensor and a second-order polynomial equation for the other sensor. The accuracy of the two kinds of infrared thermocouple was improved by nearly 0.4 °C with the calibration equations. These devices could serve as mobile monitoring tools for in situ and real time routine estimation of leaf temperatures.
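
    A calibration of this kind amounts to fitting a polynomial from sensor readings to reference temperatures; the sketch below uses made-up reading/reference pairs and simply mirrors the linear and second-order forms mentioned in the abstract.

```python
# Sketch of a polynomial calibration: map sensor readings to calibrator
# temperatures and report the residual error for two polynomial orders.
import numpy as np

readings  = np.array([20.4, 23.1, 25.8, 28.6, 31.2, 34.1])   # sensor output, degC (made up)
reference = np.array([20.0, 22.5, 25.0, 27.5, 30.0, 32.5])   # calibrator, degC (made up)

for order in (1, 2):
    coeffs = np.polyfit(readings, reference, order)
    corrected = np.polyval(coeffs, readings)
    max_err = np.max(np.abs(corrected - reference))
    print(f"order {order}: max residual {max_err:.2f} degC")
```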

  16. Sequentially Executed Model Evaluation Framework

    SciTech Connect

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.
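
    The driver/controller pattern described here can be illustrated with a toy example. The classes and method names below are invented for illustration only; this is not the SeMe API.

```python
# Hypothetical illustration of the pattern: an input driver supplies each
# step's data, a model driver combines it with prior results, and an output
# driver receives the evaluation, all stepped by a batch controller.
from typing import Iterator, Optional

class InputDriver:
    def __init__(self, series):
        self._it: Iterator[float] = iter(series)
    def next_value(self) -> Optional[float]:
        return next(self._it, None)

class RunningMeanModel:
    """Toy sequential model: evaluation depends on prior results plus new data."""
    def __init__(self):
        self.count, self.mean = 0, 0.0
    def step(self, x: float) -> float:
        self.count += 1
        self.mean += (x - self.mean) / self.count
        return abs(x - self.mean)          # crude anomaly score

class PrintOutputDriver:
    def emit(self, step: int, score: float) -> None:
        print(f"step {step}: score={score:.3f}")

def batch_controller(inp: InputDriver, model: RunningMeanModel,
                     out: PrintOutputDriver) -> None:
    step = 0
    while (x := inp.next_value()) is not None:
        out.emit(step, model.step(x))
        step += 1

batch_controller(InputDriver([1.0, 1.1, 0.9, 5.0, 1.0]),
                 RunningMeanModel(), PrintOutputDriver())
```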

  17. Sequentially Executed Model Evaluation Framework

    SciTech Connect

    2014-02-14

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed

  18. Sequentially Executed Model Evaluation Framework

    2014-02-14

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed

  19. Sequentially Executed Model Evaluation Framework

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  20. Public Education Resources and Pupil Performance Models.

    ERIC Educational Resources Information Center

    Spottheim, David; And Others

    This report details three models quantifying the relationships between educational means (resources) and ends (pupil achievements) to analyze resource allocation problems within school districts: (1) the Pupil Performance Model; (2) the Goal Programming Model; and (3) the Operational Structure of a School and Pupil Performance Model. These models…

  1. Evaluation of a lake whitefish bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; O'Connor, Daniel V.; Pothoven, Steven A.; Schneeberger, Philip J.; Rediske, Richard R.; O'Keefe, James P.; Bergstedt, Roger A.; Argyle, Ray L.; Brandt, Stephen B.

    2006-01-01

    We evaluated the Wisconsin bioenergetics model for lake whitefish Coregonus clupeaformis in the laboratory and in the field. For the laboratory evaluation, lake whitefish were fed rainbow smelt Osmerus mordax in four laboratory tanks during a 133-d experiment. Based on a comparison of bioenergetics model predictions of lake whitefish food consumption and growth with observed consumption and growth, we concluded that the bioenergetics model furnished significantly biased estimates of both food consumption and growth. On average, the model overestimated consumption by 61% and underestimated growth by 16%. The source of the bias was probably an overestimation of the respiration rate. We therefore adjusted the respiration component of the bioenergetics model to obtain a good fit of the model to the observed consumption and growth in our laboratory tanks. Based on the adjusted model, predictions of food consumption over the 133-d period fell within 5% of observed consumption in three of the four tanks and within 9% of observed consumption in the remaining tank. We used polychlorinated biphenyls (PCBs) as a tracer to evaluate model performance in the field. Based on our laboratory experiment, the efficiency with which lake whitefish retained PCBs from their food (ρ) was estimated at 0.45. We applied the bioenergetics model to Lake Michigan lake whitefish and then used PCB determinations of both lake whitefish and their prey from Lake Michigan to estimate ρ in the field. Application of the original model to Lake Michigan lake whitefish yielded a field estimate of 0.28, implying that the original formulation of the model overestimated consumption in Lake Michigan by 61%. Application of the bioenergetics model with the adjusted respiration component resulted in a field ρ estimate of 0.56, implying that this revised model underestimated consumption by 20%.

  2. Evaluating the TD model of classical conditioning.

    PubMed

    Ludvig, Elliot A; Sutton, Richard S; Kehoe, E James

    2012-09-01

    The temporal-difference (TD) algorithm from reinforcement learning provides a simple method for incrementally learning predictions of upcoming events. Applied to classical conditioning, TD models suppose that animals learn a real-time prediction of the unconditioned stimulus (US) on the basis of all available conditioned stimuli (CSs). In the TD model, similar to other error-correction models, learning is driven by prediction errors--the difference between the change in US prediction and the actual US. With the TD model, however, learning occurs continuously from moment to moment and is not artificially constrained to occur in trials. Accordingly, a key feature of any TD model is the assumption about the representation of a CS on a moment-to-moment basis. Here, we evaluate the performance of the TD model with a heretofore unexplored range of classical conditioning tasks. To do so, we consider three stimulus representations that vary in their degree of temporal generalization and evaluate how the representation influences the performance of the TD model on these conditioning tasks.
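
    The learning rule at the heart of such models can be sketched compactly. The example below uses a complete-serial-compound (CSC) representation, one of the representation styles such studies vary; the trial timing, learning rate, and discount factor are illustrative values chosen here, not the paper's parameters.

```python
# Sketch of a TD learning rule for conditioning: the US prediction is a linear
# function of a time-varying CS representation, and weights move in proportion
# to the TD error delta = US + gamma*V(t+1) - V(t).
import numpy as np

n_steps, cs_on, us_at = 20, 5, 15        # trial length, CS onset, US time step
alpha, gamma = 0.1, 0.98                 # learning rate, discount factor
w = np.zeros(n_steps)                    # one weight per CSC element

def csc(t):
    """CSC representation: a distinct unit element for each step since CS onset."""
    x = np.zeros(n_steps)
    if cs_on <= t:
        x[t - cs_on] = 1.0
    return x

for trial in range(200):
    for t in range(n_steps - 1):
        us = 1.0 if t + 1 == us_at else 0.0
        v_now, v_next = w @ csc(t), w @ csc(t + 1)
        delta = us + gamma * v_next - v_now      # TD (prediction) error
        w += alpha * delta * csc(t)

print(np.round(w[:us_at - cs_on + 1], 2))        # learned prediction profile
```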

  3. 48 CFR 8.406-7 - Contractor Performance Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Performance Evaluation. Ordering activities must prepare an evaluation of contractor performance for each... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Contractor Performance Evaluation. 8.406-7 Section 8.406-7 Federal Acquisition Regulations System FEDERAL ACQUISITION...

  4. 48 CFR 1552.209-76 - Contractor performance evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 1552.209-76 Contractor performance evaluations. As prescribed in section 1509.170-1, insert the following clause in all applicable solicitations and contracts. Contractor Performance Evaluations (OCT 2002... compliance with safety standards performance categories if deemed appropriate for the evaluation or...

  5. 10 CFR 1045.9 - RD classification performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false RD classification performance evaluation. 1045.9 Section... classification performance evaluation. (a) Heads of agencies shall ensure that RD management officials and those... RD or FRD documents shall have their personnel performance evaluated with respect to...

  6. 24 CFR 570.491 - Performance and evaluation report.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 3 2010-04-01 2010-04-01 false Performance and evaluation report... Development Block Grant Program § 570.491 Performance and evaluation report. The annual performance and evaluation report shall be submitted in accordance with 24 CFR part 91. (Approved by the Office of...

  7. 24 CFR 570.491 - Performance and evaluation report.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Development Block Grant Program § 570.491 Performance and evaluation report. The annual performance and evaluation report shall be submitted in accordance with 24 CFR part 91. (Approved by the Office of Management... 24 Housing and Urban Development 3 2011-04-01 2010-04-01 true Performance and evaluation...

  8. CTBT integrated verification system evaluation model supplement

    SciTech Connect

    EDENBURN,MICHAEL W.; BUNTING,MARCUS; PAYNE JR.,ARTHUR C.; TROST,LAWRENCE C.

    2000-03-02

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, 'top-level,' modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.

  9. Evaluating conflation methods using uncertainty modeling

    NASA Astrophysics Data System (ADS)

    Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis

    2013-05-01

    The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.
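
    The evaluation idea can be reduced to a toy form: perturb a "truth" layer with a spatial-uncertainty model to simulate a second layer, then score a matcher against the known correspondences. The Gaussian noise model, point features, and 5 m match radius below are illustrative choices, not the authors' simulation design.

```python
# Toy version of uncertainty-based evaluation of feature matching: perturb a
# truth point layer, then score a nearest-neighbour matcher against the known
# one-to-one correspondence.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.uniform(0, 1000, size=(200, 2))            # feature coordinates (m)
sigma, threshold = 2.0, 5.0                            # positional sd, match radius (m)
simulated = truth + rng.normal(0.0, sigma, truth.shape)

def match_rate(layer_a, layer_b, max_dist):
    correct = 0
    for i, p in enumerate(layer_a):
        d = np.linalg.norm(layer_b - p, axis=1)
        j = int(np.argmin(d))
        if j == i and d[j] <= max_dist:                # matched to its own twin
            correct += 1
    return correct / len(layer_a)

print(f"correct-match rate: {match_rate(truth, simulated, threshold):.2%}")
```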

  10. Evaluating a Performance-Ideal vs. Great Performance

    ERIC Educational Resources Information Center

    Bar-Elli, Gilead

    2004-01-01

    Based on a conception in which a musical composition determines aesthetic-normative properties, a distinction is drawn between two notions of performance: the "autonomous", in which a performance is regarded as a musical work on its own, and the "intentionalistic", in which it is regarded as essentially of a particular work. An ideal…

  11. Flexible pavement performance evaluation using deflection criteria

    NASA Astrophysics Data System (ADS)

    Wedner, R. J.

    1980-04-01

    Flexible pavement projects in Nebraska were monitored for dynamic deflections, roughness, and distress for six consecutive years. Present surface conditions were characterized and data for evaluating rehabilitation needs, including amount of overlay, were provided. Data were evaluated and factors were isolated for determining the structural adequacy of flexible pavements, evaluating existing pavement strength and soil subgrade conditions, and determining overlay thickness requirements. Terms for evaluating structural condition for pavement sufficiency ratings were developed and existing soil support value and subgrade strength province maps were evaluated.

  12. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing network of Sun SPARC workstations with PVM (Parallel Virtual Machine), which is a software system for linking clusters of machines. Second, a set of three basic applications was selected. The applications consist of a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes in many cases is the restricting factor to performance. That is, coarse-grain parallelism which requires less frequent communication between processes will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps) which will allow us to extend our study to newer applications, performance metrics, and configurations.

  13. Towards Reliable Evaluation of Anomaly-Based Intrusion Detection Performance

    NASA Technical Reports Server (NTRS)

    Viswanathan, Arun

    2012-01-01

    This report describes the results of research into the effects of environment-induced noise on the evaluation process for anomaly detectors in the cyber security domain. This research was conducted during a 10-week summer internship program from the 19th of August, 2012 to the 23rd of August, 2012 at the Jet Propulsion Laboratory in Pasadena, California. The research performed lies within the larger context of the Los Angeles Department of Water and Power (LADWP) Smart Grid cyber security project, a Department of Energy (DoE) funded effort involving the Jet Propulsion Laboratory, California Institute of Technology and the University of Southern California/ Information Sciences Institute. The results of the present effort constitute an important contribution towards building more rigorous evaluation paradigms for anomaly-based intrusion detectors in complex cyber physical systems such as the Smart Grid. Anomaly detection is a key strategy for cyber intrusion detection; it operates by identifying deviations from profiles of nominal behavior and is thus conceptually appealing for detecting "novel" attacks. Evaluating the performance of such a detector requires assessing: (a) how well it captures the model of nominal behavior, and (b) how well it detects attacks (deviations from normality). Current evaluation methods produce results that give insufficient insight into the operation of a detector, inevitably resulting in a significantly poor characterization of a detector's performance. In this work, we first describe a preliminary taxonomy of key evaluation constructs that are necessary for establishing rigor in the evaluation regime of an anomaly detector. We then focus on clarifying the impact of the operational environment on the manifestation of attacks in monitored data. We show how dynamic and evolving environments can introduce high variability into the data stream perturbing detector performance. Prior research has focused on understanding the impact of this

  14. Human Performance Models of Pilot Behavior

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Hooey, Becky L.; Byrne, Michael D.; Deutsch, Stephen; Lebiere, Christian; Leiden, Ken; Wickens, Christopher D.; Corker, Kevin M.

    2005-01-01

    Five modeling teams from industry and academia were chosen by the NASA Aviation Safety and Security Program to develop human performance models (HPM) of pilots performing taxi operations and runway instrument approaches with and without advanced displays. One representative from each team will serve as a panelist to discuss their team's model architecture, augmentations and advancements to HPMs, and aviation-safety related lessons learned. Panelists will discuss how modeling results are influenced by a model's architecture and structure, the role of the external environment, specific modeling advances and future directions and challenges for human performance modeling in aviation.

  15. Performance evaluation on vibration control of MR landing gear

    NASA Astrophysics Data System (ADS)

    Lee, D. Y.; Nam, Y. J.; Yamane, R.; Park, M. K.

    2009-02-01

    This paper is concerned with the applicability of the developed MR damper to the landing gear system for attenuating undesired shock and vibration in the landing and taxiing phases. First of all, the experimental model of the MR damper is derived based on the results of performance evaluations. Next, a simplified skyhook controller, which is one of the most straightforward but effective approaches for improving ride comfort in vehicles with active suspensions, is formulated. Then, the vibration control performance of the landing gear system using the MR damper is theoretically evaluated in the landing phase of the aircraft. A series of simulation analyses show that the proposed MR damper with the skyhook controller is effective for suppressing undesired vibration of the aircraft body. Finally, the simulation results are additionally verified via the HILS (hardware-in-the-loop simulation) method.
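
    A semi-active skyhook law of the general kind referenced above can be written in a few lines. The on/off switching form and the numerical gains below are assumptions made for illustration, not the controller formulated in the paper.

```python
# Minimal sketch of a semi-active skyhook law: command a high damping
# coefficient when dissipating energy mimics a damper "hooked to the sky",
# otherwise fall back to the low state.
def skyhook_damping(v_body, v_rel, c_high=8000.0, c_low=800.0):
    """v_body: absolute sprung-mass velocity (m/s)
       v_rel : relative velocity across the damper (m/s)
       returns the commanded damping coefficient (N*s/m)"""
    return c_high if v_body * v_rel > 0.0 else c_low

# Commanded damper force for a few sample states (illustrative values).
for v_body, v_rel in [(0.3, 0.2), (0.3, -0.1), (-0.2, -0.4)]:
    c = skyhook_damping(v_body, v_rel)
    print(f"v_body={v_body:+.1f}, v_rel={v_rel:+.1f} -> F={c * v_rel:+.0f} N")
```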

  16. A Note for Missile Autopilot Performance Evaluation Test

    NASA Astrophysics Data System (ADS)

    Eguchi, Hirofumi

    The essential benefit of HardWare-In-the-Loop (HWIL) simulation is that the performance of the autopilot system is evaluated realistically, without modeling error, by using actual hardware such as seeker systems, autopilot systems and servo equipment. The most important requirement in an HWIL simulation test is to set the homing seeker at the 3-axis gimbal center of the flight motion table. However, for various reasons such as the length of the homing seeker, the structure of the flight motion table and the shape of attachments, this requirement cannot always be satisfied. In this paper, the effect of this position error on guidance and control system performance is analyzed and evaluated.

  17. Findings and Preliminary Recommendations from the Michigan State and Indiana University Research Study of Value-Added Models to Evaluate Teacher Performance

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.

    2013-01-01

    The push for accountability in public schooling has extended to the measurement of teacher performance, accelerated by federal efforts through Race to the Top. Currently, a large number of states and districts across the country are computing measures of teacher performance based on the standardized test scores of their students and using them in…

  18. The design and implementation of an operational model evaluation system

    SciTech Connect

    Foster, K.T.

    1995-06-01

    An evaluation of an atmospheric transport and diffusion model's operational performance typically involves the comparison of the model's calculations with measurements of an atmospheric pollutant's temporal and spatial distribution. These evaluations however often use data from a small number of experiments and may be limited to producing some of the commonly quoted statistics based on the differences between model calculations and the measurements. This paper presents efforts to develop a model evaluation system geared for both the objective statistical analysis and the more subjective visualization of the inter-relationships between a model's calculations and the appropriate field measurement data.

  19. Performance and Architecture Lab Modeling Tool

    SciTech Connect

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into sub problems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior

  20. Performance and Architecture Lab Modeling Tool

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into sub problems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program

  1. The Spiral-Interactive Program Evaluation Model.

    ERIC Educational Resources Information Center

    Khaleel, Ibrahim Adamu

    1988-01-01

    Describes the spiral interactive program evaluation model, which is designed to evaluate vocational-technical education programs in secondary schools in Nigeria. Program evaluation is defined; utility oriented and process oriented models for evaluation are described; and internal and external evaluative factors and variables that define each…

  2. Evaluation of a Mysis bioenergetics model

    USGS Publications Warehouse

    Chipps, S.R.; Bennett, D.H.

    2002-01-01

    Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10??C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
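
    The "decomposition of mean square error" mentioned above can be illustrated with the standard exact identity splitting MSE into bias, spread-mismatch, and random (imperfect-correlation) components; the observed/predicted values below are placeholders, and the authors' exact decomposition may differ in form.

```python
# MSE = (mean_p - mean_o)^2 + (sd_p - sd_o)^2 + 2*(1 - r)*sd_p*sd_o,
# using population standard deviations; the three terms sum exactly to MSE.
import numpy as np

obs  = np.array([1.2, 1.8, 2.3, 2.9, 3.6])   # e.g. observed consumption (placeholder)
pred = np.array([1.4, 1.7, 2.6, 3.1, 3.5])   # model predictions (placeholder)

mse = np.mean((pred - obs) ** 2)
bias2 = (pred.mean() - obs.mean()) ** 2
sp, so = pred.std(), obs.std()               # population standard deviations
r = np.corrcoef(pred, obs)[0, 1]
spread = (sp - so) ** 2
random_part = 2.0 * (1.0 - r) * sp * so

print(f"MSE={mse:.4f}  bias^2={bias2:.4f}  spread={spread:.4f}  "
      f"random={random_part:.4f}  sum={bias2 + spread + random_part:.4f}")
```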

  3. FLUORESCENT TRACER EVALUATION OF PROTECTIVE CLOTHING PERFORMANCE

    EPA Science Inventory

    Field studies evaluating chemical protective clothing (CPC), which is often employed as a primary control option to reduce occupational exposures during pesticide applications, are limited. This study, supported by the U.S. Environmental Protection Agency (EPA), was designed to...

  4. Human visual performance model for crewstation design

    NASA Astrophysics Data System (ADS)

    Larimer, James O.; Prevost, Michael P.; Arditi, Aries R.; Azueta, Steven; Bergen, James R.; Lubin, Jeffrey

    1991-08-01

    In a cockpit, the crewstation of an airplane, the ability of the pilot to unambiguously perceive rapidly changing information both internal and external to the crewstation is critical. To assess the impact of crewstation design decisions on the pilot's ability to perceive information, the designer needs a means of evaluating the trade-offs that result from different designs. The Visibility Modeling Tool (VMT) provides the designer with a CAD tool for assessing these trade-offs. It combines the technologies of computer graphics, computational geometry, human performance modeling and equipment modeling into a computer-based interactive design tool. Through a simple interactive interface, a designer can manipulate design parameters such as the geometry of the cockpit, environmental factors such as ambient lighting, pilot parameters such as point of regard and adaptation state, and equipment parameters such as the location of displays, their size and the contrast of displayed symbology. VMT provides an end-to-end analysis that answers questions such as 'Will the pilot be able to read the display?' Performance data can be projected, in the form of 3D contours, into the crewstation graphic model, providing the designer with a footprint of the operator's visual capabilities, defining, for example, the regions in which fonts of a particular type, size and contrast can be read without error. Geometrical data such as the pilot's volume field of view, occlusions caused by facial geometry, helmet margins, and objects in the crewstation can also be projected into the crewstation graphic model with respect to the coordinates of the aviator's eyes and fixation point. The intersections of the projections with objects in the crewstation delineate the area of coverage, masking, or occlusion associated with the objects. Objects in the crewstation space can be projected onto models of the operator's retinas. These projections can be used to provide the designer with the

  5. Evaluating Performances of Solar-Energy Systems

    NASA Technical Reports Server (NTRS)

    Jaffe, L. D.

    1987-01-01

    The CONC11 computer program calculates the performance of dish-type solar thermal collectors and power systems. A solar thermal power system consists of one or more collectors, power-conversion subsystems, and power-processing subsystems. CONC11 is intended to aid the system designer in comparing the performance of various design alternatives. Written in Athena FORTRAN and Assembler.

  6. Performance Evaluation of the NASA/KSC Transmission System

    NASA Technical Reports Server (NTRS)

    Christensen, Kenneth J.

    2000-01-01

    NASA-KSC currently uses three bridged 100-Mbps FDDI segments as its backbone for data traffic. The FDDI Transmission System (FTXS) connects the KSC industrial area, KSC launch complex 39 area, and the Cape Canaveral Air Force Station. The report presents a performance modeling study of the FTXS and the proposed ATM Transmission System (ATXS). The focus of the study is on performance of MPEG video transmission on these networks. Commercial modeling tools - the CACI Predictor and Comnet tools - were used. In addition, custom software tools were developed to characterize conversation pairs in Sniffer trace (capture) files to use as input to these tools. A baseline study of both non-launch and launch day data traffic on the FTXS is presented. MPEG-1 and MPEG-2 video traffic was characterized and the shaping of it evaluated. It is shown that the characteristics of a video stream have a direct effect on its performance in a network. It is also shown that shaping of video streams is necessary to prevent overflow losses and resulting poor video quality. The developed models can be used to predict when the existing FTXS will 'run out of room' and for optimizing the parameters of ATM links used for transmission of MPEG video. Future work with these models can provide useful input and validation to set-top box projects within the Advanced Networks Development group in NASA-KSC Development Engineering.

  7. Performance modeling of earth resources remote sensors

    NASA Technical Reports Server (NTRS)

    Kidd, R. H.; Wolfe, R. H.

    1976-01-01

    A technique is presented for constructing a mathematical model of an earth resources remote sensor. The technique combines established models of electronic and optical components with formulated models of scan and vibration effects, and it includes a model of the radiation effects of the earth's atmosphere. The resulting composite model is useful for predicting in-flight sensor performance, and a descriptive set of performance parameters is derived in terms of the model. A method is outlined for validating the model for each sensor of interest. The validation for one airborne infrared scanning system is accomplished in part by a satisfactory comparison of predicted response with laboratory data for that sensor.

  8. Building China's municipal healthcare performance evaluation system: a Tuscan perspective.

    PubMed

    Li, Hao; Barsanti, Sara; Bonini, Anna

    2012-08-01

    Regional healthcare performance evaluation systems can help optimize healthcare resources on a regional basis and improve the performance of healthcare services provided. The Tuscany region in Italy is a good example of an institution which meets these requirements. China has yet to build such a system based on international experience. In this paper, based on comparative studies between Tuscany and China, we propose that the managing institutions in China's experimental cities can select and commission a third-party agency to, respectively, evaluate the performance of their affiliated hospitals and community health service centers. Following some features of the Tuscan experience, the Chinese municipal healthcare performance evaluation system can be built by focusing on the selection of an appropriate performance evaluation agency, the design of an adequate performance evaluation mechanism and the formulation of a complete set of laws, rules and regulations. When a performance evaluation system at city level is formed, the provincial government can extend the successful experience to other cities.

  9. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method

    PubMed Central

    Chaharsooghi, S. K.; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in research. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management with social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended model of the TBL approach in the supply chain by presenting a fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers and Neofuzzy TOPSIS is proposed for finding the best solution of the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability in the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with sensitivity analysis. PMID:27379267
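
    For orientation, the core (crisp) TOPSIS ranking mechanics behind such a method are sketched below; the fuzzy/Neofuzzy extension of the paper is not reproduced, and the decision matrix, weights, and criterion directions are invented for illustration.

```python
# Crisp TOPSIS sketch: normalize and weight the decision matrix, find the
# ideal and anti-ideal alternatives, and rank by relative closeness.
import numpy as np

# rows = suppliers, columns = criteria (e.g. cost, quality, emissions, social score)
X = np.array([[250., 0.92, 30., 7.],
              [230., 0.85, 45., 6.],
              [270., 0.95, 25., 8.]])
weights = np.array([0.3, 0.3, 0.2, 0.2])
benefit = np.array([False, True, False, True])   # True = larger is better

V = weights * X / np.linalg.norm(X, axis=0)      # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_best = np.linalg.norm(V - ideal, axis=1)
d_worst = np.linalg.norm(V - anti, axis=1)
closeness = d_worst / (d_best + d_worst)         # higher = better supplier

print("closeness coefficients:", np.round(closeness, 3))
print("ranking (best first):", np.argsort(-closeness) + 1)
```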

  10. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method.

    PubMed

    Chaharsooghi, S K; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in research. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management with social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended model of the TBL approach in the supply chain by presenting a fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers and Neofuzzy TOPSIS is proposed for finding the best solution of the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability in the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with sensitivity analysis.

  11. 13 CFR 306.7 - Performance evaluations of University Centers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Performance evaluations of..., DEPARTMENT OF COMMERCE TRAINING, RESEARCH AND TECHNICAL ASSISTANCE INVESTMENTS University Center Economic Development Program § 306.7 Performance evaluations of University Centers. (a) EDA will: (1) Evaluate...

  12. 48 CFR 1536.201 - Evaluation of contracting performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Contracting for Construction 1536.201 Evaluation of contracting performance. (a) The Contracting Officer will... will file the form in the contractor performance evaluation files which it maintains. (e) The Quality... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Evaluation of...

  13. 48 CFR 2936.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Construction 2936.201 Evaluation of contractor performance. The HCA must establish procedures to evaluate... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Evaluation of contractor performance. 2936.201 Section 2936.201 Federal Acquisition Regulations System DEPARTMENT OF LABOR...

  14. 48 CFR 36.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Contracting for Construction 36.201 Evaluation of contractor performance. See 42.1502(e) for the requirements for preparing past performance evaluations for construction contracts. ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Evaluation of...

  15. 13 CFR 306.7 - Performance evaluations of University Centers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Performance evaluations of University Centers. 306.7 Section 306.7 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION... Development Program § 306.7 Performance evaluations of University Centers. (a) EDA will: (1) Evaluate...

  16. Team Primacy Concept (TPC) Based Employee Evaluation and Job Performance

    ERIC Educational Resources Information Center

    Muniute, Eivina I.; Alfred, Mary V.

    2007-01-01

    This qualitative study explored how employees learn from Team Primacy Concept (TPC) based employee evaluation and how they use the feedback in performing their jobs. TPC based evaluation is a form of multirater evaluation, during which the employee's performance is discussed by one's peers in a face-to-face team setting. The study used Kolb's…

  17. EVALUATION OF VENTILATION PERFORMANCE FOR INDOOR SPACE

    EPA Science Inventory

    The paper discusses a personal-computer-based application of computational fluid dynamics that can be used to determine the turbulent flow field and time-dependent/steady-state contaminant concentration distributions within isothermal indoor space. (NOTE: Ventilation performance ...

  18. Evaluation of performance impairment by spacecraft contaminants

    NASA Technical Reports Server (NTRS)

    Geller, I.; Hartman, R. J., Jr.; Mendez, V. M.

    1977-01-01

    The environmental contaminants (isolated as off-gases in Skylab and Apollo missions) were evaluated. Specifically, six contaminants were evaluated for their effects on the behavior of juvenile baboons. The concentrations of contaminants were determined through preliminary range-finding studies with laboratory rats. The contaminants evaluated were acetone, methyl ethyl ketone (MEK), methyl isobutyl ketone (MIBK), trichloroethylene (TCE), heptane and Freon 21. When the studies of the individual gases were completed, the baboons were also exposed to a mixture of MEK and TCE. The data obtained revealed alterations in the behavior of baboons exposed to relatively low levels of the contaminants. These findings were presented at the First International Symposium on Voluntary Inhalation of Industrial Solvents in Mexico City, June 21-24, 1976. A preprint of the proceedings is included.

  19. 24 CFR 968.330 - PHA performance and evaluation report.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false PHA performance and evaluation... 250 or More Public Housing Units) § 968.330 PHA performance and evaluation report. For any FFY in which a PHA has received assistance under this subpart, the PHA shall submit a Performance...

  20. Modeling ITER ECH Waveguide Performance

    NASA Astrophysics Data System (ADS)

    Kaufman, M. C.; Lau, C. H.

    2014-10-01

    There are stringent requirements for mode purity and for on-target power as a percentage of source power for the ECH transmission lines on ITER. The design goal is less than 10% total power loss through the line and 95% HE11 mode at the diamond window. The dominant loss mechanism is mode conversion (MC) into higher order modes, and to maintain mode purity, these losses must be minimized. Miter bends and waveguide curvature are major sources of mode conversion. This work uses a code which calculates the mode conversion and attenuation of an arbitrary set of polarized waveguide modes in circular corrugated waveguide with non-zero axial curvature and miter bends. The transmission line is modeled as a structural beam with deformations due to misalignment of waveguide supports, tilts at the interfaces between waveguide sections, gravitational loading, and the extrusion and fabrication process. As these sources of curvature are statistical in nature, the resulting MC losses are found via Monte Carlo modeling. The results of this analysis will provide design guidance for waveguide support span lengths, requirements for minimum alignment offsets, and requirements for waveguide fabrication and quality control.

  1. A Perspective on Computational Human Performance Models as Design Tools

    NASA Technical Reports Server (NTRS)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  2. Summary of photovoltaic system performance models

    NASA Technical Reports Server (NTRS)

    Smith, J. H.; Reiter, L. J.

    1984-01-01

    A detailed overview of photovoltaics (PV) performance modeling capabilities developed for analyzing PV system and component design and policy issues is provided. A set of 10 performance models are selected which span a representative range of capabilities from generalized first order calculations to highly specialized electrical network simulations. A set of performance modeling topics and characteristics is defined and used to examine some of the major issues associated with photovoltaic performance modeling. Each of the models is described in the context of these topics and characteristics to assess its purpose, approach, and level of detail. The issues are discussed in terms of the range of model capabilities available and summarized in tabular form for quick reference. The models are grouped into categories to illustrate their purposes and perspectives.

  3. Summary of photovoltaic system performance models

    SciTech Connect

    Smith, J. H.; Reiter, L. J.

    1984-01-15

    The purpose of this study is to provide a detailed overview of photovoltaics (PV) performance modeling capabilities that have been developed during recent years for analyzing PV system and component design and policy issues. A set of 10 performance models have been selected which span a representative range of capabilities from generalized first-order calculations to highly specialized electrical network simulations. A set of performance modeling topics and characteristics is defined and used to examine some of the major issues associated with photovoltaic performance modeling. Next, each of the models is described in the context of these topics and characteristics to assess its purpose, approach, and level of detail. Then each of the issues is discussed in terms of the range of model capabilities available and summarized in tabular form for quick reference. Finally, the models are grouped into categories to illustrate their purposes and perspectives.

  4. Evaluation of Learning Performance of E-Learning in China: A Methodology Based on Change of Internal Mental Model of Learners

    ERIC Educational Resources Information Center

    Zhang, Lingxian; Zhang, Xiaoshuan; Duan, Yanqing; Fu, Zetian; Wang, Yanwei

    2010-01-01

    This paper presents a method of assessment on how Human-Computer Interaction (HCI) and animation influence the psychological process of learning by comparing a traditional web design course and an e-learning web design course, based on the Change of Internal Mental Model of Learners. We constructed the e-learning course based on Gagne's learning…

  5. Evaluation of genome-enabled selection for bacterial cold water disease resistance using progeny performance data in Rainbow Trout: Insights on genotyping methods and genomic prediction models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Bacterial cold water disease (BCWD) causes significant economic losses in salmonid aquaculture, and traditional family-based breeding programs aimed at improving BCWD resistance have been limited to exploiting only between-family variation. We used genomic selection (GS) models to predict genomic br...

  6. A COMPREHENSIVE EVALUATION OF THE ETA-CMAQ FORECAST MODEL PERFORMANCE FOR O3, ITS RELATED PRECURSORS, AND METEOROLOGICAL PARAMETERS DURING THE 2004 ICARTT STUDY

    EPA Science Inventory

    In this study, the ability of the Eta-CMAQ forecast model to represent the vertical profiles of O3, related chemical species (CO, NO, NO2, H2O2, CH2O, HNO3, SO2, PAN, isoprene, toluene), and meteorological paramete...

  7. Comprehensive Evaluation Model for Nursing Education.

    ERIC Educational Resources Information Center

    Reed, Suellen B.; Riley, William

    1979-01-01

    The comprehensive model for evaluating nursing education programs is described in terms of what is evaluated; who conducts the evaluation; and why it is conducted. A structure for further action and decision making is also presented. (GDC)

  8. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Ulbrich, Uwe; Cubasch, Ulrich

    2015-04-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system offers HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis, and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers, and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or the web system. Plugged-in tools therefore gain transparency and reproducibility automatically. Furthermore, when configurations match while starting an evaluation tool, the system suggests using results already produced
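
    The language-agnostic plugin API mentioned above is not specified in the abstract. Purely as an illustration of the idea, the sketch below wraps an arbitrary external analysis tool behind a small Python interface and records each run so that configurations and results could later be shared and reproduced; every class, path, and parameter name here is invented.

```python
import json
import subprocess
from dataclasses import dataclass

@dataclass
class ToolPlugin:
    """Hypothetical wrapper: any executable becomes an evaluation-system tool."""
    name: str
    command: str          # script or binary written in any language

    def run(self, config: dict) -> dict:
        """Run the tool with a JSON configuration and return a run record."""
        result = subprocess.run(
            [self.command, json.dumps(config)],
            capture_output=True, text=True, check=True,
        )
        record = {"tool": self.name, "config": config, "stdout": result.stdout}
        # A real system would persist this record (e.g. to a history database)
        # so that analyses stay transparent and reproducible.
        return record

# Usage sketch (paths and parameters are invented):
# plugin = ToolPlugin(name="bias_map", command="./tools/bias_map.sh")
# plugin.run({"model": "MPI-ESM-LR", "variable": "tas", "period": "1961-1990"})
```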

  9. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kunst, O.; Cubasch, U.

    2014-12-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system offers HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis, and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers, and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or the web system. Plugged-in tools therefore gain transparency and reproducibility automatically. Furthermore, when configurations match while starting an evaluation tool, the system suggests using results already produced

  10. Activity-Based Costing Model for Assessing Economic Performance.

    ERIC Educational Resources Information Center

    DeHayes, Daniel W.; Lovrinic, Joseph G.

    1994-01-01

    An economic model for evaluating the cost performance of academic and administrative programs in higher education is described. Examples from its application at Indiana University-Purdue University Indianapolis are used to illustrate how the model has been used to control costs and reengineer processes. (Author/MSE)

  11. Development of the "performance competence evaluation measure": assessing qualitative aspects of dance performance.

    PubMed

    Krasnow, Donna; Chatfield, Steven J

    2009-01-01

    The aim of this study was to develop a measurement tool, the "Performance Competence Evaluation Measure" (PCEM), for the evaluation of qualitative aspects of dance performance. The project had two phases. In the first phase a literature review was conducted to examine (1) the previous development of similar measurement tools, (2) descriptions of dance technique and dance performance applicable to the development of a qualitative measurement tool, and (3) theoretical models from somatic practices that evaluate and assess qualitative aspects of movement and dance activity. The second phase involved the development of a system for using PCEM, and testing its validity and reliability. Three judges from the professional dance community volunteered to test PCEM with a sample of 20 subjects from low-intermediate to advanced classes at a university dance program. The subjects learned a dance combination and were videotaped performing it on two separate occasions, eight weeks apart. The judges reviewed the videos in random order. Logical validity of PCEM was established through assessment by two faculty members of the university dance department and the three judges. Intra-rater and inter-rater reliability demonstrated correlation coefficients of 0.95 and 0.94, respectively. It was concluded that PCEM can serve as a useful measurement tool for future dance science research.

  12. Evaluating the performance of land surface model ORCHIDEE-CAN v1.0 on water and energy flux estimation with a single- and multi-layer energy budget scheme

    NASA Astrophysics Data System (ADS)

    Chen, Yiying; Ryder, James; Bastrikov, Vladislav; McGrath, Matthew J.; Naudts, Kim; Otto, Juliane; Ottlé, Catherine; Peylin, Philippe; Polcher, Jan; Valade, Aude; Black, Andrew; Elbers, Jan A.; Moors, Eddy; Foken, Thomas; van Gorsel, Eva; Haverd, Vanessa; Heinesch, Bernard; Tiedemann, Frank; Knohl, Alexander; Launiainen, Samuli; Loustau, Denis; Ogée, Jérôme; Vessala, Timo; Luyssaert, Sebastiaan

    2016-09-01

    Canopy structure is one of the most important vegetation characteristics for land-atmosphere interactions, as it determines the energy and scalar exchanges between the land surface and the overlying air mass. In this study we evaluated the performance of a newly developed multi-layer energy budget in the ORCHIDEE-CAN v1.0 land surface model (Organising Carbon and Hydrology In Dynamic Ecosystems - CANopy), which simulates canopy structure and can be coupled to an atmospheric model using an implicit coupling procedure. We aim to provide a set of acceptable parameter values for a range of forest types. Top-canopy and sub-canopy flux observations from eight sites were collected in order to conduct this evaluation. The sites spanned climate zones from temperate to boreal and the vegetation types included deciduous, evergreen broad-leaved and evergreen needle-leaved forest with a maximum leaf area index (LAI; all-sided) ranging from 3.5 to 7.0. The parametrization approach proposed in this study was based on three selected physical processes - namely the diffusion, advection, and turbulent mixing within the canopy. Short-term sub-canopy observations and long-term surface fluxes were used to calibrate the parameters in the sub-canopy radiation, turbulence, and resistance modules with an automatic tuning process. The multi-layer model was found to capture the dynamics of sub-canopy turbulence, temperature, and energy fluxes. The performance of the new multi-layer model was further compared against the existing single-layer model. Although the multi-layer model simulation results showed little or no improvement in either the nighttime energy balance or the energy partitioning during winter compared with a single-layer model simulation, the increased model complexity does provide a more detailed description of the canopy micrometeorology of various forest types. The multi-layer model links to potential future environmental and ecological studies such as the assessment of in

  13. Performance evaluation of 1 kW PEFC

    SciTech Connect

    Komaki, Hideaki; Tsuchiyama, Syozo

    1996-12-31

    This report covers part of a joint study on a PEFC propulsion system for surface ships, summarized in a presentation to this Seminar entitled "Study on a PEFC Propulsion System for Surface Ships", which envisages application to a 1,500 DWT cargo vessel. The aspect treated here concerns the effects brought on PEFC operating performance by conditions particular to shipboard operation. The performance characteristics were examined through tests performed on a 1 kW stack and on a single cell (manufactured by Fuji Electric Co., Ltd.). The tests covered the items (1) to (4) cited in the headings of the sections that follow. Specifications of the stack and single cell are as given.

  14. Evaluating the factor structure of the Psychological Performance Inventory.

    PubMed

    Golby, Jim; Sheard, Michael; van Wersch, Anna

    2007-08-01

    This study assesses the construct validity of a measure of mental toughness, Loehr's Psychological Performance Inventory. Performers (N = 408; 303 men, 105 women; M age = 24.0 yr., SD = 6.7) were drawn from eight sports (artistic rollerskating, basketball, canoeing, golf, rugby league, rugby union, soccer, swimming) and competed at international, national, county and provincial, or club and regional standards. They completed the 42-item Psychological Performance Inventory during training camps. Principal components analysis provided minimal support for the factor structure. Instead, the exploratory analysis yielded a 4-factor, 14-item model (PPI-A). A single factor underlying mental toughness (G(MT)) was identified with higher-order exploratory factor analysis using the Schmid-Leiman procedure. Psychometric analysis of the model, using confirmatory analysis techniques, fitted the data well. The model collectively satisfied absolute and incremental fit index benchmarks, and the inventory possesses satisfactory psychometric properties, with adequate reliability and convergent and discriminant validity. The results lend preliminary support to the factorial validity and reliability of the model; however, further investigation of its stability is required before recommending that practitioners use changes in scores as an index for evaluating effects of training in psychological skills.
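
    The factor-analytic workflow summarized above (principal components extraction followed by retention of a reduced model) can be illustrated with a generic sketch. The data below are synthetic stand-ins for a respondents-by-items score matrix; nothing here reproduces the PPI data or the study's exact extraction and rotation choices.

```python
import numpy as np

# Synthetic stand-in for a respondents-by-items score matrix (e.g. 408 x 42).
rng = np.random.default_rng(2)
n_respondents, n_items = 408, 42
latent = rng.normal(size=(n_respondents, 4))          # 4 hypothetical factors
loadings = rng.normal(scale=0.7, size=(4, n_items))
scores = latent @ loadings + rng.normal(size=(n_respondents, n_items))

# Principal components of the item correlation matrix; the number of
# components retained (here 4) would normally be guided by eigenvalues,
# scree inspection, and interpretability.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
explained = eigvals[order][:4] / eigvals.sum()
print("variance explained by first 4 components:", np.round(explained, 3))
```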

  15. Performance Evaluation Gravity Probe B Design

    NASA Technical Reports Server (NTRS)

    Francis, Ronnie; Wells, Eugene M.

    1996-01-01

    This final report documents the work done to develop a 6 degree-of-freedom simulation of the Lockheed Martin Gravity Probe B (GPB) Spacecraft. This simulation includes the effects of vehicle flexibility and propellant slosh. The simulation was used to investigate the control performance of the spacecraft when subjected to realistic on orbit disturbances.

  16. Game Performance Evaluation in Male Goalball Players

    PubMed Central

    Molik, Bartosz; Morgulec-Adamowicz, Natalia; Kosmol, Andrzej; Perkowski, Krzysztof; Bednarczuk, Grzegorz; Skowroński, Waldemar; Gomez, Miguel Angel; Koc, Krzysztof; Rutkowska, Izabela; Szyman, Robert J

    2015-01-01

    Goalball is a Paralympic sport exclusively for athletes who are visually impaired and blind. The aims of this study were twofold: to describe game performance of elite male goalball players based upon the degree of visual impairment, and to determine if game performance was related to anthropometric characteristics of elite male goalball players. The study sample consisted of 44 male goalball athletes. A total of 38 games were recorded during the Summer Paralympic Games in London 2012. Observations were reported using the Game Efficiency Sheet for Goalball. Additional anthropometric measurements included body mass (kg), body height (cm), the arm span (cm) and length of the body in the defensive position (cm). Comparison of the two groups showed that the players with total blindness obtained higher means than the players with visual impairment for game indicators such as the sum of defense (p = 0.03) and the sum of good defense (p = 0.04). The players with visual impairment obtained higher results than those with total blindness for attack efficiency (p = 0.04), the sum of penalty defenses (p = 0.01), and fouls (p = 0.01). The study showed that athletes with blindness demonstrated higher game performance in defense, whereas athletes with visual impairment presented higher efficiency in offensive actions. The analyses confirmed that body mass, body height, the arm span and length of the body in the defensive position did not differentiate players' performance at the elite level. PMID:26834872

  17. Using Ratio Analysis to Evaluate Financial Performance.

    ERIC Educational Resources Information Center

    Minter, John; And Others

    1982-01-01

    The ways in which ratio analysis can help in long-range planning, budgeting, and asset management to strengthen financial performance and help avoid financial difficulties are explained. Types of ratios considered include balance sheet ratios, net operating ratios, and contribution and demand ratios. (MSE)

  18. NREL Evaluates Performance of Hydraulic Hybrid Refuse Vehicles

    SciTech Connect

    2015-09-01

    This highlight describes NREL's evaluation of the in-service performance of 10 next-generation hydraulic hybrid refuse vehicles (HHVs), 8 previous-generation (model year 2013) HHVs, and 8 comparable conventional diesel vehicles operated by Miami-Dade County's Public Works and Waste Management Department in southern Florida. Launched in March 2015, the on-road portion of this 12-month evaluation focuses on collecting and analyzing vehicle performance data - fuel economy, maintenance costs, and drive cycles - from the HHVs and the conventional diesel vehicles. The fuel economy of heavy-duty vehicles, such as refuse trucks, is largely dependent on the load carried and the drive cycles on which they operate. In the right applications, HHVs offer a potential fuel-cost advantage over their conventional counterparts. This advantage is contingent, however, on driving behavior and drive cycles with high kinetic intensity that take advantage of regenerative braking. NREL's evaluation will assess the performance of this technology in commercial operation and help Miami-Dade County determine the ideal routes for maximizing the fuel-saving potential of its HHVs. Based on the field data, NREL will develop a validated vehicle model using the Future Automotive Systems Technology Simulator, also known as FASTSim, to study the impacts of route selection and other vehicle parameters. NREL is also analyzing fueling and maintenance data to support total-cost-of-ownership estimations and forecasts. The study aims to improve understanding of the overall usage and effectiveness of HHVs in refuse operation compared to similar conventional vehicles and to provide unbiased technical information to interested stakeholders.

  19. Evaluating Suit Fit Using Performance Degradation

    NASA Technical Reports Server (NTRS)

    Margerum, Sarah E.; Cowley, Matthew; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar

    2012-01-01

    The Mark III planetary technology demonstrator space suit can be tailored to an individual by swapping the modular components of the suit, such as the arms, legs, and gloves, as well as adding or removing sizing inserts in key areas. A method was sought to identify the transition from an ideal suit fit to a bad fit and to quantify this breakdown using a mobility-based human performance metric. To this end, the degradation of the range of motion of the elbow and wrist of the suit as a function of suit sizing modifications was investigated in an attempt to improve suit fit. The sizing range tested spanned optimal and poor fit and was adjusted incrementally in order to compare each joint angle across five different sizing configurations. Suited range of motion data were collected using a motion capture system for nine isolated and functional tasks utilizing the elbow and wrist joints. A total of four subjects were tested with motions involving both arms simultaneously as well as the right arm by itself. Findings indicated that no single joint drives the performance of the arm as a function of suit size; instead it is based on the interaction of multiple joints along a limb. To determine a size adjustment range within which an individual can operate the suit at an acceptable level, a performance detriment limit was set. This user-selected limit reveals the task-dependent tolerance of the suit fit around optimal size. For example, the isolated joint motion indicated that the suit can deviate from optimal by as little as -0.6 in to -2.6 in before experiencing a 10% performance drop in the wrist or elbow joint. The study identified a preliminary method to quantify the impact of size on performance and developed a new way to gauge tolerances around optimal size.

  20. An hierarchical approach to performance evaluation of expert systems

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kavi, Srinu

    1985-01-01

    The number and size of expert systems are growing rapidly. Formal evaluation of these systems, which is not performed for many of them, increases their acceptability to the user community and hence their success. The hierarchical evaluation approach that has been used for computer systems is applied here to expert system performance evaluation. Expert systems are also evaluated by treating them as software systems (or programs). This paper reports many of the basic concepts and ideas in the Performance Evaluation of Expert Systems Study being conducted at the University of Southwestern Louisiana.

  1. CTBT Integrated Verification System Evaluation Model

    SciTech Connect

    Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.

    1997-10-01

    Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the US Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.
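
    IVSEM's actual equations are not reproduced in this record. As a minimal illustration of how subsystem results might be rolled up into an integrated probability of detection, the sketch below assumes the four technologies detect independently; the subsystem values are placeholders, not IVSEM outputs.

```python
# Hypothetical subsystem detection probabilities for one event scenario
# (values are placeholders, not IVSEM results).
p_subsystem = {
    "seismic": 0.80,
    "infrasound": 0.35,
    "radionuclide": 0.50,
    "hydroacoustic": 0.10,
}

# Assuming the technologies detect independently, the integrated system
# detects the event if at least one subsystem does.
p_miss_all = 1.0
for p in p_subsystem.values():
    p_miss_all *= (1.0 - p)
p_integrated = 1.0 - p_miss_all

print(f"integrated probability of detection: {p_integrated:.3f}")  # 0.942
```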

  2. Performance Evaluation of Resource Management in Cloud Computing Environments.

    PubMed

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.
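
    As a toy illustration of the kind of on-the-fly resource scaling with a corresponding price change described above (not the module evaluated in the paper), the sketch below grows or shrinks a virtual-machine allocation against an assumed SLA response-time target; all thresholds, prices, and names are invented.

```python
from dataclasses import dataclass

@dataclass
class Allocation:
    vcpus: int
    price_per_hour: float   # assumed linear pricing, for illustration only

SLA_RESPONSE_MS = 200.0     # target taken from the service level agreement (assumed)
PRICE_PER_VCPU = 0.05       # assumed unit price per vCPU-hour

def rescale(alloc: Allocation, measured_response_ms: float) -> Allocation:
    """Toy on-the-fly scaling rule: grow when the SLA is violated,
    shrink when there is ample headroom, and reprice accordingly."""
    vcpus = alloc.vcpus
    if measured_response_ms > SLA_RESPONSE_MS:
        vcpus += 1
    elif measured_response_ms < 0.5 * SLA_RESPONSE_MS and vcpus > 1:
        vcpus -= 1
    return Allocation(vcpus=vcpus, price_per_hour=vcpus * PRICE_PER_VCPU)

# Usage sketch: feed in monitored response times, one per control interval.
alloc = Allocation(vcpus=2, price_per_hour=0.10)
for rt in [180.0, 250.0, 260.0, 150.0, 80.0]:
    alloc = rescale(alloc, rt)
    print(alloc)
```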

  3. Performance Evaluation of Resource Management in Cloud Computing Environments

    PubMed Central

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price. PMID:26555730

  5. Optical Performance Modeling of FUSE Telescope Mirror

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Ohl, Raymond G.; Friedman, Scott D.; Moos, H. Warren

    2000-01-01

    We describe the Metrology Data Processor (METDAT), the Optical Surface Analysis Code (OSAC), and their application to the image evaluation of the Far Ultraviolet Spectroscopic Explorer (FUSE) mirrors. The FUSE instrument, designed and developed by the Johns Hopkins University and launched in June 1999, is an astrophysics satellite which provides high resolution spectra (λ/Δλ = 20,000 - 25,000) in the wavelength region from 90.5 to 118.7 nm. The FUSE instrument comprises four co-aligned, normal incidence, off-axis parabolic mirrors, four Rowland circle spectrograph channels with holographic gratings, and delay line microchannel plate detectors. The OSAC code provides a comprehensive analysis of optical system performance, including the effects of optical surface misalignments, low spatial frequency deformations described by discrete polynomial terms, mid- and high-spatial frequency deformations (surface roughness), and diffraction due to the finite size of the aperture. Both normal incidence (traditionally infrared, visible, and near ultraviolet mirror systems) and grazing incidence (x-ray mirror systems) systems can be analyzed. The code also properly accounts for reflectance losses on the mirror surfaces. Low frequency surface errors are described in OSAC by using Zernike polynomials for normal incidence mirrors and Legendre-Fourier polynomials for grazing incidence mirrors. The scatter analysis of the mirror is based on scalar scatter theory. The program accepts simple autocovariance (ACV) function models or power spectral density (PSD) models derived from mirror surface metrology data as input to the scatter calculation. The end product of the program is a user-defined pixel array containing the system Point Spread Function (PSF). The METDAT routine is used in conjunction with the OSAC program. This code reads in laboratory metrology data in a normalized format. The code then fits the data using Zernike polynomials for normal incidence
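
    Neither METDAT nor OSAC is reproduced here, but the low-frequency part of such an analysis, fitting low-order Zernike terms to surface metrology and passing the residual on to the roughness/scatter treatment, can be sketched generically. The example below uses synthetic data, an unnormalized Zernike convention, and an ordinary least-squares fit; all of these are assumptions made for illustration.

```python
import numpy as np

def zernike_basis(rho, theta):
    """Low-order Zernike terms on the unit disk (unnormalized convention):
    piston, x-tilt, y-tilt, defocus, and the two astigmatism terms."""
    return np.column_stack([
        np.ones_like(rho),           # piston
        rho * np.cos(theta),         # tilt x
        rho * np.sin(theta),         # tilt y
        2.0 * rho**2 - 1.0,          # defocus
        rho**2 * np.cos(2 * theta),  # astigmatism 0/90
        rho**2 * np.sin(2 * theta),  # astigmatism 45
    ])

# Synthetic "metrology" data standing in for measured surface heights.
rng = np.random.default_rng(1)
rho = np.sqrt(rng.uniform(0, 1, 2000))
theta = rng.uniform(0, 2 * np.pi, 2000)
true_coeffs = np.array([0.0, 5.0, -3.0, 20.0, 2.0, -1.0])   # nm, arbitrary
surface = zernike_basis(rho, theta) @ true_coeffs + rng.normal(0, 0.5, rho.size)

# Least-squares fit of the low-frequency figure error; the residual would be
# handed to the mid/high-frequency (scatter) part of the analysis.
coeffs, *_ = np.linalg.lstsq(zernike_basis(rho, theta), surface, rcond=None)
residual_rms = np.std(surface - zernike_basis(rho, theta) @ coeffs)
print("fitted coefficients (nm):", np.round(coeffs, 2))
print(f"residual RMS (nm): {residual_rms:.2f}")
```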

  6. The Discrepancy Evaluation Model: A Systematic Approach for the Evaluation of Career Planning and Placement Programs.

    ERIC Educational Resources Information Center

    Buttram, Joan L.; Covert, Robert W.

    The Discrepancy Evaluation Model (DEM), developed in 1966 by Malcolm Provus, provides information for program assessment and program improvement. Under the DEM, evaluation is defined as the comparison of an actual performance to a desired standard. The DEM embodies five stages of evaluation based upon a program's natural development: program…

  7. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    SciTech Connect

    Tidball, Rick; Bluestein, Joel; Rodriguez, Nick; Knoke, Stu

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
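
    The data sets above report levelized cost of energy alongside the raw cost and performance parameters. The exact LCOE conventions used by each model are not given in this record; one common simplified formulation annualizes capital cost with a capital recovery factor and adds fixed O&M, variable O&M, and fuel cost per unit of generation, as sketched below with invented inputs.

```python
def capital_recovery_factor(rate: float, years: int) -> float:
    """Annualize an up-front cost over `years` at discount rate `rate`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capital_per_kw, fixed_om_per_kw_yr, variable_om_per_mwh,
         heat_rate_btu_per_kwh, fuel_per_mmbtu, capacity_factor,
         rate=0.07, lifetime_yr=30):
    """Simplified levelized cost of energy in $/MWh (all inputs are placeholders)."""
    crf = capital_recovery_factor(rate, lifetime_yr)
    mwh_per_kw_yr = 8760 * capacity_factor / 1000.0
    annualized_capital = crf * capital_per_kw / mwh_per_kw_yr
    fixed_om = fixed_om_per_kw_yr / mwh_per_kw_yr
    fuel = heat_rate_btu_per_kwh * fuel_per_mmbtu / 1000.0   # $/MWh
    return annualized_capital + fixed_om + variable_om_per_mwh + fuel

# Example with invented gas combined-cycle-like numbers: roughly 44 $/MWh.
print(f"{lcoe(1000, 15, 3.5, 7000, 4.0, 0.85):.1f} $/MWh")
```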

  8. Evaluation of A Novel Split-Feeding Anaerobic/Oxic Baffled Reactor (A/OBR) For Foodwaste Anaerobic Digestate: Performance, Modeling and Bacterial Community

    PubMed Central

    Wang, Shaojie; Peng, Liyu; Jiang, Yixin; Gikas, Petros; Zhu, Baoning; Su, Haijia

    2016-01-01

    To enhance the treatment efficiency from an anaerobic digester, a novel six-compartment anaerobic/oxic baffled reactor (A/OBR) was employed. Two kinds of split-feeding A/OBRs R2 and R3, with influent fed in the 1st, 3rd and 5th compartment of the reactor simultaneously at the respective ratios of 6:3:1 and 6:2:2, were compared with the regular-feeding reactor R1 when all influent was fed in the 1st compartment (control). Three aspects, the COD removal, the hydraulic characteristics and the bacterial community, were systematically investigated, compared and evaluated. The results indicated that R2 and R3 had similar tolerance to loading shock, but the R2 had the highest COD removal of 91.6% with a final effluent of 345 mg/L. The mixing patterns in both split-feeding reactors were intermediate between plug-flow and completely-mixed, with dead spaces between 8.17% and 8.35% compared with a 31.9% dead space in R1. Polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) analysis revealed that the split-feeding strategy provided a higher bacterial diversity and more stable bacterial community than that in the regular-feeding strategy. Further analysis indicated that Firmicutes, Bacteroidetes, and Proteobacteria were the dominant bacteria, among which Firmicutes and Bacteroidetes might be responsible for organic matter degradation and Proteobacteria for nitrification and denitrification. PMID:27708368

  9. Evaluation of A Novel Split-Feeding Anaerobic/Oxic Baffled Reactor (A/OBR) For Foodwaste Anaerobic Digestate: Performance, Modeling and Bacterial Community

    NASA Astrophysics Data System (ADS)

    Wang, Shaojie; Peng, Liyu; Jiang, Yixin; Gikas, Petros; Zhu, Baoning; Su, Haijia

    2016-10-01

    To enhance the treatment efficiency from an anaerobic digester, a novel six-compartment anaerobic/oxic baffled reactor (A/OBR) was employed. Two kinds of split-feeding A/OBRs R2 and R3, with influent fed in the 1st, 3rd and 5th compartment of the reactor simultaneously at the respective ratios of 6:3:1 and 6:2:2, were compared with the regular-feeding reactor R1 when all influent was fed in the 1st compartment (control). Three aspects, the COD removal, the hydraulic characteristics and the bacterial community, were systematically investigated, compared and evaluated. The results indicated that R2 and R3 had similar tolerance to loading shock, but the R2 had the highest COD removal of 91.6% with a final effluent of 345 mg/L. The mixing patterns in both split-feeding reactors were intermediate between plug-flow and completely-mixed, with dead spaces between 8.17% and 8.35% compared with a 31.9% dead space in R1. Polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) analysis revealed that the split-feeding strategy provided a higher bacterial diversity and more stable bacterial community than that in the regular-feeding strategy. Further analysis indicated that Firmicutes, Bacteroidetes, and Proteobacteria were the dominant bacteria, among which Firmicutes and Bacteroidetes might be responsible for organic matter degradation and Proteobacteria for nitrification and denitrification.

  10. An Evaluation Model for Competency Based Teacher Preparatory Programs.

    ERIC Educational Resources Information Center

    Denton, Jon J.

    This discussion describes an evaluation model designed to complement a curriculum development project, the primary goal of which is to structure a performance based program for preservice teachers. Data collected from the implementation of this four-phase model can be used to make decisions for developing and changing performance objectives and…

  11. Performance Evaluation Method for Dissimilar Aircraft Designs

    NASA Technical Reports Server (NTRS)

    Walker, H. J.

    1979-01-01

    A rationale is presented for using the square of the wingspan rather than the wing reference area as a basis for nondimensional comparisons of the aerodynamic and performance characteristics of aircraft that differ substantially in planform and loading. Working relationships are developed and illustrated through application to several categories of aircraft covering a range of Mach numbers from 0.60 to 2.00. For each application, direct comparisons of drag polars, lift-to-drag ratios, and maneuverability are shown for both nondimensional systems. The inaccuracies that may arise in the determination of aerodynamic efficiency based on reference area are noted. Span loading is introduced independently in comparing the combined effects of loading and aerodynamic efficiency on overall performance. Performance comparisons are made for the NACA research aircraft, lifting bodies, century-series fighter aircraft, F-111A aircraft with conventional and supercritical wings, and a group of supersonic aircraft including the B-58 and XB-70 bomber aircraft. An idealized configuration is included in each category to serve as a standard for comparing overall efficiency.
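
    The paper's working relationships are not reproduced here, but the basic conversion it rests on is simple: a coefficient referenced to wing area S can be re-based on the square of the span by multiplying by S/b² (equivalently, dividing by aspect ratio), which leaves lift-to-drag ratio unchanged while putting dissimilar planforms on a common span-loading footing. A minimal sketch with invented numbers follows.

```python
def to_span_basis(c_area: float, wing_area: float, span: float) -> float:
    """Convert a coefficient referenced to wing area S to one referenced to
    span squared: C_b = C * S / b**2 (equivalently C divided by aspect ratio)."""
    return c_area * wing_area / span**2

# Two hypothetical aircraft with very different planforms but identical
# area-based lift and drag coefficients (numbers invented for illustration):
for name, S, b, cl, cd in [("high-AR wing", 20.0, 16.0, 0.5, 0.030),
                           ("low-AR delta", 40.0, 10.0, 0.5, 0.030)]:
    clb = to_span_basis(cl, S, b)
    cdb = to_span_basis(cd, S, b)
    print(f"{name:12s}  CLb={clb:.3f}  CDb={cdb:.4f}  L/D={clb / cdb:.1f}")
```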

  12. Performance of an integrated network model

    PubMed Central

    Lehmann, François; Dunn, David; Beaulieu, Marie-Dominique; Brophy, James

    2016-01-01

    Objective: To evaluate the changes in accessibility, patients' care experiences, and quality-of-care indicators following a clinic's transformation into a fully integrated network clinic. Design: Mixed-methods study. Setting: Verdun, Que. Participants: Data on all patient visits were used, in addition to 2 distinct patient cohorts: 134 patients with chronic illness (ie, diabetes, arteriosclerotic heart disease, or both); and 450 women between the ages of 20 and 70 years. Main outcome measures: Accessibility was measured by the number of walk-in visits, scheduled visits, and new patient enrolments. With the first cohort, patients' care experiences were measured using validated serial questionnaires; and quality-of-care indicators were measured using biologic data. With the second cohort, quality of preventive care was measured using the number of Papanicolaou tests performed as a surrogate marker. Results: Despite a negligible increase in the number of physicians, there was an increase in accessibility after the clinic's transition to an integrated network model. During the first 4 years of operation, the number of scheduled visits more than doubled, nonscheduled visits (walk-in visits) increased by 29%, and enrolment of vulnerable patients (those with chronic illnesses) at the clinic remained high. Patient satisfaction with doctors was rated very highly at all points of time that were evaluated. While the number of Pap tests done did not increase with time, the proportion of patients meeting hemoglobin A1c and low-density lipoprotein guideline target levels increased, as did the number of patients tested for microalbuminuria. Conclusion: Transformation to an integrated network model of care led to increased efficiency and enhanced accessibility with no negative effects on the doctor-patient relationship. Improvements in biologic data also suggested better quality of care. PMID:27521410

  13. Generic CSP Performance Model for NREL's System Advisor Model: Preprint

    SciTech Connect

    Wagner, M. J.; Zhu, G.

    2011-08-01

    The suite of concentrating solar power (CSP) modeling tools in NREL's System Advisor Model (SAM) includes technology performance models for parabolic troughs, power towers, and dish-Stirling systems. Each model provides the user with unique capabilities that are catered to typical design considerations seen in each technology. Since the scope of the various models is generally limited to common plant configurations, new CSP technologies, component geometries, and subsystem combinations can be difficult to model directly in the existing SAM technology models. To overcome the limitations imposed by representative CSP technology models, NREL has developed a 'Generic Solar System' (GSS) performance model for use in SAM. This paper discusses the formulation and performance considerations included in this model and verifies the model by comparing its results with more detailed models.

  14. Alvord (3000-ft Strawn) LPG flood: design and performance evaluation

    SciTech Connect

    Frazier, G.D.; Todd, M.R.

    1982-01-01

    Mitchell Energy Corporation has implemented an LPG-dry gas miscible process in the Alvord (3000 ft Strawn) Unit in Wise County, Texas, utilizing the DOE tertiary incentive program. The field had been waterflooded for 14 years and was producing near its economic limit at the time this project was started. This paper presents the results of the reservoir simulation study that was conducted to evaluate pattern configuration and operating alternatives so as to maximize LPG containment and oil recovery performance. Several recommendations resulting from this study were implemented for the project. Based on the model prediction, tertiary oil recovery is expected to be between 100,000 and 130,000 bbls, or about 7 percent of the oil originally in place in the Unit. An evaluation of the project performance to date is presented. In July of 1981 the injection of a 16% HPV slug of propane was completed. Natural gas is being used to drive the propane slug. A peak oil response of 222 BOPD was achieved in August of 1981 and production has since been declining. The observed performance of the flood indicates that the actual tertiary oil recovered will reach the predicted value, although the project life will be longer than expected. The results presented in this paper indicate that, without the DOE incentive program, the economics for this project would still be uncertain at this time.

  15. Intern Performance in Three Supervisory Models

    ERIC Educational Resources Information Center

    Womack, Sid T.; Hanna, Shellie L.; Callaway, Rebecca; Woodall, Peggy

    2011-01-01

    Differences in intern performance, as measured by a Praxis III-similar instrument were found between interns supervised in three supervisory models: Traditional triad model, cohort model, and distance supervision. Candidates in this study's particular form of distance supervision were not as effective as teachers as candidates in…

  16. Evaluation of Genome-Enabled Selection for Bacterial Cold Water Disease Resistance Using Progeny Performance Data in Rainbow Trout: Insights on Genotyping Methods and Genomic Prediction Models

    PubMed Central

    Vallejo, Roger L.; Leeds, Timothy D.; Fragomeni, Breno O.; Gao, Guangtu; Hernandez, Alvaro G.; Misztal, Ignacy; Welch, Timothy J.; Wiens, Gregory D.; Palti, Yniv

    2016-01-01

    Bacterial cold water disease (BCWD) causes significant economic losses in salmonid aquaculture, and traditional family-based breeding programs aimed at improving BCWD resistance have been limited to exploiting only between-family variation. We used genomic selection (GS) models to predict genomic breeding values (GEBVs) for BCWD resistance in 10 families from the first generation of the NCCCWA BCWD resistance breeding line, compared the predictive ability (PA) of GEBVs to pedigree-based estimated breeding values (EBVs), and compared the impact of two SNP genotyping methods on the accuracy of GEBV predictions. The BCWD phenotypes survival days (DAYS) and survival status (STATUS) had been recorded in training fish (n = 583) subjected to experimental BCWD challenge. Training fish, and their full sibs without phenotypic data that were used as parents of the subsequent generation, were genotyped using two methods: restriction-site associated DNA (RAD) sequencing and the Rainbow Trout Axiom® 57 K SNP array (Chip). Animal-specific GEBVs were estimated using four GS models: BayesB, BayesC, single-step GBLUP (ssGBLUP), and weighted ssGBLUP (wssGBLUP). Family-specific EBVs were estimated using pedigree and phenotype data in the training fish only. The PA of EBVs and GEBVs was assessed by correlating mean progeny phenotype (MPP) with mid-parent EBV (family-specific) or GEBV (animal-specific). The best GEBV predictions were similar to EBV with PA values of 0.49 and 0.46 vs. 0.50 and 0.41 for DAYS and STATUS, respectively. Among the GEBV prediction methods, ssGBLUP consistently had the highest PA. The RAD genotyping platform had GEBVs with similar PA to those of GEBVs from the Chip platform. The PA of ssGBLUP and wssGBLUP methods was higher with the Chip, but for BayesB and BayesC methods it was higher with the RAD platform. The overall GEBV accuracy in this study was low to moderate, likely due to the small training sample used. This study explored the potential of GS for
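
    The predictive ability reported above is, in essence, a correlation between family mid-parent breeding values and mean progeny phenotypes. A minimal sketch with invented numbers for 10 hypothetical families:

```python
import numpy as np

# Invented values for 10 families: mid-parent GEBVs and the corresponding
# mean progeny phenotypes (e.g. mean survival days under BCWD challenge).
mid_parent_gebv = np.array([1.2, -0.4, 0.8, 0.1, -1.0, 0.6, -0.2, 1.5, -0.8, 0.3])
mean_progeny_phen = np.array([34., 21., 30., 25., 18., 28., 24., 36., 20., 26.])

# Predictive ability as described in the abstract: the correlation between
# mid-parent breeding values and mean progeny performance.
pa = np.corrcoef(mid_parent_gebv, mean_progeny_phen)[0, 1]
print(f"predictive ability: {pa:.2f}")
```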

  17. Evaluation of Genome-Enabled Selection for Bacterial Cold Water Disease Resistance Using Progeny Performance Data in Rainbow Trout: Insights on Genotyping Methods and Genomic Prediction Models.

    PubMed

    Vallejo, Roger L; Leeds, Timothy D; Fragomeni, Breno O; Gao, Guangtu; Hernandez, Alvaro G; Misztal, Ignacy; Welch, Timothy J; Wiens, Gregory D; Palti, Yniv

    2016-01-01

    Bacterial cold water disease (BCWD) causes significant economic losses in salmonid aquaculture, and traditional family-based breeding programs aimed at improving BCWD resistance have been limited to exploiting only between-family variation. We used genomic selection (GS) models to predict genomic breeding values (GEBVs) for BCWD resistance in 10 families from the first generation of the NCCCWA BCWD resistance breeding line, compared the predictive ability (PA) of GEBVs to pedigree-based estimated breeding values (EBVs), and compared the impact of two SNP genotyping methods on the accuracy of GEBV predictions. The BCWD phenotypes survival days (DAYS) and survival status (STATUS) had been recorded in training fish (n = 583) subjected to experimental BCWD challenge. Training fish, and their full sibs without phenotypic data that were used as parents of the subsequent generation, were genotyped using two methods: restriction-site associated DNA (RAD) sequencing and the Rainbow Trout Axiom® 57 K SNP array (Chip). Animal-specific GEBVs were estimated using four GS models: BayesB, BayesC, single-step GBLUP (ssGBLUP), and weighted ssGBLUP (wssGBLUP). Family-specific EBVs were estimated using pedigree and phenotype data in the training fish only. The PA of EBVs and GEBVs was assessed by correlating mean progeny phenotype (MPP) with mid-parent EBV (family-specific) or GEBV (animal-specific). The best GEBV predictions were similar to EBV with PA values of 0.49 and 0.46 vs. 0.50 and 0.41 for DAYS and STATUS, respectively. Among the GEBV prediction methods, ssGBLUP consistently had the highest PA. The RAD genotyping platform had GEBVs with similar PA to those of GEBVs from the Chip platform. The PA of ssGBLUP and wssGBLUP methods was higher with the Chip, but for BayesB and BayesC methods it was higher with the RAD platform. The overall GEBV accuracy in this study was low to moderate, likely due to the small training sample used. This study explored the potential of GS for

  19. HENC performance evaluation and plutonium calibration

    SciTech Connect

    Menlove, H.O.; Baca, J.; Pecos, J.M.; Davidson, D.R.; McElroy, R.D.; Brochu, D.B.

    1997-10-01

    The authors have designed a high-efficiency neutron counter (HENC) to assay the plutonium content in 200-L waste drums. The counter uses totals neutron counting, coincidence counting, and multiplicity counting to determine the plutonium mass. The HENC was developed as part of a Cooperative Research and Development Agreement between the Department of Energy and Canberra Industries. This report presents the results of the detector modifications, the performance tests, the add-a-source calibration, and the plutonium calibration at Los Alamos National Laboratory (TA-35) in 1996.

  20. Phased array performance evaluation with photoelastic visualization

    SciTech Connect

    Ginzel, Robert; Dao, Gavin

    2014-02-18

    New instrumentation and a widening range of phased array transducer options are affording the industry a greater potential. Visualization of the complex wave components using the photoelastic system can greatly enhance understanding of the generated signals. Diffraction, mode conversion and wave front interaction, together with beam forming for linear, sectorial and matrix arrays, will be viewed using the photoelastic system. Beam focus and steering performance will be shown with a range of embedded and surface targets within glass samples. This paper will present principles and sound field images using this visualization system.