Science.gov

Sample records for performance evaluation model

  1. Performance Evaluation of Dense Gas Dispersion Models.

    NASA Astrophysics Data System (ADS)

    Touma, Jawad S.; Cox, William M.; Thistle, Harold; Zapert, James G.

    1995-03-01

    This paper summarizes the results of a study to evaluate the performance of seven dense gas dispersion models using data from three field experiments. Two models (DEGADIS and SLAB) are in the public domain and the other five (AIRTOX, CHARM, FOCUS, SAFEMODE, and TRACE) are proprietary. The field data used are the Desert Tortoise pressurized ammonia releases, Burro liquefied natural gas spill tests, and the Goldfish anhydrous hydrofluoric acid spill experiments. Desert Tortoise and Goldfish releases were simulated as horizontal jet releases, and Burro as a liquid pool. Performance statistics were used to compare maximum observed concentrations and plume half-width to those predicted by each model. Model performance varied and no model exhibited consistently good performance across all three databases. However, when combined across the three databases, all models performed within a factor of 2. Problems encountered are discussed in order to help future investigators.
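
    A common way to quantify the "factor of 2" criterion cited above is the FAC2 statistic, the fraction of paired predictions falling within a factor of two of the observations. The sketch below (with hypothetical concentration values; FAC2 is a standard dispersion-model metric but is not named explicitly in this record) illustrates the calculation.

```python
import numpy as np

def fac2(observed, predicted):
    """Fraction of predictions within a factor of two of the paired observations."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ratio = predicted / observed
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

# Hypothetical maximum arc-wise concentrations (observed vs. modeled), arbitrary units
obs = np.array([120.0, 45.0, 9.5, 2.1])
pred = np.array([150.0, 30.0, 21.0, 1.8])
print(fac2(obs, pred))  # 0.75 -> three of the four predictions fall within a factor of two
```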

  2. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Science Inventory

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...

  3. Optical Storage Performance Modeling and Evaluation.

    ERIC Educational Resources Information Center

    Behera, Bailochan; Singh, Harpreet

    1990-01-01

    Evaluates different types of storage media for long-term archival storage of large amounts of data. Existing storage media are reviewed, including optical disks, optical tape, magnetic storage, and microfilm; three models are proposed based on document storage requirements; performance analysis is considered; and cost effectiveness is discussed.…

  4. Performance Evaluation Modeling of Network Sensors

    NASA Technical Reports Server (NTRS)

    Clare, Loren P.; Jennings, Esther H.; Gao, Jay L.

    2003-01-01

    Substantial benefits are promised by operating many spatially separated sensors collectively. Such systems are envisioned to consist of sensor nodes that are connected by a communications network. A simulation tool is being developed to evaluate the performance of networked sensor systems, incorporating such metrics as target detection probabilities, false alarm rates, and classification confusion probabilities. The tool will be used to determine configuration impacts associated with such aspects as spatial laydown, mixture of different types of sensors (acoustic, seismic, imaging, magnetic, RF, etc.), and fusion architecture. The QualNet discrete-event simulation environment serves as the underlying basis for model development and execution. This platform is recognized for its capabilities in efficiently simulating networking among mobile entities that communicate via wireless media. We are extending QualNet's communications modeling constructs to capture the sensing aspects of multi-target sensing (analogous to multiple access communications), unimodal multi-sensing (broadcast), and multi-modal sensing (multiple channels and correlated transmissions). Methods are also being developed for modeling the sensor signal sources (transmitters), signal propagation through the media, and sensors (receivers) that are consistent with the discrete event paradigm needed for performance determination of sensor network systems. This work is supported under the Microsensors Technical Area of the Army Research Laboratory (ARL) Advanced Sensors Collaborative Technology Alliance.

  5. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    NASA Technical Reports Server (NTRS)

    Feary, Michael S.

    2012-01-01

    This presentation is part of panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on discussions of generating distributions of performance, and the evaluation of different strategies for humans performing tasks with mixed initiative (Human-Automation) systems. I will also discuss issues with how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety critical systems. I will conclude with challenges for the future.

  6. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  7. Solid rocket booster performance evaluation model. Volume 4: Program listing

    NASA Technical Reports Server (NTRS)

    1974-01-01

    All subprograms or routines associated with the solid rocket booster performance evaluation model are indexed in this computer listing. An alphanumeric list of each routine in the index is provided in a table of contents.

  8. Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Giroux, E.; Roth, H.; Yin, D.

    2004-05-01

    Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on evaluation of organic aerosol model performance. One assumption is about the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other is about the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its impact on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that utilizing different enthalpy of vaporization values causes changes in the shapes of IAY curves and in the response of SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling

  9. Faculty Performance Evaluation: The CIPP-SAPS Model.

    ERIC Educational Resources Information Center

    Mitcham, Maralynne

    1981-01-01

    The issues of faculty performance evaluation for allied health professionals are addressed. Daniel Stufflebeam's CIPP (context-input-process-product) model is introduced and its development into a CIPP-SAPS (self-administrative-peer-student) model is pursued. (Author/CT)

  10. Solid rocket booster performance evaluation model. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    1974-01-01

    This users manual for the solid rocket booster performance evaluation model (SRB-II) contains descriptions of the model, the program options, the required program inputs, the program output format and the program error messages. SRB-II is written in FORTRAN and is operational on both the IBM 370/155 and the MSFC UNIVAC 1108 computers.

  11. New performance evaluation models for character detection in images

    NASA Astrophysics Data System (ADS)

    Wang, YanWei; Ding, XiaoQing; Liu, ChangSong; Wang, Kongqiao

    2010-02-01

    Detection of character regions is meaningful research work both for highlighting regions of interest and for recognition in further information processing. Much research has been performed on character localization and extraction, which creates a great need for performance evaluation schemes to inspect detection algorithms. In this paper, two probability models are established to accomplish evaluation tasks for different applications. For highlighting regions of interest, a Gaussian probability model, which simulates the low-pass Gaussian filtering property of the human vision system (HVS), was constructed to allocate different weights to different character parts. It shows the greatest potential for describing detector performance, especially when the detected result is an incomplete character, a case where other methods cannot work effectively. For the recognition objective, we also introduce a weighted probability model to give an appropriate description of the contribution of detection results to final recognition results. The validity of the performance evaluation models proposed in this paper is demonstrated by experiments on web images and natural scene images. These models may also be applicable to evaluating algorithms that locate other objects, such as faces, although wider experiments are needed to examine this assumption.

  12. Solid rocket booster performance evaluation model. Volume 1: Engineering description

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The space shuttle solid rocket booster performance evaluation model (SRB-II) is made up of analytical and functional simulation techniques linked together so that a single pass through the model will predict the performance of the propulsion elements of a space shuttle solid rocket booster. The available options allow the user to predict static test performance, predict nominal and off nominal flight performance, and reconstruct actual flight and static test performance. Options selected by the user are dependent on the data available. These can include data derived from theoretical analysis, small scale motor test data, large motor test data and motor configuration data. The user has several options for output format that include print, cards, tape and plots. Output includes all major performance parameters (Isp, thrust, flowrate, mass accounting and operating pressures) as a function of time as well as calculated single point performance data. The engineering description of SRB-II discusses the engineering and programming fundamentals used, the function of each module, and the limitations of each module.

  13. Performance Evaluation of 3D Modeling Software for UAV Photogrammetry

    NASA Astrophysics Data System (ADS)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

    UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAVs and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed; consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modeling software are black boxes, and as a result only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motivation, Smart3DCapture and Pix4Dmapper were downloaded from the Internet, and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.

  14. Proper bibeta ROC model: algorithm, software, and performance evaluation

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Hu, Nan

    2016-03-01

    Semi-parametric models are often used to fit data collected in receiver operating characteristic (ROC) experiments to obtain a smooth ROC curve and ROC parameters for statistical inference purposes. The proper bibeta model, as recently proposed by Mossman and Peng, enjoys several desirable theoretical properties. In addition to having explicit density functions for the latent decision variable and an explicit functional form of the ROC curve, the two-parameter bibeta model also has simple closed-form expressions for the true-positive fraction (TPF), false-positive fraction (FPF), and the area under the ROC curve (AUC). In this work, we developed a computational algorithm and R package implementing this model for ROC curve fitting. Our algorithm can deal with any ordinal data (categorical or continuous). To improve the accuracy, efficiency, and reliability of our software, we adopted several strategies in our computational algorithm, including: (1) the LABROC4 categorization to obtain the true maximum likelihood estimation of the ROC parameters; (2) a principled approach to initializing parameters; (3) analytical first-order and second-order derivatives of the likelihood function; (4) an efficient optimization procedure (the L-BFGS algorithm in the R package "nlopt"); and (5) an analytical delta method to estimate the variance of the AUC. We evaluated the performance of our software with intensive simulation studies and compared it with the conventional binormal and the proper binormal-likelihood-ratio models developed at the University of Chicago. Our simulation results indicate that our software is highly accurate, efficient, and reliable.
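
    As an illustration of the general idea of semi-parametric ROC fitting described above, the sketch below fits Beta distributions to hypothetical scores of the two classes and integrates the fitted curve to obtain the AUC. It is not the authors' proper bibeta algorithm or their R package; the data, distributions, and fitting choices are assumptions made only for illustration.

```python
# Illustrative only: a generic parametric ROC fit with two Beta distributions,
# not the proper bibeta algorithm or the R package described in the record above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
neg = rng.beta(2.0, 5.0, 400)   # hypothetical scores, actually-negative cases
pos = rng.beta(5.0, 2.0, 400)   # hypothetical scores, actually-positive cases

# Maximum-likelihood Beta fits on the unit interval (location and scale held fixed)
a0, b0, _, _ = stats.beta.fit(neg, floc=0, fscale=1)
a1, b1, _, _ = stats.beta.fit(pos, floc=0, fscale=1)

# Smooth ROC: sweep the decision threshold and evaluate FPF/TPF from fitted survival functions
t = np.linspace(0.0, 1.0, 1001)
fpf = stats.beta.sf(t, a0, b0)
tpf = stats.beta.sf(t, a1, b1)

# AUC by trapezoidal integration along the fitted curve (FPF decreases as the threshold rises)
auc = np.sum(-np.diff(fpf) * (tpf[:-1] + tpf[1:]) / 2.0)
print(round(auc, 3))  # AUC of the fitted parametric ROC
```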

  15. An evaluation of sex-age-kill (SAK) model performance

    USGS Publications Warehouse

    Millspaugh, Joshua J.; Skalski, John R.; Townsend, Richard L.; Diefenbach, Duane R.; Boyce, Mark S.; Hansen, Lonnie P.; Kammermeyer, Kent

    2009-01-01

    The sex-age-kill (SAK) model is widely used to estimate abundance of harvested large mammals, including white-tailed deer (Odocoileus virginianus). Despite a long history of use, few formal evaluations of SAK performance exist. We investigated how violations of the stable age distribution and stationary population assumption, changes to male or female harvest, stochastic effects (i.e., random fluctuations in recruitment and survival), and sampling efforts influenced SAK estimation. When the simulated population had a stable age distribution and λ > 1, the SAK model underestimated abundance. Conversely, when λ < 1, the SAK overestimated abundance. When changes to male harvest were introduced, SAK estimates were opposite the true population trend. In contrast, SAK estimates were robust to changes in female harvest rates. Stochastic effects caused SAK estimates to fluctuate about their equilibrium abundance, but the effect dampened as the size of the surveyed population increased. When we considered both stochastic effects and sampling error at a deer management unit scale the resultant abundance estimates were within ±121.9% of the true population level 95% of the time. These combined results demonstrate extreme sensitivity to model violations and scale of analysis. Without changes to model formulation, the SAK model will be biased when λ ≠ 1. Furthermore, any factor that alters the male harvest rate, such as changes to regulations or changes in hunter attitudes, will bias population estimates. Sex-age-kill estimates may be precise at large spatial scales, such as the state level, but less so at the individual management unit level. Alternative models, such as statistical age-at-harvest models, which require similar data types, might allow for more robust, broad-scale demographic assessments.
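
    For orientation, the sketch below shows the general bookkeeping behind SAK-style estimators in a deliberately simplified form: the buck harvest is expanded by a harvest rate, and adult sex and fawn:doe ratios scale up to total abundance. The function, parameter names, and values are hypothetical; real applications estimate the buck harvest rate from the age structure of the harvest rather than assuming it.

```python
# Simplified SAK-style bookkeeping (illustrative only; parameter names and values are
# hypothetical, and this is not the full estimator evaluated in the record above).
def sak_abundance(buck_harvest, buck_harvest_rate, does_per_buck, fawns_per_doe):
    bucks = buck_harvest / buck_harvest_rate   # adult males from harvest and harvest rate
    does = bucks * does_per_buck               # adult females via the adult sex ratio
    fawns = does * fawns_per_doe               # recruits via the fawn:doe ratio
    return bucks + does + fawns

# Example: 2,500 bucks harvested at a 40% harvest rate, 2 does per buck, 0.9 fawns per doe
print(sak_abundance(2500, 0.40, 2.0, 0.9))  # 6250 + 12500 + 11250 = 30000
```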

  16. Novel Planar Electromagnetic Sensors: Modeling and Performance Evaluation

    PubMed Central

    Mukhopadhyay, Subhas C.

    2005-01-01

    High-performance planar electromagnetic sensors, their modeling, and a few applications are reported in this paper. Research employing planar-type electromagnetic sensors started quite a few years ago, with the initial emphasis on the inspection of defects on printed circuit boards. The use of planar-type sensing systems has been extended to the evaluation of near-surface material properties such as conductivity, permittivity and permeability, and can also be used for the inspection of defects in the near-surface region of materials. Recently the sensor has been used for the inspection of the quality of saxophone reeds and dairy products. The electromagnetic responses of planar interdigital sensors with pork meats have been investigated.

  17. Evaluation of Turbulence-Model Performance in Jet Flows

    NASA Technical Reports Server (NTRS)

    Woodruff, S. L.; Seiner, J. M.; Hussaini, M. Y.; Erlebacher, G.

    2001-01-01

    The importance of reducing jet noise in both commercial and military aircraft applications has made jet acoustics a significant area of research. A technique for jet noise prediction commonly employed in practice is the MGB approach, based on the Lighthill acoustic analogy. This technique requires as aerodynamic input mean flow quantities and turbulence quantities like the kinetic energy and the dissipation. The purpose of the present paper is to assess existing capabilities for predicting these aerodynamic inputs. Two modern Navier-Stokes flow solvers, coupled with several modern turbulence models, are evaluated by comparison with experiment for their ability to predict mean flow properties in a supersonic jet plume. Potential weaknesses are identified for further investigation. Another comparison with similar intent is discussed by Barber et al. The ultimate goal of this research is to develop a reliable flow solver applicable to the low-noise, propulsion-efficient, nozzle exhaust systems being developed in NASA focused programs. These programs address a broad range of complex nozzle geometries operating in high temperature, compressible, flows. Seiner et al. previously discussed the jet configuration examined here. This convergent-divergent nozzle with an exit diameter of 3.6 inches was designed for an exhaust Mach number of 2.0 and a total temperature of 1680 F. The acoustic and aerodynamic data reported by Seiner et al. covered a range of jet total temperatures from 104 F to 2200 F at the fully-expanded nozzle pressure ratio. The aerodynamic data included centerline mean velocity and total temperature profiles. Computations were performed independently with two computational fluid dynamics (CFD) codes, ISAAC and PAB3D. Turbulence models employed include the k-epsilon model, the Gatski-Speziale algebraic-stress model and the Girimaji model, with and without the Sarkar compressibility correction. Centerline values of mean velocity and mean temperature are

  18. Simulation Modeling and Performance Evaluation of Space Networks

    NASA Technical Reports Server (NTRS)

    Jennings, Esther H.; Segui, John

    2006-01-01

    In space exploration missions, the coordinated use of spacecraft as communication relays increases the efficiency of the endeavors. To conduct trade-off studies of the performance and resource usage of different communication protocols and network designs, JPL designed a comprehensive extendable tool, the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE). The design and development of MACHETE began in 2000 and is constantly evolving. Currently, MACHETE contains Consultative Committee for Space Data Systems (CCSDS) protocol standards such as Proximity-1, Advanced Orbiting Systems (AOS), Packet Telemetry/Telecommand, Space Communications Protocol Specification (SCPS), and the CCSDS File Delivery Protocol (CFDP). MACHETE uses the Aerospace Corporation's Satellite Orbital Analysis Program (SOAP) to generate the orbital geometry information and contact opportunities. Matlab scripts provide the link characteristics. At the core of MACHETE is a discrete event simulator, QualNet. Delay Tolerant Networking (DTN) is an end-to-end architecture providing communication in and/or through highly stressed networking environments. Stressed networking environments include those with intermittent connectivity, large and/or variable delays, and high bit error rates. To provide its services, the DTN protocols reside at the application layer of the constituent internets, forming a store-and-forward overlay network. The key capabilities of the bundling protocols include custody-based reliability, ability to cope with intermittent connectivity, ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses. In this presentation, we report on the addition of MACHETE models needed to support DTN, namely: the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. We demonstrate the use of the DTN protocol

  19. Evaluating Organizational Performance: Rational, Natural, and Open System Models

    ERIC Educational Resources Information Center

    Martz, Wes

    2013-01-01

    As the definition of organization has evolved, so have the approaches used to evaluate organizational performance. During the past 60 years, organizational theorists and management scholars have developed a comprehensive line of thinking with respect to organizational assessment that serves to inform and be informed by the evaluation discipline.…

  20. Hydrologic and water quality models: Performance measures and evaluation criteria

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Performance measures and corresponding criteria constitute an important aspect of calibration and validation of any hydrological and water quality (H/WQ) model. As new and improved methods and information are developed, it is essential that performance measures and criteria be updated. Therefore, th...

  1. Evaluating performances of simplified physically based landslide susceptibility models.

    NASA Astrophysics Data System (ADS)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall-induced shallow landslides cause significant damage, involving loss of life and property. Prediction of locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis, integrated in the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing model results and measurement data pixel by pixel. Moreover, the package's integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and robustness of models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each GOF index separately, ii) model evaluation in the ROC plane using each of the optimal parameter sets, and iii) GOF robustness evaluation by assessing sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk

  2. Biomechanical modelling and evaluation of construction jobs for performance improvement.

    PubMed

    Parida, Ratri; Ray, Pradip Kumar

    2012-01-01

    Occupational risk factors related to construction MMH activities, such as awkward posture, repetition, lack of rest, insufficient illumination and heavy workload, may cause musculoskeletal disorders and poor performance among workers. Ergonomic design of construction worksystems was therefore a critical need for improving workers' health and safety, and dynamic biomechanical models were required to be empirically developed and tested at a construction site of Tata Steel, the largest private-sector steel making company in India. In this study, a comprehensive framework is proposed for biomechanical evaluation of shovelling and grinding under diverse work environments. The benefit of such an analysis lies in its usefulness in setting guidelines for designing such jobs to minimize the risk of musculoskeletal disorders (MSDs) and in promoting correct methods of carrying out the jobs, leading to reduced fatigue and physical stress. Data based on direct observations and videography were collected for the shovellers and grinders over a number of work cycles. Compressive forces and moments for a number of segments and joints were computed with respect to joint flexion and extension. The results indicate that moments and compressive forces at the L5/S1 link are significant for shovellers, while moments at the elbow and wrist are significant for grinders. PMID:22317733

  3. The Rasch Model for Evaluating Italian Student Performance

    ERIC Educational Resources Information Center

    Camminatiello, Ida; Gallo, Michele; Menini, Tullio

    2010-01-01

    In 1997 the Organisation for Economic Co-operation and Development (OECD) launched the OECD Programme for International Student Assessment (PISA) for collecting information about 15-year-old students in participating countries. Our study analyses the PISA 2006 cognitive test for evaluating the Italian student performance in mathematics, reading…

  4. Evaluating performances of simplified physically based models for landslide susceptibility

    NASA Astrophysics Data System (ADS)

    Formetta, G.; Capparelli, G.; Versace, P.

    2015-12-01

    Rainfall-induced shallow landslides cause loss of life and significant damage involving private and public property, transportation systems, etc. Prediction of locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually two main approaches are used to accomplish this task: statistical or physically based models. Reliable model application involves automatic parameter calibration, objective quantification of the quality of susceptibility maps, and model sensitivity analysis. This paper presents a methodology to systematically and objectively calibrate, verify and compare different models and different model performance indicators, in order to identify and select the models whose behavior is most reliable for a given case study. The procedure was implemented in a package of models for landslide susceptibility analysis integrated in the NewAge-JGrass hydrological model. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit indices by comparing model results and measurement data pixel by pixel. Moreover, the package's integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes and automatic calibration algorithms to estimate model parameters. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and Altilia municipality. The analysis showed that, among all the optimized indices and all three models, optimization of the index distance to perfect classification in the receiver operating characteristic plane (D2PC) coupled with model M3 is the best modeling solution for our test case.
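
    The D2PC index named above has a simple geometric form in the ROC plane: the distance from the point (false positive rate, true positive rate) to the perfect-classification corner (0, 1). A minimal sketch with hypothetical pixel counts, not the NewAge-JGrass implementation:

```python
# Illustrative goodness-of-fit indices in the ROC plane (a sketch, not the package's code).
import math

def roc_indices(tp, fn, fp, tn):
    tpr = tp / (tp + fn)                  # true positive rate (sensitivity)
    fpr = fp / (fp + tn)                  # false positive rate
    d2pc = math.hypot(fpr, 1.0 - tpr)     # distance to the perfect-classification point (0, 1)
    return tpr, fpr, d2pc

# Hypothetical pixel counts from a susceptibility map compared with a landslide inventory
tpr, fpr, d2pc = roc_indices(tp=820, fn=180, fp=2400, tn=21600)
print(round(tpr, 3), round(fpr, 3), round(d2pc, 3))  # 0.82, 0.1, 0.206
```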

  5. Towards Modeling Realistic Mobility for Performance Evaluations in MANET

    NASA Astrophysics Data System (ADS)

    Aravind, Alex; Tahir, Hassan

    Simulation modeling plays a crucial role in conducting research on complex dynamic systems like mobile ad hoc networks, and is often the only way. Simulation has been successfully applied in MANETs for more than two decades. In several recent studies, it has been observed that the credibility of simulation results in the field has decreased while the use of simulation has steadily increased. Part of this credibility crisis has been attributed to the simulation of the mobility of the nodes in the system. Mobility has a fundamental influence on the behavior and performance of mobile ad hoc networks. Accurate modeling and knowledge of the mobility of the nodes in the system is not only helpful but essential for the understanding and interpretation of the performance of the system under study. Several ideas, mostly in isolation, have been proposed in the literature to infuse realism into the mobility of nodes. In this paper, we attempt a holistic analysis of creating realistic mobility models and then demonstrate the creation and analysis of realistic mobility models using a software tool we have developed. Using our software tool, the desired mobility of the nodes in the system can be specified, generated, and analyzed, and the trace can then be exported for use in performance studies of proposed algorithms or systems.

  6. Performance Evaluation of the Prototype Model NEXT Ion Thruster

    NASA Technical Reports Server (NTRS)

    Herman, Daniel A.; Soulas, George C.; Patterson, Michael J.

    2008-01-01

    The performance testing results of the first prototype model NEXT ion engine, PM1, are presented. The NEXT program has developed the next generation ion propulsion system to enhance and enable Discovery, New Frontiers, and Flagship-type NASA missions. The PM1 thruster exhibits operational behavior consistent with its predecessors, the engineering model thrusters, with substantial mass savings, enhanced thermal margins, and design improvements for environmental testing compliance. The dry mass of PM1 is 12.7 kg. Modifications made in the thruster design have resulted in improved performance and operating margins, as anticipated. PM1 beginning-of-life performance satisfies all of the electric propulsion thruster mission-derived technical requirements. It demonstrates a wide range of throttleability by processing input power levels from 0.5 to 6.9 kW. At 6.9 kW, the PM1 thruster demonstrates a specific impulse of 4190 s, 237 mN of thrust, and a thrust efficiency of 0.71. The flat beam profile (flatness parameters vary from 0.66 at low power to 0.88 at full power) and advanced ion optics reduce localized accelerator grid erosion and increase margins for electron backstreaming, impingement-limited voltage, and screen grid ion transparency. The thruster throughput capability is predicted to exceed 750 kg of xenon, an equivalent of 36,500 hr of continuous operation at the full-power operating condition.
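
    As a quick consistency check (not part of the original record), the quoted full-power figures agree with the usual definition of thrust efficiency as jet power over input power, eta = F * g0 * Isp / (2 * P):

```python
# Cross-check of the quoted NEXT PM1 full-power figures (illustrative arithmetic only).
g0 = 9.80665          # m/s^2, standard gravity
F = 0.237             # N   (237 mN of thrust)
Isp = 4190.0          # s   (specific impulse)
P = 6900.0            # W   (6.9 kW input power)

eta = F * g0 * Isp / (2.0 * P)   # jet power divided by input power
print(round(eta, 2))             # ~0.71, matching the reported thrust efficiency
```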

  7. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    PubMed

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-01

    Land surface emissivity is a crucial parameter in the surface status monitoring. This study aims at the evaluation of four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could well represent directional emissivity with an error less than 0.002, and was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside canopy, which improved the performance of the gap-frequency-based models. PMID:25968800

  8. Validation of Ultrafilter Performance Model Based on Systematic Simulant Evaluation

    SciTech Connect

    Russell, Renee L.; Billing, Justin M.; Smith, Harry D.; Peterson, Reid A.

    2009-11-18

    Because of limited availability of test data with actual Hanford tank waste samples, a method was developed to estimate expected filtration performance based on physical characterization data for the Hanford Tank Waste Treatment and Immobilization Plant. A test with simulated waste was analyzed to demonstrate that filtration of this class of waste is consistent with a concentration polarization model. Subsequently, filtration data from actual waste samples were analyzed to demonstrate that centrifuged solids concentrations provide a reasonable estimate of the limiting concentration for filtration.

  9. visCOS: An R-package to evaluate model performance of hydrological models

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Wesemann, Johannes; Schulz, Karsten

    2016-04-01

    The evaluation of model performance is a central part of (hydrological) modelling. Much attention has been given to the development of evaluation criteria and diagnostic frameworks (Klemeš, 1986; Gupta et al., 2008; among many others). Nevertheless, many applications exist for which objective functions do not yet provide satisfying summaries. Thus, the necessity to visualize results arises in order to explore a wider range of model capacities, be they strengths or deficiencies. Visualizations are usually devised for specific projects and these efforts are often not distributed to a broader community (e.g. via open source software packages). Hence, the opportunity to explicitly discuss a state-of-the-art presentation technique is often missed. We therefore present a comprehensive R-package for evaluating model performance by visualizing and exploring different aspects of hydrological time-series. The presented package comprises a set of useful plots and visualization methods, which complement existing packages, such as hydroGOF (Zambrano-Bigiarini et al., 2012). It is derived from practical applications of the hydrological models COSERO and COSEROreg (Kling et al., 2014). visCOS, providing an interface in R, represents an easy-to-use software package for visualizing and assessing model performance and can be implemented in the process of model calibration or model development. The package provides functions to load hydrological data into R, clean the data, process, visualize, explore and finally save the results in a consistent way. Together with an interactive zoom function of the time series, an online calculation of the objective functions for variable time-windows is included. Common hydrological objective functions, such as the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency, can also be evaluated and visualized in different ways for defined sub-periods like hydrological years or seasonal sections. Many hydrologists use long-term water-balances as a
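
    For reference, the two objective functions named in this record have compact closed forms; the sketch below computes them on hypothetical discharge series and is not the visCOS or hydroGOF implementation.

```python
# Sketch of the Nash-Sutcliffe and Kling-Gupta efficiencies (illustrative, with made-up data).
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]     # linear correlation
    alpha = sim.std() / obs.std()       # variability ratio
    beta = sim.mean() / obs.mean()      # bias ratio
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

obs = np.array([1.2, 3.4, 8.9, 4.1, 2.0, 1.5])   # hypothetical daily discharge, m^3/s
sim = np.array([1.0, 3.9, 7.5, 4.6, 2.4, 1.3])
print(round(nse(obs, sim), 3), round(kge(obs, sim), 3))
```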

  10. Evaluating the performance versus accuracy tradeoff for abstract models

    NASA Astrophysics Data System (ADS)

    McGraw, Robert M.; Clark, Joseph E.

    2001-09-01

    While the military and commercial communities are increasingly reliant on simulation to reduce cost, developing simulations of their complex systems may itself be costly. In order to reduce simulation costs, simulation developers have turned toward collaborative simulation, reusing existing simulation models, and utilizing model abstraction techniques to reduce simulation development time as well as simulation execution time. This paper focuses on model abstraction techniques that can be applied to reduce simulation execution and development time and the effects those techniques have on simulation accuracy.

  11. Evaluating the performance of copula models in phase I-II clinical trials under model misspecification

    PubMed Central

    2014-01-01

    Background Traditionally, phase I oncology trials are designed to determine the maximum tolerated dose (MTD), defined as the highest dose with an acceptable probability of dose limiting toxicities (DLT), of a new treatment via a dose escalation study. An alternate approach is to jointly model toxicity and efficacy and allow dose escalation to depend on a pre-specified efficacy/toxicity tradeoff in a phase I-II design. Several phase I-II trial designs have been discussed in the literature; while these model-based designs are attractive in their performance, they are potentially vulnerable to model misspecification. Methods Phase I-II designs often rely on copula models to specify the joint distribution of toxicity and efficacy, which include an additional correlation parameter that can be difficult to estimate. We compare and contrast three models for the joint probability of toxicity and efficacy, including two copula models that have been proposed for use in phase I-II clinical trials and a simple model that assumes the two outcomes are independent. We evaluate the performance of the various models through simulation both when the models are correct and under model misspecification. Results Both models exhibited similar performance, as measured by the probability of correctly identifying the optimal dose and the number of subjects treated at the optimal dose, regardless of whether the data were generated from the correct or incorrect copula, even when there is substantial correlation between the two outcomes. Similar results were observed for a simple model that assumes independence, even in the presence of strong correlation. Further simulation results indicate that estimating the correlation parameter in copula models is difficult with the sample sizes used in Phase I-II clinical trials. Conclusions Our simulation results indicate that the operating characteristics of phase I-II clinical trials are robust to misspecification of the copula model but that a simple
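
    To make the copula idea concrete, the sketch below evaluates the joint probability of toxicity and efficacy under a Clayton copula and compares it with the independence working model. The copula family, association parameter, and marginal probabilities are illustrative assumptions, not the specific models compared in the paper.

```python
# Illustrative Clayton copula linking marginal toxicity and efficacy probabilities
# (a sketch of the general idea; not the specific copulas evaluated in the study above).
def clayton_joint(p_tox, p_eff, theta):
    """Joint probability of toxicity and efficacy under a Clayton copula, theta > 0."""
    return (p_tox ** (-theta) + p_eff ** (-theta) - 1.0) ** (-1.0 / theta)

p_tox, p_eff = 0.25, 0.40                                 # hypothetical marginals at one dose
print(round(clayton_joint(p_tox, p_eff, theta=2.0), 3))   # joint probability with association
print(round(p_tox * p_eff, 3))                            # independence working model: 0.1
```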

  12. Evaluation of the Service Review Model with Performance Scorecards

    ERIC Educational Resources Information Center

    Szabo, Thomas G.; Williams, W. Larry; Rafacz, Sharlet D.; Newsome, William; Lydon, Christina A.

    2012-01-01

    The current study combined a management technique termed "Service Review" with performance scorecards to enhance staff and consumer behavior in a human service setting consisting of 11 supervisors and 56 front-line staff working with 9 adult consumers with challenging behaviors. Results of our intervention showed that service review and scorecards…

  13. System performance evaluation of the MAXIM concept with integrated modeling

    NASA Astrophysics Data System (ADS)

    Lieber, Michael D.; Gallagher, Dennis J.; Cash, Webster C.; Shipley, Ann F.

    2003-03-01

    The MAXIM (Micro-Arcsecond X-Ray Imaging Mission) and the MAXIM Pathfinder, a technology precursor mission, are considered by NASA to be 'visionary missions' in space astronomy. Currently the MAXIM mission design would fly multiple spacecraft in formation, each carrying precision optics, to direct x-rays from an astronomical source to collector and imaging spacecraft. The mission architecture is complex and presents technical challenges in formation flying, external metrology, and target acquisition. To further develop the concept, an integrated model (IM) of MAXIM and the MAXIM Pathfinder was developed. Individual subsystem models from the disciplines of structural dynamics, optics, controls, signal processing, detector physics and disturbance modeling are seamlessly integrated into one cohesive model to efficiently support system-level trades and analysis. The optical system design is a unique combination of optical concepts, and therefore results from the IM were extensively compared with the ASAP optical software.

  14. Modeling soil vapor extraction to evaluate performance of a system

    SciTech Connect

    Struttman, T.J. ); Zachary, S.P. )

    1992-01-01

    The site described, located in northeast Ohio, originally had a 5,000 gallon UST that was used to supply gasoline. The tank was determined to be leaking from the fill port. Soil borings were augered to a depth of 35 feet to determine the extent of soil contamination. At 20 to 30 feet in depth, contamination extended radially 50 to 60 feet. The estimated 1,600 cubic yard volume, as well as the proximity of existing buildings, made excavation, removal and disposal not cost effective. The depth of contaminated soils made bioremediation impractical. It was determined that sufficient information was available to install a vapor extraction system. The system includes 4 wells that can be individually drafted, a common vapor demister, and a 200 scfm induced draft fan. Vapor probes were installed to monitor both vacuum pressure and vapor concentration. The remediation was streamlined by focusing on installation of equipment and optimization of the system dynamics (operation). Data are collected monthly on individual well pressures, gas concentrations and mass loading in the exhaust. Analysis of these data yields radius of influence and contaminant mass withdrawal values. The draft to individual wells can be adjusted when needed to optimize system withdrawals. A model was developed, based on MODFLOW, and adapted to vapor extraction using known gas flow equations. The model was verified with known observed data. The results of this model were compared with data from the above site to determine the appropriateness of using the model to design SVE systems.

  15. Formal Implementation of a Performance Evaluation Model for the Face Recognition System

    PubMed Central

    Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young

    2008-01-01

    Due to its usability, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has recently been attracting considerable attention. Reported recognition rates of commercialized face recognition systems cannot be accepted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for biometric recognition systems and implement an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed evaluations objectively by providing guidelines for the design and implementation of a performance evaluation system and by formalizing the performance test process. PMID:18317524
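
    For context, objective accuracy evaluation of a biometric matcher typically reports threshold-dependent error rates such as the false accept and false reject rates; the sketch below computes these from hypothetical genuine and impostor score samples and does not reproduce the paper's formal evaluation model.

```python
# Sketch of two standard biometric accuracy figures (scores and threshold are hypothetical).
import numpy as np

genuine = np.array([0.81, 0.74, 0.92, 0.58, 0.88, 0.79])   # same-person match scores
impostor = np.array([0.31, 0.45, 0.52, 0.28, 0.61, 0.39])  # different-person match scores
threshold = 0.6

frr = np.mean(genuine < threshold)    # false reject rate: genuine pairs scored below threshold
far = np.mean(impostor >= threshold)  # false accept rate: impostor pairs scored at/above threshold
print(round(frr, 3), round(far, 3))   # each 1/6 (about 0.167) with these toy scores
```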

  16. A Step beyond Univision Evaluation: Using a Systems Model of Performance Improvement.

    ERIC Educational Resources Information Center

    Sleezer, Catherine M.; Zhang, Jiping; Gradous, Deane B.; Maile, Craig

    1999-01-01

    Examines three views of performance improvement--scientific management, instructional design, and systems thinking--each providing a unique view of performance improvement and specific roles for evaluation. Provides an integrated definition of performance and a synthesis model that encompasses the three views. (AEF)

  17. A model for evaluating the social performance of construction waste management

    SciTech Connect

    Yuan Hongping

    2012-06-15

    Highlights: • Scant attention is paid to the social performance of construction waste management (CWM). • We develop a model for assessing the social performance of CWM. • With the model, the social performance of CWM can be quantitatively simulated. - Abstract: Existing literature shows that a lot of research effort has been devoted to the economic performance of construction waste management (CWM), but less attention has been paid to investigation of its social performance. This study therefore attempts to develop a model for quantitatively evaluating the social performance of CWM by using a system dynamics (SD) approach. Firstly, major variables affecting the social performance of CWM are identified and a holistic system for assessing the social performance of CWM is formulated in line with the feedback relationships underlying these variables. The developed system is then converted into an SD model through the software iThink. An empirical case study is finally conducted to demonstrate application of the model. Results of model validation indicate that the model is robust and reasonable enough to reflect the situation of the real system under study. Findings of the case study offer helpful insights into effectively promoting the social performance of CWM for the project investigated. Furthermore, the model exhibits great potential to function as an experimental platform for dynamically evaluating the effects of management measures on improving the social performance of CWM in construction projects.

  18. Assessment of classical performance measures and signature indices from Flow Duration Curves for model evaluation.

    NASA Astrophysics Data System (ADS)

    Ley, Rita; Hellebrand, Hugo; Casper, Markus C.; Fenicia, Fabrizio

    2015-04-01

    The result of model evaluation is strongly influenced by the choice of performance measures. A large variety of performance measures exists, each with its strengths and weaknesses. Although all of them represent the ability of a hydrological model to reproduce observed stream flow, it is unclear which one is most appropriate for specific applications. The objective of this study is to investigate which performance measure is best suited to finding the best-performing model structure for a single basin out of multiple model structures. We compare the usability of a new performance measure, the Standardized Signature Index Sum, with several classical statistical performance measures and hydrological performance measures such as the Root Mean Square Error or the Nash-Sutcliffe Efficiency. In contrast to the classical and hydrological performance measures, the Standardized Signature Index Sum is based on the comparison of observed and simulated Flow Duration Curves (FDCs). It combines the performance for different parts of the FDC into one measure, considering the whole FDC and therefore the whole hydrograph. For this purpose, 12 model structures were generated using the SUPERFLEX modeling framework and applied to 53 meso-scale basins in Rhineland-Palatinate (Germany). For all calibrated models based on the 12 model structures and 53 basins, we calculate several performance measures and compare their usability for identifying the best-performing model structure for each basin. In many cases the classical and hydrological performance measures assigned similar values to seemingly different hydrographs simulated with different model structures; therefore, these measures are not well suited for model comparison. The proposed Standardized Signature Index Sum is more effective in revealing differences between model results. Furthermore, it provides information on which part of the hydrograph, and how, a model fails. The Signature Index Sum allows for a
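
    To illustrate the kind of FDC-based comparison the Standardized Signature Index Sum builds on, the sketch below computes relative volume errors of a simulated flow duration curve over a few exceedance-probability segments. The segment boundaries, data, and aggregation are assumptions made for illustration; the authors' exact index definition is given in the paper.

```python
# Illustrative FDC segment comparison (not the exact Standardized Signature Index Sum).
import numpy as np

def flow_duration_curve(q):
    """Flows sorted in descending order, paired with exceedance probabilities."""
    q = np.sort(np.asarray(q, float))[::-1]
    p = (np.arange(1, q.size + 1) - 0.5) / q.size
    return p, q

def fdc_segment_errors(obs, sim, breakpoints=(0.05, 0.2, 0.7, 0.95)):
    """Relative volume error of the simulated FDC over peak/high/mid/low/very-low segments."""
    p, q_obs = flow_duration_curve(obs)
    _, q_sim = flow_duration_curve(sim)
    edges = np.concatenate(([0.0], breakpoints, [1.0]))
    errors = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (p >= lo) & (p < hi)
        if mask.any():
            errors.append((q_sim[mask].sum() - q_obs[mask].sum()) / q_obs[mask].sum())
    return errors

rng = np.random.default_rng(1)
obs = rng.lognormal(mean=1.0, sigma=0.8, size=365)   # hypothetical daily flows
sim = obs * rng.normal(1.0, 0.15, size=365)          # a synthetic "model run"
print([round(e, 3) for e in fdc_segment_errors(obs, sim)])
```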

  19. The third phase of AQMEII: evaluation strategy and multi-model performance analysis

    NASA Astrophysics Data System (ADS)

    Solazzo, Efisio; Galmarini, Stefano; Hogrefe, Christian

    2016-04-01

    AQMEII (Air Quality Model Evaluation International Initiative) is an extraordinary effort promoting policy-relevant research on regional air quality model evaluation across the European and North American atmospheric modelling communities, providing the ideal platform for advancing the evaluation of air quality models at the regional scale. This study presents a comprehensive overview of the multi-model evaluation results achieved in the ongoing third phase of AQMEII. Sixteen regional-scale chemistry transport modelling systems have simulated the air quality for the year 2010 over the two continental areas of Europe and North America, providing pollutant concentration values at the surface as well as vertical profiles. The performance of the modelling systems has been evaluated against observational data for ozone, CO, NO2, PM10, PM2.5, wind speed and temperature, offering a valuable opportunity to learn about the models' behaviour by performing model-to-model and model-to-measurement comparisons. We make use of the error apportionment strategy, a novel approach to model evaluation developed within AQMEII that combines elements of operational and diagnostic evaluation. This method apportions the model error to its spectral components, thereby identifying the space/timescale at which it is most relevant and, when possible, inferring which process(es) could have generated it. We investigate the deviation between modelled and observed time series of pollutants through a revised formulation for breaking down the mean square error into bias, variance, and the minimum achievable MSE (mMSE). Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day). Compared to a conventional operational evaluation approach, the new method allows for a more precise identification of where each portion of the model error predominantly occurs. Information about the nature of
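
    One common way to break the mean square error into bias, variance, and a minimum achievable (irreducible) term, consistent with the description above, is shown in the sketch below; it is an illustration rather than the exact AQMEII formulation.

```python
# Sketch of an MSE breakdown into bias, variability mismatch, and a minimum achievable term.
import numpy as np

def mse_components(obs, mod):
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    r = np.corrcoef(obs, mod)[0, 1]
    bias2 = (mod.mean() - obs.mean()) ** 2            # squared mean bias
    var_term = (mod.std() - obs.std()) ** 2           # mismatch in variability
    mmse = 2.0 * mod.std() * obs.std() * (1.0 - r)    # minimum achievable MSE (timing/phase errors)
    return bias2, var_term, mmse

obs = np.array([30.0, 42.0, 55.0, 61.0, 48.0, 35.0])  # hypothetical hourly ozone, ppb
mod = np.array([28.0, 47.0, 50.0, 66.0, 52.0, 30.0])
bias2, var_term, mmse = mse_components(obs, mod)
# The three parts sum exactly to the mean square error
print(round(bias2 + var_term + mmse, 2), round(np.mean((obs - mod) ** 2), 2))
```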

  20. Validating the ACE Model for Evaluating Student Performance Using a Teaching-Learning Process Based on Computational Modeling Systems

    ERIC Educational Resources Information Center

    Louzada, Alexandre Neves; Elia, Marcos da Fonseca; Sampaio, Fábio Ferrentini; Vidal, Andre Luiz Pestana

    2014-01-01

    The aim of this work is to adapt and test, in a Brazilian public school, the ACE model proposed by Borkulo for evaluating student performance as a teaching-learning process based on computational modeling systems. The ACE model is based on different types of reasoning involving three dimensions. In addition to adapting the model and introducing…

  1. Optical modeling and physical performances evaluations for the JT-60SA ECRF antenna

    SciTech Connect

    Platania, P. Figini, L.; Farina, D.; Micheletti, D.; Moro, A.; Sozzi, C.; Isayama, A.; Kobayashi, T.; Moriyama, S.

    2015-12-10

    The purpose of this work is the optical modeling and physical performance evaluation of the JT-60SA ECRF launcher system. The beams have been simulated with the electromagnetic code GRASP® and used as input for ECCD calculations performed with the beam tracing code GRAY, which is capable of modeling propagation, absorption and current drive of an EC Gaussian beam with general astigmatism. Full details of the optical analysis have been taken into account to model the launched beams. Inductive and advanced reference scenarios have been analysed for physical evaluations over the full poloidal and toroidal steering ranges for two slightly different layouts of the launcher system.

  2. Optical modeling and physical performances evaluations for the JT-60SA ECRF antenna

    NASA Astrophysics Data System (ADS)

    Platania, P.; Figini, L.; Farina, D.; Isayama, A.; Kobayashi, T.; Micheletti, D.; Moriyama, S.; Moro, A.; Sozzi, C.

    2015-12-01

    The purpose of this work is the optical modeling and physical performance evaluation of the JT-60SA ECRF launcher system. The beams have been simulated with the electromagnetic code GRASP® and used as input for ECCD calculations performed with the beam tracing code GRAY, which is capable of modeling propagation, absorption and current drive of an EC Gaussian beam with general astigmatism. Full details of the optical analysis have been taken into account to model the launched beams. Inductive and advanced reference scenarios have been analysed for physical evaluations over the full poloidal and toroidal steering ranges for two slightly different layouts of the launcher system.

  3. HYDROLOGIC EVALUATION OF LANDFILL PERFORMANCE (HELP) MODEL: USER'S GUIDE FOR VERSION 3

    EPA Science Inventory

    The Hydrologic Evaluation of Landfill Performance (HELP) computer program is a quasi-two-dimensional hydrologic model of water movement across, into, through and out of landfills. The model accepts weather, soil and design data. Landfill systems including various combinations of ve...

  4. Important Physiological Parameters and Physical Activity Data for Evaluating Exposure Modeling Performance: a Synthesis

    EPA Science Inventory

    The purpose of this report is to develop a database of physiological parameters needed for understanding and evaluating performance of the APEX and SHEDS exposure/intake dose rate model used by the Environmental Protection Agency (EPA) as part of its regulatory activities. The A...

  5. Stochastic performance modeling and evaluation of obstacle detectability with imaging range sensors

    NASA Technical Reports Server (NTRS)

    Matthies, Larry; Grandjean, Pierrick

    1993-01-01

    Statistical modeling and evaluation of the performance of obstacle detection systems for Unmanned Ground Vehicles (UGVs) is essential for the design, evaluation, and comparison of sensor systems. In this report, we address this issue for imaging range sensors by dividing the evaluation problem into two levels: quality of the range data itself and quality of the obstacle detection algorithms applied to the range data. We review existing models of the quality of range data from stereo vision and AM-CW LADAR, then use these to derive a new model for the quality of a simple obstacle detection algorithm. This model predicts the probability of detecting obstacles and the probability of false alarms, as a function of the size and distance of the obstacle, the resolution of the sensor, and the level of noise in the range data. We evaluate these models experimentally using range data from stereo image pairs of a gravel road with known obstacles at several distances. The results show that the approach is a promising tool for predicting and evaluating the performance of obstacle detection with imaging range sensors.
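
    As an illustration of how such a model maps range noise to detection statistics, the sketch below evaluates a simple height-threshold detector under Gaussian noise. The detector, threshold, and noise levels are hypothetical and not the authors' exact formulation.

```python
# Sketch of a simple threshold detector under Gaussian range noise (an illustration of the
# kind of prediction such a model makes, not the authors' formulation).
from math import erf, sqrt

def gaussian_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def detection_probabilities(obstacle_height, threshold, sigma):
    """P(detect) and P(false alarm) for a detector that flags height estimates above a threshold."""
    p_detect = 1.0 - gaussian_cdf((threshold - obstacle_height) / sigma)
    p_false_alarm = 1.0 - gaussian_cdf(threshold / sigma)   # flat ground, true height 0
    return p_detect, p_false_alarm

# Hypothetical numbers: 20 cm obstacle, 12 cm threshold, height noise growing with range
for sigma in (0.02, 0.05, 0.10):
    pd, pfa = detection_probabilities(0.20, 0.12, sigma)
    print(sigma, round(pd, 3), round(pfa, 4))  # detection drops and false alarms rise with noise
```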

  6. Source term model evaluations for the low-level waste facility performance assessment

    SciTech Connect

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  7. Power plant performance modeling: dynamic model evaluation. [Comparison of MMS and RETRAN codes

    SciTech Connect

    DiDomenico, P.N.; Shor, S.W.W.

    1981-10-01

    The dynamic performance of the turbine and feedwater train of a 550-MW oil-fired plant has been modeled by two modeling systems, the Modular Modeling System (MMS) and the Reactor Transient Analysis System (RETRAN). This report documents the performance of each modeling system and provides results on which the reader may be able to base a judgement as to the usefulness of each system for his modeling purposes. It compares transients simulated by the MMS with those recorded during tests conducted on an operating power plant. Specific information is provided on the type of model constructed, the agreement between blind predictions and measurements, and the level and type of modeling effort required together with computer run time. This program could not have been carried out without the willing support of the Boston Edison Company, which performed the transient tests on its Mystic Unit 7 and provided extensive engineering support to the program through the provision of detailed information on the power plant equipment and systems. It is believed that the insights into plant operation provided by the testing program itself resulted in more than sufficient improvement in plant efficiency to pay for the entire test program, but this could not have been foreseen by Boston Edison when they offered the plant for testing.

  8. Wind Evaluation Breadboard: mechanical design and analysis, control architecture, dynamic model, and performance simulation

    NASA Astrophysics Data System (ADS)

    Reyes García-Talavera, Marcos; Viera, Teodora; Núñez, Miguel; Zuluaga, Pablo; Ronquillo, Bernardo; Ronquillo, Mariano; Brunetto, Enzo; Quattri, Marco; Castro, Javier; Hernández, Elvio

    2008-07-01

    The Wind Evaluation Breadboard (WEB) for the European Extremely Large Telescope (ELT) is a primary mirror and telescope simulator formed by seven segment simulators, including position sensors, electromechanical support systems and support structures. The purpose of the WEB is to evaluate the performance of the control of wind buffeting disturbance on ELT segmented mirrors using an electro-mechanical set-up which simulates the real operational constraints applied to large segmented mirrors. The instrument has been designed and developed by IAC, ALTRAN, JUPASA and ESO, with FOGALE responsible for the Edge Sensors and TNO for the Position Actuators. This paper describes the mechanical design and analysis, the control architecture, the dynamic model generated from the Finite Element Model, and the closed-loop performance achieved in simulations. A comparison of control performance between segment modal control and local actuator control is also presented.

  9. Evaluating Nextgen Closely Spaced Parallel Operations Concepts with Validated Human Performance Models: Flight Deck Guidelines

    NASA Technical Reports Server (NTRS)

    Hooey, Becky Lee; Gore, Brian Francis; Mahlstedt, Eric; Foyle, David C.

    2013-01-01

    The objectives of the current research were to develop valid human performance models (HPMs) of approach and land operations; use these models to evaluate the impact of NextGen Closely Spaced Parallel Operations (CSPO) on pilot performance; and draw conclusions regarding flight deck display design and pilot-ATC roles and responsibilities for NextGen CSPO concepts. This document presents guidelines and implications for flight deck display designs and candidate roles and responsibilities. A companion document (Gore, Hooey, Mahlstedt, & Foyle, 2013) provides complete scenario descriptions and results including predictions of pilot workload, visual attention and time to detect off-nominal events.

  10. Towards a benchmark simulation model for plant-wide control strategy performance evaluation of WWTPs.

    PubMed

    Jeppsson, U; Rosen, C; Alex, J; Copp, J; Gernaey, K V; Pons, M N; Vanrolleghem, P A

    2006-01-01

    The COST/IWA benchmark simulation model has been available for seven years. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the benchmark has resulted in more than 100 publications, not only in Europe but also worldwide, demonstrates the interest in such a tool within the research community. In this paper, an extension of the benchmark simulation model no. 1 (BSM1) is proposed. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both pre-treatment of wastewater as well as the processes describing sludge treatment. The motivation for the extension is the increasing interest and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In the paper, the extended plant layout is proposed and the new suggested process models are described briefly. Models for influent file design, the benchmarking procedure and the evaluation criteria are also discussed. Finally, some important remaining topics, for which consensus is required, are identified. PMID:16532759

  11. Performance evaluation of hydrological models: Statistical significance for reducing subjectivity in goodness-of-fit assessments

    NASA Astrophysics Data System (ADS)

    Ritter, Axel; Muñoz-Carpena, Rafael

    2013-02-01

    Success in the use of computer models for simulating environmental variables and processes requires objective model calibration and verification procedures. Several methods for quantifying the goodness-of-fit of observations against model-calculated values have been proposed, but none of them is free of limitations and their interpretation is often ambiguous. When a single indicator is used, it may lead to incorrect verification of the model. Instead, a combination of graphical results, absolute value error statistics (i.e. root mean square error), and normalized goodness-of-fit statistics (i.e. Nash-Sutcliffe Efficiency coefficient, NSE) is currently recommended. Interpretation of NSE values is often subjective, and may be biased by the magnitude and number of data points, data outliers and repeated data. The statistical significance of the performance statistics is an aspect generally ignored that helps in reducing subjectivity in the proper interpretation of the model performance. In this work, approximated probability distributions for two common indicators (NSE and root mean square error) are derived with bootstrapping (block bootstrapping when dealing with time series), followed by bias-corrected and accelerated calculation of confidence intervals. Hypothesis testing of the indicators exceeding threshold values is proposed in a unified framework for statistically accepting or rejecting the model performance. It is illustrated how model performance is not linearly related with NSE, which is critical for its proper interpretation. Additionally, the sensitivity of the indicators to model bias, outliers and repeated data is evaluated. The potential of the difference between root mean square error and mean absolute error for detecting outliers is explored, showing that this may be considered a necessary but not a sufficient condition of outlier presence. The usefulness of the approach for the evaluation of model performance is illustrated with case studies including those with
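
    A minimal sketch of the kind of computation described, assuming synthetic observed and simulated series: it evaluates NSE and RMSE and attaches block-bootstrap confidence intervals using the simple percentile method (the paper's bias-corrected and accelerated intervals are not reproduced here). The block length, bootstrap count, and data are illustrative assumptions.

      # Python sketch: NSE and RMSE with block-bootstrap percentile confidence intervals.
      import numpy as np

      def nse(obs, sim):
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def rmse(obs, sim):
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return float(np.sqrt(np.mean((obs - sim) ** 2)))

      def block_bootstrap_ci(obs, sim, stat, block_len=10, n_boot=2000, alpha=0.05, seed=0):
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          rng = np.random.default_rng(seed)
          n = len(obs)
          n_blocks = int(np.ceil(n / block_len))
          stats = []
          for _ in range(n_boot):
              starts = rng.integers(0, n - block_len + 1, size=n_blocks)
              idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
              stats.append(stat(obs[idx], sim[idx]))
          return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

      if __name__ == "__main__":
          t = np.arange(365)
          obs = 10 + 5 * np.sin(2 * np.pi * t / 365) + np.random.default_rng(1).normal(0, 1, t.size)
          sim = 10 + 4.5 * np.sin(2 * np.pi * t / 365)
          print("NSE =", round(nse(obs, sim), 3), "95% CI:", block_bootstrap_ci(obs, sim, nse))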

  12. Evaluating stream health based environmental justice model performance at different spatial scales

    NASA Astrophysics Data System (ADS)

    Daneshvar, Fariborz; Nejadhashemi, A. Pouyan; Zhang, Zhen; Herman, Matthew R.; Shortridge, Ashton; Marquart-Pyatt, Sandra

    2016-07-01

    This study evaluated the effects of spatial resolution on environmental justice analysis concerning stream health. The Saginaw River Basin in Michigan was selected since it is an area of concern in the Great Lakes basin. Three Bayesian Conditional Autoregressive (CAR) models (ordinary regression, weighted regression and spatial) were developed for each stream health measure based on 17 socioeconomic and physiographical variables at three census levels. For all stream health measures, spatial models had better performance compared to the two non-spatial ones at the census tract and block group levels. Meanwhile, no spatial dependency was found at the county level. Multilevel Bayesian CAR models were also developed to understand the spatial dependency at the three levels. Results showed that considering level interactions improved the models' predictions. Residual plots also showed that models developed at the block group and census tract levels (in contrast to county-level models) are able to capture spatial variations.

  13. Systematic Land-Surface-Model Performance Evaluation on different time scales

    NASA Astrophysics Data System (ADS)

    Mahecha, M. D.; Jung, M.; Reichstein, M.; Beer, C.; Braakhekke, M.; Carvalhais, N.; Lange, H.; Lasslop, G.; Le Maire, G.; Seneviratne, S. I.; Vetter, M.

    2008-12-01

    Keeping track of the space-time evolution of CO2 and H2O fluxes between the terrestrial biosphere and atmosphere is essential to our understanding of current climate. Monitoring fluxes at site level is one option to characterize the temporal development of ecosystem-atmosphere interactions. Nevertheless, many aspects of ecosystem-atmosphere fluxes become meaningful only when interpreted in time over larger geographical regions. Empirical and process-based models play a key role in spatial and temporal upscaling exercises. In this context, comparative model performance evaluations at site level are indispensable. We present a model evaluation scheme which investigates the model-data agreement separately on different time scales. Observed and modeled time series were decomposed by essentially nonparametric techniques into subsignals (time scales) of characteristic fluctuations. By evaluating the extracted subsignals of observed and modeled C fluxes (gross and net ecosystem exchange, GEE and NEE, and terrestrial ecosystem respiration, TER) separately, we obtain scale-dependent performances for the different evaluation measures. Our diagnostic model comparison allows uncovering time scales of model-data agreement and fundamental mismatch. We focus on the systematic evaluation of three land-surface models: Biome-BGC, ORCHIDEE, and LPJ. For the first time all models were driven by consistent site meteorology and compared to respective Eddy-Covariance flux observations. The results show that correct net C fluxes may result from systematic (simultaneous) biases in TER and GEE on specific time scales of variation. We localize significant model-data mismatches of the annual-seasonal cycles in time and illustrate the recurrence characteristics of such problems. For example, LPJ underestimates GEE during winter months and overestimates it in early summer at specific sites. In contrast, ORCHIDEE overestimates the flux from July to September at these sites. Finally

  14. MODELING AND PERFORMANCE EVALUATION FOR AVIATION SECURITY CARGO INSPECTION QUEUING SYSTEM

    SciTech Connect

    Allgood, Glenn O; Olama, Mohammed M; Rose, Terri A; Brumback, Daryl L

    2009-01-01

    Beginning in 2010, the U.S. will require that all cargo loaded in passenger aircraft be inspected. This will require more efficient processing of cargo and will have a significant impact on the inspection protocols and business practices of government agencies and the airlines. In this paper, we conduct a performance evaluation study for an aviation security cargo inspection queuing system for material flow and accountability. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered, such as system capacity, residual capacity, and throughput. These metrics are performance indicators of the system's ability to service current needs and its response capacity to additional requests. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures will reduce the overall cost and shipping delays associated with the new inspection requirements.
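
    The abstract does not give the model's structure, so the sketch below is only a generic single-station M/M/c approximation of an inspection lane group, showing how utilization, capacity, residual capacity, and mean waiting time of the kind mentioned above can be computed; the arrival rate, service rate, and lane count are hypothetical, not values from the paper.

      # Python sketch (illustrative): M/M/c inspection-station metrics via the Erlang C formula.
      import math

      def mmc_metrics(arrival_rate, service_rate, servers):
          """arrival_rate and service_rate in pallets/hour; servers = number of inspection lanes."""
          rho = arrival_rate / (servers * service_rate)          # utilization
          if rho >= 1.0:
              raise ValueError("Unstable system: offered load exceeds capacity")
          a = arrival_rate / service_rate
          # Erlang C probability that an arriving pallet must wait
          p0_inv = sum(a**k / math.factorial(k) for k in range(servers)) + \
                   a**servers / (math.factorial(servers) * (1 - rho))
          p_wait = (a**servers / (math.factorial(servers) * (1 - rho))) / p0_inv
          lq = p_wait * rho / (1 - rho)                          # mean queue length
          wq = lq / arrival_rate                                 # mean wait (hours)
          capacity = servers * service_rate                      # max sustainable throughput
          return {"utilization": rho, "throughput": arrival_rate,
                  "capacity": capacity, "residual_capacity": capacity - arrival_rate,
                  "mean_wait_hr": wq}

      print(mmc_metrics(arrival_rate=40.0, service_rate=12.0, servers=4))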

  15. The Third Phase of AQMEII: Evaluation Strategy and Multi-Model Performance Analysis

    EPA Science Inventory

    AQMEII (Air Quality Model Evaluation International Initiative) is an extraordinary effort promoting policy-relevant research on regional air quality model evaluation across the European and North American atmospheric modelling communities, providing the ideal platform for advanci...

  16. Seasonal versus Episodic Performance Evaluation for an Eulerian Photochemical Air Quality Model

    SciTech Connect

    Jin, Ling; Brown, Nancy J.; Harley, Robert A.; Bao, Jian-Wen; Michelson, Sara A; Wilczak, James M

    2010-04-16

    This study presents a detailed evaluation of the seasonal and episodic performance of the Community Multiscale Air Quality (CMAQ) modeling system applied to simulate air quality at a fine grid spacing (4 km horizontal resolution) in central California, where ozone air pollution problems are severe. A rich aerometric database collected during the summer 2000 Central California Ozone Study (CCOS) is used to prepare model inputs and to evaluate meteorological simulations and chemical outputs. We examine both temporal and spatial behaviors of ozone predictions. We highlight synoptically driven high-ozone events (exemplified by the four intensive operating periods (IOPs)) for evaluating both meteorological inputs and chemical outputs (ozone and its precursors) and compare them to the summer average. For most of the summer days, cross-domain normalized gross errors are less than 25% for modeled hourly ozone, and normalized biases are within ±15% for both hourly and peak (1 h and 8 h) ozone. The domain-wide aggregated metrics indicate similar performance between the IOPs and the whole summer with respect to predicted ozone and its precursors. Episode-to-episode differences in ozone predictions are more pronounced at a subregional level. The model performs consistently better in the San Joaquin Valley than in other air basins, and episodic ozone predictions there are similar to the summer average. Poorer model performance (normalized peak ozone biases <-15% or >15%) is found in the Sacramento Valley and the Bay Area and is most noticeable in episodes that are subject to the largest uncertainties in meteorological fields (wind directions in the Sacramento Valley and timing and strength of onshore flow in the Bay Area) within the boundary layer.
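
    For reference, the normalized bias and normalized gross error statistics quoted above are conventionally computed from paired hourly observations and predictions as sketched below; the 40 ppb observation cutoff is a commonly used assumption for ozone evaluation, not a value stated in the abstract.

      # Python sketch of standard ozone evaluation metrics over paired hourly values.
      import numpy as np

      def normalized_bias(obs, mod, cutoff=40.0):
          """Mean normalized bias (%), using hours with observed ozone >= `cutoff` ppb (assumed)."""
          obs, mod = np.asarray(obs, float), np.asarray(mod, float)
          mask = obs >= cutoff
          return 100.0 * np.mean((mod[mask] - obs[mask]) / obs[mask])

      def normalized_gross_error(obs, mod, cutoff=40.0):
          """Mean normalized gross error (%)."""
          obs, mod = np.asarray(obs, float), np.asarray(mod, float)
          mask = obs >= cutoff
          return 100.0 * np.mean(np.abs(mod[mask] - obs[mask]) / obs[mask])

      obs = np.array([55., 62., 71., 80., 48.])   # observed hourly ozone, ppb (illustrative)
      mod = np.array([50., 70., 68., 90., 52.])   # modeled hourly ozone, ppb (illustrative)
      print(normalized_bias(obs, mod), normalized_gross_error(obs, mod))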

  17. Methodologies for evaluating performance and assessing uncertainty of atmospheric dispersion models

    NASA Astrophysics Data System (ADS)

    Chang, Joseph C.

    This thesis describes methodologies to evaluate the performance and to assess the uncertainty of atmospheric dispersion models, tools that predict the fate of gases and aerosols upon their release into the atmosphere. Because of the large economic and public-health impacts often associated with the use of dispersion model results, these models should be properly evaluated, and their uncertainty should be properly accounted for and understood. The CALPUFF, HPAC, and VLSTRACK dispersion modeling systems were applied to the Dipole Pride (DP26) field data (~20 km in scale), in order to demonstrate the evaluation and uncertainty assessment methodologies. Dispersion model performance was found to be strongly dependent on the wind models used to generate gridded wind fields from observed station data. This is because, despite the fact that the test site was a flat area, the observed surface wind fields still showed considerable spatial variability, partly because of the surrounding mountains. It was found that the two components were comparable for the DP26 field data, with variability more important than uncertainty closer to the source, and less important farther away from the source. Therefore, reducing data errors for input meteorology may not necessarily increase model accuracy due to random turbulence. DP26 was a research-grade field experiment, where the source, meteorological, and concentration data were all well-measured. Another typical application of dispersion modeling is a forensic study where the data are usually quite scarce. An example would be the modeling of the alleged releases of chemical warfare agents during the 1991 Persian Gulf War, where the source data had to rely on intelligence reports, and where Iraq had stopped reporting weather data to the World Meteorological Organization since the 1981 Iran-Iraq war. Therefore the meteorological fields inside Iraq must be estimated by models such as prognostic mesoscale meteorological models, based on

  18. The School Improvement Model: Tailoring a Teacher and Administrator Performance Evaluation System to Meet the Needs of the School Organization.

    ERIC Educational Resources Information Center

    Walker, Retia Scott

    Described here are the planning and development stages of the Teacher Performance Evaluation (TPE) system and the Administrator Performance Evaluation (APE) system that are components of the School Improvement Model project undertaken by schools in Iowa and Minnesota. One goal is an evaluation system tailored to fit the needs of the school…

  19. An evaluation of the performance of the soil temperature simulation algorithms used in the PRZM model.

    PubMed

    Tsiros, I X; Dimopoulos, I F

    2007-04-01

    Soil temperature simulation is an important component in environmental modeling since it is involved in several aspects of pollutant transport and fate. This paper deals with the performance of the soil temperature simulation algorithms of the well-known environmental model PRZM. Model results are compared and evaluated on the basis of the model's ability to predict in situ measured soil temperature profiles in an experimental plot during a 3-year monitoring study. The evaluation of the performance is based on linear regression statistics and typical model statistical errors such as the root mean square error (RMSE) and the normalized objective function (NOF). Results show that the model required minimal calibration to match the observed response of the system. Values of the determination coefficient R(2) were found to be in all cases around the value of 0.98, indicating a very good agreement between measured and simulated data. Values of the RMSE were found to be in the range of 1.2 to 1.4 degrees C, 1.1 to 1.4 degrees C, 0.9 to 1.1 degrees C, and 0.8 to 1.1 degrees C for the examined 2, 5, 10 and 20 cm soil depths, respectively. Sensitivity analyses were also performed to investigate the influence of various factors involved in the energy balance equation at the ground surface on the soil temperature profiles. The results showed that the model was able to represent important processes affecting the soil temperature regime, such as the combined effect of the heat transfer by convection between the ground surface and the atmosphere and the latent heat flux due to soil water evaporation. PMID:17454373
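
    A brief sketch of the two error statistics named above. NOF is computed here as RMSE divided by the mean of the observations, which is a common definition and an assumption on our part rather than a formula quoted from the paper; the temperature values are illustrative.

      # Python sketch: RMSE and a normalized objective function (NOF = RMSE / mean of observations).
      import numpy as np

      def rmse(obs, sim):
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return float(np.sqrt(np.mean((sim - obs) ** 2)))

      def nof(obs, sim):
          return rmse(obs, sim) / float(np.mean(obs))

      obs = np.array([12.1, 14.3, 16.8, 18.0, 17.2])   # measured soil temperature, deg C (illustrative)
      sim = np.array([11.5, 14.9, 17.4, 18.8, 16.5])   # simulated soil temperature, deg C (illustrative)
      print("RMSE =", round(rmse(obs, sim), 2), "deg C;  NOF =", round(nof(obs, sim), 3))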

  20. Evaluation of Blade-Strike Models for Estimating the Biological Performance of Large Kaplan Hydro Turbines

    SciTech Connect

    Deng, Zhiqun; Carlson, Thomas J.; Ploskey, Gene R.; Richmond, Marshall C.

    2005-11-30

    BioIndex testing of hydro-turbines is sought as an analog to the hydraulic index testing conducted on hydro-turbines to optimize their power production efficiency. In BioIndex testing the goal is to identify those operations within the range identified by Index testing where the survival of fish passing through the turbine is maximized. BioIndex testing includes the immediate tailrace region as well as the turbine environment between a turbine's intake trashracks and the exit of its draft tube. The US Army Corps of Engineers and the Department of Energy have been evaluating a variety of means, such as numerical and physical turbine models, to investigate the quality of flow through a hydro-turbine and other aspects of the turbine environment that determine its safety for fish. The goal is to use these tools to develop hypotheses identifying turbine operations and predictions of their biological performance that can be tested at prototype scales. Acceptance of hypotheses would be the means for validation of new operating rules for the turbine tested that would be in place when fish were passing through the turbines. The overall goal of this project is to evaluate the performance of numerical blade strike models as a tool to aid development of testable hypotheses for bioIndexing. Evaluation of the performance of numerical blade strike models is accomplished by comparing predictions of fish mortality resulting from strike by turbine runner blades with observations made using live test fish at mainstem Columbia River Dams and with other predictions of blade strike made using observations of beads passing through a 1:25 scale physical turbine model.

  1. Performance Evaluation and Modeling Techniques for Parallel Processors. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Dimpsey, Robert Tod

    1992-01-01

    In practice, the performance evaluation of supercomputers is still substantially driven by single-point estimates of metrics (e.g., MFLOPS) obtained by running characteristic benchmarks or workloads. With the rapid increase in the use of time-shared multiprogramming in these systems, such measurements are clearly inadequate. This is because multiprogramming and system overhead, as well as other degradations in performance due to time-varying characteristics of workloads, are not taken into account. In multiprogrammed environments, multiple jobs and users can dramatically increase the amount of system overhead and degrade the performance of the machine. Performance techniques, such as benchmarking, which characterize performance on a dedicated machine ignore this major component of true computer performance. Due to the complexity of analysis, there has been little work done in analyzing, modeling, and predicting the performance of applications in multiprogrammed environments. This is especially true for parallel processors, where the costs and benefits of multi-user workloads are exacerbated. While some may claim that the issue of multiprogramming is not a viable one in the supercomputer market, experience shows otherwise. Even in recent massively parallel machines, multiprogramming is a key component. It has even been claimed that a partial cause of the demise of the CM2 was the fact that it did not efficiently support time-sharing. In the same paper, Gordon Bell postulates that multicomputers will evolve to multiprocessors in order to support efficient multiprogramming. Therefore, it is clear that parallel processors of the future will be required to offer the user a time-shared environment with reasonable response times for the applications. In this type of environment, the most important performance metric is the completion, or response, time of a given application. However, few evaluation efforts have addressed this issue.

  2. Evaluation of the Performance of Smoothing Functions in Generalized Additive Models for Spatial Variation in Disease

    PubMed Central

    Siangphoe, Umaporn; Wheeler, David C.

    2015-01-01

    Generalized additive models (GAMs) with bivariate smoothing functions have been applied to estimate spatial variation in risk for many types of cancers. Only a handful of studies have evaluated the performance of smoothing functions applied in GAMs with regard to different geographical areas of elevated risk and different risk levels. This study evaluates the ability of different smoothing functions to detect overall spatial variation of risk and elevated risk in diverse geographical areas at various risk levels using a simulation study. We created five scenarios with different true risk area shapes (circle, triangle, linear) in a square study region. We applied four different smoothing functions in the GAMs, including two types of thin plate regression splines (TPRS) and two versions of locally weighted scatterplot smoothing (loess). We tested the null hypothesis of constant risk and detected areas of elevated risk using analysis of deviance with permutation methods and assessed the performance of the smoothing methods based on the spatial detection rate, sensitivity, accuracy, precision, power, and false-positive rate. The results showed that all methods had a higher sensitivity and a consistently moderate-to-high accuracy rate when the true disease risk was higher. The models generally performed better in detecting elevated risk areas than detecting overall spatial variation. One of the loess methods had the highest precision in detecting overall spatial variation across scenarios and outperformed the other methods in detecting a linear elevated risk area. The TPRS methods outperformed loess in detecting elevated risk in two circular areas. PMID:25983545

  3. Performance Evaluation of an Intelligent Agents Based Model within Irregular WSN Topologies

    NASA Astrophysics Data System (ADS)

    Ospina, Alberto Piedrahita; Cañola, Alcides Montoya; Carranza, Demetrio Ovalle

    There are many approaches proposed by the scientific community for the implementation and development of Wireless Sensor Networks (WSN). These approaches correspond to different areas of science, such as Electronics, Communications, Computing, Ubiquity, and Quality of Service, among others. However, all are subject to the same constraints, because of the nature of WSN devices. The most common constraints of a WSN are energy consumption, the organization of network nodes, the sensor network's task reprogramming, the reliability of data transmission, resource optimization (memory and processing), etc. In the Artificial Intelligence area, a Distributed System Approach based on Mobile Intelligent Agents has been proposed. An integration model of Mobile Intelligent Agents within Wireless Sensor Networks solves some of the constraints presented above for WSN topologies. However, the model was only tested on square topologies. The aim of this paper is therefore to evaluate the performance of this model in irregular topologies.

  4. Energy harvesting from the discrete gust response of a piezoaeroelastic wing: Modeling and performance evaluation

    NASA Astrophysics Data System (ADS)

    Xiang, Jinwu; Wu, Yining; Li, Daochun

    2015-05-01

    The objective of this paper is to investigate energy harvesting from the unfavorable gust response of a piezoelectric wing. An aeroelectroelastic model is built for the evaluation and improvement of the harvesting performance. The structural model is built based on the Euler-Bernoulli beam theory. The unsteady aerodynamics, combined with the 1-cosine gust load, are obtained from Jones' approximation of the Wagner function. The state-space equation of the aeroelectroelastic model is derived and solved numerically. The energy conversion efficiency and output density are defined to evaluate the harvesting performance. The effects of the sizes and location of the piezoelectric transducers, the load resistance in the external circuit, and the locations of the elastic axis and gravity center axis of the wing are studied, respectively. The results show that, for a given width of the transducers in the chordwise direction, there is one transducer thickness corresponding to the highest conversion efficiency and a smaller optimal thickness for the output density. The conversion efficiency has an approximately linear relationship with the width. When the transducers are placed at the wing root, a maximum conversion efficiency is reached at a certain length in the spanwise direction, whereas a smaller length helps reach a larger output density. One optimal resistance is found to maximize the conversion efficiency. The rearward shift of either the elastic axis or the gravity center axis improves the energy output while reducing the conversion efficiency.
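
    For context, the "1-cosine" discrete gust named above is conventionally written as w_g(t) = (w_g0/2)[1 - cos(pi V t / H)] over two gust gradient lengths H. The sketch below implements that standard profile and a generic conversion-efficiency ratio (electrical energy dissipated in the load over aeroelastic input work); the flight speed, gust length, amplitude, and efficiency inputs are illustrative assumptions, not values from the paper.

      # Python sketch: standard 1-cosine discrete gust profile and a generic conversion efficiency.
      import numpy as np

      def one_minus_cosine_gust(t, flight_speed, gust_length, peak_velocity):
          """Vertical gust velocity w_g(t) for a 1-cosine gust with gradient length `gust_length`."""
          s = flight_speed * np.asarray(t, float)              # distance penetrated into the gust
          w = 0.5 * peak_velocity * (1.0 - np.cos(np.pi * s / gust_length))
          w[s > 2.0 * gust_length] = 0.0                       # gust ends after two gradient lengths
          return w

      def conversion_efficiency(voltage, load_resistance, input_work, dt):
          """Ratio of electrical energy dissipated in the load to the aeroelastic input work."""
          electrical_energy = np.sum(np.asarray(voltage) ** 2 / load_resistance) * dt
          return electrical_energy / input_work

      t = np.linspace(0.0, 0.5, 501)
      w = one_minus_cosine_gust(t, flight_speed=30.0, gust_length=3.0, peak_velocity=2.0)
      print("peak gust velocity:", w.max(), "m/s")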

  5. Performance Evaluation of Public Hospital Information Systems by the Information System Success Model

    PubMed Central

    Cho, Kyoung Won; Bae, Sung-Kwon; Ryu, Ji-Hye; Kim, Kyeong Na; An, Chang-Ho

    2015-01-01

    Objectives This study was conducted to evaluate the performance of the newly developed information system (IS) implemented on July 1, 2014 at three public hospitals in Korea. Methods User satisfaction scores of twelve key performance indicators of six IS success factors based on the DeLone and McLean IS Success Model were utilized to evaluate IS performance before and after the newly developed system was introduced. Results All scores increased after system introduction except for the completeness of medical records and impact on the clinical environment. The relationships among the six IS factors were also analyzed to identify the important factors influencing three IS success factors (Intention to Use, User Satisfaction, and Net Benefits). All relationships were significant except for the relationships among Service Quality, Intention to Use, and Net Benefits. Conclusions The results suggest that hospitals should not only focus on system and information quality; rather, they should also continuously improve service quality to improve user satisfaction and eventually reach the full potential of IS performance. PMID:25705557

  6. Integrating Soft Set Theory and Fuzzy Linguistic Model to Evaluate the Performance of Training Simulation Systems.

    PubMed

    Chang, Kuei-Hu; Chang, Yung-Chia; Chain, Kai; Chung, Hsiang-Yu

    2016-01-01

    The advancement of high technologies and the arrival of the information age have caused changes to modern warfare. The military forces of many countries have partially replaced real training drills with training simulation systems to achieve combat readiness. However, a considerable variety of training simulation systems are used in military settings. In addition, differences in system set-up time, functions, the environment, and the competency of system operators, as well as incomplete information, have made it difficult to evaluate the performance of training simulation systems. To address the aforementioned problems, this study integrated the analytic hierarchy process, soft set theory, and the fuzzy linguistic representation model to evaluate the performance of various training simulation systems. Furthermore, importance-performance analysis was adopted to examine the influence of cost savings and training safety of training simulation systems. The findings of this study are expected to facilitate the application of military training simulation systems, avoid the waste of resources (e.g., low utility and idle time), and provide data for subsequent applications and analysis. To verify the method proposed in this study, numerical examples of the performance evaluation of training simulation systems were adopted and compared with the numerical results of an AHP and a novel AHP-based ranking technique. The results verified that not only could expert-provided questionnaire information be fully considered to lower the repetition rate of performance ranking, but a two-dimensional graph could also be used to help administrators allocate limited resources, thereby enhancing the investment benefits and training effectiveness of a training simulation system. PMID:27598390

  7. Formative Evaluation in the Performance Context.

    ERIC Educational Resources Information Center

    Dick, Walter; King, Debby

    1994-01-01

    Reviews the traditional formative evaluation model used by instructional designers; summarizes Kirkpatrick's model of evaluation; proposes the integration of part of Kirkpatrick's model with traditional formative evaluation; and discusses performance-context formative evaluation. (three references) (LRW)

  8. Solid rocket booster performance evaluation model. Volume 3: Sample case. [propellant combustion simulation/internal ballistics

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The solid rocket booster performance evaluation model (SRB-11) is used to predict internal ballistics in a sample motor. This motor contains a five-segment grain. The first segment has a 14-pointed star configuration with a web which wraps partially around the forward dome. The other segments are circular in cross-section and are tapered along the interior burning surface. Two of the segments are inhibited on the forward face. The nozzle is not assumed to be submerged. The performance prediction is broken into two simulation parts: first, the delivered end-item specific impulse and the propellant properties required as inputs for the internal ballistics module are determined; second, the internal ballistics for the entire burn duration of the motor are simulated.
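
    The record does not reproduce SRB-11's equations, so the sketch below only illustrates the generic quasi-steady relations an internal-ballistics module of this kind rests on: St. Robert's burn-rate law and the equilibrium chamber-pressure balance between propellant mass generation and nozzle mass flow. All coefficient values are illustrative assumptions unrelated to the SRB-11 inputs.

      # Python sketch of generic quasi-steady internal-ballistics relations (illustrative only).
      def burn_rate(p_chamber, a=3.8e-5, n=0.35):
          """St. Robert's law r = a * Pc^n (SI units: m/s, Pa); a and n are assumed values."""
          return a * p_chamber ** n

      def equilibrium_chamber_pressure(burn_area, throat_area, rho_propellant=1750.0,
                                       c_star=1550.0, a=3.8e-5, n=0.35):
          """Balance mass generated (rho * Ab * r) against mass expelled (Pc * At / c*)."""
          k = rho_propellant * a * c_star * burn_area / throat_area
          return k ** (1.0 / (1.0 - n))

      pc = equilibrium_chamber_pressure(burn_area=120.0, throat_area=0.8)
      print(f"chamber pressure ~ {pc/1e6:.1f} MPa, burn rate ~ {burn_rate(pc)*1000:.1f} mm/s")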

  9. Apprentice Performance Evaluation.

    ERIC Educational Resources Information Center

    Gast, Clyde W.

    The Granite City (Illinois) Steel apprentices are under a performance evaluation from entry to graduation. Federally approved, the program is guided by joint apprenticeship committees whose monthly meetings include performance evaluation from three information sources: journeymen, supervisors, and instructors. Journeymen's evaluations are made…

  10. Modeling and dynamic performance evaluation of target capture in robotic systems

    SciTech Connect

    Koevecses, J.; Cleghorn, W.L.; Fenton, R.G.

    2000-04-01

    In this paper, a dynamic system consisting of a robot manipulator and a target is analyzed. The target is considered in a general way as a dynamic subsystem having finite mass and moments of inertia (e.g., a rigid body or a second robot). The situation investigated is when the robot establishes interaction with the target in such a way that it intercepts and captures a reference element of the target. The analysis of target capture is divided into three phases in terms of time: the precapture, free motion (finite motion); the transition from free to constrained motion in the vicinity of interception and capture (impulsive motion); and the postcapture, constrained motion (finite motion). The greatest attention is paid to the analysis of the phase of transition, the impulsive motion, and dynamics of the system. Based on the use of impulsive constraints and the Jourdainian formulation of analytical dynamics, a novel approach is proposed for the dynamic modeling of target capture by a robot manipulator. The proposed approach is suitable to handle both finite and impulsive motions in a common analytical framework. Based on the dynamic model developed and using a geometric representation of the system's dynamics, a detailed analysis and a performance evaluation framework are presented for the phase of transition. Both rigid and structurally flexible models of robots are considered. For the performance evaluation analyses, two main concepts are proposed and corresponding performance measures are derived. These tools may be used in the analysis, design, and control of time-varying robotic systems. The dynamic system of a three-link robot arm capturing a rigid body is used to illustrate the material presented.

  11. Evaluation of blade-strike models for estimating the biological performance of large Kaplan hydro turbines

    SciTech Connect

    Deng, Z.; Carlson, T. J.; Ploskey, G. R.; Richmond, M. C.

    2005-11-01

    Bio-indexing of hydro turbines has been identified as an important means to optimize passage conditions for fish by identifying operations for existing and new design turbines that minimize the probability of injury. Cost-effective implementation of bio-indexing requires the use of tools such as numerical and physical turbine models to generate hypotheses for turbine operations that can be tested at prototype scales using live fish. Blade strike has been proposed as an index variable for the biological performance of turbines. This report reviews an evaluation of the use of numerical blade-strike models as a means to predict the probability of blade strike and injury of juvenile salmon smolts passing through large Kaplan turbines on the mainstem Columbia River.
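
    For orientation only, deterministic blade-strike models of the kind evaluated here are often built around the simple idea that strike probability is the blade-passage frequency multiplied by the time a fish of a given length needs to cross the runner plane. The sketch below implements that commonly cited form; it may differ from the report's actual formulation, and all numerical inputs are hypothetical.

      # Python sketch of a heavily simplified deterministic blade-strike relation (illustrative).
      def blade_strike_probability(fish_length_m, n_blades, rpm, axial_velocity_mps,
                                   orientation_cos=1.0):
          """P_strike ~ (blade passage frequency) * (time the fish spends crossing the runner plane)."""
          blade_passages_per_s = n_blades * rpm / 60.0
          crossing_time_s = fish_length_m * orientation_cos / axial_velocity_mps
          return min(1.0, blade_passages_per_s * crossing_time_s)

      # Example (assumed values): 150 mm smolt, 6-blade Kaplan runner at 90 rpm, 8 m/s axial velocity
      p = blade_strike_probability(0.15, 6, 90.0, 8.0)
      print(f"strike probability ~ {p:.3f}")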

  12. Evaluation of GLAS Demonstration Model Loop Heat Pipe Thermal Vacuum Performance with Various Fluid Charges

    NASA Technical Reports Server (NTRS)

    Baker, Charles; Butler, Dan; Ku, Jentung; Grob, Eric; Swanson, Ted; Nikitkin, Michael; Paquin, Krista C. (Technical Monitor)

    2001-01-01

    Two loop heat pipes (LHPs) are to be used for tight thermal control of the Geoscience Laser Altimeter System (GLAS) instrument, planned for flight in late 2001. The LHPs are charged with propylene as a working fluid. One LHP will be used to transport 110 W from a laser to a radiator; the other will transport 190 W from electronic boxes to a separate radiator. The application includes a large amount of thermal mass in each LHP system and low initial startup powers. This, along with some non-ideal flight design compromises, such as a less than ideal charge level for this design concept with a symmetrical secondary wick, led to inadequate performance of the flight LHPs during the flight thermal vacuum test in October of 2000. This presentation focuses on identifying the sources of the flight test difficulties by modifying the charge and test setup of the successfully tested Development Model Loop Heat Pipe (DM LHP). While very similar to the flight design, the DM LHP did have several significant differences in design and method of testing. These differences were evaluated for their effect on performance by conforming the DM LHP to look more like the flight units. The major difference that was evaluated was the relative fill level of the working fluid within the concentrically designed LHP compensation chamber. Other differences were also assessed through performance testing, including starter heater size and "hot biasing" of major interior components. Performance was assessed with respect to startup, low power operation, conductance, and control heater power. The results of the testing showed that performance improves as the initial charge increases and when the starter heater is made smaller. The "hot biasing" of the major components did not appear to have a detrimental effect. As a result of the DM LHP test results, modifications are being made to the flight units to increase the fluid charge and increase the watt-density of the starter heater.

  13. Evaluating and Improving the Performance of Common Land Model Using FLUXNET Data

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Dai, Y. J.; Dickinson, R. E.

    2015-12-01

    The Common Land Model (CoLM), which combines the best features of LSM, BATS, and IAP94, has been widely applied and has shown good quality. However, land surface processes are crucial for weather and climate model initialization, so it is necessary to constrain land surface model performance using observational data. In our preliminary work, eddy covariance measurements from 20 FLUXNET sites with over 100 site-years were used to evaluate CoLM simulations of energy balance fluxes under different climate conditions and vegetation categories. The results show that CoLM simulates all four energy fluxes well, with sensible heat flux (H) simulated better than latent heat flux (LE) and net radiation (Rnet) best. In terms of vegetation categories, CoLM performs best on evergreen needle-leaf forest among the 8 selected land cover types, and also shows a clear advantage for evergreen broadleaf forest. Although a good agreement between simulation and observation is found for seasonal cycles at the 20 sample sites, the model produces extreme bias mostly at summer noon, and does not show consistent bias among different seasons. This underestimate was associated with weaknesses in simulating soil water in dry seasons and with an incomplete description of photosynthesis; therefore, we will first focus on implementing mesophyll diffusion in CoLM to improve the physical representation of photosynthesis.

  14. Evaluating Teacher Performance Fairly.

    ERIC Educational Resources Information Center

    Sportsman, Michel Allain

    1986-01-01

    Describes foundation and development of a performance-based teacher evaluation method developed in Missouri which makes mastery learning the basis for outcomes of instruction. Eight discrete parts of the teaching act characterizing successful teaching, four criteria important in performance-based evaluation development, and four definable phases…

  15. A computer model for the evaluation of the effect of corneal topography on optical performance.

    PubMed

    Camp, J J; Maguire, L J; Cameron, B M; Robb, R A

    1990-04-15

    We developed a method that models the effect of irregular corneal surface topography on corneal optical performance. A computer program mimics the function of an optical bench. The method generates a variety of objects (single point, standard Snellen letters, low contrast Snellen letters, arbitrarily complex objects) in object space. The lens is the corneal surface evaluated by a corneal topography analysis system. The objects are refracted by the cornea by using raytracing analysis to produce an image, which is displayed on a video monitor. Optically degraded images are generated by raytracing analysis of selected irregular corneal surfaces, such as those from patients with keratoconus and those from patients having undergone epikeratophakia for aphakia. PMID:2330940

  16. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    PubMed

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55 % of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6 % of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed

  17. Modelling and evaluation of nitrogen removal performance in subsurface flow and free water surface constructed wetlands.

    PubMed

    Tunçsiper, B; Ayaz, S C; Akça, L

    2006-01-01

    With the aim of protecting drinking water sources in rural regions, pilot-scale subsurface flow (SSF) and free water surface flow (FWS) constructed wetland systems were evaluated for removal efficiencies of nitrogenous pollutants in tertiary-stage treated wastewaters (effluent from the Pasaköy biological nutrient removal plant). Five different hydraulic application rates and emergent (Canna, Cyperus, Typhia sp., Phragmites sp., Juncus, Poaceae, Paspalum and Iris) and floating (Pistia, Salvina and Lemna) plant species were assayed. The average annual NH4-N, NO3-N and organic-N treatment efficiencies were 81, 40 and 74% in SSFs and 76, 59 and 75% in FWSs, respectively. Two types of models (first-order plug flow and multiple regression) were used to estimate the system performances. Nitrification, denitrification and ammonification rate constant (k20) values in the SSF and FWS systems were 0.898 and 0.541 d-1, 0.486 and 0.502 d-1, and 0.986 and 0.908 d-1, respectively. Results show that, compared with the other model, the first-order plug flow model clearly estimates values slightly higher or lower than those observed. PMID:16889247
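
    A minimal sketch of a first-order plug-flow removal calculation of the kind referred to above, using one of the reported k20 values. The temperature-correction coefficient theta = 1.05, the retention time, the temperature, and the influent concentration are assumptions for illustration, not values from the paper.

      # Python sketch: first-order plug-flow removal with Arrhenius-type temperature correction.
      import math

      def effluent_concentration(c_in, k20_per_day, hrt_days, temp_c, theta=1.05):
          """C_out = C_in * exp(-k_T * HRT), with k_T = k20 * theta^(T - 20); theta is assumed."""
          k_t = k20_per_day * theta ** (temp_c - 20.0)
          return c_in * math.exp(-k_t * hrt_days)

      # Example: NH4-N entering an SSF bed at 10 mg/L, nitrification k20 = 0.898 d-1 (value
      # reported above), 3-day hydraulic retention time, 15 degrees C water temperature
      print(round(effluent_concentration(10.0, 0.898, 3.0, 15.0), 2), "mg/L")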

  18. Evaluation of Turbulence Models Performance in Predicting Incipient Cavitation in an Enlarged Step-Nozzle

    NASA Astrophysics Data System (ADS)

    Naseri, H.; Koukouvinis, P.; Gavaises, M.

    2015-12-01

    The predictive capability of RANS and LES models to calculate incipient cavitation of water in a step nozzle is assessed. The RANS models, namely Realizable k-ε, SST k-ω and the Reynolds Stress Model, did not predict any cavitation, due to the limitation of RANS models in predicting the low-pressure vortex cores. The LES WALE model was able to predict the cavitation by capturing the shear-layer instability and vortex shedding. The performance of a barotropic cavitation model and Rayleigh-Plesset-based cavitation models was compared using the WALE model. Although the phase-change formulation is different in these models, the predicted cavitation and flow field were not significantly different.

  19. Performance evaluation of Al-Zahra academic medical center based on Iran balanced scorecard model

    PubMed Central

    Raeisi, Ahmad Reza; Yarmohammadian, Mohammad Hossein; Bakhsh, Roghayeh Mohammadi; Gangi, Hamid

    2012-01-01

    Background: Growth and development in any country's national health system, without an efficient evaluation system, lacks the basic concepts and tools necessary for fulfilling the system's goals. The balanced scorecard (BSC) is a technique widely used to measure the performance of an organization. The basic core of the BSC is guided by the organization's vision and strategies, which are the bases for the formation of the four perspectives of the BSC. The goal of this research is the performance evaluation of Al-Zahra Academic Medical Center in Isfahan University of Medical Sciences, based on the Iran BSC model. Materials and Methods: This is a combination (quantitative-qualitative) research which was done at Al-Zahra Academic Medical Center in Isfahan University of Medical Sciences in 2011. The research population was hospital managers at different levels. The sampling method was purposive sampling, in which the key informed personnel participated in determining the performance indicators of the hospital as the BSC team members in focused discussion groups. After determining the conceptual elements in the focused discussion groups, the performance objectives (targets) and indicators of the hospital were determined and sorted into perspectives by the group discussion participants. Following that, the performance indicators were calculated by the experts according to the predetermined objectives; then, the score of each indicator and the mean score of each perspective were calculated. Results: Research findings included development of the organizational mission, vision, values, objectives, and strategies. The participants in the focus discussion group agreed upon five strategies: customer satisfaction, continuous quality improvement, development of human resources, supporting innovation, expansion of services, and improving productivity. Research participants also agreed upon four perspectives for the Al-Zahra hospital BSC. In the patients and community

  20. Application of Wavelet Filters in an Evaluation of Photochemical Model Performance

    EPA Science Inventory

    Air quality model evaluation can be enhanced with time-scale specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time scale information in observed ozone is not well captured by deterministic models and its incorporation into model pe...
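
    As a hedged sketch of the time-scale separation idea (not the EPA tool itself), the code below uses PyWavelets to split a synthetic hourly ozone series into a low-frequency baseline and a high-frequency residual; the wavelet family, decomposition level, and synthetic data are assumptions.

      # Python sketch using PyWavelets (pywt) to separate hourly- and multi-day-scale variation.
      import numpy as np
      import pywt

      rng = np.random.default_rng(0)
      hours = np.arange(24 * 30)                                   # one month of hourly ozone
      ozone = 40 + 15 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

      level = 4
      coeffs = pywt.wavedec(ozone, "db4", level=level)

      # Reconstruct the low-frequency (baseline) component from the approximation coefficients
      baseline = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")
      baseline = baseline[: ozone.size]
      high_frequency = ozone - baseline

      print("std of high-frequency (hourly-scale) component:", round(high_frequency.std(), 2))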

  1. Performance evaluation of groundwater model hydrostratigraphy from airborne electromagnetic data and lithological borehole logs

    NASA Astrophysics Data System (ADS)

    Marker, P. A.; Foged, N.; He, X.; Christiansen, A. V.; Refsgaard, J. C.; Auken, E.; Bauer-Gottwein, P.

    2015-09-01

    Large-scale hydrological models are important decision support tools in water resources management. The largest source of uncertainty in such models is the hydrostratigraphic model. The geometry and configuration of hydrogeological units are often poorly determined from hydrogeological data alone. Due to sparse sampling in space, lithological borehole logs may overlook structures that are important for groundwater flow at larger scales. Good spatial coverage along with high spatial resolution makes airborne electromagnetic (AEM) data valuable for the structural input to large-scale groundwater models. We present a novel method to automatically integrate large AEM data sets and lithological information into large-scale hydrological models. Clay-fraction maps are produced by translating geophysical resistivity into clay-fraction values using lithological borehole information. Voxel models of electrical resistivity and clay fraction are classified into hydrostratigraphic zones using k-means clustering. Hydraulic conductivity values of the zones are estimated by hydrological calibration using hydraulic head and stream discharge observations. The method is applied to a Danish case study. When hydrological performance was benchmarked against performance statistics from comparable hydrological models, the cluster models performed competitively. Calibrations of 11 hydrostratigraphic cluster models with 1-11 hydraulic conductivity zones showed improved hydrological performance with an increasing number of clusters. Beyond the 5-cluster model, hydrological performance did not improve. Due to reproducibility and the possibility of method standardization and automation, we believe that hydrostratigraphic model generation with the proposed method has important prospects for groundwater models used in water resources management.
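
    A compact sketch of the clustering step described above, under assumed synthetic data: voxel values of log-resistivity and translated clay fraction are standardized and grouped into hydrostratigraphic zones with k-means. The feature scaling, the five-zone choice, and the data are illustrative, not the study's actual configuration.

      # Python sketch: k-means classification of resistivity/clay-fraction voxels into zones.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(42)
      n_voxels = 5000
      log_resistivity = rng.normal(loc=1.5, scale=0.5, size=n_voxels)      # log10(ohm-m), synthetic
      clay_fraction = np.clip(1.2 - 0.6 * log_resistivity
                              + rng.normal(0, 0.1, n_voxels), 0.0, 1.0)    # synthetic translation

      features = StandardScaler().fit_transform(
          np.column_stack([log_resistivity, clay_fraction]))

      n_zones = 5   # e.g. the 5-cluster model beyond which performance did not improve
      zones = KMeans(n_clusters=n_zones, n_init=10, random_state=0).fit_predict(features)

      # Each zone would subsequently receive one hydraulic conductivity value in calibration
      for z in range(n_zones):
          print(f"zone {z}: {np.sum(zones == z)} voxels")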

  2. Inter-comparison and performance evaluation of chemistry transport models over Indian region

    NASA Astrophysics Data System (ADS)

    Govardhan, Gaurav R.; Nanjundiah, Ravi S.; Satheesh, S. K.; Moorthy, K. Krishna; Takemura, Toshihiko

    2016-01-01

    Aerosol loading over the South Asian region has the potential to affect the monsoon rainfall, Himalayan glaciers and regional air quality, with implications for the billions living in this region. While field campaigns and network observations provide primary data, they tend to be location/season specific. Numerical models are useful to regionalize such location-specific data. Studies have shown that numerical models underestimate the aerosol scenario over the Indian region, mainly due to shortcomings related to meteorology and the emission inventories used. In this context, we have evaluated the performance of two such chemistry-transport models, WRF-Chem and SPRINTARS, over an India-centric domain. The models differ in many aspects, including physical domain, horizontal resolution, and meteorological forcing. Despite these differences, both models simulated similar spatial patterns of black carbon (BC) mass concentration (with a spatial correlation of 0.9 with each other) and reasonable estimates of its concentration, though both underestimated it relative to the observations. While the emissions are lower (higher) in SPRINTARS (WRF-Chem), overestimation of wind parameters in WRF-Chem caused the concentrations to be similar in both models. Additionally, we quantified the underestimation of anthropogenic BC emissions in the inventories used by these two models and in three other widely used emission inventories. Our analysis indicates that all these emission inventories underestimate the emissions of BC over India by a factor that ranges from 1.5 to 2.9. We have also studied the model simulations of aerosol optical depth (AOD) over the Indian region. The models differ significantly in simulations of AOD, with WRF-Chem having better agreement with satellite observations of AOD as far as the spatial pattern is concerned. It is important to note that, in addition to BC, dust can also contribute significantly to AOD. The models differ in simulations of the spatial

  3. Undergraduate Engineering Students' Beliefs, Coping Strategies, and Academic Performance: An Evaluation of Theoretical Models

    ERIC Educational Resources Information Center

    Hsieh, Pei-Hsuan; Sullivan, Jeremy R.; Sass, Daniel A.; Guerra, Norma S.

    2012-01-01

    Research has identified factors associated with academic success by evaluating relations among psychological and academic variables, although few studies have examined theoretical models to understand the complex links. This study used structural equation modeling to investigate whether the relation between test anxiety and final course grades was…

  4. Applying the Many-Facet Rasch Model to Evaluate PowerPoint Presentation Performance in Higher Education

    ERIC Educational Resources Information Center

    Basturk, Ramazan

    2008-01-01

    This study investigated the usefulness of the many-facet Rasch model (MFRM) in evaluating the quality of performance related to PowerPoint presentations in higher education. The Rasch Model utilizes item response theory stating that the probability of a correct response to a test item/task depends largely on a single parameter, the ability of the…
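
    As a hedged illustration of the general model form referred to above (not necessarily the exact specification used in the study), the standard three-facet Rasch (MFRM) formulation expresses the log-odds of receiving rating category k versus k-1 in terms of examinee ability B_n, task difficulty D_i, rater severity C_j, and the category threshold F_k:

      % Many-facet Rasch model, three facets (examinee, task, rater) with rating-scale thresholds
      \[
        \ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
      \]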

  5. Goal Setting and Performance Evaluation with Different Starting Positions: The Modeling Dilemma.

    ERIC Educational Resources Information Center

    Pray, Thomas F.; Gold, Steven

    1991-01-01

    Reviews 10 computerized business simulations used to teach business policy courses, discusses problems with measuring performance, and presents a statistically based approach to assessing performance that permits individual team goal setting as part of the computer model, and allows simulated firms to start with different financial and operating…

  6. Signal and image processing systems performance evaluation, simulation, and modeling; Proceedings of the Meeting, Orlando, FL, Apr. 4, 5, 1991

    NASA Astrophysics Data System (ADS)

    Nasr, Hatem N.; Bazakos, Michael E.

    The various aspects of the evaluation and modeling problems in algorithms, sensors, and systems are addressed. Consideration is given to a generic modular imaging IR signal processor, real-time architecture based on the image-processing module family, application of the Proto Ware simulation testbed to the design and evaluation of advanced avionics, development of a fire-and-forget imaging infrared seeker missile simulation, an adaptive morphological filter for image processing, laboratory development of a nonlinear optical tracking filter, a dynamic end-to-end model testbed for IR detection algorithms, wind tunnel model aircraft attitude and motion analysis, an information-theoretic approach to optimal quantization, parametric analysis of target/decoy performance, neural networks for automated target recognition parameters adaptation, performance evaluation of a texture-based segmentation algorithm, evaluation of image tracker algorithms, and multisensor fusion methodologies. (No individual items are abstracted in this volume)

  7. A strategic management model for evaluation of health, safety and environmental performance.

    PubMed

    Abbaspour, Majid; Toutounchian, Solmaz; Roayaei, Emad; Nassiri, Parvin

    2012-05-01

    Strategic health, safety, and environmental management system (HSE-MS) involves systematic and cooperative planning in each phase of the lifecycle of a project to ensure that interaction among the industry group, client, contractor, stakeholder, and host community exists with the highest level of health, safety, and environmental standard performance. Therefore, it seems necessary to assess the HSE-MS performance of contractor(s) by a comparative strategic management model with the aim of continuous improvement. The present Strategic Management Model (SMM) has been illustrated by a case study, and the results show that the model is a suitable management tool for decision making in a contract environment, especially in oil and gas fields, based on accepted international standards and within the framework of the Deming management cycle. To develop this model, a data bank has been created, which includes the statistical data calculated by converting the HSE performance qualitative data into quantitative values. Based on this, the structure of the model has been formed by defining HSE performance indicators according to the HSE-MS model. In total, 178 indicators have been selected and grouped into four attributes. The model output provides quantitative measures of HSE-MS performance as a percentage of an ideal level, with a maximum possible score for each attribute. Defining the strengths and weaknesses of the contractor(s) is another capability of this model. On the other hand, this model provides a ranking that could be used as the basis for decision making at the contractors' pre-qualification phase or during the execution of the project. PMID:21739281

  8. An evaluation of the performance of chemistry transport models by comparison with research aircraft observations. Part 1: Concepts and overall model performance

    NASA Astrophysics Data System (ADS)

    Brunner, D.; Staehelin, J.; Rogers, H. L.; Köhler, M. O.; Pyle, J. A.; Hauglustaine, D.; Jourdain, L.; Berntsen, T. K.; Gauss, M.; Isaksen, I. S. A.; Meijer, E.; van Velthoven, P.; Pitari, G.; Mancini, E.; Grewe, V.; Sausen, R.

    2003-05-01

    A rigorous evaluation of five global Chemistry-Transport and two Chemistry-Climate Models operated by several different groups in Europe was performed by comparing the models with trace gas observations from a number of research aircraft measurement campaigns. Whenever possible the models were run over the four-year period 1995-1998 and at each simulation time step the instantaneous tracer fields were interpolated to all coinciding observation points. This approach allows for a very close comparison with observations and fully accounts for the specific meteorological conditions during the measurement flights, which is important considering the often limited availability and representativity of such trace gas measurements. A new extensive database including all major research aircraft and commercial airliner measurements between 1995 and 1998 as well as ozone soundings was established specifically to support this type of direct comparison. Quantitative methods were applied to judge model performance including the calculation of average concentration biases and the visualization of correlations and RMS errors in the form of so-called Taylor diagrams. We present the general concepts applied, the structure and content of the database, and an overall analysis of model skills over four distinct regions. These regions were selected to represent various degrees and types of pollution and to cover large geographical domains with sufficient availability of observations. Comparison of model results with the observations revealed specific problems for each individual model. This study suggests what further improvements are needed and can serve as a benchmark for re-evaluations of such improvements. In general all models show deficiencies with respect to both mean concentrations and vertical gradients of the important trace gases ozone, CO and NOx at the tropopause. Too strong two-way mixing across the tropopause is suggested to be the main reason for differences between
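
    The Taylor diagrams mentioned above summarize each model-observation comparison with three quantities: the standard deviations of the two fields, their correlation, and the centered (bias-removed) RMS difference. A minimal sketch of those statistics, using hypothetical trace gas values rather than the study's database:

```python
import numpy as np

def taylor_stats(obs, mod):
    """Statistics underlying a Taylor diagram: sigmas, correlation, centered RMSE."""
    obs = np.asarray(obs, dtype=float)
    mod = np.asarray(mod, dtype=float)
    sigma_o, sigma_m = obs.std(), mod.std()
    r = np.corrcoef(obs, mod)[0, 1]
    # Centered pattern RMS error; it satisfies
    # crmse**2 = sigma_o**2 + sigma_m**2 - 2 * sigma_o * sigma_m * r
    crmse = np.sqrt(np.mean(((mod - mod.mean()) - (obs - obs.mean())) ** 2))
    bias = mod.mean() - obs.mean()   # mean bias is reported separately from the diagram
    return sigma_o, sigma_m, r, crmse, bias

# Hypothetical ozone mixing ratios (ppbv) at coinciding observation points.
obs = np.array([52.0, 61.0, 58.0, 70.0, 66.0, 49.0, 55.0])
mod = np.array([48.0, 64.0, 60.0, 75.0, 61.0, 47.0, 59.0])
print([round(v, 2) for v in taylor_stats(obs, mod)])
```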

  9. An evaluation of the performance of chemistry transport models by comparison with research aircraft observations. Part 1: Concepts and overall model performance

    NASA Astrophysics Data System (ADS)

    Brunner, D.; Staehelin, J.; Rogers, H. L.; Köhler, M. O.; Pyle, J. A.; Hauglustaine, D.; Jourdain, L.; Berntsen, T. K.; Gauss, M.; Isaksen, I. S. A.; Meijer, E.; van Velthoven, P.; Pitari, G.; Mancini, E.; Grewe, V.; Sausen, R.

    2003-10-01

    A rigorous evaluation of five global Chemistry-Transport and two Chemistry-Climate Models operated by several different groups in Europe, was performed. Comparisons were made of the models with trace gas observations from a number of research aircraft measurement campaigns during the four-year period 1995-1998. Whenever possible the models were run over the same four-year period and at each simulation time step the instantaneous tracer fields were interpolated to all coinciding observation points. This approach allows for a very close comparison with observations and fully accounts for the specific meteorological conditions during the measurement flights. This is important considering the often limited availability and representativity of such trace gas measurements. A new extensive database including all major research and commercial aircraft measurements between 1995 and 1998, as well as ozone soundings, was established specifically to support this type of direct comparison. Quantitative methods were applied to judge model performance including the calculation of average concentration biases and the visualization of correlations and RMS errors in the form of so-called Taylor diagrams. We present the general concepts applied, the structure and content of the database, and an overall analysis of model skills over four distinct regions. These regions were selected to represent various atmospheric conditions and to cover large geographical domains such that sufficient observations are available for comparison. The comparison of model results with the observations revealed specific problems for each individual model. This study suggests the further improvements needed and serves as a benchmark for re-evaluations of such improvements. In general all models show deficiencies with respect to both mean concentrations and vertical gradients of important trace gases. These include ozone, CO and NOx at the tropopause. Too strong two-way mixing across the tropopause is

  10. The performance evaluation test for prototype model of Longwave Infrared Imager (LIR) onboard PLANET-C

    NASA Astrophysics Data System (ADS)

    Fukuhara, Tetsuya; Taguchi, Makoto; Imamura, Takeshi

    which are acquired continuously. The vibration test for the UMBA was also carried out and the result showed the UMBA survived without any pixel defects or malfunctions. The tolerance to high-energy protons was tested and verified using a commercial camera in which a same type of UMBA is mounted. Based on these results, a flight model is now being manufactured with minor modifications from the prototype. The performance of flight model will be evaluated during 2008-09 in time for the scheduled launch year of 2010.

  11. Performance Evaluation of Models that Describe the Soil Water Retention Curve between Saturation and Oven Dryness

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this work was to evaluate eight closed-form unimodal analytical expressions that describe the soil-water retention curve over the complete range of soil water contents. To meet this objective, the eight models were compared in terms of their accuracy (root mean square error, RMSE), ...

  12. Evaluation of Turbulence-Model Performance as Applied to Jet-Noise Prediction

    NASA Technical Reports Server (NTRS)

    Woodruff, S. L.; Seiner, J. M.; Hussaini, M. Y.; Erlebacher, G.

    1998-01-01

    The accurate prediction of jet noise is possible only if the jet flow field can be predicted accurately. Predictions for the mean velocity and turbulence quantities in the jet flowfield are typically the product of a Reynolds-averaged Navier-Stokes solver coupled with a turbulence model. To evaluate the effectiveness of solvers and turbulence models in predicting those quantities most important to jet noise prediction, two CFD codes and several turbulence models were applied to a jet configuration over a range of jet temperatures for which experimental data is available.

  13. Performance Evaluation Process.

    ERIC Educational Resources Information Center

    1998

    This document contains four papers from a symposium on the performance evaluation process and human resource development (HRD). "Assessing the Effectiveness of OJT (On the Job Training): A Case Study Approach" (Julie Furst-Bowe, Debra Gates) is a case study of the effectiveness of OJT in one of a high-tech manufacturing company's product lines.…

  14. Evaluating ice sheet model performance over the last glacial cycle using paleo data

    NASA Astrophysics Data System (ADS)

    Robinson, Alexander; Alvarez-Solas, Jorge; Montoya, Marisa

    2015-04-01

    Estimating the past evolution of ice sheets is important for improving our understanding of their role in the Earth system and for quantifying their contribution to sea-level changes. Limited but significant paleo data and proxies are available to give insights into past changes that are valid, at least, on a local scale. Meanwhile, models can be used to provide a mechanistic picture of ice sheet changes. Combined data-model comparisons are therefore useful exercises that allow models to be confronted with real-world information and lead to better understanding of the mechanisms driving changes. In turn, models can potentially be used to validate the data by providing a physical explanation for observed phenomena. Here we focus on the evolution of the Greenland ice sheet through the last glacial cycle to highlight common problems and potential opportunities for data-model comparisons. We will present several examples of how present generation model results are inconsistent with estimates from paleo data, either in terms of the boundary forcing given to the model or the resulting characteristics of the ice sheet. We also propose a set of data-model comparisons as the starting point for developing a more standardized paleo model performance check. Incorporating such a test into modeling efforts could generate new insights in coupled climate - ice sheet modeling.

  15. Modeling and simulation of a VTOL UAV for landing gear performance evaluation

    NASA Astrophysics Data System (ADS)

    Chan, Brendan J.; Sandu, Corina; Ko, Andy; Streett, Tim

    2007-04-01

    A multibody dynamics model of a Vertical Take-off and Landing (VTOL) Unmanned Aerial Vehicle (UAV) is presented in this study. The scope of the project was to investigate a lightweight landing gear with stable and robust landing performance. Two original designs of the landing gear for the module of interest have been modeled and analyzed in this study. Two new designs have also been developed, modeled, and analyzed. A limited analysis of the forces that occur in the legs/struts has also been performed, to account for possible failure of the members due to buckling. The model incorporates a sloped surface of deformable terrain for stability analysis of the landing scenarios, and unilateral constraints to model the ground reaction forces upon contact. The lift forces on the UAV are modeled as mathematical relations dependent on the speed of the ducted fan to enable the variation of the impact velocities and the different landing scenarios. The simulations conducted illustrate that initial conditions at landing have a large impact on the stability of the module. The two new designs account for the worst possible scenario and, for the material properties given, end up heavier than the original design with three legs and a ring. Simulation data from several landing scenarios are presented in this paper, with analysis of the difference in performance among the different designs.

  16. A parametric analysis microcomputer model for evaluating the thermodynamic performance of a reciprocating Brayton cycle engine

    SciTech Connect

    Tsongas, G.A.; White, T.J.

    1989-10-01

    A Brayton open-cycle engine is under development. It operates similarly to a gas turbine engine, but uses reciprocating piston compressor and expander components. The design appears to have a number of advantages, including multifuel capability, the potential for lower cost, and the ability to be scaled to small sizes without significant loss in efficiency. An interactive microcomputer model has been developed that analyzes the thermodynamic performance of the engine. The model incorporates all the important irreversibilities found in piston devices, including heat transfer, mechanical friction, pressure losses, and mass loss and recirculation. There are 38 input parameters to the model. Key independent operating parameters are maximum temperature, compressor rpm, and pressure ratio. The development of the model and its assumptions are outlined in this paper. The emphasis is on model applications.
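
    For orientation, the air-standard ideal cycle that such a parametric model refines with irreversibilities (heat transfer, friction, pressure and mass losses) reduces to a few closed-form relations; a sketch with illustrative numbers, not the 38-parameter model described above:

```python
# Ideal (air-standard) Brayton cycle figures that a parametric engine model
# would refine with heat transfer, friction, pressure losses and mass loss.
gamma = 1.4          # ratio of specific heats for air
cp = 1.005           # kJ/(kg K)
T1 = 300.0           # compressor inlet temperature, K
T3 = 1300.0          # maximum (expander inlet) temperature, K
r_p = 8.0            # pressure ratio

T2 = T1 * r_p ** ((gamma - 1.0) / gamma)   # after isentropic compression
T4 = T3 / r_p ** ((gamma - 1.0) / gamma)   # after isentropic expansion

w_net = cp * ((T3 - T4) - (T2 - T1))       # specific net work, kJ/kg
q_in = cp * (T3 - T2)                      # heat added, kJ/kg
eta = w_net / q_in                         # equals 1 - r_p**((1 - gamma) / gamma)

print(f"T2 = {T2:.0f} K, T4 = {T4:.0f} K, w_net = {w_net:.1f} kJ/kg, eta = {eta:.3f}")
```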

  17. Evaluating the Sensitivity of Agricultural Model Performance to Different Climate Inputs: Supplemental Material

    NASA Technical Reports Server (NTRS)

    Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.

    2015-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis bias-corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.

  18. Photorefractive two-beam coupling joint transform correlator: modeling and performance evaluation.

    PubMed

    Nehmetallah, G; Khoury, J; Banerjee, P P

    2016-05-20

    The photorefractive two-beam coupling joint transform correlator combines two features. The first is embedded semi-adaptive optimality, which weighs the correlation against clutter and noise in the input; the second is an intrinsic dynamic-range-compression nonlinearity, which improves several metrics simultaneously without metric trade-off. Although the two-beam coupling correlator was invented many years ago, its outstanding performance had been recognized only on relatively simple images, and its performance on complicated images and under different figures of merit had not been studied. In this paper, the study is extended to more complicated images. For the first time, to our knowledge, we demonstrate simultaneous improvement in metric performance without metric trade-off. Performance was evaluated against the classical joint transform correlator, and a representative experimental result validating the simulations is also shown. The best-performing operating parameters were identified to guide the experimental work and for future comparison with other well-known optimal correlation filters. PMID:27411127
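
    For context, the classical joint transform correlator against which the photorefractive device is compared can be simulated by Fourier-transforming the joint input, square-law detecting the joint power spectrum, and transforming again; the two-beam coupling variant adds a nonlinearity at that plane. A minimal classical-JTC sketch with hypothetical images (no photorefractive physics):

```python
import numpy as np

def classical_jtc(reference, scene):
    """Classical joint transform correlation of two equal-size images."""
    # Place reference and scene side by side in one input plane.
    joint = np.hstack([reference, scene])
    # Joint power spectrum (square-law detection of the Fourier plane).
    jps = np.abs(np.fft.fft2(joint)) ** 2
    # A second Fourier transform yields the correlation plane; the
    # cross-correlation terms appear away from the strong DC/autocorrelation term.
    return np.abs(np.fft.fft2(jps))

rng = np.random.default_rng(0)
target = rng.random((64, 64))
scene = np.roll(target, (5, 9), axis=(0, 1))   # shifted copy of the target
corr = classical_jtc(target, scene)
print(corr.shape, corr.max())
```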

  19. Evaluation of Model Results and Measured Performance of Net-Zero Energy Homes in Hawaii: Preprint

    SciTech Connect

    Norton, P.; Kiatreungwattana, K.; Kelly, K. J.

    2013-03-01

    The Kaupuni community consists of 19 affordable net-zero energy homes that were built within the Waianae Valley of Oahu, Hawaii in 2011. The project was developed for the native Hawaiian community led by the Department of Hawaiian Homelands. This paper presents a comparison of the modeled and measured energy performance of the homes. Over the first year of occupancy, the community as a whole performed within 1% of the net-zero energy goals. The data show a range of performance from house to house, with the majority of the homes consistently near or exceeding net-zero, while a few fall short of the predicted net-zero energy performance. The impact of building floor plan, weather, and cooling set point on this comparison is discussed. The project demonstrates the value of using building energy simulations as a tool to help the project achieve energy performance goals. Lessons learned from the energy performance monitoring have had immediate benefits in providing feedback to the homeowners, and will be used to influence future energy efficient designs in Hawaii and other tropical climates.

  20. On shrinkage and model extrapolation in the evaluation of clinical center performance

    PubMed Central

    Varewyck, Machteld; Goetghebeur, Els; Eriksson, Marie; Vansteelandt, Stijn

    2014-01-01

    We consider statistical methods for benchmarking clinical centers based on a dichotomous outcome indicator. Borrowing ideas from the causal inference literature, we aim to reveal how the entire study population would have fared under the current care level of each center. To this end, we evaluate direct standardization based on fixed versus random center effects outcome models that incorporate patient-specific baseline covariates to adjust for differential case-mix. We explore fixed effects (FE) regression with Firth correction and normal mixed effects (ME) regression to maintain convergence in the presence of very small centers. Moreover, we study doubly robust FE regression to avoid outcome model extrapolation. Simulation studies show that shrinkage following standard ME modeling can result in substantial power loss relative to the considered alternatives, especially for small centers. Results are consistent with findings in the analysis of 30-day mortality risk following acute stroke across 90 centers in the Swedish Stroke Register. PMID:24812420
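
    A minimal sketch of the direct-standardization idea with a fixed-effects logistic outcome model, on simulated data (statsmodels assumed available; the Firth correction, random-effects and doubly robust variants discussed above are omitted):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n, centers = 3000, 10
center = rng.integers(0, centers, n)
severity = rng.normal(size=n)                     # patient case-mix covariate
center_effect = rng.normal(0.0, 0.4, centers)
p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * severity + center_effect[center])))
y = rng.binomial(1, p)                            # dichotomous outcome indicator

# Fixed-effects outcome model: outcome ~ case-mix + center dummies (no global intercept).
X = np.column_stack([severity, np.eye(centers)[center]])
fit = sm.Logit(y, X).fit(disp=0)

# Direct standardization: how the *entire* study population would fare
# under the current care level of each center.
standardized = []
for c in range(centers):
    Xc = np.column_stack([severity, np.tile(np.eye(centers)[c], (n, 1))])
    standardized.append(fit.predict(Xc).mean())
print(np.round(standardized, 3))
```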

  1. Performance evaluation of continuity of care records (CCRs): parsing models in a mobile health management system.

    PubMed

    Chen, Hung-Ming; Liou, Yong-Zan

    2014-10-01

    In a mobile health management system, mobile devices act as the application hosting devices for personal health records (PHRs), and healthcare servers are constructed to exchange and analyze PHRs. One of the most popular PHR standards is the continuity of care record (CCR). The CCR is expressed in XML format. However, parsing is an expensive operation that can degrade XML processing performance. Hence, the objective of this study was to identify the different operational and performance characteristics of four CCR parsing models: the XML DOM parser, the SAX parser, the PULL parser, and the JSON parser applied to JSON data converted from the XML-based CCR. Developers can thus make sensible choices for their target PHR applications when parsing CCRs on mobile devices or servers with different system resources. Furthermore, simulation experiments on four case studies are conducted to compare parsing performance on Android mobile devices and on a server with large quantities of CCR data. PMID:25086611
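
    The comparison can be reproduced in miniature with Python's standard-library parsers; ElementTree's iterparse stands in for a pull-style parser, and the payloads below are hypothetical, highly simplified CCR-like documents rather than conformant CCRs:

```python
import io
import json
import time
import xml.dom.minidom
import xml.sax
import xml.etree.ElementTree as ET

# Hypothetical, highly simplified CCR-like payloads (not conformant CCR documents).
ccr_xml = ("<ContinuityOfCareRecord>"
           + "".join(f"<Result><Value>{i}</Value></Result>" for i in range(5000))
           + "</ContinuityOfCareRecord>")
ccr_json = json.dumps({"Result": [{"Value": i} for i in range(5000)]})

class CountHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.n = 0
    def startElement(self, name, attrs):
        if name == "Result":
            self.n += 1

def timed(label, fn):
    t0 = time.perf_counter()
    fn()
    print(f"{label:>8}: {time.perf_counter() - t0:.4f} s")

timed("DOM", lambda: xml.dom.minidom.parseString(ccr_xml))
timed("SAX", lambda: xml.sax.parseString(ccr_xml.encode(), CountHandler()))
# iterparse is the closest stdlib analogue of a pull-style (XmlPull/StAX) parser.
timed("PULL", lambda: sum(1 for _ in ET.iterparse(io.BytesIO(ccr_xml.encode()))))
timed("JSON", lambda: json.loads(ccr_json))
```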

  2. ATMOSPHERIC MODEL EVALUATION

    EPA Science Inventory

    Evaluation of the Models-3/CMAQ is conducted in this task. The focus is on evaluation of ozone, other photochemical oxidants, and fine particles using data from both routine monitoring networks and special, intensive field programs. Two types of evaluations are performed here: pe...

  3. A Predictive Performance Model to Evaluate the Contention Cost in Application Servers

    SciTech Connect

    Chen, Shiping; Gorton, Ian

    2002-12-04

    In multi-tier enterprise systems, application servers are key components that implement business logic and provide application services. To support a large number of simultaneous accesses from clients over the Internet and intranet, most application servers use replication and multi-threading to handle concurrent requests. While multiple processes and multiple threads enhance the processing bandwidth of servers, they also increase the contention for resources in application servers. This paper investigates this issue empirically based on a middleware benchmark. A cost model is proposed to estimate the overall performance of application servers, including the contention overhead. This model is then used to determine the optimal degree of concurrency of application servers for a specific client load. A case study based on CORBA is presented to validate our model and demonstrate its application.
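
    A toy version of such a cost model, with a contention term assumed to grow quadratically in the number of threads (hypothetical parameters, not the paper's calibrated middleware model), already exhibits an interior optimum for the degree of concurrency:

```python
# Toy contention-cost sketch: per-request cost = base service time plus a
# contention term assumed to grow quadratically with the number of threads;
# throughput = threads / per-request cost.

def throughput(threads, service_s=0.020, contention_s=0.0005):
    cost = service_s + contention_s * (threads - 1) ** 2
    return threads / cost

best = max(range(1, 65), key=throughput)
for m in (1, 2, 4, 8, 16, 32):
    print(f"{m:>2} threads -> {throughput(m):7.1f} req/s")
print("optimal degree of concurrency under this toy model:", best)
```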

  4. The secret assumption of transfer functions: problems with spatial autocorrelation in evaluating model performance

    NASA Astrophysics Data System (ADS)

    Telford, R. J.; Birks, H. J. B.

    2005-11-01

    The estimation of the predictive power of transfer functions assumes that the test sites are independent of the modelling sites. Cross-validation in the presence of spatial autocorrelation seriously violates this assumption. This assumption and the consequences of its violation have not been discussed before. We show, by simulation, that the expected r2 of a transfer function model from an autocorrelated environment can be high, and is not near zero as commonly assumed. We investigate a foraminiferal sea surface temperature training set for the North Atlantic, for which, with cross-validation, the modern analogue technique (MAT) and artificial neural networks (ANN) outperform transfer function methods based on a unimodal species-environment response model. However, when a spatially independent test set, the South Atlantic, is used, all models have a similar predictive power. We show that there is a spatial structure in the foraminiferal assemblages even after accounting for temperature, presumably due to autocorrelations in other environmental variables. Since the residuals from MAT show little spatial structure, in contrast to the residuals of unimodal response models, we contend that MAT has inappropriately internalized the non-temperature spatial structure to improve its performance. We argue that most, if not all, estimates of the predictive power of MAT and ANN models for sea surface temperatures hitherto published are over-optimistic and misleading.
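
    A toy illustration of the effect (not the paper's foraminiferal analysis): when the environment is spatially autocorrelated, a leave-one-out "prediction" that simply copies the nearest remaining site along a transect already yields a high r2, whereas it yields roughly zero for a spatially independent environment:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300

def ar1(n, phi=0.95):
    """Spatially autocorrelated series along a transect of sites (AR(1))."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

sst = ar1(n)                    # autocorrelated "environment" (e.g. SST)
noise = rng.normal(size=n)      # spatially independent environment, for contrast

def loo_nearest_neighbour_r2(env):
    """Leave-one-out prediction of each site from its nearest spatial neighbour,
    mimicking an analogue method whose best analogue is a spatially close site."""
    pred = np.empty_like(env)
    for i in range(len(env)):
        j = i - 1 if i > 0 else i + 1   # nearest remaining site on the transect
        pred[i] = env[j]
    return np.corrcoef(env, pred)[0, 1] ** 2

print("autocorrelated environment, LOO r^2:", round(loo_nearest_neighbour_r2(sst), 2))
print("independent environment,    LOO r^2:", round(loo_nearest_neighbour_r2(noise), 2))
```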

  5. Modeling and Evaluating Pilot Performance in NextGen: Review of and Recommendations Regarding Pilot Modeling Efforts, Architectures, and Validation Studies

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Sebok, Angelia; Keller, John; Peters, Steve; Small, Ronald; Hutchins, Shaun; Algarin, Liana; Gore, Brian Francis; Hooey, Becky Lee; Foyle, David C.

    2013-01-01

    NextGen operations are associated with a variety of changes to the national airspace system (NAS) including changes to the allocation of roles and responsibilities among operators and automation, the use of new technologies and automation, additional information presented on the flight deck, and the entire concept of operations (ConOps). In the transition to NextGen airspace, aviation and air operations designers need to consider the implications of design or system changes on human performance and the potential for error. To ensure continued safety of the NAS, it will be necessary for researchers to evaluate design concepts and potential NextGen scenarios well before implementation. One approach for such evaluations is through human performance modeling. Human performance models (HPMs) provide effective tools for predicting and evaluating operator performance in systems. HPMs offer significant advantages over empirical, human-in-the-loop testing in that (1) they allow detailed analyses of systems that have not yet been built, (2) they offer great flexibility for extensive data collection, (3) they do not require experimental participants, and thus can offer cost and time savings. HPMs differ in their ability to predict performance and safety with NextGen procedures, equipment and ConOps. Models also vary in terms of how they approach human performance (e.g., some focus on cognitive processing, others focus on discrete tasks performed by a human, while others consider perceptual processes), and in terms of their associated validation efforts. The objectives of this research effort were to support the Federal Aviation Administration (FAA) in identifying HPMs that are appropriate for predicting pilot performance in NextGen operations, to provide guidance on how to evaluate the quality of different models, and to identify gaps in pilot performance modeling research, that could guide future research opportunities. This research effort is intended to help the FAA

  6. Prediction of Human Glomerular Filtration Rate from Preterm Neonates to Adults: Evaluation of Predictive Performance of Several Empirical Models.

    PubMed

    Mahmood, Iftekhar; Staschen, Carl-Michael

    2016-03-01

    The objective of this study was to evaluate the predictive performance of several allometric empirical models (body weight dependent, age dependent, fixed exponent 0.75, a data-dependent single exponent, and maturation models) to predict glomerular filtration rate (GFR) in preterm and term neonates, infants, children, and adults without any renal disease. In this analysis, the models were developed from GFR data obtained from inulin clearance (preterm neonates to adults; n = 93), and the predictive performance of these models was evaluated in 335 subjects (preterm neonates to adults). The primary end point was the prediction of GFR from the empirical allometric models and the comparison of the predicted GFR with measured GFR. A prediction error within ±30% was considered acceptable. Overall, the predictive performance of the four models (BDE, ADE, and two maturation models) for the prediction of mean GFR was good across all age groups, but the prediction of GFR in individual healthy subjects, especially in neonates and infants, was erratic and may be clinically unacceptable. PMID:26801317
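
    The ±30% acceptance criterion reduces to a percent prediction error per subject and the fraction of subjects falling inside that band; a sketch with hypothetical GFR values, not the study's inulin-clearance data:

```python
import numpy as np

def within_30_percent(measured, predicted):
    """Percent prediction error per subject and the fraction within +/-30%."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    pe = 100.0 * (predicted - measured) / measured
    return pe, np.mean(np.abs(pe) <= 30.0)

# Hypothetical GFR values (mL/min), not the study's data.
measured = np.array([25.0, 40.0, 60.0, 95.0, 110.0])
predicted = np.array([31.0, 33.0, 75.0, 90.0, 150.0])
pe, frac = within_30_percent(measured, predicted)
print(np.round(pe, 1), f"{100 * frac:.0f}% of subjects within +/-30%")
```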

  7. Performance Evaluation and Modeling of Erosion Resistant Turbine Engine Thermal Barrier Coatings

    NASA Technical Reports Server (NTRS)

    Miller, Robert A.; Zhu, Dongming; Kuczmarski, Maria

    2008-01-01

    The erosion resistant turbine thermal barrier coating system is critical to the rotorcraft engine performance and durability. The objective of this work was to determine erosion resistance of advanced thermal barrier coating systems under simulated engine erosion and thermal gradient environments, thus validating a new thermal barrier coating turbine blade technology for future rotorcraft applications. A high velocity burner rig based erosion test approach was established and a new series of rare earth oxide- and TiO2/Ta2O5- alloyed, ZrO2-based low conductivity thermal barrier coatings were designed and processed. The low conductivity thermal barrier coating systems demonstrated significant improvements in the erosion resistance. A comprehensive model based on accumulated strain damage low cycle fatigue is formulated for blade erosion life prediction. The work is currently aiming at the simulated engine erosion testing of advanced thermal barrier coated turbine blades to establish and validate the coating life prediction models.

  8. Evaluation of Round Window Stimulation Performance in Otosclerosis Using Finite Element Modeling

    PubMed Central

    Yang, Shanguo; Xu, Dan; Liu, Xiaole

    2016-01-01

    Round window (RW) stimulation is a new type of middle ear implant's application for treating patients with middle ear disease, such as otosclerosis. However, clinical outcomes show a substantial degree of variability. One source of variability is the variation in the material properties of the ear components caused by the disease. To investigate the influence of the otosclerosis on the performance of the RW stimulation, a human ear finite element model including middle ear and cochlea was established based on a set of microcomputerized tomography section images of a human temporal bone. Three characteristic changes of the otosclerosis in the auditory system were simulated in the FE model: stapedial annular ligament stiffness enlargement, stapedial abnormal bone growth, and partial fixation of the malleus. The FE model was verified by comparing the model-predicted results with published experimental measurements. The equivalent sound pressure (ESP) of RW stimulation was calculated via comparing the differential intracochlear pressure produced by the RW stimulation and the normal eardrum sound stimulation. The results show that the increase of stapedial annular ligament and partial fixation of the malleus decreases RW stimulation's ESP prominently at lower frequencies. In contrast, the stapedial abnormal bone growth deteriorates RW stimulation's ESP severely at higher frequencies. PMID:27034709

  9. Evaluation of Round Window Stimulation Performance in Otosclerosis Using Finite Element Modeling.

    PubMed

    Yang, Shanguo; Xu, Dan; Liu, Xiaole

    2016-01-01

    Round window (RW) stimulation is a new type of middle ear implant's application for treating patients with middle ear disease, such as otosclerosis. However, clinical outcomes show a substantial degree of variability. One source of variability is the variation in the material properties of the ear components caused by the disease. To investigate the influence of the otosclerosis on the performance of the RW stimulation, a human ear finite element model including middle ear and cochlea was established based on a set of microcomputerized tomography section images of a human temporal bone. Three characteristic changes of the otosclerosis in the auditory system were simulated in the FE model: stapedial annular ligament stiffness enlargement, stapedial abnormal bone growth, and partial fixation of the malleus. The FE model was verified by comparing the model-predicted results with published experimental measurements. The equivalent sound pressure (ESP) of RW stimulation was calculated via comparing the differential intracochlear pressure produced by the RW stimulation and the normal eardrum sound stimulation. The results show that the increase of stapedial annular ligament and partial fixation of the malleus decreases RW stimulation's ESP prominently at lower frequencies. In contrast, the stapedial abnormal bone growth deteriorates RW stimulation's ESP severely at higher frequencies. PMID:27034709

  10. Performance Evaluation of O-Ring Seals in Model 9975 Packaging Assemblies (U)

    SciTech Connect

    Skidmore, Eric

    1998-12-28

    The Materials Consultation Group of SRTC has completed a review of existing literature and data regarding the useable service life of Viton® GLT fluoroelastomer O-rings currently used in the Model 9975 packaging assemblies. Although the shipping and transportation period is normally limited to 2 years, it is anticipated that these packages will be used for longer-term storage of Pu-bearing materials in KAMS (K-Area Materials Storage) prior to processing or disposition in the APSF (Actinide Packaging and Storage Facility). Based on the service conditions and review of available literature, Materials Consultation concludes that there is sufficient existing data to establish the technical basis for storage of Pu-bearing materials using Parker Seals O-ring compound V835-75 (or equivalent) for up to 10 years following the 2-year shipping period. Although significant physical deterioration of the O-rings and release of product is not expected, definite changes in physical properties will occur. However, due to the complex relationship between elastomer formulation, seal properties, and competing degradation mechanisms, the actual degree of property variation and impact upon seal performance is difficult to predict. Therefore, accelerated aging and/or surveillance programs are recommended to validate the assumptions outlined in this report and to assess the long-term performance of O-ring seals under actual service conditions. Such programs could provide a unique opportunity to develop nonexistent long-term performance data, as well as address storage extension issues if necessary.

  11. Modeling and dosimetric performance evaluation of the RayStation treatment planning system.

    PubMed

    Mzenda, Bongile; Mugabe, Koki V; Sims, Rick; Godwin, Guy; Loria, Dayan

    2014-01-01

    The physics modeling, dose calculation accuracy and plan quality assessment of the RayStation (v3.5) treatment planning system (TPS) are presented in this study, with appropriate comparisons to the more established Pinnacle (v9.2) TPS. Modeling and validation for the Elekta MLCi and Agility beam models resulted in a good match to treatment machine-measured data based on tolerances of 3% for in-field and out-of-field regions, 10% for buildup and penumbral regions, and a 2%/2 mm gamma dose/distance acceptance criterion. TPS commissioning using a wide range of appropriately selected dosimetry equipment, and following published guidelines, established the MLC modeling and dose calculation accuracy to be within standard tolerances for all tests performed. In both homogeneous and heterogeneous media, central axis calculations agreed with measurements within 2% for open fields and 3% for wedged fields, and within 4% off-axis. Treatment plan comparisons for identical clinical goals were made to Pinnacle for the following complex clinical cases: hypofractionated non-small cell lung carcinoma, head and neck, and stereotactic spine, as well as for several standard clinical cases comprising prostate, brain, and breast plans. DVHs, target and critical organ doses, as well as measured point doses and gamma indices, applying both local and global (Van Dyk) normalization at 2%/2 mm and 3%/3 mm (10% lower threshold) acceptance criteria, were assessed for these composite plans. In addition, 3DVH was used to compare the perturbed dose distributions to the TPS 3D dose distributions. For all 32 cases, the patient QA checks showed >95% of pixels passing the 3% global/3 mm gamma criterion. PMID:25207563
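
    A simplified one-dimensional gamma calculation conveys the 3%/3 mm criterion used above (no interpolation, no low-dose threshold; hypothetical dose profile, not RayStation or Pinnacle output):

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, positions, dose_tol=0.03, dist_tol=3.0, local=False):
    """Simplified 1D gamma index (dose_tol as a fraction, dist_tol in mm).

    Global normalization uses the reference maximum; local uses each reference point.
    """
    dose_ref = np.asarray(dose_ref, dtype=float)
    dose_eval = np.asarray(dose_eval, dtype=float)
    positions = np.asarray(positions, dtype=float)
    gamma = np.empty_like(dose_ref)
    for i, (r_pos, r_dose) in enumerate(zip(positions, dose_ref)):
        norm = r_dose if local else dose_ref.max()
        dd = (dose_eval - r_dose) / (dose_tol * norm)   # dose-difference term
        dr = (positions - r_pos) / dist_tol             # distance-to-agreement term
        gamma[i] = np.sqrt(dd ** 2 + dr ** 2).min()
    return gamma

# Hypothetical measured vs. calculated dose along one axis (positions in mm).
x = np.arange(0.0, 50.0, 1.0)
ref = np.exp(-((x - 25.0) / 10.0) ** 2)
ev = 1.02 * np.exp(-((x - 25.5) / 10.0) ** 2)
g = gamma_1d(ref, ev, x)
print(f"pass rate (gamma <= 1): {100 * np.mean(g <= 1.0):.1f}%")
```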

  12. Integrated DEA models and grey system theory to evaluate past-to-future performance: a case of Indian electricity industry.

    PubMed

    Wang, Chia-Nan; Nguyen, Nhu-Ty; Tran, Thanh-Tuyen

    2015-01-01

    The growth of the economy and population, together with higher energy demand, has created many concerns for the Indian electricity industry, whose capacity is 211 gigawatts, mostly in coal-fired plants. Due to insufficient fuel supply, India suffers from a shortage of electricity generation, leading to rolling blackouts; thus, performance evaluation and ranking within the industry become significant issues. This study evaluates the rankings of the companies under the control of the Ministry of Power. The research also tests whether there are any significant differences between two DEA models: Malmquist non-radial and Malmquist radial. One advanced MPI model is then chosen to assess these companies' performance in recent years and in the next few years, using forecasting results from Grey system theory. In total, 14 units are considered in this evaluation after a strict selection from the whole industry. The results show that no company exhibits abrupt changes in its scores, and none is consistently good or consistently outstanding, which demonstrates the applicability of the integrated methods. This integrated numerical research gives better "past-present-future" insights into performance evaluation in the Indian electricity industry. PMID:25821854

  13. Integrated DEA Models and Grey System Theory to Evaluate Past-to-Future Performance: A Case of Indian Electricity Industry

    PubMed Central

    Wang, Chia-Nan; Tran, Thanh-Tuyen

    2015-01-01

    The growth of the economy and population, together with higher energy demand, has created many concerns for the Indian electricity industry, whose capacity is 211 gigawatts, mostly in coal-fired plants. Due to insufficient fuel supply, India suffers from a shortage of electricity generation, leading to rolling blackouts; thus, performance evaluation and ranking within the industry become significant issues. This study evaluates the rankings of the companies under the control of the Ministry of Power. The research also tests whether there are any significant differences between two DEA models: Malmquist non-radial and Malmquist radial. One advanced MPI model is then chosen to assess these companies' performance in recent years and in the next few years, using forecasting results from Grey system theory. In total, 14 units are considered in this evaluation after a strict selection from the whole industry. The results show that no company exhibits abrupt changes in its scores, and none is consistently good or consistently outstanding, which demonstrates the applicability of the integrated methods. This integrated numerical research gives better “past-present-future” insights into performance evaluation in the Indian electricity industry. PMID:25821854
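
    The Grey-system forecasting step in studies of this kind typically rests on the GM(1,1) model; a compact sketch fitted to a hypothetical annual series (not the study's data):

```python
import numpy as np

def gm11_forecast(x, horizon=3):
    """Grey GM(1,1) forecasting: fit on series x, return fitted values plus forecasts."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                                 # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    Y = x[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]       # least-squares estimate of (a, b)
    k = np.arange(len(x) + horizon)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a  # whitened-equation solution
    return np.diff(x1_hat, prepend=0.0)               # inverse accumulation -> x-hat

# Hypothetical annual output of one power company (arbitrary units).
series = [112.0, 118.0, 127.0, 135.0, 146.0]
print(np.round(gm11_forecast(series, horizon=2), 1))
```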

  14. A parametric multiclass Bayes error estimator for the multispectral scanner spatial model performance evaluation

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The probability of correct classification of the various populations in the data was defined as the primary performance index. Because the multispectral data are multiclass in nature, a Bayes error estimation procedure dependent on a set of class statistics alone was required. The classification error was expressed in terms of an N-dimensional integral, where N is the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear shift-invariant, multiple-port system in which the N spectral bands comprise the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial, and hence spectral, correlation matrices through the system, was developed.
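
    The N-dimensional error integral can be approximated by Monte Carlo sampling under the Bayes decision rule; a sketch for three hypothetical Gaussian classes (illustrative statistics, not the multispectral scanner model):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Monte Carlo estimate of the multiclass Bayes error for Gaussian class models,
# replacing the N-dimensional error integral with sampling (toy class statistics).
rng = np.random.default_rng(1)

means = [np.array([0.0, 0.0]), np.array([2.0, 1.0]), np.array([0.0, 3.0])]
covs = [np.eye(2), 1.5 * np.eye(2), np.array([[1.0, 0.3], [0.3, 1.0]])]
priors = np.array([0.5, 0.3, 0.2])

def bayes_error_mc(n=200_000):
    errors = 0
    counts = rng.multinomial(n, priors)              # samples drawn per class
    for true_class, m in enumerate(counts):
        x = rng.multivariate_normal(means[true_class], covs[true_class], size=m)
        # Posterior-proportional scores: prior * class-conditional density.
        scores = np.column_stack([
            p * multivariate_normal.pdf(x, mean=mu, cov=cv)
            for p, mu, cv in zip(priors, means, covs)
        ])
        errors += np.sum(scores.argmax(axis=1) != true_class)
    return errors / n

print(f"estimated Bayes error: {bayes_error_mc():.4f}")
```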

  15. ATAMM enhancement and multiprocessor performance evaluation

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamoy; Obando, Rodrigo; Malekpour, Mahyar R.; Jones, Robert L., III; Mandala, Brij Mohan V.

    1991-01-01

    ATAMM (Algorithm To Architecture Mapping Model) enhancement and multiprocessor performance evaluation is discussed. The following topics are included: the ATAMM model; ATAMM enhancement; ADM (Advanced Development Model) implementation of ATAMM; and ATAMM support tools.

  16. Simulation of air quality over Central-Eastern Europe - Performance evaluation of WRF-CAMx modelling system

    NASA Astrophysics Data System (ADS)

    Maciejewska, Katarzyna; Juda-Rezler, Katarzyna; Reizer, Magdalena

    2013-04-01

    The main goal of the presented work is to evaluate the accuracy of modelling atmospheric transport and transformation on the regional scale, performed with 25 km grid spacing. A coupled Mesoscale Weather Model - Chemical Transport Model (CTM) has been applied for Europe under the European-American AQMEII project (Air Quality Modelling Evaluation International Initiative - http://aqmeii.jrc.ec.europa.eu/). The modelling domain was centered over Denmark (57.00°N, 10.00°E) with 172 x 172 grid points in the x and y directions. The map projection choice was Lambert conformal. In the applied modelling system, the Comprehensive Air Quality Model with extensions (CAMx) from ENVIRON International Corporation (Novato, California) was coupled off-line to the Weather Research and Forecasting (WRF) model, developed by the National Center for Atmospheric Research (NCAR). WRF-CAMx simulations have been carried out for 2006. The anthropogenic emissions database has been provided by TNO (Netherlands Organisation for Applied Scientific Research) under the AQMEII initiative. Area and line emissions were processed by the emission model EMIL (Juda-Rezler et al., 2012) [1], while for the point sources the EPS3 model (Emission Processor v.3 from ENVIRON) was implemented in order to obtain the vertical distribution of emission. Boundary conditions were acquired by coupling the GEMS (Global and regional Earth-system Monitoring using Satellite and in-situ data) modelling system results with satellite observations. The modelling system has been evaluated for the area of Central-Eastern Europe with regard to ozone and particulate matter (PM) concentrations. For each pollutant, measured data from rural background AirBase and EMEP stations with more than 75% of daily data have been used. The original 'operational' evaluation methodology proposed by Juda-Rezler et al. (2012) was applied. The selected set of metrics consists of 5 groups: bias measures, error measures, correlation measures, measures of model variance and spread, which

  17. Functional Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Greenisen, Michael C.; Hayes, Judith C.; Siconolfi, Steven F.; Moore, Alan D.

    1999-01-01

    The Extended Duration Orbiter Medical Project (EDOMP) was established to address specific issues associated with optimizing the ability of crews to complete mission tasks deemed essential to entry, landing, and egress for spaceflights lasting up to 16 days. The main objectives of this functional performance evaluation were to investigate the physiological effects of long-duration spaceflight on skeletal muscle strength and endurance, as well as aerobic capacity and orthostatic function. Long-duration exposure to a microgravity environment may produce physiological alterations that affect crew ability to complete critical tasks such as extravehicular activity (EVA), intravehicular activity (IVA), and nominal or emergency egress. Ultimately, this information will be used to develop and verify countermeasures. The answers to three specific functional performance questions were sought: (1) What are the performance decrements resulting from missions of varying durations? (2) What are the physical requirements for successful entry, landing, and emergency egress from the Shuttle? and (3) What combination of preflight fitness training and in-flight countermeasures will minimize in-flight muscle performance decrements? To answer these questions, the Exercise Countermeasures Project looked at physiological changes associated with muscle degradation as well as orthostatic intolerance. A means of ensuring motor coordination was necessary to maintain proficiency in piloting skills, EVA, and IVA tasks. In addition, it was necessary to maintain musculoskeletal strength and function to meet the rigors associated with moderate altitude bailout and with nominal or emergency egress from the landed Orbiter. Eight investigations, referred to as Detailed Supplementary Objectives (DSOs) 475, 476, 477, 606, 608, 617, 618, and 624, were conducted to study muscle degradation and the effects of exercise on exercise capacity and orthostatic function (Table 3-1). This chapter is divided into

  18. Performance evaluation of AERMOD, CALPUFF, and legacy air dispersion models using the Winter Validation Tracer Study dataset

    NASA Astrophysics Data System (ADS)

    Rood, Arthur S.

    2014-06-01

    The performance of the steady-state air dispersion models AERMOD and Industrial Source Complex 2 (ISC2), and the Lagrangian puff models CALPUFF and RATCHET, was evaluated using the Winter Validation Tracer Study dataset. The Winter Validation Tracer Study was performed in February 1991 at the former Rocky Flats Environmental Technology Site near Denver, Colorado. Twelve 11-h tests were conducted in which a conservative tracer was released and measured hourly at 140 samplers in concentric rings 8 km and 16 km from the release point. Performance objectives were unpaired maximum one- and nine-hour average concentration, location of plume maximum, plume impact area, arc-integrated concentration, unpaired nine-hour average concentration, and paired ensemble means. Performance objectives were aimed at addressing regulatory compliance and dose reconstruction assessment questions. The objective of regulatory compliance is not to underestimate maximum concentrations, whereas for dose reconstruction the objective is an unbiased estimate of concentration in space and time. Performance measures included the fractional bias, normalized mean square error, geometric mean, geometric mean variance, correlation coefficient, and fraction of observations within a factor of two. The Lagrangian puff models tended to exhibit the smallest variance, highest correlation, and highest number of predictions within a factor of two compared to the steady-state models at both the 8-km and 16-km distances. Maximum one- and nine-hour average concentrations were less likely to be under-predicted by the steady-state models than by the Lagrangian puff models. The tendency of the steady-state models not to under-predict maximum concentrations makes them well suited for regulatory compliance demonstration, whereas the Lagrangian puff models are better suited for dose reconstruction and long-range transport.
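
    The performance measures listed above are short formulas; a sketch computing them on hypothetical tracer concentrations (not the Winter Validation Tracer Study data):

```python
import numpy as np

def dispersion_stats(obs, pred):
    """Common dispersion-model evaluation measures: FB, NMSE, MG, VG, FAC2, r."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())   # fractional bias
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())       # normalized MSE
    mg = np.exp(np.mean(np.log(obs)) - np.mean(np.log(pred)))            # geometric mean bias
    vg = np.exp(np.mean((np.log(obs) - np.log(pred)) ** 2))              # geometric variance
    fac2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))            # factor-of-two fraction
    r = np.corrcoef(obs, pred)[0, 1]
    return {"FB": fb, "NMSE": nmse, "MG": mg, "VG": vg, "FAC2": fac2, "r": r}

# Hypothetical arc-maximum tracer concentrations (arbitrary units).
obs = np.array([1.2, 0.8, 2.5, 3.1, 0.6, 1.9])
pred = np.array([1.0, 1.1, 2.0, 4.0, 0.4, 2.2])
print({k: round(v, 3) for k, v in dispersion_stats(obs, pred).items()})
```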

  19. Validation of alternative models in genetic evaluation of racing performance in North Swedish and Norwegian cold-blooded trotters.

    PubMed

    Olsen, H F; Klemetsdal, G; Odegård, J; Arnason, T

    2012-04-01

    There have been several approaches to the estimation of breeding values of performance in trotters, and the objective of this study was to validate different alternatives for genetic evaluation of racing performance in the North Swedish and Norwegian cold-blooded trotters. The current bivariate approach with the traits racing status (RACE) and earnings (EARN) was compared with a threshold-linear animal model and the univariate alternative with the performance trait only. The models were compared based on cross-validation of standardized earnings, using mean-squared errors of prediction (MSEP) and the correlation between the phenotype (Y) and the estimated breeding value (EBV). Despite possible effects of selection, a rather high estimate of heritability of EARN was found in our univariate analysis. The genetic trend estimate for EARN was clearly higher in the bivariate specification than in the univariate model, as a consequence of the considerable size of the estimated heritability of RACE and its high correlation with EARN (approximately 0.8). RACE is highly influenced by ancestry rather than the on-farm performance of the horse itself. Consequently, the use of RACE in the genetic analysis may inflate the genetic trend of EARN because of a double counting of pedigree information. However, because of the higher predictive ability of the bivariate specification, the improved ranking of animals within a year-class, and the inability to discriminate between models on genetic trend, we propose to base prediction of breeding values on the current bivariate model. PMID:22394238

  20. Performance evaluation and modelling studies of gravel--coir fibre--sand multimedia stormwater filter.

    PubMed

    Samuel, Manoj P; Senthilvel, S; Tamilmani, D; Mathew, A C

    2012-09-01

    A horizontal flow multimedia stormwater filter was developed and tested for hydraulic efficiency and pollutant removal efficiency. Gravel, coconut (Cocos nucifera) fibre and sand were selected as the media and filled in 1:1:1 proportion. A fabric screen made up of woven sisal hemp was used to separate the media. The adsorption behaviour of coir fibre was determined in a series of column and batch studies and the corresponding isotherms were developed. The hydraulic efficiency of the filter showed a diminishing trend as the sediment level in inflow increased. The filter exhibited 100% sediment removal at lower sediment concentrations in inflow water (>6 g L(-1)). The filter could remove NO3(-), SO4(2-) and total solids (TS) effectively. Removal percentages of Mg(2+) and Na(+) were also found to be good. Similar results were obtained from a field evaluation study. Studies were also conducted to determine the pattern of silt and sediment deposition inside the filter body. The effects of residence time and rate of flow on removal percentages of NO3(-) and TS were also investigated. In addition, a multiple regression equation that mathematically represents the filtration process was developed. Based on estimated annual costs and returns, all financial viability criteria (internal rate of return, net present value and benefit-cost ratio) were found favourable and affordable to farmers for investment in the developed filtration system. The model MUSIC was calibrated and validated for field conditions with respect to the developed stormwater filter. PMID:23240200
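
    Adsorption isotherms of the kind developed from the batch studies are commonly fitted with a two-parameter form such as the Freundlich equation; a sketch with hypothetical coir-fibre data (the paper does not state which isotherm form it used):

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c_eq, K, n):
    """Freundlich isotherm: adsorbed amount q = K * Ce**(1/n)."""
    return K * c_eq ** (1.0 / n)

# Hypothetical batch-study data: equilibrium concentration (mg/L) vs. amount
# adsorbed on coir fibre (mg/g); not the paper's measurements.
c_eq = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
q = np.array([1.1, 1.9, 2.8, 4.1, 6.2])
(K, n), _ = curve_fit(freundlich, c_eq, q, p0=[1.0, 2.0])
print(f"K = {K:.2f}, 1/n = {1.0 / n:.2f}")
```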

  1. Spatiotemporal model evaluation across Europe: A methodology based on expert knowledge, multiple datasets, physiography, flow signatures and performance metrics

    NASA Astrophysics Data System (ADS)

    Donnelly, Chantal; Andersson, Jafet; Arheimer, Berit; Gustafsson, David; Hundecha, Yeshewatesfa; Pechlivanidis, Ilias

    2015-04-01

    The hydrological model E-HYPE is spatially distributed with an average subbasin size of 200 km2 for continental Europe. The third version of the model (E-HYPE v3.0) has recently been released, building on experience in setting up multi-basin models at the large scale using open data from readily available sources. A methodology adopting a stepwise calibration of the model is used to optimize model performance to multiple datasets including (a) satellite estimates of potential evapotranspiration and ice cover, (b) in situ snow depth measurements, and (c) 116++ discharge stations representing a variety of catchment sizes, hydro-climatologies, physiographies and anthropogenic influences across Europe. Furthermore, the model is evaluated against an independent validation set of 750 discharge stations. This assists in determining how well the model represents the spatiotemporal variation in flow signatures including low, mean and high flows, flashiness, coefficient of variation and various scales of temporal variation (daily, seasonal and interannual). Assuming that the stations sufficiently represent the variation in catchment scales, hydro-climatology and physiography across Europe, the spread in performance of the validation stations may be assumed to represent the uncertainty in predicting an ungauged basin. This assumption will be further explored. Model evaluation using a large database of discharge data has the added value of informing on spatial errors, which can then be related to erroneous/uncertain input data (e.g. presence of undercatch in gridded precipitation databases), insufficient process descriptions (e.g. groundwater recharge for a region), and limited knowledge on anthropogenic processes (e.g. extractions, regulation). This has then fed back into development of improved input data sets for precipitation, improved model process descriptions for irrigation and regulation, and a new model module for deep aquifer interchange. E-HYPEv3.0 performs well
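
    A few of the flow signatures named above can be computed directly from a discharge series; a sketch on a synthetic daily series (the percentile conventions and flashiness definition are assumptions, not E-HYPE's exact definitions):

```python
import numpy as np

def flow_signatures(q):
    """A handful of flow signatures of the kind used in spatial model evaluation."""
    q = np.asarray(q, dtype=float)
    mean_q = q.mean()
    q5, q95 = np.percentile(q, [5, 95])            # low- and high-flow percentiles
    cv = q.std() / mean_q                          # coefficient of variation
    flashiness = np.sum(np.abs(np.diff(q))) / np.sum(q)   # Richards-Baker-style index
    return {"mean": mean_q, "Q5": q5, "Q95": q95, "CV": cv, "flashiness": flashiness}

# Synthetic daily discharge series (m3/s) for one station.
rng = np.random.default_rng(3)
q = np.clip(5.0 + np.cumsum(rng.normal(0.0, 0.8, 365)), 0.5, None)
print({k: round(v, 2) for k, v in flow_signatures(q).items()})
```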

  2. The EMEFS model evaluation

    SciTech Connect

    Barchet, W.R. ); Dennis, R.L. ); Seilkop, S.K. ); Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K. ); Byun, D.; McHenry, J.N.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.

  3. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  4. Performance evaluation of WRF-Noah Land surface model estimated soil moisture for hydrological application: Synergistic evaluation using SMOS retrieved soil moisture

    NASA Astrophysics Data System (ADS)

    Srivastava, Prashant K.; Han, Dawei; Rico-Ramirez, Miguel A.; O'Neill, Peggy; Islam, Tanvir; Gupta, Manika; Dai, Qiang

    2015-10-01

    This study explores the performance of soil moisture data from the global European Centre for Medium Range Weather Forecasts (ECMWF) ERA-Interim reanalysis datasets using the Weather Research and Forecasting (WRF) mesoscale numerical weather model coupled with the Noah Land surface model for hydrological applications. For evaluating the performance of WRF for soil moisture estimation, three domains are taken into account. The domain with the best performance is used for estimating the soil moisture deficit (SMD). Further, several approaches are presented in this study to evaluate the efficiency of WRF-simulated soil moisture for SMD estimation, and are compared against Soil Moisture and Ocean Salinity (SMOS) downscaled and non-downscaled soil moisture. The first approach is based on the empirical relationship between WRF soil moisture and the SMD on a continuous time series basis, while the second approach focuses on the impact of vegetation cover on SMD retrieval, depicted in terms of growing and non-growing seasons. The combined linear growing/non-growing seasonal model performs well (NSE = 0.79, RMSE = 0.011 m, bias = 0.24 m) in comparison to the linear model (NSE = 0.70, RMSE = 0.013 m, bias = 0.01 m). The performance obtained using WRF soil moisture is comparable to the SMOS level 2 product but lower than the downscaled SMOS datasets. The results indicate that the methodologies could be useful for modelers working in the field of soil moisture information systems and SMD estimation at the catchment scale. The study could be useful for ungauged basins that pose a challenge to hydrological modeling due to the unavailability of datasets for proper model calibration and validation.
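
    The NSE, RMSE and bias scores quoted above follow from standard formulas; a minimal sketch on hypothetical SMD series (not the study's data):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, of the kind used to score the SMD estimates."""
    obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical soil moisture deficit series (m), not the study's data.
obs = np.array([0.010, 0.014, 0.020, 0.025, 0.018, 0.012])
sim = np.array([0.012, 0.015, 0.019, 0.028, 0.016, 0.013])
print("NSE =", round(nse(obs, sim), 2),
      "RMSE =", round(float(np.sqrt(np.mean((obs - sim) ** 2))), 4),
      "bias =", round(float(np.mean(sim - obs)), 4))
```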

  5. Evaluating the catching performance of aerodynamic rain gauges through field comparisons and CFD modelling

    NASA Astrophysics Data System (ADS)

    Pollock, Michael; Colli, Matteo; Stagnaro, Mattia; Lanza, Luca; Quinn, Paul; Dutton, Mark; O'Donnell, Greg; Wilkinson, Mark; Black, Andrew; O'Connell, Enda

    2016-04-01

    Accurate rainfall measurement is a fundamental requirement in a broad range of applications including flood risk and water resource management. The most widely used method of measuring rainfall is the rain gauge, which is often also considered to be the most accurate. In the context of hydrological modelling, measurements from rain gauges are interpolated to produce an areal representation, which forms an important input to drive hydrological models and calibrate rainfall radars. In each stage of this process another layer of uncertainty is introduced. The initial measurement errors are propagated through the chain, compounding the overall uncertainty. This study looks at the fundamental source of error in the rainfall measurement itself, and specifically addresses the largest of these: the systematic 'wind-induced' error. Snowfall is outside the scope. The shape of a precipitation gauge significantly affects its collection efficiency (CE) with respect to a reference measurement. This is due to the airflow around the gauge, which causes a deflection in the trajectories of the raindrops near the gauge orifice. Computational Fluid-Dynamic (CFD) simulations are used to evaluate the time-averaged airflows realized around the EML ARG100, EML SBS500 and EML Kalyx-RG rain gauges when impacted by wind. These gauges have a similar aerodynamic profile - a shape comparable to that of a champagne flute - and they are used globally. The funnel diameters of the gauges are, respectively, 252 mm, 254 mm and 127 mm. The SBS500 is used by the UK Met Office and the Scottish Environmental Protection Agency. Terms of comparison are provided by the results obtained for standard rain gauge shapes manufactured by Casella and OTT which, respectively, have a uniform and a tapered cylindrical shape. The simulations were executed for five different wind speeds: 2, 5, 7, 10 and 18 m s-1. Results indicate that aerodynamic gauges have a different impact on the time-averaged airflow patterns

  6. Evaluating Economic Performance and Policies.

    ERIC Educational Resources Information Center

    Thurow, Lester C.

    1987-01-01

    Argues that a social welfare approach to evaluating economic performance is inappropriate at the high school level. Provides several historical case studies which could be used to augment instruction aimed at the evaluation of economic performance and policies. (JDH)

  7. Evaluating Student Teacher Performance

    ERIC Educational Resources Information Center

    Castolo, Carmencita L.; Dizon, Rosemariebeth R.

    2007-01-01

    Evaluation is a continuous process interwoven into the entire student teaching experience. Preplanning the evaluation process is therefore very important. Without continuous, planned evaluation from the co-operating teacher, the value of student teaching is greatly reduced. One of the main purposes of the student teaching experience is to allow…

  8. Data envelopment analysis model for the appraisal and relative performance evaluation of nurses at an intensive care unit.

    PubMed

    Osman, Ibrahim H; Berbary, Lynn N; Sidani, Yusuf; Al-Ayoubi, Baydaa; Emrouznejad, Ali

    2011-10-01

    The appraisal and relative performance evaluation of nurses are very important and beneficial for both nurses and employers in an era of clinical governance, increased accountability and high standards of health care services. They enhance and consolidate the knowledge and practical skills of nurses through the identification of training and career development plans, as well as improvements in health care quality, increased job satisfaction and cost-effective use of resources. In this paper, a data envelopment analysis (DEA) model is proposed for the appraisal and relative performance evaluation of nurses. The model is validated on thirty-two nurses working at an Intensive Care Unit (ICU) at one of the most recognized hospitals in Lebanon. The DEA was able to classify nurses into efficient and inefficient ones. The set of efficient nurses was used to establish an internal best-practice benchmark and to project career development plans for improving the performance of the inefficient nurses. The DEA results confirmed the ranking of some nurses and highlighted injustices in other cases produced by the currently practiced appraisal system. Further, the DEA model is shown to be an effective talent management and motivational tool, as it can provide clear managerial plans related to promotion, training and development activities from the perspective of nurses, hence increasing their satisfaction, motivation and acceptance of appraisal results. Due to these features, the model is currently being considered for implementation at the ICU. Finally, the ratio of the number of DEA units to the number of input/output measures is revisited, with new suggested values for its upper and lower limits depending on the type of DEA model and the desired number of efficient units from a managerial perspective. PMID:20734223
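
    A minimal sketch of the kind of input-oriented CCR DEA efficiency score used to separate efficient from inefficient units is given below; the nurse inputs and outputs are hypothetical placeholders, not the measures used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0 (envelopment form).
    X: inputs, shape (m inputs, n units); Y: outputs, shape (s outputs, n units)."""
    m, n = X.shape
    s, _ = Y.shape
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]                      # sum(lambda*x) - theta*x0 <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                            # sum(lambda*y) >= y0
    b_ub[m:] = -Y[:, j0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    return res.x[0]

# Hypothetical data: inputs = [hours worked, years of experience], output = [care-quality score]
X = np.array([[40, 38, 45, 42], [5, 10, 3, 8]], dtype=float)
Y = np.array([[80, 95, 70, 90]], dtype=float)
for j in range(X.shape[1]):
    print(f"nurse {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```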

  9. Evaluating the performance of the Lee-Carter method and its variants in modelling and forecasting Malaysian mortality

    NASA Astrophysics Data System (ADS)

    Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.

    2014-12-01

    This study investigated the performance of the Lee-Carter (LC) method and its variants in modelling and forecasting Malaysian mortality. These include the original LC, the Lee-Miller (LM) variant and the Booth-Maindonald-Smith (BMS) variant. The methods were evaluated using Malaysian mortality data measured as age-specific death rates (ASDR) for 1971 to 2009 for the overall population, while data for 1980-2009 were used in separate models for the male and female populations. The performance of the variants was examined in terms of the goodness of fit of the models and forecasting accuracy. Comparison was made based on several criteria, namely mean square error (MSE), root mean square error (RMSE), mean absolute deviation (MAD) and mean absolute percentage error (MAPE). The results indicate that the BMS method performed best in in-sample fitting, both for the overall population and when the models were fitted separately for the male and female populations. However, for out-of-sample forecast accuracy, the BMS method was best only when the data were fitted to the overall population. When the data were fitted separately by sex, the original LC method performed better for the male population and the LM method for the female population.
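
    The core of the LC family of methods is a singular value decomposition of centred log death rates, log m(x,t) = a_x + b_x k_t, with k_t then extrapolated by a random walk with drift. The sketch below, applied to an invented ASDR matrix, shows that fitting and forecasting step under the usual normalisation constraints.

```python
import numpy as np

def fit_lee_carter(m_xt):
    """Fit log m(x,t) = a_x + b_x * k_t by SVD (a minimal illustration of the LC method).
    m_xt: age-specific death rates, shape (ages, years)."""
    log_m = np.log(m_xt)
    a_x = log_m.mean(axis=1)                     # age pattern
    Z = log_m - a_x[:, None]
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    b_x = U[:, 0] / U[:, 0].sum()                # normalise so sum(b_x) = 1
    k_t = S[0] * Vt[0, :] * U[:, 0].sum()        # sum(k_t) is approximately 0 by construction
    return a_x, b_x, k_t

def forecast_k(k_t, horizon):
    """Random walk with drift, the usual LC forecasting step for the mortality index."""
    drift = (k_t[-1] - k_t[0]) / (len(k_t) - 1)
    return k_t[-1] + drift * np.arange(1, horizon + 1)

# Hypothetical ASDR matrix (4 age groups x 10 years) just to show the mechanics
rng = np.random.default_rng(0)
m = np.exp(np.linspace(-6, -2, 4)[:, None] - 0.02 * np.arange(10)
           + 0.01 * rng.standard_normal((4, 10)))
a, b, k = fit_lee_carter(m)
print(forecast_k(k, horizon=5))
```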

  10. Evaluation of Blade-Strike Models for Estimating the Biological Performance of Kaplan Turbines

    SciTech Connect

    Deng, Zhiqun; Carlson, Thomas J.; Ploskey, Gene R.; Richmond, Marshall C.; Dauble, Dennis D.

    2007-11-10

    Bio-indexing of hydroturbines is an important means of optimizing passage conditions for fish by identifying operations, for existing and new design turbines, that minimize the probability of injury. Cost-effective implementation of bio-indexing requires the use of tools such as numerical and physical turbine models to generate hypotheses for turbine operations that can be tested at prototype scale using live fish. Numerical deterministic and stochastic blade-strike models were developed for a 1:25-scale physical turbine model built by the U.S. Army Corps of Engineers for the original design turbine at McNary Dam, and for the prototype-scale original design and replacement minimum gap runner (MGR) turbines at Bonneville Dam's first powerhouse. Blade-strike probabilities predicted by both models were comparable with the overall trends in blade-strike probability observed in prototype-scale live fish survival studies and in physical turbine model tests using neutrally buoyant beads. The predictions from the stochastic model were closer to the experimental data than those from the deterministic model because the stochastic model treated more realistically how fish approach the leading edges of the turbine runner blades. Therefore, the stochastic model should be the preferred method for predicting blade-strike and injury probability for juvenile salmon and steelhead using numerical blade-strike models.
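
    A commonly used deterministic blade-strike estimate treats the strike probability as the fraction of the blade-passage period during which a fish of a given length occupies the blade plane; a stochastic variant samples fish length and local velocity. The sketch below uses that simplified form with hypothetical Kaplan-turbine values and is not the specific model developed in the report.

```python
import numpy as np

def strike_prob(fish_length, n_blades, rpm, axial_velocity):
    """Deterministic blade-strike estimate (a commonly used simplified form;
    site-specific models add lean angle, radial position and blade geometry)."""
    passage_time = fish_length / axial_velocity      # seconds to traverse the blade plane
    blade_passing_rate = n_blades * rpm / 60.0       # blades passing per second
    return min(1.0, passage_time * blade_passing_rate)

# Deterministic point estimate with hypothetical Kaplan-turbine values
print(strike_prob(fish_length=0.15, n_blades=6, rpm=90, axial_velocity=8.0))

# Stochastic variant: sample fish length and local axial velocity to reflect
# variability in how fish approach the leading edge
rng = np.random.default_rng(1)
lengths = rng.normal(0.15, 0.03, 100_000).clip(0.05)
velocities = rng.normal(8.0, 1.5, 100_000).clip(2.0)
probs = np.minimum(1.0, (lengths / velocities) * 6 * 90 / 60.0)
print("stochastic mean strike probability:", probs.mean())
```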

  11. Evaluation of a micro-scale wind model's performance over realistic building clusters using wind tunnel experiments

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi

    2016-08-01

    The simulation performance over complex building clusters of a wind simulation model (the Wind Information Field Fast Analysis model, WIFFA) within a micro-scale air pollutant dispersion model system (the Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data, including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel experiment data and the NJU-FZ experiment data (Nanjing University-Fang Zhuang neighborhood wind tunnel experiment data). The results show that the wind model can reproduce the vortices triggered by urban buildings well, and that the flow patterns in urban street canyons and building clusters can also be represented. Due to the complex shapes of buildings and their distributions, the deviations of the simulations from the measurements are usually caused by the simplification of the building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared to traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields for a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min when run on a personal computer.

  12. An evaluation of 1D loss model collections for the off-design performance prediction of automotive turbocharger compressors

    NASA Astrophysics Data System (ADS)

    Harley, P.; Spence, S.; Early, J.; Filsinger, D.; Dietrich, M.

    2013-12-01

    Single-zone modelling is used to assess different collections of impeller 1D loss models. Three collections of loss models have been identified in the literature, and the background to each of these collections is discussed. Each collection is evaluated using three modern automotive-turbocharger-style centrifugal compressors, and comparisons of performance for each of the collections are made. An empirical data set taken from standard hot gas stand tests for each turbocharger is used as a baseline for comparison. Compressor range is predicted in this study: the impeller diffusion ratio is shown to be a useful method of predicting compressor surge in 1D, and choke is predicted using basic compressible flow theory. The compressor designer can use this as a guide to identify the most compatible collection of losses for turbocharger compressor design applications. The analysis indicates the most appropriate collection for the design of automotive turbocharger centrifugal compressors.
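
    The choke prediction mentioned above follows from basic compressible-flow theory: mass flow is capped at the choked value for an effective throat area. A sketch of that calculation, with a hypothetical throat area and ambient inlet conditions, is given below.

```python
import math

def choked_mass_flow(area_m2, p0_pa, T0_k, gamma=1.4, R=287.0):
    """Maximum (choked) mass flow through an effective throat area from basic
    compressible-flow theory, as used to cap the compressor map at choke."""
    return (area_m2 * p0_pa * math.sqrt(gamma / (R * T0_k))
            * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))

# Hypothetical inducer throat of a small turbocharger compressor at ambient inlet conditions
print(choked_mass_flow(area_m2=1.5e-3, p0_pa=101_325.0, T0_k=293.0), "kg/s")
```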

  13. GROUND-WATER MODEL TESTING: SYSTEMATIC EVALUATION AND TESTING OF CODE FUNCTIONALITY AND PERFORMANCE

    EPA Science Inventory

    Effective use of ground-water simulation codes as management decision tools requires the establishment of their functionality, performance characteristics, and applicability to the problem at hand. This is accomplished through application of a systematic code-testing protocol and...

  14. BPACK -- A computer model package for boiler reburning/co-firing performance evaluations. User's manual, Volume 1

    SciTech Connect

    Wu, K.T.; Li, B.; Payne, R.

    1992-06-01

    This manual presents and describes a package of computer models uniquely developed for boiler thermal performance and emissions evaluations by the Energy and Environmental Research Corporation. The model package permits predictions of boiler heat transfer, fuel combustion, and pollutant emissions for a number of practical boiler operations such as fuel switching, fuel co-firing, and reburning for NOx reduction. The models are adaptable to most boiler/combustor designs and can handle burner fuels in solid, liquid, gaseous, and slurried forms. The models are also capable of performing predictions for combustion applications involving gaseous-fuel reburning, and co-firing of solid/gas, liquid/gas, gas/gas, and slurry/gas fuels. The model package is named BPACK (Boiler Package) and consists of six computer codes, of which three are main computational codes and the other three are input codes. The three main codes are: (a) a two-dimensional furnace heat-transfer and combustion code; (b) a detailed chemical-kinetics code; and (c) a boiler convective passage code. This user's manual presents the computer model package in two volumes. Volume 1 describes in detail a number of topics of general interest to users, including the physical and chemical basis of the models, a complete description of the model applicability, options, input/output, and the default inputs. Volume 2 contains a detailed record of worked examples to assist users in applying the models and to illustrate the versatility of the codes.

  15. Evaluating the performance of a glacier erosion model applied to Peyto Glacier, Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Vogt, R.; Mlynowski, T. J.; Menounos, B.

    2013-12-01

    Glaciers are effective agents of erosion in many mountainous regions, but primary rates of erosion are difficult to quantify due to unknown conditions at the glacier bed. We develop a numerical model of subglacial erosion and passively couple it to a vertically integrated ice flow model (the UBC regional glaciation model). The model accounts for seasonal changes in water pressure at the glacier bed, which affect rates of abrasion and quarrying. We apply our erosion model to Peyto Glacier and compare estimates of glacier erosion to the mass of fine sediment contained in a lake immediately down valley from the glacier. A series of experiments with our model, and with models based on subglacial sliding rates, is run to explore model sensitivity to bedrock hardness, seasonal hydrology, changes in mass balance, and longer-term dimensional changes of the glacier. Our experiments show that, as expected, erosion rates are most sensitive to bedrock hardness and changes in glacier mass balance. Silt and clay contained in Peyto Lake primarily originate from the glacier, and represent sediments derived from abrasion and comminution of material produced by quarrying. The average specific sediment yield from the lake during the period AD 1917-1970 is 467 ± 190 Mg km-2 yr-1 and reaches a maximum of 928 Mg km-2 yr-1 in AD 1941. Converted to a specific sediment yield, modelled average abrasion and quarrying rates during the comparative period are 142 ± 44 Mg km-2 yr-1 and 1167 ± 213 Mg km-2 yr-1, respectively. Modelled quarrying accounts for approximately 85-95% of the erosion occurring beneath the glacier. The basal sliding model estimates combined abrasion and quarrying; during the comparative period, its estimated yields average 427 ± 136 Mg km-2 yr-1, lower than the combined abrasion and quarrying models. Both models predict maximum sediment yield when Peyto Glacier reached its maximum extent. The simplistic erosion model shows higher sensitivity to climate, as seen by accentuated sediment yield peaks

  16. Evaluating the Performance of a Coupled Distributed Hydrologic - Hydraulic Model for Flash Flood Modeling Using Multiple Precipitation Data Sources

    NASA Astrophysics Data System (ADS)

    Nguyen, P.; Sorooshian, S.; Hsu, K.; AghaKouchak, A.; Sanders, B. F.

    2013-12-01

    Flash floods are considered one of the most hazardous natural disasters, killing thousands of people and causing billions of US dollars in economic damage annually worldwide. Forecasting flash floods to provide accurate warnings in a timely manner is still challenging. At the Center for Hydrometeorology and Remote Sensing (CHRS) at the University of California, Irvine, we have been developing a coupled high-resolution distributed hydrologic-hydraulic system for flash flood modeling, which has been successfully tested for selected areas in the U.S. and has the potential to be implemented at the global scale. The system employs the National Weather Service's distributed hydrologic model (HL-RDHM) as a rainfall-runoff generator, and a high-resolution hydraulic model (BreZo) for simulating the channel and flood-plain flows realistically. In this research, we evaluate the system for flash flood warning using multiple precipitation sources (gauge, radar, satellite and forecast). A flash flood event occurring on June 11, 2010 in the Upper Little Missouri River watershed in Arkansas is used as a case study. The catchment was delineated into 123 sub-catchments based on 10 m Digital Elevation Model (DEM) topography data from the USGS. From the HL-RDHM surface runoff, 123 hydrographs were derived and connected as inputs to BreZo. The system was calibrated using NEXRAD Stage IV radar-based rainfall by tuning the roughness parameter in BreZo to best match the USGS discharge observation at the catchment outlet. The results show good agreement with the USGS gauge flow measurement (Nash-Sutcliffe coefficient = 0.91) when using Stage IV data. The system is being investigated with satellite-based precipitation data, rain gauge data and Global Forecast System (GFS) data, and the results will be reported in the presentation.

  17. Distributed Space Mission Design for Earth Observation Using Model-Based Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Nag, Sreeja; LeMoigne-Stewart, Jacqueline; Cervantes, Ben; DeWeck, Oliver

    2015-01-01

    Distributed Space Missions (DSMs) are gaining momentum in their application to earth observation missions owing to their unique ability to increase observation sampling in multiple dimensions. DSM design is a complex problem with many design variables, multiple objectives determining performance and cost, and emergent, often unexpected, behaviors. There are very few open-access tools available to explore the tradespace of variables, minimize cost and maximize performance for pre-defined science goals, and thereby select the optimal design. This paper presents a software tool that can generate multiple DSM architectures based on pre-defined design variable ranges and size those architectures in terms of pre-defined science and cost metrics. The tool will help a user select Pareto-optimal DSM designs based on design-of-experiments techniques. The tool will be applied to some earth observation examples to demonstrate its applicability in making key decisions between different performance metrics and cost metrics early in the design lifecycle.
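
    Selecting Pareto-optimal architectures from an enumerated tradespace reduces to filtering out dominated designs. The sketch below implements a simple non-dominated filter over hypothetical cost and performance values; it illustrates the selection step only and is not the tool described in the paper.

```python
import numpy as np

def pareto_front(costs):
    """Return a boolean mask of non-dominated designs.
    costs: (n_designs, n_objectives), every objective to be minimised
    (negate any 'maximise performance' objective before calling)."""
    costs = np.asarray(costs, dtype=float)
    efficient = np.ones(costs.shape[0], dtype=bool)
    for i in range(costs.shape[0]):
        if efficient[i]:
            # mark every design dominated by design i as inefficient
            dominated = np.all(costs >= costs[i], axis=1) & np.any(costs > costs[i], axis=1)
            efficient[dominated] = False
    return efficient

# Hypothetical DSM architectures: [mission cost in $M, negated coverage performance]
designs = np.array([[120, -0.80], [150, -0.95], [130, -0.70], [110, -0.60]])
print(pareto_front(designs))   # True where the architecture is Pareto-optimal
```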

  18. Performance evaluation of dual-frequency driving plate ultrasonic motor based on an analytical model.

    PubMed

    Pang, Yafei; Yang, Ming; Chen, Xuying; He, Wei; Li, Shiyang; Li, Chaodong

    2011-08-01

    An analytical model is presented in this paper to explain the effects of dual-frequency drive on a plate ultrasonic motor. The experimental prototype is a plate ultrasonic motor using single-phase asymmetric excitation, which can work under a single vibration mode or multiple vibration modes. Based on the linear superposition of vibrations with two different excitation frequencies, an analytical model is established using the classic Coulomb friction model, and the no-load rotation speed and maximum stall torque are deduced. Moreover, some crucial parameters such as the preload and the dead zone in the dual-frequency superposition model are identified or modified automatically by searching for the maximum correlation coefficient between simulation and experimental data under single-frequency drive. It is found that simulation and experimental results agree well when no excitation frequency component is at resonance. PMID:21859583

  19. Surface characteristics modeling and performance evaluation of urban building materials using LiDAR data.

    PubMed

    Li, Xiaolu; Liang, Yu

    2015-05-20

    Analysis of light detection and ranging (LiDAR) intensity data to extract surface features is of great interest in remote sensing research. One potential application of LiDAR intensity data is target classification. A new bidirectional reflectance distribution function (BRDF) model is derived for target characterization of rough and smooth surfaces. Based on the geometry of our coaxial full-waveform LiDAR system, the integration method is improved through coordinate transformation to establish the relationship between the BRDF model and the intensity data of the LiDAR. A series of experiments using typical urban building materials is conducted to validate the proposed BRDF model and integration method. The fitting results show that three parameters extracted from the proposed BRDF model can distinguish the urban building materials from the perspectives of roughness, specular reflectance, and diffuse reflectance. A comprehensive analysis of these parameters will help characterize surface features in a physically rigorous manner. PMID:26192511
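
    A generic way to extract diffuse, specular and roughness parameters from calibrated intensity-versus-incidence-angle data is to fit a simple two-lobe backscatter model by least squares. The sketch below does this with scipy's curve_fit on invented intensities; the model form is a simplified stand-in, not the BRDF model derived in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def brdf_backscatter(theta, k_d, k_s, sigma):
    """Generic monostatic backscatter model: Lambertian diffuse term plus a
    Gaussian specular lobe of angular width sigma (not the paper's exact BRDF)."""
    return k_d * np.cos(theta) + k_s * np.exp(-theta**2 / (2.0 * sigma**2))

# Hypothetical calibrated LiDAR intensities of one building material at several incidence angles
theta = np.radians([0, 5, 10, 20, 30, 40, 50])
intensity = np.array([0.95, 0.80, 0.62, 0.47, 0.40, 0.34, 0.28])

params, _ = curve_fit(brdf_backscatter, theta, intensity, p0=[0.4, 0.5, 0.1])
k_d, k_s, sigma = params
print(f"diffuse={k_d:.2f}, specular={k_s:.2f}, roughness width={np.degrees(sigma):.1f} deg")
```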

  20. Performance Standards and Evaluations in IR Test Collections: Cluster-Based Retrieval Models.

    ERIC Educational Resources Information Center

    Shaw, W. M., Jr.; And Others

    1997-01-01

    Describes a study that computed low performance standards for the group of queries in 13 information retrieval (IR) test collections. Derived from the random graph hypothesis, these standards represent the highest levels of retrieval effectiveness that can be obtained from meaningless clustering structures. (Author/LRW)

  1. Development and Evaluation of a Performance Modeling Flight Test Approach Based on Quasi Steady-State Maneuvers

    NASA Technical Reports Server (NTRS)

    Yechout, T. R.; Braman, K. B.

    1984-01-01

    The development, implementation and flight test evaluation of a performance modeling technique, which required a limited amount of quasi-steady-state flight test data to predict the overall one-g performance characteristics of an aircraft, are described. The concept definition phase of the program included development of: (1) the relationship for defining aerodynamic characteristics from quasi-steady-state maneuvers; (2) a simplified in-flight thrust and airflow prediction technique; (3) a flight test maneuvering sequence which efficiently provided definition of baseline aerodynamic and engine characteristics, including power effects on lift and drag; and (4) the algorithms necessary for cruise and flight trajectory predictions. Implementation of the concept included design of the overall flight test data flow, definition of instrumentation system and ground test requirements, development and verification of all applicable software, and consolidation of the overall requirements in a flight test plan.

  2. OBJECTIVE REDUCTION OF THE SPACE-TIME DOMAIN DIMENSIONALITY FOR EVALUATING MODEL PERFORMANCE

    EPA Science Inventory

    In the United States, photochemical air quality models are the principal tools used by governmental agencies to develop emission reduction strategies aimed at achieving National Ambient Air Quality Standards (NAAQS). Before they can be applied with confidence in a regulatory sett...

  3. Evaluation of the Logistic Model for GAC Performance in Water Treatment

    EPA Science Inventory

    Full-scale field measurement and rapid small-scale column test data from the Greater Cincinnati (Ohio) Water Works (GCWW) were used to calibrate and investigate the application of the logistic model for simulating breakthrough of total organic carbon (TOC) in granular activated c...
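
    A common logistic breakthrough form expresses the effluent-to-influent TOC ratio as a two-parameter logistic function of throughput. The sketch below fits such a curve to invented rapid small-scale column test data; the functional form and values are illustrative assumptions, not GCWW results.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_breakthrough(throughput, A, r):
    """Logistic breakthrough curve: effluent/influent TOC ratio versus
    throughput (e.g. bed volumes). A and r are fitted shape parameters."""
    return 1.0 / (1.0 + A * np.exp(-r * throughput))

# Hypothetical RSSCT data: bed volumes treated and measured TOC ratio C/C0
bed_volumes = np.array([0, 2000, 4000, 6000, 8000, 10000, 12000], dtype=float)
c_ratio = np.array([0.02, 0.08, 0.22, 0.45, 0.66, 0.81, 0.90])

(A, r), _ = curve_fit(logistic_breakthrough, bed_volumes, c_ratio,
                      p0=[50.0, 5e-4], bounds=(0, np.inf))
print(f"A = {A:.1f}, r = {r:.2e};  C/C0 at 15000 BV = {logistic_breakthrough(15000, A, r):.2f}")
```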

  4. Application of a boiler performance model to evaluate low-rank coal fired subcritical and supercritical boilers

    SciTech Connect

    Ahn, Y.K.; Buchanan, T.L.; Zaharchuk, R.

    1995-12-31

    A number of thermal drying processes that could be used to dry and upgrade Low-Rank Coals (LRCs) are under development. G/C evaluated these processes and selected the SynCoal process as the optimum process to dry the LRC. Initially, the evaluation was made on the basis of the cost of dried LRC, delivered to Korea, and later the evaluation was made on a cost-of-electricity (COE) basis. Two cases were evaluated: firing the dried LRC in an existing subcritical PC plant and in a new supercritical boiler. For the existing PC plant, Korea Electric Power Corporation's (KEPCO's) 270 MWe Honam plant was selected. A Boiler Performance Model (BPM) was used to evaluate performances of both subcritical and supercritical units for firing various coals. The results showed that upgraded Usibelli coal was marginally competitive due to its high mine-mouth cost, but Rosebud coal was very competitive due to its low mine-mouth cost. In these cases the coals were upgraded by using the SynCoal process. This report investigates the impact of tax incentives resulting from the Energy Policy Act of 1992 on the competitiveness of the upgraded Alaska Usibelli and Montana Rosebud coals for application to PC plants. The SynCoal process has been qualified by the Internal Revenue Service for tax benefits derived from the Energy Policy Act. The economic analyses include costs and sensitivity analyses for alternative ways of selling fines produced during the SynCoal process: briquetting fines and adding them to the finished product, or cooling fines and selling them to users at the same price as SynCoal product in the domestic market. These analyses included the effects of tax incentive when applicable.

  5. Rank and order: evaluating the performance of SNPs for individual assignment in a non-model organism.

    PubMed

    Storer, Caroline G; Pascal, Carita E; Roberts, Steven B; Templin, William D; Seeb, Lisa W; Seeb, James E

    2012-01-01

    Single nucleotide polymorphisms (SNPs) are valuable tools for ecological and evolutionary studies. In non-model species, the use of SNPs has been limited by the number of markers available. However, new technologies and decreasing technology costs have facilitated the discovery of a constantly increasing number of SNPs. With hundreds or thousands of SNPs potentially available, there is interest in comparing and developing methods for evaluating SNPs to create panels of high-throughput assays that are customized for performance, research questions, and resources. Here we use five different methods to rank 43 new SNPs and 71 previously published SNPs for sockeye salmon: F(ST), informativeness (I(n)), average contribution to principal components (LC), and the locus-ranking programs BELS and WHICHLOCI. We then tested the performance of these different ranking methods by creating 48- and 96-SNP panels of the top-ranked loci for each method and used empirical and simulated data to obtain the probability of assigning individuals to the correct population using each panel. All 96-SNP panels performed similarly and better than the 48-SNP panels except for the 96-SNP BELS panel. Among the 48-SNP panels, panels created from F(ST), I(n), and LC ranks performed better than panels formed using the top-ranked loci from the programs BELS and WHICHLOCI. The application of ranking methods to optimize panel performance will become more important as more high-throughput assays become available. PMID:23185290
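
    Of the ranking statistics listed, F_ST is the simplest to illustrate. The sketch below computes a basic Wright-style F_ST per locus from allele frequencies in two hypothetical populations and ranks loci by it; the published study used more refined estimators and additional criteria.

```python
import numpy as np

def fst_per_locus(p1, p2):
    """Wright-style F_ST for biallelic SNPs from allele frequencies in two populations
    (a simple estimator; assumes polymorphic loci so total heterozygosity > 0)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    p_bar = (p1 + p2) / 2.0
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2.0   # mean within-population heterozygosity
    h_t = 2 * p_bar * (1 - p_bar)                          # total heterozygosity
    return (h_t - h_s) / h_t

# Hypothetical allele frequencies at 5 loci in two sockeye populations
p_pop1 = [0.10, 0.50, 0.80, 0.30, 0.55]
p_pop2 = [0.60, 0.52, 0.20, 0.35, 0.50]
fst = fst_per_locus(p_pop1, p_pop2)
ranking = np.argsort(fst)[::-1]            # highest-FST loci first, e.g. for a 48-SNP panel
print(fst.round(3), ranking)
```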

  6. Proposal for a Conceptual Model for Evaluating Lean Product Development Performance: A Study of LPD Enablers in Manufacturing Companies

    NASA Astrophysics Data System (ADS)

    Osezua Aikhuele, Daniel; Mohd Turan, Faiz

    2016-02-01

    The instability in today's market and the emerging demand for mass-customized products are driving companies to seek cost-effective and time-efficient improvements in their production systems, and this has created real pressure to adopt new development architectures and operational parameters in order to remain competitive. Among the architectures adopted is the integration of lean thinking in the product development process. However, due to a lack of clear understanding of lean performance and its measurement, many companies are unable to implement and fully integrate the lean principle into their product development process. Without proper performance measurement, the performance level of the organizational value stream remains unknown and the specific areas of improvement related to the LPD program cannot be tracked, resulting in poor decision making in LPD implementation. This paper therefore presents a conceptual model for evaluating LPD performance by identifying and analysing the core existing LPD enablers: Chief Engineer, Cross-functional teams, Set-based engineering, Poka-yoke (mistake-proofing), Knowledge-based environment, Value-focused planning and development, Top management support, Technology, Supplier integration, Workforce commitment and Continuous improvement culture.

  7. Performance evaluation of a web-based system to exchange Electronic Health Records using Queueing model (M/M/1).

    PubMed

    de la Torre, Isabel; Díaz, Francisco Javier; Antón, Míriam; Martínez, Mario; Díez, José Fernando; Boto, Daniel; López, Miguel; Hornero, Roberto; López, María Isabel

    2012-04-01

    Response time measurement of a web-based system is essential to evaluate its performance. This paper presents a comparison of the response times of a web-based system for ophthalmologic Electronic Health Records (EHRs), TeleOftalWeb, using different database models: Oracle 10g, dbXML 2.0, Xindice 1.2, and eXist 1.1.1. Modelling the system with tandem queueing networks allows us to estimate the service times of the different components of the system (CPU, network and databases). To calculate the times associated with the different databases, benchmarking techniques are used. The final objective of the comparison is to choose the database system resulting in the lowest response time for TeleOftalWeb and to compare the obtained results using a new benchmark. PMID:20703642
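
    For a single M/M/1 service stage the mean response time is W = 1/(mu - lambda), and in a tandem network of such stages the per-stage times add. The sketch below applies that formula to hypothetical arrival and service rates; the rates are not those measured for TeleOftalWeb.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time W = 1/(mu - lambda) for an M/M/1 queue (requires lambda < mu).
    In a tandem network of M/M/1 stages, sum the per-stage response times."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical database service stage: 20 EHR requests/s served at 25 requests/s
lam, mu = 20.0, 25.0
rho = lam / mu                                  # utilisation
print(f"utilisation = {rho:.2f}, mean response time = {mm1_response_time(lam, mu)*1000:.0f} ms")
```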

  8. Instrument performance evaluation

    SciTech Connect

    Swinth, K.L.

    1993-03-01

    Deficiencies exist in both the performance and the quality of health physics instruments. Recognizing the implications of such deficiencies for the protection of workers and the public, in the early 1980s the DOE and the NRC encouraged the development of a performance standard and established a program to test a series of instruments against criteria in the standard. The purpose of the testing was to establish the practicality of the criteria in the standard, to determine the performance of a cross section of available instruments, and to establish a testing capability. Over 100 instruments were tested, resulting in a practical standard and an understanding of the deficiencies in available instruments. In parallel with the instrument testing, a value-impact study clearly established the benefits of implementing a formal testing program. An ad hoc committee also met several times to establish recommendations for the voluntary implementation of a testing program based on the studies and the performance standard. For several reasons, a formal program did not materialize. Ongoing tests and studies have supported the development of specific instruments and have helped specific clients understand the performance of their instruments. The purpose of this presentation is to trace the history of instrument testing to date and suggest the benefits of a centralized formal program.

  9. Experimental investigation of the SCC of Inconel 600 and a predictive model for evaluating service performance

    SciTech Connect

    Bulischeck, T.S.; Van Rooyen, D.

    1980-07-01

    A research program currently in progress at Brookhaven National Laboratory has produced quantitative data on the various factors which may influence the service life of Inconel 600 steam generator tubing. A basic model is presented to relate data produced using accelerated test methods to actual service conditions. The effects of temperature, environment, stress, strain and strain rate on a number of heats of mill annealed material and tubing with beneficial heat treatments are presented. The initiation and propagation stages of intergranular stress corrosion cracking are treated separately. Although crack initiation time is altered by different chemical environments or perhaps composition, the crack growth rates appear to be governed by a temperature dependent process.

  10. Performance evaluation in color face hallucination with error regression model in MPCA subspace method

    NASA Astrophysics Data System (ADS)

    Asavaskulkiet, Krissada

    2014-01-01

    This paper proposes a novel face super-resolution reconstruction (hallucination) technique for the YCbCr color space. The underlying idea is to learn with an error regression model and multi-linear principal component analysis (MPCA). Within the hallucination framework, color face images are represented in YCbCr space. To reduce the time complexity of color face hallucination, the color face images are naturally described as tensors (multi-linear arrays). In addition, error regression analysis is used to obtain an error estimate from the existing low-resolution (LR) image in tensor space. The learning process works from the errors made when reconstructing the training face images with MPCA, and then finds the relationship between input and error by regression analysis. The hallucination process uses the standard MPCA back-projection, after which the result is corrected with the error estimate. We show that our hallucination technique is suitable for color face images in both RGB and YCbCr space. By using the MPCA subspace with an error regression model, we can generate photorealistic color face images. Our approach is demonstrated by extensive experiments with high-quality hallucinated color faces. Comparison with existing algorithms shows the effectiveness of the proposed method.

  11. Performance Evaluation of Missing-Value Imputation Clustering Based on a Multivariate Gaussian Mixture Model

    PubMed Central

    Wu, Chuanli; Gao, Yuexia; Hua, Tianqi; Xu, Chenwu

    2016-01-01

    Background It is challenging to deal with mixture models when missing values occur in clustering datasets. Methods and Results We propose a dynamic clustering algorithm based on a multivariate Gaussian mixture model that efficiently imputes missing values to generate a “pseudo-complete” dataset. Parameters from different clusters and missing values are estimated according to the maximum likelihood implemented with an expectation-maximization algorithm, and multivariate individuals are clustered with Bayesian posterior probability. A simulation showed that our proposed method has a fast convergence speed and it accurately estimates missing values. Our proposed algorithm was further validated with Fisher’s Iris dataset, the Yeast Cell-cycle Gene-expression dataset, and the CIFAR-10 images dataset. The results indicate that our algorithm offers highly accurate clustering, comparable to that using a complete dataset without missing values. Furthermore, our algorithm resulted in a lower misjudgment rate than both clustering algorithms with missing data deleted and with missing-value imputation by mean replacement. Conclusion We demonstrate that our missing-value imputation clustering algorithm is feasible and superior to both of these other clustering algorithms in certain situations. PMID:27552203
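
    The conditional-expectation step that underlies this kind of imputation can be illustrated with a single multivariate Gaussian fitted to the complete rows: missing entries are replaced by their conditional means given the observed entries. The sketch below is that one-component simplification, not the full mixture-model EM algorithm of the paper.

```python
import numpy as np

def gaussian_conditional_impute(X):
    """Impute NaNs with the conditional mean under a single multivariate Gaussian
    fitted to the complete rows (a one-component simplification of a
    Gaussian-mixture EM imputation)."""
    X = np.array(X, dtype=float)
    complete = X[~np.isnan(X).any(axis=1)]
    mu, cov = complete.mean(axis=0), np.cov(complete, rowvar=False)
    for row in X:
        miss = np.isnan(row)
        if miss.any() and not miss.all():
            obs = ~miss
            cov_mo = cov[np.ix_(miss, obs)]
            cov_oo = cov[np.ix_(obs, obs)]
            # conditional mean: mu_m + Sigma_mo Sigma_oo^-1 (x_o - mu_o)
            row[miss] = mu[miss] + cov_mo @ np.linalg.solve(cov_oo, row[obs] - mu[obs])
    return X

# Hypothetical 2-feature dataset with one missing value
data = [[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, np.nan]]
print(gaussian_conditional_impute(data))
```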

  12. Evaluating the Performance of a Climate-Driven Mortality Model during Heat Waves and Cold Spells in Europe

    PubMed Central

    Lowe, Rachel; Ballester, Joan; Creswick, James; Robine, Jean-Marie; Herrmann, François R.; Rodó, Xavier

    2015-01-01

    The impact of climate change on human health is a serious concern. In particular, changes in the frequency and intensity of heat waves and cold spells are of high relevance in terms of mortality and morbidity. This demonstrates the urgent need for reliable early-warning systems to help authorities prepare and respond to emergency situations. In this study, we evaluate the performance of a climate-driven mortality model to provide probabilistic predictions of exceeding emergency mortality thresholds for heat wave and cold spell scenarios. Daily mortality data corresponding to 187 NUTS2 regions across 16 countries in Europe were obtained from 1998–2003. Data were aggregated to 54 larger regions in Europe, defined according to similarities in population structure and climate. Location-specific average mortality rates, at given temperature intervals over the time period, were modelled to account for the increased mortality observed during both high and low temperature extremes and differing comfort temperatures between regions. Model parameters were estimated in a Bayesian framework, in order to generate probabilistic simulations of mortality across Europe for time periods of interest. For the heat wave scenario (1–15 August 2003), the model was successfully able to anticipate the occurrence or non-occurrence of mortality rates exceeding the emergency threshold (75th percentile of the mortality distribution) for 89% of the 54 regions, given a probability decision threshold of 70%. For the cold spell scenario (1–15 January 2003), mortality events in 69% of the regions were correctly anticipated with a probability decision threshold of 70%. By using a more conservative decision threshold of 30%, this proportion increased to 87%. Overall, the model performed better for the heat wave scenario. By replacing observed temperature data in the model with forecast temperature, from state-of-the-art European forecasting systems, probabilistic mortality predictions could

  13. Evaluating the performance of a climate-driven mortality model during heat waves and cold spells in Europe.

    PubMed

    Lowe, Rachel; Ballester, Joan; Creswick, James; Robine, Jean-Marie; Herrmann, François R; Rodó, Xavier

    2015-02-01

    The impact of climate change on human health is a serious concern. In particular, changes in the frequency and intensity of heat waves and cold spells are of high relevance in terms of mortality and morbidity. This demonstrates the urgent need for reliable early-warning systems to help authorities prepare and respond to emergency situations. In this study, we evaluate the performance of a climate-driven mortality model to provide probabilistic predictions of exceeding emergency mortality thresholds for heat wave and cold spell scenarios. Daily mortality data corresponding to 187 NUTS2 regions across 16 countries in Europe were obtained from 1998-2003. Data were aggregated to 54 larger regions in Europe, defined according to similarities in population structure and climate. Location-specific average mortality rates, at given temperature intervals over the time period, were modelled to account for the increased mortality observed during both high and low temperature extremes and differing comfort temperatures between regions. Model parameters were estimated in a Bayesian framework, in order to generate probabilistic simulations of mortality across Europe for time periods of interest. For the heat wave scenario (1-15 August 2003), the model was successfully able to anticipate the occurrence or non-occurrence of mortality rates exceeding the emergency threshold (75th percentile of the mortality distribution) for 89% of the 54 regions, given a probability decision threshold of 70%. For the cold spell scenario (1-15 January 2003), mortality events in 69% of the regions were correctly anticipated with a probability decision threshold of 70%. By using a more conservative decision threshold of 30%, this proportion increased to 87%. Overall, the model performed better for the heat wave scenario. By replacing observed temperature data in the model with forecast temperature, from state-of-the-art European forecasting systems, probabilistic mortality predictions could
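
    The verification statistic reported here (the proportion of regions where the warning decision matched the observed occurrence or non-occurrence) can be computed as in the sketch below; the probabilities and outcomes are invented for illustration.

```python
import numpy as np

def proportion_correct(prob_exceed, observed_exceed, decision_threshold=0.7):
    """Fraction of regions where issuing a warning (predicted probability of exceeding
    the emergency mortality threshold >= decision threshold) matches the observed
    occurrence or non-occurrence."""
    warn = np.asarray(prob_exceed) >= decision_threshold
    obs = np.asarray(observed_exceed, dtype=bool)
    return np.mean(warn == obs)

# Hypothetical probabilities for 8 regions during a heat-wave episode and the outcomes
p = [0.92, 0.85, 0.40, 0.75, 0.10, 0.66, 0.95, 0.20]
o = [True, True, False, True, False, True, True, False]
for thr in (0.3, 0.7):
    print(f"decision threshold {thr:.1f}: {proportion_correct(p, o, thr):.0%} correct")
```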

  14. EPICS performance evaluation

    SciTech Connect

    Botlo, M.; Jagielski, M.; Romero, A.

    1993-09-01

    The authors report on the software architecture, some CPU and memory issues, and the performance of the Experimental Physics and Industrial Control System (EPICS). Specifically, they subject each EPICS software layer to a series of tests and extract quantitative results that should be useful to system architects planning to use EPICS for control applications.

  15. IR DIAL performance modeling

    SciTech Connect

    Sharlemann, E.T.

    1994-07-01

    We are developing a DIAL performance model for CALIOPE at LLNL. The intent of the model is to provide quick and interactive parameter sensitivity calculations with immediate graphical output. A brief overview of the features of the performance model is given, along with an example of performance calculations for a non-CALIOPE application.

  16. Evaluation of an ensemble of regional climate model simulations over South America driven by the ERA-Interim reanalysis: model performance and uncertainties

    NASA Astrophysics Data System (ADS)

    Solman, Silvina A.; Sanchez, E.; Samuelsson, P.; da Rocha, R. P.; Li, L.; Marengo, J.; Pessacg, N. L.; Remedio, A. R. C.; Chou, S. C.; Berbery, H.; Le Treut, H.; de Castro, M.; Jacob, D.

    2013-09-01

    The capability of a set of 7 coordinated regional climate model simulations performed in the framework of the CLARIS-LPB Project in reproducing the mean climate conditions over the South American continent has been evaluated. The model simulations were forced by the ERA-Interim reanalysis dataset for the period 1990-2008 on a grid resolution of 50 km, following the CORDEX protocol. The analysis was focused on evaluating the reliability of simulating mean precipitation and surface air temperature, which are the variables most commonly used for impact studies. Both the common features and the differences among individual models have been evaluated and compared against several observational datasets. In this study the ensemble bias and the degree of agreement among individual models have been quantified. The evaluation was focused on the seasonal means, the area-averaged annual cycles and the frequency distributions of monthly means over target sub-regions. Results show that the regional climate model ensemble reproduces these features adequately well, with biases mostly within ±2 °C and ±20 % for temperature and precipitation, respectively. However, the multi-model ensemble depicts larger biases and larger uncertainty (as defined by the standard deviation of the models) over tropical regions compared with subtropical regions. Though some systematic biases were detected, particularly over the La Plata Basin region, such as an underestimation of rainfall during winter months and an overestimation of temperature during summer months, every model shares a similar behavior and, consequently, the uncertainty in simulating current climate conditions is low. Every model is able to capture the variety in the shape of the frequency distribution for both temperature and precipitation along the South American continent. Differences among individual models and observations revealed the nature of individual model biases, showing either a shift in the distribution or an overestimation

  17. Performance Objectives: Foundation for Evaluation

    ERIC Educational Resources Information Center

    McKinney, Floyd L.; Mannebach, Alfred J.

    1970-01-01

    Only when agricultural educators and others evaluate agricultural education programs on the basis of student's performance in relation to valid and realistic performance objectives will progress be made in educational program improvement. (Authors)

  18. Performance evaluation of the Enraf-Nonius Model 872 radar gage

    SciTech Connect

    Peters, T.J.; Park, W.R.

    1992-12-01

    There are indications that the Enraf-Nonius Radar Gage installed in Tank 241-SY-101 may not be providing an accurate reading of the true surface level in the waste tank. The Pacific Northwest Laboratory (PNL) performed an initial study to determine the effect of the following items on the distance read by the gage: the tank riser; material permittivity and conductivity; foam; the proportion of supernatant to solid material in the field of view of the instrument; the physical geometry of the supernatant and solid material changing in the field of view with respect to time; and varying water content in the solid material. The results of the tests indicate that the distance measured by the radar gage is affected by the permittivity, conductivity, and angle of the target surface. These parameters affect the complex input impedance of the signal received by the radar gage to measure the distance to the target. In Tank 101-SY, the radar gage is placed on top of a 12 in. diameter riser. The riser affects the field of view of the instrument, and a much smaller target surface is detected when the radar beam propagates through a riser. In addition, the riser acts as a waveguide, and standing waves are enhanced between the target surface and the radar gage. The result is a change in the level measured by the radar gage due to changing properties of the target surface even when the distance to the target does not change. The test results indicate that the radar will not detect dry crust or foam. However, if the crust or foam is stirred so that it becomes wet, then it becomes detectable. The level read using the radar gage decreased as the moisture in the crust or foam evaporated.

  19. CMIP5 Global Climate Model Performance Evaluation and Climate Scenario Development over the South-Central United States

    NASA Astrophysics Data System (ADS)

    Rosendahl, D. H.; Rupp, D. E.; Mcpherson, R. A.; Moore, B., III

    2015-12-01

    Future climate change projections from Global Climate Models (GCMs) are the primary drivers of regional downscaling and impacts research - from which relevant information for stakeholders is generated at the regional and local levels. Therefore understanding uncertainties in GCMs is a fundamental necessity if the scientific community is to provide useful and reliable future climate change information that can be utilized by end users and decision makers. Two different assessments of the Coupled Model Intercomparison Project Phase 5 (CMIP5) GCM ensemble were conducted for the south-central United States. The first was a performance evaluation over the historical period for metrics of near surface meteorological variables (e.g., temperature, precipitation) and system-based phenomena, which include large-scale processes that can influence the region (e.g., low-level jet, ENSO). These metrics were used to identify a subset of models of higher performance across the region which were then used to constrain future climate change projections. A second assessment explored climate scenario development where all model climate change projections were assumed equally likely and future projections with the highest impact were identified (e.g., temperature and precipitation combinations of hottest/driest, hottest/wettest, and highest variability). Each of these assessments identify a subset of models that may prove useful to regional downscaling and impacts researchers who may be restricted by the total number of GCMs they can utilize. Results from these assessments will be provided as well as a discussion on when each would be useful and appropriate to use.

  20. Study on Comprehensive Evaluation Model of Wind Farm Operation Performances Based on the Multi-Level Fuzzy Method

    NASA Astrophysics Data System (ADS)

    Zhao, Junyi; Huang, Yuanchao; Yang, Chaoying; Han, Yu

    In order to evaluate the operational safety performance of grid-connected wind farms comprehensively and objectively, a comprehensive evaluation index system is built and a multi-level fuzzy synthetic evaluation method is applied. First, a judgment matrix is constructed to determine the weight of each index. Then, membership functions are built from the fuzzy boundaries of each factor to form the fuzzy evaluation matrix. Finally, the layer-by-layer composition of the weights with the membership degrees of each evaluation grade yields a quantitative comprehensive evaluation of wind farm operation performance and a relative ranking of the wind farms. In the example analysis, three typical wind farms are evaluated comprehensively to verify the effectiveness and feasibility of the method.
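
    The layer-by-layer composition at the heart of multi-level fuzzy evaluation is a weighted combination of membership vectors, first within each criterion group and then across groups, followed by a maximum-membership grading. The sketch below shows that mechanism with invented weights and membership degrees.

```python
import numpy as np

# Fuzzy comprehensive evaluation sketch with hypothetical numbers.
# Grades: [excellent, good, fair, poor]; each row of R is the membership vector of one index.
grades = ["excellent", "good", "fair", "poor"]

# Level 1: two hypothetical criterion groups (e.g. equipment safety, grid-interaction safety)
R1 = np.array([[0.5, 0.3, 0.2, 0.0],    # index 1 membership degrees
               [0.2, 0.5, 0.2, 0.1]])   # index 2 membership degrees
R2 = np.array([[0.1, 0.4, 0.4, 0.1],
               [0.3, 0.4, 0.2, 0.1]])
w1 = np.array([0.6, 0.4])               # index weights (e.g. from a judgment matrix)
w2 = np.array([0.5, 0.5])

B1, B2 = w1 @ R1, w2 @ R2               # weighted-average fuzzy composition within each group

# Level 2: combine the criterion-level results with the criterion weights
W = np.array([0.55, 0.45])
B = W @ np.vstack([B1, B2])
print("membership vector:", B.round(3), "->", grades[int(np.argmax(B))])
```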

  1. Evaluating and Improving Teacher Performance.

    ERIC Educational Resources Information Center

    Manatt, Richard P.

    This workbook, coordinated with Manatt Teacher Performance Evaluation (TPE) workshops, summarizes the large-group presentation in sequence with the transparencies used. The first four modules of the workbook deal with the state of the art of evaluating and improving teacher performance; the development of the TPE system, including selection of…

  2. Evaluation of rural-air-quality simulation models. Addendum B: graphical display of model performance using the Clifty Creek data base

    SciTech Connect

    Cox, W.M.; Moss, G.K.; Tikvart, J.A.; Baldridge, E.

    1985-08-01

    The addendum uses a variety of graphical formats to display and compare the performance of four rural models using the Clifty Creek data base. The four models included MPTER (EPA), PPSP (Martin Marietta Corp.), MPSDM (ERT), and TEM-8A (Texas Air Control Board). Graphic displays were developed and used for both operational evaluation and diagnostic evaluation purposes. Plots of bias of the average vs station downwind distance by stability and wind-speed class revealed clear patterns of accentuated underprediction and overprediction for stations closer to the source. PPSP showed a tendency for decreasing overprediction with increasing station distance for all meteorological subsets while the other three models showed varying patterns depending on the meteorological class. Diurnal plots of the bias of the average vs hour of the day revealed a pattern of underestimation during the nocturnal hours and overestimation during hours of strong solar radiation with MPSDM and MPTER showing the least overall bias throughout the day.

  3. Evaluating, interpreting, and communicating performance of hydrologic/water quality models considering intended use: A review and recommendations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Previous publications have outlined recommended practices for hydrologic and water quality (H/WQ) modeling, but none have formulated comprehensive guidelines for the final stage of modeling applications, namely evaluation, interpretation, and communication of model results and the consideration of t...

  4. Multiprocessor performance modeling with ADAS

    NASA Technical Reports Server (NTRS)

    Hayes, Paul J.; Andrews, Asa M.

    1989-01-01

    A graph managing strategy referred to as the Algorithm to Architecture Mapping Model (ATAMM) appears useful for the time-optimized execution of application algorithm graphs in embedded multiprocessors and for the performance prediction of graph designs. This paper reports the modeling of ATAMM in the Architecture Design and Assessment System (ADAS) to make an independent verification of ATAMM's performance prediction capability and to provide a user framework for the evaluation of arbitrary algorithm graphs. Following an overview of ATAMM and its major functional rules are descriptions of the ADAS model of ATAMM, methods to enter an arbitrary graph into the model, and techniques to analyze the simulation results. The performance of a 7-node graph example is evaluated using the ADAS model and verifies the ATAMM concept by substantiating previously published performance results.

  5. Photovoltaic array performance model.

    SciTech Connect

    Kratochvil, Jay A.; Boyson, William Earl; King, David L.

    2004-08-01

    This document summarizes the equations and applications associated with the photovoltaic array performance model developed at Sandia National Laboratories over the last twelve years. Electrical, thermal, and optical characteristics for photovoltaic modules are included in the model, and the model is designed to use hourly solar resource and meteorological data. The versatility and accuracy of the model have been validated for flat-plate modules (all technologies) and for concentrator modules, as well as for large arrays of modules. Applications include system design and sizing, 'translation' of field performance measurements to standard reporting conditions, system performance optimization, and real-time comparison of measured versus expected system performance.
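
    The Sandia array performance model itself involves many empirically determined module coefficients. As a much-simplified illustration of the same idea (power scaling with irradiance plus a linear cell-temperature correction), the following sketch uses a PVWatts-style relation and an NOCT-based cell temperature; the coefficients and inputs are hypothetical and this is not the Sandia model.

```python
def pv_power_simplified(irradiance_w_m2, cell_temp_c, p_dc0_w, gamma_per_c=-0.0045):
    """Much-simplified flat-plate array model (PVWatts-style): DC power scales with
    plane-of-array irradiance and a linear temperature coefficient."""
    return p_dc0_w * (irradiance_w_m2 / 1000.0) * (1.0 + gamma_per_c * (cell_temp_c - 25.0))

def cell_temperature(irradiance_w_m2, ambient_c, noct_c=45.0):
    """NOCT-based cell temperature estimate from hourly ambient data."""
    return ambient_c + (noct_c - 20.0) / 800.0 * irradiance_w_m2

# Hypothetical hour: 850 W/m2, 30 C ambient, 4 kW array at reference conditions
t_cell = cell_temperature(850.0, 30.0)
print(f"cell temp = {t_cell:.1f} C, DC power = {pv_power_simplified(850.0, t_cell, 4000.0):.0f} W")
```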

  6. Quantification of leachate discharged to groundwater using the water balance method and the hydrologic evaluation of landfill performance (HELP) model.

    PubMed

    Alslaibi, Tamer M; Abustan, Ismail; Mogheir, Yunes K; Afifi, Samir

    2013-01-01

    Landfills are a source of groundwater pollution in the Gaza Strip. This study focused on the Deir Al Balah landfill, which is a unique sanitary landfill site in the Gaza Strip (i.e., it has a lining system and a leachate recirculation system). The objective of this article is to assess the quantity of leachate generated and its percolation to the groundwater aquifer at a specific site, using (i) the hydrologic evaluation of landfill performance (HELP) model and (ii) the water balance method (WBM). The results show that, when using the HELP model, the average volume of leachate discharged from the Deir Al Balah landfill during the period 1997 to 2007 was around 6800 m3/year, while the average volume of leachate percolating through the clay layer was 550 m3/year, which represents around 8% of the generated leachate. The WBM indicated that the average volume of leachate discharged from the Deir Al Balah landfill during the same period was around 7660 m3/year, about half of which comes from the moisture content of the waste, while the remainder comes from the infiltration of precipitation and re-circulated leachate. The quantities of leachate reaching groundwater estimated by these two methods were therefore very close. However, compared with the measured leachate quantity, these results were overestimates and indicated a dangerous threat to the groundwater aquifer, as there was no separation between municipal, hazardous and industrial wastes in the area. PMID:23148014
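
    The water balance method amounts to book-keeping of inflows and outflows for the landfill cell. A minimal sketch with invented annual volumes is shown below; the HELP model resolves the same balance layer by layer with site weather data.

```python
def leachate_water_balance(precip_m3, evapotranspiration_m3, runoff_m3,
                           storage_change_m3, recirculated_m3=0.0, waste_moisture_m3=0.0):
    """Simplified annual water balance for a lined landfill cell:
    leachate = inflows (precipitation + recirculation + moisture released from waste)
               - evapotranspiration - surface runoff - change in moisture storage."""
    inflow = precip_m3 + recirculated_m3 + waste_moisture_m3
    return inflow - evapotranspiration_m3 - runoff_m3 - storage_change_m3

# Hypothetical annual volumes (m3) for a single landfill cell
print(leachate_water_balance(precip_m3=12000, evapotranspiration_m3=6500, runoff_m3=800,
                             storage_change_m3=1200, recirculated_m3=1500,
                             waste_moisture_m3=3000),
      "m3/year of leachate")
```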

  7. Evaluating Model Performance of an Ensemble-based Chemical Data Assimilation System During INTEX-B Field Mission

    NASA Technical Reports Server (NTRS)

    Arellano, A. F., Jr.; Raeder, K.; Anderson, J. L.; Hess, P. G.; Emmons, L. K.; Edwards, D. P.; Pfister, G. G.; Campos, T. L.; Sachse, G. W.

    2007-01-01

    We present a global chemical data assimilation system using a global atmosphere model, the Community Atmosphere Model (CAM3) with simplified chemistry, and the Data Assimilation Research Testbed (DART) assimilation package. DART is a community software facility for assimilation studies using the ensemble Kalman filter approach. Here, we apply the assimilation system to constrain global tropospheric carbon monoxide (CO) by assimilating meteorological observations of temperature and horizontal wind velocity and satellite CO retrievals from the Measurement of Pollution in the Troposphere (MOPITT) satellite instrument. We verify the system performance using independent CO observations taken on board the NSF/NCAR C-130 and NASA DC-8 aircraft during the April 2006 part of the Intercontinental Chemical Transport Experiment (INTEX-B). Our evaluations show that MOPITT data assimilation provides significant improvements in terms of capturing the observed CO variability relative to no MOPITT assimilation (i.e. the correlation improves from 0.62 to 0.71, significant at 99% confidence). The assimilation provides evidence of a median CO loading of about 150 ppbv at 700 hPa over the NE Pacific during April 2006. This is marginally higher than the modeled CO with no MOPITT assimilation (~140 ppbv). Our ensemble-based estimates of model uncertainty also show model overprediction over the source region (i.e. China) and underprediction over the NE Pacific, suggesting model errors that cannot be readily explained by emissions alone. These results have important implications for improving regional chemical forecasts and for inverse modeling of CO sources, and further demonstrate the utility of the assimilation system in comparing non-coincident measurements, e.g. comparing satellite retrievals of CO with in-situ aircraft measurements. The work described above also brought to light several shortcomings of the data assimilation approach for CO profiles. Because of the limited vertical
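
    The ensemble Kalman filter analysis step used by DART-style systems can be sketched in a few lines: the forecast ensemble is corrected toward (perturbed) observations with a gain built from the ensemble covariance. The toy example below, with a three-variable CO state and an averaging-kernel-like observation operator, only illustrates the update equation and is not the CAM3/DART configuration.

```python
import numpy as np

def enkf_update(X_f, y_obs, H, obs_error_std, rng):
    """Stochastic ensemble Kalman filter analysis step.
    X_f: forecast ensemble (n_state, n_members); y_obs: observations (n_obs,);
    H: linear observation operator (n_obs, n_state)."""
    n_obs, n_members = len(y_obs), X_f.shape[1]
    R = np.diag(np.full(n_obs, obs_error_std ** 2))
    P_f = np.cov(X_f)                                  # ensemble forecast covariance
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)   # Kalman gain
    Y = y_obs[:, None] + obs_error_std * rng.standard_normal((n_obs, n_members))
    return X_f + K @ (Y - H @ X_f)                     # analysis ensemble

# Tiny illustration: 3-level CO state, 10 members, one MOPITT-like column observation
rng = np.random.default_rng(0)
X_f = 140.0 + 10.0 * rng.standard_normal((3, 10))      # ppbv
H = np.array([[0.2, 0.5, 0.3]])                        # averaging-kernel-like weights
X_a = enkf_update(X_f, y_obs=np.array([155.0]), H=H, obs_error_std=5.0, rng=rng)
print(X_a.mean(axis=1))
```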

  8. Evaluation of the performance of the WRF 1-Dimensional Lake model over the East Africa Great Lakes

    NASA Astrophysics Data System (ADS)

    Gudoshava, M.; Semazzi, F. H. M.

    2015-12-01

    This study seeks to investigate the performance of the 1-dimensional lake model coupled to WRF over East Africa. The African Great Lakes exert a strong influence on the climate of the region, and a number of studies have shown how the lakes influence the circulation and the total precipitation over the region. The lakes have highly variable depths, with Lake Victoria having an average depth of 40 m and Lake Tanganyika a depth of 450 m. The lake model for WRF was tested and calibrated for the Great Lakes; however, it has not been tested for tropical lakes. We hypothesize that the inclusion of a 1-dimensional lake model will reduce the precipitation bias compared to the WRF model without the lake model, and that initializing the lake temperature with a vertical temperature profile that closely resembles the one observed in these lakes will greatly reduce the spin-up time. The simulations utilized three nested domains at 36, 12 and 4 km. The 4 km domain is centered over the Lake Victoria Basin, while the 12 km domain includes all the lakes in East Africa. The Tropical Rainfall Measuring Mission (TRMM) datasets are used in evaluating the precipitation, and the following statistics were calculated: root mean square error, standard deviation of the model and observations, and mean bias. The results show that the use of the 1-dimensional lake model improves the precipitation over the region considerably compared to an uncoupled model. The asymmetrical rainfall pattern is evident in the simulations. However, using the default vertical temperature profile with a three-month spin-up is not adequate to transfer heat to the bottom of the lake, and hence the temperatures there remain very cold. A nine-month spin-up improves the lake surface temperatures and the lake temperatures at the bottom, while a two-year spin-up greatly improves the lake surface temperatures and hence the total precipitation over the lake. Thus a longer spin-up time allows for adequate heat transfer in the lake. Initializing the

  9. Evaluating Administrative/Supervisory Performance.

    ERIC Educational Resources Information Center

    Educational Research Service, Arlington, VA.

    This is a report on the third survey conducted on procedures for evaluating the performance of administrators and supervisors in local school systems. A questionnaire was sent to school systems enrolling 25,000 or more pupils, and results indicated that 84 of the 154 responding systems have formal evaluation procedures. Tables and discussions of…

  10. The clinical performance evaluation of novel protein chips for eleven biomarkers detection and the diagnostic model study

    PubMed Central

    Luo, Yuan; Zhu, Xu; Zhang, Pengjun; Shen, Qian; Wang, Zi; Wen, Xinyu; Wang, Ling; Gao, Jing; Dong, Jin; Yang, Caie; Wu, Tangming; Zhu, Zheng; Tian, Yaping

    2015-01-01

    We aimed to develop and validate two novel protein chips, which are based on microarray chemiluminescence immunoassay and can simultaneously detect 11 biomarkers, and then to evaluate their clinical diagnostic value by comparison with traditional methods. The protein chips were evaluated for limit of detection, specificity, common interferences, linearity, precision and accuracy. The 11 biomarkers were simultaneously detected by traditional methods and protein chips in 3683 samples, which included 1723 cancer patients, 1798 benign disease patients and 162 healthy controls. After assay validation, the protein chips demonstrated high sensitivity, high specificity, good linearity and low imprecision, and were free of common interferences. Compared with the traditional methods, the protein chips showed good correlation in the detection of all 13 kinds of biomarkers (r≥0.935, P<0.001). For specific cancer detection, there were no statistically significant differences between the traditional methods and the novel protein chips, except that the male protein chip showed significantly better diagnostic value for NSE detection (P=0.004) but significantly worse value for pro-GRP detection (P=0.012), while the female chip showed significantly better diagnostic value for pro-GRP detection (P=0.005). Furthermore, both the male and female multivariate diagnostic models had significantly better diagnostic value than single detection of PGI, PGII, pro-GRP, NSE and CA125 (P<0.05). In addition, the male models had significantly better diagnostic value than single CA199 and free-PSA (P<0.05), while the female models showed significantly better diagnostic value than single CA724 and β-HCG (P<0.05). For total disease or cancer detection, the AUC of the multivariate logistic regression for male and female disease detection was 0.981 (95% CI: 0.975-0.987) and 0.836 (95% CI: 0.798-0.874), respectively, while that for total cancer detection was 0.691 (95% CI: 0.666-0.717) and 0.753 (95% CI: 0.731-0.775), respectively. The new

  11. Performability evaluation of the SIFT computer

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.; Furchtgott, D. G.; Wu, L. T.

    1979-01-01

    Performability modeling and evaluation techniques are applied to the SIFT computer as it might operate in the computational environment of an air transport mission. User-visible performance of the total system (SIFT plus its environment) is modeled as a random variable taking values in a set of levels of accomplishment. These levels are defined in terms of four attributes of total system behavior: safety, no change in mission profile, no operational penalties, and no economic loss. The base model is a stochastic process whose states describe the internal structure of SIFT as well as relevant conditions of the environment. Base model state trajectories are related to accomplishment levels via a capability function which is formulated in terms of a 3-level model hierarchy. Performability evaluation algorithms are then applied to determine the performability of the total system for various choices of computer and environment parameter values. Numerical results of those evaluations are presented and, in conclusion, some implications of this effort are discussed.

  12. Design and performance evaluation of a simplified dynamic model for combined sewer overflows in pumped sewer systems

    NASA Astrophysics Data System (ADS)

    van Daal-Rombouts, Petra; Sun, Siao; Langeveld, Jeroen; Bertrand-Krajewski, Jean-Luc; Clemens, François

    2016-07-01

    Optimisation or real time control (RTC) studies in wastewater systems increasingly require rapid simulations of sewer systems in extensive catchments. To reduce the simulation time, calibrated simplified models are applied, with their performance generally judged by the goodness of fit achieved in calibration. In this research, the performance of three simplified models and a full hydrodynamic (FH) model for two catchments is compared based on the correct determination of CSO event occurrences and of the total volumes discharged to the surface water. Simplified model M1 consists of a rainfall runoff outflow (RRO) model only. M2 combines the RRO model with a static reservoir model for the sewer behaviour. M3 comprises the RRO model and a dynamic reservoir model, whose dynamic reservoir characteristics were derived from FH model simulations. It was found that M2 and M3 are able to describe the sewer behaviour of the catchments, contrary to M1. The preferred model structure depends on the quality of the information (geometrical database and monitoring data) available for the design and calibration of the model. Finally, calibrated simplified models are shown to be preferable to uncalibrated FH models when performing optimisation or RTC studies.
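    As a rough illustration of the static-reservoir idea behind a model such as M2 (not the authors' implementation), a minimal sketch might track in-sewer storage filled by runoff inflow, emptied at a fixed pumped capacity, and spilling as CSO once the storage is exceeded; all parameter values below are hypothetical.

```python
def static_reservoir_cso(inflow, storage_max, pump_capacity, dt=3600.0):
    """Minimal static reservoir model of a pumped sewer catchment.

    inflow        : runoff inflow per time step (m^3/s), e.g. from an RRO-type model
    storage_max   : available in-sewer storage volume (m^3)
    pump_capacity : constant pumped outflow to the treatment plant (m^3/s)
    dt            : time step length (s)

    Returns the CSO (overflow) volume per step (m^3).
    """
    storage, overflows = 0.0, []
    for q_in in inflow:
        storage = max(storage + (q_in - pump_capacity) * dt, 0.0)
        spill = max(storage - storage_max, 0.0)   # excess discharges to surface water
        storage -= spill
        overflows.append(spill)
    return overflows

# Hypothetical storm event on 1-hour steps
event = [0.2, 0.8, 2.5, 3.0, 1.2, 0.4, 0.1]
spills = static_reservoir_cso(event, storage_max=5000.0, pump_capacity=0.6)
print(sum(spills), "m^3 discharged via CSO over", sum(1 for s in spills if s > 0), "time steps")
```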

  13. The use of multilevel models to evaluate sources of variation in reproductive performance in dairy cattle in Reunion Island.

    PubMed

    Dohoo, I R; Tillard, E; Stryhn, H; Faye, B

    2001-07-19

    Sources of variation in measures of reproductive performance in dairy cattle were evaluated using data collected from 3207 lactations in 1570 cows in 50 herds from five geographic regions of Reunion Island (located off the east coast of Madagascar). Three continuously distributed reproductive parameters (intervals from calving-to-conception, calving-to-first-service and first-service-to-conception) were considered, along with one binomial outcome (first-service-conception risk). Multilevel models, which take into account the hierarchical nature of the data, were used for all analyses. For the overall measure of calving-to-conception interval, 86% of the variation resided at the lactation level with only 7, 6 and 2% at the cow, herd and regional levels, respectively. The proportions of variance at the herd and cow levels were slightly higher for the calving-to-first-service interval (12 and 9%, respectively), but for the other two parameters (first-service-conception risk and first-service-to-conception interval) >90% of the variation resided at the lactation level. For the three continuous dependent variables, comparison of results between models based on log-transformed data and Box-Cox-transformed data suggested that minor departures from the assumption of normality did not have a substantial effect on the variance estimates. For the binomial dependent variable, five different estimation procedures (penalised quasi-likelihood, Markov-Chain Monte Carlo, parametric and non-parametric bootstrap estimates and maximum likelihood) yielded substantially different results for the estimate of the cow-level variance. PMID:11448500
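    A minimal two-level random-intercept sketch (herd and lactation only, with simulated data) shows how variance proportions of this kind can be estimated with statsmodels; the paper's actual models include cow and region levels and a binomial outcome, which are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated calving-to-conception intervals (log-days) with herd-level and residual
# (lactation-level) variance components; all numbers are illustrative only.
rng = np.random.default_rng(0)
n_herds, n_per_herd = 50, 60
herd = np.repeat(np.arange(n_herds), n_per_herd)
herd_effect = rng.normal(0.0, 0.25, n_herds)[herd]        # herd-level sd 0.25
y = 4.7 + herd_effect + rng.normal(0.0, 0.9, herd.size)   # lactation-level sd 0.9
data = pd.DataFrame({"log_cci": y, "herd": herd})

# Two-level random-intercept model; the paper's analyses add cow and region levels.
fit = sm.MixedLM.from_formula("log_cci ~ 1", data, groups=data["herd"]).fit()
herd_var, resid_var = float(fit.cov_re.iloc[0, 0]), float(fit.scale)
print("herd share of variance: %.1f%%" % (100 * herd_var / (herd_var + resid_var)))
```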

  14. Development of empirical models for performance evaluation of UASB reactors treating poultry manure wastewater under different operational conditions.

    PubMed

    Yetilmezsoy, Kaan; Sakar, Suleyman

    2008-05-01

    A nonlinear modeling study was carried out to evaluate the performance of UASB reactors treating poultry manure wastewater under different organic and hydraulic loading conditions. Two identical pilot-scale up-flow anaerobic sludge blanket (UASB) reactors (15.7 L) were run under mesophilic conditions (30-35 degrees C) in a temperature-controlled environment with three hydraulic retention times (theta) of 15.7, 12 and 8.0 days. Imposed volumetric organic loading rates (L(V)) ranged from 0.65 to 4.257 kg COD/(m(3) day). The pH of the feed varied between 6.68 and 7.82. The hydraulic loading rates (L(H)) were controlled between 0.105 and 0.21 m(3)/(m(2) day). The daily biogas production rates ranged between 4.2 and 29.4 L/day. High volumetric COD removal rates (R(V)) ranging from 0.546 to 3.779 kg COD(removed)/(m(3) day) were achieved. On the basis of the experimental results, two empirical models, with satisfactory correlation coefficients of about 0.9954 and 0.9416, were developed to predict daily biogas production (Q(g)) and effluent COD concentration (S(e)), respectively. Findings of this modeling study showed that optimal COD removals ranging from 86.3% to 90.6% were predicted with HRTs of 7.9, 9.5, 11.2, 12.6, 13.7 and 14.3 days, and L(V) of 1.27, 1.58, 1.78, 1.99, 2.20 and 2.45 kg COD/(m(3) day) for the corresponding influent substrate concentrations (S(i)) of 10,000, 15,000, 20,000, 25,000, 30,000 and 35,000 mg/L, respectively. PMID:17913349
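    As an illustration of fitting a nonlinear empirical model of this kind, the sketch below fits a hypothetical saturating relation between organic loading rate and biogas production with scipy's curve_fit; the functional form and data points are invented and are not the paper's equations.

```python
import numpy as np
from scipy.optimize import curve_fit

def biogas_model(lv, a, b):
    """Hypothetical saturating form Qg = a * Lv / (b + Lv); not the paper's equation."""
    return a * lv / (b + lv)

# Hypothetical (organic loading rate, biogas production) pairs
lv = np.array([0.65, 1.2, 1.8, 2.5, 3.2, 4.25])       # kg COD / (m^3 day)
qg = np.array([8.9, 14.3, 18.8, 22.7, 25.8, 29.3])    # L/day
params, _ = curve_fit(biogas_model, lv, qg, p0=[40.0, 3.0])
pred = biogas_model(lv, *params)
print("fitted a, b:", np.round(params, 2),
      "| correlation with data:", round(float(np.corrcoef(qg, pred)[0, 1]), 4))
```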

  15. EVALUATING THE PERFORMANCE OF REGIONAL-SCALE PHOTOCHEMICAL MODELING SYSTEMS: PART I--METEOROLOGICAL PREDICTIONS. (R825260)

    EPA Science Inventory

    In this study, the concept of scale analysis is applied to evaluate two state-of-science meteorological models, namely MM5 and RAMS3b, currently being used to drive regional-scale air quality models. To this end, seasonal time series of observations and predictions for temperatur...

  16. Hydrologic Evaluation of Landfill Performance (HELP) Model: B (Set Includes, A- User's Guide for Version 3 w/disks, B-Engineering Documentation for Version 3

    EPA Science Inventory

    The Hydrologic Evaluation of Landfill Performance (HELP) computer program is a quasi-two-dimensional hydrologic model of water movement across, into, through and out of landfills. The model accepts weather, soil and design data. Landfill systems including various combinations o...

  17. Evaluation of performance of a BLSS model in long-term operation in dynamic and steady states

    NASA Astrophysics Data System (ADS)

    Gros, Jean-Bernard; Tikhomirov, Alex; Ushakova, Sofya; Velitchko, Vladimir; Tikhomirova, Natalia; Lasseur, Christophe

    The performance of a BLSS model, including higher plants for food production and biodegradation of human waste, was evaluated in long-term operation in dynamic and steady states. The model system was conceived to supply vegetarian food and oxygen to 0.07 human. The following data were obtained under steady-state operating conditions. Average rates of wheat, chufa, radish, lettuce and Salicornia edible biomass accumulation were 8.7, 5.5, 0.6, 0.6 and 2.5 g per day, respectively. Thus, to mimic the consumption of vegetarian edible biomass by a human, it was necessary to withdraw 17.9 g/d from the total mass exchange. Simultaneously, human mineralized exometabolites (artificial mineralized urine, AMU) in the amount of approximately 7% of a daily norm were introduced into the nutrient solution for irrigation of the plants cultivated on a neutral substrate (expanded clay aggregate). The estimated value of 5.8 g/d of wheat and Salicornia inedible biomass was introduced in the soil-like substrate (SLS) to fully meet the plants' nitrogen needs. The rest of the wheat and Salicornia inedible biomass, 5.7 g/d, was stored. Thus in all, 23.6 g of vegetarian dry matter had been stored. Assuming the edible biomass is eaten by the human, the closure coefficient of the vegetarian biomass inclusion into matter recycling amounted to 88%. The analysis of the long-term model operation showed that the main factors limiting the increase of recycling processes were the following: a) partly unbalanced mineral composition of daily human waste relative to the daily needs of the plants cultivated in the system: thus, when fully satisfied with respect to nitrogen, the plants experienced a lack of macro elements such as P, Mg and Ca by more than 50%; b) partly unbalanced mineral composition of the edible biomass of the plants cultivated in the SLS relative to that of the inedible biomass of the plants cultivated by the hydroponic method on neutral substrate introduced in the SLS; c) accumulation of

  18. Evaluation of the Mesoscale Meteorological Model (MM5)-Community Multi-Scale Air Quality Model (CMAQ) performance in hindcast and forecast of ground-level ozone.

    PubMed

    Nghiem, Le Hoang; Kim Oanh, Nguyen Thi

    2008-10-01

    This paper presents the first attempt to apply the Mesoscale Meteorological Model (MM5)-Community Multi-Scale Air Quality Model (CMAQ) model system to simulate ground-level ozone (O3) over the continental Southeast Asia (CSEA) region for both hindcast and forecast purposes. Hindcast simulation was done over the CSEA domain for two historical O3 episodes, January 26-29, 2004 (January episode, northeast monsoon) and March 24-26, 2004 (March episode, southwest monsoon). An experimental forecast was done for next-day hourly O3 during January 2006 over the central part of Thailand (CENTHAI). Available data from 20 ambient monitoring stations in Thailand and 3 stations in Ho Chi Minh City, Vietnam, were used for the episode analysis and for the model performance evaluation. The year 2000 anthropogenic emission inventory prepared by the Center for Global and Regional Environmental Research at the University of Iowa was projected to the simulation year on the basis of the regional average economic growth rate. Hourly emission in urban areas was prepared using ambient carbon monoxide concentration as a surrogate for the emission intensity. Biogenic emissions were estimated based on data from the Global Emissions Inventory Activity. Hindcast simulations (CSEA) were performed at 0.5 degree x 0.5 degree resolution, whereas forecast simulations (CENTHAI) were done with 0.1 degree x 0.1 degree hourly emission input data. MM5-CMAQ model system performance during the selected episodes satisfactorily met U.S. Environmental Protection Agency criteria for O3 for most simulated days. The experimental forecast for next-day hourly O3 in January 2006 yielded promising results. Modeled plumes of ozone in both hindcast and forecast cases agreed with the main wind fields and extended over considerable downwind distances from large urban areas. PMID:18939781

  19. Class diagram based evaluation of software performance

    NASA Astrophysics Data System (ADS)

    Pham, Huong V.; Nguyen, Binh N.

    2013-03-01

    The evaluation of software performance in the early stages of the software life cycle is important and has been widely studied. In software model specification, the class diagram is an important object-oriented specification model. Measures based on a class diagram have been widely studied to evaluate software quality attributes such as complexity, maintainability and reusability. However, software performance evaluation based on the class model has not been widely studied, especially for the object-oriented design of embedded software. Therefore, in this paper we propose a new approach to directly evaluate software performance based on class diagrams. From a class diagram, we determine the parameters used to evaluate and build formulas for measures such as Size of Class Variables, Size of Class Methods, Size of Instance Variables and Size of Instance Methods. Then, we analyse the dependence of performance on these measures and build a performance evaluation function from the class diagram. Thereby we can choose the best class diagram based on this evaluation function.
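    As a loose illustration of size measures of this kind, the sketch below counts class-level methods and variables by introspecting a Python class; the paper itself derives its measures from UML class diagrams, not source code.

```python
import inspect

def class_size_measures(cls):
    """Illustrative size measures in the spirit of the paper's metrics
    (Size of Class Methods, Size of Class Variables), computed here by
    introspecting a Python class rather than a UML class diagram."""
    members = inspect.getmembers(cls)
    methods = [name for name, m in members
               if inspect.isfunction(m) and not name.startswith("__")]
    class_vars = [name for name, m in members
                  if not callable(m) and not name.startswith("__")]
    return {"size_of_class_methods": len(methods),
            "size_of_class_variables": len(class_vars)}

class SensorNode:
    node_type = "acoustic"        # class variable
    max_range_m = 150             # class variable

    def __init__(self, node_id):
        self.node_id = node_id    # instance variable (not counted by class-level introspection)

    def detect(self, signal):
        return signal > 0.5

print(class_size_measures(SensorNode))
# {'size_of_class_methods': 1, 'size_of_class_variables': 2}
```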

  20. Speculations on Performance Models.

    ERIC Educational Resources Information Center

    Fromkin, Victoria

    1968-01-01

    According to the author, competence and performance and their interrelationships are the concern of linguistics. Performance models must: (1) be based on physical data of speech; (2) describe the phenomena under investigation; (3) predict events which are confirmed by experiment; (4) suggest causal relationships by identifying necessary and…

  1. Performance and evaluation of real-time multicomputer control systems

    NASA Technical Reports Server (NTRS)

    Shin, K. G.

    1983-01-01

    New performance measures, detailed examples, modeling of error detection process, performance evaluation of rollback recovery methods, experiments on FTMP, and optimal size of an NMR cluster are discussed.

  2. Using the coupled wake boundary layer model to evaluate the effect of turbulence intensity on wind farm performance

    NASA Astrophysics Data System (ADS)

    Stevens, Richard J. A. M.; Gayme, Dennice; Meneveau, Charles

    2015-06-01

    We use the recently introduced coupled wake boundary layer (CWBL) model to predict the effect of turbulence intensity on the performance of a wind farm. The CWBL model combines a standard wake model with a “top-down” approach to obtain improved predictions of the power output compared to a stand-alone wake model. Here we compare the CWBL model results for different turbulence intensities with the Horns Rev field measurements of Hansen et al., Wind Energy 15, 183-196 (2012). We show that the main trends as a function of the turbulence intensity are captured very well by the model, and we discuss differences between the field measurements and model results based on comparisons with LES results from Wu and Porté-Agel, Renewable Energy 75, 945-955 (2015).
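    One ingredient of a standard wake model that such a coupled approach builds on is the Jensen (top-hat) velocity-deficit formula, in which the wake expansion coefficient grows with turbulence intensity; the sketch below uses illustrative parameter values, not those of the paper.

```python
import numpy as np

def jensen_deficit(x, ct, rotor_diameter, k_wake):
    """Velocity deficit a distance x downstream of a turbine in the Jensen
    (top-hat) wake model: du/U = (1 - sqrt(1 - Ct)) / (1 + 2*k*x/D)^2.
    The wake expansion coefficient k grows with ambient turbulence intensity."""
    return (1.0 - np.sqrt(1.0 - ct)) / (1.0 + 2.0 * k_wake * x / rotor_diameter) ** 2

# Deficit seven diameters downstream for low- vs high-turbulence wake expansion
D, Ct = 80.0, 0.8
for k in (0.04, 0.075):
    print("k =", k, "-> deficit =", round(float(jensen_deficit(7 * D, Ct, D, k)), 3))
```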

  3. VENTURI SCRUBBER PERFORMANCE MODEL

    EPA Science Inventory

    The paper presents a new model for predicting the particle collection performance of venturi scrubbers. It assumes that particles are collected by atomized liquid only in the throat section. The particle collection mechanism is inertial impaction, and the model uses a single drop...

  4. Energy performance evaluation of AAC

    NASA Astrophysics Data System (ADS)

    Aybek, Hulya

    The U.S. building industry constitutes the largest consumer of energy (i.e., electricity, natural gas, petroleum) in the world. The building sector uses almost 41 percent of the primary energy and approximately 72 percent of the available electricity in the United States. As global energy-generating resources are being depleted at exponential rates, the amount of energy consumed and wasted cannot be ignored. Professionals concerned about the environment have placed a high priority on finding solutions that reduce energy consumption while maintaining occupant comfort. Sustainable design and the judicious combination of building materials comprise one solution to this problem. A future including sustainable energy may result from using energy simulation software to accurately estimate energy consumption and from applying building materials that achieve the potential results derived through simulation analysis. Energy-modeling tools assist professionals with making informed decisions about energy performance during the early planning phases of a design project, such as determining the most advantageous combination of building materials, choosing mechanical systems, and determining building orientation on the site. By implementing energy simulation software to estimate the effect of these factors on the energy consumption of a building, designers can make adjustments to their designs during the design phase when the effect on cost is minimal. The primary objective of this research consisted of identifying a method with which to properly select energy-efficient building materials and involved evaluating the potential of these materials to earn LEED credits when properly applied to a structure. In addition, this objective included establishing a framework that provides suggestions for improvements to currently available simulation software that enhance the viability of the estimates concerning energy efficiency and the achievements of LEED credits. The primary objective

  5. GUIDANCE FOR THE PERFORMANCE EVALUATION OF THREE-DIMENSIONAL AIR QUALITY MODELING SYSTEMS FOR PARTICULATE MATTER AND VISIBILITY

    EPA Science Inventory

    The National Ambient Air Quality Standards for particulate matter (PM) and the federal regional haze regulations place some emphasis on the assessment of fine particle (PM2.5) concentrations. Current air quality models need to be improved and evaluated against observations to a...

  6. Repository Integration Program: RIP performance assessment and strategy evaluation model theory manual and user's guide

    SciTech Connect

    1995-11-01

    This report describes the theory and capabilities of RIP (Repository Integration Program). RIP is a powerful and flexible computational tool for carrying out probabilistic integrated total system performance assessments for geologic repositories. The primary purpose of RIP is to provide a management tool for guiding system design and site characterization. In addition, the performance assessment model (and the process of eliciting model input) can act as a mechanism for integrating the large amount of available information into a meaningful whole (in a sense, allowing one to keep the "big picture" and the ultimate aims of the project clearly in focus). Such an integration is useful both for project managers and project scientists. RIP is based on a "top down" approach to performance assessment that concentrates on the integration of the entire system, and utilizes relatively high-level descriptive models and parameters. The key point in the application of such a "top down" approach is that the simplified models and associated high-level parameters must incorporate an accurate representation of their uncertainty. RIP is designed in a very flexible manner such that details can be readily added to various components of the model without modifying the computer code. Uncertainty is also handled in a very flexible manner, and both parameter and model (process) uncertainty can be explicitly considered. Uncertainty is propagated through the integrated PA model using an enhanced Monte Carlo method. RIP must rely heavily on subjective assessment (expert opinion) for much of its input. The process of eliciting the high-level input parameters required for RIP is critical to its successful application. As a result, in order for any project to successfully apply a tool such as RIP, an enormous amount of communication and cooperation must exist between the data collectors, the process modelers, and the performance assessment modelers.

  7. ESMValTool (v1.0) - a community diagnostic and performance metrics tool for routine evaluation of Earth system models in CMIP

    NASA Astrophysics Data System (ADS)

    Eyring, Veronika; Righi, Mattia; Lauer, Axel; Evaldsson, Martin; Wenzel, Sabrina; Jones, Colin; Anav, Alessandro; Andrews, Oliver; Cionni, Irene; Davin, Edouard L.; Deser, Clara; Ehbrecht, Carsten; Friedlingstein, Pierre; Gleckler, Peter; Gottschaldt, Klaus-Dirk; Hagemann, Stefan; Juckes, Martin; Kindermann, Stephan; Krasting, John; Kunert, Dominik; Levine, Richard; Loew, Alexander; Mäkelä, Jarmo; Martin, Gill; Mason, Erik; Phillips, Adam S.; Read, Simon; Rio, Catherine; Roehrig, Romain; Senftleben, Daniel; Sterl, Andreas; van Ulft, Lambertus H.; Walton, Jeremy; Wang, Shiyu; Williams, Keith D.

    2016-05-01

    A community diagnostics and performance metrics tool for the evaluation of Earth system models (ESMs) has been developed that allows for routine comparison of single or multiple models, either against predecessor versions or against observations. The priority of the effort so far has been to target specific scientific themes focusing on selected essential climate variables (ECVs), a range of known systematic biases common to ESMs, such as coupled tropical climate variability, monsoons, Southern Ocean processes, continental dry biases, and soil hydrology-climate interactions, as well as atmospheric CO2 budgets, tropospheric and stratospheric ozone, and tropospheric aerosols. The tool is being developed in such a way that additional analyses can easily be added. A set of standard namelists for each scientific topic reproduces specific sets of diagnostics or performance metrics that have demonstrated their importance in ESM evaluation in the peer-reviewed literature. The Earth System Model Evaluation Tool (ESMValTool) is a community effort open to both users and developers encouraging open exchange of diagnostic source code and evaluation results from the Coupled Model Intercomparison Project (CMIP) ensemble. This will facilitate and improve ESM evaluation beyond the state-of-the-art and aims at supporting such activities within CMIP and at individual modelling centres. Ultimately, we envisage running the ESMValTool alongside the Earth System Grid Federation (ESGF) as part of a more routine evaluation of CMIP model simulations while utilizing observations available in standard formats (obs4MIPs) or provided by the user.

  8. ESMValTool (v1.0) - a community diagnostic and performance metrics tool for routine evaluation of Earth System Models in CMIP

    NASA Astrophysics Data System (ADS)

    Eyring, V.; Righi, M.; Evaldsson, M.; Lauer, A.; Wenzel, S.; Jones, C.; Anav, A.; Andrews, O.; Cionni, I.; Davin, E. L.; Deser, C.; Ehbrecht, C.; Friedlingstein, P.; Gleckler, P.; Gottschaldt, K.-D.; Hagemann, S.; Juckes, M.; Kindermann, S.; Krasting, J.; Kunert, D.; Levine, R.; Loew, A.; Mäkelä, J.; Martin, G.; Mason, E.; Phillips, A.; Read, S.; Rio, C.; Roehrig, R.; Senftleben, D.; Sterl, A.; van Ulft, L. H.; Walton, J.; Wang, S.; Williams, K. D.

    2015-09-01

    A community diagnostics and performance metrics tool for the evaluation of Earth System Models (ESMs) has been developed that allows for routine comparison of single or multiple models, either against predecessor versions or against observations. The priority of the effort so far has been to target specific scientific themes focusing on selected Essential Climate Variables (ECVs), a range of known systematic biases common to ESMs, such as coupled tropical climate variability, monsoons, Southern Ocean processes, continental dry biases and soil hydrology-climate interactions, as well as atmospheric CO2 budgets, tropospheric and stratospheric ozone, and tropospheric aerosols. The tool is being developed in such a way that additional analyses can easily be added. A set of standard namelists for each scientific topic reproduces specific sets of diagnostics or performance metrics that have demonstrated their importance in ESM evaluation in the peer-reviewed literature. The Earth System Model Evaluation Tool (ESMValTool) is a community effort open to both users and developers encouraging open exchange of diagnostic source code and evaluation results from the CMIP ensemble. This will facilitate and improve ESM evaluation beyond the state-of-the-art and aims at supporting such activities within the Coupled Model Intercomparison Project (CMIP) and at individual modelling centres. Ultimately, we envisage running the ESMValTool alongside the Earth System Grid Federation (ESGF) as part of a more routine evaluation of CMIP model simulations while utilizing observations available in standard formats (obs4MIPs) or provided by the user.

  9. Evaluating Nursing Students' Clinical Performance.

    PubMed

    Koharchik, Linda; Weideman, Yvonne L; Walters, Cynthia A; Hardy, Elaine

    2015-10-01

    This article is one in a series on the roles of adjunct clinical faculty and preceptors, who teach nursing students to apply knowledge in clinical settings. This article describes aspects of the student evaluation process, which should involve regular feedback and clearly stated performance expectations. PMID:26402292

  10. Physics and Performance Evaluation Group

    SciTech Connect

    Donini, Andrea; Pascoli, Silvia; Winter, Walter; Yasuda, Osamu

    2008-02-21

    We summarize the objectives and results of the "international scoping study of a future neutrino factory and superbeam facility" (ISS) physics working group. Furthermore, we discuss how the ISS study should develop into a neutrino factory design study (IDS-NF) from the point of view of physics and performance evaluation.

  11. Composite Load Model Evaluation

    SciTech Connect

    Lu, Ning; Qiao, Hong

    2007-09-30

    The WECC load modeling task force has dedicated its effort in the past few years to developing a composite load model that can represent the behaviors of different end-user components. The modeling structure of the composite load model is recommended by the WECC load modeling task force. GE Energy has implemented this composite load model with a new function, CMPLDW, in its power system simulation software package, PSLF. For the last several years, Bonneville Power Administration (BPA) has taken the lead and collaborated with GE Energy to develop the new composite load model. Pacific Northwest National Laboratory (PNNL) and BPA joined forces to evaluate the CMPLDW and test its parameter settings to make sure that: • the model initializes properly, • all the parameter settings are functioning, and • the simulation results are as expected. The PNNL effort focused on testing the CMPLDW in a 4-bus system. Exhaustive testing of each parameter setting was performed to guarantee that each setting works. This report is a summary of the PNNL testing results and conclusions.

  12. Ion thruster performance model

    NASA Technical Reports Server (NTRS)

    Brophy, J. R.

    1984-01-01

    A model of ion thruster performance is developed for high flux density, cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature. The model and experiments indicate that thruster performance may be described in terms of only four thruster configuration dependent parameters and two operating parameters. The model also suggests that improved performance should be exhibited by thruster designs which extract a large fraction of the ions produced in the discharge chamber, which have good primary electron and neutral atom containment and which operate at high propellant flow rates.
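    A hedged reading of the formulation described above is that the energy cost per beam ion equals the plasma ion energy cost divided by the extracted ion fraction; the one-line sketch below illustrates only this simplified relation, not the paper's full set of equations.

```python
def beam_ion_energy_cost(plasma_ion_cost_eV, extracted_fraction):
    """If producing one plasma ion costs eps_p (eV) on average and only a fraction
    f_B of the produced ions is extracted into the beam, the energy cost per beam
    ion is eps_p / f_B. Simplified illustration only."""
    return plasma_ion_cost_eV / extracted_fraction

# Example: 180 eV per plasma ion and 50% extraction -> 360 eV per beam ion
print(beam_ion_energy_cost(180.0, 0.5))
```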

  13. Evaluation of the performance of four chemical transport models in predicting the aerosol chemical composition in Europe in 2005

    NASA Astrophysics Data System (ADS)

    Prank, Marje; Sofiev, Mikhail; Tsyro, Svetlana; Hendriks, Carlijn; Semeena, Valiyaveetil; Vazhappilly Francis, Xavier; Butler, Tim; Denier van der Gon, Hugo; Friedrich, Rainer; Hendricks, Johannes; Kong, Xin; Lawrence, Mark; Righi, Mattia; Samaras, Zissis; Sausen, Robert; Kukkonen, Jaakko; Sokhi, Ranjeet

    2016-05-01

    Four regional chemistry transport models were applied to simulate the concentration and composition of particulate matter (PM) in Europe for 2005 at a horizontal resolution of ~ 20 km. The modelled concentrations were compared with measurements of PM chemical composition by the European Monitoring and Evaluation Programme (EMEP) monitoring network. All models systematically underestimated PM10 and PM2.5 by 10-60 %, depending on the model and the season of the year, when the calculated dry PM mass was compared with the measurements. The average water content at laboratory conditions was estimated between 5 and 20 % for PM2.5 and between 10 and 25 % for PM10. For the majority of the PM chemical components, the relative underestimation was smaller than for total PM, the exceptions being carbonaceous particles and mineral dust. Some species, such as sea salt and NO3-, were overpredicted by the models. There were notable differences between the models' predictions of the seasonal variations of PM, mainly attributable to different treatments or omission of some source categories and aerosol processes. Benzo(a)pyrene concentrations were overestimated by all the models over the whole year. The study stresses the importance of improving the models' skill in simulating mineral dust and carbonaceous compounds, the necessity of high-quality emission data for wildland fires, and the need for an explicit consideration of aerosol water content in model-measurement comparisons.

  14. The Integrated Farm System Model: software for evaluating the performance, environmental impact and economics of farming systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Integrated Farm System Model (IFSM) is a process level simulation of the performance of crop, beef and dairy farming systems that estimates major environmental impacts, production costs, and farm profitability. The IFSM simulates all major farm components on a process level. This enables the int...

  15. USE OF A STOCHASTIC MODEL TO EVALUATE UNCERTAINTY IN A PERFORMANCE ASSESSMENT AT THE SAVANNAH RIVER SITE - 8120

    SciTech Connect

    Hiergesell, R; Glenn Taylor, G

    2008-01-21

    A significant effort has recently been initiated to address probabilistic issues within radiological Performance Assessments (PAs) conducted at the Savannah River Site (SRS). This effort is considered to be part of a continual process, as is the program of PA analysis and maintenance across the Department of Energy (DOE) complex. At SRS, findings from the initial probabilistic analysis of the Slit Trenches in the E-Area PA were built upon and improved in the later development of the probabilistic model for the F-Area Tank Farm. Within the PA studies conducted at SRS, the initial uncertainty analysis effort focused on the Slit Trenches as part of the E-Area PA. Specifically, a probabilistic model was developed for Slit Trench 5 within the E-Area. This model was first run in deterministic mode to compare its results against those of the 2-D and 3-D deterministic models. Then, utilizing the PDFs, the model was used to perform multiple realizations and produce probabilistic results. Later, a second probabilistic sensitivity and uncertainty analysis was undertaken for the F-Area Tank Farm PA; this effort is currently underway. Many improvements were made in how the flow and transport processes were incorporated within this model.

  16. Possible future projection of Indian Summer Monsoon Rainfall (ISMR) with the evaluation of model performance in Coupled Model Inter-comparison Project Phase 5 (CMIP5)

    NASA Astrophysics Data System (ADS)

    Parth Sarthi, P.; Ghosh, Soumik; Kumar, Praveen

    2015-06-01

    The Indian Summer Monsoon (ISM) is crucial for agriculture and water resources in India. The large spatial and temporal variability of Indian Summer Monsoon Rainfall (ISMR) leads to floods and droughts, especially over the northern plains of India, so quantitative and qualitative assessment of future projected rainfall is important for policy frameworks. Evaluation of model performance in simulating rainfall and wind circulation in the Historical experiment (1961-2005), and of the future projected changes under RCP4.5 and RCP8.5 (2006-2050) in CMIP5, is carried out. In the Historical experiment, the model-simulated rainfall is validated against observed rainfall from IMD (1961-2005) and GPCP (1979-2005), and only six models, BCC-CSM1.1(m), CCSM4, CESM1(BGC), CESM1(CAM5), CESM1(WACCM) and MPI-ESM-MR, are found suitable in capturing ISMR and the JJAS wind circulation at 850 and 200 hPa as in the NCEP reanalysis, which shows anticyclonic circulation over the Arabian Sea at 850 hPa and cyclonic circulation at 200 hPa, along with excess and deficit rainfall over the monsoon regions of NWI, NEI, WCI, CNI and PI at the 99% and 95% confidence levels. The future projected change of the JJAS wind shows anticyclonic circulation over the Arabian Sea at 850 hPa and cyclonic circulation around 40°N, 70°E-90°E at 200 hPa, which may be a possible cause of changes in JJAS rainfall over Indian regions.

  17. Performance Criteria and Evaluation System

    1992-06-18

    The Performance Criteria and Evaluation System (PCES) was developed in order to make a data base of criteria accessible to radiation safety staff. The criteria included in the package are applicable to occupational radiation safety at DOE reactor and nonreactor nuclear facilities, but any data base of criteria may be created using the Criterion Data Base Utility (CDU). PCES assists personnel in carrying out oversight, line, and support activities.

  18. A novel hybrid MCDM model for performance evaluation of research and technology organizations based on BSC approach.

    PubMed

    Varmazyar, Mohsen; Dehghanbaghi, Maryam; Afkhami, Mehdi

    2016-10-01

    The Balanced Scorecard (BSC) is a strategic evaluation tool that uses both financial and non-financial indicators to determine the business performance of organizations or companies. In this paper, a new integrated approach based on the Balanced Scorecard (BSC) and multi-criteria decision making (MCDM) methods is proposed to evaluate the performance of the research centers of a research and technology organization (RTO) in Iran. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method is employed to reflect the interdependencies among BSC perspectives. Then, the Analytic Network Process (ANP) is utilized to weight the indices influencing the considered problem. In the next step, we apply four MCDM methods, namely Additive Ratio Assessment (ARAS), Complex Proportional Assessment (COPRAS), Multi-Objective Optimization by Ratio Analysis (MOORA), and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), for ranking the alternatives. Finally, the utility interval technique is applied to combine the ranking results of the MCDM methods. Weighted utility intervals are computed by constructing a correlation matrix between the ranking methods. A real case is presented to show the efficacy of the proposed approach. PMID:27371786
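    TOPSIS, one of the four ranking methods combined in the paper, can be sketched in a few lines; the decision matrix, weights and benefit flags below are hypothetical and do not come from the case study.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    decision_matrix : (n_alternatives, n_criteria)
    weights         : criterion weights summing to 1 (e.g. derived from ANP)
    benefit         : boolean per criterion, True if larger is better."""
    X = np.asarray(decision_matrix, float)
    w = np.asarray(weights, float)
    norm = X / np.sqrt((X ** 2).sum(axis=0))          # vector normalisation
    V = norm * w                                      # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(-closeness), closeness          # best alternative first

# Three research centres scored on four BSC-style criteria (hypothetical data)
scores = [[7, 9, 6, 8],
          [8, 7, 7, 6],
          [6, 8, 9, 7]]
rank, c = topsis(scores, weights=[0.3, 0.3, 0.2, 0.2], benefit=[True, True, True, True])
print("ranking (best first):", rank, "| closeness:", np.round(c, 3))
```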

  19. Propagation modeling and evaluation of communication system performance in nuclear environments. Final report 11 Nov 76-29 Feb 80

    SciTech Connect

    Rino, C.L.

    1980-02-29

    This report summarizes propagation modeling work for predicting communication-system performance in disturbed nuclear environments. Simple formulas are developed that characterize the onset of scintillation, the coherence time of the scintillation, the coherence bandwidth loss and associated delay jitter, plus the angle-of-arrival scintillation for radar applications. The calculations are based on a power-law phase-screen model, and they fully accommodate a varying spectral index and arbitrary propagation angles relative to the principal irregularity axis. In a power-law environment, the signal structure is critically dependent upon the power-law index, particularly under strong-scatter conditions.

  20. Performance evaluation of a dataflow architecture

    SciTech Connect

    Ghosal, D. . Computer Science Center); Bhuyan, L.N. . Dept. of Computer Science)

    1990-05-01

    This paper deals with formulation and validation of an analytical approach for the performance evaluation of the Manchester dataflow computer. The analytical approach is based on closed queuing network models. The average parallelism of the dataflow graph being executed on the dataflow architecture is shown to be related to the population of the closed network. The model of the dataflow computer has been validated by comparing the analytical results to those obtained from the prototype Manchester dataflow computer and our simulation. The bottleneck centers in the prototype machine have been identified through the model and various architectural modifications have been investigated from performance considerations.
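    Closed queuing network models of this kind are commonly solved with exact mean value analysis (MVA); the sketch below is a generic single-class MVA, with hypothetical service demands rather than the Manchester machine's parameters.

```python
def mva(service_demands, population):
    """Exact mean value analysis (MVA) for a single-class closed queuing network
    of queueing service centres.

    service_demands : per-centre service demand D_k = visit ratio * service time (s)
    population      : number of circulating jobs (the average parallelism of the
                      executed graph, in the paper's interpretation)
    """
    queue_len = [0.0] * len(service_demands)
    for n in range(1, population + 1):
        # Residence time at each centre when n jobs circulate
        residence = [d * (1.0 + q) for d, q in zip(service_demands, queue_len)]
        throughput = n / sum(residence)
        queue_len = [throughput * r for r in residence]
    return throughput, queue_len

# Hypothetical demands for three service centres and 12 circulating jobs
X, Q = mva([0.02, 0.015, 0.05], population=12)
print(f"throughput = {X:.1f} jobs/s, largest mean queue length = {max(Q):.2f}")
```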

  1. NCCDS performance model

    NASA Technical Reports Server (NTRS)

    Richmond, Eric; Vallone, Antonio

    1994-01-01

    The NASA/GSFC Network Control Center (NCC) provides communication services between ground facilities and spacecraft missions in near-earth orbit that use the Space Network. The NCC Data System (NCCDS) provides computational support and is expected to be highly utilized by the service requests needed in the future years. A performance model of the NCCDS has been developed to assess the future workload and possible enhancements. The model computes message volumes from mission request profiles and SN resource levels and generates the loads for NCCDS configurations as a function of operational scenarios and processing activities. The model has been calibrated using the results of benchmarks performed on the operational NCCDS facility and used to assess some future SN service request scenarios.

  2. EVALUATION OF THE COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODEL VERSION 4.5: UNCERTAINTIES AND SENSITIVITIES IMPACTING MODEL PERFORMANCE: PART II - PARTICULATE MATTER

    EPA Science Inventory

    This paper presents an analysis of the CMAQ v4.5 model performance for particulate matter and its chemical components for the simulated year 2001. This is part two of a two-part series of papers that examines the model performance of CMAQ v4.5.

  3. Temperature-Dependent Modeling and Performance Evaluation of Multi-Walled CNT and Single-Walled CNT as Global Interconnects

    NASA Astrophysics Data System (ADS)

    Singh, Karmjit; Raj, Balwinder

    2015-12-01

    The influence of temperature on multi-walled carbon nanotube (MWCNT) interconnects has been studied. A temperature-dependent equivalent circuit model is presented for the impedance parameters of MWCNT bundle interconnects that captures various electron-phonon scattering mechanisms as a function of temperature. To estimate the performance of MWCNT bundle interconnects, the signal delay, power dissipation and power delay product (PDP) were simulated based on the temperature-dependent model, which improves the delay, power and PDP estimation accuracy compared to a temperature-independent model. The results revealed that the power delay product of MWCNT bundle interconnects increases with increasing temperature from 200 K to 450 K for three different technology nodes, i.e., 32 nm, 22 nm and 16 nm, based upon a 1000-μm interconnect length. A similar analysis was performed for single-walled carbon nanotube (SWCNT) bundle interconnects, and the comparison with MWCNT bundle interconnects indicates that their delay, power and PDP also increase with increasing temperature from 200 K to 450 K. The MWCNT bundle interconnects gave better performance in terms of delay, power and PDP than the SWCNT bundle interconnects.

  4. Performance Evaluation of K-DEMO Cable-in-conduit Conductors Using the Florida Electro-Mechanical Cable Model

    SciTech Connect

    Zhai, Yuhu

    2013-07-16

    The United States ITER Project Office (USIPO) is responsible for the design of the Toroidal Field (TF) insert coil, which will allow validation of the performance of significant lengths of the conductors to be used in the full-scale TF coils under relevant conditions of field, current density and mechanical strain. The Japan Atomic Energy Agency (JAEA) will build the TF insert, which will be tested at the Central Solenoid Model Coil (CSMC) Test Facility at JAEA, Naka, Japan. A three-dimensional mathematical model of the TF insert was created based on the initial design geometry data and includes the following features: orthotropic material properties of the superconductor material and insulation; external magnetic field from the CSMC; temperature-dependent properties of the materials; and pre-compression and plastic deformation in the lap joint. Major geometrical characteristics of the design were preserved, including the cable jacket and insulation shape, mandrel outline, and support clamps and spacers. The model is capable of performing coupled structural, thermal and electromagnetic analysis using ANSYS. Numerical simulations were performed for room-temperature conditions, cool-down to 4 K, and the operating regime with 68 kA current at 11.8 T background field. The numerical simulations led to the final design of the coil, producing the required strain levels on the cable while simultaneously satisfying the ITER magnet structural design criteria.

  5. Airlift column photobioreactors for Porphyridium sp. culturing: Part II. verification of dynamic growth rate model for reactor performance evaluation.

    PubMed

    Luo, Hu-Ping; Al-Dahhan, Muthanna H

    2012-04-01

    A dynamic growth rate model has been developed to quantify the impact of hydrodynamics on the growth of photosynthetic microorganisms and to predict photobioreactor performance. Rigorous verification of such reactor models, however, is rare in the literature. In this part of the work, verification of the dynamic growth rate model developed in Luo and Al-Dahhan (2004) [Biotech Bioeng 85(4): 382-393] was attempted using the experimental results reported in Part I of this work and results from the literature. The irradiance distribution inside the studied reactor was also measured at different optical densities and successfully correlated by the Lambert-Beer law. When reliable hydrodynamic data were used, the dynamic growth rate model successfully predicted the algae's growth rate obtained in the experiments in both the low and high irradiance regimes, indicating the robustness of this model. The simulation results also indicate that the hydrodynamics of the real algae culturing system differs significantly from that of an air-water system, which underlines the importance of using reliable data input for the growth rate model. PMID:22068388
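    The Lambert-Beer correlation for irradiance attenuation mentioned above can be sketched directly; the extinction coefficient and biomass concentrations below are placeholders, not the values fitted in the paper.

```python
import numpy as np

def irradiance(i0, depth_cm, biomass_g_per_l, k_extinction=0.25):
    """Local irradiance inside the culture by the Lambert-Beer law:
    I(z) = I0 * exp(-k * X * z), with biomass concentration X and depth z.
    The extinction coefficient is a placeholder, not the paper's fitted value."""
    return i0 * np.exp(-k_extinction * biomass_g_per_l * depth_cm)

# Light profiles across a 5 cm light path at two optical densities
for biomass in (0.5, 2.0):                          # g dry weight per litre
    profile = irradiance(200.0, np.linspace(0.0, 5.0, 6), biomass)
    print(biomass, np.round(profile, 1))            # incident I0 = 200 (arbitrary units)
```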

  6. Evaluation of the performance of different atmospheric chemical transport models and inter-comparison of nitrogen and sulphur deposition estimates for the UK

    NASA Astrophysics Data System (ADS)

    Dore, A. J.; Carslaw, D. C.; Braban, C.; Cain, M.; Chemel, C.; Conolly, C.; Derwent, R. G.; Griffiths, S. J.; Hall, J.; Hayman, G.; Lawrence, S.; Metcalfe, S. E.; Redington, A.; Simpson, D.; Sutton, M. A.; Sutton, P.; Tang, Y. S.; Vieno, M.; Werner, M.; Whyatt, J. D.

    2015-10-01

    An evaluation has been made of a number of contrasting atmospheric chemical transport models, of varying complexity, applied to estimate sulphur and nitrogen deposition in the UK. The models were evaluated by comparison with annually averaged measurements of gas, aerosol and precipitation concentrations from the national monitoring networks. The models were evaluated in relation to performance criteria. They were generally able to satisfy a criterion of 'fitness for purpose' that at least 50% of modelled concentrations should be within a factor of two of measured values. The second criterion, that the magnitude of the normalised mean bias should be less than 20%, was not always satisfied. Considering known uncertainties in measurement techniques, this criterion may be too strict. Overall, simpler models were able to give a good representation of measured gas concentrations whilst the use of dynamic meteorology, and complex photo-chemical reactions resulted in a generally better representation of measured aerosol and precipitation concentrations by more complex models. The models were compared graphically by plotting maps and cross-country transects of wet and dry deposition as well as calculating budgets of total wet and dry deposition to the UK for sulphur, oxidised nitrogen and reduced nitrogen. The total deposition to the UK varied by ±22-36% amongst the different models depending on the deposition component. At a local scale estimates of both dry and wet deposition for individual 5 km × 5 km model grid squares were found to vary between the different models by up to a factor of 4.
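    The two performance criteria described (at least 50% of modelled values within a factor of two of measurements, and a normalised mean bias below 20%) are simple to compute; the sketch below uses hypothetical concentration data.

```python
import numpy as np

def fac2(modelled, observed):
    """Fraction of modelled values within a factor of two of the observations."""
    ratio = np.asarray(modelled, float) / np.asarray(observed, float)
    return float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))

def normalised_mean_bias(modelled, observed):
    """NMB = sum(M - O) / sum(O)."""
    m, o = np.asarray(modelled, float), np.asarray(observed, float)
    return float((m - o).sum() / o.sum())

# Hypothetical annual-mean concentrations at five monitoring sites
obs = [0.42, 0.55, 0.31, 0.75, 0.60]
mod = [0.50, 0.40, 0.70, 0.65, 0.58]
print("FAC2 =", fac2(mod, obs), "| NMB =", round(normalised_mean_bias(mod, obs), 3))
```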

  7. Performance Evaluation of Nano-JASMINE

    NASA Astrophysics Data System (ADS)

    Hatsutori, Y.; Kobayashi, Y.; Gouda, N.; Yano, T.; Murooka, J.; Niwa, Y.; Yamada, Y.

    2011-02-01

    We report the results of the performance evaluation of the first Japanese astrometry satellite, Nano-JASMINE. It is a very small satellite, weighing only 35 kg. It aims to carry out astrometric measurements of nearby bright stars (z ≤ 7.5 mag) with an accuracy of 3 milli-arcseconds. Nano-JASMINE will be launched by a Cyclone-4 rocket in August 2011 from Brazil. The project is currently in the process of evaluating the performance. A series of performance tests and numerical analyses were conducted. As a result, the engineering model (EM) of the telescope was measured to achieve diffraction-limited performance, confirming that it has sufficient performance for scientific astrometry.

  8. An evaluation of the performance of chemistry transport models - Part 2: Detailed comparison with two selected campaigns

    NASA Astrophysics Data System (ADS)

    Brunner, D.; Staehelin, J.; Rogers, H. L.; Köhler, M. O.; Pyle, J. A.; Hauglustaine, D. A.; Jourdain, L.; Berntsen, T. K.; Gauss, M.; Isaksen, I. S. A.; Meijer, E.; van Velthoven, P.; Pitari, G.; Mancini, E.; Grewe, V.; Sausen, R.

    2005-01-01

    This is the second part of a rigorous model evaluation study involving five global Chemistry-Transport and two Chemistry-Climate Models operated by different groups in Europe. Simulated trace gas fields were interpolated to the exact times and positions of the observations to account for the actual weather conditions and hence for the specific histories of the sampled air masses. In this part of the study we focus on a detailed comparison with two selected campaigns, PEM-Tropics A and SONEX, contrasting the clean environment of the tropical Pacific with the more polluted North Atlantic region. The study highlights the different strengths and weaknesses of the models in accurately simulating key processes in the UT/LS region including stratosphere-troposphere-exchange, rapid convective transport, lightning emissions, radical chemistry and ozone production. Model simulated Radon, which was used as an idealized tracer for continental influence, was occasionally much better correlated with measured CO than simulated CO pointing towards deficiencies in the used biomass burning emission fields. The abundance and variability of HOx radicals is in general well represented in the models as inferred directly from the comparison with measured OH and HO2 and indirectly from the comparison with hydrogen peroxide concentrations. Components of the NOy family such as PAN, HNO3 and NO were found to compare less favorably. Interestingly, models showing good agreement with observations in the case of PEM-Tropics A often failed in the case of SONEX and vice versa. A better description of NOx and NOy emissions, chemistry and sinks is thought to be key to future model improvements with respect to the representation of chemistry in the UT/LS region.

  9. An evaluation of the performance of chemistry transport models, Part 2: detailed comparison with two selected campaigns

    NASA Astrophysics Data System (ADS)

    Brunner, D.; Staehelin, J.; Rogers, H. L.; Köhler, M. O.; Pyle, J. A.; Hauglustaine, D. A.; Jourdain, L.; Berntsen, T. K.; Gauss, M.; Isaksen, I. S. A.; Meijer, E.;  van Velthoven, P.;  Pitari, G.;  Mancini, E.;  Grewe, V.;  Sausen, R.

    2004-11-01

    This is the second part of a rigorous model evaluation study involving five global Chemistry-Transport and two Chemistry-Climate Models operated by different groups in Europe. Simulated trace gas fields were interpolated to the exact times and positions of the observations to account for the actual weather conditions and hence for the specific histories of the sampled air masses. In this part of the study we focus on a detailed comparison with two selected campaigns, PEM-Tropics A and SONEX, contrasting the clean environment of the tropical Pacific with the more polluted North Atlantic region. The study highlights the different strengths and weaknesses of the models in accurately simulating key processes in the UT/LS region including stratosphere-troposphere-exchange, rapid convective transport, lightning emissions, radical chemistry and ozone production. Model simulated Radon, which was used as an idealized tracer for continental influence, was occasionally much better correlated with measured CO than simulated CO pointing towards deficiencies in the used biomass burning emission fields. The abundance and variability of HOx radicals is in general well represented in the models as inferred directly from the comparison with measured OH and HO2 and indirectly from the comparison with hydrogen peroxide concentrations. Components of the NOy family such as PAN, HNO3 and NO were found to compare less favorably. Interestingly, models showing good agreement with observations in the case of PEM-Tropics A often failed in the case of SONEX and vice versa. A better description of NOx and NOy emissions, chemistry and sinks is thought to be key to future model improvements with respect to the representation of chemistry in the UT/LS region.

  10. METAPHOR (version 1): Users guide. [performability modeling

    NASA Technical Reports Server (NTRS)

    Furchtgott, D. G.

    1979-01-01

    General information concerning METAPHOR, an interactive software package to facilitate performability modeling and evaluation, is presented. Example systems are studied and their performabilities are calculated. Each available METAPHOR command and array generator is described. Complete METAPHOR sessions are included.

  11. A new performance evaluation tool

    SciTech Connect

    Kindl, F.H.

    1996-12-31

    The paper describes a Steam Cycle Diagnostic Program (SCDP) that has been specifically designed to respond to the increasing need of electric power generators for periodic performance monitoring and quick identification of the causes of any observed increase in fuel consumption. There is a description of program objectives, modeling and test data inputs, results, underlying program logic, validation of program accuracy by comparison with acceptance-test-quality data, and examples of program usage.

  12. Evaluation testbed for ATD performance prediction (ETAPP)

    NASA Astrophysics Data System (ADS)

    Ralph, Scott K.; Eaton, Ross; Snorrason, Magnús; Irvine, John; Vanstone, Steve

    2007-04-01

    Automatic target detection (ATD) systems process imagery to detect and locate targets in support of a variety of military missions. Accurate prediction of ATD performance would assist in system design and trade studies, collection management, and mission planning. A need exists for ATD performance prediction based exclusively on information available from the imagery and its associated metadata. We present a predictor based on image measures quantifying the intrinsic ATD difficulty of an image. The modeling effort consists of two phases: a learning phase, where image measures are computed for a set of test images, the ATD performance is measured, and a prediction model is developed; and a second phase to test and validate performance prediction. The learning phase produces a mapping, valid across various ATR algorithms, which is even applicable when no image truth is available (e.g., when evaluating denied area imagery). The testbed has plug-in capability to allow rapid evaluation of new ATR algorithms. The image measures employed in the model include statistics derived from a constant false alarm rate (CFAR) processor, the Power Spectrum Signature, and others. We present performance predictors for two trained ATD classifiers, one constructed using GENIE Pro™, a tool developed at Los Alamos National Laboratory, and the other using eCognition™, developed by Definiens (http://www.definiens.com/products). We present analyses of the two performance predictions and compare the underlying prediction models. The paper concludes with a discussion of future research.

  13. Evaluation of solar pond performance

    SciTech Connect

    Wittenberg, L.J.

    1981-01-01

    During 1978 the City of Miamisburg constructed a large, salt-gradient solar pond as part of its community park development project. The thermal energy stored in the pond is being used to heat an outdoor swimming pool in the summer and an adjacent recreational building during part of the winter. This solar pond, which occupies an area of 2020 m² (22,000 ft²), was designed from experience obtained at smaller research ponds. This project is directed toward data collection and evaluation of the thermal performance and operational characteristics of the largest operational salt-gradient solar pond in the United States; gaining firsthand experience regarding the maintenance, adjustments and repairs required of a large, operational solar pond facility; and providing technical consultation regarding the operation and optimization of the pond performance.

  14. Performance evaluation of ant colony optimization-based solution strategies on the mixed-model assembly line balancing problem

    NASA Astrophysics Data System (ADS)

    Akpinar, Sener; Mirac Bayhan, G.

    2014-06-01

    The aim of this article is to compare the performances of iterative ant colony optimization (ACO)-based solution strategies on a mixed-model assembly line balancing problem of type II (MMALBP-II) by addressing some particular features of real-world assembly line balancing problems such as parallel workstations and zoning constraints. To solve the problem, where the objective is to minimize the cycle time (i.e. maximize the production rate) for a predefined number of workstations in an existing assembly line, two ACO-based approaches which differ in the mission assigned to artificial ants are used. Furthermore, each ACO-based approach is conducted with two different pheromone release strategies: global and local pheromone updating rules. The four ACO-based approaches are used for solving 20 representative MMALBP-II instances to compare their performance in terms of computational time and solution quality. Detailed comparison results are presented.
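
    Because the two ACO variants above are distinguished mainly by their pheromone release strategies, a minimal sketch of global and local pheromone updating in the usual Ant Colony System form is given below. The trail encoding (task-to-workstation desirability), the evaporation rates and TAU0 are illustrative assumptions, not the authors' implementation.

      # Sketch of global vs. local pheromone updating (Ant Colony System style).
      # tau[i, j] is read here as the desirability of assigning task i to
      # workstation j; RHO, XI and TAU0 are assumed values.
      import numpy as np

      RHO, XI, TAU0 = 0.1, 0.1, 1.0

      def local_update(tau, task, station):
          """Applied right after an ant makes an assignment: pulls the trail
          back toward TAU0 so later ants are nudged to explore alternatives."""
          tau[task, station] = (1.0 - XI) * tau[task, station] + XI * TAU0

      def global_update(tau, best_assignment, best_cycle_time):
          """Applied once per iteration with the best solution found so far:
          evaporates all trails, then reinforces only that solution's arcs."""
          tau *= (1.0 - RHO)
          deposit = RHO / best_cycle_time   # shorter cycle time -> larger deposit
          for task, station in best_assignment:
              tau[task, station] += deposit

      # Toy usage with 5 tasks and 3 workstations.
      tau = np.full((5, 3), TAU0)
      local_update(tau, task=0, station=1)
      global_update(tau, [(0, 0), (1, 0), (2, 1), (3, 2), (4, 2)], best_cycle_time=42.0)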

  15. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type...

  16. Small wind turbine performance evaluation using field test data and a coupled aero-electro-mechanical model

    NASA Astrophysics Data System (ADS)

    Wallace, Brian D.

    A series of field tests and theoretical analyses were performed on various wind turbine rotor designs at two Penn State residential-scale wind-electric facilities. This work involved the prediction and experimental measurement of the electrical and aerodynamic performance of three wind turbines: a 3 kW rated Whisper 175, a 2.4 kW rated Skystream 3.7, and the Penn State designed Carolus wind turbine. Both the Skystream and Whisper 175 use OEM blades that were originally installed at the facilities. The Carolus rotor is a carbon-fiber composite 2-bladed machine, designed and assembled at Penn State, with the intent of replacing the Whisper 175 rotor at the off-grid system. Rotor aerodynamic performance is modeled using WT_Perf, a National Renewable Energy Laboratory developed Blade Element Momentum theory based performance prediction code. Steady-state power curves are predicted by coupling experimentally determined electrical characteristics with the aerodynamic performance of the rotor simulated with WT_Perf. A dynamometer test stand is used to establish the electromechanical efficiencies of the wind-electric system generator. Through the coupling of WT_Perf and dynamometer test results, an aero-electro-mechanical analysis procedure is developed and provides accurate predictions of wind system performance. The analysis of three different wind turbines gives a comprehensive assessment of the capability of the field test facilities and the accuracy of aero-electro-mechanical analysis procedures. Results from this study show that the Carolus and Whisper 175 rotors are running at higher tip-speed ratios than are optimum for power production. The aero-electro-mechanical analysis predicted the high operating tip-speed ratios of the rotors and was accurate at predicting output power for the systems. It is shown that the wind turbines operate at high tip-speed ratios because of a mismatch between the aerodynamic drive torque and the operating torque of the wind

  17. Towards Fully Coupled Atmosphere-Hydrology Model Systems: Recent Developments and Performance Evaluation For Different Climate Regions

    NASA Astrophysics Data System (ADS)

    Kunstmann, Harald; Fersch, Benjamin; Rummler, Thomas; Wagner, Sven; Arnault, Joel; Senatore, Alfonso; Gochis, David

    2015-04-01

    Limitations in the adequate representation of terrestrial hydrologic processes controlling the land-atmosphere coupling are assumed to be a significant factor currently limiting the prediction skill of regional atmospheric models. The necessity for more comprehensive process descriptions accounting for the interdependencies between water and energy fluxes at the compartmental interfaces is driving recent developments in hydrometeorological modeling towards more sophisticated treatment of terrestrial hydrologic processes. It is particularly the lateral surface and subsurface water fluxes that are neglected in standard regional atmospheric models. Current developments in enhanced lateral hydrological process descriptions in the WRF model system will be presented. Based on WRF and WRF-Hydro, new modules and concepts for integrating the saturated zone by a 2-dimensional groundwater scheme and coupling approaches to the unsaturated zone will be presented. The fully coupled model system allows modelling of the complete regional water cycle, from the top of the atmosphere, via the boundary layer, the land surface, the unsaturated zone and the saturated zone, down to the flow in the river beds. With this increasing complexity, which also allows the complex interactions of the regional water cycle to be described on different spatial and temporal scales, the reliability and predictability of model simulations can only be shown if performance is tested for a variety of hydrological variables in different climatological environments. We will show results of fully coupled simulations for the regions of perennially humid Southern Bavaria, Germany (rivers Isar and Ammer) and semiarid to subhumid West Africa (river Sissilli). In both regions, in addition to streamflow measurements, the validation of heat fluxes is also possible via eddy-covariance stations within hydrometeorological testbeds. In the German Isar/Ammer region, e.g., we apply the extended WRF-Hydro modeling system at 3 km atmospheric grid

  18. Prospective safety performance evaluation on construction sites.

    PubMed

    Wu, Xianguo; Liu, Qian; Zhang, Limao; Skibniewski, Miroslaw J; Wang, Yanhong

    2015-05-01

    This paper presents a systematic Structural Equation Modeling (SEM) based approach for Prospective Safety Performance Evaluation (PSPE) on construction sites, with causal relationships and interactions between enablers and the goals of PSPE taken into account. Based on a sample of 450 valid questionnaire surveys from 30 Chinese construction enterprises, a SEM model with 26 items for PSPE in the context of the Chinese construction industry is established and then verified through a goodness-of-fit test. Three typical types of construction enterprises, namely the state-owned enterprise, the private enterprise and the Sino-foreign joint venture, are selected as samples to measure the level of safety performance, given that enterprise scale, ownership and business strategy differ. Results provide a full understanding of safety performance practice in the construction industry, and indicate that the overall safety performance situation on working sites is rated at level III (Fair) or above. This can be explained by the fact that the construction industry has gradually matured under established norms, and construction enterprises must improve their level of safety performance so as not to be eliminated from the government-led construction industry. The differences in safety performance practice among the construction enterprise categories are compared and analyzed according to the evaluation results. This research provides insights into cause-effect relationships among safety performance factors and goals, which, in turn, can facilitate the improvement of high safety performance in the construction industry. PMID:25746166

  19. How do current irrigation practices perform? Evaluation of different irrigation scheduling approaches based on experiments and crop model simulations

    NASA Astrophysics Data System (ADS)

    Seidel, Sabine J.; Werisch, Stefan; Barfus, Klemens; Wagner, Michael; Schütze, Niels; Laber, Hermann

    2014-05-01

    The increasing worldwide water scarcity, costs and negative off-site effects of irrigation are leading to the necessity of developing methods of irrigation that increase water productivity. Various approaches are available for irrigation scheduling. Traditionally, schedules are calculated based on soil water balance (SWB) calculations using some measure of reference evaporation and empirical crop coefficients. These crop-specific coefficients are provided by the FAO but are also available for different regions (e.g. Germany). The approach is simple, but there are several inaccuracies due to simplifications and limitations such as poor transferability. Crop growth models - which simulate the main physiological plant processes through a set of assumptions and calibration parameters - are widely used to support decision making, but also for yield gap or scenario analyses. One major advantage of mechanistic models compared to empirical approaches is their spatial and temporal transferability. Irrigation scheduling can also be based on measurements of soil water tension, which is closely related to plant stress. Such measurements are precise, easy to take and can be automated, but face the difficulty of deciding where to probe, especially in heterogeneous soils. In this study, a two-year field experiment was used to extensively evaluate the three mentioned irrigation scheduling approaches regarding the efficiency of irrigation water application, with the aim of promoting better agronomic practices in irrigated horticulture. To evaluate the tested irrigation scheduling approaches, an extensive plant and soil water data collection was used to precisely calibrate the mechanistic crop model Daisy. The experiment was conducted with white cabbage (Brassica oleracea L.) on a sandy loamy field in 2012/13 near Dresden, Germany. Hereby, three irrigation scheduling approaches were tested: (i) two schedules were estimated based on SWB calculations using different crop

  20. A Discrete Event Simulation Model for Evaluating the Performances of an M/G/C/C State Dependent Queuing System

    PubMed Central

    Khalid, Ruzelan; M. Nawawi, Mohd Kamal; Kawsar, Luthful A.; Ghani, Noraida A.; Kamil, Anton A.; Mustafa, Adli

    2013-01-01

    M/G/C/C state dependent queuing networks consider service rates as a function of the number of residing entities (e.g., pedestrians, vehicles, and products). However, modeling such dynamic rates is not supported in modern discrete event simulation (DES) software. We designed an approach to cater for this limitation and used it to construct the M/G/C/C state-dependent queuing model in Arena software. Using the model, we have evaluated and analyzed the impacts of various arrival rates on the throughput, the blocking probability, the expected service time and the expected number of entities in a complex network topology. Results indicated that there is a range of arrival rates for each network where the simulation results fluctuate drastically across replications, and this causes the simulation results and analytical results to exhibit discrepancies. Detailed results showing how closely the simulation results tally with the analytical results, in both abstract and graphical forms, along with scientific justifications, have been documented and discussed. PMID:23560037
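
    As a rough analytical cross-check on such simulations, the blocking probability of a finite-capacity queue with state-dependent service can be computed from birth-death balance equations when service is approximated as exponential. The sketch below is exactly that simplification (an M/M/C/C analogue, not the M/G/C/C Arena model above), and the linear congestion rule f(n) is an assumption.

      # Simplified birth-death cross-check for a state-dependent, finite-capacity
      # queue (exponential-service approximation; the congestion rule f(n) is
      # an assumption, not the model described above).
      def stationary_probs(lam, mu1, capacity):
          """Steady-state probabilities with arrival rate lam and total
          departure rate n * mu1 * f(n) when n entities are present."""
          f = lambda n: max(1.0 - 0.8 * (n - 1) / capacity, 0.1)
          weights = [1.0]
          for n in range(1, capacity + 1):
              weights.append(weights[-1] * lam / (n * mu1 * f(n)))
          total = sum(weights)
          return [w / total for w in weights]

      lam, mu1, C = 2.0, 1.0, 20
      p = stationary_probs(lam, mu1, C)
      blocking = p[-1]                      # probability that an arrival is lost
      throughput = lam * (1.0 - blocking)
      print(f"blocking probability = {blocking:.4f}, throughput = {throughput:.3f}")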

  1. Evaluating the performance of CMIP3 and CMIP5 global climate models over the north-east Atlantic region

    NASA Astrophysics Data System (ADS)

    Perez, Jorge; Menendez, Melisa; Mendez, Fernando J.; Losada, Inigo J.

    2014-11-01

    One of the main sources of uncertainty in estimating climate projections affected by global warming is the choice of the global climate model (GCM). The aim of this study is to evaluate the skill of GCMs from CMIP3 and CMIP5 databases in the north-east Atlantic Ocean region. It is well known that the seasonal and interannual variability of surface inland variables (e.g. precipitation and snow) and ocean variables (e.g. wave height and storm surge) are linked to the atmospheric circulation patterns. Thus, an automatic synoptic classification, based on weather types, has been used to assess whether GCMs are able to reproduce spatial patterns and climate variability. Three important factors have been analyzed: the skill of GCMs to reproduce the synoptic situations, the skill of GCMs to reproduce the historical inter-annual variability and the consistency of GCMs experiments during twenty-first century projections. The results of this analysis indicate that the most skilled GCMs in the study region are UKMO-HadGEM2, ECHAM5/MPI-OM and MIROC3.2(hires) for CMIP3 scenarios and ACCESS1.0, EC-EARTH, HadGEM2-CC, HadGEM2-ES and CMCC-CM for CMIP5 scenarios. These models are therefore recommended for the estimation of future regional multi-model projections of surface variables driven by the atmospheric circulation in the north-east Atlantic Ocean region.
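
    As an illustration of the kind of weather-type evaluation described above, the sketch below clusters daily sea-level-pressure anomaly fields into synoptic types and scores a GCM by how well it reproduces the observed type frequencies. The use of k-means, 25 types and the simple frequency-overlap score are assumptions for illustration, not the authors' exact classification.

      # Sketch of an automatic weather-type classification and a frequency-based
      # skill measure (k-means, 25 types and the overlap score are assumptions).
      import numpy as np
      from sklearn.cluster import KMeans

      def weather_types(slp_fields, n_types=25, seed=0):
          """Cluster daily SLP anomaly maps of shape (time, ny*nx) into types."""
          km = KMeans(n_clusters=n_types, n_init=10, random_state=seed)
          return km, km.fit_predict(slp_fields)

      def frequency_skill(obs_labels, gcm_labels, n_types):
          """1 minus half the summed absolute frequency differences (1 = perfect)."""
          f_obs = np.bincount(obs_labels, minlength=n_types) / len(obs_labels)
          f_gcm = np.bincount(gcm_labels, minlength=n_types) / len(gcm_labels)
          return 1.0 - 0.5 * np.abs(f_obs - f_gcm).sum()

      # Toy usage with synthetic fields standing in for reanalysis and a GCM.
      rng = np.random.default_rng(1)
      reanalysis = rng.normal(size=(3650, 20 * 30))
      gcm = reanalysis + rng.normal(scale=0.3, size=reanalysis.shape)
      km, obs_lab = weather_types(reanalysis)
      gcm_lab = km.predict(gcm)
      print("weather-type frequency skill:", round(frequency_skill(obs_lab, gcm_lab, 25), 3))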

  2. Evaluating Performance Portability of OpenACC

    SciTech Connect

    Sabne, Amit J; Sakdhnagool, Putt; Lee, Seyong; Vetter, Jeffrey S

    2015-01-01

    Accelerator-based heterogeneous computing is gaining momentum in High Performance Computing arena. However, the increased complexity of the accelerator architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle the problem. While the abstraction endowed by OpenACC offers productivity, it raises questions on its portability. This paper evaluates the performance portability obtained by OpenACC on twelve OpenACC programs on NVIDIA CUDA, AMD GCN, and Intel MIC architectures. We study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  3. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., DEPARTMENT OF DEFENSE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 236.604 Performance evaluation. Prepare a separate performance evaluation after... familiar with the architect-engineer contractor's performance....

  4. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., DEPARTMENT OF DEFENSE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 236.604 Performance evaluation. Prepare a separate performance evaluation after... familiar with the architect-engineer contractor's performance....

  5. SEASAT SAR performance evaluation study

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The performance of the SEASAT synthetic aperture radar (SAR) sensor was evaluated using data processed by the MDA digital processor. Two particular aspects are considered: the location accuracy of image data, and the calibration of the measured backscatter amplitude of a set of corner reflectors. The image location accuracy was assessed by selecting identifiable targets in several scenes, converting their image location to UTM coordinates, and comparing the results to map sheets. The error standard deviation is measured to be approximately 30 meters. The amplitude was calibrated by measuring the responses of the Goldstone corner reflector array and comparing the results to theoretical values. A linear regression of the measured against theoretical values results in a slope of 0.954 with a correlation coefficient of 0.970.
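
    The amplitude calibration above reduces to a regression of measured corner-reflector responses against theoretical ones; a minimal sketch of that computation is shown below with synthetic numbers (the values are not the Goldstone data).

      # Sketch of the calibration regression: slope and correlation of measured
      # vs. theoretical corner-reflector responses (synthetic values).
      import numpy as np

      theoretical_db = np.array([30.0, 33.0, 36.0, 39.0, 42.0, 45.0])
      measured_db = 0.95 * theoretical_db + np.array([0.4, -0.3, 0.2, -0.1, 0.3, -0.2])

      slope, intercept = np.polyfit(theoretical_db, measured_db, deg=1)
      corr = np.corrcoef(theoretical_db, measured_db)[0, 1]
      print(f"slope = {slope:.3f}, intercept = {intercept:.2f}, r = {corr:.3f}")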

  6. Evaluation of squib performance variables

    SciTech Connect

    Munger, A.C.; Woods, C.M.; Phillabaum, M.R.

    1991-01-01

    The use of a Kinetic Energy Device for measuring the output of a pyrotechnic squib or actuator was presented in the proceedings of the Thirteenth Pyrotechnic Seminar, held in Grand Junction, Colorado, in 1988. This device was demonstrated as a valuable tool for evaluating the interface design between the squib and the next assembly. The thrust of this investigation was to evaluate the amount of containment that the interface provides and its effect on the amount of energy transmitted to a moving piston on the other side of the interface. Experiments were repeats of tests done with another test device known as the Variable Explosive Chamber. Those data were presented in the proceedings of the Twelfth Pyrotechnic Seminar, held in Juan-les-Pins, France, in 1987. A second area of investigation was to determine the effects of variation in the average compaction density and total mass of pyrotechnic powder load on the performance of the squib. The data shown here are for one specific geometry but may have implications for other geometries and even for other devices such as igniters or matches. The equations of motion are examined for two geometries of test actuators. Pressure pulse curves are derived from the displacement versus time records for the extremes of a constant density, variable mass test series. 4 refs.

  7. Applying Human-performance Models to Designing and Evaluating Nuclear Power Plants: Review Guidance and Technical Basis

    SciTech Connect

    O'Hara, J.M.

    2009-11-30

    Human performance models (HPMs) are simulations of human behavior with which we can predict human performance. Designers use them to support their human factors engineering (HFE) programs for a wide range of complex systems, including commercial nuclear power plants. Applicants to the U.S. Nuclear Regulatory Commission (NRC) can use HPMs for design certifications, operating licenses, and license amendments. In the context of nuclear-plant safety, it is important to assure that HPMs are verified and validated, and that their usage is consistent with their intended purpose. Using HPMs improperly may generate misleading or incorrect information, entailing safety concerns. The objective of this research was to develop guidance to support the NRC staff's reviews of an applicant's use of HPMs in an HFE program. The guidance is divided into three topical areas: (1) HPM Verification, (2) HPM Validation, and (3) User Interface Verification. Following this guidance will help ensure the benefits of HPMs are achieved in a technically sound, defensible manner. During the course of developing this guidance, I identified several issues that could not be addressed; they also are discussed.

  8. Next Generation Balloon Performance Model

    NASA Astrophysics Data System (ADS)

    Pankine, A.; Nock, K.; Heun, M.; Schlaifer, S.

    Global Aerospace Corporation is developing a new trajectory and performance modeling tool for Earth and planetary balloons, called Navajo. This tool will advance the state of the art for balloon performance models and assist NASA and commercial balloon designers, campaign and mission planners, and flight operations staff by providing high-accuracy vertical and horizontal trajectory predictions. Nothing like Navajo currently exists. The Navajo design integrates environment, balloon (or Lighter Than Air - LTA), gondola (for ballast and communications), and trajectory control system submodels to provide rapid and exhaustive evaluation of vertical and horizontal balloon and LTA vehicle trajectories. The concept utilizes an extensible computer application architecture to permit definition of additional flight system components and environments. The Navajo architecture decouples the balloon performance and environment models so that users can swap balloon and environment models easily and assess the capabilities of new balloon technologies in a variety of environments. The Navajo design provides integrated capabilities for safety analysis of Earth balloon trajectories, and utilizes improved thermal models. We report on our progress towards the development of Navajo.

  9. Using a Shared Governance Structure to Evaluate the Implementation of a New Model of Care: The Shared Experience of a Performance Improvement Committee

    PubMed Central

    Myers, Mary; Parchen, Debra; Geraci, Marilla; Brenholtz, Roger; Knisely-Carrigan, Denise; Hastings, Clare

    2013-01-01

    Sustaining change in the behaviors and habits of experienced practicing nurses can be frustrating and daunting, even when changes are based on evidence. Partnering with an active shared governance structure to communicate change and elicit feedback is an established method to foster partnership, equity, accountability and ownership. Few recent exemplars in the literature link shared governance, change management and evidence-based practice to transitions in care models. This article describes an innovative staff-driven approach used by nurses in a shared governance performance improvement committee to use evidence based practice in determining the best methods to evaluate the implementation of a new model of care. PMID:24061583

  10. Distributed ecohydrological modelling to evaluate irrigation system performance in Sirsa district, India II: Impact of viable water management scenarios

    NASA Astrophysics Data System (ADS)

    Singh, R.; Jhorar, R. K.; van Dam, J. C.; Feddes, R. A.

    2006-10-01

    This study focuses on the identification of appropriate strategies to improve water management and productivity in an irrigated area of 4270 km² in India (Sirsa district). The field scale ecohydrological model SWAP in combination with field experiments, remote sensing and GIS has been applied in a distributed manner generating the required hydrological and biophysical variables to evaluate alternative water management scenarios at different spatial and temporal scales. Simulation results for the period 1991-2001 show that the water and salt limited crop production is 1.2-2.0 times higher than the actual recorded crop production. Improved crop husbandry in terms of improved crop varieties, timely sowing, better nutrient supply and more effective weed, pest and disease control, will increase crop yields and water productivity in Sirsa district. The scenario results further showed that reduction of seepage losses to 25-30% of the total canal inflow and reallocation of 15% canal water inflow from the northern to the central canal commands will improve significantly the long term water productivity, halt the rising and declining groundwater levels, and decrease the salinization in Sirsa district.

  11. Evaluation of solar pond performance

    SciTech Connect

    Wittenberg, L.J.

    1980-01-01

    The City of Miamisburg, Ohio, constructed during 1978 a large, salt-gradient solar pond as part of its community park development project. The thermal energy stored in the pond is being used to heat an outdoor swimming pool in the summer and an adjacent recreational building during part of the winter. This solar pond, which occupies an area of 2020 m² (22,000 sq. ft.), was designed from experience obtained at smaller research ponds located at Ohio State University, the University of New Mexico and similar ponds operated in Israel. During the summer of 1979, the initial heat (40,000 kWh, 136 million Btu) was withdrawn from the solar pond to heat the outdoor swimming pool. All of the data collection systems were installed and functioned as designed so that operational data were obtained. The observed performance of the pond was compared with several of the predicted models for this type of pond. (MHR)

  12. A system-level mathematical model for evaluation of power train performance of load-leveled electric-vehicles

    NASA Technical Reports Server (NTRS)

    Purohit, G. P.; Leising, C. J.

    1984-01-01

    The power train performance of load-leveled electric vehicles can be compared with that of non-load-leveled systems by use of a simple mathematical model. This method of measurement involves a number of parameters including the degree of load leveling and regeneration, the flywheel mechanical to electrical energy fraction, and the efficiencies of the motor, generator, flywheel, and transmission. Basic efficiency terms are defined and representative comparisons of a variety of systems are presented. Results of the study indicate that mechanical transfer of energy into and out of the flywheel is more advantageous than electrical transfer. An optimum degree of load leveling may be achieved in terms of the driving cycle, battery characteristics, mode of mechanization, and the efficiency of the components. For state-of-the-art mechanically coupled flywheel systems, load leveling losses can be held to a reasonable 10%; electrically coupled systems can have losses that are up to six times larger. Propulsion system efficiencies for mechanically coupled flywheel systems are predicted to be approximately the 60% achieved on conventional non-load-leveled systems.

  13. Performance evaluation of an automotive thermoelectric generator

    NASA Astrophysics Data System (ADS)

    Dubitsky, Andrei O.

    Around 40% of the total fuel energy in typical internal combustion engines (ICEs) is rejected to the environment in the form of exhaust gas waste heat. Efficient recovery of this waste heat in automobiles can promise a fuel economy improvement of 5%. The thermal energy can be harvested through thermoelectric generators (TEGs) utilizing the Seebeck effect. In the present work, a versatile test bench has been designed and built in order to simulate conditions found on test vehicles. This allows experimental performance evaluation and model validation of automotive thermoelectric generators. An electrically heated exhaust gas circuit and a circulator based coolant loop enable integrated system testing of hot and cold side heat exchangers, thermoelectric modules (TEMs), and thermal interface materials at various scales. A transient thermal model of the coolant loop was created in order to design a system which can maintain constant coolant temperature under variable heat input. Additionally, as electrical heaters cannot match the transient response of an ICE, modelling was completed in order to design a relaxed exhaust flow and temperature history utilizing the system thermal lag. This profile reduced required heating power and gas flow rates by over 50%. The test bench was used to evaluate a DOE/GM initial prototype automotive TEG and validate analytical performance models. The maximum electrical power generation was found to be 54 W with a thermal conversion efficiency of 1.8%. It has been found that thermal interface management is critical for achieving maximum system performance, with novel designs being considered for further improvement.

  14. Hydrological evaluation of landfill performance (HELP) model assessment of the geology at Los Alamos National Laboratory, Technical Area 54, Material Disposal Area J

    SciTech Connect

    Vigil-Holterman, L.

    2002-01-01

    The purposes of this paper are to: (1) conduct HELP model variations in weather data, profile characteristics, and hydraulic conductivities for major rock units; (2) compare and contrast the results of simulations; (3) obtain an estimate of leakage through the landfill from the surface to the aquifer; and (4) evaluate contaminant transport to the aquifer utilizing the leakage estimate. The conclusions of this paper are: (1) the HELP model is useful to assess landfill design alternatives or the performance of a pre-existing landfill; (2) model results using site-specific data incorporated into the Weather Generator (Trial 4) varied significantly from generalized runs (Trials 1-3); consequently, models that lack site-specific data should be used cautiously; and (3) data from this study suggest that there will not be significant downward percolation of leachate from the surface of the landfill cap to the aquifer; leachate transport rates have been calculated to be slow.

  15. Performance Evaluation of a Data Validation System

    NASA Technical Reports Server (NTRS)

    Wong, Edmond (Technical Monitor); Sowers, T. Shane; Santi, L. Michael; Bickford, Randall L.

    2005-01-01

    Online data validation is a performance-enhancing component of modern control and health management systems. It is essential that performance of the data validation system be verified prior to its use in a control and health management system. A new Data Qualification and Validation (DQV) Test-bed application was developed to provide a systematic test environment for this performance verification. The DQV Test-bed was used to evaluate a model-based data validation package known as the Data Quality Validation Studio (DQVS). DQVS was employed as the primary data validation component of a rocket engine health management (EHM) system developed under NASA's NGLT (Next Generation Launch Technology) program. In this paper, the DQVS and DQV Test-bed software applications are described, and the DQV Test-bed verification procedure for this EHM system application is presented. Test-bed results are summarized and implications for EHM system performance improvements are discussed.

  16. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations...

  17. Attribution Theory and Academic Library Performance Evaluation.

    ERIC Educational Resources Information Center

    Gedeon, Julie A.; Rubin, Richard E.

    1999-01-01

    Discusses problems with performance evaluations in academic libraries and examines attribution theory, a sociopsychological theory which helps explain how biases may arise in the performance-evaluation process and may be responsible for producing serious and unrecognized inequities. Considers fairness in performance evaluation and differential…

  18. Evaluating the performance of SURFEXv5 as a new land surface scheme for the ALADINcy36 and ALARO-0 models

    NASA Astrophysics Data System (ADS)

    Hamdi, R.; Degrauwe, D.; Duerinckx, A.; Cedilnik, J.; Costa, V.; Dalkilic, T.; Essaouini, K.; Jerczynki, M.; Kocaman, F.; Kullmann, L.; Mahfouf, J.-F.; Meier, F.; Sassi, M.; Schneider, S.; Váňa, F.; Termonia, P.

    2014-01-01

    The newly developed land surface scheme SURFEX (SURFace EXternalisée) is implemented into a limited-area numerical weather prediction model running operationally in a number of countries of the ALADIN and HIRLAM consortia. The primary question addressed is the ability of SURFEX to be used as a new land surface scheme and thus assessing its potential use in an operational configuration instead of the original ISBA (Interactions between Soil, Biosphere, and Atmosphere) scheme. The results show that the introduction of SURFEX either shows improvement for or has a neutral impact on the 2 m temperature, 2 m relative humidity and 10 m wind. However, it seems that SURFEX has a tendency to produce higher maximum temperatures at high-elevation stations during winter daytime, which degrades the 2 m temperature scores. In addition, surface radiative and energy fluxes improve compared to observations from the Cabauw tower. The results also show that promising improvements with a demonstrated positive impact on the forecast performance are achieved by introducing the town energy balance (TEB) scheme. It was found that the use of SURFEX has a neutral impact on the precipitation scores. However, the implementation of TEB within SURFEX for a high-resolution run tends to cause rainfall to be locally concentrated, and the total accumulated precipitation obviously decreases during the summer. One of the novel features developed in SURFEX is the availability of a more advanced surface data assimilation using the extended Kalman filter. The results over Belgium show that the forecast scores are similar between the extended Kalman filter and the classical optimal interpolation scheme. Finally, concerning the vertical scores, the introduction of SURFEX either shows improvement for or has a neutral impact in the free atmosphere.

  19. 13 CFR 304.4 - Performance evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Performance evaluations. 304.4... ECONOMIC DEVELOPMENT DISTRICTS § 304.4 Performance evaluations. (a) EDA shall evaluate the management... the District Organization continues to receive Investment Assistance. EDA's evaluation shall...

  20. PERFORMANCE EVALUATION OF TYPE I MARINE SANITATION DEVICES

    EPA Science Inventory

    This performance test was designed to evaluate the effectiveness of two Type I Marine Sanitation Devices (MSDs): the Electro Scan Model EST 12, manufactured by Raritan Engineering Company, Inc., and the Thermopure-2, manufactured by Gross Mechanical Laboratories, Inc. Performance...

  1. Error Reduction Program. [combustor performance evaluation codes

    NASA Technical Reports Server (NTRS)

    Syed, S. A.; Chiappetta, L. M.; Gosman, A. D.

    1985-01-01

    The details of a study to select, incorporate and evaluate the best available finite difference scheme to reduce numerical error in combustor performance evaluation codes are described. The combustor performance computer programs chosen were the two-dimensional and three-dimensional versions of Pratt & Whitney's TEACH code. The criteria used to select schemes required that the difference equations mirror the properties of the governing differential equation, be more accurate than the current hybrid difference scheme, be stable and economical, be compatible with TEACH codes, use only modest amounts of additional storage, and be relatively simple. The methods of assessment used in the selection process consisted of examination of the difference equation, evaluation of the properties of the coefficient matrix, Taylor series analysis, and performance on model problems. Five schemes from the literature and three schemes developed during the course of the study were evaluated. This effort resulted in the incorporation of a scheme in 3D-TEACH which is usually more accurate than the hybrid differencing method and never less accurate.
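
    Since the baseline for comparison above is the hybrid differencing method, a short statement of that scheme may be helpful. The sketch below gives the standard Patankar-style neighbour coefficients for a 1-D convection-diffusion control volume (central differencing for cell Peclet numbers below 2, upwinding otherwise); it describes the baseline scheme only, not the improved scheme adopted in 3D-TEACH, and the face fluxes F and conductances D are assumed inputs.

      # Hybrid differencing coefficients for a 1-D convection-diffusion control
      # volume (Patankar-style form of the baseline scheme discussed above).
      def hybrid_coefficients(F_w, F_e, D_w, D_e):
          """Neighbour coefficients (a_W, a_E) from the convective face fluxes F
          and diffusive conductances D on the west/east faces."""
          a_W = max(F_w, D_w + 0.5 * F_w, 0.0)
          a_E = max(-F_e, D_e - 0.5 * F_e, 0.0)
          return a_W, a_E

      # Cell Peclet number F/D = 1 behaves like central differencing,
      # F/D = 4 switches to pure upwinding.
      print(hybrid_coefficients(F_w=1.0, F_e=1.0, D_w=1.0, D_e=1.0))   # (1.5, 0.5)
      print(hybrid_coefficients(F_w=4.0, F_e=4.0, D_w=1.0, D_e=1.0))   # (4.0, 0.0)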

  2. Evaluating GC/MS Performance

    SciTech Connect

    Alcaraz, A; Dougan, A

    2006-11-26

    and Water Check': By selecting View - Diagnostics/Vacuum Control - Vacuum - Air and Water Check. A Yes/No dialogue box will appear; select No (use current values). It is very important to select No! Otherwise the tune values are drastically altered. The software program will generate a water/air report similar to figure 3. Evaluating the GC/MS system with a performance standard: This procedure should allow the analyst to verify that the chromatographic column and associated components are working adequately to separate the various classes of chemical compounds (e.g., hydrocarbons, alcohols, fatty acids, aromatics, etc.). Use the same GC/MS conditions used to collect the system background and solvent check (part 1 of this document). Figure 5 is an example of a commercial GC/MS column test mixture used to evaluate GC/MS prior to analysis.

  3. Performance evaluation of two OCR systems

    SciTech Connect

    Chen, S.; Subramaniam, S.; Haralick, R.M.; Phillips, I.T.

    1994-12-31

    An experimental protocol for the performance evaluation of Optical Character Recognition (OCR) algorithms is described. The protocol is intended to serve as a model for using the University of Washington English Document Image Database-I to evaluate OCR systems. The plain text zones (without special symbols) in this database have over 2,300,000 characters. The performances of two UNIX-based OCR systems, namely Caere OCR v109a and Xerox ScanWorX v2.0, are measured. The results suggest that Caere OCR outperforms ScanWorX in terms of recognition accuracy; however, ScanWorX is more robust in the presence of image flaws.
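
    Recognition accuracy in comparisons like the one above is typically computed from the character-level edit distance between OCR output and ground truth. The sketch below uses the common Levenshtein-distance definition; it is offered as a generic illustration, not necessarily the exact scoring rule of the protocol described.

      # Character-level recognition accuracy via Levenshtein edit distance
      # (a common convention, not necessarily the protocol's exact scoring).
      def levenshtein(a, b):
          """Minimum number of insertions, deletions and substitutions."""
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, start=1):
              curr = [i]
              for j, cb in enumerate(b, start=1):
                  cost = 0 if ca == cb else 1
                  curr.append(min(prev[j] + 1,          # deletion
                                  curr[j - 1] + 1,      # insertion
                                  prev[j - 1] + cost))  # substitution
              prev = curr
          return prev[-1]

      def char_accuracy(ground_truth, ocr_output):
          errors = levenshtein(ground_truth, ocr_output)
          return 1.0 - errors / max(len(ground_truth), 1)

      print(char_accuracy("performance evaluation", "perfornance evaluatio"))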

  4. Integrated Assessment Model Evaluation

    NASA Astrophysics Data System (ADS)

    Smith, S. J.; Clarke, L.; Edmonds, J. A.; Weyant, J. P.

    2012-12-01

    Integrated assessment models of climate change (IAMs) are widely used to provide insights into the dynamics of the coupled human and socio-economic system, including emission mitigation analysis and the generation of future emission scenarios. Similar to the climate modeling community, the integrated assessment community has a two-decade history of model inter-comparison, which has served as one of the primary venues for model evaluation and confirmation. While analysis of historical trends in the socio-economic system has long played a key role in diagnostics of future scenarios from IAMs, formal hindcast experiments are just now being contemplated as evaluation exercises. Some initial thoughts on setting up such IAM evaluation experiments are discussed. Socio-economic systems do not follow strict physical laws, which means that evaluation needs to take place in a context, unlike that of physical system models, in which there are few fixed, unchanging relationships. Of course strict validation of even earth system models is not possible (Oreskes et al. 2004), a fact borne out by the inability of models to constrain the climate sensitivity. Energy-system models have also been grappling with some of the same questions over the last quarter century. For example, one of "the many questions in the energy field that are waiting for answers in the next 20 years" identified by Hans Landsberg in 1985 was "Will the price of oil resume its upward movement?" Of course we are still asking this question today. While, arguably, even fewer constraints apply to socio-economic systems, numerous historical trends and patterns have been identified, although often only in broad terms, that are used to guide the development of model components, parameter ranges, and scenario assumptions. IAM evaluation exercises are expected to provide useful information for interpreting model results and improving model behavior. A key step is the recognition of model boundaries, that is, what is inside

  5. Modelling of tests performed in order to evaluate the residual strength of corroded beams in the framework of the benchmark of the rance beams

    NASA Astrophysics Data System (ADS)

    Millard, A.; Vivier, M.

    2006-11-01

    The Benchmark of the Rance beams was organised to evaluate the capabilities of various modelling tools to predict the residual load-carrying capacity of corroded beams. The Rance beams had been corroded in a marine environment for nearly 40 years. Different types of prestressed beams, made with different types of cement, were subjected to four-point bending monotonic and cyclic tests as well as direct tension tests. The tests were carried out to failure in order to evaluate the residual carrying capacity of the beams. Different teams participated in the blind prediction of the test results. In this framework, the CEA/DM2S/LM2S team performed two-dimensional modelling, which is described in detail in this paper. The various constitutive elements of the beams are represented: for concrete, the isotropic Mazars damage model is used in a non-local version; for prestressing and passive steels, an elasto-plastic strain-hardening model is adopted. The corrosion effects, taken into account for the longitudinal rebars, are derived on the one hand from measurements performed on the beams after the tests, and on the other hand from the literature. They consist mainly of a reduction of the rebar cross-section, as well as of the rebar ductility. In principle, the properties of the bond between the rebars and the concrete are also modified by the corrosion. Here, because of the unavailability of specific data on the smooth rebars of the Rance beams, the bond has been modelled by means of specific joint finite elements. The load-carrying capacity has been calculated for the monotonic as well as the cyclic tests. Moreover, a sensitivity analysis has been performed by considering variants where either the rebars are sound, or they have only reduced sections with their original ductility. The results are compared to the experimental database and discussed.

  6. Performance Evaluation of Various Parameterization Schemes in Weather Research and Forecasting (WRF) Model : A Case Study Subtropical Urban Agglomeration National Capital Region (NCR), India

    NASA Astrophysics Data System (ADS)

    Sindhwani, R.; Kumar, S.; Goyal, P.

    2015-12-01

    Meteorological parameters play a significant and crucial role in simulating regional air quality. This study has been carried out to evaluate the performance of the WRF model with various combinations of physical parameterization schemes for predicting surface and upper-air meteorology around the capital city of India, Delhi, popularly known as the National Capital Region (NCR). Eight sensitivity experiments have been conducted to find the best combination of parameterization schemes for the study area during the summer season (4-18 April 2010). The model-predicted surface temperatures at 2 m, relative humidity at 2 m and wind speeds at 10 m are compared with observations from the Central Pollution Control Board (Dwarka and Shadipur monitoring stations) and the Indian Meteorological Department (VIDP and VIDD stations), whereas the upper-air potential temperature and wind speed profiles are validated using the Wyoming Weather Web data archive at the VIDD station. The qualitative and quantitative analyses of the simulations indicate that, for temperature and relative humidity, the combination consisting of Yonsei University (YSU) as the planetary boundary layer (PBL) scheme and Monin-Obukhov as the surface layer (SL) scheme, along with the Noah land surface model (LSM), performs better than the other combinations. The combination consisting of Mellor-Yamada-Janjic (Eta) as the PBL scheme, Monin-Obukhov-Janjic (Eta) as the SL scheme and the Noah LSM performs reasonably well in reproducing the observed wind conditions. This indicates that the selection of parameterization schemes may depend on the intended application of the model for a given region.

  7. Evaluating the long-term performance for rainfall-induced shallow landslides prediction using a physically-based model in Taiwan

    NASA Astrophysics Data System (ADS)

    Ho, Jui-Yi; Tun Lee, Kwan; Chen, Yi-Chin; Hwang, Gong-Do; Yang, Tsun-Hua; Lin, Gwo-Fong

    2015-04-01

    Rainfall-induced shallow landslides usually occur during typhoons or rainstorms and cause major damage in Taiwan. Efficient prediction could mitigate the loss of life and property: issuing a timely and accurate warning could avoid or reduce the damage before shallow landslides occur. The objective of this study is to evaluate the long-term performance of rainfall-induced shallow landslide prediction using a physically-based model in Taiwan. The Su-Hua and Southern Cross-Island highways in northeastern and southern Taiwan, which suffer major impacts from shallow landslides, were selected as the study areas. Detailed hydrologic records and geological information were collected to test the model performance by running the entire hourly rainfall data. Two comparisons were made to evaluate the model performance: (1) one with observed shallow landslides and (2) another with forecasts from the empirical alert threshold based on rainfall intensity and cumulative rainfall. The analytical results indicated that all three methods can efficiently detect the occurrence of shallow landslides (the probability of shallow landslide detection is close to 1.0). However, the empirical alert threshold does not consider the hyetograph (the distribution of rainfall depth within the storm event), land cover, or geological and geomorphological factors such as slope and contributing area, which induces a high false alarm ratio (false alarm ratio > 0.5) in the study areas. The proposed physically-based model could efficiently reduce the number of false alarms and may therefore be a better choice for predicting rainfall-induced shallow landslides. The threat score obtained by the physically-based model is 0.75 on the Su-Hua highway and 1.00 on the Southern Cross-Island highway, indicating that the predicted and recorded shallow landslides are in good agreement. The results showed that the long-term performance of the proposed physically-based model
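
    The detection ratio, false alarm ratio and threat score quoted above follow from a standard hit/miss/false-alarm contingency table; a minimal sketch of those definitions is given below with illustrative counts (not the study's data).

      # Standard categorical scores from a hit/miss/false-alarm contingency table
      # (illustrative counts, not the study's data).
      def categorical_scores(hits, misses, false_alarms):
          pod = hits / (hits + misses) if (hits + misses) else float("nan")
          far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else 0.0
          ts = hits / (hits + misses + false_alarms)   # threat score (CSI)
          return pod, far, ts

      pod, far, ts = categorical_scores(hits=3, misses=0, false_alarms=1)
      print(f"POD = {pod:.2f}, FAR = {far:.2f}, TS = {ts:.2f}")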

  8. How To Evaluate Teacher Performance.

    ERIC Educational Resources Information Center

    Wilson, Laval S.

    Teacher evaluations tend to be like clothes. Whatever is in vogue at the time is utilized extensively by those who are attempting to remain modern and current. If you stay around long enough, the "hot" methods of today will probably recycle to be the new discovery of the future. In the end, each school district develops an evaluation process that…

  9. Performance evaluation of carbon dioxide-alkanolamine- water system by equation of state/excess Gibbs energy models

    NASA Astrophysics Data System (ADS)

    Suleman, H.; Maulud, A. S.; Man, Z.

    2016-06-01

    Numerous thermodynamic techniques have been applied to correlate carbon dioxide-alkanolamine-water systems, with varying accuracy and complexity. With the advent of high-pressure carbon dioxide absorption in industry, the development of high-pressure thermodynamic models has become a necessity. Equation of state/excess Gibbs energy models promise a substantial improvement in this field. Many researchers have shown the application of these models to high-pressure vapour-liquid equilibria of the said system with good correlation. However, no study shows the range of application of these models in the presence of other competitive techniques. Therefore, this study quantitatively describes the range of application of equation of state/excess Gibbs energy models to carbon dioxide-alkanolamine systems. The model uses the Linear Combination of Vidal and Michelsen (LCVM) mixing rule for correlation of carbon dioxide absorption in single aqueous monoethanolamine, diethanolamine and methyldiethanolamine mixtures. The results show that the correlation of equation of state/excess Gibbs energy models exhibits a transient change at carbon dioxide loadings of 0.8. Therefore, these models are applicable to the above-mentioned system for carbon dioxide loadings of 0.8 mol/mol and higher. The observations are similar in behaviour for all tested alkanolamines and are therefore generalized for the system.
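
    For orientation, the LCVM approach blends the dimensionless cubic-EoS energy parameter obtained from the Vidal (Huron-Vidal) limit with that from the Michelsen (MHV1) limit. The schematic form below is a sketch of that idea in common notation (lambda is the empirical weighting constant); it is not the paper's exact working equation set.

      % Schematic LCVM blend of the Vidal (V) and Michelsen (M) alpha expressions
      % (sketch in common notation, not the paper's exact working equations):
      \alpha_{\mathrm{LCVM}} = \lambda\,\alpha_{\mathrm{V}} + (1-\lambda)\,\alpha_{\mathrm{M}},
      \qquad \alpha \equiv \frac{a}{b\,R\,T}, \qquad 0 \le \lambda \le 1 .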

  10. Predictive performance models and multiple task performance

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  11. A Model Performance

    ERIC Educational Resources Information Center

    Thornton, Bradley D.; Smalley, Robert A.

    2008-01-01

    Building information modeling (BIM) uses three-dimensional modeling concepts, information technology and interoperable software to design, construct and operate a facility. However, BIM can be more than a tool for virtual modeling--it can provide schools with a 3-D walkthrough of a project while it still is on the electronic drawing board. BIM can…

  12. Hierarchical clustering analysis of reading aloud data: a new technique for evaluating the performance of computational models.

    PubMed

    Robidoux, Serje; Pritchard, Stephen C

    2014-01-01

    DRC (Coltheart et al., 2001) and CDP++ (Perry et al., 2010) are two of the most successful models of reading aloud. These models differ primarily in how their sublexical systems convert letter strings into phonological codes. DRC adopts a set of grapheme-to-phoneme conversion rules (GPCs) while CDP++ uses a simple trained network that has been exposed to a combination of rules and the spellings and pronunciations of known words. Thus far the debate between fixed rules and learned associations has largely emphasized reaction time experiments, error rates in dyslexias, and item-level variance from large-scale databases. Recently, Pritchard et al. (2012) examined the models' non-word reading in a new way. They compared responses produced by the models to those produced by 45 skilled readers. Their item-by-item analysis is informative, but leaves open some questions that can be addressed with a different technique. Using hierarchical clustering techniques, we first examined the subject data to identify whether there are classes of subjects that are similar to each other in their overall response profiles. We found that there are indeed two groups of subjects that differ in their pronunciations of certain consonant clusters. We also tested the possibility that CDP++ is modeling one set of subjects well, while DRC is modeling a different set of subjects. We found that CDP++ does not fit any human reader's response pattern very well, while DRC fits the human readers as well as or better than any other reader. PMID:24744745
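
    The clustering step described above can be sketched in a few lines: readers (and, optionally, the models) are placed in a dissimilarity space built from their item-by-item responses and the resulting tree is cut into two groups. The Hamming distance on categorical response codes and average linkage below are assumptions for illustration, not necessarily the authors' choices.

      # Sketch of hierarchical clustering of response profiles (Hamming distance
      # and average linkage are assumptions, not necessarily the authors' choices).
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import pdist

      # rows = readers (plus a model), columns = non-word items, values = response codes
      responses = np.array([
          [0, 1, 1, 0, 2, 1],   # reader 1
          [0, 1, 1, 0, 2, 1],   # reader 2
          [1, 1, 0, 0, 2, 0],   # reader 3
          [1, 1, 0, 1, 2, 0],   # reader 4
          [0, 2, 1, 0, 1, 1],   # a model's responses
      ])

      dist = pdist(responses, metric="hamming")   # fraction of items answered differently
      tree = linkage(dist, method="average")      # agglomerative clustering
      groups = fcluster(tree, t=2, criterion="maxclust")
      print("cluster assignment per response profile:", groups)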

  13. On the use of the post-closure methods uncertainty band to evaluate the performance of land surface models against eddy covariance flux data

    NASA Astrophysics Data System (ADS)

    Ingwersen, J.; Imukova, K.; Högy, P.; Streck, T.

    2015-04-01

    The energy balance of eddy covariance (EC) flux data is normally not closed. Therefore, at least if used for modelling, EC flux data are usually post-closed, i.e. the measured turbulent fluxes are adjusted so as to close the energy balance. At the current state of knowledge, however, it is not clear how to partition the missing energy in the right way. Eddy flux data therefore contain some uncertainty due to the unknown nature of the energy balance gap, which should be considered in model evaluation and the interpretation of simulation results. We propose to construct the post-closure methods uncertainty band (PUB), which essentially designates the differences between non-adjusted flux data and flux data adjusted with the three post-closure methods (Bowen ratio, latent heat flux (LE) and sensible heat flux (H) method). To demonstrate this approach, simulations with the NOAH-MP land surface model were evaluated based on EC measurements conducted at a winter wheat stand in southwest Germany in 2011, and the performance of the Jarvis and Ball-Berry stomatal resistance scheme was compared. The width of the PUB of the LE was up to 110 W m-2 (21% of net radiation). Our study shows that it is crucial to account for the uncertainty in EC flux data originating from lacking energy balance closure. Working with only a single post-closing method might result in severe misinterpretations in model-data comparisons.
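
    The three post-closure adjustments named above can be written down compactly: the half-hourly energy balance residual (Rn - G - H - LE) is attributed entirely to H, entirely to LE, or split so that the Bowen ratio H/LE is preserved, and the spread of the resulting LE values gives the PUB width. The sketch below illustrates this with made-up flux values (W m-2), not the study's data.

      # Sketch of the three post-closure adjustments behind the PUB. The residual
      # (Rn - G - H - LE) is assigned to H only, to LE only, or split so that the
      # Bowen ratio H/LE is preserved. Flux values are illustrative (W m-2).
      def post_closure_band(Rn, G, H, LE):
          residual = Rn - G - H - LE
          bowen = H / LE
          adjusted_LE = {
              "unadjusted": LE,
              "H_method": LE,                                  # residual added to H only
              "LE_method": LE + residual,                      # residual added to LE only
              "Bowen_method": LE + residual / (1.0 + bowen),   # split preserving H/LE
          }
          width = max(adjusted_LE.values()) - min(adjusted_LE.values())
          return adjusted_LE, width

      adjusted_LE, width = post_closure_band(Rn=520.0, G=40.0, H=140.0, LE=260.0)
      print(adjusted_LE)
      print(f"PUB width for LE: {width:.0f} W m-2")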

  14. On the use of the post-closure method uncertainty band to evaluate the performance of land surface models against eddy covariance flux data

    NASA Astrophysics Data System (ADS)

    Ingwersen, J.; Imukova, K.; Högy, P.; Streck, T.

    2014-12-01

    The energy balance of eddy covariance (EC) flux data is normally not closed. Therefore, at least if used for modeling, EC flux data are usually post-closed, i.e. the measured turbulent fluxes are adjusted so as to close the energy balance. At the current state of knowledge, however, it is not clear how to partition the missing energy in the right way. Eddy flux data therefore contain some uncertainty due to the unknown nature of the energy balance gap, which should be considered in model evaluation and the interpretation of simulation results. We propose to construct the post-closure method uncertainty band (PUB), which essentially designates the differences between non-adjusted flux data and flux data adjusted with the three post-closure methods (Bowen ratio, latent heat flux (LE) and sensible heat flux (H) method). To demonstrate this approach, simulations with the NOAH-MP land surface model were evaluated based on EC measurements conducted at a winter wheat stand in Southwest Germany in 2011, and the performance of the Jarvis and Ball-Berry stomatal resistance scheme was compared. The width of the PUB of the LE was up to 110 W m-2 (21% of net radiation). Our study shows that it is crucial to account for the uncertainty of EC flux data originating from lacking energy balance closure. Working with only a single post-closing method might result in severe misinterpretations in model-data comparisons.

  15. A Method for Missile Autopilot Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Eguchi, Hirofumi

    The essential benefit of HardWare-In-the-Loop (HWIL) simulation can be summarized as follows: the performance of an autopilot system is evaluated realistically, without modeling error, by using actual hardware such as seeker systems, autopilot systems and servo equipment. HWIL simulation, however, requires very expensive facilities; in these facilities, the target model generator is an indispensable subsystem. In this paper, one example of an HWIL simulation facility with a target model generator for RF seeker systems is introduced first. However, like most other generators, this generator has a functional limitation on the line-of-sight angle; a test method to overcome this limitation is therefore proposed.

  16. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CONTRACTING REQUIREMENTS CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 2936.604 Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor... reports must be made using Standard Form 1421, Performance Evaluation (Architect-Engineer) as...

  17. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... CONTRACTING REQUIREMENTS CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 2936.604 Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor... reports must be made using Standard Form 1421, Performance Evaluation (Architect-Engineer) as...

  18. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... CONTRACTING REQUIREMENTS CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 2936.604 Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor... reports must be made using Standard Form 1421, Performance Evaluation (Architect-Engineer) as...

  19. VPPA weld model evaluation

    NASA Technical Reports Server (NTRS)

    Mccutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.

    1992-01-01

    NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970's but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments was used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.

  20. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., DEPARTMENT OF DEFENSE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 236.604 Performance evaluation. (a) Preparation of performance reports. Use DD Form 2631, Performance Evaluation (Architect-Engineer), instead of SF 1421. (2) Prepare a...

  1. What makes a top research medical school? A call for a new model to evaluate academic physicians and medical school performance.

    PubMed

    Goldstein, Matthew J; Lunn, Mitchell R; Peng, Lily

    2015-05-01

    Since the publication of the Flexner Report in 1910, the medical education enterprise has undergone many changes to ensure that medical schools meet a minimum standard for the curricula and clinical training they offer students. Although the efforts of the licensing and accrediting bodies have raised the quality of medical education, the educational processes that produce the physicians who provide the best patient care and conduct the best biomedical research have not been identified. Comparative analyses are powerful tools to understand the differences between institutions, but they are challenging to carry out. As a result, the analysis performed by U.S. News & World Report (USN&WR) has become the default tool to compare U.S. medical schools. Medical educators must explore more rigorous and equitable approaches to analyze and understand the performance of medical schools. In particular, a better understanding and more thorough evaluation of the most successful institutions in producing academic physicians with biomedical research careers are needed. In this Perspective, the authors present a new model to evaluate medical schools' production of academic physicians who advance medicine through basic, clinical, translational, and implementation science research. This model is based on relevant and accessible objective criteria that should replace the subjective criteria used in the current USN&WR rankings system. By fostering a national discussion about the most meaningful criteria that should be measured and reported, the authors hope to increase transparency of assessment standards and ultimately improve educational quality. PMID:25607941

  2. Evaluating the performance of the Community Land Model (CLM4.5) for a western US coniferous forest under annual drought stress

    NASA Astrophysics Data System (ADS)

    Duarte, H.; Lin, J. C.; Ehleringer, J. R.

    2014-12-01

    The Community Land Model (CLM) is the land model of NCAR's Community Earth System Model (CESM), encompassing land biogeophysics, biogeochemistry, hydrology, and ecosystem dynamics components. Several modifications were implemented in its most recent release (CLM4.5), including a revised photosynthesis scheme and improved hydrology, among an extensive list of updates. Since version 4.0, CLM also includes parameterizations related to photosynthetic carbon isotope discrimination. In this study we evaluate the performance of CLM4.5 at the Wind River Field Station AmeriFlux site (US-Wrc), with particular attention to its parameterization of ecosystem drought response. US-Wrc is located near the WA/OR border in a coniferous forest (Douglas-fir/western hemlock), in a region characterized by a strongly seasonal climate and summer drought. Long-term meteorological/biological data are available through the AmeriFlux repository (almost a decade of gap-filled L4 data, starting in 1998). Another factor that makes the site unique is the availability of a decade-long record of carbon isotope ratios (δ13C). Here we run CLM in offline mode, forced by the observed meteorological data, and then compare modeled surface fluxes (CO2, sensible heat, and latent heat) against observed eddy-covariance fluxes. We also use the observed δ13C values to assess the parameterizations of carbon isotope discrimination in the model. We will present the results of the analysis and discuss possible improvements to the model.

  3. Infrasound Sensor Models and Evaluations

    SciTech Connect

    KROMER,RICHARD P.; MCDONALD,TIMOTHY S.

    2000-07-31

    Sandia National Laboratories has continued to evaluate the performance of infrasound sensors that are candidates for use by the International Monitoring System (IMS) for the Comprehensive Nuclear-Test-Ban Treaty Organization. The performance criteria against which these sensors are assessed are specified in "Operational Manual for Infra-sound Monitoring and the International Exchange of Infrasound Data". This presentation includes the results of efforts concerning two of these sensors: (1) Chaparral Physics Model 5; and (2) CEA MB2000. Sandia is working with Chaparral Physics in order to improve the capability of the Model 5 (a prototype sensor) to be calibrated and evaluated. With the assistance of the Scripps Institution of Oceanography, Sandia is also conducting tests to evaluate the performance of the CEA MB2000. Sensor models based on theoretical transfer functions and manufacturer specifications for these two devices have been developed. This presentation will feature the results of coherence-based data analysis of signals from a huddle test, utilizing several sensors of both types, in order to verify the sensor performance.
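
    As a rough illustration of the coherence-based huddle-test analysis mentioned above (not Sandia's actual code), the snippet below computes the magnitude-squared coherence between two collocated sensor records; the sample rate, band limits, signal, and noise levels are assumed values.

        # Illustrative coherence check for a huddle test: two collocated sensors seeing the
        # same pressure field should show coherence near 1 in the passband; departures flag
        # sensor self-noise or response differences.
        import numpy as np
        from scipy.signal import coherence

        fs = 20.0                                   # assumed sample rate, Hz
        t = np.arange(0, 600, 1 / fs)
        signal = np.sin(2 * np.pi * 0.5 * t)        # stand-in for a common infrasound signal
        sensor_a = signal + 0.05 * np.random.randn(t.size)   # each sensor adds its own noise
        sensor_b = signal + 0.05 * np.random.randn(t.size)

        f, Cxy = coherence(sensor_a, sensor_b, fs=fs, nperseg=1024)
        band = (f > 0.1) & (f < 4.0)                # nominal infrasound band of interest (assumed)
        print(f"mean coherence in band: {Cxy[band].mean():.3f}")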

  4. Evaluating Service Organization Models

    PubMed Central

    TOUATI, NASSERA; PINEAULT, RAYNALD; CHAMPAGNE, FRANÇOIS; DENIS, JEAN-LOUIS; BROUSSELLE, ASTRID; CONTANDRIOPOULOS, ANDRÉ-PIERRE; GENEAU, ROBERT

    2016-01-01

    Based on the example of the evaluation of service organization models, this article shows how a configurational approach overcomes the limits of traditional methods which, for the most part, have studied the individual components of various models considered independently of one another. These traditional methods have led to results (observed effects) that are difficult to interpret. The configurational approach, in contrast, is based on the hypothesis that effects are associated with a set of internally coherent model features that form various configurations. These configurations, like their effects, are context-dependent. We explore the theoretical basis of the configurational approach in order to emphasize its relevance, and discuss the methodological challenges inherent in the application of this approach through an in-depth analysis of the scientific literature. We also propose methodological solutions to these challenges. We illustrate with an example how a configurational approach has been used to evaluate primary care models. Finally, we begin a discussion on the implications of this new evaluation approach for the scientific and decision-making communities.

  5. INTEGRATED WATER TREATMENT SYSTEM PERFORMANCE EVALUATION

    SciTech Connect

    SEXTON RA; MEEUWSEN WE

    2009-03-12

    This document describes the results of an evaluation of the current Integrated Water Treatment System (IWTS) operation against design performance and a determination of short term and long term actions recommended to sustain IWTS performance.

  6. Performance Evaluation of PBL Schemes of ARW Model in Simulating Thermo-Dynamical Structure of Pre-Monsoon Convective Episodes over Kharagpur Using STORM Data Sets

    NASA Astrophysics Data System (ADS)

    Madala, Srikanth; Satyanarayana, A. N. V.; Srinivas, C. V.; Tyagi, Bhishma

    2016-05-01

    In the present study, the advanced research WRF (ARW) model is employed to simulate convective thunderstorm episodes over the Kharagpur (22°30'N, 87°20'E) region of Gangetic West Bengal, India. High-resolution simulations are conducted using 1 × 1 degree NCEP final analysis meteorological fields for initial and boundary conditions for the events. The performance of two non-local [Yonsei University (YSU), Asymmetric Convective Model version 2 (ACM2)] and two local turbulence kinetic energy closures [Mellor-Yamada-Janjic (MYJ), Bougeault-Lacarrere (BouLac)] is evaluated in simulating planetary boundary layer (PBL) parameters and the thermodynamic structure of the atmosphere. The model-simulated parameters are validated with available in situ meteorological observations obtained from a micro-meteorological tower as well as high-resolution DigiCORA radiosonde ascents during the STORM-2007 field experiment at the study location and Doppler Weather Radar (DWR) imageries. It has been found that the PBL structure simulated with the TKE closures MYJ and BouLac is in better agreement with observations than that from the non-local closures. The model simulations with these schemes also captured the reflectivity, surface pressure patterns such as wake-low, meso-high, pre-squall low and the convective updrafts and downdrafts reasonably well. Qualitative and quantitative comparisons reveal that the MYJ followed by BouLac schemes better simulated various features of the thunderstorm events over the Kharagpur region. The better performance of MYJ followed by BouLac is evident in the lower mean bias, mean absolute error, and root mean square error and the good correlation coefficient for various surface meteorological variables as well as the thermo-dynamical structure of the atmosphere relative to the other PBL schemes. The better performance of the TKE closures may be attributed to their higher mixing efficiency, larger convective energy and better simulation of humidity promoting moist convection relative to the non-local schemes.
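
    A minimal sketch of the verification statistics cited above (mean bias, mean absolute error, root mean square error, and correlation coefficient), applied to a simulated series against observations; the example arrays are placeholders, not STORM-2007 data.

        # Generic model-verification metrics for a simulated vs. observed series.
        import numpy as np

        def verify(sim, obs):
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            err = sim - obs
            return {
                "mean_bias": err.mean(),
                "mae": np.abs(err).mean(),
                "rmse": np.sqrt((err ** 2).mean()),
                "corr": np.corrcoef(sim, obs)[0, 1],
            }

        obs_t2m = [301.2, 302.5, 303.1, 301.8]   # e.g., observed 2 m temperature (K), made up
        sim_t2m = [300.9, 302.9, 303.6, 302.0]   # e.g., simulated values, made up
        print(verify(sim_t2m, obs_t2m))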

  7. Pragmatic geometric model evaluation

    NASA Astrophysics Data System (ADS)

    Pamer, Robert

    2015-04-01

    Quantification of subsurface model reliability is mathematically and technically demanding, as there are many different sources of uncertainty and some of the factors can only be assessed subjectively. For many practical applications in industry or risk assessment (e.g. geothermal drilling), a quantitative estimate of possible geometric variations in depth units is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data like geological maps, borehole data and conceptually driven construction of subsurface elements (e.g. fault network). Within the context of the trans-European project "GeoMol", uncertainty analysis has to be very pragmatic, partly because of differing data rights, data policies and modelling software among the project partners. In a case study, a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step, several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space, and as a consequence areas of geometric complexity are identified. These areas are usually very data-sensitive; hence geometric variability between individual data points in these areas is higher than in areas of low structural complexity. Instead of calculating a multitude of different models by varying some input data or parameters, as is done in Monte Carlo simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to

  8. Performance Evaluation of the SPT-140

    NASA Technical Reports Server (NTRS)

    Manzella, David; Sarmiento, Charles; Sankovic, John; Haag, Tom

    1997-01-01

    As part of an on-going cooperative program with industry, an engineering model SPT-140 Hall thruster, which may be suitable for orbit insertion and station-keeping of geosynchronous communication satellites, was evaluated with respect to thrust and radiated electromagnetic interference at the NASA Lewis Research Center. Performance measurements were made using a laboratory model propellant feed system and commercial power supplies. The engine was operated in a space simulation chamber capable of providing background pressures of 4 x 10^-6 Torr or less during thruster operation. Thrust was measured at input powers ranging from 1.5 to 5 kilowatts with two different output filter configurations. The broadband electromagnetic emission spectrum generated by the engine was also measured over a range of frequencies from 0.01 to 18,000 MHz. These results are compared to the noise threshold of the measurement system and MIL-STD-461C where appropriate.

  9. Evaluation of the performance of SiBcrop model in predicting carbon fluxes and crop yields in the croplands of the US mid continental region

    NASA Astrophysics Data System (ADS)

    Lokupitiya, E.; Denning, S.; Paustian, K.; Corbin, K.; Baker, I.; Schaefer, K.

    2008-12-01

    The accurate representation of phenology, physiology, and major crop variables is important in the land-atmosphere carbon models being used to predict carbon and other exchanges of man-made cropland ecosystems. We evaluated the performance of the SiBcrop model (the Simple Biosphere model (SiB) with a new scheme for crop phenology and physiology) in predicting carbon exchanges of the US mid-continental region, which hosts several major crops. The use of the new phenology scheme within SiB remarkably improved the prediction of LAI and carbon fluxes for corn, soybean, and wheat crops as compared with the observed data at several AmeriFlux eddy covariance flux tower sites with those crops. SiBcrop better predicted the onset and end of the growing season, harvest, interannual variability associated with crop rotation, daytime carbon drawdown, and day-to-day variability in the carbon exchanges. The model has been coupled with RAMS, the Regional Atmospheric Modeling System (developed at Colorado State University), and the coupled SiBcrop-RAMS predicted carbon and other fluxes better than the original SiB-RAMS. SiBcrop also predicted daily variation in biomass in different plant pools (i.e. roots, leaves, stems, and products). In this study, we further evaluated the performance of SiBcrop by comparing the yield estimates based on the grain/seed biomass at harvest predicted by SiBcrop for relevant major crops, against the county-level crop yields reported by the US National Agricultural Statistics Service (NASS). Initially, the model runs were based on crop maps scaled at 40 km resolution; the maps were used to derive the fraction of corn, soybean, and wheat at each grid cell across the US Mid Continental Intensive (MCI) region under the North American Carbon Program (NACP). The yield biomass carbon values (at harvest) predicted for each grid cell by SiBcrop were extrapolated to derive the county-level yield biomass carbon values, which were then

  10. S-191 sensor performance evaluation

    NASA Technical Reports Server (NTRS)

    Hughes, C. L.

    1975-01-01

    A final analysis was performed on the Skylab S-191 spectrometer data received from missions SL-2, SL-3, and SL-4. The repeatability and accuracy of the S-191 spectroradiometric internal calibration was determined by correlation to the output obtained from well-defined external targets. These included targets on the moon and earth as well as deep space. In addition, the accuracy of the S-191 short wavelength autocalibration was flight checked by correlation of the earth resources experimental package S-191 outputs and the Backup Unit S-191 outputs after viewing selected targets on the moon.

  11. Evaluating the performance of SURFEXv5 as a new land surface scheme for the ALADINcy36 and ALARO-0 models

    NASA Astrophysics Data System (ADS)

    Hamdi, R.; Degrauwe, D.; Duerinckx, A.; Cedilnik, J.; Costa, V.; Dalkilic, T.; Essaouini, K.; Jerczynki, M.; Kocaman, F.; Kullmann, L.; Mahfouf, J.-F.; Meier, F.; Sassi, M.; Schneider, S.; Váňa, F.; Termonia, P.

    2013-07-01

    The newly developed land surface scheme SURFEX (Surface Externalisée) is implemented into a limited area numerical weather prediction model running operationally in a number of countries of the ALADIN and HIRLAM consortia. The primary question addressed is whether SURFEX can be used as a new land surface scheme, thus assessing its potential use in an operational configuration instead of the original ISBA (Interactions between Soil, Biosphere, and Atmosphere) scheme. The results show that the introduction of SURFEX either gives improvements or has a neutral impact on the 2 m temperature, 2 m relative humidity, and 10 m wind. However, it seems that SURFEX has a tendency to produce higher maximum temperatures at high-elevation stations during winter daytime, which degrades the scores. In addition, surface radiative and energy fluxes improve compared to observations from the Cabauw tower. The results also show that promising improvements with a demonstrated positive impact are achieved by introducing the Town Energy Balance (TEB) scheme. It was found that the use of SURFEX has a neutral impact on the precipitation scores. However, the implementation of TEB within SURFEX for a high-resolution run tends to cause rainfall to be locally concentrated, and the total accumulated precipitation decreases noticeably during the summer. One of the novel features developed in SURFEX is the availability of a more advanced surface data assimilation using the Extended Kalman Filter. The results over Belgium show that the forecast scores are similar between the Extended Kalman Filter and the classical Optimal Interpolation scheme. Finally, concerning the upper air scores, the introduction of SURFEX either gives improvements or has a neutral impact in the free atmosphere.

  12. Evaluating iterative reconstruction performance in computed tomography

    SciTech Connect

    Chen, Baiyu; Solomon, Justin; Ramirez Giraldo, Juan Carlos; Samei, Ehsan

    2014-12-15

    Purpose: Iterative reconstruction (IR) offers notable advantages in computed tomography (CT). However, its performance characterization is complicated by its potentially nonlinear behavior, impacting performance in terms of specific tasks. This study aimed to evaluate the performance of IR with both task-specific and task-generic strategies. Methods: The performance of IR in CT was mathematically assessed with an observer model that predicted the detection accuracy in terms of the detectability index (d′). d′ was calculated based on the properties of the image noise and resolution, the observer, and the detection task. The characterizations of image noise and resolution were extended to accommodate the nonlinearity of IR. A library of tasks was mathematically modeled at a range of sizes (radius 1–4 mm), contrast levels (10–100 HU), and edge profiles (sharp and soft). Unique d′ values were calculated for each task with respect to five radiation exposure levels (volume CT dose index, CTDIvol: 3.4–64.8 mGy) and four reconstruction algorithms (filtered backprojection reconstruction, FBP; iterative reconstruction in imaging space, IRIS; and sinogram affirmed iterative reconstruction with strengths of 3 and 5, SAFIRE3 and SAFIRE5; all provided by Siemens Healthcare, Forchheim, Germany). The d′ values were translated into the areas under the receiver operating characteristic curve (AUC) to represent human observer performance. For each task and reconstruction algorithm, a threshold dose was derived as the minimum dose required to achieve a threshold AUC of 0.9. A task-specific dose reduction potential of IR was calculated as the difference between the threshold doses for IR and FBP. A task-generic comparison was further made between IR and FBP in terms of the percent of all tasks yielding an AUC higher than the threshold. Results: IR required less dose than FBP to achieve the threshold AUC. In general, SAFIRE5 showed the most significant dose reduction
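
    For the equal-variance Gaussian observer, the detectability index maps to AUC as AUC = Φ(d′/√2); the sketch below applies that mapping and interpolates a threshold dose at AUC = 0.9. The dose grid and d′ values are placeholders, not the study's results.

        # d' -> AUC mapping and threshold-dose lookup (illustrative numbers only).
        import numpy as np
        from scipy.stats import norm

        def auc_from_dprime(dprime):
            return norm.cdf(dprime / np.sqrt(2.0))

        doses = np.array([3.4, 7.2, 14.4, 28.8, 64.8])    # CTDIvol (mGy), example grid
        dprimes = np.array([0.8, 1.3, 1.9, 2.6, 3.4])     # hypothetical d' for one task/algorithm
        aucs = auc_from_dprime(dprimes)

        # Threshold dose: minimum dose reaching AUC = 0.9 (linear interpolation on the grid).
        threshold_dose = np.interp(0.9, aucs, doses)
        print(aucs.round(3), f"threshold dose ~ {threshold_dose:.1f} mGy")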

  13. Performance evaluation and modeling of a submerged membrane bioreactor treating combined municipal and industrial wastewater using radial basis function artificial neural networks.

    PubMed

    Mirbagheri, Seyed Ahmad; Bagheri, Majid; Boudaghpour, Siamak; Ehteshami, Majid; Bagheri, Zahra

    2015-01-01

    Treatment process models are efficient tools to assure proper operation and better control of wastewater treatment systems. The current research was an effort to evaluate performance of a submerged membrane bioreactor (SMBR) treating combined municipal and industrial wastewater and to simulate effluent quality parameters of the SMBR using a radial basis function artificial neural network (RBFANN). The results showed that the treatment efficiencies increase and hydraulic retention time (HRT) decreases for combined wastewater compared with municipal and industrial wastewaters. The BOD, COD, [Formula: see text] and total phosphorus (TP) removal efficiencies for combined wastewater at HRT of 7 hours were 96.9%, 96%, 96.7% and 92%, respectively. As desirable criteria for treating wastewater, the TBOD/TP ratio increased, the BOD and COD concentrations decreased to 700 and 1000 mg/L, respectively, and the BOD/COD ratio was about 0.5 for combined wastewater. The training procedures of the RBFANN models were successful for all predicted components. The train and test models showed an almost perfect match between the experimental and predicted values of effluent BOD, COD, [Formula: see text] and TP. The coefficient of determination (R²) values were higher than 0.98 and root mean squared error (RMSE) values did not exceed 7% for train and test models. PMID:25798288
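
    A minimal Gaussian radial-basis-function regression, with centers at the training points and weights obtained by least squares, is sketched below as a generic stand-in for the paper's RBFANN; the feature names, target, and data are invented.

        # Minimal Gaussian RBF regression: design matrix from pairwise distances, linear weights.
        import numpy as np

        def rbf_design(X, centers, sigma):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))

        rng = np.random.default_rng(0)
        X_train = rng.uniform(0, 1, size=(50, 3))                      # e.g., scaled influent COD, HRT, MLSS
        true_w = np.array([0.5, -0.3, 0.2])
        y_train = X_train @ true_w + 0.05 * rng.standard_normal(50)    # e.g., scaled effluent BOD

        sigma = 0.3
        Phi = rbf_design(X_train, X_train, sigma)
        w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

        X_test = rng.uniform(0, 1, size=(10, 3))
        y_pred = rbf_design(X_test, X_train, sigma) @ w
        print(y_pred[:3].round(3))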

  14. Performance evaluations of the ATST secondary mirror

    NASA Astrophysics Data System (ADS)

    Cho, Myung K.; DeVries, Joseph; Hansen, Eric

    2007-09-01

    The Advanced Technology Solar Telescope (ATST) has a 4.24m off-axis primary mirror designed to deliver diffraction-limited images of the sun. Its baseline secondary mirror (M2) design uses a 0.65m diameter Silicon Carbide mirror mounted kinematically by a bi-pod flexure mechanism at three equally spaced locations. Unlike other common telescopes, the ATST M2 is to be exposed to a significant solar heat loading. A thermal management system will be developed to accommodate the solar loading and minimize the "mirror seeing" effect by controlling the temperature difference between the M2 optical surface and the ambient air at the site. Thermo-elastic analyses of the steady-state thermal behavior of the ATST secondary mirror were performed using finite element analysis in I-DEAS, with PCFRINGE used for the optical analysis. We examined extensive heat transfer simulation cases, and their results are discussed. The goal of this study is to evaluate the optical performance of M2 using thermal models and mechanical models. Thermal responses from the models enable us to manipulate time-dependent thermal loadings to synthesize the operational environment for the design and development of the thermal management system.

  15. Evaluating the performance of reference evapotranspiration equations with scintillometer measurements under Mediterranean climate and effects on olive grove actual evapotranspiration estimated with FAO-56 water balance model

    NASA Astrophysics Data System (ADS)

    Minacapilli, Mario; Cammalleri, Carmelo; Ciraolo, Giuseppe; Provenzano, Giuseppe; Rallo, Giovanni

    2014-05-01

    The concept of reference evapotranspiration (ETo) is widely used to support water resource management in agriculture and for irrigation scheduling, especially under arid and semi-arid conditions. The standardized Penman-Monteith formulations, as suggested by the ASCE and FAO-56 papers, are generally applied for accurate estimation of ETo at hourly and daily scales. When detailed meteorological information is not available, several alternative, simplified equations using a limited number of variables have been proposed (Blaney-Criddle, Hargreaves-Samani, Turc, Makkink and Priestley-Taylor). In this paper, scintillometer measurements collected for six months in 2005 on an experimental plot under "reference" conditions were used to validate different ETo equations at hourly and daily scales. The experimental plot is located in a typical Mediterranean agricultural environment (Sicily, Italy), where olive groves are the dominant crop. As shown by other studies, the comparison confirmed that the best agreement between estimated and measured fluxes corresponds to the FAO-56 standardized Penman-Monteith equation, which was characterized by both the lowest average error and the minimum bias. However, the analysis also showed quite good performance of the Priestley-Taylor equation, which can be considered a valid alternative to the more sophisticated Penman-Monteith method. The different ETo series obtained from the simplified equations were then used as input to the FAO-56 water balance model, in order to evaluate, for olive groves, the errors in estimated actual evapotranspiration (ET). To this aim, soil and crop model input parameters were set based on previous experimental work already used to calibrate and validate the FAO-56 water balance model on olive groves for the same study area. Also in this case, assuming as the true values of ET those obtained using the water balance coupled with Penman-Monteith ETo input values, the Priestley-Taylor equation
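
    As a concrete example of the simpler radiation-based alternatives discussed above, a daily Priestley-Taylor reference-ET estimate (alpha = 1.26, FAO-56-style constants) can be sketched as follows; the temperature and net-radiation inputs are placeholders rather than data from the Sicilian plot.

        # Daily Priestley-Taylor reference ET (mm/day), energy-limited form.
        import math

        def priestley_taylor_et0(t_mean_c, rn_mj, g_mj=0.0, alpha=1.26, gamma=0.066):
            """Reference ET (mm/day) from mean air temperature (deg C) and net radiation (MJ m-2 d-1)."""
            es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))   # saturation vapor pressure, kPa
            delta = 4098.0 * es / (t_mean_c + 237.3) ** 2                   # slope of the curve, kPa / deg C
            latent_heat = 2.45                                              # MJ / kg
            return alpha * (delta / (delta + gamma)) * (rn_mj - g_mj) / latent_heat

        print(f"ET0 ~ {priestley_taylor_et0(t_mean_c=26.0, rn_mj=16.0):.2f} mm/day")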

  16. Using the Many-Facet Rasch Model to Evaluate Standard-Setting Judgments: Setting Performance Standards for Advanced Placement® Examinations

    ERIC Educational Resources Information Center

    Kaliski, Pamela; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna; Plake, Barbara; Reshetar, Rosemary

    2012-01-01

    The Many-Facet Rasch (MFR) Model is traditionally used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR Model by examining the quality of ratings obtained from a…
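
    For context, one common three-facet formulation of the MFR rating-scale model (a generic illustration, not necessarily the exact parameterization used in the study) is

        \ln\left( \frac{P_{nijk}}{P_{nij(k-1)}} \right) = \theta_n - \delta_i - \lambda_j - \tau_k

    where P_{nijk} is the probability that judge j awards rating category k to examinee n on item i, \theta_n is the examinee's location, \delta_i the item difficulty, \lambda_j the judge's severity, and \tau_k the threshold of category k relative to category k-1.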

  17. Theory and Practice on Teacher Performance Evaluation

    ERIC Educational Resources Information Center

    Yonghong, Cai; Chongde, Lin

    2006-01-01

    Teacher performance evaluation plays a key role in educational personnel reform, so it has been an important yet difficult issue in educational reform. Previous evaluations of teachers failed to make a strict distinction among the three dominant types of evaluation, namely capability, achievement, and effectiveness. Moreover, teacher performance…

  18. A Teacher's Guide to Teaching Performance Evaluation.

    ERIC Educational Resources Information Center

    Armstrong, Harold R.

    What is popularly known in teacher evaluation as "the Redfern Approach" has emerged from almost two decades of experimentation and discussion. This approach involves setting performance standards and job targets, monitoring the data, the evaluation itself, the evaluation conference, and related follow-up activities. This guide is intended to fill a gap in…

  19. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  20. Performance evaluation of a non-hydrostatic regional climate model over the Mediterranean/Black Sea area and climate projections for the XXI century

    NASA Astrophysics Data System (ADS)

    Mercogliano, Paola; Bucchignani, Edoardo; Montesarchio, Myriam; Zollo, Alessandra Lucia

    2013-04-01

    In the framework of Work Package 4 (Developing integrated tools for environmental assessment) of the PERSEUS Project, high-resolution climate simulations have been performed with the aim of furthering knowledge of climate variability at the regional scale, its causes and its impacts. CMCC is a non-profit centre whose aims are the promotion, coordination and conduct of research in the field of climate change. In this work, we show results of a numerical simulation performed over a very wide area (13W-46E; 29-56N) at a spatial resolution of 14 km, which includes the Mediterranean and Black Seas, using the regional climate model COSMO-CLM. It is a non-hydrostatic model for the simulation of atmospheric processes, developed by the DWD (Germany) for weather forecasting services and subsequently updated by the CLM-Community for climate applications. It is the only documented numerical model system in Europe designed for spatial resolutions down to 1 km, with a range of applicability encompassing operational numerical weather prediction, regional climate modelling, the dispersion of trace gases and aerosols, and idealised studies, and it is applicable in all regions of the world, driven by a wide range of available climate simulations from global climate and NWP models. Several reasons justify the development of a regional model: the first is the increasing number of studies showing that regional models can provide a more detailed description of climate extremes, which are often more important than mean values for natural and human systems. The second is that high-resolution modelling is well suited to providing information for impact assessment studies. At CMCC, regional climate modelling is part of an integrated simulation system and it has been used in different European and African projects to provide qualitative and quantitative evaluation of the hydrogeological and public health risks

  1. Performance Evaluation of Emerging High Performance Computing Technologies using WRF

    NASA Astrophysics Data System (ADS)

    Newby, G. B.; Morton, D.

    2008-12-01

    The Arctic Region Supercomputing Center (ARSC) has evaluated multicore processors and other emerging processor technologies for a variety of high performance computing applications in the earth and space sciences, especially climate and weather applications. A flagship effort has been to assess dual core processor nodes on ARSC's Midnight supercomputer, in which two-socket systems were compared to eight-socket systems. Midnight is utilized for ARSC's twice-daily weather research and forecasting (WRF) model runs, available at weather.arsc.edu. Among other findings on Midnight, the HyperTransport system for interconnecting Opteron processors, memory, and other subsystems was found not to scale as well on eight-socket (sixteen-processor) systems as on two-socket (four-processor) systems. A fundamental limitation is the cache snooping operation performed whenever a computational thread accesses main memory. This increases memory latency as the number of processor sockets increases. This is particularly noticeable on applications such as WRF that are primarily CPU-bound, versus applications that are bound by input/output or communication. The new Cray XT5 supercomputer at ARSC features quad core processors, and will host a variety of scaling experiments for WRF, CCSM4, and other models. Early results will be presented, including a series of WRF runs for Alaska with grid resolutions under 2km. ARSC will discuss a set of standardized test cases for the Alaska domain, similar to existing test cases for CONUS. These test cases will provide different configuration sizes and resolutions, suitable for single processors up to thousands. Beyond multi-core Opteron-based supercomputers, ARSC has examined WRF and other applications on additional emerging technologies. One such technology is the graphics processing unit, or GPU. The 9800-series nVidia GPU was evaluated with the cuBLAS software library. While in-socket GPUs might be forthcoming in the future, current
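
    A toy strong-scaling calculation of the kind used in such studies is sketched below; the core counts and wall-clock times are invented, not ARSC benchmark numbers.

        # Speedup and parallel efficiency relative to the smallest run.
        def scaling_table(timings):
            """timings: {core_count: wall_clock_seconds}."""
            base_cores = min(timings)
            base_time = timings[base_cores]
            for cores in sorted(timings):
                speedup = base_time / timings[cores]
                efficiency = speedup * base_cores / cores
                print(f"{cores:5d} cores  speedup {speedup:5.2f}  efficiency {efficiency:5.1%}")

        scaling_table({4: 1200.0, 8: 640.0, 16: 360.0, 32: 215.0})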

  2. Managing Technological Change by Changing Performance Appraisal to Performance Evaluation.

    ERIC Educational Resources Information Center

    Marquardt, Steve

    1996-01-01

    Academic libraries can improve their management of change by reshaping performance appraisal into performance planning. This article notes problems with traditional employee evaluation as well as benefits of alternatives that focus on the future, on users, on planning and learning, and on skills needed to address problems and enhance individual…

  3. BioVapor Model Evaluation

    EPA Science Inventory

    General background on modeling and specifics of modeling vapor intrusion are given. Three classical model applications are described and related to the problem of petroleum vapor intrusion. These indicate the need for model calibration and uncertainty analysis. Evaluation of Bi...

  4. Social Program Evaluation: Six Models.

    ERIC Educational Resources Information Center

    New Directions for Program Evaluation, 1980

    1980-01-01

    Representative models of program evaluation are described by their approach to values, and categorized by empirical style: positivism versus humanism. The models are: social process audit; experimental/quasi-experimental research design; goal-free evaluation; systems evaluation; cost-benefit analysis; and accountability program evaluation. (CP)

  5. Evaluation Theory, Models, and Applications

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.; Shinkfield, Anthony J.

    2007-01-01

    "Evaluation Theory, Models, and Applications" is designed for evaluators and students who need to develop a commanding knowledge of the evaluation field: its history, theory and standards, models and approaches, procedures, and inclusion of personnel as well as program evaluation. This important book shows how to choose from a growing array of…

  6. Conductor gestures influence evaluations of ensemble performance

    PubMed Central

    Morrison, Steven J.; Price, Harry E.; Smedley, Eric M.; Meals, Cory D.

    2014-01-01

    Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor’s gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance: articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and non-majors (N = 285) viewed sixteen 30 s performances and evaluated the quality of the ensemble’s articulation, dynamics, technique, and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble’s performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity. PMID:25104944

  7. Conductor gestures influence evaluations of ensemble performance.

    PubMed

    Morrison, Steven J; Price, Harry E; Smedley, Eric M; Meals, Cory D

    2014-01-01

    Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor's gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance: articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and non-majors (N = 285) viewed sixteen 30 s performances and evaluated the quality of the ensemble's articulation, dynamics, technique, and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble's performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity. PMID:25104944

  8. LANDSAT-4 horizon scanner performance evaluation

    NASA Technical Reports Server (NTRS)

    Bilanow, S.; Chen, L. C.; Davis, W. M.; Stanley, J. P.

    1984-01-01

    Representative data spans covering a little more than a year since the LANDSAT-4 launch were analyzed to evaluate the flight performance of the satellite's horizon scanner. High frequency noise was filtered out by 128-point averaging. The effects of Earth oblateness and spacecraft altitude variations are modeled, and residual systematic errors are analyzed. A model for the predicted radiance effects is compared with the flight data and deficiencies in the radiance effects modeling are noted. Correction coefficients are provided for a finite Fourier series representation of the systematic errors in the data. Analysis of the seasonal dependence of the coefficients indicates the effects of some early mission problems with the reference attitudes which were computed by the onboard computer using star trackers and gyro data. The effects of sun and moon interference, unexplained anomalies in the data, and sensor noise characteristics and their power spectrum are described. The variability of full orbit data averages is shown. Plots of the sensor data for all the available data spans are included.
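
    The correction-coefficient representation described above amounts to a least-squares fit of a finite Fourier series to the residual errors as a function of orbit phase; a generic sketch (with synthetic residuals, not LANDSAT-4 data) follows.

        # Least-squares fit of a finite Fourier series to residuals versus orbit phase.
        import numpy as np

        def fourier_design(phase, n_harmonics):
            """Columns: 1, cos(k*phase), sin(k*phase) for k = 1..n_harmonics (phase in radians)."""
            cols = [np.ones_like(phase)]
            for k in range(1, n_harmonics + 1):
                cols += [np.cos(k * phase), np.sin(k * phase)]
            return np.column_stack(cols)

        phase = np.linspace(0, 2 * np.pi, 500)
        residual = 0.05 * np.cos(phase) - 0.02 * np.sin(2 * phase) + 0.005 * np.random.randn(500)

        A = fourier_design(phase, n_harmonics=3)
        coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)
        print(coeffs.round(4))   # a0, a1, b1, a2, b2, a3, b3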

  9. Performance of STICS model to predict rainfed corn evapotranspiration and biomass evaluated for 6 years between 1995 and 2006 using daily aggregated eddy covariance fluxes and ancillary measurements.

    NASA Astrophysics Data System (ADS)

    Pattey, Elizabeth; Jégo, Guillaume; Bourgeois, Gaétan

    2010-05-01

    Verifying the performance of process-based crop growth models in predicting evapotranspiration and crop biomass is a key component of adapting agricultural crop production to climate variations. STICS, developed by INRA, was one of the models selected by Agriculture and Agri-Food Canada to be implemented for environmental assessment studies on climate variations, because of its built-in ability to assimilate biophysical descriptors such as LAI derived from satellite imagery and its open architecture. The model's prediction of shoot biomass was calibrated using destructive biomass measurements over one season, by adjusting six cultivar parameters and three generic plant parameters to define two grain corn cultivars adapted to the 1000-km-long Mixedwood Plains ecozone. Its performance was then evaluated using a database of 40 site-years of corn destructive biomass and yield. In this study we evaluate the temporal response of STICS evapotranspiration and biomass accumulation predictions against estimates derived from daily aggregated eddy covariance fluxes. The flux tower was located on an experimental farm south of Ottawa, and measurements were carried out over corn fields in 1995, 1996, 1998, 2000, 2002 and 2006. Daytime and nighttime fluxes underwent QC/QA and were gap-filled separately. Soil respiration was partitioned to calculate the corn net daily CO2 uptake, which was converted into dry biomass. Of the six growing seasons, three (1995, 1998, 2002) had water stress periods during corn grain filling. Year 2000 was cool and wet, while 1996 had heat and rainfall distributed evenly over the season and 2006 had a wet spring. STICS can predict evapotranspiration using either crop coefficients, when wind speed and air moisture are not available, or a resistance approach. The first approach provided higher predictions for all the years than the resistance approach and the flux measurements. The dynamics of the STICS evapotranspiration prediction were very good for the growing seasons without
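
    As background on the crop-coefficient option mentioned above, the FAO-56-style calculation is simply ETc = Kc × ETo with a stage-dependent Kc; the corn Kc values below are generic textbook figures, not STICS parameters.

        # Crop-coefficient evapotranspiration: ETc = Kc * ET0.
        KC_BY_STAGE = {"initial": 0.3, "development": 0.7, "mid-season": 1.2, "late-season": 0.6}

        def crop_et(et0_mm_day, stage):
            """Crop evapotranspiration (mm/day) from reference ET and growth stage."""
            return KC_BY_STAGE[stage] * et0_mm_day

        print(crop_et(5.0, "mid-season"))   # 6.0 mm/day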

  10. TPF-C Performance Modeling

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart

    2008-01-01

    This slide presentation reviews the performance modeling of the Terrestrial Planet Finder Coronagraph (TPF-C). Included is a chart of the Error Budget Models, definitions of the static and dynamic terms, a chart showing the aberration sensitivity at 2 lambda/D, charts showing the thermal performance models and analysis, surface requirements, high-level requirements, and calculations for the beam walk model. Also included is a description of the control systems, and a flow for the iterative design and analysis cycle.

  11. STATISTICAL BASIS FOR LABORATORY PERFORMANCE EVALUATION LIMITS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) conducts studies to evaluate the performance of drinking water and wastewater laboratories that analyze samples for major EPA programs. The studies involve sample concentrates which the participating laboratories dilute to volume wit...

  12. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Service 436.604... require a performance evaluation report on the work done by the architect-engineer after the completion...

  13. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Service 436.604... require a performance evaluation report on the work done by the architect-engineer after the completion...

  14. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Service 436.604... require a performance evaluation report on the work done by the architect-engineer after the completion...

  15. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Service 436.604... require a performance evaluation report on the work done by the architect-engineer after the completion...

  16. Performance Evaluation of Undulator Radiation at CEBAF

    SciTech Connect

    Chuyu Liu, Geoffrey Krafft, Guimei Wang

    2010-05-01

    The performance of undulator radiation (UR) at CEBAF with a 3.5 m helical undulator is evaluated and compared with APS undulator-A radiation in terms of brilliance, peak brilliance, spectral flux, flux density and intensity distribution.

  17. Actinide Sorption in a Brine/Dolomite Rock System: Evaluating the Degree of Conservatism in Kd Ranges used in Performance Assessment Modeling for the WIPP Nuclear Waste Repository

    NASA Astrophysics Data System (ADS)

    Dittrich, T. M.; Reed, D. T.

    2015-12-01

    The Waste Isolation Pilot Plant (WIPP) near Carlsbad, NM is the only operating nuclear waste repository in the US and has been accepting transuranic (TRU) waste since 1999. The WIPP is located in a salt deposit approximately 650 m below the surface, and performance assessment (PA) modeling for a 10,000 year period is required to recertify the operating license with the US EPA every five years. The main pathway of concern for environmental release of radioactivity is a human intrusion caused by drilling into a pressurized brine reservoir below the repository. This could result in the flooding of the repository and subsequent transport in the high transmissivity layer (dolomite-rich Culebra formation) above the waste disposal rooms. We evaluate the degree of conservatism in the estimated sorption partition coefficient (Kd) ranges used in the PA, based on an approach developed with granite rock and actinides (Dittrich and Reimus, 2015; Dittrich et al., 2015). Sorption onto the waste storage material (Fe drums) may also play a role in mobile actinide concentrations. We will present (1) a conceptual overview of how Kds are used in the PA model, (2) technical background of the evolution of the ranges and (3) results from batch and column experiments and model predictions for Kds with WIPP dolomite and clays, brine with various actinides, and ligands (e.g., acetate, citrate, EDTA) that could promote transport. The current Kd ranges used in performance models are based on oxidation state and are 5-400, 0.5-10,000, 0.03-200, and 0.03-20 mL g-1 for elements with oxidation states of III, IV, V, and VI, respectively. Based on redox conditions predicted in the brines, possible actinide species include Pu(III), Pu(IV), U(IV), U(VI), Np(IV), Np(V), Am(III), and Th(IV). We will also discuss the challenges of upscaling from lab experiments to field scale predictions, the role of colloids, and the effect of engineered barrier materials (e.g., MgO) on transport conditions. Dittrich
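
    For orientation only, the standard linear-sorption retardation relation shows how a Kd enters a transport calculation; the bulk density and porosity below are assumed illustrative values, and the WIPP PA treatment is considerably more involved.

        # Linear-sorption retardation factor: R = 1 + (rho_b / theta) * Kd.
        def retardation_factor(kd_ml_per_g, bulk_density_g_per_ml, porosity):
            """Dimensionless retardation of a solute relative to groundwater velocity."""
            return 1.0 + (bulk_density_g_per_ml / porosity) * kd_ml_per_g

        kd_low, kd_high = 5.0, 400.0     # mL/g, the An(III) range quoted above
        rho_b, theta = 2.5, 0.15         # assumed dolomite bulk density (g/mL) and porosity
        print(retardation_factor(kd_low, rho_b, theta),
              retardation_factor(kd_high, rho_b, theta))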

  18. Evaluation of high-performance computing software

    SciTech Connect

    Browne, S.; Dongarra, J.; Rowan, T.

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  19. Improvement of Automotive Part Supplier Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Kongmunee, Chalermkwan; Chutima, Parames

    2016-05-01

    This research investigates the problem of part supplier performance evaluation in a major Japanese automotive plant in Thailand. Its current evaluation scheme is based on the experience and subjective opinions of the evaluators. As a result, many poorly performing suppliers are still considered good suppliers and are allowed to supply parts to the plant without any obligation to improve. To alleviate this problem, formal brainstorming sessions among stakeholders and evaluators were conducted, yielding an appropriate set of evaluation criteria and sub-criteria. The analytic hierarchy process was then used to find suitable weights for each criterion and sub-criterion. The results show that the newly developed evaluation method is significantly better than the previous one at distinguishing good suppliers from poor ones.
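
    The criterion-weighting step can be illustrated with a minimal analytic hierarchy process calculation: the principal eigenvector of a Saaty-scale pairwise comparison matrix gives the weights, and a consistency ratio checks the judgments. The matrix and criteria below are invented, not the plant's actual comparisons.

        # Minimal AHP weighting: principal eigenvector of a pairwise comparison matrix.
        import numpy as np

        # Hypothetical comparisons among three criteria (quality, delivery, cost);
        # A[i, j] = importance of criterion i relative to criterion j on the 1-9 Saaty scale.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3.0, 1.0, 2.0],
                      [1/5.0, 1/2.0, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
        cr = ci / 0.58                             # random index for n = 3 is 0.58
        print(weights.round(3), f"CR = {cr:.3f}")  # CR < 0.1 is the usual acceptance rule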

  20. Building Leadership Talent through Performance Evaluation

    ERIC Educational Resources Information Center

    Clifford, Matthew

    2015-01-01

    Most states and districts scramble to provide professional development to support principals, but "principal evaluation" is often lost amid competing priorities. Evaluation is an important method for supporting principal growth, communicating performance expectations to principals, and improving leadership practice. It provides leaders…

  1. Performance-Based Evaluation and School Librarians

    ERIC Educational Resources Information Center

    Church, Audrey P.

    2015-01-01

    Evaluation of instructional personnel is standard procedure in our Pre-K-12 public schools, and its purpose is to document educator effectiveness. With Race to the Top and No Child Left Behind waivers, states are required to implement performance-based evaluations that demonstrate student academic progress. This three-year study describes the…

  2. Assessment beyond Performance: Phenomenography in Educational Evaluation

    ERIC Educational Resources Information Center

    Micari, Marina; Light, Gregory; Calkins, Susanna; Streitwieser, Bernhard

    2007-01-01

    Increasing calls for accountability in education have promoted improvements in quantitative evaluation approaches that measure student performance; however, this has often been to the detriment of qualitative approaches, reducing the richness of educational evaluation as an enterprise. In this article the authors assert that it is not merely…

  3. Performance evaluation of video colonoscope systems

    NASA Astrophysics Data System (ADS)

    Picciano, Lawrence D.; Keller, James P.

    1994-05-01

    A comparative engineering performance evaluation was performed on video colonoscope systems from all three of the current U.S. suppliers: Fujinon, Olympus, and Pentax. Video system test methods, results, and conclusions based on their clinical significance are the focus of this paper.

  4. EVALUATION OF THE COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODEL VERSION 4.5: UNCERTAINTIES AND SENSITIVITIES IMPACTING MODEL PERFORMANCE: PART I - OZONE

    EPA Science Inventory

    This study examines ozone (O3) predictions from the Community Multiscale Air Quality (CMAQ) model version 4.5 and discusses potential factors influencing the model results. Daily maximum 8-hr average O3 levels are largely underpredicted when observed O...

  5. Image Evaluation For Sensor Performance Standards

    NASA Astrophysics Data System (ADS)

    Peck, Lorin C.

    1989-02-01

    The subject of imagery evaluation as it applies to electro-optical (EO) sensor performance testing standards is discussed. Some of the difficulties encountered in the development of these standards for the various aircraft Line Replaceable Units (LRUs) are listed. The use of system performance testing is regarded as a requirement for the depot maintenance program to ensure the integrity of total system performance requirements for EO imaging systems such as the Advanced Tactical Air Reconnaissance System (ATARS). The necessity for tying NATO Essential Elements of Information (EEIs) together with Imagery Interpretation Rating Scale (IIRS) numbers is explained. The requirements for a field target suitable for EO imagery evaluation are also explained.

  6. Nickel cadmium battery performance modelling

    NASA Technical Reports Server (NTRS)

    Clark, K.; Halpert, G.; Timmerman, P.

    1989-01-01

    The development of a model to predict cell/battery behavior given databases of temperature is described. The model accommodates batteries of various structural as well as thermal designs. Cell internal design modifications can be accommodated as long as the databases reflect the cell's performance characteristics. Operational parameters can be varied to simulate any number of charge or discharge methods under any orbital regime. The flexibility of the model stems from the broad scope of input variables and allows the prediction of battery performance under simulated mission or test conditions.

  7. Performance Appraisal: An Evaluation of Cambridgeshire Libraries' System.

    ERIC Educational Resources Information Center

    Hemmings, Richard

    1989-01-01

    Describes the design, implementation, and practice of a personnel evaluation method (performance appraisal) at the Cambridgeshire Libraries. Findings reported include staff attitudes and perceptions of the method, and the overall effectiveness of the evaluation scheme. Various theoretical models of appraisal and practical applications in…

  8. Air Conditioner Compressor Performance Model

    SciTech Connect

    Lu, Ning; Xie, YuLong; Huang, Zhenyu

    2008-09-05

    During the past three years, the Western Electricity Coordinating Council (WECC) Load Modeling Task Force (LMTF) has led the effort to develop the new modeling approach. As part of this effort, the Bonneville Power Administration (BPA), Southern California Edison (SCE), and Electric Power Research Institute (EPRI) Solutions tested 27 residential air-conditioning units to assess their response to delayed voltage recovery transients. After completing these tests, different modeling approaches were proposed, among them a performance modeling approach that proved to be one of the three favored for its simplicity and ability to recreate different SVR events satisfactorily. Funded by the California Energy Commission (CEC) under its load modeling project, researchers at Pacific Northwest National Laboratory (PNNL) led the follow-on task to analyze the motor testing data to derive the parameters needed to develop a performance model for the single-phase air-conditioning (SPAC) unit. To derive the performance model, PNNL researchers first used the motor voltage and frequency ramping test data to obtain the real (P) and reactive (Q) power versus voltage (V) and frequency (f) curves. Then, curve fitting was used to develop the P-V, Q-V, P-f, and Q-f relationships for motor running and stalling states. The resulting performance model ignores the dynamic response of the air-conditioning motor. Because the inertia of the air-conditioning motor is very small (H<0.05), the motor moves from one steady state to another in a few cycles. So, the performance model is a fair representation of the motor's behavior in both running and stalling states.
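
    The curve-fitting step described above can be sketched with simple polynomial fits of real and reactive power against voltage for a given state; the ramp-test numbers below are invented, not data from the tested units.

        # Quadratic P(V) and Q(V) fits for the running state of a single-phase AC motor.
        import numpy as np

        voltage = np.array([0.80, 0.85, 0.90, 0.95, 1.00, 1.05])   # per unit
        p_run = np.array([0.96, 0.97, 0.98, 0.99, 1.00, 1.01])     # per-unit real power (made up)
        q_run = np.array([0.42, 0.39, 0.37, 0.36, 0.36, 0.37])     # per-unit reactive power (made up)

        p_coeff = np.polyfit(voltage, p_run, deg=2)   # quadratic P-V fit
        q_coeff = np.polyfit(voltage, q_run, deg=2)   # quadratic Q-V fit

        v = 0.92
        print(np.polyval(p_coeff, v), np.polyval(q_coeff, v))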

  9. Simulated and reconstructed climate in Europe during the last five centuries: joint evaluation of climate models performance and the dynamical consistency of gridded reconstructions

    NASA Astrophysics Data System (ADS)

    José Gómez-Navarro, Juan; Bothe, Oliver; Wagner, Sebastian; Zorita, Eduardo; Werner, Johannes P.; Luterbacher, Jürg; Raible, Christoph C.; Montávez, Juan Pedro

    2015-04-01

    This study jointly analyses European winter and summer temperature and precipitation gridded climate reconstructions and a regional climate simulation reaching a resolution of 45 km over the period 1501-1990. In a first step, the simulation is compared to observational records to establish the model performance and to identify the most prominent caveats. It is found that the regional simulation is able to add value to the driving global simulation, which allows it to reproduce accurately the most prominent characteristics of the European climate, although remarkable biases can also be identified. In a second step, the simulation is compared to a set of independent reconstructions. The high resolution of the simulation and the reconstructions allows the European area to be analysed in nine sub-areas. An overall good agreement is found between the reconstructed and simulated climate variability across the different areas, supporting the consistency of both products and the proper calibration of the reconstructions. However, biases appear between the two datasets which, thanks to the model performance evaluation carried out beforehand, can be attributed to deficiencies in the simulation. Although the simulation responds to the external forcing, it differs considerably from the reconstructions in its estimates of past climate evolution for the European sub-regions. In particular, there are deviations between simulated and reconstructed anomalies during the Maunder and Dalton minima, i.e. the simulated response is much stronger than the reconstructed one. This disagreement is to some extent expected given the prominent role of internal variability in the regional evolution of temperature and precipitation. However, the inability of the model to reproduce any warm period similar to that recorded around 1740 in the reconstructions indicates fundamental limitations in the simulation that preclude reproducing exceptionally anomalous conditions. Despite these limitations, the simulated climate is a

  10. Next generation imager performance model

    NASA Astrophysics Data System (ADS)

    Teaney, Brian; Reynolds, Joseph

    2010-04-01

    The next generation of Army imager performance models is currently under development at NVESD. The aim of this new model is to provide a flexible and extensible engineering tool for system design that encapsulates all of the capabilities of the existing Night Vision model suite (NVThermIP, SSCamIP, etc.) along with many new design tools and features, including a more intuitive interface, the ability to perform trade studies, and a library of standard and user-generated components. By combining the previous model architectures in one interface, the new design is better suited to capture emerging technologies such as fusion and new sensor modalities. In this paper we describe the general structure of the model and some of its current capabilities, along with future development plans.

  11. Effects of Performers' External Characteristics on Performance Evaluations.

    ERIC Educational Resources Information Center

    Bermingham, Gudrun A.

    2000-01-01

    States that fairness has been a major concern in the field of music adjudication. Reviews the research literature to reveal information about three external characteristics (race, gender, and physical attractiveness) that may affect judges' performance evaluations and influence fairness of music adjudication. Includes references. (CMK)

  12. Impact of Full-Day Head Start Prekindergarten Class Model on Student Academic Performance, Cognitive Skills, and Learning Behaviors by the End of Grade 2. Evaluation Brief

    ERIC Educational Resources Information Center

    Zhao, Huafang; Modarresi, Shahpar

    2013-01-01

    This brief describes the impact of the Montgomery County (Maryland) Public Schools (MCPS) 2007-2008 full-day Head Start prekindergarten (pre-K) class model on student academic performance, cognitive skills, and learning behaviors by the end of Grade 2. This is the fourth impact study of the MCPS full-day Head Start pre-K class model. The following…

  13. MPD Thruster Performance Analytic Models

    NASA Astrophysics Data System (ADS)

    Gilland, James; Johnston, Geoffrey

    2003-01-01

    Magnetoplasmadynamic (MPD) thrusters are capable of accelerating quasi-neutral plasmas to high exhaust velocities using Megawatts (MW) of electric power. These characteristics make such devices worthy of consideration for demanding, far-term missions such as the human exploration of Mars or beyond. Assessment of MPD thrusters at the system and mission level is often difficult due to their status as ongoing experimental research topics rather than developed thrusters. However, in order to assess MPD thrusters' utility in later missions, some adequate characterization of performance, or more exactly, projected performance, and system level definition are required for use in analyses. The most recent physical models of self-field MPD thrusters have been examined, assessed, and reconfigured for use by systems and mission analysts. The physical models allow for rational projections of thruster performance based on physical parameters that can be measured in the laboratory. The models and their implications for the design of future MPD thrusters are presented.

  14. MPD Thruster Performance Analytic Models

    NASA Technical Reports Server (NTRS)

    Gilland, James; Johnston, Geoffrey

    2007-01-01

    Magnetoplasmadynamic (MPD) thrusters are capable of accelerating quasi-neutral plasmas to high exhaust velocities using Megawatts (MW) of electric power. These characteristics make such devices worthy of consideration for demanding, far-term missions such as the human exploration of Mars or beyond. Assessment of MPD thrusters at the system and mission level is often difficult due to their status as ongoing experimental research topics rather than developed thrusters. However, in order to assess MPD thrusters' utility in later missions, some adequate characterization of performance, or more exactly, projected performance, and system level definition are required for use in analyses. The most recent physical models of self-field MPD thrusters have been examined, assessed, and reconfigured for use by systems and mission analysts. The physical models allow for rational projections of thruster performance based on physical parameters that can be measured in the laboratory. The models and their implications for the design of future MPD thrusters are presented.

  15. MPD Thruster Performance Analytic Models

    NASA Technical Reports Server (NTRS)

    Gilland, James; Johnston, Geoffrey

    2003-01-01

    Magnetoplasmadynamic (MPD) thrusters are capable of accelerating quasi-neutral plasmas to high exhaust velocities using Megawatts (MW) of electric power. These characteristics make such devices worthy of consideration for demanding, far-term missions such as the human exploration of Mars or beyond. Assessment of MPD thrusters at the system and mission level is often difficult due to their status as ongoing experimental research topics rather than developed thrusters. However, in order to assess MPD thrusters' utility in later missions, some adequate characterization of performance, or more exactly, projected performance, and system level definition are required for use in analyses. The most recent physical models of self-field MPD thrusters have been examined, assessed, and reconfigured for use by systems and mission analysts. The physical models allow for rational projections of thruster performance based on physical parameters that can be measured in the laboratory. The models and their implications for the design of future MPD thrusters are presented.

  16. Smith Newton Vehicle Performance Evaluation (Brochure)

    SciTech Connect

    Not Available

    2012-08-01

    The Fleet Test and Evaluation Team at the U.S. Department of Energy's National Renewable Energy Laboratory is evaluating and documenting the performance of electric and plug-in hybrid electric drive systems in medium-duty trucks across the nation. Through this project, Smith Electric Vehicles will build and deploy 500 all-electric medium-duty trucks. The trucks will be deployed in diverse climates across the country.

  17. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    2009-09-01

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents

  18. Smith Newton Vehicle Performance Evaluation - Cumulative (Brochure)

    SciTech Connect

    Not Available

    2014-08-01

    The Fleet Test and Evaluation Team at the U.S. Department of Energy's National Renewable Energy Laboratory is evaluating and documenting the performance of electric and plug-in hybrid electric drive systems in medium-duty trucks across the nation. U.S. companies participating in this evaluation project received funding from the American Recovery and Reinvestment Act to cover part of the cost of purchasing these vehicles. Through this project, Smith Electric Vehicles is building and deploying 500 all-electric medium-duty trucks that will be deployed by a variety of companies in diverse climates across the country.

  19. Performance evaluations of demountable electrical connections

    SciTech Connect

    Niemann, R.C.; Cha, Y.S.; Hull, J.R.; Buckles, W.E.; Daugherty, M.A.

    1993-07-01

    Electrical conductors operating in cryogenic environments can require demountable connections along their lengths. The connections must have low resistance and high reliability and should allow ready assembly and disassembly. In this work, the performance of two types of connections has been evaluated. The first connection type is a clamped surface-to-surface joint. The second connection type is a screwed joint that incorporates male and female machine-thread components. The connections for copper conductors have been evaluated experimentally at 77 K. Experimental variables included thread surface treatment and assembly methods. The results of the evaluations are presented.

  20. DRACS thermal performance evaluation for FHR

    SciTech Connect

    Lv, Q.; Lin, H. C.; Kim, I. H.; Sun, X.; Christensen, R. N.; Blue, T. E.; Yoder, G. L.; Wilson, D. F.; Sabharwall, P.

    2015-03-01

    The Direct Reactor Auxiliary Cooling System (DRACS) is a passive decay heat removal system proposed for the Fluoride-salt-cooled High-temperature Reactor (FHR), which combines coated particle fuel and a graphite moderator with a liquid fluoride salt as the coolant. The DRACS features three coupled natural circulation/convection loops, relying completely on buoyancy as the driving force. These loops are coupled through two heat exchangers, namely, the DRACS Heat Exchanger and the Natural Draft Heat Exchanger. In addition, a fluidic diode is employed to minimize the parasitic flow into the DRACS primary loop, and correspondingly the heat loss to the DRACS, during normal operation of the reactor, and to keep the DRACS ready for activation, if needed, during accidents. To help with the design and thermal performance evaluation of the DRACS, a computer code using MATLAB has been developed. This code is based on a one-dimensional formulation and its principle is to solve the energy balance and integral momentum equations. By discretizing the DRACS system in the axial direction, a bulk mean temperature is assumed for each mesh cell. The temperatures of all the cells, as well as the mass flow rates in the DRACS loops, are predicted by solving the governing equations that are obtained by integrating the energy conservation equation over each cell and integrating the momentum conservation equation over each of the DRACS loops. In addition, an intermediate heat transfer loop equipped with a pump has also been modeled in the code. This enables the study of the flow reversal phenomenon in the DRACS primary loop associated with the pump trip process. Experimental data from a High-Temperature DRACS Test Facility (HTDF) are not yet available to benchmark the code. A preliminary code validation is performed by using natural circulation experimental data available in the literature, which are as closely relevant as possible. The code is subsequently applied to the HTDF that is under
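
    The core balance that such a one-dimensional natural circulation solver enforces can be sketched in a few lines: buoyancy head equals friction and form losses around the loop. The loop geometry, fluid properties, and temperature rise below are placeholder assumptions, not DRACS design data, and the friction correlations are the standard laminar and Blasius forms.

```python
import math

# Illustrative (assumed) loop parameters -- not actual DRACS design values.
g = 9.81           # m/s^2
beta = 2.0e-4      # 1/K, coolant thermal expansion coefficient (assumed)
rho = 1940.0       # kg/m^3, coolant density (assumed)
mu = 8.0e-3        # Pa*s, dynamic viscosity (assumed)
dT = 50.0          # K, hot-to-cold leg temperature difference (assumed)
H = 3.0            # m, thermal-centre elevation difference (assumed)
L, D = 12.0, 0.05  # m, loop length and pipe diameter (assumed)
K = 5.0            # lumped form-loss coefficient (assumed)

v = 0.1  # initial velocity guess, m/s
for _ in range(200):
    re = rho * v * D / mu
    f = 64.0 / re if re < 2300 else 0.316 * re ** -0.25  # laminar / Blasius
    # Buoyancy head rho*g*beta*dT*H balances (f*L/D + K) * rho * v^2 / 2.
    v_new = math.sqrt(2.0 * g * beta * dT * H / (f * L / D + K))
    if abs(v_new - v) < 1e-9:
        break
    v = v_new

m_dot = rho * v * math.pi * D ** 2 / 4.0
print(f"natural-circulation velocity = {v:.3f} m/s, mass flow = {m_dot:.2f} kg/s")
```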

  1. Evaluation of Outbreak Detection Performance Using Multi-Stream Syndromic Surveillance for Influenza-Like Illness in Rural Hubei Province, China: A Temporal Simulation Model Based on Healthcare-Seeking Behaviors

    PubMed Central

    Fan, Yunzhou; Wang, Ying; Jiang, Hongbo; Yang, Wenwen; Yu, Miao; Yan, Weirong; Diwan, Vinod K.; Xu, Biao; Dong, Hengjin; Palm, Lars; Nie, Shaofa

    2014-01-01

    Background Syndromic surveillance promotes the early detection of disease outbreaks. Although syndromic surveillance has increased in developing countries, performance on outbreak detection, particularly in cases of multi-stream surveillance, has scarcely been evaluated in rural areas. Objective This study introduces a temporal simulation model based on healthcare-seeking behaviors to evaluate the performance of multi-stream syndromic surveillance for influenza-like illness. Methods Data were obtained in six towns of rural Hubei Province, China, from April 2012 to June 2013. A Susceptible-Exposed-Infectious-Recovered model generated 27 scenarios of simulated influenza A (H1N1) outbreaks, which were converted into corresponding simulated syndromic datasets through the healthcare-seeking behavior model. We then superimposed the converted syndromic datasets onto the baselines obtained to create the testing datasets. Outbreak detection performance of single-stream surveillance of clinic visits, over-the-counter drug purchase frequency, and school absenteeism, and of multi-stream surveillance of their combinations, was evaluated using receiver operating characteristic curves and activity monitoring operation curves. Results In the six towns examined, clinic visit surveillance and school absenteeism surveillance exhibited better outbreak detection performance than over-the-counter drug purchase frequency surveillance; the performance of multi-stream surveillance was preferable to single-stream surveillance, particularly at low specificity (Sp <90%). Conclusions The temporal simulation model based on healthcare-seeking behaviors offers an accessible method for evaluating the performance of multi-stream surveillance. PMID:25409025
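
    A minimal sketch of the simulation idea: a stochastic SEIR outbreak whose daily onsets are thinned by an assumed care-seeking probability to produce a synthetic clinic-visit stream. All parameter values are illustrative and are not taken from the Hubei study.

```python
import numpy as np

def seir_outbreak(n=10000, beta=0.4, sigma=1/2.0, gamma=1/3.0,
                  i0=5, days=60, seed=0):
    """Discrete-time stochastic SEIR simulation of a hypothetical ILI outbreak."""
    rng = np.random.default_rng(seed)
    s, e, i, r = n - i0, 0, i0, 0
    new_onsets = []
    for _ in range(days):
        inf = rng.binomial(s, 1 - np.exp(-beta * i / n))   # S -> E
        onset = rng.binomial(e, 1 - np.exp(-sigma))        # E -> I
        rec = rng.binomial(i, 1 - np.exp(-gamma))          # I -> R
        s, e, i, r = s - inf, e + inf - onset, i + onset - rec, r + rec
        new_onsets.append(onset)
    return np.array(new_onsets)

# Convert simulated onsets into a syndromic data stream by assuming only a
# fraction of cases seek care at a clinic (healthcare-seeking behaviour).
daily_onsets = seir_outbreak()
clinic_visits = np.random.default_rng(1).binomial(daily_onsets, 0.35)
print(clinic_visits[:14])
```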

  2. Evaluation Model for Career Programs. Final Report.

    ERIC Educational Resources Information Center

    Byerly, Richard L.; And Others

    A study was conducted to provide and test an evaluative model that could be utilized in providing curricular evaluation of the various career programs. Two career fields, dental assistant and auto mechanic, were chosen for study. A questionnaire based upon the actual job performance was completed by six groups connected with the auto mechanics and…

  3. Hypersonic Interceptor Performance Evaluation Center aero-optics performance predictions

    NASA Astrophysics Data System (ADS)

    Sutton, George W.; Pond, John E.; Snow, Ronald; Hwang, Yanfang

    1993-06-01

    This paper describes the Hypersonic Interceptor Performance Evaluation Center's (HIPEC) aero-optics performance prediction capability. It includes code results for three-dimensional shapes and comparisons to initial experiments. HIPEC consists of a collection of aerothermal and aerodynamic computational codes that cover the entire flight regime from subsonic to hypersonic flow and include chemical reactions and turbulence. Heat transfer to the various surfaces is calculated as an input to cooling and ablation processes. HIPEC also has aero-optics codes to determine the effect of the mean flowfield and turbulence on the tracking and imaging capability of on-board optical sensors. The paper concentrates on the latter aspects.

  4. Evaluating the performance of a new model for predicting the growth of Clostridium perfringens in cooked, uncured meat and poultry products under isothermal, heating, and dynamically cooling conditions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Clostridium perfringens Type A is a significant public health threat and may germinate, outgrow, and multiply during cooling of cooked meats. This study evaluates a new C. perfringens growth model in IPMP Dynamic Prediction using the same criteria and cooling data in Mohr and others (2015), but inc...

  5. Performance evaluation of ground based radar systems

    NASA Astrophysics Data System (ADS)

    Grant, Stanley E.

    1994-06-01

    Ground based radar systems are a critical resource to the command, control, and communications system. This thesis provides the tools and methods to better understand the actual performance of an operational ground based radar system. It defines two measurable performance standards: (1) the baseline performance, which is based on the sensor's internal characteristics, and (2) the theoretical performance, which considers not only the sensor's internal characteristics but also the effects of the surrounding terrain and atmosphere on the sensor's performance. The baseline radar system performance, often used by operators, contractors, and radar modeling software to determine expected system performance, is a simplistic and unrealistic means of predicting actual radar system performance. The theoretical radar system performance is more complex, but the results are much more indicative of the actual performance of an operational radar system. The AN/UPS-1 at the Naval Postgraduate School was used as the system under test to illustrate the baseline and theoretical radar system performance. The terrain effects are shown by performing a multipath study and producing coverage diagrams. The key variables used to construct the multipath study and coverage diagrams are discussed in detail. The atmospheric effects are illustrated by using the Integrated Refractive Effects Prediction System (IREPS) and the Engineer's Refractive Effects Prediction System (EREPS) software tools to produce propagation condition summaries and coverage displays.
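
    The "baseline" figure of merit described here is essentially the free-space radar range equation, which can be evaluated directly; a sketch follows. The parameter values are assumptions chosen for illustration and are not AN/UPS-1 characteristics; terrain multipath and refraction, which drive the theoretical performance, are deliberately left out.

```python
import math

def baseline_detection_range(pt, gain_db, freq_hz, rcs, nf_db, bw_hz,
                             losses_db, snr_min_db):
    """Free-space radar range equation -- a 'baseline' estimate that ignores
    terrain multipath and atmospheric refraction (illustrative only)."""
    k = 1.380649e-23       # Boltzmann constant, J/K
    t0 = 290.0             # reference noise temperature, K
    lam = 3.0e8 / freq_hz  # wavelength, m
    g = 10 ** (gain_db / 10)
    noise = k * t0 * bw_hz * 10 ** (nf_db / 10)
    snr = 10 ** (snr_min_db / 10)
    loss = 10 ** (losses_db / 10)
    r4 = (pt * g ** 2 * lam ** 2 * rcs) / ((4 * math.pi) ** 3 * noise * snr * loss)
    return r4 ** 0.25

# Assumed parameters: 100 kW peak power, 35 dB gain, L-band, 1 m^2 target.
r = baseline_detection_range(pt=1e5, gain_db=35, freq_hz=1.3e9, rcs=1.0,
                             nf_db=4, bw_hz=1e6, losses_db=6, snr_min_db=13)
print(f"baseline free-space detection range = {r / 1000:.1f} km")
```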

  6. ASBESTOS IN DRINKING WATER PERFORMANCE EVALUATION STUDIES

    EPA Science Inventory

    Performance evaluations of laboratories testing for asbestos in drinking water according to USEPA Test Method 100.1 or 100.2 are complicated by the difficulty of providing stable sample dispersions of asbestos in water. Reference samples of a graduated series of chrysotile asbes...

  7. ASBESTOS IN DRINKING WATER PERFORMANCE EVALUATION STUDIES

    EPA Science Inventory

    Performance evaluations of laboratories testing for asbestos in drinking water according to USEPA Test Method 100.1 or 100.2 are complicated by the difficulty of providing stable sample dispersions of asbestos in water. Reference samples of a graduated series of chrysotile asbest...

  8. EVALUATION OF CONFOCAL MICROSCOPY SYSTEM PERFORMANCE

    EPA Science Inventory

    BACKGROUND. The confocal laser scanning microscope (CLSM) has enormous potential in many biological fields. Currently there is a subjective nature in the assessment of a confocal microscope's performance by primarily evaluating the system with a specific test slide provided by ea...

  9. A New Approach to Evaluating Performance.

    PubMed

    Bleich, Michael R

    2016-09-01

    A leadership task is evaluating the performance of individuals for organizational fit. Traditional approaches have included leader-subordinate reviews, self-review, and peer review. A new approach is evolving in team-based organizations, introduced in this article. J Contin Educ Nurs. 2016;47(9):393-394. PMID:27580504

  10. GENERAL METHODS FOR REMEDIAL PERFORMANCE EVALUATIONS

    EPA Science Inventory

    This document was developed by an EPA-funded project to explain technical considerations and principles necessary to evaluate the performance of ground-water contamination remediations at hazardous waste sites. This is neither a "cookbook", nor an encyclopedia of recommended fi...

  11. PERFORMANCE EVALUATION OF AN IMPROVED STREET SWEEPER

    EPA Science Inventory

    The report gives results of an extensive evaluation of the Improved Street Sweeper (ISS) in Bellevue, WA, and in San Diego, CA. The cleaning performance of the ISS was compared with that of broom sweepers and a vacuum sweeper. The ISS cleaned streets better than the other sweeper...

  12. The EMEFS model evaluation. An interim report

    SciTech Connect

    Barchet, W.R.; Dennis, R.L.; Seilkop, S.K.; Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K.; Byun, D.; McHenry, J.N.; Karamchandani, P.; Venkatram, A.; Fung, C.; Misra, P.K.; Hansen, D.A.; Chang, J.S.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.
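
    The "difference statistics and correlations" used to quantify model performance are simple to compute; a small helper is sketched below, with made-up deposition values purely for illustration.

```python
import numpy as np

def model_performance_stats(observed, predicted):
    """Difference statistics of the kind used in operational model evaluation:
    mean bias, RMSE, and Pearson correlation (illustrative helper)."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    bias = np.mean(pred - obs)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    corr = np.corrcoef(obs, pred)[0, 1]
    return {"bias": bias, "rmse": rmse, "corr": corr}

# Example with hypothetical observed vs. predicted deposition values (kg/ha).
obs = [2.1, 3.4, 1.8, 4.0, 2.9]
sim = [2.5, 3.1, 2.2, 3.6, 3.3]
print(model_performance_stats(obs, sim))
```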

  13. Performance evaluation of two personal bioaerosol samplers.

    PubMed

    Tolchinsky, Alexander D; Sigaev, Vladimir I; Varfolomeev, Alexander N; Uspenskaya, Svetlana N; Cheng, Yung S; Su, Wei-Chung

    2011-01-01

    In this study, the performance of two newly developed personal bioaerosol samplers for monitoring the level of environmental and occupational airborne microorganisms was evaluated. These new personal bioaerosol samplers were designed based on a swirling cyclone with a recirculating liquid film. The performance evaluation included collection efficiency tests using inert aerosols, a bioaerosol survival test using viable airborne microorganisms, and an evaluation of using a non-aqueous collection liquid for long-period sampling. The test results showed that these two newly developed personal bioaerosol samplers are capable of high-efficiency aerosol sampling (the cutoff diameters are around 0.7 μm for both samplers) and provide acceptable survival of the collected bioaerosols. By using an appropriate non-aqueous collection liquid, these two personal bioaerosol samplers should be able to permit continuous, long-period bioaerosol sampling with considerable viability of the captured bioaerosols. PMID:22175872

  14. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... procedures in EPA Method 3B of appendix A to 40 CFR part 60 to determine an oxygen correction factor if... performance test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies....

  15. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... procedures in EPA Method 3B of appendix A to 40 CFR part 60 to determine an oxygen correction factor if... performance test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies....

  16. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... procedures in EPA Method 3B of appendix A to 40 CFR part 60 to determine an oxygen correction factor if... performance test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies....

  17. Extended-performance thruster technology evaluation

    NASA Technical Reports Server (NTRS)

    Beattie, J. R.; Poeschel, R. L.; Bechtel, R. T.

    1978-01-01

    Two 30-cm ion thruster technology areas are investigated in support of the extended-performance thruster operation required for the Halley's comet rendezvous mission. These areas include an evaluation of the thruster performance and lifetime characteristics at increased specific impulse and power levels, and the design and evaluation of a high-voltage propellant electrical isolator. Experimental results are presented indicating that all elements of the thruster design function well at the higher specific impulse and power levels. It is shown that the only thruster modifications required for extended-performance operation are a respacing of the ion optics assembly and a redesign of the propellant isolators. Experimental results obtained from three isolator designs are presented, and it is concluded that the design and development of a high-voltage isolator is possible using existing technology.

  18. PEAPOL (Program Evaluation at the Performance Objective Level) Outside Evaluation.

    ERIC Educational Resources Information Center

    Auvil, Mary S.

    In evaluating this pilot project, which developed a computer system for assessing student progress and cost effectiveness as related to achievement of performance objectives, interviews were conducted with project participants, including project staff, school administrators, and the auto shop instructors. Project documents were reviewed and a…

  19. Evaluating modeling tools for the EDOS

    NASA Technical Reports Server (NTRS)

    Knoble, Gordon; Mccaleb, Frederick; Aslam, Tanweer; Nester, Paul

    1994-01-01

    The Earth Observing System (EOS) Data and Operations System (EDOS) Project is developing a functional, system performance model to support the system implementation phase of the EDOS, which is being designed and built by the Goddard Space Flight Center (GSFC). The EDOS Project will use modeling to meet two key objectives: (1) manage system design impacts introduced by unplanned changes in mission requirements; and (2) evaluate evolutionary technology insertions throughout the development of the EDOS. To select a suitable modeling tool, the EDOS modeling team developed an approach for evaluating modeling tools and languages by deriving evaluation criteria from both the EDOS modeling requirements and the development plan. Essential and optional features for an appropriate modeling tool were identified and compared with the known capabilities of several modeling tools. Vendors were also provided the opportunity to model a representative EDOS processing function to demonstrate the applicability of their modeling tool to the EDOS modeling requirements. This paper emphasizes the importance of using a well-defined approach for evaluating tools to model complex systems like the EDOS. The results of this evaluation study do not in any way signify the superiority of any one modeling tool, since the results will vary with the specific modeling requirements of each project.

  20. Attributing spatial patterns of hydrological model performance

    NASA Astrophysics Data System (ADS)

    Eisner, S.; Malsy, M.; Flörke, M.

    2013-12-01

    Global hydrological models and land surface models are used to understand and simulate the global terrestrial water cycle. They are, in particular, applied to assess the current state of global water resources, to identify anthropogenic pressures on the global water system, and to assess impacts of global and climate change on water resources. Especially in data-scarce regions, the growing availability of remote sensing products, e.g. GRACE estimates of changes in terrestrial water storage, evaporation, or soil moisture estimates, has added valuable information to force and constrain these models, as they facilitate the calibration and validation of simulated states and fluxes other than stream flow at large spatial scales. Nevertheless, observed discharge records provide important evidence to evaluate the quality of water availability estimates and to quantify the uncertainty associated with these estimates. Most large scale modelling approaches are constrained by simplified physical process representations and they implicitly rely on the assumption that the same model structure is valid and can be applied globally. It is therefore important to understand why large scale hydrological models perform well or poorly in reproducing observed runoff and discharge fields in certain regions, and to explore and explain spatial patterns of model performance. We present an extensive evaluation of the global water model WaterGAP (Water - Global Assessment and Prognosis) to simulate 20th century discharges. The WaterGAP modeling framework comprises a hydrology model and several water use models and operates, in its current version WaterGAP3, on a 5 arc minute global grid. Runoff generated on the individual grid cells is routed along a global drainage direction map taking into account retention in natural surface water bodies, i.e. lakes and wetlands, as well as anthropogenic impacts, i.e. flow regulation and water abstraction for agriculture, industry and domestic purposes as

  1. Performance Analysis of GYRO: A Tool Evaluation

    SciTech Connect

    Worley, P.; Roth, P.; Candy, J.; Shan, Hongzhang; Mahinthakumar,G.; Sreepathi, S.; Carrington, L.; Kaiser, T.; Snavely, A.; Reed, D.; Zhang, Y.; Huck, K.; Malony, A.; Shende, S.; Moore, S.; Wolf, F.

    2005-06-26

    The performance of the Eulerian gyrokinetic-Maxwell solver code GYRO is analyzed on five high performance computing systems. First, a manual approach is taken, using custom scripts to analyze the output of embedded wall clock timers, floating point operation counts collected using hardware performance counters, and traces of user and communication events collected using the profiling interface to Message Passing Interface (MPI) libraries. Parts of the analysis are then repeated or extended using a number of sophisticated performance analysis tools: IPM, KOJAK, SvPablo, TAU, and the PMaC modeling tool suite. The paper briefly discusses what has been discovered via this manual analysis process, what performance analyses are inconvenient or infeasible to attempt manually, and to what extent the tools show promise in accelerating or significantly extending the manual performance analyses.

  2. Model Program Evaluations. Fact Sheet

    ERIC Educational Resources Information Center

    Arkansas Safe Schools Initiative Division, 2002

    2002-01-01

    There are probably thousands of programs and courses intended to prevent or reduce violence in this nation's schools. Evaluating these many programs has become a problem or goal in itself. There are now many evaluation programs, with many levels of designations, such as model, promising, best practice, exemplary and noteworthy. "Model program" is…

  3. Evaluating Causal Models.

    ERIC Educational Resources Information Center

    Watt, James H., Jr.

    Pointing out that linear causal models can organize the interrelationships of a large number of variables, this paper contends that such models are particularly useful to mass communication research, which must by necessity deal with complex systems of variables. The paper first outlines briefly the philosophical requirements for establishing a…

  4. Using hybrid method to evaluate the green performance in uncertainty.

    PubMed

    Tseng, Ming-Lang; Lan, Lawrence W; Wang, Ray; Chiu, Anthony; Cheng, Hui-Ping

    2011-04-01

    Green performance measurement is vital for enterprises in making continuous improvements to maintain sustainable competitive advantages. Evaluating green performance, however, is a challenging task due to the complex dependences among aspects and criteria and the linguistic vagueness of some qualitative information combined with quantitative data. To deal with this issue, this study proposes a novel approach to evaluate the dependent aspects and criteria of a firm's green performance. The rationale of the proposed approach, namely the green network balanced scorecard, is to use the balanced scorecard to combine fuzzy set theory with the analytical network process (ANP) and importance-performance analysis (IPA), wherein fuzzy set theory accounts for the linguistic vagueness of qualitative criteria and ANP converts the relations among the dependent aspects and criteria into an intelligible structural model used in the IPA. For the empirical case study, four dependent aspects and 34 green performance criteria for PCB firms in Taiwan were evaluated. The managerial implications are discussed. PMID:20571885
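
    The final IPA step is straightforward to sketch: criteria are placed in quadrants by comparing their importance and performance scores to the respective means. The criteria names and scores below are hypothetical, and the fuzzy ANP weighting that would actually produce the importance scores is omitted.

```python
# Sketch of the importance-performance analysis (IPA) step only.
criteria = {                      # hypothetical criteria: (importance, performance)
    "green design":        (0.80, 0.55),
    "supplier screening":  (0.65, 0.72),
    "energy efficiency":   (0.90, 0.85),
    "waste recycling":     (0.40, 0.35),
}

imp_mean = sum(i for i, _ in criteria.values()) / len(criteria)
perf_mean = sum(p for _, p in criteria.values()) / len(criteria)

for name, (imp, perf) in criteria.items():
    # Classic IPA quadrant labels based on the grand means.
    if imp >= imp_mean and perf < perf_mean:
        quadrant = "concentrate here"
    elif imp >= imp_mean:
        quadrant = "keep up the good work"
    elif perf < perf_mean:
        quadrant = "low priority"
    else:
        quadrant = "possible overkill"
    print(f"{name:20s} -> {quadrant}")
```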

  5. Data management system performance modeling

    NASA Technical Reports Server (NTRS)

    Kiser, Larry M.

    1993-01-01

    This paper discusses analytical techniques that have been used to gain a better understanding of the Space Station Freedom's (SSF's) Data Management System (DMS). The DMS is a complex, distributed, real-time computer system that has been redesigned numerous times. The implications of these redesigns have not been fully analyzed. This paper discusses the advantages and disadvantages of static analytical techniques such as Rate Monotonic Analysis (RMA) and also provides a rationale for dynamic modeling. Factors such as system architecture, processor utilization, bus architecture, queuing, etc., are well suited for analysis with a dynamic model. The significance of performance measures for a real-time system is discussed.
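
    As a reminder of what the static side of such an analysis looks like, the sketch below applies the classic Liu-Layland utilization bound used in Rate Monotonic Analysis. The task set is hypothetical, not an actual DMS workload.

```python
def rma_schedulable(tasks):
    """Static Rate Monotonic Analysis check using the Liu-Layland utilization
    bound. tasks = [(execution_time, period), ...] in consistent time units."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization, bound, utilization <= bound

# Illustrative periodic task set (ms): not actual DMS task parameters.
tasks = [(10, 50), (15, 100), (20, 200)]
u, bound, ok = rma_schedulable(tasks)
print(f"U = {u:.3f}, Liu-Layland bound = {bound:.3f}, schedulable: {ok}")
```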

  6. Evaluating the Performance of Wavelet-based Data-driven Models for Multistep-ahead Flood Forecasting in an Urbanized Watershed

    NASA Astrophysics Data System (ADS)

    Kasaee Roodsari, B.; Chandler, D. G.

    2015-12-01

    A real-time flood forecast system is presented to provide emergency management authorities sufficient lead time to execute plans for evacuation and asset protection in urban watersheds. This study investigates the performance of two hybrid models for real-time flood forecasting at different subcatchments of the Ley Creek watershed, a heavily urbanized watershed in the vicinity of Syracuse, New York. The hybrid models are the Wavelet-Based Artificial Neural Network (WANN) and the Wavelet-Based Adaptive Neuro-Fuzzy Inference System (WANFIS). Both models are developed on the basis of real-time stream network sensing. The wavelet approach is applied to decompose the collected water depth time series into approximation and detail components. The approximation component is then used as an input to the ANN and ANFIS models to forecast water level at lead times of 1 to 10 hours. The performance of the WANN and WANFIS models is compared to that of ANN and ANFIS models for different lead times. Initial results demonstrate the greater predictive power of the hybrid models.
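
    The wavelet preprocessing step can be sketched with PyWavelets (assumed available): decompose a depth series, keep only the approximation part, and build lagged inputs for a downstream ANN/ANFIS regressor. The synthetic series, wavelet choice, lag count, and lead time are illustrative assumptions, not settings from the Ley Creek study.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

# Synthetic hourly water-depth series standing in for a gauge record.
t = np.arange(512)
rng = np.random.default_rng(0)
depth = 1.5 + 0.4 * np.sin(2 * np.pi * t / 24) + 0.05 * rng.normal(size=t.size)

# Decompose into approximation + detail coefficients, then reconstruct using
# only the approximation (low-frequency) part, as in the WANN/WANFIS setup.
coeffs = pywt.wavedec(depth, "db4", level=3)
approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
smoothed = pywt.waverec(approx_only, "db4")[: depth.size]

# Build a lag matrix of the smoothed series for a 6-hour-ahead forecast target.
lags, lead = 6, 6
X = np.column_stack(
    [smoothed[i : i + len(depth) - lags - lead + 1] for i in range(lags)]
)
y = depth[lags + lead - 1 :]
print(X.shape, y.shape)  # inputs and targets ready for an ANN/ANFIS regressor
```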

  7. Comprehensive system models: Strategies for evaluation

    NASA Technical Reports Server (NTRS)

    Field, Christopher; Kutzbach, John E.; Ramanathan, V.; Maccracken, Michael C.

    1992-01-01

    The task of evaluating comprehensive earth system models is vast, involving validation of every model component at every scale of organization, as well as tests of all the individual linkages. Even the most detailed evaluation of each of the component processes and the individual links among them should not, however, engender confidence in the performance of the whole. The integrated earth system is so rich with complex feedback loops, often involving components of the atmosphere, oceans, biosphere, and cryosphere, that it is certain to exhibit emergent properties very difficult to predict from the perspective of a narrow focus on any individual component of the system. Therefore, a substantial share of the task of evaluating comprehensive earth system models must reside at the level of whole-system evaluations. Since complete, integrated atmosphere/ocean/biosphere/hydrology models are not yet operational, questions of evaluation must be addressed at the level of the kinds of earth system processes that the models should be competent to simulate, rather than at the level of specific performance criteria. Here, we have tried to identify examples of earth system processes that are difficult to simulate with existing models and that involve a rich enough suite of feedbacks that they are unlikely to be satisfactorily described by highly simplified or toy models. Our purpose is not to specify a checklist of evaluation criteria but to introduce characteristics of the earth system that may present useful opportunities for model testing and, of course, improvement.

  8. Performance evaluation soil samples utilizing encapsulation technology

    DOEpatents

    Dahlgran, J.R.

    1999-08-17

    Performance evaluation soil samples and method of their preparation uses encapsulation technology to encapsulate analytes which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration. 1 fig.

  9. The Class C Passive Performance Evaluation Program

    NASA Astrophysics Data System (ADS)

    1981-09-01

    The Class C program, which provides information on the qualities of passive solar features that make them attractive to buyers, was evaluated. The following topics are discussed: design of an audit form; design of regionally specific audit addenda; determination of site selection criteria; identification of sites; selection, training, and management of auditors; and packaging of materials of subcontractors for evaluation. Results and findings are presented for the following areas: demographic profile; passive solar home profile; cost, financing, and payback considerations; expectations, realizations, and satisfaction; and decision making.

  10. Performance evaluation soil samples utilizing encapsulation technology

    DOEpatents

    Dahlgran, James R.

    1999-01-01

    Performance evaluation soil samples and method of their preparation using encapsulation technology to encapsulate analytes which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration.

  11. The Discrepancy Evaluation Model. I. Basic Tenets of the Model.

    ERIC Educational Resources Information Center

    Steinmetz, Andres

    1976-01-01

    The basic principles of the discrepancy evaluation model (DEM), developed by Malcolm Provus, are presented. The three concepts which are essential to DEM are defined: (1) the standard is a description of how something should be; (2) performance measures are used to find out the actual characteristics of the object being evaluated; and (3) the…

  12. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  13. Performance modeling for large database systems

    NASA Astrophysics Data System (ADS)

    Schaar, Stephen; Hum, Frank; Romano, Joe

    1997-02-01

    One of the unique approaches Science Applications International Corporation took to meet performance requirements was to start the modeling effort during the proposal phase of the Interstate Identification Index/Federal Bureau of Investigations (III/FBI) project. The III/FBI Performance Model uses analytical modeling techniques to represent the III/FBI system. Inputs to the model include workloads for each transaction type, record size for each record type, number of records for each file, hardware envelope characteristics, engineering margins and estimates for software instructions, memory, and I/O for each transaction type. The model uses queuing theory to calculate the average transaction queue length. The model calculates a response time and the resources needed for each transaction type. Outputs of the model include the total resources needed for the system, a hardware configuration, and projected inherent and operational availability. The III/FBI Performance Model is used to evaluate what-if scenarios and allows a rapid response to engineering change proposals and technical enhancements.
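
    Although the III/FBI model itself is proprietary, the kind of queuing-theory calculation it performs can be illustrated with the textbook M/M/1 formulas below; the arrival and service rates are hypothetical.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Classic M/M/1 queue formulas -- a highly simplified stand-in for the
    kind of analytical queuing model described above."""
    rho = arrival_rate / service_rate          # server utilization
    if rho >= 1.0:
        raise ValueError("queue is unstable (utilization >= 1)")
    lq = rho ** 2 / (1 - rho)                  # mean number waiting in queue
    wq = lq / arrival_rate                     # mean wait in queue (Little's law)
    response = wq + 1.0 / service_rate         # wait + service = response time
    return {"utilization": rho, "avg_queue_len": lq, "response_time_s": response}

# Hypothetical workload: 40 transactions/s offered to a 50 transactions/s server.
print(mm1_metrics(arrival_rate=40.0, service_rate=50.0))
```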

  14. Performance Evaluation Methods for Assistive Robotic Technology

    NASA Astrophysics Data System (ADS)

    Tsui, Katherine M.; Feil-Seifer, David J.; Matarić, Maja J.; Yanco, Holly A.

    Robots have been developed for several assistive technology domains, including intervention for Autism Spectrum Disorders, eldercare, and post-stroke rehabilitation. Assistive robots have also been used to promote independent living through the use of devices such as intelligent wheelchairs, assistive robotic arms, and external limb prostheses. Work in the broad field of assistive robotic technology can be divided into two major research phases: technology development, in which new devices, software, and interfaces are created; and clinical, in which assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots poses a related set of challenges. In this paper, we survey several areas of assistive robotic technology in order to derive and demonstrate domain-specific means for evaluating the performance of such systems. We also present two case studies of applied performance measures and a discussion regarding the ubiquity of functional performance measures across the sampled domains. Finally, we present guidelines for incorporating human performance metrics into end-user evaluations of assistive robotic technologies.

  15. Performance evaluation of an improved street sweeper

    SciTech Connect

    Duncan, M.W.; Jain, R.C.; Yung, S.C.; Patterson, R.G.

    1985-10-01

    The paper gives results of an evaluation of the performance of an improved street sweeper (ISS) and conventional sweepers. Dust emissions from paved roads are a major source of urban airborne particles. These emissions can be controlled by street cleaning, but commonly used sweepers were not designed for fine particle collection. A sweeper was modified to improve its ability to remove fine particles from streets and to contain its dust dispersions. Performance was measured by sampling street solids with a vacuum system before and after sweeping. Sieve analyses were made on these samples. During sampling, cascade impactor subsamples were collected to measure the finer particles. Also, dust dispersions were measured.

  16. Hierarchical Model Validation of Symbolic Performance Models of Scientific Kernels

    SciTech Connect

    Alam, Sadaf R; Vetter, Jeffrey S

    2006-08-01

    Multi-resolution validation of hierarchical performance models of scientific applications is critical primarily for two reasons. First, the step-by-step validation determines the correctness of all essential components or phases in a science simulation. Second, a model that is validated at multiple resolution levels is the very first step toward generating predictive performance models, not only for existing systems but also for emerging systems and future problem sizes. We present the design and validation of hierarchical performance models of two scientific benchmarks using a new technique called modeling assertions (MA). Our MA prototype framework generates symbolic performance models that can be evaluated efficiently by generating the equivalent model representations in Octave and MATLAB. The multi-resolution modeling and validation is conducted on two contemporary, massively parallel systems, the XT3 and the Blue Gene/L. The workload distribution and growth rate predictions generated by the MA models are confirmed by the experimental data collected on the MPP platforms. In addition, the physical memory requirements generated by the MA models are verified by the runtime values on the Blue Gene/L system, which has 512 MBytes and 256 MBytes of physical memory capacity in its two unique execution modes.

  17. Metrics for Offline Evaluation of Prognostic Performance

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2010-01-01

    Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements for different applications, time scales, available information, domain dynamics, and so on. The research community has used a variety of metrics largely based on convenience and their respective requirements. Very little attention has been focused on establishing a standardized approach to compare different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. These metrics have the capability of incorporating probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications. Guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed, followed by a formal notational framework to help standardize subsequent developments.
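
    One widely cited metric of this kind is the α-λ style accuracy test, which asks whether the predicted remaining useful life (RUL) at each prediction time stays within ±α of the true RUL. The sketch below is a simplified, deterministic version (no uncertainty distribution), with made-up run-to-failure numbers.

```python
def alpha_lambda_accuracy(true_eol, pred_times, pred_ruls, alpha=0.2):
    """Check, at each prediction time, whether the predicted RUL lies within
    +/- alpha of the true RUL (a simplified alpha-lambda style test)."""
    results = []
    for t, rul_hat in zip(pred_times, pred_ruls):
        true_rul = true_eol - t
        lower, upper = (1 - alpha) * true_rul, (1 + alpha) * true_rul
        results.append(lower <= rul_hat <= upper)
    return results

# Hypothetical run-to-failure case: end of life at cycle 100.
times = [40, 60, 80, 90]
predicted_rul = [75, 50, 22, 9]
print(alpha_lambda_accuracy(100, times, predicted_rul))  # [False, False, True, True]
```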

  18. Strapdown system performance optimization test evaluations (SPOT), volume 1

    NASA Technical Reports Server (NTRS)

    Blaha, R. J.; Gilmore, J. P.

    1973-01-01

    A three axis inertial system was packaged in an Apollo gimbal fixture for fine grain evaluation of strapdown system performance in dynamic environments. These evaluations have provided information to assess the effectiveness of real-time compensation techniques and to study system performance tradeoffs to factors such as quantization and iteration rate. The strapdown performance and tradeoff studies conducted include: (1) Compensation models and techniques for the inertial instrument first-order error terms were developed and compensation effectivity was demonstrated in four basic environments; single and multi-axis slew, and single and multi-axis oscillatory. (2) The theoretical coning bandwidth for the first-order quaternion algorithm expansion was verified. (3) Gyro loop quantization was identified to affect proportionally the system attitude uncertainty. (4) Land navigation evaluations identified the requirement for accurate initialization alignment in order to pursue fine grain navigation evaluations.

  19. Analytical performance evaluation for autonomous sensor fusion

    NASA Astrophysics Data System (ADS)

    Chang, K. C.

    2008-04-01

    A distributed data fusion system consists of a network of sensors, each capable of local processing and fusion of sensor data. There has been a great deal of work in developing distributed fusion algorithms applicable to a network-centric architecture. Currently there are at least a few approaches, including naive fusion, cross-correlation fusion, information graph fusion, maximum a posteriori (MAP) fusion, channel filter fusion, and covariance intersection fusion. However, in general, in a distributed system such as an ad hoc sensor network, the communication architecture is not fixed. Each node has knowledge of only its local connectivity, not the global network topology. In those cases, distributed fusion algorithms based on the information graph type of approach may not scale, due to the requirement to carry long pedigree information for decorrelation. In this paper, we focus on scalable fusion algorithms and conduct analytical performance evaluation to compare their performance. The goal is to understand the performance of those algorithms under different operating conditions. Specifically, we evaluate the performance of channel filter fusion, Chernoff fusion, Shannon fusion, and Bhattacharyya fusion algorithms. We also compare their results to naive fusion and "optimal" centralized fusion algorithms under a specific communication pattern.

  20. Group 3: Performance evaluation and assessment

    NASA Technical Reports Server (NTRS)

    Frink, A.

    1981-01-01

    Line-oriented flight training provides a unique learning experience and an opportunity to look at aspects of performance that other types of training do not provide. Areas such as crew coordination, resource management, leadership, and so forth can be readily evaluated in such a format. While individual performance is of the utmost importance, crew performance deserves equal emphasis; therefore, these areas should be carefully observed by the instructors as an area for discussion in the same way that individual performance is observed. To be effective, it must be accepted by the crew members and administered by the instructors as pure training: learning through experience. To keep open minds and to benefit most from the experience, both in the doing and in the follow-on discussion, it is essential that it be entered into with a feeling of freedom, openness, and enthusiasm. Reserve or defensiveness arising from concern about failure will inhibit participation.

  1. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtime. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms, namely Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and in their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms in a different manner and, depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable for prognostics may be designed so that the evaluation procedure can be standardized.

  2. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1992-01-01

    Analytical, numerical, and experimental studies were performed on two classes of high temperature materials processing sources for their potential use as directional solidification furnaces. The research concentrated on a commercially available high temperature furnace using a zirconia ceramic tube as the heating element and an Arc Furnace based on a tube welder. The first objective was to assemble the zirconia furnace and construct the parts needed to successfully perform experiments. The second objective was to evaluate the zirconia furnace's performance as a directional solidification furnace element. The third objective was to establish a database on materials used in the furnace construction, with particular emphasis on emissivities, transmissivities, and absorptivities as functions of wavelength and temperature. One-dimensional and two-dimensional spectral radiation heat transfer models were developed for comparison with standard modeling techniques and were used to predict wall and crucible temperatures. The fourth objective addressed the development of a SINDA model for the Arc Furnace, which was used to design sample holders and to estimate cooling media temperatures for steady state operation of the furnace. The fifth objective addressed the initial performance evaluation of the Arc Furnace and associated equipment for directional solidification. Results for these objectives are presented.

  3. Performance evaluation of vector-machine architectures

    SciTech Connect

    Tang, Ju-ho.

    1989-01-01

    Vector machines are well known for their high-peak performance, but the delivered performance varies greatly over different workloads and depends strongly on compiler optimizations. Recently it has been claimed that several horizontal superscalar architectures, e.g., VLIW and polycyclic architectures, provide a more balanced performance across a wider range of scientific workloads than do vector machines. The purpose of this research is to study the performance of register-register vector processors, such as Cray supercomputers, as a function of their architectural features, scheduling schemes, compiler optimization capabilities, and program parameters. The results of this study also provide a base for comparing vector machines with horizontal superscalar machines. An evaluation methodology, based on timing parameters, bottle-necks, and run time bounds, is developed. Cray-1 performance is degraded by the multiple memory loads of index-misaligned vectors and the inability of the Cray Fortran Compiler (CFT) to produce code that hits all the chain slot times. The impact of chaining and two instruction scheduling schemes on one-memory-port vector supercomputers, illustrated by the Cray-1 and Cray-2, is studied. The lack of instruction chaining on the Cray-2 requires a different instruction scheduling scheme from that of the Cray-1. Situations are characterized in which simple vector scheduling can generate code that fully utilizes one functional unit for machines with chaining. Even without chaining, polycyclic scheduling guarantees full utilization of one functional unit, after an initial transient, for loops with acyclic dependence graphs.

  4. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... procedures in EPA Method 3B of appendix A to 40 CFR part 60 to determine an oxygen correction factor if... test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to you... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies....

  5. 40 CFR 63.5850 - How do I conduct performance tests, performance evaluations, and design evaluations?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... test, performance evaluation, and design evaluation in 40 CFR part 63, subpart SS, that applies to you... requirements in § 63.7(e)(1) and under the specific conditions that 40 CFR part 63, subpart SS, specifies. (c... and under the specific conditions that 40 CFR part 63, subpart SS, specifies. (d) You may not...

  6. Evaluating Internet End-to-end Performance

    PubMed Central

    Wood, Fred B.; Cid, Victor H.; Siegel, Elliot R.

    1998-01-01

    Abstract Objective: An evaluation of Internet end-to-end performance was conducted for the purpose of better understanding the overall performance of Internet pathways typical of those used to access information in National Library of Medicine (NLM) databases and, by extension, other Internet-based biomedical information resources. Design: The evaluation used a three-level test strategy: 1) user testing to collect empirical data on Internet performance as perceived by users when accessing NLM Web-based databases, 2) technical testing to analyze the Internet paths between the NLM and the user's desktop computer terminal, and 3) technical testing between the NLM and the World Wide Web (“Web”) server computer at the user's institution to help characterize the relative performance of Internet pathways. Measurements: Time to download the front pages of NLM Web sites and to conduct standardized searches of NLM databases, data transmission capacity between NLM and remote locations (known as the bulk transfer capacity [BTC]), “ping” round-trip time (RTT) as an indication of the latency of the network pathways, and the network routing of the data transmissions (number and sequencing of hops). Results: Based on 347 user tests spread over 16 locations, the median time per location to download the main NLM home page ranged from 2 to 59 seconds, and 1 to 24 seconds for the other NLM Web sites tested. The median time to conduct standardized searches and get search results ranged from 2 to 14 seconds for PubMed and 4 to 18 seconds for Internet Grateful Med. The overall problem rate was about 1 percent; that is, on average, users experienced a problem once every 100 test measurements. The user terminal tests at five locations and Web host tests at 13 locations provided profiles of BTC, RTT, and network routing for both dial-up and fixed Internet connections. Conclusion: The evaluation framework provided a profile of typical Internet performance and insights into network
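
    The user-level timing measurements described above (page download time and round-trip latency) can be sketched with standard-library Python; the target URL below is a placeholder, the TCP-connect time is only an approximation of an ICMP ping RTT, and no attempt is made to measure BTC, which requires sustained bulk transfers.

```python
# Illustrative sketch only: timing a page download and approximating round-trip
# latency with a TCP connect, in the spirit of the user-level tests described above.
import socket
import time
import urllib.request

def download_time(url):
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

def tcp_connect_rtt(host, port=443, samples=5):
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=10):
            pass
        rtts.append(time.perf_counter() - start)
    return min(rtts)  # the minimum is closest to the true network latency

if __name__ == "__main__":
    url = "https://www.nlm.nih.gov/"   # placeholder target
    print(f"download time: {download_time(url):.2f} s")
    print(f"approx. RTT:   {tcp_connect_rtt('www.nlm.nih.gov') * 1000:.1f} ms")
```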

  7. ATAMM enhancement and multiprocessing performance evaluation

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.

    1994-01-01

    The Algorithm To Architecture Mapping Model (ATAMM) is a Petri net based model which provides a strategy for the periodic execution of a class of real-time algorithms on multicomputer dataflow architectures. The execution of large-grained, decision-free algorithms on homogeneous processing elements is studied. ATAMM provides an analytical basis for calculating performance bounds on throughput characteristics. Extension of ATAMM as a strategy for cyclo-static scheduling provides for a truly distributed ATAMM multicomputer operating system. An ATAMM testbed consisting of a centralized graph manager and three processors, implemented with embedded firmware on 68HC11 microcontrollers, is described.

  8. Generic hypersonic vehicle performance model

    NASA Technical Reports Server (NTRS)

    Chavez, Frank R.; Schmidt, David K.

    1993-01-01

    An integrated computational model of a generic hypersonic vehicle was developed for the purpose of determining the vehicle's performance characteristics, which include the lift, drag, thrust, and moment acting on the vehicle at specified altitude, flight condition, and vehicular configuration. The lift, drag, thrust, and moment are developed for the body fixed coordinate system. These forces and moments arise from both aerodynamic and propulsive sources. SCRAMjet engine performance characteristics, such as fuel flow rate, can also be determined. The vehicle is assumed to be a lifting body with a single aerodynamic control surface. The body shape and control surface location are arbitrary and must be defined. The aerodynamics are calculated using either 2-dimensional Newtonian or modified Newtonian theory and approximate high-Mach-number Prandtl-Meyer expansion theory. Skin-friction drag was also accounted for. The skin-friction drag coefficient is a function of the freestream Mach number. The data for the skin-friction drag coefficient values were taken from NASA Technical Memorandum 102610. The modeling of the vehicle's SCRAMjet engine is based on quasi 1-dimensional gas dynamics for the engine diffuser, nozzle, and the combustor with heat addition. The engine has three variable inputs for control: the engine inlet diffuser area ratio, the total temperature rise through the combustor due to combustion of the fuel, and the engine internal expansion nozzle area ratio. The pressure distribution over the vehicle's lower aft body surface, which acts as an external nozzle, is calculated using a combination of quasi 1-dimensional gas dynamic theory and Newtonian or modified Newtonian theory. The exhaust plume shape is determined by matching the pressure inside the plume, calculated from the gas dynamic equations, with the freestream pressure, calculated from Newtonian or Modified Newtonian theory. In this manner, the pressure distribution along the vehicle after body
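
    The Newtonian and modified Newtonian surface-pressure models named above are standard; the following sketch uses their textbook forms (with the Rayleigh pitot relation for the stagnation pressure coefficient), not any code from the report.

```python
# A hedged sketch of Newtonian and modified Newtonian surface pressure models,
# which the abstract cites for the aerodynamic calculations. Formulas here are
# the standard textbook forms, not the implementation from the NASA report.
import math

def cp_newtonian(delta_rad):
    """Pressure coefficient on a surface inclined delta radians to the flow."""
    return 2.0 * math.sin(delta_rad) ** 2 if delta_rad > 0.0 else 0.0

def cp_max_modified_newtonian(mach, gamma=1.4):
    """Stagnation-point Cp behind a normal shock (Rayleigh pitot relation)."""
    a = ((gamma + 1.0) ** 2 * mach ** 2) / (4.0 * gamma * mach ** 2 - 2.0 * (gamma - 1.0))
    b = (1.0 - gamma + 2.0 * gamma * mach ** 2) / (gamma + 1.0)
    p0_ratio = a ** (gamma / (gamma - 1.0)) * b
    return 2.0 / (gamma * mach ** 2) * (p0_ratio - 1.0)

def cp_modified_newtonian(delta_rad, mach, gamma=1.4):
    return cp_max_modified_newtonian(mach, gamma) * math.sin(delta_rad) ** 2 if delta_rad > 0.0 else 0.0

# Example: a 10-degree compression surface at Mach 10.
delta = math.radians(10.0)
print(cp_newtonian(delta))                 # ~0.060
print(cp_modified_newtonian(delta, 10.0))  # slightly less than the Newtonian value
```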

  9. Application performance evaluation of the HTMT architecture.

    SciTech Connect

    Hereld, M.; Judson, I. R.; Stevens, R.

    2004-02-23

    In this report we summarize findings from a study of the predicted performance of a suite of application codes taken from the research environment and analyzed against a modeling framework for the HTMT architecture. We find that the inward bandwidth of the data vortex may be a limiting factor for some applications. We also find that available memory in the cryogenic layer is a constraining factor in the partitioning of applications into parcels. In several examples the architecture may be inadequately exploited; in particular, applications typically did not capitalize well on the available computational power or data organizational capability in the PIM layers. The application suite provided significant examples of wide excursions from the accepted (if simplified) program execution model--in particular, by requiring complex in-SPELL synchronization between parcels. The availability of the HTMT-C emulation environment did not contribute significantly to the ability to analyze applications, because of the large gap between the available hardware descriptions and parameters in the modeling framework and the types of data that could be collected via HTMT-C emulation runs. Detailed analysis of application performance, and indeed further credible development of the HTMT-inspired program execution model and system architecture, requires the development of much better tools. Chief among them are cycle-accurate simulation tools for computational, network, and memory components. Additionally, there is a critical need for a whole-system simulation tool to allow detailed programming exercises and performance tests to be developed. We address three issues in this report: (1) the landscape for applications of petaflops computing; (2) the performance of applications on the HTMT architecture; and (3) the effectiveness of HTMT-C as a tool for studying and developing the HTMT architecture. We set the scene with observations about the course of application development as petaflops

  10. Performance evaluation of swimmers: scientific tools.

    PubMed

    Smith, David J; Norris, Stephen R; Hogg, John M

    2002-01-01

    The purpose of this article is to provide a critical commentary of the physiological and psychological tools used in the evaluation of swimmers. The first-level evaluation should be the competitive performance itself, since it is at this juncture that all elements interplay and provide the 'highest form' of assessment. Competition video analysis of major swimming events has progressed to the point where it has become an indispensable tool for coaches, athletes, sport scientists, equipment manufacturers, and even the media. The breakdown of each swimming performance at the individual level to its constituent parts allows for comparison with the predicted or sought after execution, as well as allowing for comparison with identified world competition levels. The use of other 'on-going' monitoring protocols to evaluate training efficacy typically involves criterion 'effort' swims and specific training sets where certain aspects are scrutinised in depth. Physiological parameters that are often examined alongside swimming speed and technical aspects include oxygen uptake, heart rate, blood lactate concentration, blood lactate accumulation and clearance rates. Simple and more complex procedures are available for in-training examination of technical issues. Strength and power may be quantified via several modalities although, typically, tethered swimming and dry-land isokinetic devices are used. The availability of a 'swimming flume' does afford coaches and sport scientists a higher degree of flexibility in the type of monitoring and evaluation that can be undertaken. There is convincing evidence that athletes can be distinguished on the basis of their psychological skills and emotional competencies and that these differences become further accentuated as the athlete improves. No matter what test format is used (physiological, biomechanical or psychological), similar criteria of validity must be ensured so that the test provides useful and associative information

  11. Performance evaluation of TCP over ABT protocols

    NASA Astrophysics Data System (ADS)

    Ata, Shingo; Murata, Masayuki; Miyahara, Hideo

    1998-10-01

    ABT is promising for effectively transferring highly bursty data traffic in ATM networks. Most past studies focused on the data transfer capability of ABT within the ATM layer. In practice, however, we also need to consider the upper-layer transport protocol, since the transport layer protocol also provides a network congestion control mechanism. One such example is TCP, which is now widely used in the Internet. In this paper, we evaluate the performance of TCP over ABT protocols. Simulation results show that the retransmission mechanism of ABT can effectively overlay the TCP congestion control mechanism, so that TCP operates in a stable fashion and needs to act only as an error recovery mechanism.

  12. Evaluation of impact limiter performance during end-on and slapdown drop tests of a one-third scale model storage/transport cask system

    SciTech Connect

    Yoshimura, H.R.; Bronowski, D.R.; Uncapher, W.L.; Attaway, S.W.; Bateman, V.I.; Carne, T.G.; Gregory, D.L. ); Huerta, M. )

    1990-12-01

    This report describes drop testing of a one-third scale model shipping cask system. Two casks were designed and fabricated by Transnuclear, Inc., to ship spent fuel from the former Nuclear Fuel Services West Valley reprocessing facility in New York to the Idaho National Engineering Laboratory for a long-term spent fuel dry storage demonstration project. As part of the NRC's regulatory certification process, one-third scale model tests were performed to obtain experimental data on impact limiter performance during impact testing. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact limiter retention system, and (4) examine the crush behavior of the limiters. Two 30-ft (9-m) drop tests were conducted on a mass model of the cask body and scaled balsa and redwood-filled impact limiters. This report describes the results of both tests in terms of measured decelerations, posttest deformation measurements, and the general structural response of the system. 3 refs., 32 figs.

  13. Behavior model for performance assessment.

    SciTech Connect

    Borwn-VanHoozer, S. A.

    1999-07-23

    Every individual channels information differently, based on the sensory modality or representational system (visual, auditory, or kinesthetic) that he or she tends to favor most (the primary representational system, or PRS). Therefore, some of us access and store our information primarily visually, some auditorily, and others kinesthetically (through feel and touch), which in turn establishes our information-processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task--in the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive, and motor systems stimulated and influenced by the three sensory modalities: visual, auditory, and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place and how to ask questions is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, which may be summed and averaged for a generalization of behavior(s). However, without establishing a basic understanding of how the behavior was generated through a decision-making strategy process, predictive models are inefficient overall in their analysis of the means by which behavior was produced; what is seen is only the end result.

  14. Performance evaluation of bound diamond ring tools

    SciTech Connect

    Piscotty, M.A.; Taylor, J.S.; Blaedel, K.L.

    1995-07-14

    LLNL is collaborating with the Center for Optics Manufacturing (COM) and the American Precision Optics Manufacturers Association (APOMA) to optimize bound diamond ring tools for the spherical generation of high quality optical surfaces. An important element of this work is establishing an experimentally verified link between tooling properties and workpiece quality indicators such as roughness, subsurface damage, and removal rate. In this paper, we report on a standardized methodology for assessing ring tool performance and its preliminary application to a set of commercially available wheels. Our goals are to (1) assist optics manufacturers (users of the ring tools) in evaluating tools and in assessing their applicability for a given operation, and (2) provide performance feedback to wheel manufacturers to help optimize tooling for the optics industry. Our paper includes measurements of wheel performance for three 2-4 micron diamond bronze-bond wheels that were supplied by different manufacturers to nominally identical specifications. Preliminary data suggest that the differences in performance among the wheels were small.

  15. 40 CFR 35.9055 - Evaluation of recipient performance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Evaluation of recipient performance. 35... Evaluation of recipient performance. The Regional Administrator will oversee each recipient's performance... schedule for evaluation in the assistance agreement and will evaluate recipient performance and...

  16. 48 CFR 436.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Construction 436.201 Evaluation of contractor performance. Preparation of performance evaluation reports. In addition to the requirements of FAR 36.201, performance evaluation reports shall be prepared for indefinite... of services to be ordered exceeds $500,000.00. For these contracts, performance evaluation...

  17. Performance Evaluation and Parameter Identification on DROID III

    NASA Technical Reports Server (NTRS)

    Plumb, Julianna J.

    2011-01-01

    The DROID III project consisted of two main parts. The first, performance evaluation, focused on the performance characteristics of the aircraft, such as lift-to-drag ratio, thrust required for level flight, and rate of climb. The second, parameter identification, focused on finding the aerodynamic coefficients of the aircraft using a system that creates a mathematical model to match the flight data of doublet maneuvers and the aircraft's response. Both portions of the project called for flight testing, and that data is now available as a result of this project. The conclusion of the project is that the performance evaluation data are well within desired standards but could be improved with a thrust model, and that the parameter identification still needs more data processing but seems to produce reasonable results thus far.
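
    The abstract does not detail the identification method; a common approach to this kind of parameter identification is an equation-error least-squares fit of aerodynamic coefficients to recorded doublet-maneuver data. The sketch below uses that generic approach on entirely synthetic data; the model structure and coefficient names are assumptions, not the DROID III formulation.

```python
# Hedged sketch of equation-error parameter identification: fit pitching-moment
# coefficients to (synthetic) doublet-maneuver data with ordinary least squares.
# The model structure and data are illustrative assumptions, not the DROID III setup.
import numpy as np

rng = np.random.default_rng(0)
n = 200
alpha = 0.05 * np.sin(np.linspace(0, 6 * np.pi, n))            # angle of attack (rad)
q_hat = np.gradient(alpha)                                      # nondimensional pitch-rate proxy
delta_e = 0.02 * np.sign(np.sin(np.linspace(0, 2 * np.pi, n)))  # crude elevator doublet (rad)

# "True" coefficients used to generate the measured pitching-moment coefficient.
Cm0, Cm_alpha, Cm_q, Cm_de = 0.01, -0.8, -4.0, -1.2
Cm_meas = Cm0 + Cm_alpha * alpha + Cm_q * q_hat + Cm_de * delta_e
Cm_meas += rng.normal(scale=0.002, size=n)                      # measurement noise

# Regressor matrix and least-squares estimate.
X = np.column_stack([np.ones(n), alpha, q_hat, delta_e])
theta, *_ = np.linalg.lstsq(X, Cm_meas, rcond=None)
print("estimated [Cm0, Cm_alpha, Cm_q, Cm_de]:", np.round(theta, 3))
```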

  18. Evaluating performance of container terminal operation using simulation

    NASA Astrophysics Data System (ADS)

    Nawawi, Mohd Kamal Mohd; Jamil, Fadhilah Che; Hamzah, Firdaus Mohamad

    2015-05-01

    A container terminal is a facility where containers are transshipped from one mode of transport to another. Congestion leads to a decrease in the customers' level of satisfaction. This study presents the application of simulation techniques, with the main objective of developing a model of current operations and evaluating the performance of the container terminal. The performance measures used in this study to evaluate the container terminal model are the average waiting time in queue, the average processing time at berth, the number of vessels entering the berth, and resource utilization. Simulation was found to be a suitable technique for this study, and the results from the simulation model helped address the congestion problem at the container terminal.
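
    A minimal sketch of the kind of queueing model such a study rests on, assuming a single berth with exponential interarrival and service times (parameters invented for illustration); it reports the same categories of measures listed above.

```python
# Minimal single-berth queueing sketch (assumed exponential arrivals and service),
# reporting average waiting time, average berth (processing) time, vessels served,
# and berth utilization. Parameters are illustrative, not taken from the study.
import random

def simulate_terminal(n_vessels=10_000, mean_interarrival=6.0, mean_service=5.0, seed=1):
    random.seed(seed)
    clock = 0.0                  # arrival clock (hours)
    berth_free_at = 0.0          # time at which the single berth becomes free
    total_wait = total_service = busy_time = 0.0
    for _ in range(n_vessels):
        clock += random.expovariate(1.0 / mean_interarrival)    # next arrival
        service = random.expovariate(1.0 / mean_service)
        start = max(clock, berth_free_at)
        total_wait += start - clock
        total_service += service
        busy_time += service
        berth_free_at = start + service
    makespan = berth_free_at
    return {
        "avg wait in queue (h)": total_wait / n_vessels,
        "avg time at berth (h)": total_service / n_vessels,
        "vessels served": n_vessels,
        "berth utilization": busy_time / makespan,
    }

print(simulate_terminal())
```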

  19. Measuring the Performance of Neural Models.

    PubMed

    Schoppe, Oliver; Harper, Nicol S; Willmore, Ben D B; King, Andrew J; Schnupp, Jan W H

    2016-01-01

    Good metrics of the performance of a statistical or computational model are essential for model comparison and selection. Here, we address the design of performance metrics for models that aim to predict neural responses to sensory inputs. This is particularly difficult because the responses of sensory neurons are inherently variable, even in response to repeated presentations of identical stimuli. In this situation, standard metrics (such as the correlation coefficient) fail because they do not distinguish between explainable variance (the part of the neural response that is systematically dependent on the stimulus) and response variability (the part of the neural response that is not systematically dependent on the stimulus, and cannot be explained by modeling the stimulus-response relationship). As a result, models which perfectly describe the systematic stimulus-response relationship may appear to perform poorly. Two metrics have previously been proposed which account for this inherent variability: Signal Power Explained (SPE, Sahani and Linden, 2003), and the normalized correlation coefficient (CCnorm, Hsu et al., 2004). Here, we analyze these metrics, and show that they are intimately related. However, SPE has no lower bound, and we show that, even for good models, SPE can yield negative values that are difficult to interpret. CCnorm is better behaved in that it is effectively bounded between -1 and 1, and values below zero are very rare in practice and easy to interpret. However, it was hitherto not possible to calculate CCnorm directly; instead, it was estimated using imprecise and laborious resampling techniques. Here, we identify a new approach that can calculate CCnorm quickly and accurately. As a result, we argue that it is now a better choice of metric than SPE to accurately evaluate the performance of neural models. PMID:26903851
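
    The general idea behind the normalized correlation coefficient can be sketched as follows: estimate, from trial-to-trial variability, the highest correlation any model could achieve, and divide the raw correlation by that ceiling. The estimator below follows the commonly used signal-power formulation and is only a sketch; consult the paper for the authors' exact definitions.

```python
# Hedged sketch of a normalized correlation coefficient: estimate the largest
# correlation any model could achieve given trial-to-trial variability, and divide
# the raw model-vs-mean-response correlation by that ceiling. The exact estimators
# in the paper may differ; this follows a common signal-power formulation.
import numpy as np

def cc_norm(responses, prediction):
    """responses: (n_trials, n_time) array; prediction: (n_time,) array."""
    responses = np.asarray(responses, dtype=float)
    prediction = np.asarray(prediction, dtype=float)
    n_trials = responses.shape[0]
    mean_resp = responses.mean(axis=0)

    # Signal power: variance of the mean response with trial-to-trial noise discounted.
    var_mean = mean_resp.var(ddof=1)
    mean_var = responses.var(axis=1, ddof=1).mean()
    signal_power = (n_trials * var_mean - mean_var) / (n_trials - 1)

    cc_abs = np.corrcoef(mean_resp, prediction)[0, 1]
    cc_max = np.sqrt(signal_power / var_mean)   # ceiling imposed by response variability
    return cc_abs / cc_max

# Synthetic example: noisy trials around a common signal, imperfect prediction.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
signal = np.sin(t)
trials = signal + rng.normal(scale=0.8, size=(20, t.size))
prediction = 0.9 * signal + rng.normal(scale=0.1, size=t.size)
print(f"raw CC = {np.corrcoef(trials.mean(axis=0), prediction)[0, 1]:.3f}")
print(f"CCnorm = {cc_norm(trials, prediction):.3f}")
```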

  20. Measuring the Performance of Neural Models

    PubMed Central

    Schoppe, Oliver; Harper, Nicol S.; Willmore, Ben D. B.; King, Andrew J.; Schnupp, Jan W. H.

    2016-01-01

    Good metrics of the performance of a statistical or computational model are essential for model comparison and selection. Here, we address the design of performance metrics for models that aim to predict neural responses to sensory inputs. This is particularly difficult because the responses of sensory neurons are inherently variable, even in response to repeated presentations of identical stimuli. In this situation, standard metrics (such as the correlation coefficient) fail because they do not distinguish between explainable variance (the part of the neural response that is systematically dependent on the stimulus) and response variability (the part of the neural response that is not systematically dependent on the stimulus, and cannot be explained by modeling the stimulus-response relationship). As a result, models which perfectly describe the systematic stimulus-response relationship may appear to perform poorly. Two metrics have previously been proposed which account for this inherent variability: Signal Power Explained (SPE, Sahani and Linden, 2003), and the normalized correlation coefficient (CCnorm, Hsu et al., 2004). Here, we analyze these metrics, and show that they are intimately related. However, SPE has no lower bound, and we show that, even for good models, SPE can yield negative values that are difficult to interpret. CCnorm is better behaved in that it is effectively bounded between −1 and 1, and values below zero are very rare in practice and easy to interpret. However, it was hitherto not possible to calculate CCnorm directly; instead, it was estimated using imprecise and laborious resampling techniques. Here, we identify a new approach that can calculate CCnorm quickly and accurately. As a result, we argue that it is now a better choice of metric than SPE to accurately evaluate the performance of neural models. PMID:26903851

  1. Data assimilation in integrated hydrological modeling using ensemble Kalman filtering: evaluating the effect of ensemble size and localization on filter performance

    NASA Astrophysics Data System (ADS)

    Rasmussen, J.; Madsen, H.; Jensen, K. H.; Refsgaard, J. C.

    2015-07-01

    Groundwater head and stream discharge are assimilated using the ensemble transform Kalman filter in an integrated hydrological model with the aim of studying the relationship between the filter performance and the ensemble size. In an attempt to reduce the required number of ensemble members, an adaptive localization method is used. The performance of the adaptive localization method is compared to the more common distance-based localization. The relationship between filter performance in terms of hydraulic head and discharge error and the number of ensemble members is investigated for varying numbers and spatial distributions of groundwater head observations and with or without discharge assimilation and parameter estimation. The study shows that (1) more ensemble members are needed when fewer groundwater head observations are assimilated, and (2) assimilating discharge observations and estimating parameters requires a much larger ensemble size than just assimilating groundwater head observations. However, the required ensemble size can be greatly reduced with the use of adaptive localization, which by far outperforms distance-based localization. The study is conducted using synthetic data only.
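
    For readers unfamiliar with ensemble Kalman filtering, the sketch below shows a single analysis step of a stochastic (perturbed-observation) EnKF, a simpler relative of the ensemble transform filter used in the study; state size, observation operator, and error levels are invented for illustration.

```python
# Minimal ensemble Kalman filter analysis step (stochastic EnKF with perturbed
# observations). Dimensions, observation operator, and error levels are illustrative.
import numpy as np

def enkf_update(ensemble, obs, H, obs_err_std, rng):
    """ensemble: (n_members, n_state); obs: (n_obs,); H: (n_obs, n_state)."""
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)              # state anomalies
    Y = X @ H.T                                       # predicted-observation anomalies
    P_yy = Y.T @ Y / (n_members - 1) + np.diag(np.full(len(obs), obs_err_std ** 2))
    P_xy = X.T @ Y / (n_members - 1)
    K = P_xy @ np.linalg.inv(P_yy)                    # Kalman gain
    perturbed = obs + rng.normal(scale=obs_err_std, size=(n_members, len(obs)))
    innovations = perturbed - ensemble @ H.T
    return ensemble + innovations @ K.T

rng = np.random.default_rng(0)
n_members, n_state = 50, 10
truth = np.linspace(10.0, 12.0, n_state)              # "true" hydraulic heads (m)
ensemble = truth + rng.normal(scale=0.5, size=(n_members, n_state))
H = np.zeros((2, n_state)); H[0, 2] = H[1, 7] = 1.0   # observe heads at two wells
obs = truth @ H.T + rng.normal(scale=0.05, size=2)
updated = enkf_update(ensemble, obs, H, 0.05, rng)
print("prior RMSE:    ", np.sqrt(((ensemble.mean(axis=0) - truth) ** 2).mean()))
print("posterior RMSE:", np.sqrt(((updated.mean(axis=0) - truth) ** 2).mean()))
```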

  2. A new definition of the representative volume element in numerical homogenization problems and its application to the performance evaluation of analytical homogenization models

    NASA Astrophysics Data System (ADS)

    Moussaddy, Hadi

    The Representative Volume Element (RVE) plays a central role in the mechanics of composite materials with respect to predicting their effective properties. Numerical homogenization delivers accurate estimations of composite effective properties when associated with an RVE. In computational homogenization, the RVE refers to an ensemble of random material volumes that yield, by an averaging procedure, the effective properties of the bulk material within a tolerance. A large diversity of quantitative RVE definitions, providing computational methods to estimate the RVE size, is found in the literature. In this study, the ability of the different RVE definitions to yield accurate effective properties is investigated. The assessment is conducted on a specific random microstructure, namely an elastic two-phase three-dimensional composite reinforced by randomly oriented fibers. Large-scale finite element simulations of material volumes of different sizes are performed on high-performance computing servers using parallel computing. The material volumes are virtually generated and subjected to periodic boundary conditions. It is shown that the most popular RVE definitions, based on convergence of the properties when increasing the material volume, yield inaccurate effective properties. A new RVE definition is introduced based on the statistical variations of the properties computed from material volumes. It is shown to produce more accurate estimations of the effective properties. In addition, the new definition produces RVEs that are smaller than those of other RVE definitions; it also substantially reduces the number of finite element simulations needed to determine the RVE. The computed effective properties are compared to those of analytical models. The comparisons are performed for a wide range of fiber aspect ratios (up to 120), property contrasts (up to 300), and volume fractions only up to 20% due to computational limits. The Mori-Tanaka model and the two

  3. Lithographic performance evaluation of a contaminated EUV mask after cleaning

    SciTech Connect

    George, Simi; Naulleau, Patrick; Okoroanyanwu, Uzodinma; Dittmar, Kornelia; Holfeld, Christian; Wuest, Andrea

    2009-11-16

    The effect of surface contamination and subsequent mask surface cleaning on the lithographic performance of an EUV mask is investigated. Patterns of 40 nm and 50 nm lines and spaces (L/S) printed with SEMATECH's Berkeley micro-field exposure tool (MET) are evaluated to compare the performance of a contaminated and then cleaned mask to that of an uncontaminated mask. Since the two EUV masks have distinct absorber architectures, optical imaging models and aerial image calculations were completed to determine any expected differences in performance. Measured and calculated Bossung curves, process windows, and exposure latitudes for the two sets of L/S patterns are compared to determine how the contamination and cleaning impact the lithographic performance of EUV masks. The observed differences in mask performance are shown to be insignificant, indicating that the cleaning process did not appreciably affect mask performance.

  4. Space Shuttle Underside Astronaut Communications Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Hwu, Shian U.; Dobbins, Justin A.; Loh, Yin-Chung; Kroll, Quin D.; Sham, Catherine C.

    2005-01-01

    The Space Shuttle Ultra High Frequency (UHF) communications system is planned to provide Radio Frequency (RF) coverage for astronauts working on the underside of the Space Shuttle Orbiter (SSO) for thermal tile inspection and repair. This study assesses the Space Shuttle UHF communication performance for astronauts in the shadow region without line-of-sight (LOS) to the Space Shuttle and Space Station UHF antennas. To ensure RF coverage performance at anticipated astronaut worksites, the link margin between the UHF antennas and Extravehicular Activity (EVA) astronauts with significant vehicle structure blockage was analyzed. A series of near-field measurements was performed using the NASA/JSC Anechoic Chamber Antenna test facilities. Computational investigations were also performed using electromagnetic modeling techniques. A computer simulation tool based on the Geometrical Theory of Diffraction (GTD) was used to compute the signal strengths. The signal strength was obtained by computing the reflected and diffracted fields along the propagation paths between the transmitting and receiving antennas. Based on the results obtained in this study, RF coverage of the UHF communication links was determined for the anticipated astronaut worksites in the shadow region underneath the Space Shuttle.
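
    A link-margin calculation of the general kind referenced above can be sketched with the free-space (Friis) path-loss relation; the study's GTD analysis additionally accounts for reflections and diffraction around the vehicle, and every numeric parameter below is a placeholder rather than a Shuttle UHF value.

```python
# Hedged sketch of a free-space link-margin calculation. This covers only the
# line-of-sight term; all numeric parameters are placeholders, not Shuttle values.
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss (dB) from the Friis relation."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def link_margin_db(tx_power_dbm, tx_gain_dbi, rx_gain_dbi,
                   distance_m, freq_hz, rx_sensitivity_dbm, misc_losses_db=0.0):
    received = (tx_power_dbm + tx_gain_dbi + rx_gain_dbi
                - fspl_db(distance_m, freq_hz) - misc_losses_db)
    return received - rx_sensitivity_dbm

# Placeholder example: 0.25 W radio, low-gain antennas, 100 m range near 400 MHz.
margin = link_margin_db(tx_power_dbm=24.0, tx_gain_dbi=0.0, rx_gain_dbi=0.0,
                        distance_m=100.0, freq_hz=400e6,
                        rx_sensitivity_dbm=-100.0, misc_losses_db=6.0)
print(f"link margin: {margin:.1f} dB")   # positive margin -> link closes in free space
```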

  5. Traction contact performance evaluation at high speeds

    NASA Technical Reports Server (NTRS)

    Tevaarwerk, J. L.

    1981-01-01

    The results of traction tests performed on two fluids are presented. These tests covered a pressure range of 1.0 to 2.5 GPa, an inlet temperature range of 30 °C to 70 °C, a speed range of 10 to 80 m/sec, aspect ratios of 0.5 to 5, and spin from 0 to 2.1 percent. The test results are presented in the form of two dimensionless parameters, the initial traction slope and the maximum traction peak. With the use of a suitable rheological fluid model, the actual traction curves measured can be reconstituted from the two fluid parameters. More importantly, knowledge of these parameters, together with the fluid rheological model, allows the prediction of traction under conditions of spin, slip, and any combination thereof. Comparison between the traction theoretically predicted under these conditions and that measured in actual traction tests shows that this method gives good results.

  6. Manipulator Performance Evaluation Using Fitts' Taping Task

    SciTech Connect

    Draper, J.V.; Jared, B.C.; Noakes, M.W.

    1999-04-25

    Metaphorically, a teleoperator with master controllers projects the user's arms and hands into a remote area. Therefore, human users interact with teleoperators at a more fundamental level than they do with most human-machine systems. Instead of inputting decisions about how the system should function, teleoperator users input the movements they might make if they were truly in the remote area, and the remote machine must recreate their trajectories and impedance. This intense human-machine interaction requires displays and controls more carefully attuned to human motor capabilities than is necessary with most systems. It is important for teleoperated manipulators to be able to recreate human trajectories and impedance in real time. One method for assessing manipulator performance is to observe how well a system behaves while a human user completes human dexterity tasks with it. Fitts' tapping task has been used many times in the past for this purpose. This report describes such a performance assessment. The International Submarine Engineering (ISE) Autonomous/Teleoperated Operations Manipulator (ATOM) servomanipulator system was evaluated using a generic positioning accuracy task. The task is a simple one but has the merits of (1) producing a performance function estimate rather than a point estimate and (2) being widely used in the past for human and servomanipulator dexterity tests. Results of testing using this task may, therefore, allow comparison with other manipulators, and the task is generically representative of a broad class of tasks. Results of the testing indicate that the ATOM manipulator is capable of performing the task. Force reflection had a negative impact on task efficiency in these data. This was most likely caused by the high resistance to movement the master controller exhibited with the force reflection engaged. Measurements of exerted forces were not made, so it is not possible to say whether the force reflection helped participants
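
    The analysis behind Fitts' tapping task is commonly summarized by an index of difficulty and a linear movement-time fit; the sketch below shows that generic analysis on synthetic data, not the ATOM test measurements.

```python
# A sketch of the Fitts' law analysis underlying the tapping task: compute an index
# of difficulty (ID) for each target condition, then fit movement time MT = a + b*ID
# to obtain a performance function rather than a single point estimate. The data
# below are synthetic placeholders, not the ATOM manipulator measurements.
import numpy as np

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty (bits)."""
    return np.log2(distance / width + 1.0)

# Hypothetical conditions: target distance and width (same units), mean movement time (s).
distance = np.array([50.0, 100.0, 200.0, 400.0])
width = np.array([10.0, 10.0, 20.0, 20.0])
movement_time = np.array([0.9, 1.3, 1.5, 2.0])

ID = index_of_difficulty(distance, width)
A = np.column_stack([np.ones_like(ID), ID])
(a, b), *_ = np.linalg.lstsq(A, movement_time, rcond=None)
throughput = ID / movement_time

print(f"fit: MT = {a:.2f} + {b:.2f} * ID  (s)")
print("throughput (bits/s) per condition:", np.round(throughput, 2))
```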

  7. Advocacy Evaluation: A Model for Internal Evaluation Offices.

    ERIC Educational Resources Information Center

    Sonnichsen, Richard C.

    1988-01-01

    As evaluations are more often implemented by internal staff, internal evaluators must begin to assume decision-making and advocacy tasks. This advocacy evaluation concept is described using the Federal Bureau of Investigation evaluation staff as a model. (TJH)

  8. Critical evaluation of laser-induced interstitial thermotherapy (LITT) performed on in-vitro, in-vivo, and ex-vivo models

    NASA Astrophysics Data System (ADS)

    Henkel, Thomas O.; Niedergethmann, M.; Alken, Peter

    1996-01-01

    Thermal ablation techniques are finding application in many different fields of medicine. Recently, experimental studies concerned with dosimetry and laser-tissue interaction have been performed by various authors. In order to study the effects of interstitial laser energy on biological tissue, we examined different tissue models and compared important parameters during laser application. We performed in vitro, in vivo, and ex vivo studies comparing a neodymium:YAG laser (1064 nm) and a diode laser (830 nm), each equipped with interstitial laser fibers. In vitro studies, which examined the influence of changes in power and in the duration of application, were performed on potato, muscle, liver, and kidney. In vivo studies (porcine model) also examined different power settings at designated time intervals. Ex vivo studies with the isolated perfused kidney (IPK) investigated the effects of power, application time, perfusion pressure, and different perfusion media (saline solution, anticoagulated blood). In vitro studies revealed necrotic lesions in all tissues. Although no power threshold could be obtained for liver tissue (early-onset fiber damage), potato, kidney, and muscle tissue demonstrated their own respective power thresholds. Furthermore, when using the Nd:YAG laser, we observed that higher power settings permitted quicker necrosis induction; however, within its own treatment power range, the diode laser was capable of inducing larger lesions. In vivo studies demonstrated that early-onset diffuser tip damage would prevent exact documentation of laser-tissue interaction at higher power levels. Results obtained with our standardized ex vivo model (IPK) revealed smaller necrotic lesions with saline than with blood perfusion and also demonstrated the important role that perfusion rate plays during laser-tissue interaction. We found that pigmented, well vascularized parenchymal organs with low stromal content (kidney, liver) and a higher absorption

  9. Performance evaluation of conventional chiller systems

    SciTech Connect

    Beyene, A.

    1995-06-01

    This article describes an optimization technique to reduce chiller energy usage by evaluating energy-saving strategies. In most commercial buildings and industrial plants, HVAC systems are the largest energy consumers and offer owners significant potential for savings. Chillers are also of interest to utility companies because they operate during cooling periods that overlap the peak hours of warmer climate zones, thereby contributing to peak energy demand. The key performance parameter in chiller analysis is the kW/ton of refrigeration, which is the ratio of the amount of electrical energy consumed to the amount of cooling energy delivered. To obtain the kW/ton of refrigeration for a chiller, the electric power consumption (kW) of the compressor should be measured, or calculated if the instantaneous current and voltage are known.
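
    The kW/ton figure of merit described above can be illustrated with a short calculation; the three-phase power estimate and all numbers below are illustrative assumptions.

```python
# Worked example of the kW/ton figure of merit: electrical input power divided by
# delivered cooling in refrigeration tons (1 ton = 3.517 kW of cooling, i.e.
# 12,000 Btu/h). All numbers are illustrative.
def chiller_kw_per_ton(compressor_kw, cooling_kw):
    tons = cooling_kw / 3.517          # convert delivered cooling to refrigeration tons
    return compressor_kw / tons

def compressor_kw_three_phase(volts, amps, power_factor=0.9):
    """Electrical input estimated from measured line voltage and current."""
    return 3 ** 0.5 * volts * amps * power_factor / 1000.0

kw_in = compressor_kw_three_phase(volts=460.0, amps=250.0)   # ~179 kW
print(f"input power: {kw_in:.0f} kW")
print(f"efficiency:  {chiller_kw_per_ton(kw_in, cooling_kw=1055.0):.2f} kW/ton")  # ~300 tons of cooling
```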

  10. Performance evaluation of mail-scanning cameras

    NASA Astrophysics Data System (ADS)

    Rajashekar, Umesh; Vu, Tony Tuan; Hooning, John E.; Bovik, Alan Conrad

    2010-04-01

    Letter-scanning cameras (LSCs) form the front-end imaging systems for virtually all mail-scanning systems that are currently used to automatically sort mail products. As with any vision-dependent technology, the quality of the images generated by the camera is fundamental to the overall performance of the system. We present novel techniques for the objective evaluation of LSCs using comparative imaging, a technique that involves measuring the fidelity of target images produced by a camera with reference to an image of the same target captured at very high quality. Such a framework provides a unique opportunity to directly quantify the camera's ability to capture real-world targets, such as handwritten and printed text. Noncomparative techniques were also used to measure properties such as the camera's modulation transfer function, dynamic range, and signal-to-noise ratio. To simulate real-world imaging conditions, application-specific test samples were designed using actual mail product materials.

  11. A performance evaluation system for photomultiplier tubes

    NASA Astrophysics Data System (ADS)

    Xia, J.; Qian, S.; Wang, W.; Ning, Z.; Cheng, Y.; Wang, Z.; Li, X.; Qi, M.; Heng, Y.; Liu, S.; Lei, X.

    2015-03-01

    A comprehensive performance evaluation system for photomultiplier tubes (PMTs) has been built. The system is able to assess diverse cathode and anode properties for PMTs of different sizes and dimensions. Relative and direct methods were developed for the quantum efficiency measurement, and the results are consistent with each other. Two-dimensional and three-dimensional scanning platforms were built to test both cathode and anode uniformity for either plane-type or spherical-type photocathodes. A flash analog-to-digital converter module is utilized to achieve high-speed waveform sampling. The entire system is highly automatic and flexible. Details of the system and some typical experimental results are presented in this paper.

  12. Evaluation of help model replacement codes

    SciTech Connect

    Whiteside, Tad; Hang, Thong; Flach, Gregory

    2009-07-01

    This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure cap and into the waste containment zone at Department of Energy closure sites. This work compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards’ equation). It provides a literature review of the HELP model and the proposed codes, which results in two codes being recommended for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing actual simulations on a simple model and comparing the results of those simulations to those obtained with the HELP code and to field data. From the results of this work, we conclude that the new codes perform nearly the same as each other; moving forward, we recommend HYDRUS-2D3D.

  13. Performance Evaluations of Ceramic Wafer Seals

    NASA Technical Reports Server (NTRS)

    Dunlap, Patrick H., Jr.; DeMange, Jeffrey J.; Steinetz, Bruce M.

    2006-01-01

    Future hypersonic vehicles will require high temperature, dynamic seals in advanced ramjet/scramjet engines and on the vehicle airframe to seal the perimeters of movable panels, flaps, and doors. Seal temperatures in these locations can exceed 2000 F, especially when the seals are in contact with hot ceramic matrix composite sealing surfaces. NASA Glenn Research Center is developing advanced ceramic wafer seals to meet the needs of these applications. High temperature scrub tests performed between silicon nitride wafers and carbon-silicon carbide rub surfaces revealed high friction forces and evidence of material transfer from the rub surfaces to the wafer seals. Stickage between adjacent wafers was also observed after testing. Several design changes to the wafer seals were evaluated as possible solutions to these concerns. Wafers with recessed sides were evaluated as a potential means of reducing friction between adjacent wafers. Alternative wafer materials are also being considered as a means of reducing friction between the seals and their sealing surfaces and because the baseline silicon nitride wafer material (AS800) is no longer commercially available.

  14. 48 CFR 236.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CONTRACTS Special Aspects of Contracting for Construction 236.201 Evaluation of contractor performance. (a) Preparation of performance evaluation reports. Use DD Form 2626, Performance Evaluation (Construction... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Evaluation of...

  15. Advanced fuels modeling: Evaluating the steady-state performance of carbide fuel in helium-cooled reactors using FRAPCON 3.4

    NASA Astrophysics Data System (ADS)

    Hallman, Luther, Jr.

    Uranium carbide (UC) has long been considered a potential alternative to uranium dioxide (UO2) fuel, especially in the context of Gen IV gas-cooled reactors. It has shown promise because of its high uranium density, good irradiation stability, and especially high thermal conductivity. Despite its many benefits, UC is known to swell at a rate twice that of UO2. However, the swelling phenomenon is not well understood, and we are limited to a weak empirical understanding of the swelling mechanism. One suggested cladding for UC is silicon carbide (SiC), a ceramic that demonstrates a number of desirable properties. Among them are an increased corrosion resistance, high mechanical strength, and irradiation stability. However, with increased temperatures, SiC exhibits an extremely brittle nature. The brittle behavior of SiC is not fully understood and thus it is unknown how SiC would respond to the added stress of a swelling UC fuel. To better understand the interaction between these advanced materials, each has been implemented into FRAPCON, the preferred fuel performance code of the Nuclear Regulatory Commission (NRC); additionally, the material properties for a helium coolant have been incorporated. The implementation of UC within FRAPCON required the development of material models that described not only the thermophysical properties of UC, such as thermal conductivity and thermal expansion, but also models for the swelling, densification, and fission gas release associated with the fuel's irradiation behavior. This research is intended to supplement ongoing analysis of the performance and behavior of uranium carbide and silicon carbide in a helium-cooled reactor.

  16. Modeling and Performance Simulation of the Mass Storage Network Environment

    NASA Technical Reports Server (NTRS)

    Kim, Chan M.; Sang, Janche

    2000-01-01

    This paper describes the application of modeling and simulation in evaluating and predicting the performance of the mass storage network environment. Network traffic is generated to mimic the realistic pattern of file transfer, electronic mail, and web browsing. The behavior and performance of the mass storage network and a typical client-server Local Area Network (LAN) are investigated by modeling and simulation. Performance characteristics in throughput and delay demonstrate the important role of modeling and simulation in network engineering and capacity planning.
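
    The throughput-delay behavior mentioned above can be illustrated with the classic M/M/1 queueing relation; this closed-form textbook approximation is not the network simulation described in the paper.

```python
# Textbook M/M/1 illustration of the throughput-delay trade-off: as offered load
# approaches link capacity, queueing delay grows sharply. Not the simulation model
# used in the study; parameters are illustrative.
def mm1_delay(arrival_rate, service_rate):
    """Mean time in system (queueing + service) for an M/M/1 queue, in seconds."""
    if arrival_rate >= service_rate:
        return float("inf")            # unstable: demand exceeds capacity
    return 1.0 / (service_rate - arrival_rate)

link_capacity_pkts_per_s = 1000.0
for utilization in (0.2, 0.5, 0.8, 0.95):
    delay_ms = mm1_delay(utilization * link_capacity_pkts_per_s,
                         link_capacity_pkts_per_s) * 1000.0
    print(f"utilization {utilization:.0%}: mean delay {delay_ms:.1f} ms")
```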

  17. Performance and Perception in the Flipped Learning Model: An Initial Approach to Evaluate the Effectiveness of a New Teaching Methodology in a General Science Classroom

    NASA Astrophysics Data System (ADS)

    González-Gómez, David; Jeong, Jin Su; Airado Rodríguez, Diego; Cañada-Cañada, Florentina

    2016-06-01

    "Flipped classroom" teaching methodology is a type of blended learning in which the traditional class setting is inverted. Lecture is shifted outside of class, while the classroom time is employed to solve problems or doing practical works through the discussion/peer collaboration of students and instructors. This relatively new instructional methodology claims that flipping your classroom engages more effectively students with the learning process, achieving better teaching results. Thus, this research aimed to evaluate the effects of the flipped classroom on the students' performance and perception of this new methodology. This study was conducted in a general science course, sophomore of the Primary Education bachelor degree in the Training Teaching School of the University of Extremadura (Spain) during the course 2014/2015. In order to assess the suitability of the proposed methodology, the class was divided in two groups. For the first group, a traditional methodology was followed, and it was used as control. On the other hand, the "flipped classroom" methodology was used in the second group, where the students were given diverse materials, such as video lessons and reading materials, before the class to be revised at home by them. Online questionnaires were as well provided to assess the progress of the students before the class. Finally, the results were compared in terms of students' achievements and a post-task survey was also conducted to know the students' perceptions. A statistically significant difference was found on all assessments with the flipped class students performing higher on average. In addition, most students had a favorable perception about the flipped classroom noting the ability to pause, rewind and review lectures, as well as increased individualized learning and increased teacher availability.

  18. Performance and Perception in the Flipped Learning Model: An Initial Approach to Evaluate the Effectiveness of a New Teaching Methodology in a General Science Classroom

    NASA Astrophysics Data System (ADS)

    González-Gómez, David; Jeong, Jin Su; Airado Rodríguez, Diego; Cañada-Cañada, Florentina

    2016-02-01

    "Flipped classroom" teaching methodology is a type of blended learning in which the traditional class setting is inverted. Lecture is shifted outside of class, while the classroom time is employed to solve problems or doing practical works through the discussion/peer collaboration of students and instructors. This relatively new instructional methodology claims that flipping your classroom engages more effectively students with the learning process, achieving better teaching results. Thus, this research aimed to evaluate the effects of the flipped classroom on the students' performance and perception of this new methodology. This study was conducted in a general science course, sophomore of the Primary Education bachelor degree in the Training Teaching School of the University of Extremadura (Spain) during the course 2014/2015. In order to assess the suitability of the proposed methodology, the class was divided in two groups. For the first group, a traditional methodology was followed, and it was used as control. On the other hand, the "flipped classroom" methodology was used in the second group, where the students were given diverse materials, such as video lessons and reading materials, before the class to be revised at home by them. Online questionnaires were as well provided to assess the progress of the students before the class. Finally, the results were compared in terms of students' achievements and a post-task survey was also conducted to know the students' perceptions. A statistically significant difference was found on all assessments with the flipped class students performing higher on average. In addition, most students had a favorable perception about the flipped classroom noting the ability to pause, rewind and review lectures, as well as increased individualized learning and increased teacher availability.

  19. Solar power plant performance evaluation: simulation and experimental validation

    NASA Astrophysics Data System (ADS)

    Natsheh, E. M.; Albarbar, A.

    2012-05-01

    In this work the performance of a solar power plant is evaluated based on a developed model comprising a photovoltaic array, battery storage, a controller, and converters. The model is implemented using the MATLAB/SIMULINK software package. A perturb and observe (P&O) algorithm is used to maximize the generated power through maximum power point tracker (MPPT) implementation. The outcomes of the developed model are validated and supported by a case study carried out using an operational 28.8 kW grid-connected solar power plant located in central Manchester. Measurements were taken over a 21-month period, using hourly average irradiance and cell temperature. It was found that system degradation could be clearly monitored by determining the residual (the difference) between the output power predicted by the model and the actual measured power. It was found that the residual exceeded the healthy threshold, 1.7 kW, due to heavy snow in Manchester during the last winter. More importantly, the developed performance evaluation technique could be adopted to detect other factors that may degrade the performance of the PV panels, such as shading and dirt. Repeatability and reliability of the developed system were validated during this period. Good agreement was achieved between the theoretical simulation and the real-time measurements taken from the online grid-connected solar power plant.
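
    The perturb and observe (P&O) algorithm named above can be sketched in a few lines: perturb the operating voltage and keep moving in whichever direction increases measured power. The PV curve below is a toy stand-in, not the plant model developed in the paper.

```python
# Minimal perturb-and-observe (P&O) MPPT sketch: nudge the operating voltage and
# keep moving in the direction that increases measured power. The PV model below
# is a toy single-peak power curve, not the plant model from the study.
def pv_power(voltage):
    """Toy P-V curve with a maximum near 30 V."""
    return max(0.0, -0.5 * (voltage - 30.0) ** 2 + 450.0)

def perturb_and_observe(v0=20.0, step=0.5, iterations=60):
    v, p_prev = v0, pv_power(v0)
    direction = +1.0
    for _ in range(iterations):
        v += direction * step                # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                       # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point after P&O: {v_mpp:.1f} V, {p_mpp:.0f} W")
```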

  20. Space Shuttle UHF Communications Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Hwu, Shian U.; Loh, Yin-Chung; Kroll, Quin D.; Sham, Catherine C.

    2004-01-01

    An extension boom is to be installed on the starboard side of the Space Shuttle Orbiter (SSO) payload bay for thermal tile inspection and repair. As a result, the Space Shuttle payload bay Ultra High Frequency (UHF) antenna will be under the boom. This study evaluates the Space Shuttle UHF communication performance for the antenna at a suitable new location. To ensure RF coverage performance at the proposed new locations, the link margin between the UHF payload bay antenna and Extravehicular Activity (EVA) astronauts at a range of 160 meters from the payload bay antenna was analyzed. The communication performance between the Space Shuttle Orbiter and the International Space Station (SSO-ISS) during rendezvous was also investigated. The multipath effects from payload bay structures surrounding the payload bay antenna were analyzed. A computer simulation tool based on the Geometrical Theory of Diffraction (GTD) method was used to compute the signal strengths. The total field strength was obtained by summing the direct fields from the antennas and the reflected and diffracted fields from the surrounding structures. The computed signal strengths were compared to the signal strength corresponding to a 0 dB link margin. Based on the results obtained in this study, RF coverage for the SSO-EVA and SSO-ISS communication links was determined for the proposed payload bay UHF antenna locations. The RF radiation to the Orbiter Docking System (ODS) pyros, the payload bay avionics, and the Shuttle Remote Manipulator System (SRMS) from the proposed new UHF antenna location was also investigated to ensure EMC/EMI compliance.

  1. 48 CFR 1252.216-72 - Performance evaluation plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....216-72 Performance evaluation plan. As prescribed in (TAR) 48 CFR 1216.406(b), insert the following clause: Performance Evaluation Plan (OCT 1994) (a) A Performance Evaluation Plan shall be unilaterally... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Performance...

  2. Analysis of Photovoltaic System Energy Performance Evaluation Method

    SciTech Connect

    Kurtz, S.; Newmiller, J.; Kimber, A.; Flottemesch, R.; Riley, E.; Dierauf, T.; McKee, J.; Krishnani, P.

    2013-11-01

    Documentation of the energy yield of a large photovoltaic (PV) system over a substantial period can be useful to measure a performance guarantee, as an assessment of the health of the system, for verification of a performance model to then be applied to a new system, or for a variety of other purposes. Although the measurement of this performance metric might appear to be straightforward, there are a number of subtleties associated with variations in weather and imperfect data collection that complicate the determination and data analysis. A performance assessment is most valuable when it is completed with a very low uncertainty and when the subtleties are systematically addressed, yet currently no standard exists to guide this process. This report summarizes a draft methodology for an Energy Performance Evaluation Method, the philosophy behind the draft method, and the lessons that were learned by implementing the method.
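
    One familiar yield metric in this area is the performance ratio; the sketch below shows that simple metric only, whereas the draft method in the report addresses the weather variation and imperfect data collection that this one-liner ignores.

```python
# Hedged sketch of one common yield metric, the performance ratio (PR): measured
# energy divided by the energy expected from nameplate rating and measured
# in-plane irradiation. This is not the report's draft method; values are illustrative.
def performance_ratio(measured_kwh, dc_rating_kw, plane_of_array_kwh_per_m2,
                      stc_irradiance_kw_per_m2=1.0):
    expected_kwh = dc_rating_kw * plane_of_array_kwh_per_m2 / stc_irradiance_kw_per_m2
    return measured_kwh / expected_kwh

# Illustrative month for a 500 kW (DC) system receiving 150 kWh/m^2 in the plane of array.
print(f"PR = {performance_ratio(60_000.0, 500.0, 150.0):.2f}")   # ~0.80
```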

  3. Using Weibull Distribution Analysis to Evaluate ALARA Performance

    SciTech Connect

    E. L. Frome, J. P. Watkins, and D. A. Hagemeyer

    2009-10-01

    As Low as Reasonably Achievable (ALARA) is the underlying principle for protecting nuclear workers from potential health outcomes related to occupational radiation exposure. Radiation protection performance is currently evaluated by measures such as collective dose and average measurable dose, which do not indicate ALARA performance. The purpose of this work is to show how statistical modeling of individual doses using the Weibull distribution can provide objective supplemental performance indicators for comparing ALARA implementation among sites and for insights into ALARA practices within a site. Maximum likelihood methods were employed to estimate the Weibull shape and scale parameters used for performance indicators. The shape parameter reflects the effectiveness of maximizing the number of workers receiving lower doses and is represented as the slope of the fitted line on a Weibull probability plot. Additional performance indicators derived from the model parameters include the 99th percentile and the exceedance fraction. When grouping sites by collective total effective dose equivalent (TEDE) and ranking by 99th percentile with confidence intervals, differences in performance among sites can be readily identified. Applying this methodology will enable more efficient and complete evaluation of the effectiveness of ALARA implementation.
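
    The Weibull-based indicators described above can be sketched with SciPy: fit the shape and scale parameters to individual doses by maximum likelihood, then report the 99th percentile and an exceedance fraction. The synthetic doses and the 5 mSv threshold below are illustrative assumptions.

```python
# Hedged sketch of Weibull-based indicators: maximum-likelihood fit of shape and
# scale to individual measurable doses, then the 99th percentile and the fraction
# expected to exceed a chosen dose. Synthetic data; threshold chosen for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
doses_msv = stats.weibull_min.rvs(c=0.9, scale=1.5, size=500, random_state=rng)

# Maximum likelihood fit with the location fixed at zero (doses are non-negative).
shape, loc, scale = stats.weibull_min.fit(doses_msv, floc=0)

p99 = stats.weibull_min.ppf(0.99, shape, loc=0, scale=scale)
exceedance_5msv = stats.weibull_min.sf(5.0, shape, loc=0, scale=scale)

print(f"shape = {shape:.2f}, scale = {scale:.2f} mSv")
print(f"99th percentile dose = {p99:.2f} mSv")
print(f"expected fraction above 5 mSv = {exceedance_5msv:.3%}")
```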

  4. Evaluating the Performance of a New Model for Predicting the Growth of Clostridium perfringens in Cooked, Uncured Meat and Poultry Products under Isothermal, Heating, and Dynamically Cooling Conditions.

    PubMed

    Huang, Lihan

    2016-07-01

    Clostridium perfringens type A is a significant public health threat, and its spores may germinate, outgrow, and multiply during cooling of cooked meats. This study applies a new C. perfringens growth model in the USDA Integrated Pathogen Modeling Program-Dynamic Prediction (IPMP Dynamic Prediction) to predict growth from spores of C. perfringens in cooked uncured meat and poultry products, using isothermal, dynamic heating, and cooling data reported in the literature. The residual errors of the predictions (observation minus prediction) are analyzed, and the root-mean-square error (RMSE) is calculated. For isothermal and heating profiles, each data point in the growth curves is compared. The mean residual errors (MRE) of the predictions range from -0.40 to 0.02 Log colony forming units (CFU)/g, with an RMSE of approximately 0.6 Log CFU/g. For cooling, the end-point predictions are conservative in nature, with an MRE of -1.16 Log CFU/g for single-rate cooling and -0.66 Log CFU/g for dual-rate cooling. The RMSE is between 0.6 and 0.7 Log CFU/g. Compared with other models reported in the literature, this model makes more accurate and fail-safe predictions. For cooling, the percentage of accurate and fail-safe predictions is between 97.6% and 100%. Under criterion 1, the percentage of accurate predictions is 47.5% for single-rate cooling and 66.7% for dual-rate cooling, while fail-dangerous predictions are between 0% and 2.4%. This study demonstrates that IPMP Dynamic Prediction can be used by food processors and regulatory agencies as a tool to predict the growth of C. perfringens in uncured cooked meats and to evaluate the safety of cooked or heat-treated uncured meat and poultry products exposed to cooling deviations, or to develop customized cooling schedules. This study also demonstrates the need for more accurate data collection during cooling. PMID:27259065
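
    The error summaries used in the evaluation (mean residual error and root-mean-square error of log-scale growth predictions) are simple to compute; the sketch below uses made-up values purely to show the arithmetic.

```python
# Simple sketch of the error summaries used above: mean residual error (MRE,
# observation minus prediction) and root-mean-square error (RMSE) over a set of
# log-scale growth observations. Values are made-up placeholders.
import math

observed_log_cfu  = [2.1, 3.4, 4.8, 6.0, 6.9]   # Log CFU/g, hypothetical
predicted_log_cfu = [2.4, 3.6, 5.1, 6.5, 7.3]

residuals = [o - p for o, p in zip(observed_log_cfu, predicted_log_cfu)]
mre = sum(residuals) / len(residuals)
rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))

print(f"MRE  = {mre:+.2f} Log CFU/g (negative means the model over-predicts growth)")
print(f"RMSE = {rmse:.2f} Log CFU/g")
```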

  5. A Model for Curriculum Evaluation

    ERIC Educational Resources Information Center

    Crane, Peter; Abt, Clark C.

    1969-01-01

    Describes in some detail the Curriculum Evaluation Model, "a technique for calculating the cost-effectiveness of alternative curriculum materials by a detailed breakdown and analysis of their components, quality, and cost. Coverage, appropriateness, motivational effectiveness, and cost are the four major categories in terms of which the…

  6. Market behavior and performance of different strategy evaluation schemes

    NASA Astrophysics Data System (ADS)

    Baek, Yongjoo; Lee, Sang Hoon; Jeong, Hawoong

    2010-08-01

    Strategy evaluation schemes are a crucial factor in any agent-based market model, as they determine the agents’ strategy preferences and consequently their behavioral pattern. This study investigates how the strategy evaluation schemes adopted by agents affect their performance in conjunction with the market circumstances. We observe the performance of three strategy evaluation schemes, the history-dependent wealth game, the trend-opposing minority game, and the trend-following majority game, in a stock market where the price is exogenously determined. The price is either directly adopted from the real stock market indices or generated with a Markov chain of order ≤ 2. Each scheme’s success is quantified by the average wealth accumulated by the traders equipped with the scheme. The wealth game, as it learns from the history, shows relatively good performance unless the market is highly unpredictable. The majority game is successful in a trendy market dominated by long periods of sustained price increase or decrease. On the other hand, the minority game is suitable for a market with persistent zigzag price patterns. We also discuss the consequences of implementing finite memory in the scoring processes of strategies. Our findings suggest under which market circumstances each evaluation scheme is appropriate for modeling the behavior of real market traders.
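
    The contrast between trend-following and trend-opposing behavior can be conveyed with a much simplified sketch such as the one below, which scores two fixed strategies on an exogenous price series by the wealth they accumulate. The price model and the position rules are assumptions made for illustration and do not reproduce the wealth, minority, or majority games defined in the paper.

    ```python
    # Simplified sketch (not the paper's game definitions): compare a trend-following
    # and a trend-opposing trader on an exogenous price series by accumulated wealth.
    import numpy as np

    rng = np.random.default_rng(1)
    price_changes = rng.normal(0.0, 1.0, size=10_000)   # exogenous increments, assumed i.i.d.

    wealth = {"trend_following": 0.0, "trend_opposing": 0.0}
    for t in range(1, len(price_changes)):
        signal = np.sign(price_changes[t - 1])          # last observed price move
        wealth["trend_following"] += signal * price_changes[t]
        wealth["trend_opposing"]  += -signal * price_changes[t]

    print(wealth)  # with i.i.d. increments neither scheme has an edge, as expected
    ```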

  7. High performance APCS conceptual design and evaluation scoping study

    SciTech Connect

    Soelberg, N.; Liekhus, K.; Chambers, A.; Anderson, G.

    1998-02-01

    This Air Pollution Control System (APCS) Conceptual Design and Evaluation study was conducted to evaluate a high-performance air pollution control (APC) system for minimizing air emissions from mixed waste thermal treatment systems. Seven variations of high-performance APCS designs were conceptualized using several design objectives. One of the system designs was selected for detailed process simulation using ASPEN PLUS to determine material and energy balances and evaluate performance. Installed system capital costs were also estimated. Sensitivity studies were conducted to evaluate the incremental cost and benefit of added carbon adsorber beds for mercury control, selective catalytic reduction for NOx control, and offgas retention tanks for holding the offgas until sample analysis is conducted to verify that the offgas meets emission limits. Results show that the high-performance dry-wet APCS can easily meet all expected emission limits except possibly for mercury. The capability to achieve high levels of mercury control (potentially necessary for thermally treating some DOE mixed waste streams) could not be validated using current performance data for mercury control technologies. The engineering approach and ASPEN PLUS modeling tool developed and used in this study identified APC equipment and system performance, size, cost, and other issues that are not yet resolved. These issues need to be addressed in feasibility studies and conceptual designs for new facilities or for determining how to modify existing facilities to meet expected emission limits. The ASPEN PLUS process simulation, with current and refined input assumptions and calculations, can be used to provide system performance information for decision-making, identifying best options, estimating costs, reducing the potential for emission violations, providing information needed for waste flow analysis, incorporating new APCS technologies into existing designs, or performing facility design and permitting activities.

  8. A model evaluation checklist for process-based environmental models

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally used Nash-Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against a dataset independent of that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. (3) Model structural inadequacies, whereby model structure may inadequately represent
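
    For reference, the Nash-Sutcliffe efficiency mentioned in point (1) can be computed as in the short sketch below; the observed and simulated series are invented.

    ```python
    # Sketch of the Nash-Sutcliffe efficiency (NSE): 1 is a perfect fit, 0 means the
    # model is no better than the observed mean. Data values are invented.
    import numpy as np

    def nash_sutcliffe(observed, simulated) -> float:
        observed = np.asarray(observed, dtype=float)
        simulated = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

    obs = np.array([0.12, 0.30, 0.25, 0.80, 0.55])   # e.g. daily P concentrations, hypothetical
    sim = np.array([0.10, 0.28, 0.35, 0.60, 0.50])
    print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
    ```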

  9. Shuttle rendezvous radar performance evaluation and simulation

    NASA Technical Reports Server (NTRS)

    Griffin, John W.; Lindberg, Andrew C.; Ahn, Thomas B.; Harton, Paul L.

    1988-01-01

    The US Space Shuttle's Ku-band system was specifically designed for communications and tracking functions which are required during on-orbit operations with other spacecraft. Operating modes permit search and acquisition to be accomplished by computer designation or under manual control by the astronaut. Ku-band system data channels drive on-board dedicated displays and are incorporated into state vector updates by Shuttle guidance and navigation computers. Radar-cross-section estimates were used in computer simulations to predict the range at which radar detection and acquisition can be expected. Validity of the simulation model and the radar design and performance were verified by flight tests on the White Sands test range. It is concluded that results of the testing established confidence in the capability of the system to provide the relative position and rate information which is needed for Shuttle work involving other spacecraft.

  10. Evaluating the influence of physical, economic and managerial factors on sheet erosion in rangelands of SW Spain by performing a sensitivity analysis on an integrated dynamic model.

    PubMed

    Ibáñez, J; Lavado Contador, J F; Schnabel, S; Martínez Valderrama, J

    2016-02-15

    An integrated dynamic model was used to evaluate the influence of climatic, soil, pastoral, economic and managerial factors on sheet erosion in rangelands of SW Spain (dehesas). This was achieved by means of a variance-based sensitivity analysis. Topsoil erodibility, climate change and a combined factor related to soil water storage capacity and the pasture production function were the factors which influenced water erosion the most. Of them, climate change is the main source of uncertainty, though in this study it caused a reduction in the mean and the variance of long-term erosion rates. The economic and managerial factors showed scant influence on soil erosion, meaning that such influence is unlikely to be found in the study area for the time being. This is because the low profitability of the livestock business maintains stocking rates at low levels. However, the potential impact of livestock, through which economic and managerial factors affect soil erosion, proved to be greater in absolute value than the impact of climate change. Therefore, if changes in some economic or managerial factors led to higher stocking rates in the future, significant increases in erosion rates would be expected. PMID:26657389
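
    The flavor of a variance-based sensitivity analysis can be conveyed with the toy sketch below, which estimates first-order Sobol indices for an invented erosion-rate function; the function, factor names and ranges are assumptions and do not represent the integrated model used in the study.

    ```python
    # Illustrative variance-based (Sobol-type) sensitivity analysis on a toy
    # erosion-rate function; the function and factor ranges are invented.
    import numpy as np

    def erosion_rate(x):
        # columns: [erodibility, rainfall_erosivity, stocking_rate] -- hypothetical factors
        return x[:, 0] * x[:, 1] ** 1.5 + 0.2 * x[:, 2]

    rng = np.random.default_rng(2)
    n, k = 100_000, 3
    A = rng.uniform(0.0, 1.0, size=(n, k))
    B = rng.uniform(0.0, 1.0, size=(n, k))
    fA, fB = erosion_rate(A), erosion_rate(B)
    var_total = np.var(np.concatenate([fA, fB]))

    for i, name in enumerate(["erodibility", "erosivity", "stocking rate"]):
        AB = A.copy()
        AB[:, i] = B[:, i]                               # swap one factor at a time
        s_first = np.mean(fB * (erosion_rate(AB) - fA)) / var_total
        print(f"first-order index, {name}: {s_first:.2f}")
    ```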

  11. Dynamic Multicriteria Evaluation of Conceptual Hydrological Models

    NASA Astrophysics Data System (ADS)

    de Vos, N. J.; Rientjes, T. H.; Fenicia, F.; Gupta, H. V.

    2007-12-01

    Accurate and precise forecasts of river streamflows are crucial for successful management of water resources, especially under the threat of hydrological extremes such as floods and droughts. Conceptual rainfall-runoff models are the most popular approach in flood forecasting. However, the calibration and evaluation of such models is often oversimplified by the use of performance statistics that largely ignore the dynamic character of a watershed system. This research aims to find novel ways of model evaluation by identifying periods of hydrologic similarity and customizing evaluation within each period using multiple criteria. A dynamic approach to hydrologic model identification, calibration and testing can be realized by applying clustering algorithms (e.g., Self-Organizing Map, Fuzzy C-means algorithm) to hydrological data. These algorithms are able to identify clusters in the data that represent periods of hydrological similarity. In this way, dynamic catchment system behavior can be simplified within the clusters that are identified. Although clustering requires a number of subjective choices, new insights into the hydrological functioning of a catchment can be obtained. Finally, separate multi-criteria model calibration and evaluation are performed for each of the clusters. Such a model evaluation procedure proves to be reliable and gives much-needed feedback on exactly where certain model structures fail. Several clustering algorithms were tested on two data sets of meso-scale and large-scale catchments. The results show that the clustering algorithms define categories that reflect hydrological process understanding: dry/wet seasons, rising/falling hydrograph limbs, precipitation-driven/non-driven periods, etc. The results of various clustering algorithms are compared and validated using expert knowledge. Calibration results on a conceptual hydrological model show that the common practice of single-criteria calibration over the complete time series fails to perform
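
    A stripped-down sketch of the general idea, clustering periods and then scoring a model within each cluster, is given below. It uses k-means as a stand-in for the self-organizing map and fuzzy c-means algorithms mentioned above, and all data are synthetic.

    ```python
    # Rough sketch: cluster hydrological periods by their attributes, then score a
    # model separately within each cluster. K-means stands in for SOM / fuzzy c-means.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    n = 365
    features = np.column_stack([
        rng.gamma(2.0, 2.0, n),     # daily precipitation (assumed units)
        rng.normal(5.0, 3.0, n),    # air temperature
        rng.gamma(3.0, 1.0, n),     # observed streamflow
    ])
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

    observed  = features[:, 2]
    simulated = observed + rng.normal(0.0, 0.5, n)   # pretend model output

    for c in range(3):
        mask = labels == c
        rmse = np.sqrt(np.mean((observed[mask] - simulated[mask]) ** 2))
        print(f"cluster {c}: {mask.sum():3d} days, RMSE = {rmse:.2f}")
    ```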

  12. High-Performance Monopropellants and Catalysts Evaluated

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    The NASA Glenn Research Center is sponsoring efforts to develop advanced monopropellant technology. The focus has been on monopropellant formulations composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants do not have a toxic vapor and do not need the extraordinary procedures for storage, handling, and disposal required of hydrazine (N2H4). Generically, HAN-based monopropellants are denser and have lower freezing points than N2H4. The performance of HAN-based monopropellants depends on the selection of fuel, the HAN-to-fuel ratio, and the amount of water in the formulation. HAN-based monopropellants are not seen as a replacement for N2H4 per se, but rather as a propulsion option in their own right. For example, HAN-based monopropellants would prove beneficial to the orbit insertion of small, power-limited satellites because of this propellant's high performance (reduced system mass), high density (reduced system volume), and low freezing point (elimination of tank and line heaters). Under a Glenn-contracted effort, Aerojet Redmond Rocket Center conducted testing to provide the foundation for the development of monopropellant thrusters with an Isp goal of 250 sec. A modular, workhorse reactor (representative of a 1-lbf thruster) was used to evaluate HAN formulations with catalyst materials. Stoichiometric, oxygen-rich, and fuel-rich formulations of HAN-methanol and HAN-tris(aminoethyl)amine trinitrate were tested to investigate the effects of stoichiometry on combustion behavior. Aerojet found that fuel-rich formulations degrade the catalyst and reactor faster than oxygen-rich and stoichiometric formulations do. A HAN-methanol formulation with a theoretical Isp of 269 sec (designated HAN269MEO) was selected as the baseline. With a combustion efficiency of at least 93 percent demonstrated for HAN-based monopropellants, HAN269MEO will meet the 250-sec Isp goal.

  13. Public Education Resources and Pupil Performance Models.

    ERIC Educational Resources Information Center

    Spottheim, David; And Others

    This report details three models quantifying the relationships between educational means (resources) and ends (pupil achievements) to analyze resource allocation problems within school districts: (1) the Pupil Performance Model; (2) the Goal Programming Model; and (3) the Operational Structure of a School and Pupil Performance Model. These models…

  14. Human performance evaluation in dual-axis critical task tracking

    NASA Technical Reports Server (NTRS)

    Ritchie, M. L.; Nataraj, N. S.

    1975-01-01

    A dual-axis tracking experiment using a multiloop critical task was set up to evaluate human performance. The effects of control stick variation and display format were evaluated. A secondary loading task was used to measure the degradation in tracking performance.

  15. Prediction of performance on the RCMP physical ability requirement evaluation.

    PubMed

    Stanish, H I; Wood, T M; Campagna, P

    1999-08-01

    The Royal Canadian Mounted Police use the Physical Ability Requirement Evaluation (PARE) for screening applicants. The purposes of this investigation were to identify those field tests of physical fitness that were associated with PARE performance and to determine which most accurately classified successful and unsuccessful PARE performers. The participants were 27 female and 21 male volunteers. Testing included measures of aerobic power, anaerobic power, agility, muscular strength, muscular endurance, and body composition. Multiple regression analysis revealed a three-variable model for males (70-lb bench press, standing long jump, and agility) explaining 79% of the variability in PARE time, whereas a one-variable model (agility) explained 43% of the variability for females. Analysis of the classification accuracy of the males' data was precluded because 91% of the males passed the PARE. Classification accuracy of the females' data, using logistic regression, produced a two-variable model (agility, 1.5-mile endurance run) with 93% overall classification accuracy. PMID:10457510
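
    The classification step can be sketched with a plain logistic regression, as below; the predictor values, pass/fail rule, and sample size are synthetic and only illustrate the mechanics, not the reported models.

    ```python
    # Hypothetical sketch: fit a logistic regression on two predictors (agility time,
    # 1.5-mile run time) and report overall classification accuracy for PARE pass/fail.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n = 48
    agility  = rng.normal(18.0, 2.0, n)     # seconds, assumed
    run_time = rng.normal(13.0, 1.5, n)     # minutes, assumed
    passed   = (agility + run_time + rng.normal(0, 1.0, n) < 31.5).astype(int)

    X = np.column_stack([agility, run_time])
    model = LogisticRegression().fit(X, passed)
    accuracy = model.score(X, passed)
    print(f"overall classification accuracy = {accuracy:.0%}")
    ```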

  16. Methodology for Evaluation of Diagnostic Performance

    SciTech Connect

    Metz, Charles E.

    2003-02-19

    developing statistical tests to evaluate the significance of measured differences between ROC curves. These are especially important tasks in medical applications, because various practical issues usually limit the number of patients with clearly established diagnostic truth that can be included in any study that seeks to measure diagnostic performance objectively. Other progress has been made in relating ROC analysis to cost/benefit analysis, and in generalizing ROC methods to accommodate some diagnostic tasks where more than two decision alternatives are available. ROC analysis clearly provides the most rigorous and fruitful approach for such assessments but, like many other powerful techniques that provide useful insight concerning complex situations, it currently suffers from limitations, particularly in evaluation studies that involve small case samples. However, the potential of this relatively new analytic approach and the concepts on which it is based have not been fully explored. The research proposed here is designed to refine and supplement existing ROC methodology to increase both the accuracy and the precision of its results.
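
    A basic empirical ROC analysis of the kind discussed here can be sketched as follows; the case counts and score distributions are invented, and the use of scikit-learn is an assumption made for illustration.

    ```python
    # Sketch of a basic ROC analysis: simulate scores for diseased and non-diseased
    # cases, then compute the empirical ROC curve and the area under it (AUC).
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(5)
    truth  = np.concatenate([np.ones(60), np.zeros(90)])       # small case sample
    scores = np.concatenate([rng.normal(1.0, 1.0, 60),         # diseased cases
                             rng.normal(0.0, 1.0, 90)])        # non-diseased cases

    fpr, tpr, thresholds = roc_curve(truth, scores)
    print(f"AUC = {roc_auc_score(truth, scores):.2f}, "
          f"{len(thresholds)} operating points on the empirical curve")
    ```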

  17. Performance Modeling: Understanding the Present and Predicting the Future

    SciTech Connect

    Bailey, David H.; Snavely, Allan

    2005-11-30

    We present an overview of current research in performance modeling, focusing on efforts underway in the Performance Evaluation Research Center (PERC). Using some new techniques, we are able to construct performance models that can be used to project the sustained performance of large-scale scientific programs on different systems, over a range of job and system sizes. Such models can be used by vendors in system designs, by computing centers in system acquisitions, and by application scientists to improve the performance of their codes.

  18. Test suite for evaluating performance of multithreaded MPI communication.

    SciTech Connect

    Thakur, R.; Gropp, W.; Mathematics and Computer Science; Univ. of Illinois

    2009-12-01

    As parallel systems are commonly being built out of increasingly large multicore chips, application programmers are exploring the use of hybrid programming models combining MPI across nodes and multithreading within a node. Many MPI implementations, however, are just starting to support multithreaded MPI communication, often focusing on correctness first and performance later. As a result, both users and implementers need some measure for evaluating the multithreaded performance of an MPI implementation. In this paper, we propose a number of performance tests that are motivated by typical application scenarios. These tests cover the overhead of providing the MPI_THREAD_MULTIPLE level of thread safety for user programs, the amount of concurrency in different threads making MPI calls, the ability to overlap communication with computation, and other features. We present performance results with this test suite on several platforms (Linux cluster, Sun and IBM SMPs) and MPI implementations (MPICH2, Open MPI, IBM, and Sun).

  19. COST AND PERFORMANCE MODELS FOR ELECTROSTATICALLY STIMULATED FABRIC FILTRATION

    EPA Science Inventory

    The report gives results of a survey of the literature on performance models for pulse-cleaned fabric filters. Each model is evaluated for its ability to predict average pressure drop from pilot plant data. The best model is chosen and used, in conjunction with pressure drop redu...

  20. MANUAL FOR THE EVALUATION OF LABORATORIES PERFORMING AQUATIC TOXICITY TESTS

    EPA Science Inventory

    This manual describes guidelines and standardized procedures for conducting on-site audits and evaluations of laboratories performing toxicity tests. ncluded are pre-survey information activities, on-site evaluation activities, evaluation criteria, organizational history and labo...

  1. Sequentially Executed Model Evaluation Framework

    SciTech Connect

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  2. Sequentially Executed Model Evaluation Framework

    SciTech Connect

    2014-02-14

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  3. Sequentially Executed Model Evaluation Framework

    2014-02-14

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  4. Sequentially Executed Model Evaluation Framework

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  5. 48 CFR 2452.216-73 - Performance evaluation plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Performance evaluation plan... 2452.216-73 Performance evaluation plan. As prescribed in 2416.406(e)(3), insert the following clause in all award fee contracts: Performance Evaluation Plan (AUG 1987) (a) The Government...

  6. 48 CFR 8.406-7 - Contractor Performance Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Performance Evaluation. Ordering activities must prepare an evaluation of contractor performance for each... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Contractor Performance Evaluation. 8.406-7 Section 8.406-7 Federal Acquisition Regulations System FEDERAL ACQUISITION...

  7. 24 CFR 570.491 - Performance and evaluation report.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Development Block Grant Program § 570.491 Performance and evaluation report. The annual performance and evaluation report shall be submitted in accordance with 24 CFR part 91. (Approved by the Office of Management... 24 Housing and Urban Development 3 2011-04-01 2010-04-01 true Performance and evaluation...

  8. 24 CFR 570.491 - Performance and evaluation report.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Development Block Grant Program § 570.491 Performance and evaluation report. The annual performance and evaluation report shall be submitted in accordance with 24 CFR part 91. (Approved by the Office of Management... 24 Housing and Urban Development 3 2010-04-01 2010-04-01 false Performance and evaluation...

  9. 40 CFR 35.515 - Evaluation of performance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Administrator's decision under the dispute processes in 40 CFR 31.70. (d) Evaluation reports. The Regional... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Evaluation of performance. 35.515....515 Evaluation of performance. (a) Joint evaluation process. The applicant and the...

  10. Evaluation of a lake whitefish bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; O'Connor, Daniel V.; Pothoven, Steven A.; Schneeberger, Philip J.; Rediske, Richard R.; O'Keefe, James P.; Bergstedt, Roger A.; Argyle, Ray L.; Brandt, Stephen B.

    2006-01-01

    We evaluated the Wisconsin bioenergetics model for lake whitefish Coregonus clupeaformis in the laboratory and in the field. For the laboratory evaluation, lake whitefish were fed rainbow smelt Osmerus mordax in four laboratory tanks during a 133-d experiment. Based on a comparison of bioenergetics model predictions of lake whitefish food consumption and growth with observed consumption and growth, we concluded that the bioenergetics model furnished significantly biased estimates of both food consumption and growth. On average, the model overestimated consumption by 61% and underestimated growth by 16%. The source of the bias was probably an overestimation of the respiration rate. We therefore adjusted the respiration component of the bioenergetics model to obtain a good fit of the model to the observed consumption and growth in our laboratory tanks. Based on the adjusted model, predictions of food consumption over the 133-d period fell within 5% of observed consumption in three of the four tanks and within 9% of observed consumption in the remaining tank. We used polychlorinated biphenyls (PCBs) as a tracer to evaluate model performance in the field. Based on our laboratory experiment, the efficiency with which lake whitefish retained PCBs from their food (ρ) was estimated at 0.45. We applied the bioenergetics model to Lake Michigan lake whitefish and then used PCB determinations of both lake whitefish and their prey from Lake Michigan to estimate ρ in the field. Application of the original model to Lake Michigan lake whitefish yielded a field estimate of 0.28, implying that the original formulation of the model overestimated consumption in Lake Michigan by 61%. Application of the bioenergetics model with the adjusted respiration component resulted in a field ρ estimate of 0.56, implying that this revised model underestimated consumption by 20%.

  11. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing Sun SPARC workstation network with PVM (Parallel Virtual Machine), which is a software system for linking clusters of machines. Second, a set of three basic applications was selected. The applications consist of a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the restricting factor to performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.
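
    The speedup and efficiency figures referred to above reduce to a simple calculation; the timings below are invented, not measurements from the PVM testbed.

    ```python
    # Sketch of the speedup / efficiency metrics used to summarize elapsed times.
    serial_time = 120.0                       # seconds on one workstation (assumed)
    elapsed = {2: 68.0, 4: 40.0, 8: 29.0}     # seconds with N workers (assumed)

    for n, t in elapsed.items():
        speedup = serial_time / t
        efficiency = speedup / n
        print(f"{n} workers: speedup = {speedup:.2f}, efficiency = {efficiency:.0%}")
    ```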

  12. Human Performance Models of Pilot Behavior

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Hooey, Becky L.; Byrne, Michael D.; Deutsch, Stephen; Lebiere, Christian; Leiden, Ken; Wickens, Christopher D.; Corker, Kevin M.

    2005-01-01

    Five modeling teams from industry and academia were chosen by the NASA Aviation Safety and Security Program to develop human performance models (HPM) of pilots performing taxi operations and runway instrument approaches with and without advanced displays. One representative from each team will serve as a panelist to discuss their team's model architecture, augmentations and advancements to HPMs, and aviation-safety related lessons learned. Panelists will discuss how modeling results are influenced by a model's architecture and structure, the role of the external environment, specific modeling advances, and future directions and challenges for human performance modeling in aviation.

  13. Towards Reliable Evaluation of Anomaly-Based Intrusion Detection Performance

    NASA Technical Reports Server (NTRS)

    Viswanathan, Arun

    2012-01-01

    This report describes the results of research into the effects of environment-induced noise on the evaluation process for anomaly detectors in the cyber security domain. This research was conducted during a 10-week summer internship program from the 19th of August, 2012 to the 23rd of August, 2012 at the Jet Propulsion Laboratory in Pasadena, California. The research performed lies within the larger context of the Los Angeles Department of Water and Power (LADWP) Smart Grid cyber security project, a Department of Energy (DoE) funded effort involving the Jet Propulsion Laboratory, California Institute of Technology and the University of Southern California/Information Sciences Institute. The results of the present effort constitute an important contribution towards building more rigorous evaluation paradigms for anomaly-based intrusion detectors in complex cyber physical systems such as the Smart Grid. Anomaly detection is a key strategy for cyber intrusion detection; it operates by identifying deviations from profiles of nominal behavior and is thus conceptually appealing for detecting "novel" attacks. Evaluating the performance of such a detector requires assessing: (a) how well it captures the model of nominal behavior, and (b) how well it detects attacks (deviations from normality). Current evaluation methods produce results that give insufficient insight into the operation of a detector, inevitably resulting in a significantly poor characterization of a detector's performance. In this work, we first describe a preliminary taxonomy of key evaluation constructs that are necessary for establishing rigor in the evaluation regime of an anomaly detector. We then focus on clarifying the impact of the operational environment on the manifestation of attacks in monitored data. We show how dynamic and evolving environments can introduce high variability into the data stream, perturbing detector performance. Prior research has focused on understanding the impact of this

  14. CTBT integrated verification system evaluation model supplement

    SciTech Connect

    EDENBURN,MICHAEL W.; BUNTING,MARCUS; PAYNE JR.,ARTHUR C.; TROST,LAWRENCE C.

    2000-03-02

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, "top-level" modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.
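
    As a loose illustration of what integrating detection estimates across technology subsystems can mean, the sketch below combines per-technology detection probabilities under an independence assumption; IVSEM's actual algorithms are not described in this abstract, and the probabilities are invented.

    ```python
    # Highly simplified sketch: combine per-technology probabilities of detection
    # assuming independent subsystems. The numbers and the independence assumption
    # are illustrative only and are not taken from IVSEM.
    p_detect = {"seismic": 0.70, "infrasound": 0.40, "radionuclide": 0.55, "hydroacoustic": 0.10}

    p_miss_all = 1.0
    for tech, p in p_detect.items():
        p_miss_all *= (1.0 - p)      # probability that this subsystem also misses

    print(f"integrated probability of detection = {1.0 - p_miss_all:.3f}")
    ```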

  15. Evaluating conflation methods using uncertainty modeling

    NASA Astrophysics Data System (ADS)

    Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis

    2013-05-01

    The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.

  16. A Note for Missile Autopilot Performance Evaluation Test

    NASA Astrophysics Data System (ADS)

    Eguchi, Hirofumi

    The essential benefit of HardWare-In-the-Loop (HWIL) simulation can be summarized as follows: the performance of the autopilot system is evaluated realistically, without modeling error, by using actual hardware such as seeker systems, autopilot systems, and servo equipment. The most important requirement in an HWIL simulation test is to set the homing seeker at the 3-axis gimbal center of the flight motion table. However, because of various reasons such as the length of the homing seeker, the structure of the flight motion table, and the shape of attachments, this setting requirement cannot always be satisfied. In this paper, the effect of this position error on guidance and control system performance is analyzed and evaluated.

  17. Unsupervised Performance Evaluation of Image Segmentation

    NASA Astrophysics Data System (ADS)

    Chabrier, Sebastien; Emile, Bruno; Rosenberger, Christophe; Laurent, Helene

    2006-12-01

    We present in this paper a study of unsupervised evaluation criteria that enable the quantification of the quality of an image segmentation result. These evaluation criteria compute some statistics for each region or class in a segmentation result. Such an evaluation criterion can be useful for different applications: the comparison of segmentation results, the automatic choice of the best fitted parameters of a segmentation method for a given image, or the definition of new segmentation methods by optimization. We first present the state of the art of unsupervised evaluation, and then we compare six unsupervised evaluation criteria. For this comparative study, we use a database composed of 8400 synthetic gray-level images segmented in four different ways. Vinet's measure (correct classification rate) is used as an objective criterion to compare the behavior of the different criteria. Finally, we present the experimental results on the segmentation evaluation of a few gray-level natural images.
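
    One simple member of this family of unsupervised criteria, the mean within-region gray-level variance (lower is better), is sketched below on a synthetic image; it is offered only as an illustration and is not one of the six criteria compared in the paper.

    ```python
    # Sketch of a simple unsupervised segmentation criterion: the pixel-weighted mean
    # of the within-region gray-level variances. Image and label map are synthetic.
    import numpy as np

    rng = np.random.default_rng(6)
    image  = np.concatenate([rng.normal(60, 5, 5000), rng.normal(180, 5, 5000)])
    labels = np.concatenate([np.zeros(5000, dtype=int), np.ones(5000, dtype=int)])

    def mean_within_region_variance(image, labels):
        regions = np.unique(labels)
        weights = np.array([(labels == r).mean() for r in regions])
        variances = np.array([image[labels == r].var() for r in regions])
        return float(np.sum(weights * variances))

    print(f"criterion = {mean_within_region_variance(image, labels):.1f}")
    ```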

  18. Findings and Preliminary Recommendations from the Michigan State and Indiana University Research Study of Value-Added Models to Evaluate Teacher Performance

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.

    2013-01-01

    The push for accountability in public schooling has extended to the measurement of teacher performance, accelerated by federal efforts through Race to the Top. Currently, a large number of states and districts across the country are computing measures of teacher performance based on the standardized test scores of their students and using them in…

  19. The design and implementation of an operational model evaluation system

    SciTech Connect

    Foster, K.T.

    1995-06-01

    An evaluation of an atmospheric transport and diffusion model's operational performance typically involves the comparison of the model's calculations with measurements of an atmospheric pollutant's temporal and spatial distribution. These evaluations, however, often use data from a small number of experiments and may be limited to producing some of the commonly quoted statistics based on the differences between model calculations and the measurements. This paper presents efforts to develop a model evaluation system geared for both the objective statistical analysis and the more subjective visualization of the inter-relationships between a model's calculations and the appropriate field measurement data.

  20. Performance and Architecture Lab Modeling Tool

    SciTech Connect

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior.

  1. Performance and Architecture Lab Modeling Tool

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior.

  2. Development of task network models of human performance in microgravity

    NASA Technical Reports Server (NTRS)

    Diaz, Manuel F.; Adam, Susan

    1992-01-01

    This paper discusses the utility of task-network modeling for quantifying human performance variability in microgravity. The data are gathered for: (1) improving current methodologies for assessing human performance and workload in the operational space environment; (2) developing tools for assessing alternative system designs; and (3) developing an integrated set of methodologies for the evaluation of performance degradation during extended duration spaceflight. The evaluation entailed an analysis of the Remote Manipulator System payload-grapple task performed on many shuttle missions. Task-network modeling can be used as a tool for assessing and enhancing human performance in man-machine systems, particularly for modeling long-duration manned spaceflight. Task-network modeling can be directed toward improving system efficiency by increasing the understanding of basic capabilities of the human component in the system and the factors that influence these capabilities.

  3. The Spiral-Interactive Program Evaluation Model.

    ERIC Educational Resources Information Center

    Khaleel, Ibrahim Adamu

    1988-01-01

    Describes the spiral interactive program evaluation model, which is designed to evaluate vocational-technical education programs in secondary schools in Nigeria. Program evaluation is defined; utility oriented and process oriented models for evaluation are described; and internal and external evaluative factors and variables that define each…

  4. Human visual performance model for crewstation design

    NASA Astrophysics Data System (ADS)

    Larimer, James O.; Prevost, Michael P.; Arditi, Aries R.; Azueta, Steven; Bergen, James R.; Lubin, Jeffrey

    1991-08-01

    In a cockpit, the crewstation of an airplane, the ability of the pilot to unambiguously perceive rapidly changing information both internal and external to the crewstation is critical. To assess the impact of crewstation design decisions on the pilot's ability to perceive information, the designer needs a means of evaluating the trade-offs that result from different designs. The Visibility Modeling Tool (VMT) provides the designer with a CAD tool for assessing these trade-offs. It combines the technologies of computer graphics, computational geometry, human performance modeling and equipment modeling into a computer-based interactive design tool. Through a simple interactive interface, a designer can manipulate design parameters such as the geometry of the cockpit, environmental factors such as ambient lighting, pilot parameters such as point of regard and adaptation state, and equipment parameters such as the location of displays, their size and the contrast of displayed symbology. VMT provides an end-to-end analysis that answers questions such as "Will the pilot be able to read the display?" Performance data can be projected, in the form of 3D contours, into the crewstation graphic model, providing the designer with a footprint of the operator's visual capabilities, defining, for example, the regions in which fonts of a particular type, size and contrast can be read without error. Geometrical data such as the pilot's volume field of view, occlusions caused by facial geometry, helmet margins, and objects in the crewstation can also be projected into the crewstation graphic model with respect to the coordinates of the aviator's eyes and fixation point. The intersections of the projections with objects in the crewstation delineate the area of coverage, masking, or occlusion associated with the objects. Objects in the crewstation space can be projected onto models of the operator's retinas. These projections can be used to provide the designer with the

  5. Modeling colloid transport for performance assessment.

    PubMed

    Contardi, J S; Turner, D R; Ahn, T M

    2001-02-01

    The natural system is expected to contribute to isolation at the proposed high-level nuclear waste (HLW) geologic repository at Yucca Mountain, NV (YM). In developing performance assessment (PA) computer models to simulate long-term behavior at YM, colloidal transport of radionuclides has been proposed as a critical factor because of the possible reduced interaction with the geologic media. Site-specific information on the chemistry and natural colloid concentration of saturated zone groundwaters in the vicinity of YM is combined with a surface complexation sorption model to evaluate the impact of natural colloids on calculated retardation factors (RF) for several radioelements of concern in PA. Inclusion of colloids into the conceptual model can reduce the calculated effective retardation significantly. Strongly sorbed radionuclides such as americium and thorium are most affected by pseudocolloid formation and transport, with a potential reduction in RF of several orders of magnitude. Radioelements that are less strongly sorbed under YM conditions, such as uranium and neptunium, are not affected significantly by colloid transport, and transport of plutonium in the valence state is only moderately enhanced. Model results showed no increase in the peak mean annual total effective dose equivalent (TEDE) within a compliance period of 10,000 years, although this is strongly dependent on container life in the base case scenario. At longer times, simulated container failures increase and the TEDE from the colloidal models increased by a factor of 60 from the base case. By using mechanistic models and sensitivity analyses to determine what parameters and transport processes affect the TEDE, colloidal transport in future versions of the TPA code can be represented more accurately. PMID:11288586

  6. Evaluation of a Mysis bioenergetics model

    USGS Publications Warehouse

    Chipps, S.R.; Bennett, D.H.

    2002-01-01

    Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10??C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
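
    The decomposition of mean square error mentioned above can be illustrated with a simple split into a systematic (squared-bias) part and a random part, as in the sketch below; the observed and predicted values are synthetic, not the Mysis data.

    ```python
    # Sketch of a mean-square-error decomposition into systematic (bias) and random
    # components; with ddof=0 the two parts sum exactly to the MSE.
    import numpy as np

    observed  = np.array([1.8, 2.2, 2.9, 3.5, 4.1])   # hypothetical units
    predicted = np.array([1.9, 2.4, 2.7, 3.8, 4.0])

    errors = predicted - observed
    mse = np.mean(errors ** 2)
    systematic = np.mean(errors) ** 2          # squared bias
    random_part = np.var(errors)               # spread of errors around the bias
    print(f"MSE = {mse:.3f}; systematic share = {systematic / mse:.0%}, "
          f"random share = {random_part / mse:.0%}")
    ```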

  7. FLUORESCENT TRACER EVALUATION OF PROTECTIVE CLOTHING PERFORMANCE

    EPA Science Inventory

    Field studies evaluating chemical protective clothing (CPC), which is often employed as a primary control option to reduce occupational exposures during pesticide applications, are limited. This study, supported by the U.S. Environmental Protection Agency (EPA), was designed to...

  8. Performance Evaluation of the NASA/KSC Transmission System

    NASA Technical Reports Server (NTRS)

    Christensen, Kenneth J.

    2000-01-01

    NASA-KSC currently uses three bridged 100-Mbps FDDI segments as its backbone for data traffic. The FDDI Transmission System (FTXS) connects the KSC industrial area, the KSC launch complex 39 area, and the Cape Canaveral Air Force Station. The report presents a performance modeling study of the FTXS and the proposed ATM Transmission System (ATXS). The focus of the study is on the performance of MPEG video transmission on these networks. Commercial modeling tools - the CACI Predictor and Comnet tools - were used. In addition, custom software tools were developed to characterize conversation pairs in Sniffer trace (capture) files for use as input to these tools. A baseline study of both non-launch and launch day data traffic on the FTXS is presented. MPEG-1 and MPEG-2 video traffic was characterized and the shaping of it evaluated. It is shown that the characteristics of a video stream have a direct effect on its performance in a network. It is also shown that shaping of video streams is necessary to prevent overflow losses and resulting poor video quality. The developed models can be used to predict when the existing FTXS will 'run out of room' and for optimizing the parameters of ATM links used for transmission of MPEG video. Future work with these models can provide useful input and validation to set-top box projects within the Advanced Networks Development group in NASA-KSC Development Engineering.

  9. Evaluating Performances of Solar-Energy Systems

    NASA Technical Reports Server (NTRS)

    Jaffe, L. D.

    1987-01-01

    The CONC11 computer program calculates the performance of dish-type solar thermal collectors and power systems. A solar thermal power system consists of one or more collectors, power-conversion subsystems, and power-processing subsystems. CONC11 is intended to aid the system designer in comparing the performance of various design alternatives. Written in Athena FORTRAN and Assembler.

  10. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method.

    PubMed

    Chaharsooghi, S K; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in research. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management in terms of social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended TBL approach in the supply chain by presenting a fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers, and Neofuzzy TOPSIS is proposed for finding the best solution of the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability in the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with sensitivity analysis. PMID:27379267
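
    For readers unfamiliar with TOPSIS, the crisp (non-fuzzy) version of the ranking mechanics is sketched below; the decision matrix, weights, and criterion directions are invented, and the Neofuzzy extension used in the paper is not reproduced.

    ```python
    # Minimal crisp TOPSIS sketch: normalize, weight, measure distances to the ideal
    # and anti-ideal solutions, and rank by relative closeness.
    import numpy as np

    # rows = candidate suppliers, columns = criteria (cost, quality, environmental score)
    X = np.array([[250.0, 0.80, 0.60],
                  [300.0, 0.90, 0.75],
                  [220.0, 0.70, 0.55]])
    weights = np.array([0.4, 0.35, 0.25])
    benefit = np.array([False, True, True])     # cost is to be minimized

    R = X / np.linalg.norm(X, axis=0)           # vector-normalize each criterion
    V = R * weights                             # weighted normalized matrix

    ideal      = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))

    d_plus  = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti_ideal, axis=1)
    closeness = d_minus / (d_plus + d_minus)

    print("closeness coefficients:", np.round(closeness, 3))
    print("best supplier index:", int(np.argmax(closeness)))
    ```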

  11. Evaluation of the performance of an annular diffusion denuder

    SciTech Connect

    Fan, B.J.; Cheng, Y.S.; Yeh, Hsu-Chi

    1994-11-01

    In air sampling, an annular diffusion denuder (ADD) is often used to trap specific gases from an air sample stream. The efficiency of an ADD in collecting a gas was considered in this study. A dimensional analysis showed that the collection efficiency depended on two parameters: the Peclet number and the annulus radii ratio. To determine collection efficiency, we calculated the fractional loss of the gas inside the denuder. In the calculation, the governing equations for gas concentration and flow field inside the annulus were solved numerically. After validating the methodology, a parametric calculation of the collection efficiency was made, and a one-equation model based on the calculation was developed. A comparison of the model and experimental data showed a variance coefficient of 3.26%. This confirmed that the performance of an annular denuder could be evaluated by this model.

  12. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method

    PubMed Central

    Chaharsooghi, S. K.; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in research. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management in terms of social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended TBL approach in the supply chain by presenting a fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers, and Neofuzzy TOPSIS is proposed for finding the best solution of the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability in the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with sensitivity analysis. PMID:27379267

  13. 48 CFR 2936.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Construction 2936.201 Evaluation of contractor performance. The HCA must establish procedures to evaluate... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Evaluation of contractor performance. 2936.201 Section 2936.201 Federal Acquisition Regulations System DEPARTMENT OF LABOR...

  14. Team Primacy Concept (TPC) Based Employee Evaluation and Job Performance

    ERIC Educational Resources Information Center

    Muniute, Eivina I.; Alfred, Mary V.

    2007-01-01

    This qualitative study explored how employees learn from Team Primacy Concept (TPC) based employee evaluation and how they use the feedback in performing their jobs. TPC-based evaluation is a form of multirater evaluation, during which the employee's performance is discussed by one's peers in a face-to-face team setting. The study used Kolb's…

  15. Guidelines for Performance Based Evaluation: Teachers, Counselors, Librarians. [New Edition.]

    ERIC Educational Resources Information Center

    Missouri State Dept. of Elementary and Secondary Education, Jefferson City.

    Guidelines for the performance-based evaluation of teachers, counselors, and librarians in the Missouri public schools are provided in this manual. Performance-based evaluation of school staff, mandated by state law, is described in terms of its philosophy and procedures, suggested evaluation criteria, and descriptors for each of the three job…

  16. A Perspective on Computational Human Performance Models as Design Tools

    NASA Technical Reports Server (NTRS)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  17. Performance-based Seismic Evaluation of RC Framed Building

    NASA Astrophysics Data System (ADS)

    Cinitha, A.; Umesha, P. K.; Iyer, Nagesh R.; Lakshmanan, N.

    2015-12-01

    This work presents a typical 6-storey reinforced concrete building frame analyzed and designed for four load cases considering three revisions of IS:1893 and IS:456. A conceptual framework and a detailed procedure for the performance evaluation of reinforced concrete framed buildings are presented, in contrast to the explicit force-based method described in the Indian codes of practice. Modelling issues related to generation of the capacity curve and the damage and vulnerability indices are discussed. Based on the studies, simple expressions are suggested to estimate the global damage indices in the hardening and elasto-plastic regions of the capacity spectra.

  18. 24 CFR 968.330 - PHA performance and evaluation report.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    § 968.330 PHA performance and evaluation report (24 CFR, Housing and Urban Development; 2010-04-01; ... 250 or More Public Housing Units). For any FFY in which a PHA has received assistance under this subpart, the PHA shall submit a Performance...

  19. A Performance Evaluation System for Professional Support Personnel.

    ERIC Educational Resources Information Center

    Stronge, James H.; Helm, Virginia M.

    1992-01-01

    Provides a conceptual framework for a professional support personnel (e.g., counselors, deans, librarians) performance evaluation system. Outlines steps in evaluating support personnel (identifying system needs, relating program expectations to job responsibilities, selecting performance indicators, setting job performance standards, documenting…

  20. Evaluation of performance impairment by spacecraft contaminants

    NASA Technical Reports Server (NTRS)

    Geller, I.; Hartman, R. J., Jr.; Mendez, V. M.

    1977-01-01

    The environmental contaminants (isolated as off-gases in Skylab and Apollo missions) were evaluated. Specifically, six contaminants were evaluated for their effects on the behavior of juvenile baboons. The concentrations of contaminants were determined through preliminary range-finding studies with laboratory rats. The contaminants evaluated were acetone, methyl ethyl ketone (MEK), methyl isobutyl ketone (MIBK), trichloroethylene (TCE), heptane and Freon 21. When the studies of the individual gases were completed, the baboons were also exposed to a mixture of MEK and TCE. The data obtained revealed alterations in the behavior of baboons exposed to relatively low levels of the contaminants. These findings were presented at the First International Symposium on Voluntary Inhalation of Industrial Solvents in Mexico City, June 21-24, 1976. A preprint of the proceedings is included.

  1. EVALUATION OF VENTILATION PERFORMANCE FOR INDOOR SPACE

    EPA Science Inventory

    The paper discusses a personal-computer-based application of computational fluid dynamics that can be used to determine the turbulent flow field and time-dependent/steady-state contaminant concentration distributions within isothermal indoor space. (NOTE: Ventilation performance ...

  2. Summary of photovoltaic system performance models

    SciTech Connect

    Smith, J. H.; Reiter, L. J.

    1984-01-15

    The purpose of this study is to provide a detailed overview of photovoltaics (PV) performance modeling capabilities that have been developed during recent years for analyzing PV system and component design and policy issues. A set of 10 performance models have been selected which span a representative range of capabilities from generalized first-order calculations to highly specialized electrical network simulations. A set of performance modeling topics and characteristics is defined and used to examine some of the major issues associated with photovoltaic performance modeling. Next, each of the models is described in the context of these topics and characteristics to assess its purpose, approach, and level of detail. Then each of the issues is discussed in terms of the range of model capabilities available and summarized in tabular form for quick reference. Finally, the models are grouped into categories to illustrate their purposes and perspectives.
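
    At the "generalized first-order" end of the spectrum the report describes, a PV performance model reduces to a temperature-corrected efficiency multiplied by incident power. The sketch below is such a first-order calculation; the NOCT-based cell temperature estimate and all default parameter values are generic assumptions, not inputs of any of the ten surveyed models.

        def pv_power_w(irradiance_w_m2, ambient_c, area_m2=10.0, eta_ref=0.15,
                       noct_c=45.0, beta_per_c=0.004, t_ref_c=25.0):
            """First-order PV array output: temperature-corrected efficiency
            times incident power. Parameter defaults are illustrative only."""
            t_cell = ambient_c + (noct_c - 20.0) * irradiance_w_m2 / 800.0
            eta = eta_ref * (1.0 - beta_per_c * (t_cell - t_ref_c))
            return max(eta, 0.0) * irradiance_w_m2 * area_m2

        # A clear, warm afternoon: 850 W/m^2 at 30 C ambient.
        print(f"array output ~ {pv_power_w(850.0, 30.0):.0f} W")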

  3. Summary of photovoltaic system performance models

    NASA Technical Reports Server (NTRS)

    Smith, J. H.; Reiter, L. J.

    1984-01-01

    A detailed overview of photovoltaics (PV) performance modeling capabilities developed for analyzing PV system and component design and policy issues is provided. A set of 10 performance models are selected which span a representative range of capabilities from generalized first order calculations to highly specialized electrical network simulations. A set of performance modeling topics and characteristics is defined and used to examine some of the major issues associated with photovoltaic performance modeling. Each of the models is described in the context of these topics and characteristics to assess its purpose, approach, and level of detail. The issues are discussed in terms of the range of model capabilities available and summarized in tabular form for quick reference. The models are grouped into categories to illustrate their purposes and perspectives.

  4. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kunst, O.; Cubasch, U.

    2014-12-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis, and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers, and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or web system. Therefore, plugged-in tools automatically gain transparency and reproducibility. Furthermore, when configurations match while starting an evaluation tool, the system suggests using results that have already been produced
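
    The record mentions that the data projects are indexed with Apache Solr behind a generic API. As a rough illustration of what a client-side query against such an index can look like, the sketch below uses Solr's standard /select HTTP endpoint via the requests library; the host, core name, and field names (project, variable, time_frequency, file) are hypothetical and not the evaluation system's actual schema or API.

        import requests

        SOLR_SELECT = "http://solr.example.org:8983/solr/files/select"   # hypothetical host/core

        def search_datasets(**filters):
            """Query a Solr index through the standard /select endpoint.
            The field names used here are assumptions, not the real CMOR-derived schema."""
            query = " AND ".join(f'{k}:"{v}"' for k, v in filters.items()) or "*:*"
            resp = requests.get(SOLR_SELECT, params={"q": query, "wt": "json", "rows": 20})
            resp.raise_for_status()
            return [doc.get("file") for doc in resp.json()["response"]["docs"]]

        # e.g. all monthly near-surface air temperature files from CMIP5
        for path in search_datasets(project="CMIP5", variable="tas", time_frequency="mon"):
            print(path)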

  5. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Ulbrich, Uwe; Cubasch, Ulrich

    2015-04-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis, and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers, and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or web system. Therefore, plugged-in tools automatically gain transparency and reproducibility. Furthermore, when configurations match while starting an evaluation tool, the system suggests using results that have already been produced

  6. A COMPREHENSIVE EVALUATION OF THE ETA-CMAQ FORECAST MODEL PERFORMANCE FOR O3, ITS RELATED PRECURSORS, AND METEOROLOGICAL PARAMETERS DURING THE 2004 ICARTT STUDY

    EPA Science Inventory

    In this study, the ability of the Eta-CMAQ forecast model to represent the vertical profiles of O3, related chemical species (CO, NO, NO2, H2O2, CH2O, HNO3, SO2, PAN, isoprene, toluene), and meteorological paramete...

  7. Evaluation of Learning Performance of E-Learning in China: A Methodology Based on Change of Internal Mental Model of Learners

    ERIC Educational Resources Information Center

    Zhang, Lingxian; Zhang, Xiaoshuan; Duan, Yanqing; Fu, Zetian; Wang, Yanwei

    2010-01-01

    This paper presents a method of assessment on how Human-Computer Interaction (HCI) and animation influence the psychological process of learning by comparing a traditional web design course and an e-learning web design course, based on the Change of Internal Mental Model of Learners. We constructed the e-learning course based on Gagne's learning…

  8. Evaluation of genome-enabled selection for bacterial cold water disease resistance using progeny performance data in Rainbow Trout: Insights on genotyping methods and genomic prediction models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Bacterial cold water disease (BCWD) causes significant economic losses in salmonid aquaculture, and traditional family-based breeding programs aimed at improving BCWD resistance have been limited to exploiting only between-family variation. We used genomic selection (GS) models to predict genomic br...

  9. Virginia Higher Education Performance Funding Model.

    ERIC Educational Resources Information Center

    Virginia State Council of Higher Education, Richmond.

    This report reviews the proposed Virginia Higher Education Performance Funding Model. It includes an overview of the proposed funding model, examples of likely funding scenarios (including determination of block grants, assumptions underlying performance funding for four-year and two-year institutions); information on deregulation/decentralization…

  10. NEUROBEHAVIORAL EVALUATION SYSTEM (NES) AND SCHOOL PERFORMANCE

    EPA Science Inventory

    The aims of this study were to explore the validity of a set of computerized tests, and to explore the validity of reaction time variability as an index of sustained attention. In Phase I, 105 7- to 10-year-old children were presented with five tests from the Neurobehavioral Evalu...

  11. Evaluating the Performance of Administrators: The Process and the Tools.

    ERIC Educational Resources Information Center

    Herman, Jerry J.

    1991-01-01

    Describes the various roles (monitor, information gatherer, communicator and feedback provider, clarifier, coanalyzer, assister, resource provider, and motivator) played by the supervisor when evaluating administrators. Presents a sample evaluation instrument assessing five major performance areas (management, professionalism, leadership,…

  12. Performance evaluation of 1 kw PEFC

    SciTech Connect

    Komaki, Hideaki; Tsuchiyama, Syozo

    1996-12-31

    This report covers part of a joint study on a PEFC propulsion system for surface ships, summarized in a presentation to this Seminar, entitled "Study on a PEFC Propulsion System for Surface Ships", and which envisages application to a 1,500 DWT cargo vessel. The aspect treated here concerns the effects brought on PEFC operating performance by conditions particular to shipboard operation. The performance characteristics were examined through tests performed on a 1 kW stack and on a single cell (manufactured by Fuji Electric Co., Ltd.). The tests covered the items (1) to (4) cited in the headings of the sections that follow. Specifications of the stack and single cell are as given.

  13. Mathematical model of bisubject qualimetric arbitrary objects evaluation

    NASA Astrophysics Data System (ADS)

    Morozova, A.

    2016-04-01

    An analytical basis is developed, together with the process of formalizing the information spaces of a mathematical model for bisubject qualimetric evaluation of arbitrary objects. The model is applicable to solving control problems for both technical and socio-economic systems, where objects are evaluated using systems of parameters generated by different subjects, taking into account their performance and decision-making priorities.

  14. NREL Evaluates Performance of Hydraulic Hybrid Refuse Vehicles

    SciTech Connect

    2015-09-01

    This highlight describes NREL's evaluation of the in-service performance of 10 next-generation hydraulic hybrid refuse vehicles (HHVs), 8 previous-generation (model year 2013) HHVs, and 8 comparable conventional diesel vehicles operated by Miami-Dade County's Public Works and Waste Management Department in southern Florida. Launched in March 2015, the on-road portion of this 12-month evaluation focuses on collecting and analyzing vehicle performance data - fuel economy, maintenance costs, and drive cycles - from the HHVs and the conventional diesel vehicles. The fuel economy of heavy-duty vehicles, such as refuse trucks, is largely dependent on the load carried and the drive cycles on which they operate. In the right applications, HHVs offer a potential fuel-cost advantage over their conventional counterparts. This advantage is contingent, however, on driving behavior and drive cycles with high kinetic intensity that take advantage of regenerative braking. NREL's evaluation will assess the performance of this technology in commercial operation and help Miami-Dade County determine the ideal routes for maximizing the fuel-saving potential of its HHVs. Based on the field data, NREL will develop a validated vehicle model using the Future Automotive Systems Technology Simulator, also known as FASTSim, to study the impacts of route selection and other vehicle parameters. NREL is also analyzing fueling and maintenance data to support total-cost-of-ownership estimations and forecasts. The study aims to improve understanding of the overall usage and effectiveness of HHVs in refuse operation compared to similar conventional vehicles and to provide unbiased technical information to interested stakeholders.
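
    The highlight ties the fuel-saving potential of HHVs to drive cycles with high kinetic intensity. The sketch below computes kinetic intensity from a 1 Hz speed trace using one common definition (characteristic acceleration divided by the square of the aerodynamic speed); it is a generic formulation, not code from FASTSim, and the two speed traces are synthetic.

        import numpy as np

        def kinetic_intensity(speed_m_s, dt_s=1.0):
            """Kinetic intensity (1/m): characteristic acceleration / aerodynamic speed^2."""
            v = np.asarray(speed_m_s, dtype=float)
            distance = float(v.sum() * dt_s)                     # m travelled (rectangle rule)
            dke = np.diff(0.5 * v ** 2)                          # changes in specific kinetic energy
            char_accel = float(dke[dke > 0].sum()) / distance    # m/s^2, positive gains only
            v_aero_sq = float((v ** 3).sum() * dt_s) / distance  # m^2/s^2, cube-mean speed
            return char_accel / v_aero_sq

        # Synthetic stop-and-go refuse route vs. a steady-speed cycle.
        stop_go = np.tile(np.concatenate([np.linspace(0, 10, 15), np.linspace(10, 0, 15)]), 20)
        steady = np.full(600, 15.0)
        print(f"stop-and-go KI: {kinetic_intensity(stop_go):.4f} 1/m")
        print(f"steady-speed KI: {kinetic_intensity(steady):.6f} 1/m")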

  15. Using Ratio Analysis to Evaluate Financial Performance.

    ERIC Educational Resources Information Center

    Minter, John; And Others

    1982-01-01

    The ways in which ratio analysis can help in long-range planning, budgeting, and asset management to strengthen financial performance and help avoid financial difficulties are explained. Types of ratios considered include balance sheet ratios, net operating ratios, and contribution and demand ratios. (MSE)

  16. Performance Evaluation Gravity Probe B Design

    NASA Technical Reports Server (NTRS)

    Francis, Ronnie; Wells, Eugene M.

    1996-01-01

    This final report documents the work done to develop a 6 degree-of-freedom simulation of the Lockheed Martin Gravity Probe B (GPB) Spacecraft. This simulation includes the effects of vehicle flexibility and propellant slosh. The simulation was used to investigate the control performance of the spacecraft when subjected to realistic on orbit disturbances.

  17. Game Performance Evaluation in Male Goalball Players.

    PubMed

    Molik, Bartosz; Morgulec-Adamowicz, Natalia; Kosmol, Andrzej; Perkowski, Krzysztof; Bednarczuk, Grzegorz; Skowroński, Waldemar; Gomez, Miguel Angel; Koc, Krzysztof; Rutkowska, Izabela; Szyman, Robert J

    2015-11-22

    Goalball is a Paralympic sport exclusively for athletes who are visually impaired and blind. The aims of this study were twofold: to describe game performance of elite male goalball players based upon the degree of visual impairment, and to determine if game performance was related to anthropometric characteristics of elite male goalball players. The study sample consisted of 44 male goalball athletes. A total of 38 games were recorded during the Summer Paralympic Games in London 2012. Observations were reported using the Game Efficiency Sheet for Goalball. Additional anthropometric measurements included body mass (kg), body height (cm), the arm span (cm) and length of the body in the defensive position (cm). The results differentiating both groups showed that the players with total blindness obtained higher means than the players with visual impairment for game indicators such as the sum of defense (p = 0.03) and the sum of good defense (p = 0.04). The players with visual impairment obtained higher results than those with total blindness for attack efficiency (p = 0.04), the sum of penalty defenses (p = 0.01), and fouls (p = 0.01). The study showed that athletes with blindness demonstrated higher game performance in defence. However, athletes with visual impairment presented higher efficiency in offensive actions. The analyses confirmed that body mass, body height, the arm span and length of the body in the defensive position did not differentiate players' performance at the elite level. PMID:26834872

  18. Game Performance Evaluation in Male Goalball Players

    PubMed Central

    Molik, Bartosz; Morgulec-Adamowicz, Natalia; Kosmol, Andrzej; Perkowski, Krzysztof; Bednarczuk, Grzegorz; Skowroński, Waldemar; Gomez, Miguel Angel; Koc, Krzysztof; Rutkowska, Izabela; Szyman, Robert J

    2015-01-01

    Goalball is a Paralympic sport exclusively for athletes who are visually impaired and blind. The aims of this study were twofold: to describe game performance of elite male goalball players based upon the degree of visual impairment, and to determine if game performance was related to anthropometric characteristics of elite male goalball players. The study sample consisted of 44 male goalball athletes. A total of 38 games were recorded during the Summer Paralympic Games in London 2012. Observations were reported using the Game Efficiency Sheet for Goalball. Additional anthropometric measurements included body mass (kg), body height (cm), the arm span (cm) and length of the body in the defensive position (cm). The results differentiating both groups showed that the players with total blindness obtained higher means than the players with visual impairment for game indicators such as the sum of defense (p = 0.03) and the sum of good defense (p = 0.04). The players with visual impairment obtained higher results than those with total blindness for attack efficiency (p = 0.04), the sum of penalty defenses (p = 0.01), and fouls (p = 0.01). The study showed that athletes with blindness demonstrated higher game performance in defence. However, athletes with visual impairment presented higher efficiency in offensive actions. The analyses confirmed that body mass, body height, the arm span and length of the body in the defensive position did not differentiate players’ performance at the elite level. PMID:26834872
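
    The excerpt reports group differences with p-values but does not state which test produced them. The sketch below runs a Mann-Whitney U test, a common choice for comparing game indicators between two independent groups of players; the per-player counts are invented for illustration.

        from scipy.stats import mannwhitneyu

        # Hypothetical per-player totals of "good defense" actions.
        total_blindness   = [14, 18, 11, 20, 16, 13, 17, 15]
        visual_impairment = [10, 12, 9, 14, 11, 8, 13, 10]

        stat, p = mannwhitneyu(total_blindness, visual_impairment, alternative="two-sided")
        print(f"U = {stat:.1f}, p = {p:.3f}")
        if p < 0.05:
            print("The groups differ on this game indicator at the 5% level.")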

  19. An hierarchical approach to performance evaluation of expert systems

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kavi, Srinu

    1985-01-01

    The number and size of expert systems are growing rapidly. Formal evaluation of these systems - which is not performed for many systems - increases their acceptability by the user community and hence their success. Hierarchical evaluation, as previously conducted for computer systems, is applied here to expert system performance evaluation. Expert systems are also evaluated by treating them as software systems (or programs). This paper reports many of the basic concepts and ideas in the Performance Evaluation of Expert Systems Study being conducted at the University of Southwestern Louisiana.

  20. Optical Performance Modeling of FUSE Telescope Mirror

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Ohl, Raymond G.; Friedman, Scott D.; Moos, H. Warren

    2000-01-01

    We describe the Metrology Data Processor (METDAT), the Optical Surface Analysis Code (OSAC), and their application to the image evaluation of the Far Ultraviolet Spectroscopic Explorer (FUSE) mirrors. The FUSE instrument - designed and developed by the Johns Hopkins University and launched in June 1999 - is an astrophysics satellite which provides high resolution spectra (lambda/Delta(lambda) = 20,000 - 25,000) in the wavelength region from 90.5 to 118.7 nm. The FUSE instrument is comprised of four co-aligned, normal incidence, off-axis parabolic mirrors, four Rowland circle spectrograph channels with holographic gratings, and delay line microchannel plate detectors. The OSAC code provides a comprehensive analysis of optical system performance, including the effects of optical surface misalignments, low spatial frequency deformations described by discrete polynomial terms, mid- and high-spatial frequency deformations (surface roughness), and diffraction due to the finite size of the aperture. Both normal incidence (traditionally infrared, visible, and near ultraviolet mirror systems) and grazing incidence (x-ray mirror systems) systems can be analyzed. The code also properly accounts for reflectance losses on the mirror surfaces. Low frequency surface errors are described in OSAC by using Zernike polynomials for normal incidence mirrors and Legendre-Fourier polynomials for grazing incidence mirrors. The scatter analysis of the mirror is based on scalar scatter theory. The program accepts simple autocovariance (ACV) function models or power spectral density (PSD) models derived from mirror surface metrology data as input to the scatter calculation. The end product of the program is a user-defined pixel array containing the system Point Spread Function (PSF). The METDAT routine is used in conjunction with the OSAC program. This code reads in laboratory metrology data in a normalized format. The code then fits the data using Zernike polynomials for normal incidence
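
    OSAC itself is not sketched here, but the core idea of turning a low-order surface-error description into a point spread function can be illustrated with scalar diffraction: build a complex pupil from the aperture and a Zernike-style wavefront error, and take the squared magnitude of its Fourier transform. The wavelength, aperture sampling, and the 10 nm RMS defocus term below are arbitrary choices, not FUSE metrology values.

        import numpy as np

        n, wavelength = 512, 100e-9                   # grid size; ~100 nm FUV wavelength (arbitrary)
        x = np.linspace(-1.0, 1.0, n)
        xx, yy = np.meshgrid(x, x)
        r = np.hypot(xx, yy)

        aperture = (r <= 1.0).astype(float)           # unit-radius circular pupil
        # Low-order wavefront error: 10 nm RMS of defocus (Zernike-like 2r^2 - 1 term).
        wfe = 10e-9 * np.sqrt(3.0) * (2.0 * r**2 - 1.0) * aperture

        def peak_psf(pupil):
            return (np.abs(np.fft.fft2(pupil, s=(2 * n, 2 * n))) ** 2).max()

        aberrated = aperture * np.exp(2j * np.pi * wfe / wavelength)
        strehl = peak_psf(aberrated) / peak_psf(aperture.astype(complex))
        print(f"approximate Strehl ratio for 10 nm RMS defocus at 100 nm: {strehl:.3f}")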

  1. Transmutation Fuel Performance Code Thermal Model Verification

    SciTech Connect

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.
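
    The report's verification details are not reproduced in the record; the sketch below only shows the kind of closed-form benchmark a thermal-model check can be compared against: the steady-state radial temperature profile of a solid cylindrical pellet with uniform heat generation, T(r) = T_s + q'''(R^2 - r^2)/(4k). The property values and surface temperature are illustrative, not GNEP fuel data.

        import numpy as np

        def pellet_temperature_k(r_m, q_vol_w_m3, k_w_mk, radius_m, surface_temp_k):
            """Analytic T(r) for a solid cylinder with uniform volumetric heating."""
            return surface_temp_k + q_vol_w_m3 * (radius_m**2 - np.asarray(r_m)**2) / (4.0 * k_w_mk)

        R = 0.0041                    # pellet radius, m (illustrative)
        q3 = 3.0e8                    # volumetric heat rate, W/m^3 (illustrative)
        k = 3.0                       # fuel thermal conductivity, W/m-K (illustrative)
        r = np.linspace(0.0, R, 5)
        print(pellet_temperature_k(r, q3, k, R, surface_temp_k=700.0).round(1))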

  2. Evaluating Suit Fit Using Performance Degradation

    NASA Technical Reports Server (NTRS)

    Margerum, Sarah E.; Cowley, Matthew; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar

    2011-01-01

    The Mark III suit has multiple sizes of suit components (arm, leg, and gloves) as well as sizing inserts to tailor the fit of the suit to an individual. This study sought to determine a way to identify the point at which an ideal suit fit transforms into a bad fit and how to quantify this breakdown using mobility-based physical performance data. This study examined the changes in human physical performance via degradation of the elbow and wrist range of motion of the planetary suit prototype (Mark III) with respect to changes in sizing, as well as how to apply that knowledge to suit sizing options and improvements in suit fit. The methods implemented in this study focused on changes in elbow and wrist mobility due to incremental suit sizing modifications. This incremental sizing was within a range that included both optimum and poor fit. Suited range of motion data was collected using a motion analysis system for nine isolated and functional tasks encompassing the elbow and wrist joints. A total of four subjects were tested with motions involving both arms simultaneously as well as the right arm only. The results were then compared across sizing configurations. The results of this study indicate that range of motion may be used as a viable parameter to quantify at what stage suit sizing causes a detriment in performance; however, the human performance decrement appeared to be based on the interaction of multiple joints along a limb, not a single joint angle. The study was able to identify a preliminary method to quantify the impact of size on performance and to develop a means to gauge tolerances around optimal size. More work is needed to improve the assessment of optimal fit and to compensate for multiple joint interactions.

  3. Evaluating Suit Fit Using Performance Degradation

    NASA Technical Reports Server (NTRS)

    Margerum, Sarah E.; Cowley, Matthew; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar

    2012-01-01

    The Mark III planetary technology demonstrator space suit can be tailored to an individual by swapping the modular components of the suit, such as the arms, legs, and gloves, as well as adding or removing sizing inserts in key areas. A method was sought to identify the transition from an ideal suit fit to a bad fit and how to quantify this breakdown using a metric of mobility-based human performance data. To this end, the degradation of the range of motion of the elbow and wrist of the suit as a function of suit sizing modifications was investigated to attempt to improve suit fit. The sizing range tested spanned optimal and poor fit and was adjusted incrementally in order to compare each joint angle across five different sizing configurations. Suited range of motion data were collected using a motion capture system for nine isolated and functional tasks utilizing the elbow and wrist joints. A total of four subjects were tested with motions involving both arms simultaneously as well as the right arm by itself. Findings indicated that no single joint drives the performance of the arm as a function of suit size; instead it is based on the interaction of multiple joints along a limb. To determine a size adjustment range where an individual can operate the suit at an acceptable level, a performance detriment limit was set. This user-selected limit reveals the task-dependent tolerance of the suit fit around optimal size. For example, the isolated joint motion indicated that the suit can deviate from optimal by as little as -0.6 in to -2.6 in before experiencing a 10% performance drop in the wrist or elbow joint. The study identified a preliminary method to quantify the impact of size on performance and developed a new way to gauge tolerances around optimal size.
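
    The bookkeeping implied by the abstract - compare each sizing configuration's range of motion against the optimal fit and flag configurations that exceed a user-selected performance-detriment limit - can be sketched as below. The task names, joint angles, and sizing offsets are made up; only the 10% limit echoes the example in the abstract.

        # ROM (degrees) per task for each sizing offset (inches from optimal fit).
        rom = {
             0.0: {"elbow_flexion": 120.0, "wrist_extension": 55.0},
            -1.0: {"elbow_flexion": 116.0, "wrist_extension": 52.0},
            -2.0: {"elbow_flexion": 109.0, "wrist_extension": 47.0},
            -3.0: {"elbow_flexion":  98.0, "wrist_extension": 40.0},
        }
        LIMIT = 0.10                      # user-selected performance-detriment limit

        baseline = rom[0.0]
        for offset in sorted(rom, reverse=True):
            losses = {task: 1.0 - rom[offset][task] / baseline[task] for task in baseline}
            worst_task, worst = max(losses.items(), key=lambda kv: kv[1])
            status = "exceeds limit" if worst > LIMIT else "ok"
            print(f"{offset:+.1f} in: worst ROM loss {worst:.1%} ({worst_task}) -> {status}")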

  4. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    SciTech Connect

    Tidball, Rick; Bluestein, Joel; Rodriguez, Nick; Knoke, Stu

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
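
    Since the six data sets were also compared on levelized cost of energy, a minimal LCOE calculation is sketched below using a capital recovery factor; the formula is the standard simplified form, and every input value is a placeholder rather than a number from any of the data sets.

        def lcoe_usd_per_mwh(capital_usd_kw, fixed_om_usd_kw_yr, variable_om_usd_mwh,
                             fuel_usd_mmbtu, heat_rate_btu_kwh, capacity_factor,
                             discount_rate=0.07, lifetime_yr=30):
            """Simplified levelized cost of energy; all inputs are illustrative."""
            crf = (discount_rate * (1 + discount_rate) ** lifetime_yr /
                   ((1 + discount_rate) ** lifetime_yr - 1))       # capital recovery factor
            mwh_per_kw_yr = 8760.0 * capacity_factor / 1000.0
            capital = capital_usd_kw * crf / mwh_per_kw_yr
            fixed_om = fixed_om_usd_kw_yr / mwh_per_kw_yr
            fuel = fuel_usd_mmbtu * heat_rate_btu_kwh / 1000.0     # $/MWh of generation
            return capital + fixed_om + variable_om_usd_mwh + fuel

        # Placeholder inputs loosely shaped like a gas combined-cycle plant.
        print(f"LCOE ~ {lcoe_usd_per_mwh(1000, 15, 3, 4.0, 7000, 0.60):.1f} $/MWh")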

  5. Performance Evaluation of Resource Management in Cloud Computing Environments

    PubMed Central

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price. PMID:26555730

  6. Performance Evaluation of Resource Management in Cloud Computing Environments.

    PubMed

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price. PMID:26555730

  7. Performance of an integrated network model

    PubMed Central

    Lehmann, François; Dunn, David; Beaulieu, Marie-Dominique; Brophy, James

    2016-01-01

    Objective To evaluate the changes in accessibility, patients’ care experiences, and quality-of-care indicators following a clinic’s transformation into a fully integrated network clinic. Design Mixed-methods study. Setting Verdun, Que. Participants Data on all patient visits were used, in addition to 2 distinct patient cohorts: 134 patients with chronic illness (ie, diabetes, arteriosclerotic heart disease, or both); and 450 women between the ages of 20 and 70 years. Main outcome measures Accessibility was measured by the number of walk-in visits, scheduled visits, and new patient enrolments. With the first cohort, patients’ care experiences were measured using validated serial questionnaires; and quality-of-care indicators were measured using biologic data. With the second cohort, quality of preventive care was measured using the number of Papanicolaou tests performed as a surrogate marker. Results Despite a negligible increase in the number of physicians, there was an increase in accessibility after the clinic’s transition to an integrated network model. During the first 4 years of operation, the number of scheduled visits more than doubled, nonscheduled visits (walk-in visits) increased by 29%, and enrolment of vulnerable patients (those with chronic illnesses) at the clinic remained high. Patient satisfaction with doctors was rated very highly at all points of time that were evaluated. While the number of Pap tests done did not increase with time, the proportion of patients meeting hemoglobin A1c and low-density lipoprotein guideline target levels increased, as did the number of patients tested for microalbuminuria. Conclusion Transformation to an integrated network model of care led to increased efficiency and enhanced accessibility with no negative effects on the doctor-patient relationship. Improvements in biologic data also suggested better quality of care. PMID:27521410

  8. Relating Performance Evaluation to Compensation of Public Sector Employees

    ERIC Educational Resources Information Center

    Van Adelsberg, Henri

    1978-01-01

    Provides a variety of approaches to administering individual salaries on the basis of evaluated performance. Describes methods of precalculating and controlling salary expenditures while simultaneously administering salaries on a "relative" rather than "absolute" performance rating system. (Author)

  9. Generic CSP Performance Model for NREL's System Advisor Model: Preprint

    SciTech Connect

    Wagner, M. J.; Zhu, G.

    2011-08-01

    The suite of concentrating solar power (CSP) modeling tools in NREL's System Advisor Model (SAM) includes technology performance models for parabolic troughs, power towers, and dish-Stirling systems. Each model provides the user with unique capabilities that are catered to typical design considerations seen in each technology. Since the scope of the various models is generally limited to common plant configurations, new CSP technologies, component geometries, and subsystem combinations can be difficult to model directly in the existing SAM technology models. To overcome the limitations imposed by representative CSP technology models, NREL has developed a 'Generic Solar System' (GSS) performance model for use in SAM. This paper discusses the formulation and performance considerations included in this model and verifies the model by comparing its results with more detailed models.

  10. Intern Performance in Three Supervisory Models

    ERIC Educational Resources Information Center

    Womack, Sid T.; Hanna, Shellie L.; Callaway, Rebecca; Woodall, Peggy

    2011-01-01

    Differences in intern performance, as measured by a Praxis III-similar instrument were found between interns supervised in three supervisory models: Traditional triad model, cohort model, and distance supervision. Candidates in this study's particular form of distance supervision were not as effective as teachers as candidates in…

  11. Diagnostics and performance evaluation of multikilohertz capacitors

    SciTech Connect

    McDuff, G.; Nunnally, W.C.; Rust, K.; Sarjeant, J.

    1980-01-01

    The observed performance of nanofarad polypropylene-silicone oil, mica paper, and polytetrafluoroethylene-silicone oil capacitors discharged in a 100-ns, 1-kA pulse with a pulse repetition frequency of 1 kHz is presented. The test facility circuit, diagnostic parameters, and the preliminary test schedule are outlined as a basis for discussion of the observed failure locations and proposed failure mechanisms. Most of the test data and discussion presented involves the polypropylene-silicone oil units.

  12. Chiller performance evaluation report. Final report

    SciTech Connect

    Wylie, D.

    1998-12-01

    The Electric Power Research Institute (EPRI) directed ASW Engineering Management to analyze the performance of a new package chiller manufactured by VaCom, Inc. The chiller was operated for approximately 22 months using three different refrigerants (R-407C, R-22 and R-507). The objective was to identify the chiller's energy efficiency with each of the three refrigerants. This report presents ASW's findings and associated backup information.

  13. The Discrepancy Evaluation Model: A Systematic Approach for the Evaluation of Career Planning and Placement Programs.

    ERIC Educational Resources Information Center

    Buttram, Joan L.; Covert, Robert W.

    The Discrepancy Evaluation Model (DEM), developed in 1966 by Malcolm Provus, provides information for program assessment and program improvement. Under the DEM, evaluation is defined as the comparison of an actual performance to a desired standard. The DEM embodies five stages of evaluation based upon a program's natural development: program…

  14. CTBT Integrated Verification System Evaluation Model

    SciTech Connect

    Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.

    1997-10-01

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the US Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level, modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.
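
    IVSEM's actual detection and location algorithms are not described in the record, so the sketch below only illustrates the integration idea in its simplest form: if each technology detects an event independently with some probability, the probability that at least k technologies detect it follows by summing over subsets. The per-technology probabilities are invented and, in reality, event- and location-dependent.

        from itertools import combinations
        from math import prod

        # Invented values; in practice these depend on event size, location, and evasion.
        p = {"seismic": 0.85, "infrasound": 0.40, "radionuclide": 0.55, "hydroacoustic": 0.10}

        def p_at_least(k, probs):
            """P(at least k independent technologies detect the event)."""
            names = list(probs)
            total = 0.0
            for m in range(k, len(names) + 1):
                for subset in combinations(names, m):
                    total += (prod(probs[n] for n in subset) *
                              prod(1.0 - probs[n] for n in names if n not in subset))
            return total

        print(f"P(detected by >= 1 technology) = {p_at_least(1, p):.3f}")
        print(f"P(detected by >= 2 technologies) = {p_at_least(2, p):.3f}")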

  15. Performance Evaluation of a Clinical PACS Module

    NASA Astrophysics Data System (ADS)

    Taira, Ricky K.; Cho, Paul S.; Huang, H. K.; Mankovich, Nicholas J.; Boechat, Maria I.

    1989-05-01

    Picture archiving and communication systems (PACS) are now clinically available in limited radiologic applications. The benefits, acceptability, and reliability of these systems have thus far been mainly speculative and anecdotal. This paper discusses the evaluation of a PACS module implemented in the pediatric radiology section of a 700-bed teaching hospital. The PACS manages all pediatric inpatient images including conventional x-rays and contrast studies (obtained with a computed radiography system), magnetic resonance images, and relevant ultrasound images. A six-monitor workstation is available for image review.

  16. Performance Evaluation of Hyperspectral Chemical Detection Systems

    NASA Astrophysics Data System (ADS)

    Truslow, Eric

    Remote sensing of chemical vapor plumes is a difficult but important task with many military and civilian applications. Hyperspectral sensors operating in the long wave infrared (LWIR) regime have well demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis-testing problem that standard detection metrics do not fully describe. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and an identification metric based on the Dice index. Using the developed metrics, we demonstrate that using a detector bank followed by an identifier can achieve superior performance relative to either algorithm individually. Performance of the cascaded system relies on the first pass reliably detecting the plume. However, detection performance is severely hampered by the inclusion of plume pixels in estimates of background quantities. We demonstrate that this problem, known as contamination, can be mitigated by iteratively applying a spatial filter to the detected pixels. Multiple detection and filtering passes can remove nearly all contamination from the background estimates, a vast improvement over single-pass techniques.
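
    The exact partitioning and weighting of the confusion matrix is not given in the record; the sketch below only shows the Dice index itself computed per library chemical from a confusion matrix of identification outcomes, with made-up counts and placeholder chemical names.

        import numpy as np

        chems = ["chem_A", "chem_B", "chem_C"]          # hypothetical library entries
        # Rows: chemical actually present; columns: chemical identified. Made-up counts.
        cm = np.array([[42,  3,  5],
                       [ 4, 37,  9],
                       [ 6,  8, 36]])

        def dice_per_class(confusion):
            tp = np.diag(confusion).astype(float)
            fp = confusion.sum(axis=0) - tp              # identified as the class, wrongly
            fn = confusion.sum(axis=1) - tp              # class present but not identified
            return 2.0 * tp / (2.0 * tp + fp + fn)

        for name, d in zip(chems, dice_per_class(cm)):
            print(f"{name}: Dice index = {d:.3f}")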

  17. Performance evaluation of blind steganalysis classifiers

    NASA Astrophysics Data System (ADS)

    Hogan, Mark T.; Silvestre, Guenole C. M.; Hurley, Neil J.

    2004-06-01

    Steganalysis is the art of detecting and/or decoding secret messages embedded in multimedia contents. The topic has received considerable attention in recent years due to the malicious use of multimedia documents for covert communication. Steganalysis algorithms can be classified as either blind or non-blind depending on whether or not the method assumes knowledge of the embedding algorithm. In general, blind methods involve the extraction of a feature vector that is sensitive to embedding and is subsequently used to train a classifier. This classifier can then be used to determine the presence of a stego-object, subject to an acceptable probability of false alarm. In this work, the performance of three classifiers, namely Fisher linear discriminant (FLD), neural network (NN) and support vector machines (SVM), is compared using a recently proposed feature extraction technique. It is shown that the NN and SVM classifiers exhibit similar performance exceeding that of the FLD. However, steganographers may be able to circumvent such steganalysis algorithms by preserving the statistical transparency of the feature vector at the embedding. This motivates the use of classification algorithms based on the entire document. Such a strategy is applied using SVM classification for DCT, FFT and DWT representations of an image. The performance is compared to a feature extraction technique.
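
    A comparison of the three classifiers on a common feature vector can be sketched with scikit-learn as below; the features are synthetic Gaussian stand-ins for real steganalysis features, and the paper's specific feature extraction is not reproduced.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        cover = rng.normal(0.0, 1.0, size=(500, 20))        # cover-image features (synthetic)
        stego = rng.normal(0.3, 1.0, size=(500, 20))        # small embedding-induced shift
        X = np.vstack([cover, stego])
        y = np.r_[np.zeros(500), np.ones(500)]
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

        classifiers = [("FLD", LinearDiscriminantAnalysis()),
                       ("NN",  MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
                       ("SVM", SVC(kernel="rbf", probability=True, random_state=0))]
        for name, clf in classifiers:
            clf.fit(Xtr, ytr)
            auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
            print(f"{name}: ROC AUC = {auc:.3f}")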

  18. Performance Evaluation Method for Dissimilar Aircraft Designs

    NASA Technical Reports Server (NTRS)

    Walker, H. J.

    1979-01-01

    A rationale is presented for using the square of the wingspan rather than the wing reference area as a basis for nondimensional comparisons of the aerodynamic and performance characteristics of aircraft that differ substantially in planform and loading. Working relationships are developed and illustrated through application to several categories of aircraft covering a range of Mach numbers from 0.60 to 2.00. For each application, direct comparisons of drag polars, lift-to-drag ratios, and maneuverability are shown for both nondimensional systems. The inaccuracies that may arise in the determination of aerodynamic efficiency based on reference area are noted. Span loading is introduced independently in comparing the combined effects of loading and aerodynamic efficiency on overall performance. Performance comparisons are made for the NACA research aircraft, lifting bodies, century-series fighter aircraft, F-111A aircraft with conventional and supercritical wings, and a group of supersonic aircraft including the B-58 and XB-70 bomber aircraft. An idealized configuration is included in each category to serve as a standard for comparing overall efficiency.
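
    The central substitution is b^2 for the wing reference area S in the usual coefficient definitions. The sketch below applies both normalizations to two hypothetical aircraft of very different planform at the same lift and drag; note that the lift-to-drag ratio is unchanged by the choice of reference, so the span-squared system mainly changes how drag polars and loadings compare across dissimilar designs. All numbers are invented.

        def coefficients(lift_n, drag_n, q_pa, area_m2, span_m):
            """Area-based (CL, CD) and span-squared-based (CLb, CDb) coefficients."""
            cl,  cd  = lift_n / (q_pa * area_m2),    drag_n / (q_pa * area_m2)
            clb, cdb = lift_n / (q_pa * span_m**2),  drag_n / (q_pa * span_m**2)
            return cl, cd, clb, cdb

        q = 0.5 * 1.225 * 150.0**2        # dynamic pressure at 150 m/s, sea level
        for name, S, b in [("high-aspect-ratio wing", 30.0, 18.0),
                           ("slender delta",          60.0, 12.0)]:
            cl, cd, clb, cdb = coefficients(1.5e5, 9.0e3, q, S, b)
            print(f"{name}: CL={cl:.3f} CD={cd:.4f} | CL_b={clb:.3f} CD_b={cdb:.4f} | L/D={cl/cd:.1f}")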

  19. Alvord (3000-ft Strawn) LPG flood: design and performance evaluation

    SciTech Connect

    Frazier, G.D.; Todd, M.R.

    1982-01-01

    Mitchell Energy Corporation has implemented an LPG-dry gas miscible process in the Alvord (3000 ft Strawn) Unit in Wise County, Texas utilizing the DOE tertiary incentive program. The field had been waterflooded for 14 years and was producing near its economic limit at the time this project was started. This paper presents the results of the reservoir simulation study that was conducted to evaluate pattern configuration and operating alternatives so as to maximize LPG containment and oil recovery performance. Several recommendations resulting from this study were implemented for the project. Based on the model prediction, tertiary oil recovery is expected to be between 100,000 and 130,000 bbls, or about 7 percent of the oil originally in place in the Unit. An evaluation of the project performance to date is presented. In July of 1981 the injection of a 16% HPV slug of propane was completed. Natural gas is being used to drive the propane slug. A peak oil response of 222 BOPD was achieved in August of 1981 and production has since been declining. The observed performance of the flood indicates that the actual tertiary oil recovered will reach the predicted value, although the project life will be longer than expected. The results presented in this paper indicate that, without the DOE incentive program, the economics for this project would still be uncertain at this time.

  20. Evaluating thermal performance of a single slope solar still

    NASA Astrophysics Data System (ADS)

    Badran, Omar O.; Abu-Khader, Mazen M.

    2007-08-01

    Distillation is one of the important methods of getting clean water from brackish and sea water using the free energy supply from the sun. Experimental work was conducted on a single slope solar still. The thermal performance of the single slope solar still is examined and evaluated through implementing the following effective parameters: (a) different insulation thicknesses of 1, 2.5 and 5 cm; (b) water depth of 2 and 3.5 cm; (c) solar intensity; (d) overall heat loss coefficient; (e) effective absorptivity and transmissivity; and (f) ambient, water and vapor temperatures. Different effective parameters should be taken into account to increase the still productivity. A mathematical model is presented and compared with experimental results. The model gives a good match with experimental values.
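
    A minimal way to summarize such measurements is the instantaneous still efficiency: the distillate production rate times the latent heat of vaporization, divided by the solar power incident on the basin. The sketch below uses illustrative numbers, not the paper's measured values.

        H_FG = 2.26e6                 # latent heat of vaporization of water, J/kg (approx.)

        def still_efficiency(distillate_kg_per_h, solar_w_m2, basin_area_m2):
            """Instantaneous thermal efficiency of a solar still (illustrative)."""
            useful_w = distillate_kg_per_h / 3600.0 * H_FG
            return useful_w / (solar_w_m2 * basin_area_m2)

        # e.g. 0.45 kg/h of distillate from a 1 m^2 basin under 800 W/m^2 of solar radiation
        print(f"efficiency = {still_efficiency(0.45, 800.0, 1.0):.1%}")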