Nonlinearity analysis of measurement model for vision-based optical navigation system
NASA Astrophysics Data System (ADS)
Li, Jianguo; Cui, Hutao; Tian, Yang
2015-02-01
In the autonomous optical navigation system based on line-of-sight vector observation, nonlinearity of measurement model is highly correlated with the navigation performance. By quantitatively calculating the degree of nonlinearity of the focal plane model and the unit vector model, this paper focuses on determining which optical measurement model performs better. Firstly, measurement equations and measurement noise statistics of these two line-of-sight measurement models are established based on perspective projection co-linearity equation. Then the nonlinear effects of measurement model on the filter performance are analyzed within the framework of the Extended Kalman filter, also the degrees of nonlinearity of two measurement models are compared using the curvature measure theory from differential geometry. Finally, a simulation of star-tracker-based attitude determination is presented to confirm the superiority of the unit vector measurement model. Simulation results show that the magnitude of curvature nonlinearity measurement is consistent with the filter performance, and the unit vector measurement model yields higher estimation precision and faster convergence properties.
NASA Astrophysics Data System (ADS)
El Akbar, R. Reza; Anshary, Muhammad Adi Khairul; Hariadi, Dennis
2018-02-01
Model MACP for HE ver.1. Is a model that describes how to perform measurement and monitoring performance for Higher Education. Based on a review of the research related to the model, there are several parts of the model component to develop in further research, so this research has four main objectives. The first objective is to differentiate the CSF (critical success factor) components in the previous model, the two key KPI (key performance indicators) exploration in the previous model, the three based on the previous objective, the new and more detailed model design. The final goal is the fourth designed prototype application for performance measurement in higher education, based on a new model created. The method used is explorative research method and application design using prototype method. The results of this study are first, forming a more detailed new model for measurement and monitoring of performance in higher education, differentiation and exploration of the Model MACP for HE Ver.1. The second result compiles a dictionary of college performance measurement by re-evaluating the existing indicators. The third result is the design of prototype application of performance measurement in higher education.
NASA Astrophysics Data System (ADS)
Kusrini, Elisa; Subagyo; Aini Masruroh, Nur
2016-01-01
This research is a sequel of the author's earlier conducted researches in the fields of designing of integrated performance measurement between supply chain's actors and regulator. In the previous paper, the design of performance measurement is done by combining Balanced Scorecard - Supply Chain Operation Reference - Regulator Contribution model and Data Envelopment Analysis. This model referred as B-S-Rc-DEA model. The combination has the disadvantage that all the performance variables have the same weight. This paper investigates whether by giving weight to performance variables will produce more sensitive performance measurement in detecting performance improvement. Therefore, this paper discusses the development of the model B-S-Rc-DEA by giving weight to its performance'variables. This model referred as Scale B-S-Rc-DEA model. To illustrate the model of development, some samples from small medium enterprises of leather craft industry supply chain in province of Yogyakarta, Indonesia are used in this research. It is found that Scale B-S-Rc-DEA model is more sensitive to detecting performance improvement than B-S- Rc-DEA model.
Predicting Document Retrieval System Performance: An Expected Precision Measure.
ERIC Educational Resources Information Center
Losee, Robert M., Jr.
1987-01-01
Describes an expected precision (EP) measure designed to predict document retrieval performance. Highlights include decision theoretic models; precision and recall as measures of system performance; EP graphs; relevance feedback; and computing the retrieval status value of a document for two models, the Binary Independent Model and the Two Poisson…
Performance measurement for people with multiple chronic conditions: conceptual model.
Giovannetti, Erin R; Dy, Sydney; Leff, Bruce; Weston, Christine; Adams, Karen; Valuck, Tom B; Pittman, Aisha T; Blaum, Caroline S; McCann, Barbara A; Boyd, Cynthia M
2013-10-01
Improving quality of care for people with multiple chronic conditions (MCCs) requires performance measures reflecting the heterogeneity and scope of their care. Since most existing measures are disease specific, performance measures must be refined and new measures must be developed to address the complexity of care for those with MCCs. To describe development of the Performance Measurement for People with Multiple Chronic Conditions (PM-MCC) conceptual model. Framework development and a national stakeholder panel. We used reviews of existing conceptual frameworks of performance measurement, review of the literature on MCCs, input from experts in the multistakeholder Steering Committee, and public comment. The resulting model centers on the patient and family goals and preferences for care in the context of multiple care sites and providers, the type of care they are receiving, and the national priority domains for healthcare quality measurement. This model organizes measures into a comprehensive framework and identifies areas where measures are lacking. In this context, performance measures can be prioritized and implemented at different levels, in the context of patients' overall healthcare needs.
Development of an Integrated Performance Measurement (PM) Model for Pharmaceutical Industry
Shabaninejad, Hosein; Mirsalehian, Mohammad Hossein; Mehralian, Gholamhossein
2014-01-01
With respect to special characteristics of pharmaceutical industry and lack of reported performance measure, this study tries to design an integrated PM model for pharmaceutical companies. For generating this model; we first identified the key performance indicators (KPIs) and the key result indicators (KRIs) of a typical pharmaceutical company. Then, based on experts᾽ opinions, the identified indicators were ranked with respect to their importance, and the most important of them were selected to be used in the proposed model; In this model, we identified 25 KPIs and 12 KRIs. Although, this model is mostly appropriate to measure the performances of pharmaceutical companies, it can be also used to measure the performances of other industries with some modifications. We strongly recommend pharmaceutical managers to link these indicators with their payment and reward system, which can dramatically affect the performance of employees, and consequently their organization`s success. PMID:24711848
CPMIP: measurements of real computational performance of Earth system models in CMIP6
NASA Astrophysics Data System (ADS)
Balaji, Venkatramani; Maisonnave, Eric; Zadeh, Niki; Lawrence, Bryan N.; Biercamp, Joachim; Fladrich, Uwe; Aloisio, Giovanni; Benson, Rusty; Caubel, Arnaud; Durachta, Jeffrey; Foujols, Marie-Alice; Lister, Grenville; Mocavero, Silvia; Underwood, Seth; Wright, Garrett
2017-01-01
A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions, and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O and/or memory-bound. Such weak-scaling, I/O, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency such as performance counters and scaling curves do not tell us enough about real sustained performance from climate models on different machines. They also do not provide a satisfactory basis for comparative information across models. codes present particular challenges to computational performance. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure actually attained performance of Earth system models on different machines, and identify the most fruitful areas of research and development for performance engineering. codes present particular challenges to computational performance. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as a basis for a CPMIP, a computational performance model intercomparison project (MIP).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armstrong, Robert C.; Ray, Jaideep; Malony, A.
2003-11-01
We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for a three components in a scientific application and construct performance models for two of them. Both computational and message-passing performance are addressed.
NASA Astrophysics Data System (ADS)
Amran, T. G.; Janitra Yose, Mindy
2018-03-01
As the free trade Asean Economic Community (AEC) causes the tougher competition, it is important that Indonesia’s automotive industry have high competitiveness as well. A model of logistics performance measurement was designed as an evaluation tool for automotive component companies to improve their logistics performance in order to compete in AEC. The design of logistics performance measurement model was based on the Logistics Scorecard perspectives, divided into two stages: identifying the logistics business strategy to get the KPI and arranging the model. 23 KPI was obtained. The measurement result can be taken into consideration of determining policies to improve the performance logistics competitiveness.
Case Studies Comparing System Advisor Model (SAM) Results to Real Performance Data: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blair, N.; Dobos, A.; Sather, N.
2012-06-01
NREL has completed a series of detailed case studies comparing the simulations of the System Advisor Model (SAM) and measured performance data or published performance expectations. These case studies compare PV measured performance data with simulated performance data using appropriate weather data. The measured data sets were primarily taken from NREL onsite PV systems and weather monitoring stations.
Performance model for grid-connected photovoltaic inverters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyson, William Earl; Galbraith, Gary M.; King, David L.
2007-09-01
This document provides an empirically based performance model for grid-connected photovoltaic inverters used for system performance (energy) modeling and for continuous monitoring of inverter performance during system operation. The versatility and accuracy of the model were validated for a variety of both residential and commercial size inverters. Default parameters for the model can be obtained from manufacturers specification sheets, and the accuracy of the model can be further refined using measurements from either well-instrumented field measurements in operational systems or using detailed measurements from a recognized testing laboratory. An initial database of inverter performance parameters was developed based on measurementsmore » conducted at Sandia National Laboratories and at laboratories supporting the solar programs of the California Energy Commission.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, Janine; Freestate, David; Riley, Cameron
2016-11-01
Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results frommore » both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.« less
Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, Janine; Freestate, David; Hobbs, William
2016-11-21
Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results frommore » both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.« less
Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, Janine; Freestate, David; Hobbs, William
2016-06-05
Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results frommore » both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.« less
ERIC Educational Resources Information Center
Connelly, Edward A.; And Others
A new approach to deriving human performance measures and criteria for use in automatically evaluating trainee performance is documented in this report. The ultimate application of the research is to provide methods for automatically measuring pilot performance in a flight simulator or from recorded in-flight data. An efficient method of…
Advanced Performance Modeling with Combined Passive and Active Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dovrolis, Constantine; Sim, Alex
2015-04-15
To improve the efficiency of resource utilization and scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks as well as sharing resources with predictable performance for scientific collaborations, especially in data intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performancemore » information is used without putting extra load on the resources by active measurement collection. Performance measurements collected by active probing is used judiciously for improving the accuracy of predictions.« less
Telerobotic system performance measurement - Motivation and methods
NASA Technical Reports Server (NTRS)
Kondraske, George V.; Khoury, George J.
1992-01-01
A systems performance-based strategy for modeling and conducting experiments relevant to the design and performance characterization of telerobotic systems is described. A developmental testbed consisting of a distributed telerobotics network and initial efforts to implement the strategy described is presented. Consideration is given to the general systems performance theory (GSPT) to tackle human performance problems as a basis for: measurement of overall telerobotic system (TRS) performance; task decomposition; development of a generic TRS model; and the characterization of performance of subsystems comprising the generic model. GSPT employs a resource construct to model performance and resource economic principles to govern the interface of systems to tasks. It provides a comprehensive modeling/measurement strategy applicable to complex systems including both human and artificial components. Application is presented within the framework of a distributed telerobotics network as a testbed. Insight into the design of test protocols which elicit application-independent data is described.
NASA Astrophysics Data System (ADS)
Réveillet, Marion; Six, Delphine; Vincent, Christian; Rabatel, Antoine; Dumont, Marie; Lafaysse, Matthieu; Morin, Samuel; Vionnet, Vincent; Litt, Maxime
2018-04-01
This study focuses on simulations of the seasonal and annual surface mass balance (SMB) of Saint-Sorlin Glacier (French Alps) for the period 1996-2015 using the detailed SURFEX/ISBA-Crocus snowpack model. The model is forced by SAFRAN meteorological reanalysis data, adjusted with automatic weather station (AWS) measurements to ensure that simulations of all the energy balance components, in particular turbulent fluxes, are accurately represented with respect to the measured energy balance. Results indicate good model performance for the simulation of summer SMB when using meteorological forcing adjusted with in situ measurements. Model performance however strongly decreases without in situ meteorological measurements. The sensitivity of the model to meteorological forcing indicates a strong sensitivity to wind speed, higher than the sensitivity to ice albedo. Compared to an empirical approach, the model exhibited better performance for simulations of snow and firn melting in the accumulation area and similar performance in the ablation area when forced with meteorological data adjusted with nearby AWS measurements. When such measurements were not available close to the glacier, the empirical model performed better. Our results suggest that simulations of the evolution of future mass balance using an energy balance model require very accurate meteorological data. Given the uncertainties in the temporal evolution of the relevant meteorological variables and glacier surface properties in the future, empirical approaches based on temperature and precipitation could be more appropriate for simulations of glaciers in the future.
An Empirical Study of a Solo Performance Assessment Model
ERIC Educational Resources Information Center
Russell, Brian E.
2015-01-01
The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…
Incorporating measurement error in n = 1 psychological autoregressive modeling.
Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
Rahman, M Shafiqur; Ambler, Gareth; Choodari-Oskooei, Babak; Omar, Rumana Z
2017-04-18
When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure which tended to increase as censoring increased. We recommend that Uno's concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston's D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings and recommended to report routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive accuracy curves. In addition, we recommend to investigate the characteristics of the validation data such as the level of censoring and the distribution of the prognostic index derived in the validation setting before choosing the performance measures.
Model Performance of Water-Current Meters
Fulford, J.M.; ,
2002-01-01
The measurement of discharge in natural streams requires hydrographers to use accurate water-current meters that have consistent performance among meters of the same model. This paper presents the results of an investigation into the performance of four models of current meters - Price type-AA, Price pygmy, Marsh McBirney 2000 and Swoffer 2100. Tests for consistency and accuracy for six meters of each model are summarized. Variation of meter performance within a model is used as an indicator of consistency, and percent velocity error that is computed from a measured reference velocity is used as an indicator of meter accuracy. Velocities measured by each meter are also compared to the manufacturer's published or advertised accuracy limits. For the meters tested, the Price models werer found to be more accurate and consistent over the range of test velocities compared to the other models. The Marsh McBirney model usually measured within its accuracy specification. The Swoffer meters did not meet the stringent Swoffer accuracy limits for all the velocities tested.
Validation of the PVSyst Performance Model for the Concentrix CPV Technology
NASA Astrophysics Data System (ADS)
Gerstmaier, Tobias; Gomez, María; Gombert, Andreas; Mermoud, André; Lejeune, Thibault
2011-12-01
The accuracy of the two-stage PVSyst model for the Concentrix CPV Technology is determined by comparing modeled to measured values. For both stages, i) the module model and ii) the power plant model, the underlying approaches are explained and methods for obtaining the model parameters are presented. The performance of both models is quantified using 19 months of outdoor measurements for the module model and 9 months of measurements at four different sites for the power plant model. Results are presented by giving statistical quantities for the model accuracy.
Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini
2014-12-01
The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies-both real as well as synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under and over prediction of high and low flows) in outputs from a hydrologic model. The real time case studies investigated in this study included simulation results of both the process-based Soil Water Assessment Tool (SWAT) model, as well as statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from Wainganga and Sind Basin (India) were used, while for the Wavelet Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used. The study also explored the effect of the choice of the wavelets in multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are a more reliable measure than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical models), and ii) help in model calibration.
Measuring the Performance and Intelligence of Systems: Proceedings of the 2002 PerMIS Workshop
NASA Technical Reports Server (NTRS)
Messina, E. R.; Meystel, A. M.
2002-01-01
Contents include the following: Performance Metrics; Performance of Multiple Agents; Performance of Mobility Systems; Performance of Planning Systems; General Discussion Panel 1; Uncertainty of Representation I; Performance of Robots in Hazardous Domains; Modeling Intelligence; Modeling of Mind; Measuring Intelligence; Grouping: A Core Procedure of Intelligence; Uncertainty in Representation II; Towards Universal Planning/Control Systems.
NASA Astrophysics Data System (ADS)
Odman, M. T.; Hu, Y.; Russell, A.; Chai, T.; Lee, P.; Shankar, U.; Boylan, J.
2012-12-01
Regulatory air quality modeling, such as State Implementation Plan (SIP) modeling, requires that model performance meets recommended criteria in the base-year simulations using period-specific, estimated emissions. The goal of the performance evaluation is to assure that the base-year modeling accurately captures the observed chemical reality of the lower troposphere. Any significant deficiencies found in the performance evaluation must be corrected before any base-case (with typical emissions) and future-year modeling is conducted. Corrections are usually made to model inputs such as emission-rate estimates or meteorology and/or to the air quality model itself, in modules that describe specific processes. Use of ground-level measurements that follow approved protocols is recommended for evaluating model performance. However, ground-level monitoring networks are spatially sparse, especially for particulate matter. Satellite retrievals of atmospheric chemical properties such as aerosol optical depth (AOD) provide spatial coverage that can compensate for the sparseness of ground-level measurements. Satellite retrievals can also help diagnose potential model or data problems in the upper troposphere. It is possible to achieve good model performance near the ground, but have, for example, erroneous sources or sinks in the upper troposphere that may result in misleading and unrealistic responses to emission reductions. Despite these advantages, satellite retrievals are rarely used in model performance evaluation, especially for regulatory modeling purposes, due to the high uncertainty in retrievals associated with various contaminations, for example by clouds. In this study, 2007 was selected as the base year for SIP modeling in the southeastern U.S. Performance of the Community Multiscale Air Quality (CMAQ) model, at a 12-km horizontal resolution, for this annual simulation is evaluated using both recommended ground-level measurements and non-traditional satellite retrievals. Evaluation results are assessed against recommended criteria and peer studies in the literature. Further analysis is conducted, based upon these assessments, to discover likely errors in model inputs and potential deficiencies in the model itself. Correlations as well as differences in input errors and model deficiencies revealed by ground-level measurements versus satellite observations are discussed. Additionally, sensitivity analyses are employed to investigate errors in emission-rate estimates using either ground-level measurements or satellite retrievals, and the results are compared against each other considering observational uncertainties. Recommendations are made for how to effectively utilize satellite retrievals in regulatory air quality modeling.
Modeling the Performance of Direct-Detection Doppler Lidar Systems in Real Atmospheres
NASA Technical Reports Server (NTRS)
McGill, Matthew J.; Hart, William D.; McKay, Jack A.; Spinhirne, James D.
1999-01-01
Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems has assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar systems: the double-edge and the multi-channel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only about 10-20% compared to nighttime performance, provided a proper solar filter is included in the instrument design.
McGill, M J; Hart, W D; McKay, J A; Spinhirne, J D
1999-10-20
Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar system: the double-edge and the multichannel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only approximately 10-20% compared with nighttime performance, provided that a proper solar filter is included in the instrument design.
Incorporating measurement error in n = 1 psychological autoregressive modeling
Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
47 CFR 73.151 - Field strength measurements to establish performance of directional antennas.
Code of Federal Regulations, 2010 CFR
2010-10-01
... verified either by field strength measurement or by computer modeling and sampling system verification. (a... specifically identified by the Commission. (c) Computer modeling and sample system verification of modeled... performance verified by computer modeling and sample system verification. (1) A matrix of impedance...
ERIC Educational Resources Information Center
Longo, Paul J.
This study explored the mechanics of using an enhanced, comprehensive multipurpose logic model, the Performance Blueprint, as a means of building evaluation capacity, referred to in this paper as performance measurement literacy, to facilitate the attainment of both service-delivery oriented and community-oriented outcomes. The application of this…
Evaluation methodologies for an advanced information processing system
NASA Technical Reports Server (NTRS)
Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.
1984-01-01
The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
NASA Technical Reports Server (NTRS)
Hertel, R. J.
1979-01-01
An electro-optical method to measure the aeroelastic deformations of wind tunnel models is examined. The multitarget tracking performance of one of the two electronic cameras comprising the stereo pair is modeled and measured. The properties of the targets at the model, the camera optics, target illumination, number of targets, acquisition time, target velocities, and tracker performance are considered. The electronic camera system is shown to be capable of locating, measuring, and following the positions of 5 to 50 targets attached to the model at measuring rates up to 5000 targets per second.
Multitasking TORT under UNICOS: Parallel performance models and measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, A.; Azmy, Y.Y.
1999-09-27
The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The results of the comparison of parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.
Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azmy, Y.Y.; Barnett, D.A.
1999-09-27
The existing parallel algorithms in the TORT discrete ordinates were updated to function in a UNI-COS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The results of the comparison of parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.
Silich, Bert A; Yang, James J
2012-05-01
Measuring workplace performance is important to emergency department management. If an unreliable model is used, the results will be inaccurate. Use of inaccurate results to make decisions, such as how to distribute the incentive pay, will lead to rewarding the wrong people and will potentially demoralize top performers. This article demonstrates a statistical model to reliably measure the work accomplished, which can then be used as a performance measurement.
Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions
NASA Astrophysics Data System (ADS)
Jiang, W.; Giroux, E.; Roth, H.; Yin, D.
2004-05-01
Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on evaluation of organic aerosol model performance. One assumption is about the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other is about the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its impact on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that utilizing different enthalpy of vaporization values causes changes in the shapes of IAY curves and in the response of SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling results. In another aspect, using different assumed factors to convert measured organic carbon to organic aerosol concentrations cause substantial variations in the processed ambient data themselves, which are normally used as performance targets for model evaluations. The combination of uncertainties in the modeling results and in the moving performance targets causes major uncertainties in the final conclusion about the model performance. Without further information, the best thing that a modeler can do is to choose a combination of the assumed values from the sensible parameter ranges available in the literature, based on the best match of the modeling results with the processed measurement data. However, the best match of the modeling results with the processed measurement data may not necessarily guarantee that the model itself is rigorous and the model performance is robust. Conclusions on the model performance can only be reached with sufficient understanding of the uncertainties and their impact.
Performance measures and criteria for hydrologic and water quality models
USDA-ARS?s Scientific Manuscript database
Performance measures and criteria are essential for model calibration and validation. This presentation will include a summary of one of the papers that will be included in the 2014 Hydrologic and Water Quality Model Calibration & Validation Guidelines Special Collection of the ASABE Transactions. T...
Performance Modeling of an Airborne Raman Water Vapor Lidar
NASA Technical Reports Server (NTRS)
Whiteman, D. N.; Schwemmer, G.; Berkoff, T.; Plotkin, H.; Ramos-Izquierdo, L.; Pappalardo, G.
2000-01-01
A sophisticated Raman lidar numerical model had been developed. The model has been used to simulate the performance of two ground-based Raman water vapor lidar systems. After tuning the model using these ground-based measurements, the model is used to simulate the water vapor measurement capability of an airborne Raman lidar under both day-and night-time conditions for a wide range of water vapor conditions. The results indicate that, under many circumstances, the daytime measurements possess comparable resolution to an existing airborne differential absorption water vapor lidar while the nighttime measurement have higher resolution. In addition, a Raman lidar is capable of measurements not possible using a differential absorption system.
Calculation of the Aerodynamic Behavior of the Tilt Rotor Aeroacoustic Model (TRAM) in the DNW
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
Comparisons of measured and calculated aerodynamic behavior of a tiltrotor model are presented. The test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, 1/4-scale V- 22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. The calculations were performed using the rotorcraft comprehensive analysis CAMRAD II. Presented are comparisons of measured and calculated performance and airloads for helicopter mode operation, as well as calculated induced and profile power. An aerodynamic and wake model and calculation procedure that reflects the unique geometry and phenomena of tiltrotors has been developed. There are major differences between this model and the corresponding aerodynamic and wake model that has been established for helicopter rotors. In general, good correlation between measured and calculated performance and airloads behavior has been shown. Two aspects of the analysis that clearly need improvement are the stall delay model and the trailed vortex formation model.
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption, and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexitymore » (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.« less
Campbell, William; Ganna, Andrea; Ingelsson, Erik; Janssens, A Cecile J W
2016-01-01
We propose a new measure of assessing the performance of risk models, the area under the prediction impact curve (auPIC), which quantifies the performance of risk models in terms of their average health impact in the population. Using simulated data, we explain how the prediction impact curve (PIC) estimates the percentage of events prevented when a risk model is used to assign high-risk individuals to an intervention. We apply the PIC to the Atherosclerosis Risk in Communities (ARIC) Study to illustrate its application toward prevention of coronary heart disease. We estimated that if the ARIC cohort received statins at baseline, 5% of events would be prevented when the risk model was evaluated at a cutoff threshold of 20% predicted risk compared to 1% when individuals were assigned to the intervention without the use of a model. By calculating the auPIC, we estimated that an average of 15% of events would be prevented when considering performance across the entire interval. We conclude that the PIC is a clinically meaningful measure for quantifying the expected health impact of risk models that supplements existing measures of model performance. Copyright © 2016 Elsevier Inc. All rights reserved.
Van Looy, Amy; Shafagatova, Aygun
2016-01-01
Measuring the performance of business processes has become a central issue in both academia and business, since organizations are challenged to achieve effective and efficient results. Applying performance measurement models to this purpose ensures alignment with a business strategy, which implies that the choice of performance indicators is organization-dependent. Nonetheless, such measurement models generally suffer from a lack of guidance regarding the performance indicators that exist and how they can be concretized in practice. To fill this gap, we conducted a structured literature review to find patterns or trends in the research on business process performance measurement. The study also documents an extended list of 140 process-related performance indicators in a systematic manner by further categorizing them into 11 performance perspectives in order to gain a holistic view. Managers and scholars can consult the provided list to choose the indicators that are of interest to them, considering each perspective. The structured literature review concludes with avenues for further research.
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong; Kim, Keunwoo
2013-03-01
The Neural Networks is mostly used to engine fault diagnostic system due to its good learning performance, but it has a drawback due to low accuracy and long learning time to build learning data base. This work builds inversely a base performance model of a turboprop engine to be used for a high altitude operation UAV using measuring performance data, and proposes a fault diagnostic system using the base performance model and artificial intelligent methods such as Fuzzy and Neural Networks. Each real engine performance model, which is named as the base performance model that can simulate a new engine performance, is inversely made using its performance test data. Therefore the condition monitoring of each engine can be more precisely carried out through comparison with measuring performance data. The proposed diagnostic system identifies firstly the faulted components using Fuzzy Logic, and then quantifies faults of the identified components using Neural Networks leaned by fault learning data base obtained from the developed base performance model. In leaning the measuring performance data of the faulted components, the FFBP (Feed Forward Back Propagation) is used. In order to user's friendly purpose, the proposed diagnostic program is coded by the GUI type using MATLAB.
A resistive mesh phantom for assessing the performance of EIT systems.
Gagnon, Hervé; Cousineau, Martin; Adler, Andy; Hartinger, Alzbeta E
2010-09-01
Assessing the performance of electrical impedance tomography (EIT) systems usually requires a phantom for validation, calibration, or comparison purposes. This paper describes a resistive mesh phantom to assess the performance of EIT systems while taking into account cabling stray effects similar to in vivo conditions. This phantom is built with 340 precision resistors on a printed circuit board representing a 2-D circular homogeneous medium. It also integrates equivalent electrical models of the Ag/AgCl electrode impedances. The parameters of the electrode models were fitted from impedance curves measured with an impedance analyzer. The technique used to build the phantom is general and applicable to phantoms of arbitrary shape and conductivity distribution. We describe three performance indicators that can be measured with our phantom for every measurement of an EIT data frame: SNR, accuracy, and modeling accuracy. These performance indicators were evaluated on our EIT system under different frame rates and applied current intensities. The performance indicators are dependent on frame rate, operating frequency, applied current intensity, measurement strategy, and intermodulation distortion when performing simultaneous measurements at several frequencies. These parameter values should, therefore, always be specified when reporting performance indicators to better appreciate their significance.
A Framework for Assessing the Performance of Nonprofit Organizations
ERIC Educational Resources Information Center
Lee, Chongmyoung; Nowell, Branda
2015-01-01
Performance measurement has gained increased importance in the nonprofit sector, and contemporary literature is populated with numerous performance measurement frameworks. In this article, we seek to accomplish two goals. First, we review contemporary models of nonprofit performance measurement to develop an integrated framework in order to…
Prediction of Cognitive Performance and Subjective Sleepiness Using a Model of Arousal Dynamics.
Postnova, Svetlana; Lockley, Steven W; Robinson, Peter A
2018-04-01
A model of arousal dynamics is applied to predict objective performance and subjective sleepiness measures, including lapses and reaction time on a visual Performance Vigilance Test (vPVT), performance on a mathematical addition task (ADD), and the Karolinska Sleepiness Scale (KSS). The arousal dynamics model is comprised of a physiologically based flip-flop switch between the wake- and sleep-active neuronal populations and a dynamic circadian oscillator, thus allowing prediction of sleep propensity. Published group-level experimental constant routine (CR) and forced desynchrony (FD) data are used to calibrate the model to predict performance and sleepiness. Only the studies using dim light (<15 lux) during alertness measurements and controlling for sleep and entrainment before the start of the protocol are selected for modeling. This is done to avoid the direct alerting effects of light and effects of prior sleep debt and circadian misalignment on the data. The results show that linear combination of circadian and homeostatic drives is sufficient to predict dynamics of a variety of sleepiness and performance measures during CR and FD protocols, with sleep-wake cycles ranging from 20 to 42.85 h and a 2:1 wake-to-sleep ratio. New metrics relating model outputs to performance and sleepiness data are developed and tested against group average outcomes from 7 (vPVT lapses), 5 (ADD), and 8 (KSS) experimental protocols, showing good quantitative and qualitative agreement with the data (root mean squared error of 0.38, 0.19, and 0.35, respectively). The weights of the homeostatic and circadian effects are found to be different between the measures, with KSS having stronger homeostatic influence compared with the objective measures of performance. Using FD data in addition to CR data allows us to challenge the model in conditions of both acute sleep deprivation and structured circadian misalignment, ensuring that the role of the circadian and homeostatic drives in performance is properly captured.
What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2013-01-01
This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…
A measurement-based performability model for a multiprocessor system
NASA Technical Reports Server (NTRS)
Ilsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.
1987-01-01
A measurement-based performability model based on real error-data collected on a multiprocessor system is described. Model development from the raw errror-data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponential and that semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
NASA Astrophysics Data System (ADS)
Kashim, Rosmaini; Kasim, Maznah Mat; Rahman, Rosshairy Abd
2015-12-01
Measuring university performance is essential for efficient allocation and utilization of educational resources. In most of the previous studies, performance measurement in universities emphasized the operational efficiency and resource utilization without investigating the university's ability to fulfill the needs of its stakeholders and society. Therefore, assessment of the performance of university should be separated into two stages namely efficiency and effectiveness. In conventional DEA analysis, a decision making unit (DMU) or in this context, a university is generally treated as a black-box which ignores the operation and interdependence of the internal processes. When this happens, the results obtained would be misleading. Thus, this paper suggest an alternative framework for measuring the overall performance of a university by incorporating both efficiency and effectiveness and applies network DEA model. The network DEA models are recommended because this approach takes into account the interrelationship between the processes of efficiency and effectiveness in the system. This framework also focuses on the university structure which is expanded from the hierarchical to form a series of horizontal relationship between subordinate units by assuming both intermediate unit and its subordinate units can generate output(s). Three conceptual models are proposed to evaluate the performance of a university. An efficiency model is developed at the first stage by using hierarchical network model. It is followed by an effectiveness model which take output(s) from the hierarchical structure at the first stage as a input(s) at the second stage. As a result, a new overall performance model is proposed by combining both efficiency and effectiveness models. Thus, once this overall model is realized and utilized, the university's top management can determine the overall performance of each unit more accurately and systematically. Besides that, the result from the network DEA model can give a superior benchmarking power over the conventional models.
Cognitive performance modeling based on general systems performance theory.
Kondraske, George V
2010-01-01
General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).
Measuring the performance of Internet companies using a two-stage data envelopment analysis model
NASA Astrophysics Data System (ADS)
Cao, Xiongfei; Yang, Feng
2011-05-01
In exploring the business operation of Internet companies, few researchers have used data envelopment analysis (DEA) to evaluate their performance. Since the Internet companies have a two-stage production process: marketability and profitability, this study employs a relational two-stage DEA model to assess the efficiency of the 40 dot com firms. The results show that our model performs better in measuring efficiency, and is able to discriminate the causes of inefficiency, thus helping business management to be more effective through providing more guidance to business performance improvement.
Models and techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1977-01-01
Models, measures and techniques were developed for evaluating the effectiveness of aircraft computing systems. The concept of effectiveness involves aspects of system performance, reliability and worth. Specifically done was a detailed development of model hierarchy at mission, functional task, and computational task levels. An appropriate class of stochastic models was investigated which served as bottom level models in the hierarchial scheme. A unified measure of effectiveness called 'performability' was defined and formulated.
Nicholson, Patricia; Griffin, Patrick; Gillis, Shelley; Wu, Margaret; Dunning, Trisha
2013-09-01
Concern about the process of identifying underlying competencies that contribute to effective nursing performance has been debated with a lack of consensus surrounding an approved measurement instrument for assessing clinical performance. Although a number of methodologies are noted in the development of competency-based assessment measures, these studies are not without criticism. The primary aim of the study was to develop and validate a Performance Based Scoring Rubric, which included both analytical and holistic scales. The aim included examining the validity and reliability of the rubric, which was designed to measure clinical competencies in the operating theatre. The fieldwork observations of 32 nurse educators and preceptors assessing the performance of 95 instrument nurses in the operating theatre were used in the calibration of the rubric. The Rasch model, a particular model among Item Response Models, was used in the calibration of each item in the rubric in an attempt at improving the measurement properties of the scale. This is done by establishing the 'fit' of the data to the conditions demanded by the Rasch model. Acceptable reliability estimates, specifically a high Cronbach's alpha reliability coefficient (0.940), as well as empirical support for construct and criterion validity for the rubric were achieved. Calibration of the Performance Based Scoring Rubric using Rasch model revealed that the fit statistics for most items were acceptable. The use of the Rasch model offers a number of features in developing and refining healthcare competency-based assessments, improving confidence in measuring clinical performance. The Rasch model was shown to be useful in developing and validating a competency-based assessment for measuring the competence of the instrument nurse in the operating theatre with implications for use in other areas of nursing practice. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Weil, Joyce; Hutchinson, Susan R; Traxler, Karen
2014-11-01
Data from the Women's Health and Aging Study were used to test a model of factors explaining depressive symptomology. The primary purpose of the study was to explore the association between performance-based measures of functional ability and depression and to examine the role of self-rated physical difficulties and perceived instrumental support in mediating the relationship between performance-based functioning and depression. The inclusion of performance-based measures allows for the testing of functional ability as a clinical precursor to disability and depression: a critical, but rarely examined, association in the disablement process. Structural equation modeling supported the overall fit of the model and found an indirect relationship between performance-based functioning and depression, with perceived physical difficulties serving as a significant mediator. Our results highlight the complementary nature of performance-based and self-rated measures and the importance of including perception of self-rated physical difficulties when examining depression in older persons. © The Author(s) 2014.
A model for critical thinking measurement of dental student performance.
Johnsen, David C; Finkelstein, Michael W; Marshall, Teresa A; Chalkley, Yvonne M
2009-02-01
The educational application of critical thinking has increased in the last twenty years with programs like problem-based learning. Performance measurement related to the dental student's capacity for critical thinking remains elusive, however. This article offers a model now in use to measure critical thinking applied to patient assessment and treatment planning across the four years of the dental school curriculum and across clinical disciplines. Two elements of the model are described: 1) a critical thinking measurement "cell," and 2) a list of minimally essential steps in critical thinking for patient assessment and treatment planning. Issues pertaining to this model are discussed: adaptations on the path from novice to expert, the role of subjective measurement, variations supportive of the model, and the correlation of individual and institutional assessment. The critical thinking measurement cell consists of interacting performance tasks and measures. The student identifies the step in the process (for example, chief complaint) with objective measurement; the student then applies the step to a patient or case with subjective measurement; the faculty member then combines the objective and subjective measurements into an evaluation on progress toward competence. The activities in the cell are then repeated until all the steps in the process have been addressed. A next task is to determine consistency across the four years and across clinical disciplines.
NASA Astrophysics Data System (ADS)
Koran, John J., Jr.; Koran, Mary Lou
In a study designed to explore the effects of teacher anxiety and modeling on acquisition of a science teaching skill and concomitant student performance, 69 preservice secondary teachers and 295 eighth grade students were randomly assigned to microteaching sessions. Prior to microteaching, teachers were given an anxiety test, then randomly assigned to one of three treatments; a transcript model, a protocol model, or a control condition. Subsequently both teacher and student performance was assessed using written and behavioral measures. Analysis of variance indicated that subjects in the two modeling treatments significantly exceeded performance of control group subjects on all measures of the dependent variable, with the protocol model being generally superior to the transcript model. The differential effects of the modeling treatments were further reflected in student performance. Regression analysis of aptitude-treatment interactions indicated that teacher anxiety scores interacted significantly with instructional treatments, with high anxiety teachers performing best in the protocol modeling treatment. Again, this interaction was reflected in student performance, where students taught by highly anxious teachers performed significantly better when their teachers had received the protocol model. These results were discussed in terms of teacher concerns and a memory model of the effects of anxiety on performance.
The Real World Significance of Performance Prediction
ERIC Educational Resources Information Center
Pardos, Zachary A.; Wang, Qing Yang; Trivedi, Shubhendu
2012-01-01
In recent years, the educational data mining and user modeling communities have been aggressively introducing models for predicting student performance on external measures such as standardized tests as well as within-tutor performance. While these models have brought statistically reliable improvement to performance prediction, the real world…
ERIC Educational Resources Information Center
Connelly, E. M.; And Others
A new approach to deriving human performance measures and criteria for use in automatically evaluating trainee performance is described. Ultimately, this approach will allow automatic measurement of pilot performance in a flight simulator or from recorded in-flight data. An efficient method of representing performance data within a computer is…
Modelling of different measures for improving removal in a stormwater pond.
German, J; Jansons, K; Svensson, G; Karlsson, D; Gustafsson, L G
2005-01-01
The effect of retrofitting an existing pond on removal efficiency and hydraulic performance was modelled using the commercial software Mike21 and compartmental modelling. The Mike21 model had previously been calibrated on the studied pond. Installation of baffles, the addition of culverts under a causeway and removal of an existing island were all studied as possible improvement measures in the pond. The subsequent effect on hydraulic performance and removal of suspended solids was then evaluated. Copper, cadmium, BOD, nitrogen and phosphorus removal were also investigated for that specific improvement measure showing the best results. Outcomes of this study reveal that all measures increase the removal efficiency of suspended solids. The hydraulic efficiency is improved for all cases, except for the case where the island is removed. Compartmental modelling was also used to evaluate hydraulic performance and facilitated a better understanding of the way each of the different measures affected the flow pattern and performance. It was concluded that the installation of baffles is the best of the studied measures resulting in a reduction in the annual load on the receiving lake by approximately 8,000 kg of suspended solids (25% reduction of the annual load), 2 kg of copper (10% reduction of the annual load) and 600 kg of BOD (10% reduction of the annual load).
Airloads and Wake Geometry Calculations for an Isolated Tiltrotor Model in a Wind Tunnel
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
Comparisons of measured and calculated aerodynamic behavior of a tiltrotor model are presented. The test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, 0.25-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. The calculations were performed using the rotorcraft comprehensive analysis CAMRAD II. Presented are comparisons of measured and calculated performance for hover and helicopter mode operation, and airloads for helicopter mode. Calculated induced power, profile power, and wake geometry provide additional information about the aerodynamic behavior. An aerodynamic and wake model and calculation procedure that reflects the unique geometry and phenomena of tiltrotors has been developed. There are major differences between this model and the corresponding aerodynamic and wake model that has been established for helicopter rotors. In general, good correlation between measured and calculated performance and airloads behavior has been shown. Two aspects of the analysis that clearly need improvement are the stall delay model and the trailed vortex formation model.
Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial
The model performance evaluation consists of metrics and model diagnostics. These metrics provides modelers with statistical goodness-of-fit measures that capture magnitude only, sequence only, and combined magnitude and sequence errors.
NASA Astrophysics Data System (ADS)
Williams, Jason J.; Chung, Serena H.; Johansen, Anne M.; Lamb, Brian K.; Vaughan, Joseph K.; Beutel, Marc
2017-02-01
Air quality models are widely used to estimate pollutant deposition rates and thereby calculate critical loads and critical load exceedances (model deposition > critical load). However, model operational performance is not always quantified specifically to inform these applications. We developed a performance assessment approach designed to inform critical load and exceedance calculations, and applied it to the Pacific Northwest region of the U.S. We quantified wet inorganic N deposition performance of several widely-used air quality models, including five different Community Multiscale Air Quality Model (CMAQ) simulations, the Tdep model, and 'PRISM x NTN' model. Modeled wet inorganic N deposition estimates were compared to wet inorganic N deposition measurements at 16 National Trends Network (NTN) monitoring sites, and to annual bulk inorganic N deposition measurements at Mount Rainier National Park. Model bias (model - observed) and error (|model - observed|) were expressed as a percentage of regional critical load values for diatoms and lichens. This novel approach demonstrated that wet inorganic N deposition bias in the Pacific Northwest approached or exceeded 100% of regional diatom and lichen critical load values at several individual monitoring sites, and approached or exceeded 50% of critical loads when averaged regionally. Even models that adjusted deposition estimates based on deposition measurements to reduce bias or that spatially-interpolated measurement data, had bias that approached or exceeded critical loads at some locations. While wet inorganic N deposition model bias is only one source of uncertainty that can affect critical load and exceedance calculations, results demonstrate expressing bias as a percentage of critical loads at a spatial scale consistent with calculations may be a useful exercise for those performing calculations. It may help decide if model performance is adequate for a particular calculation, help assess confidence in calculation results, and highlight cases where a non-deterministic approach may be needed.
Yoo, Hyung Chol; Miller, Matthew J; Yip, Pansy
2015-04-01
There is limited research examining psychological correlates of a uniquely racialized experience of the model minority stereotype faced by Asian Americans. The present study examined the factor structure and fit of the only published measure of the internalization of the model minority myth, the Internalization of the Model Minority Myth Measure (IM-4; Yoo et al., 2010), with a sample of 155 Asian American high school adolescents. We also examined the link between internalization of the model minority myth types (i.e., myth associated with achievement and myth associated with unrestricted mobility) and psychological adjustment (i.e., affective distress, somatic distress, performance difficulty, academic expectations stress), and the potential moderating effect of academic performance (cumulative grade point average). Results suggested the 2-factor model of the IM-4 had an acceptable fit to the data and supported the factor structure using confirmatory factor analyses. Internalizing the model minority myth of achievement related positively to academic expectations stress; however, internalizing the model minority myth of unrestricted mobility related negatively to academic expectations stress, both controlling for gender and academic performance. Finally, academic performance moderated the model minority myth associated with unrestricted mobility and affective distress link and the model minority myth associated with achievement and performance difficulty link. These findings highlight the complex ways in which the model minority myth relates to psychological outcomes. (c) 2015 APA, all rights reserved).
Procurement performance measurement system in the health care industry.
Kumar, Arun; Ozdamar, Linet; Ng, Chai Peng
2005-01-01
The rising operating cost of providing healthcare is of concern to health care providers. As such, measurement of procurement performance will enable competitive advantage and provide a framework for continuous improvement. The objective of this paper is to develop a procurement performance measurement system. The paper reviews the existing literature in procurement performance measurement to identify the key areas of purchasing performance. By studying the three components in the supply chain collectively with the resources, procedures and output, a model is been developed. Additionally, a balanced scorecard is proposed by establishing a set of generic measures and six perspectives. A case study conducted at the Singapore Hospital applies the conceptual model to describe the purchasing department and the activities within and outside the department. The results indicate that the material management department has already made a good start in measuring the procurement process through the implementation of the balanced scorecard. There are many data that are collected but not properly collated and utilized. Areas lacking measurement include cycle time of delivery, order processing time, effectiveness, efficiency and reliability. Though a lot of hard work was involved, the advantages of establishing a measurement system outweigh the costs and efforts involved in its implementation. Results of balanced scorecard measurements provide decision-makers with critical information on efficiency and effectiveness of the purchasing department's work. The measurement model developed could be used for any hospital procurement system.
The Impact of Measurement Noise in GPA Diagnostic Analysis of a Gas Turbine Engine
NASA Astrophysics Data System (ADS)
Ntantis, Efstratios L.; Li, Y. G.
2013-12-01
The performance diagnostic analysis of a gas turbine is accomplished by estimating a set of internal engine health parameters from available sensor measurements. No physical measuring instruments however can ever completely eliminate the presence of measurement uncertainties. Sensor measurements are often distorted by noise and bias leading to inaccurate estimation results. This paper explores the impact of measurement noise on Gas Turbine GPA analysis. The analysis is demonstrated with a test case where gas turbine performance simulation and diagnostics code TURBOMATCH is used to build a performance model of a model engine similar to Rolls-Royce Trent 500 turbofan engine, and carry out the diagnostic analysis with the presence of different levels of measurement noise. Conclusively, to improve the reliability of the diagnostic results, a statistical analysis of the data scattering caused by sensor uncertainties is made. The diagnostic tool used to deal with the statistical analysis of measurement noise impact is a model-based method utilizing a non-linear GPA.
Study of helicopterroll control effectiveness criteria
NASA Technical Reports Server (NTRS)
Heffley, Robert K.; Bourne, Simon M.; Curtiss, Howard C., Jr.; Hindson, William S.; Hess, Ronald A.
1986-01-01
A study of helicopter roll control effectiveness based on closed-loop task performance measurement and modeling is presented. Roll control critieria are based on task margin, the excess of vehicle task performance capability over the pilot's task performance demand. Appropriate helicopter roll axis dynamic models are defined for use with analytic models for task performance. Both near-earth and up-and-away large-amplitude maneuvering phases are considered. The results of in-flight and moving-base simulation measurements are presented to support the roll control effectiveness criteria offered. This Volume contains the theoretical analysis, simulation results and criteria development.
The Eysenckian personality factors and their correlations with academic performance.
Poropat, Arthur E
2011-03-01
BACKGROUND. The relationship between personality and academic performance has long been explored, and a recent meta-analysis established that measures of the five-factor model (FFM) dimension of Conscientiousness have similar validity to intelligence measures. Although currently dominant, the FFM is only one of the currently accepted models of personality, and has limited theoretical support. In contrast, the Eysenckian personality model was developed to assess a specific theoretical model and is still commonly used in educational settings and research. AIMS. This meta-analysis assessed the validity of the Eysenckian personality measures for predicting academic performance. SAMPLE. Statistics were obtained for correlations with Psychoticism, Extraversion, and Neuroticism (20-23 samples; N from 8,013 to 9,191), with smaller aggregates for the Lie scale (7 samples; N= 3,910). METHODS. The Hunter-Schmidt random effects method was used to estimate population correlations between the Eysenckian personality measures and academic performance. Moderating effects were tested using weighted least squares regression. RESULTS. Significant but modest validities were reported for each scale. Neuroticism and Extraversion had relationships with academic performance that were consistent with previous findings, while Psychoticism appears to be linked to academic performance because of its association with FFM Conscientiousness. Age and educational level moderated correlations with Neuroticism and Extraversion, and gender had no moderating effect. Correlations varied significantly based on the measurement instrument used. CONCLUSIONS. The Eysenckian scales do not add to the prediction of academic performance beyond that provided by FFM scales. Several measurement problems afflict the Eysenckian scales, including low to poor internal reliability and complex factor structures. In particular, the measurement and validity problems of Psychoticism mean its continued use in academic settings is unjustified. © 2010 The Author. British Journal of Educational Psychology. © 2010 The British Psychological Society.
Measuring Information Security Performance with 10 by 10 Model for Holistic State Evaluation.
Bernik, Igor; Prislan, Kaja
Organizations should measure their information security performance if they wish to take the right decisions and develop it in line with their security needs. Since the measurement of information security is generally underdeveloped in practice and many organizations find the existing recommendations too complex, the paper presents a solution in the form of a 10 by 10 information security performance measurement model. The model-ISP 10×10M is composed of ten critical success factors, 100 key performance indicators and 6 performance levels. Its content was devised on the basis of findings presented in the current research studies and standards, while its structure results from an empirical research conducted among information security professionals from Slovenia. Results of the study show that a high level of information security performance is mostly dependent on measures aimed at managing information risks, employees and information sources, while formal and environmental factors have a lesser impact. Experts believe that information security should evolve systematically, where it's recommended that beginning steps include technical, logical and physical security controls, while advanced activities should relate predominantly strategic management activities. By applying the proposed model, organizations are able to determine the actual level of information security performance based on the weighted indexing technique. In this manner they identify the measures they ought to develop in order to improve the current situation. The ISP 10×10M is a useful tool for conducting internal system evaluations and decision-making. It may also be applied to a larger sample of organizations in order to determine the general state-of-play for research purposes.
Pressure-Distribution Measurements on the Tail Surfaces of a Rotating Model of the Design BFW - M31
NASA Technical Reports Server (NTRS)
Kohler, M.; Mautz, W.
1949-01-01
In order to obtain insight into the flow conditions on tail surfaces on airplanes during spins, pressure-distribution measurements were performed on a rotating model of the design BFW-M31. For the time being, the tests were made for only one angle of attack (alpha = 60 degrees) and various angles of yaw and rudder angles. The results of these measurements are given; the construction of the model, and the test arrangement used are described. Measurements to be performed later and alterations planned in the test arrangement are pointed out.
Untangling Performance from Success
NASA Astrophysics Data System (ADS)
Yucesoy, Burcu; Barabasi, Albert-Laszlo
Fame, popularity and celebrity status, frequently used tokens of success, are often loosely related to, or even divorced from professional performance. This dichotomy is partly rooted in the difficulty to distinguish performance, an individual measure that captures the actions of a performer, from success, a collective measure that captures a community's reactions to these actions. Yet, finding the relationship between the two measures is essential for all areas that aim to objectively reward excellence, from science to business. Here we quantify the relationship between performance and success by focusing on tennis, an individual sport where the two quantities can be independently measured. We show that a predictive model, relying only on a tennis player's performance in tournaments, can accurately predict an athlete's popularity, both during a player's active years and after retirement. Hence the model establishes a direct link between performance and momentary popularity. The agreement between the performance-driven and observed popularity suggests that in most areas of human achievement exceptional visibility may be rooted in detectable performance measures. This research was supported by Air Force Office of Scientific Research (AFOSR) under agreement FA9550-15-1-0077.
Linking Quality and Spending to Measure Value for People with Serious Illness.
Ryan, Andrew M; Rodgers, Phillip E
2018-03-01
Healthcare payment is rapidly evolving to reward value by measuring and paying for quality and spending performance. Rewarding value for the care of seriously ill patients presents unique challenges. To evaluate the state of current efforts to measure and reward value for the care of seriously ill patients. We performed a PubMed search of articles related to (1) measures of spending for people with serious illness and (2) linking spending and quality measures and rewarding performance for the care of people with serious illness. We limited our search to U.S.-based studies published in English between January 1, 1960, and March 31, 2017. We supplemented this search by identifying public programs and other known initiatives that linked quality and spending for the seriously ill and extracted key program elements. Our search related to linking spending and quality measures and rewarding performance for the care of people with serious illness yielded 277 articles. We identified three current public programs that currently link measures of quality and spending-or are likely to within the next few years-the Oncology Care Model; the Comprehensive End-Stage Renal Disease Model; and Home Health Value-Based Purchasing. Models that link quality and spending consist of four core components: (1) measuring quality, (2) measuring spending, (3) the payment adjustment model, and (4) the linking/incentive model. We found that current efforts to reward value for seriously ill patients are targeted for specific patient populations, do not broadly encourage the use of palliative care, and have not closely aligned quality and spending measures related to palliative care. We develop recommendations for policymakers and stakeholders about how measures of spending and quality can be balanced in value-based payment programs.
Linking Quality and Spending to Measure Value for People with Serious Illness
Rodgers, Phillip E.
2018-01-01
Abstract Background: Healthcare payment is rapidly evolving to reward value by measuring and paying for quality and spending performance. Rewarding value for the care of seriously ill patients presents unique challenges. Objective: To evaluate the state of current efforts to measure and reward value for the care of seriously ill patients. Design: We performed a PubMed search of articles related to (1) measures of spending for people with serious illness and (2) linking spending and quality measures and rewarding performance for the care of people with serious illness. We limited our search to U.S.-based studies published in English between January 1, 1960, and March 31, 2017. We supplemented this search by identifying public programs and other known initiatives that linked quality and spending for the seriously ill and extracted key program elements. Results: Our search related to linking spending and quality measures and rewarding performance for the care of people with serious illness yielded 277 articles. We identified three current public programs that currently link measures of quality and spending—or are likely to within the next few years—the Oncology Care Model; the Comprehensive End-Stage Renal Disease Model; and Home Health Value-Based Purchasing. Models that link quality and spending consist of four core components: (1) measuring quality, (2) measuring spending, (3) the payment adjustment model, and (4) the linking/incentive model. We found that current efforts to reward value for seriously ill patients are targeted for specific patient populations, do not broadly encourage the use of palliative care, and have not closely aligned quality and spending measures related to palliative care. Conclusions: We develop recommendations for policymakers and stakeholders about how measures of spending and quality can be balanced in value-based payment programs. PMID:29091529
Hydrograph matching method for measuring model performance
NASA Astrophysics Data System (ADS)
Ewen, John
2011-09-01
SummaryDespite all the progress made over the years on developing automatic methods for analysing hydrographs and measuring the performance of rainfall-runoff models, automatic methods cannot yet match the power and flexibility of the human eye and brain. Very simple approaches are therefore being developed that mimic the way hydrologists inspect and interpret hydrographs, including the way that patterns are recognised, links are made by eye, and hydrological responses and errors are studied and remembered. In this paper, a dynamic programming algorithm originally designed for use in data mining is customised for use with hydrographs. It generates sets of "rays" that are analogous to the visual links made by the hydrologist's eye when linking features or times in one hydrograph to the corresponding features or times in another hydrograph. One outcome from this work is a new family of performance measures called "visual" performance measures. These can measure differences in amplitude and timing, including the timing errors between simulated and observed hydrographs in model calibration. To demonstrate this, two visual performance measures, one based on the Nash-Sutcliffe Efficiency and the other on the mean absolute error, are used in a total of 34 split-sample calibration-validation tests for two rainfall-runoff models applied to the Hodder catchment, northwest England. The customised algorithm, called the Hydrograph Matching Algorithm, is very simple to apply; it is given in a few lines of pseudocode.
Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane
NASA Technical Reports Server (NTRS)
Conners, Timothy R.
1992-01-01
An investigation is underway to determine the benefits of a new propulsion system optimization algorithm in an F-15 airplane. The performance seeking control (PSC) algorithm optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. As part of the PSC test program, the F-15 aircraft was operated on a horizontal thrust stand. Thrust was measured with highly accurate load cells. The measured thrust was compared to onboard model estimates and to results from posttest performance programs. Thrust changes using the various PSC modes were recorded. Those results were compared to benefits using the less complex highly integrated digital electronic control (HIDEC) algorithm. The PSC maximum thrust mode increased intermediate power thrust by 10 percent. The PSC engine model did very well at estimating measured thrust and closely followed the transients during optimization. Quantitative results from the evaluation of the algorithms and performance calculation models are included with emphasis on measured thrust results. The report presents a description of the PSC system and a discussion of factors affecting the accuracy of the thrust stand load measurements.
Terahertz radar cross section measurements.
Iwaszczuk, Krzysztof; Heiselberg, Henning; Jepsen, Peter Uhd
2010-12-06
We perform angle- and frequency-resolved radar cross section (RCS) measurements on objects at terahertz frequencies. Our RCS measurements are performed on a scale model aircraft of size 5-10 cm in polar and azimuthal configurations, and correspond closely to RCS measurements with conventional radar on full-size objects. The measurements are performed in a terahertz time-domain system with freely propagating terahertz pulses generated by tilted pulse front excitation of lithium niobate crystals and measured with sub-picosecond time resolution. The application of a time domain system provides ranging information and also allows for identification of scattering points such as weaponry attached to the aircraft. The shapes of the models and positions of reflecting parts are retrieved by the filtered back projection algorithm.
Morin, Ruth T; Axelrod, Bradley N
Latent Class Analysis (LCA) was used to classify a heterogeneous sample of neuropsychology data. In particular, we used measures of performance validity, symptom validity, cognition, and emotional functioning to assess and describe latent groups of functioning in these areas. A data-set of 680 neuropsychological evaluation protocols was analyzed using a LCA. Data were collected from evaluations performed for clinical purposes at an urban medical center. A four-class model emerged as the best fitting model of latent classes. The resulting classes were distinct based on measures of performance validity and symptom validity. Class A performed poorly on both performance and symptom validity measures. Class B had intact performance validity and heightened symptom reporting. The remaining two Classes performed adequately on both performance and symptom validity measures, differing only in cognitive and emotional functioning. In general, performance invalidity was associated with worse cognitive performance, while symptom invalidity was associated with elevated emotional distress. LCA appears useful in identifying groups within a heterogeneous sample with distinct performance patterns. Further, the orthogonal nature of performance and symptom validities is supported.
Hu, Ming-Hsia; Yeh, Chih-Jun; Chen, Tou-Rong; Wang, Ching-Yi
2014-01-01
A valid, time-efficient and easy-to-use instrument is important for busy clinical settings, large scale surveys, or community screening use. The purpose of this study was to validate the mobility hierarchical disability categorization model (an abbreviated model) by investigating its concurrent validity with the multidimensional hierarchical disability categorization model (a comprehensive model) and triangulating both models with physical performance measures in older adults. 604 community-dwelling older adults of at least 60 years in age volunteered to participate. Self-reported function on mobility, instrumental activities of daily living (IADL) and activities of daily living (ADL) domains were recorded and then the disability status determined based on both the multidimensional hierarchical categorization model and the mobility hierarchical categorization model. The physical performance measures, consisting of grip strength and usual and fastest gait speeds (UGS, FGS), were collected on the same day. Both categorization models showed high correlation (γs = 0.92, p < 0.001) and agreement (kappa = 0.61, p < 0.0001). Physical performance measures demonstrated significant different group means among the disability subgroups based on both categorization models. The results of multiple regression analysis indicated that both models individually explain similar amount of variance on all physical performances, with adjustments for age, sex, and number of comorbidities. Our results found that the mobility hierarchical disability categorization model is a valid and time efficient tool for large survey or screening use.
Performance evaluation of an agent-based occupancy simulation model
Luo, Xuan; Lam, Khee Poh; Chen, Yixing; ...
2017-01-17
Occupancy is an important factor driving building performance. Static and homogeneous occupant schedules, commonly used in building performance simulation, contribute to issues such as performance gaps between simulated and measured energy use in buildings. Stochastic occupancy models have been recently developed and applied to better represent spatial and temporal diversity of occupants in buildings. However, there is very limited evaluation of the usability and accuracy of these models. This study used measured occupancy data from a real office building to evaluate the performance of an agent-based occupancy simulation model: the Occupancy Simulator. The occupancy patterns of various occupant types weremore » first derived from the measured occupant schedule data using statistical analysis. Then the performance of the simulation model was evaluated and verified based on (1) whether the distribution of observed occupancy behavior patterns follows the theoretical ones included in the Occupancy Simulator, and (2) whether the simulator can reproduce a variety of occupancy patterns accurately. Results demonstrated the feasibility of applying the Occupancy Simulator to simulate a range of occupancy presence and movement behaviors for regular types of occupants in office buildings, and to generate stochastic occupant schedules at the room and individual occupant levels for building performance simulation. For future work, model validation is recommended, which includes collecting and using detailed interval occupancy data of all spaces in an office building to validate the simulated occupant schedules from the Occupancy Simulator.« less
Performance evaluation of an agent-based occupancy simulation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Xuan; Lam, Khee Poh; Chen, Yixing
Occupancy is an important factor driving building performance. Static and homogeneous occupant schedules, commonly used in building performance simulation, contribute to issues such as performance gaps between simulated and measured energy use in buildings. Stochastic occupancy models have been recently developed and applied to better represent spatial and temporal diversity of occupants in buildings. However, there is very limited evaluation of the usability and accuracy of these models. This study used measured occupancy data from a real office building to evaluate the performance of an agent-based occupancy simulation model: the Occupancy Simulator. The occupancy patterns of various occupant types weremore » first derived from the measured occupant schedule data using statistical analysis. Then the performance of the simulation model was evaluated and verified based on (1) whether the distribution of observed occupancy behavior patterns follows the theoretical ones included in the Occupancy Simulator, and (2) whether the simulator can reproduce a variety of occupancy patterns accurately. Results demonstrated the feasibility of applying the Occupancy Simulator to simulate a range of occupancy presence and movement behaviors for regular types of occupants in office buildings, and to generate stochastic occupant schedules at the room and individual occupant levels for building performance simulation. For future work, model validation is recommended, which includes collecting and using detailed interval occupancy data of all spaces in an office building to validate the simulated occupant schedules from the Occupancy Simulator.« less
Recent Progress Towards Predicting Aircraft Ground Handling Performance
NASA Technical Reports Server (NTRS)
Yager, T. J.; White, E. J.
1981-01-01
The significant progress which has been achieved in development of aircraft ground handling simulation capability is reviewed and additional improvements in software modeling identified. The problem associated with providing necessary simulator input data for adequate modeling of aircraft tire/runway friction behavior is discussed and efforts to improve this complex model, and hence simulator fidelity, are described. Aircraft braking performance data obtained on several wet runway surfaces is compared to ground vehicle friction measurements and, by use of empirically derived methods, good agreement between actual and estimated aircraft braking friction from ground vehilce data is shown. The performance of a relatively new friction measuring device, the friction tester, showed great promise in providing data applicable to aircraft friction performance. Additional research efforts to improve methods of predicting tire friction performance are discussed including use of an instrumented tire test vehicle to expand the tire friction data bank and a study of surface texture measurement techniques.
Accuracy assessment for a multi-parameter optical calliper in on line automotive applications
NASA Astrophysics Data System (ADS)
D'Emilia, G.; Di Gasbarro, D.; Gaspari, A.; Natale, E.
2017-08-01
In this work, a methodological approach based on the evaluation of the measurement uncertainty is applied to an experimental test case, related to the automotive sector. The uncertainty model for different measurement procedures of a high-accuracy optical gauge is discussed in order to individuate the best measuring performances of the system for on-line applications and when the measurement requirements are becoming more stringent. In particular, with reference to the industrial production and control strategies of high-performing turbochargers, two uncertainty models are proposed, discussed and compared, to be used by the optical calliper. Models are based on an integrated approach between measurement methods and production best practices to emphasize their mutual coherence. The paper shows the possible advantages deriving from the considerations that the measurement uncertainty modelling provides, in order to keep control of the uncertainty propagation on all the indirect measurements useful for production statistical control, on which basing further improvements.
On the estimation algorithm used in adaptive performance optimization of turbofan engines
NASA Technical Reports Server (NTRS)
Espana, Martin D.; Gilyard, Glenn B.
1993-01-01
The performance seeking control algorithm is designed to continuously optimize the performance of propulsion systems. The performance seeking control algorithm uses a nominal model of the propulsion system and estimates, in flight, the engine deviation parameters characterizing the engine deviations with respect to nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated engine deviation parameters may not reflect the engine's actual off-nominal condition. This factor has a necessary impact on the overall performance seeking control scheme exacerbated by the open-loop character of the algorithm. The effects produced by unknown measurement biases over the estimation algorithm are evaluated. This evaluation allows for identification of the most critical measurements for application of the performance seeking control algorithm to an F100 engine. An equivalence relation between the biases and engine deviation parameters stems from an observability study; therefore, it is undecided whether the estimated engine deviation parameters represent the actual engine deviation or whether they simply reflect the measurement biases. A new algorithm, based on the engine's (steady-state) optimization model, is proposed and tested with flight data. When compared with previous Kalman filter schemes, based on local engine dynamic models, the new algorithm is easier to design and tune and it reduces the computational burden of the onboard computer.
Papantoniou, Panagiotis
2018-04-03
The present research relies on 2 main objectives. The first is to investigate whether latent model analysis through a structural equation model can be implemented on driving simulator data in order to define an unobserved driving performance variable. Subsequently, the second objective is to investigate and quantify the effect of several risk factors including distraction sources, driver characteristics, and road and traffic environment on the overall driving performance and not in independent driving performance measures. For the scope of the present research, 95 participants from all age groups were asked to drive under different types of distraction (conversation with passenger, cell phone use) in urban and rural road environments with low and high traffic volume in a driving simulator experiment. Then, in the framework of the statistical analysis, a correlation table is presented investigating any of a broad class of statistical relationships between driving simulator measures and a structural equation model is developed in which overall driving performance is estimated as a latent variable based on several individual driving simulator measures. Results confirm the suitability of the structural equation model and indicate that the selection of the specific performance measures that define overall performance should be guided by a rule of representativeness between the selected variables. Moreover, results indicate that conversation with the passenger was not found to have a statistically significant effect, indicating that drivers do not change their performance while conversing with a passenger compared to undistracted driving. On the other hand, results support the hypothesis that cell phone use has a negative effect on driving performance. Furthermore, regarding driver characteristics, age, gender, and experience all have a significant effect on driving performance, indicating that driver-related characteristics play the most crucial role in overall driving performance. The findings of this study allow a new approach to the investigation of driving behavior in driving simulator experiments and in general. By the successful implementation of the structural equation model, driving behavior can be assessed in terms of overall performance and not through individual performance measures, which allows an important scientific step forward from piecemeal analyses to a sound combined analysis of the interrelationship between several risk factors and overall driving performance.
Comparison of measurement- and proxy-based Vs30 values in California
Yong, Alan K.
2016-01-01
This study was prompted by the recent availability of a significant amount of openly accessible measured VS30 values and the desire to investigate the trend of using proxy-based models to predict VS30 in the absence of measurements. Comparisons between measured and model-based values were performed. The measured data included 503 VS30 values collected from various projects for 482 seismographic station sites in California. Six proxy-based models—employing geologic mapping, topographic slope, and terrain classification—were also considered. Included was a new terrain class model based on the Yong et al. (2012) approach but recalibrated with updated measured VS30 values. Using the measured VS30 data as the metric for performance, the predictive capabilities of the six models were determined to be statistically indistinguishable. This study also found three models that tend to underpredict VS30 at lower velocities (NEHRP Site Classes D–E) and overpredict at higher velocities (Site Classes B–C).
An Experimental Study on the Iso-Content-Based Angle Similarity Measure.
ERIC Educational Resources Information Center
Zhang, Jin; Rasmussen, Edie M.
2002-01-01
Retrieval performance of the iso-content-based angle similarity measure within the angle, distance, conjunction, disjunction, and ellipse retrieval models is compared with retrieval performance of the distance similarity measure and the angle similarity measure. Results show the iso-content-based angle similarity measure achieves satisfactory…
A Spectral Evaluation of Models Performances in Mediterranean Oak Woodlands
NASA Astrophysics Data System (ADS)
Vargas, R.; Baldocchi, D. D.; Abramowitz, G.; Carrara, A.; Correia, A.; Kobayashi, H.; Papale, D.; Pearson, D.; Pereira, J.; Piao, S.; Rambal, S.; Sonnentag, O.
2009-12-01
Ecosystem processes are influenced by climatic trends at multiple temporal scales including diel patterns and other mid-term climatic modes, such as interannual and seasonal variability. Because interactions between biophysical components of ecosystem processes are complex, it is important to test how models perform in frequency (e.g. hours, days, weeks, months, years) and time (i.e. day of the year) domains in addition to traditional tests of annual or monthly sums. Here we present a spectral evaluation using wavelet time series analysis of model performance in seven Mediterranean Oak Woodlands that encompass three deciduous and four evergreen sites. We tested the performance of five models (CABLE, ORCHIDEE, BEPS, Biome-BGC, and JULES) on measured variables of gross primary production (GPP) and evapotranspiration (ET). In general, model performance fails at intermediate periods (e.g. weeks to months) likely because these models do not represent the water pulse dynamics that influence GPP and ET at these Mediterranean systems. To improve the performance of a model it is critical to identify first where and when the model fails. Only by identifying where a model fails we can improve the model performance and use them as prognostic tools and to generate further hypotheses that can be tested by new experiments and measurements.
Model Performance Evaluation and Scenario Analysis ...
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provides modelers with statistical goodness-of-fit measures that capture magnitude only, sequence only, and combined magnitude and sequence errors. The performance measures include error analysis, coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components and the reconstruction back to time series provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify if the source of uncertainty in the simulated data is due to the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify if mismatches between observed and simulated data result from magnitude or sequence related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the too
Measurement-based reliability/performability models
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen
1987-01-01
Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention in reducing model errors is by combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights could be derived for each model so that the developed multimodel predictions will result in improved predictions. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, "abcd" model and variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare multimodel scheme MM-1 with optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model (abcd model orVICmodel) with heteroscedastic error variance as well as from a hydrologic model that exhibits different structure than that of the candidate models (i.e., "abcd" model or VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally for all the models, whereas MM-O assigns higher weights for always the best-performing candidate model under the calibration period. Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs better than all single models and optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
NASA Astrophysics Data System (ADS)
Kishcha, P.; Starobinets, B.; Bozzano, R.; Pensieri, S.; Canepa, E.; Nickovie, S.; di Sarra, A.; Udisti, R.; Becagli, S.; Alpert, P.
2012-03-01
Sea-salt aerosol (SSA) could influence the Earth's climate acting as cloud condensation nuclei. However, there were no regular measurements of SSA in the open sea. At Tel-Aviv University, the DREAM-Salt prediction system has been producing daily forecasts of 3-D distribution of sea-salt aerosol concentrations over the Mediterranean Sea (http://wind.tau.ac.il/saltina/ salt.html). In order to evaluate the model performance in the open sea, daily modeled concentrations were compared directly with SSA measurements taken at the tiny island of Lampedusa, in the Central Mediterranean. In order to further test the robustness of the model, the model performance over the open sea was indirectly verified by comparing modeled SSA concentrations with wave height measurements collected by the ODAS Italia 1 buoy and the Llobregat buoy. Model-vs.-measurement comparisons show that the model is capable of producing realistic SSA concentrations and their day-today variations over the open sea, in accordance with observed wave height and wind speed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.
Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable–region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observationalmore » dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.« less
Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.
2017-11-29
Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable–region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observationalmore » dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.« less
NASA Astrophysics Data System (ADS)
Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.
2017-11-01
Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable-region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.
Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech
2012-12-01
To predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We have conducted a comprehensive survey of purity methods, and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UCBI model to assess the uncertainty of purity measurements, and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable to dynamically assess method performance characteristics, based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitates the introduction of more advanced analytical technologies during the method lifecycle.
Measures and limits of models of fixation selection.
Wilming, Niklas; Betz, Torsten; Kietzmann, Tim C; König, Peter
2011-01-01
Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound of these measures, based on image-independent properties of fixation data and between subject consistency respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source python code to compute the reference frame. Third, we show that the upper, between subject consistency bound holds only for models that predict averages of subject populations. Departing from this we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the required information that allow a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
NASA Astrophysics Data System (ADS)
Byun, D. W.; Rappenglueck, B.; Lefer, B.
2007-12-01
Accurate meteorological and photochemical modeling efforts are necessary to understand the measurements made during the Texas Air Quality Study (TexAQS-II). The main objective of the study is to understand the meteorological and chemical processes of high ozone and regional haze events in the Eastern Texas, including the Houston-Galveston metropolitan area. Real-time and retrospective meteorological and photochemical model simulations were performed to study key physical and chemical processes in the Houston Galveston Area. In particular, the Vertical Mixing Experiment (VME) at the University of Houston campus was performed on selected days during the TexAQS-II. Results of the MM5 meteorological model and CMAQ air quality model simulations were compared with the VME and other TexAQS-II measurements to understand the interaction of the boundary layer dynamics and photochemical evolution affecting Houston air quality.
Proactive Supply Chain Performance Management with Predictive Analytics
Stefanovic, Nenad
2014-01-01
Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different level of details. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to future business environment. PMID:25386605
Proactive supply chain performance management with predictive analytics.
Stefanovic, Nenad
2014-01-01
Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different level of details. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to future business environment.
Roberts, William L; Pugliano, Gina; Langenau, Erik; Boulet, John R
2012-08-01
Medical schools employ a variety of preadmission measures to select students most likely to succeed in the program. The Medical College Admission Test (MCAT) and the undergraduate college grade point average (uGPA) are two academic measures typically used to select students in medical school. The assumption that presently used preadmission measures can predict clinical skill performance on a medical licensure examination was evaluated within a validity argument framework (Kane 1992). A hierarchical generalized linear model tested relationships between the log-odds of failing a high-stakes medical licensure performance examination and matriculant academic and non-academic preadmission measures, controlling for student-and school-variables. Data includes 3,189 matriculants from 22 osteopathic medical schools tested in 2009-2010. Unconditional unit-specific model expected average log-odds of failing the examination across medical schools is -3.05 (se = 0.11) or 5%. Student-level estimated coefficients for MCAT Verbal Reasoning scores (0.03), Physical Sciences scores (0.05), Biological Sciences scores (0.04), uGPA(science) (0.07), and uGPA(non-science) (0.26) lacked association with the log-odds of failing the COMLEX-USA Level 2-PE, controlling for all other predictors in the model. Evidence from this study shows that present preadmission measures of academic ability are not related to later clinical skill performance. Given that clinical skill performance is an important part of medical practice, selection measures should be developed to identify students who will be successful in communication and be able to demonstrate the ability to systematically collect a medical history, perform a physical examination, and synthesize this information to diagnose and manage patient conditions.
Performance of the libraries in Isfahan University of Medical Sciences based on the EFQM model.
Karimi, Saeid; Atashpour, Bahareh; Papi, Ahmad; Nouri, Rasul; Hasanzade, Akbar
2014-01-01
Performance measurement is inevitable for university libraries. Hence, planning and establishing a constant and up-to-date measurement system is required for the libraries, especially the university libraries. The primary studies and analyses reveal that the EFQM Excellence Model has been efficient, and the administrative reform program has focused on the implementation of this model. Therefore, on the basis of these facts as well as the need for a measurement system, the researchers measured the performance of libraries in schools and hospitals supported by Isfahan University of Medical Sciences, using the EFQM Organizational Excellence Model. This descriptive research study was carried out by a cross-sectional survey method in 2011. This research study included librarians and library directors of Isfahan University of Medical Sciences (70 people). The validity of the instrument was measured by the specialists in the field of Management and Library Science. To measure the reliability of the questionnaire, the Cronbach's alpha coefficient value was measured (0.93). The t-test, ANOVA, and Spearman's rank correlation coefficient were used for measurements. The data were analyzed by SPSS. Data analysis revealed that the mean score of the performance measurement for the libraries under study and between nine dimensions the highest score was 65.3% for leadership dimension and the lowest scores were 55.1% for people and 55.1% for society results. In general, using the ninth EFQM model the average level of all dimensions, which is in good agreement with normal values, was assessed. However, compared to other results, the criterion people and society results were poor. It is Recommended by forming the expert committee on criterion people and society results by individuals concerned with the various conferences and training courses to improve the aspects.
Performance of the libraries in Isfahan University of Medical Sciences based on the EFQM model
Karimi, Saeid; Atashpour, Bahareh; Papi, Ahmad; Nouri, Rasul; Hasanzade, Akbar
2014-01-01
Introduction: Performance measurement is inevitable for university libraries. Hence, planning and establishing a constant and up-to-date measurement system is required for the libraries, especially the university libraries. The primary studies and analyses reveal that the EFQM Excellence Model has been efficient, and the administrative reform program has focused on the implementation of this model. Therefore, on the basis of these facts as well as the need for a measurement system, the researchers measured the performance of libraries in schools and hospitals supported by Isfahan University of Medical Sciences, using the EFQM Organizational Excellence Model. Materials and Methods: This descriptive research study was carried out by a cross-sectional survey method in 2011. This research study included librarians and library directors of Isfahan University of Medical Sciences (70 people). The validity of the instrument was measured by the specialists in the field of Management and Library Science. To measure the reliability of the questionnaire, the Cronbach's alpha coefficient value was measured (0.93). The t-test, ANOVA, and Spearman's rank correlation coefficient were used for measurements. The data were analyzed by SPSS. Results: Data analysis revealed that the mean score of the performance measurement for the libraries under study and between nine dimensions the highest score was 65.3% for leadership dimension and the lowest scores were 55.1% for people and 55.1% for society results. Conclusion: In general, using the ninth EFQM model the average level of all dimensions, which is in good agreement with normal values, was assessed. However, compared to other results, the criterion people and society results were poor. It is Recommended by forming the expert committee on criterion people and society results by individuals concerned with the various conferences and training courses to improve the aspects. PMID:25540795
Large/Complex Antenna Performance Validation for Spaceborne Radar/Radiometeric Instruments
NASA Technical Reports Server (NTRS)
Focardi, Paolo; Harrell, Jefferson; Vacchione, Joseph
2013-01-01
Over the past decade, Earth observing missions which employ spaceborne combined radar & radiometric instruments have been developed and implemented. These instruments include the use of large and complex deployable antennas whose radiation characteristics need to be accurately determined over 4 pisteradians. Given the size and complexity of these antennas, the performance of the flight units cannot be readily measured. In addition, the radiation performance is impacted by the presence of the instrument's service platform which cannot easily be included in any measurement campaign. In order to meet the system performance knowledge requirements, a two pronged approach has been employed. The first is to use modeling tools to characterize the system and the second is to build a scale model of the system and use RF measurements to validate the results of the modeling tools. This paper demonstrates the resulting level of agreement between scale model and numerical modeling for two recent missions: (1) the earlier Aquarius instrument currently in Earth orbit and (2) the upcoming Soil Moisture Active Passive (SMAP) mission. The results from two modeling approaches, Ansoft's High Frequency Structure Simulator (HFSS) and TICRA's General RF Applications Software Package (GRASP), were compared with measurements of approximately 1/10th scale models of the Aquarius and SMAP systems. Generally good agreement was found between the three methods but each approach had its shortcomings as will be detailed in this paper.
Learning Instance-Specific Predictive Models
Visweswaran, Shyam; Cooper, Gregory F.
2013-01-01
This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict a target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures and its performance was compared to that of several commonly used predictive algorithms, including nave Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average on all performance measures against all the comparison algorithms. PMID:25045325
NASA Astrophysics Data System (ADS)
Boehm, Holger F.; Link, Thomas M.; Monetti, Roberto A.; Mueller, Dirk; Rummeny, Ernst J.; Raeth, Christoph W.
2005-04-01
Osteoporosis is a metabolic bone disease leading to de-mineralization and increased risk of fracture. The two major factors that determine the biomechanical competence of bone are the degree of mineralization and the micro-architectural integrity. Today, modern imaging modalities (high resolution MRI, micro-CT) are capable of depicting structural details of trabecular bone tissue. From the image data, structural properties obtained by quantitative measures are analysed with respect to the presence of osteoporotic fractures of the spine (in-vivo) or correlated with biomechanical strength as derived from destructive testing (in-vitro). Fairly well established are linear structural measures in 2D that are originally adopted from standard histo-morphometry. Recently, non-linear techniques in 2D and 3D based on the scaling index method (SIM), the standard Hough transform (SHT), and the Minkowski Functionals (MF) have been introduced, which show excellent performance in predicting bone strength and fracture risk. However, little is known about the performance of the various parameters with respect to monitoring structural changes due to progression of osteoporosis or as a result of medical treatment. In this contribution, we generate models of trabecular bone with pre-defined structural properties which are exposed to simulated osteoclastic activity. We apply linear and non-linear texture measures to the models and analyse their performance with respect to detecting architectural changes. This study demonstrates, that the texture measures are capable of monitoring structural changes of complex model data. The diagnostic potential varies for the different parameters and is found to depend on the topological composition of the model and initial "bone density". In our models, non-linear texture measures tend to react more sensitively to small structural changes than linear measures. Best performance is observed for the 3rd and 4th Minkowski Functionals and for the scaling index method.
Modeling the Direct and Indirect Determinants of Different Types of Individual Job Performance
2008-06-01
cognitions , and self-regulation). A different model was found to describe the process depending on whether the performance dimension was an element of...performing the behaviors they indicated they intended to perform, and assembled a battery of existing instruments to measure cognitive ability, personality...model came from the task performance dimension. For this dimension, knowledge, skill, cognitive choice aspects of motivation, and self-regulation
Carter, Nathan T; Dalal, Dev K; Boyce, Anthony S; O'Connell, Matthew S; Kung, Mei-Chuan; Delgado, Kristin M
2014-07-01
The personality trait of conscientiousness has seen considerable attention from applied psychologists due to its efficacy for predicting job performance across performance dimensions and occupations. However, recent theoretical and empirical developments have questioned the assumption that more conscientiousness always results in better job performance, suggesting a curvilinear link between the 2. Despite these developments, the results of studies directly testing the idea have been mixed. Here, we propose this link has been obscured by another pervasive assumption known as the dominance model of measurement: that higher scores on traditional personality measures always indicate higher levels of conscientiousness. Recent research suggests dominance models show inferior fit to personality test scores as compared to ideal point models that allow for curvilinear relationships between traits and scores. Using data from 2 different samples of job incumbents, we show the rank-order changes that result from using an ideal point model expose a curvilinear link between conscientiousness and job performance 100% of the time, whereas results using dominance models show mixed results, similar to the current state of the literature. Finally, with an independent cross-validation sample, we show that selection based on predicted performance using ideal point scores results in more favorable objective hiring outcomes. Implications for practice and future research are discussed.
In Vivo Validation of Numerical Prediction for Turbulence Intensity in an Aortic Coarctation
Arzani, Amirhossein; Dyverfeldt, Petter; Ebbers, Tino; Shadden, Shawn C.
2013-01-01
This paper compares numerical predictions of turbulence intensity with in vivo measurement. Magnetic resonance imaging (MRI) was carried out on a 60-year-old female with a restenosed aortic coarctation. Time-resolved three-directional phase-contrast (PC) MRI data was acquired to enable turbulence intensity estimation. A contrast-enhanced MR angiography (MRA) and a time-resolved 2D PCMRI measurement were also performed to acquire data needed to perform subsequent image-based computational fluid dynamics (CFD) modeling. A 3D model of the aortic coarctation and surrounding vasculature was constructed from the MRA data, and physiologic boundary conditions were modeled to match 2D PCMRI and pressure pulse measurements. Blood flow velocity data was subsequently obtained by numerical simulation. Turbulent kinetic energy (TKE) was computed from the resulting CFD data. Results indicate relative agreement (error ≈10%) between the in vivo measurements and the CFD predictions of TKE. The discrepancies in modeled vs. measured TKE values were within expectations due to modeling and measurement errors. PMID:22016327
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2014-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steadystate information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan Walker
2015-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steadystate information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
ERIC Educational Resources Information Center
Kaya, Yasemin; Leite, Walter L.
2017-01-01
Cognitive diagnosis models are diagnostic models used to classify respondents into homogenous groups based on multiple categorical latent variables representing the measured cognitive attributes. This study aims to present longitudinal models for cognitive diagnosis modeling, which can be applied to repeated measurements in order to monitor…
NASA Astrophysics Data System (ADS)
Li, Hanyu; Syed, Mubashir; Yao, Yu-Dong; Kamakaris, Theodoros
2009-12-01
This paper investigates spectrum sharing issues in the unlicensed industrial, scientific, and medical (ISM) bands. It presents a radio frequency measurement setup and measurement results in 2.4 GHz. It then develops an analytical model to characterize the coexistence interference in the ISM bands, based on radio frequency measurement results in the 2.4 GHz. Outage performance using the interference model is examined for a hybrid direct-sequence frequency-hopping spread spectrum system. The utilization of beamforming techniques in the system is also investigated, and a simplified beamforming model is proposed to analyze the system performance using beamforming. Numerical results show that beamforming significantly improves the system outage performance. The work presented in this paper provides a quantitative evaluation of signal outages in a spectrum sharing environment. It can be used as a tool in the development process for future dynamic spectrum access models as well as engineering designs for applications in unlicensed bands.
Kim, Eun Sook; Wang, Yan
2017-01-01
Population heterogeneity in growth trajectories can be detected with growth mixture modeling (GMM). It is common that researchers compute composite scores of repeated measures and use them as multiple indicators of growth factors (baseline performance and growth) assuming measurement invariance between latent classes. Considering that the assumption of measurement invariance does not always hold, we investigate the impact of measurement noninvariance on class enumeration and parameter recovery in GMM through a Monte Carlo simulation study (Study 1). In Study 2, we examine the class enumeration and parameter recovery of the second-order growth mixture modeling (SOGMM) that incorporates measurement models at the first order level. Thus, SOGMM estimates growth trajectory parameters with reliable sources of variance, that is, common factor variance of repeated measures and allows heterogeneity in measurement parameters between latent classes. The class enumeration rates are examined with information criteria such as AIC, BIC, sample-size adjusted BIC, and hierarchical BIC under various simulation conditions. The results of Study 1 showed that the parameter estimates of baseline performance and growth factor means were biased to the degree of measurement noninvariance even when the correct number of latent classes was extracted. In Study 2, the class enumeration accuracy of SOGMM depended on information criteria, class separation, and sample size. The estimates of baseline performance and growth factor mean differences between classes were generally unbiased but the size of measurement noninvariance was underestimated. Overall, SOGMM is advantageous in that it yields unbiased estimates of growth trajectory parameters and more accurate class enumeration compared to GMM by incorporating measurement models. PMID:28928691
Zhang, Xu; Jin, Weiqi; Li, Jiakun; Wang, Xia; Li, Shuo
2017-04-01
Thermal imaging technology is an effective means of detecting hazardous gas leaks. Much attention has been paid to evaluation of the performance of gas leak infrared imaging detection systems due to several potential applications. The minimum resolvable temperature difference (MRTD) and the minimum detectable temperature difference (MDTD) are commonly used as the main indicators of thermal imaging system performance. This paper establishes a minimum detectable gas concentration (MDGC) performance evaluation model based on the definition and derivation of MDTD. We proposed the direct calculation and equivalent calculation method of MDGC based on the MDTD measurement system. We build an experimental MDGC measurement system, which indicates the MDGC model can describe the detection performance of a thermal imaging system to typical gases. The direct calculation, equivalent calculation, and direct measurement results are consistent. The MDGC and the minimum resolvable gas concentration (MRGC) model can effectively describe the performance of "detection" and "spatial detail resolution" of thermal imaging systems to gas leak, respectively, and constitute the main performance indicators of gas leak detection systems.
Thermal model of attic systems with radiant barriers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilkes, K.E.
This report summarizes the first phase of a project to model the thermal performance of radiant barriers. The objective of this phase of the project was to develop a refined model for the thermal performance of residential house attics, with and without radiant barriers, and to verify the model by comparing its predictions against selected existing experimental thermal performance data. Models for the thermal performance of attics with and without radiant barriers have been developed and implemented on an IBM PC/AT computer. The validity of the models has been tested by comparing their predictions with ceiling heat fluxes measured inmore » a number of laboratory and field experiments on attics with and without radiant barriers. Cumulative heat flows predicted by the models were usually within about 5 to 10 percent of measured values. In future phases of the project, the models for attic/radiant barrier performance will be coupled with a whole-house model and further comparisons with experimental data will be made. Following this, the models will be utilized to provide an initial assessment of the energy savings potential of radiant barriers in various configurations and under various climatic conditions. 38 refs., 14 figs., 22 tabs.« less
Measurement of noise and its correlation to performance and geometry of small aircraft propellers
NASA Astrophysics Data System (ADS)
Štorch, Vít; Nožička, Jiří; Brada, Martin; Gemperle, Jiří; Suchý, Jakub
2016-03-01
A set of small model and UAV propellers is measured both in terms of aerodynamic performance and acoustic noise under static conditions. Apart from obvious correlation of noise to tip speed and propeller diameter the influence of blade pitch, blade pitch distribution, efficiency and shape of the blade is sought. Using the measured performance data a computational model for calculation of aerodynamic noise of propellers will be validated. The range of selected propellers include both propellers designed for nearly static conditions and propellers that are running at highly offdesign conditions, which allows to investigate i.e. the effect of blade stall on both noise level and performance results.
Leakage flow simulation in a specific pump model
NASA Astrophysics Data System (ADS)
Dupont, P.; Bayeul-Lainé, A. C.; Dazin, A.; Bois, G.; Roussette, O.; Si, Q.
2014-03-01
This paper deals with the influence of leakage flow existing in SHF pump model on the analysis of internal flow behaviour inside the vane diffuser of the pump model performance using both experiments and calculations. PIV measurements have been performed at different hub to shroud planes inside one diffuser channel passage for a given speed of rotation and various flow rates. For each operating condition, the PIV measurements have been trigged with different angular impeller positions. The performances and the static pressure rise of the diffuser were also measured using a three-hole probe. The numerical simulations were carried out with Star CCM+ 8.06 code (RANS frozen and unsteady calculations). Comparisons between numerical and experimental results are presented and discussed for three flow rates. The performances of the diffuser obtained by numerical simulation results are compared to the performances obtained by three-hole probe indications. The comparisons show few influence of fluid leakage on global performances but a real improvement concerning the efficiency of the impeller, the pump and the velocity distributions. These results show that leakage is an important parameter that has to be taken into account in order to make improved comparisons between numerical approaches and experiments in such a specific model set up.
Modeling road-cycling performance.
Olds, T S; Norton, K I; Lowe, E L; Olive, S; Reay, F; Ly, S
1995-04-01
This paper presents a complete set of equations for a "first principles" mathematical model of road-cycling performance, including corrections for the effect of winds, tire pressure and wheel radius, altitude, relative humidity, rotational kinetic energy, drafting, and changed drag. The relevant physiological, biophysical, and environmental variables were measured in 41 experienced cyclists completing a 26-km road time trial. The correlation between actual and predicted times was 0.89 (P < or = 0.0001), with a mean difference of 0.74 min (1.73% of mean performance time) and a mean absolute difference of 1.65 min (3.87%). Multiple simulations were performed where model inputs were randomly varied using a normal distribution about the measured values with a SD equivalent to the estimated day-to-day variability or technical error of measurement in each of the inputs. This analysis yielded 95% confidence limits for the predicted times. The model suggests that the main physiological factors contributing to road-cycling performance are maximal O2 consumption, fractional utilization of maximal O2 consumption, mechanical efficiency, and projected frontal area. The model is then applied to some practical problems in road cycling: the effect of drafting, the advantage of using smaller front wheels, the effects of added mass, the importance of rotational kinetic energy, the effect of changes in drag due to changes in bicycle configuration, the normalization of performances under different conditions, and the limits of human performance.
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1988-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of apparent types of errors.
Performability modeling based on real data: A casestudy
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1987-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
Gokhale, Sharad; Raokhande, Namita
2008-05-01
There are several models that can be used to evaluate roadside air quality. The comparison of the operational performance of different models pertinent to local conditions is desirable so that the model that performs best can be identified. Three air quality models, namely the 'modified General Finite Line Source Model' (M-GFLSM) of particulates, the 'California Line Source' (CALINE3) model, and the 'California Line Source for Queuing & Hot Spot Calculations' (CAL3QHC) model have been identified for evaluating the air quality at one of the busiest traffic intersections in the city of Guwahati. These models have been evaluated statistically with the vehicle-derived airborne particulate mass emissions in two sizes, i.e. PM10 and PM2.5, the prevailing meteorology and the temporal distribution of the measured daily average PM10 and PM2.5 concentrations in wintertime. The study has shown that the CAL3QHC model would make better predictions compared to other models for varied meteorology and traffic conditions. The detailed study reveals that the agreements between the measured and the modeled PM10 and PM2.5 concentrations have been reasonably good for CALINE3 and CAL3QHC models. Further detailed analysis shows that the CAL3QHC model performed well compared to the CALINE3. The monthly performance measures have also led to the similar results. These two models have also outperformed for a class of wind speed velocities except for low winds (<1 m s(-1)), for which, the M-GFLSM model has shown the tendency of better performance for PM10. Nevertheless, the CAL3QHC model has outperformed for both the particulate sizes and for all the wind classes, which therefore can be optional for air quality assessment at urban traffic intersections.
Information and complexity measures for hydrologic model evaluation
USDA-ARS?s Scientific Manuscript database
Hydrological models are commonly evaluated through the residual-based performance measures such as the root-mean square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...
Intern Performance in Three Supervisory Models
ERIC Educational Resources Information Center
Womack, Sid T.; Hanna, Shellie L.; Callaway, Rebecca; Woodall, Peggy
2011-01-01
Differences in intern performance, as measured by a Praxis III-similar instrument were found between interns supervised in three supervisory models: Traditional triad model, cohort model, and distance supervision. Candidates in this study's particular form of distance supervision were not as effective as teachers as candidates in traditional-triad…
Salinas, Maria; López-Garrigós, Maite; Gutiérrez, Mercedes; Lugo, Javier; Sirvent, Jose Vicente; Uris, Joaquin
2010-01-01
Laboratory performance can be measured using a set of model key performance indicators (KPIs). The design and implementation of KPIs are important issues. KPI results from 7 years are reported and their implementation, monitoring, objectives, interventions, result reporting and delivery are analyzed. The KPIs of the entire laboratory process were obtained using Laboratory Information System (LIS) registers. These were collected automatically using a data warehouse application, spreadsheets and external quality program reports. Customer satisfaction was assessed using surveys. Nine model laboratory KPIs were proposed and measured. The results of some examples of KPIs used in our laboratory are reported. Their corrective measurements or the implementation of objectives led to improvement in the associated KPIs results. Measurement of laboratory performance using KPIs and a data warehouse application that continuously collects registers and calculates KPIs confirmed the reliability of indicators, indicator acceptability and usability for users, and continuous process improvement.
NASA Technical Reports Server (NTRS)
Brankovic, A.; Ryder, R. C., Jr.; Hendricks, R. C.; Liu, N.-S.; Shouse, D. T.; Roquemore, W. M.
2005-01-01
An investigation is performed to evaluate the performance of a computational fluid dynamics (CFD) tool for the prediction of the reacting flow in a liquid-fueled combustor that uses water injection for control of pollutant emissions. The experiment consists of a multisector, liquid-fueled combustor rig operated at different inlet pressures and temperatures, and over a range of fuel/air and water/fuel ratios. Fuel can be injected directly into the main combustion airstream and into the cavities. Test rig performance is characterized by combustor exit quantities such as temperature and emissions measurements using rakes and overall pressure drop from upstream plenum to combustor exit. Visualization of the flame is performed using gray scale and color still photographs and high-frame-rate videos. CFD simulations are performed utilizing a methodology that includes computer-aided design (CAD) solid modeling of the geometry, parallel processing over networked computers, and graphical and quantitative post-processing. Physical models include liquid fuel droplet dynamics and evaporation, with combustion modeled using a hybrid finite-rate chemistry model developed for Jet-A fuel. CFD and experimental results are compared for cases with cavity-only fueling, while numerical studies of cavity and main fueling was also performed. Predicted and measured trends in combustor exit temperature, CO and NOx are in general agreement at the different water/fuel loading rates, although quantitative differences exist between the predictions and measurements.
On the use of tower-flux measurements to assess the performance of global ecosystem models
NASA Astrophysics Data System (ADS)
El Maayar, M.; Kucharik, C.
2003-04-01
Global ecosystem models are important tools for the study of biospheric processes and their responses to environmental changes. Such models typically translate knowledge, gained from local observations, into estimates of regional or even global outcomes of ecosystem processes. A typical test of ecosystem models consists of comparing their output against tower-flux measurements of land surface-atmosphere exchange of heat and mass. To perform such tests, models are typically run using detailed information on soil properties (texture, carbon content,...) and vegetation structure observed at the experimental site (e.g., vegetation height, vegetation phenology, leaf photosynthetic characteristics,...). In global simulations, however, earth's vegetation is typically represented by a limited number of plant functional types (PFT; group of plant species that have similar physiological and ecological characteristics). For each PFT (e.g., temperate broadleaf trees, boreal conifer evergreen trees,...), which can cover a very large area, a set of typical physiological and physical parameters are assigned. Thus, a legitimate question arises: How does the performance of a global ecosystem model run using detailed site-specific parameters compare with the performance of a less detailed global version where generic parameters are attributed to a group of vegetation species forming a PFT? To answer this question, we used a multiyear dataset, measured at two forest sites with contrasting environments, to compare seasonal and interannual variability of surface-atmosphere exchange of water and carbon predicted by the Integrated BIosphere Simulator-Dynamic Global Vegetation Model. Two types of simulations were, thus, performed: a) Detailed runs: observed vegetation characteristics (leaf area index, vegetation height,...) and soil carbon content, in addition to climate and soil type, are specified for model run; and b) Generic runs: when only observed climates and soil types at the measurement sites are used to run the model. The generic runs were performed for the number of years equal to the current age of the forests, initialized with no vegetation and a soil carbon density equal to zero.
Marion, Bill; Smith, Benjamin
2017-03-27
Using performance data from some of the millions of installed photovoltaic (PV) modules with micro-inverters may afford the opportunity to provide ground-based solar resource data critical for developing PV projects. Here, a method was developed to back-solve for the direct normal irradiance (DNI) and the diffuse horizontal irradiance (DHI) from the measured ac power of south-facing PV module/micro-inverter systems. The method was validated using one year of irradiance and PV performance measurements for five PV systems, each with a different tilt/azimuth orientation, and located in Golden, Colorado. Compared to using a measured global horizontal irradiance for PV performance model input,more » using the back-solved values of DNI and DHI only increased the range of mean bias deviations from measured values by 0.6% for the modeled annual averages of the global tilt irradiance and ac power for the five PV systems. Correcting for angle-of-incidence effects is an important feature of the method to prevent underestimating the solar resource and for modeling the performance of PV systems with more dissimilar PV module orientations. The results for the method were also shown more favorable than the results when using an existing power projection method for estimating the ac power.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marion, Bill; Smith, Benjamin
Using performance data from some of the millions of installed photovoltaic (PV) modules with micro-inverters may afford the opportunity to provide ground-based solar resource data critical for developing PV projects. Here, a method was developed to back-solve for the direct normal irradiance (DNI) and the diffuse horizontal irradiance (DHI) from the measured ac power of south-facing PV module/micro-inverter systems. The method was validated using one year of irradiance and PV performance measurements for five PV systems, each with a different tilt/azimuth orientation, and located in Golden, Colorado. Compared to using a measured global horizontal irradiance for PV performance model input,more » using the back-solved values of DNI and DHI only increased the range of mean bias deviations from measured values by 0.6% for the modeled annual averages of the global tilt irradiance and ac power for the five PV systems. Correcting for angle-of-incidence effects is an important feature of the method to prevent underestimating the solar resource and for modeling the performance of PV systems with more dissimilar PV module orientations. The results for the method were also shown more favorable than the results when using an existing power projection method for estimating the ac power.« less
Validation of Storm Water Management Model Storm Control Measures Modules
NASA Astrophysics Data System (ADS)
Simon, M. A.; Platz, M. C.
2017-12-01
EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970's, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units and green roofs, and these were performed and reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.
NASA Astrophysics Data System (ADS)
Cunningham, Jessica D.
Newton's Universe (NU), an innovative teacher training program, strives to obtain measures from rural, middle school science teachers and their students to determine the impact of its distance learning course on understanding of temperature. No consensus exists on the most appropriate and useful method of analysis to measure change in psychological constructs over time. Several item response theory (IRT) models have been deemed useful in measuring change, which makes the choice of an IRT model not obvious. The appropriateness and utility of each model, including a comparison to a traditional analysis of variance approach, was investigated using middle school science student performance on an assessment over an instructional period. Predetermined criteria were outlined to guide model selection based on several factors including research questions, data properties, and meaningful interpretations to determine the most appropriate model for this study. All methods employed in this study reiterated one common interpretation of the data -- specifically, that the students of teachers with any NU course experience had significantly greater gains in performance over the instructional period. However, clear distinctions were made between an analysis of variance and the racked and stacked analysis using the Rasch model. Although limited research exists examining the usefulness of the Rasch model in measuring change in understanding over time, this study applied these methods and detailed plausible implications for data-driven decisions based upon results for NU and others. Being mindful of the advantages and usefulness of each method of analysis may help others make informed decisions about choosing an appropriate model to depict changes to evaluate other programs. Results may encourage other researchers to consider the meaningfulness of using IRT for this purpose. Results have implications for data-driven decisions for future professional development courses, in science education and other disciplines. KEYWORDS: Item Response Theory, Rasch Model, Racking and Stacking, Measuring Change in Student Performance, Newton's Universe teacher training
Modeling Verdict Outcomes Using Social Network Measures: The Watergate and Caviar Network Cases.
Masías, Víctor Hugo; Valle, Mauricio; Morselli, Carlo; Crespo, Fernando; Vargas, Augusto; Laengle, Sigifredo
2016-01-01
Modelling criminal trial verdict outcomes using social network measures is an emerging research area in quantitative criminology. Few studies have yet analyzed which of these measures are the most important for verdict modelling or which data classification techniques perform best for this application. To compare the performance of different techniques in classifying members of a criminal network, this article applies three different machine learning classifiers-Logistic Regression, Naïve Bayes and Random Forest-with a range of social network measures and the necessary databases to model the verdicts in two real-world cases: the U.S. Watergate Conspiracy of the 1970's and the now-defunct Canada-based international drug trafficking ring known as the Caviar Network. In both cases it was found that the Random Forest classifier did better than either Logistic Regression or Naïve Bayes, and its superior performance was statistically significant. This being so, Random Forest was used not only for classification but also to assess the importance of the measures. For the Watergate case, the most important one proved to be betweenness centrality while for the Caviar Network, it was the effective size of the network. These results are significant because they show that an approach combining machine learning with social network analysis not only can generate accurate classification models but also helps quantify the importance social network variables in modelling verdict outcomes. We conclude our analysis with a discussion and some suggestions for future work in verdict modelling using social network measures.
Evaluation of different models to estimate the global solar radiation on inclined surface
NASA Astrophysics Data System (ADS)
Demain, C.; Journée, M.; Bertrand, C.
2012-04-01
Global and diffuse solar radiation intensities are, in general, measured on horizontal surfaces, whereas stationary solar conversion systems (both flat plate solar collector and solar photovoltaic) are mounted on inclined surface to maximize the amount of solar radiation incident on the collector surface. Consequently, the solar radiation incident measured on a tilted surface has to be determined by converting solar radiation from horizontal surface to tilted surface of interest. This study evaluates the performance of 14 models transposing 10 minutes, hourly and daily diffuse solar irradiation from horizontal to inclined surface. Solar radiation data from 8 months (April to November 2011) which include diverse atmospheric conditions and solar altitudes, measured on the roof of the radiation tower of the Royal Meteorological Institute of Belgium in Uccle (Longitude 4.35°, Latitude 50.79°) were used for validation purposes. The individual model performance is assessed by an inter-comparison between the calculated and measured solar global radiation on the south-oriented surface tilted at 50.79° using statistical methods. The relative performance of the different models under different sky conditions has been studied. Comparison of the statistical errors between the different radiation models in function of the clearness index shows that some models perform better under one type of sky condition. Putting together different models acting under different sky conditions can lead to a diminution of the statistical error between global measured solar radiation and global estimated solar radiation. As models described in this paper have been developed for hourly data inputs, statistical error indexes are minimum for hourly data and increase for 10 minutes and one day frequency data.
The Compass Rose Effectiveness Model
ERIC Educational Resources Information Center
Spiers, Cynthia E.; Kiel, Dorothy; Hohenrink, Brad
2008-01-01
The effectiveness model focuses the institution on mission achievement through assessment and improvement planning. Eleven mission criteria, measured by key performance indicators, are aligned with the accountability interest of internal and external stakeholders. A Web-based performance assessment application supports the model, documenting the…
Quality of protection evaluation of security mechanisms.
Ksiezopolski, Bogdan; Zurek, Tomasz; Mokkas, Michail
2014-01-01
Recent research indicates that during the design of teleinformatic system the tradeoff between the systems performance and the system protection should be made. The traditional approach assumes that the best way is to apply the strongest possible security measures. Unfortunately, the overestimation of security measures can lead to the unreasonable increase of system load. This is especially important in multimedia systems where the performance has critical character. In many cases determination of the required level of protection and adjustment of some security measures to these requirements increase system efficiency. Such an approach is achieved by means of the quality of protection models where the security measures are evaluated according to their influence on the system security. In the paper, we propose a model for QoP evaluation of security mechanisms. Owing to this model, one can quantify the influence of particular security mechanisms on ensuring security attributes. The methodology of our model preparation is described and based on it the case study analysis is presented. We support our method by the tool where the models can be defined and QoP evaluation can be performed. Finally, we have modelled TLS cryptographic protocol and presented the QoP security mechanisms evaluation for the selected versions of this protocol.
A Simulation Model for Measuring Customer Satisfaction through Employee Satisfaction
NASA Astrophysics Data System (ADS)
Zondiros, Dimitris; Konstantopoulos, Nikolaos; Tomaras, Petros
2007-12-01
Customer satisfaction is defined as a measure of how a firm's product or service performs compared to customer's expectations. It has long been a subject of research due to its importance for measuring marketing and business performance. A lot of models have been developed for its measurement. This paper propose a simulation model using employee satisfaction as one of the most important factors leading to customer satisfaction (the others being expectations and disconfirmation of expectations). Data obtained from a two-year survey in customers of banks in Greece were used. The application of three approaches regarding employee satisfaction resulted in greater customer satisfaction when there is serious effort to keep employees satisfied.
Surface knowledge and risks to landing and roving - The scale problem
NASA Technical Reports Server (NTRS)
Bourke, Roger D.
1991-01-01
The role of surface information in the performance of surface exploration missions is discussed. Accurate surface models based on direct measurements or inference are considered to be an important component in mission risk management. These models can be obtained using high resolution orbital photography or a combination of laser profiling, thermal inertia measurements, and/or radar. It is concluded that strategies for Martian exploration should use high confidence models to achieve maximum performance and low risk.
Studying Resist Stochastics with the Multivariate Poisson Propagation Model
Naulleau, Patrick; Anderson, Christopher; Chao, Weilun; ...
2014-01-01
Progress in the ultimate performance of extreme ultraviolet resist has arguably decelerated in recent years suggesting an approach to stochastic limits both in photon counts and material parameters. Here we report on the performance of a variety of leading extreme ultraviolet resist both with and without chemical amplification. The measured performance is compared to stochastic modeling results using the Multivariate Poisson Propagation Model. The results show that the best materials are indeed nearing modeled performance limits.
Heiden, Marina; Garza, Jennifer; Trask, Catherine; Mathiassen, Svend Erik
2017-03-01
A cost-efficient approach for assessing working postures could be to build statistical models for predicting results of direct measurements from cheaper data, and apply these models to samples in which only the latter data are available. The present study aimed to build and assess the performance of statistical models predicting inclinometer-assessed trunk and arm posture among paper mill workers. Separate models were built using administrative data, workers' ratings of their exposure, and observations of the work from video recordings as predictors. Trunk and upper arm postures were measured using inclinometry on 28 paper mill workers during three work shifts each. Simultaneously, the workers were video filmed, and their postures were assessed by observation of the videos afterwards. Workers' ratings of exposure, and administrative data on staff and production during the shifts were also collected. Linear mixed models were fitted for predicting inclinometer-assessed exposure variables (median trunk and upper arm angle, proportion of time with neutral trunk and upper arm posture, and frequency of periods in neutral trunk and upper arm inclination) from administrative data, workers' ratings, and observations, respectively. Performance was evaluated in terms of Akaike information criterion, proportion of variance explained (R2), and standard error (SE) of the model estimate. For models performing well, validity was assessed by bootstrap resampling. Models based on administrative data performed poorly (R2 ≤ 15%) and would not be useful for assessing posture in this population. Models using workers' ratings of exposure performed slightly better (8% ≤ R2 ≤ 27% for trunk posture; 14% ≤ R2 ≤ 36% for arm posture). The best model was obtained when using observational data for predicting frequency of periods with neutral arm inclination. It explained 56% of the variance in the postural exposure, and its SE was 5.6. Bootstrap validation of this model showed similar expected performance in other samples (5th-95th percentile: R2 = 45-63%; SE = 5.1-6.2). Observational data had a better ability to predict inclinometer-assessed upper arm exposures than workers' ratings or administrative data. However, observational measurements are typically more expensive to obtain. The results encourage analyses of the cost-efficiency of modeling based on administrative data, workers' ratings, and observation, compared to the performance and cost of measuring exposure directly. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Cadieu, Charles F.; Hong, Ha; Yamins, Daniel L. K.; Pinto, Nicolas; Ardila, Diego; Solomon, Ethan A.; Majaj, Najib J.; DiCarlo, James J.
2014-01-01
The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds. PMID:25521294
The Critical Power Model as a Potential Tool for Anti-doping
Puchowicz, Michael J.; Mizelman, Eliran; Yogev, Assaf; Koehle, Michael S.; Townsend, Nathan E.; Clarke, David C.
2018-01-01
Existing doping detection strategies rely on direct and indirect biochemical measurement methods focused on detecting banned substances, their metabolites, or biomarkers related to their use. However, the goal of doping is to improve performance, and yet evidence from performance data is not considered by these strategies. The emergence of portable sensors for measuring exercise intensities and of player tracking technologies may enable the widespread collection of performance data. How these data should be used for doping detection is an open question. Herein, we review the basis by which performance models could be used for doping detection, followed by critically reviewing the potential of the critical power (CP) model as a prototypical performance model that could be used in this regard. Performance models are mathematical representations of performance data specific to the athlete. Some models feature parameters with physiological interpretations, changes to which may provide clues regarding the specific doping method. The CP model is a simple model of the power-duration curve and features two physiologically interpretable parameters, CP and W′. We argue that the CP model could be useful for doping detection mainly based on the predictable sensitivities of its parameters to ergogenic aids and other performance-enhancing interventions. However, our argument is counterbalanced by the existence of important limitations and unresolved questions that need to be addressed before the model is used for doping detection. We conclude by providing a simple worked example showing how it could be used and propose recommendations for its implementation. PMID:29928234
Characterization and modeling of 1.3 micron indium arsenide quantum dot lasers
NASA Astrophysics Data System (ADS)
Dikshit, Amit A.
2006-12-01
Quantum-dot (QD) lasers have the potential to offer superior characteristics compared to currently used QW lasers in optical fiber communications. In this work we have performed modeling and characterization of QD lasers with an aim to understand the physics in order to design better lasers in the future. A comprehensive analytical model is built which explains the observed temperature sensitivity of threshold current in QD lasers. The model shows that the ratio of excitons and free carriers is important to accurately model the carrier distribution and hence temperature performance of QD lasers. To understand the recombination mechanisms in QD lasers, carrier lifetime measurements were performed along with advanced numerical rate equation modeling. The carrier lifetime measurements were performed using the small-signal optical response and impedance technique. The rate equation models were then used to extract the recombination coefficients in QD lasers which represent the strength of various recombination mechanisms. Using these measurements and the rate equation models it is shown that Auger recombination is the dominant contribution to current and comprises approximately 80% of current at threshold. Further, we investigated the origin of the low injection efficiencies observed in QD lasers using a rate equation model that included the effect of inhomogeneous broadening. It is shown that the observed low injection efficiencies are likely a consequence of the cavity length vs. slope efficiency measurement technique, and therefore do not represent the intrinsic or true injection efficiencies in QD lasers. The limitation of this commonly used technique arises from the carrier occupation of non-lasing states in the inhomogeneously broadened QD ensemble.
ERIC Educational Resources Information Center
Suryaman
2018-01-01
Lecturer performance will affect the quality and carrying capacity of the sustainability of an organization, in this case the university. There are many models developed to measure the performance of teachers, but not much to discuss the influence of faculty performance itself towards sustainability of an organization. This study was conducted in…
Physical modelling in biomechanics.
Koehl, M A R
2003-01-01
Physical models, like mathematical models, are useful tools in biomechanical research. Physical models enable investigators to explore parameter space in a way that is not possible using a comparative approach with living organisms: parameters can be varied one at a time to measure the performance consequences of each, while values and combinations not found in nature can be tested. Experiments using physical models in the laboratory or field can circumvent problems posed by uncooperative or endangered organisms. Physical models also permit some aspects of the biomechanical performance of extinct organisms to be measured. Use of properly scaled physical models allows detailed physical measurements to be made for organisms that are too small or fast to be easily studied directly. The process of physical modelling and the advantages and limitations of this approach are illustrated using examples from our research on hydrodynamic forces on sessile organisms, mechanics of hydraulic skeletons, food capture by zooplankton and odour interception by olfactory antennules. PMID:14561350
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
A number of topics related to building a generalized distributed system model are discussed. The effects of distributed database modeling on evaluation of transaction rollbacks, the measurement of effects of distributed database models on transaction availability measures, and a performance analysis of static locking in replicated distributed database systems are covered.
ERIC Educational Resources Information Center
Church, Lewis
2010-01-01
This dissertation answers three research questions: (1) What are the characteristics of a combinatoric measure, based on the Average Search Length (ASL), that performs the same as a probabilistic version of the ASL?; (2) Does the combinatoric ASL measure produce the same performance result as the one that is obtained by ranking a collection of…
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2010-01-01
This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…
Interpreting incremental value of markers added to risk prediction models.
Pencina, Michael J; D'Agostino, Ralph B; Pencina, Karol M; Janssens, A Cecile J W; Greenland, Philip
2012-09-15
The discrimination of a risk prediction model measures that model's ability to distinguish between subjects with and without events. The area under the receiver operating characteristic curve (AUC) is a popular measure of discrimination. However, the AUC has recently been criticized for its insensitivity in model comparisons in which the baseline model has performed well. Thus, 2 other measures have been proposed to capture improvement in discrimination for nested models: the integrated discrimination improvement and the continuous net reclassification improvement. In the present study, the authors use mathematical relations and numerical simulations to quantify the improvement in discrimination offered by candidate markers of different strengths as measured by their effect sizes. They demonstrate that the increase in the AUC depends on the strength of the baseline model, which is true to a lesser degree for the integrated discrimination improvement. On the other hand, the continuous net reclassification improvement depends only on the effect size of the candidate variable and its correlation with other predictors. These measures are illustrated using the Framingham model for incident atrial fibrillation. The authors conclude that the increase in the AUC, integrated discrimination improvement, and net reclassification improvement offer complementary information and thus recommend reporting all 3 alongside measures characterizing the performance of the final model.
Keep it simple? Predicting primary health care costs with clinical morbidity measures
Brilleman, Samuel L.; Gravelle, Hugh; Hollinghurst, Sandra; Purdy, Sarah; Salisbury, Chris; Windmeijer, Frank
2014-01-01
Models of the determinants of individuals’ primary care costs can be used to set capitation payments to providers and to test for horizontal equity. We compare the ability of eight measures of patient morbidity and multimorbidity to predict future primary care costs and examine capitation payments based on them. The measures were derived from four morbidity descriptive systems: 17 chronic diseases in the Quality and Outcomes Framework (QOF); 17 chronic diseases in the Charlson scheme; 114 Expanded Diagnosis Clusters (EDCs); and 68 Adjusted Clinical Groups (ACGs). These were applied to patient records of 86,100 individuals in 174 English practices. For a given disease description system, counts of diseases and sets of disease dummy variables had similar explanatory power. The EDC measures performed best followed by the QOF and ACG measures. The Charlson measures had the worst performance but still improved markedly on models containing only age, gender, deprivation and practice effects. Comparisons of predictive power for different morbidity measures were similar for linear and exponential models, but the relative predictive power of the models varied with the morbidity measure. Capitation payments for an individual patient vary considerably with the different morbidity measures included in the cost model. Even for the best fitting model large differences between expected cost and capitation for some types of patient suggest incentives for patient selection. Models with any of the morbidity measures show higher cost for more deprived patients but the positive effect of deprivation on cost was smaller in better fitting models. PMID:24657375
Dissimilarity based Partial Least Squares (DPLS) for genomic prediction from SNPs.
Singh, Priyanka; Engel, Jasper; Jansen, Jeroen; de Haan, Jorn; Buydens, Lutgarde Maria Celina
2016-05-04
Genomic prediction (GP) allows breeders to select plants and animals based on their breeding potential for desirable traits, without lengthy and expensive field trials or progeny testing. We have proposed to use Dissimilarity-based Partial Least Squares (DPLS) for GP. As a case study, we use the DPLS approach to predict Bacterial wilt (BW) in tomatoes using SNPs as predictors. The DPLS approach was compared with the Genomic Best-Linear Unbiased Prediction (GBLUP) and single-SNP regression with SNP as a fixed effect to assess the performance of DPLS. Eight genomic distance measures were used to quantify relationships between the tomato accessions from the SNPs. Subsequently, each of these distance measures was used to predict the BW using the DPLS prediction model. The DPLS model was found to be robust to the choice of distance measures; similar prediction performances were obtained for each distance measure. DPLS greatly outperformed the single-SNP regression approach, showing that BW is a comprehensive trait dependent on several loci. Next, the performance of the DPLS model was compared to that of GBLUP. Although GBLUP and DPLS are conceptually very different, the prediction quality (PQ) measured by DPLS models were similar to the prediction statistics obtained from GBLUP. A considerable advantage of DPLS is that the genotype-phenotype relationship can easily be visualized in a 2-D scatter plot. This so-called score-plot provides breeders an insight to select candidates for their future breeding program. DPLS is a highly appropriate method for GP. The model prediction performance was similar to the GBLUP and far better than the single-SNP approach. The proposed method can be used in combination with a wide range of genomic dissimilarity measures and genotype representations such as allele-count, haplotypes or allele-intensity values. Additionally, the data can be insightfully visualized by the DPLS model, allowing for selection of desirable candidates from the breeding experiments. In this study, we have assessed the DPLS performance on a single trait.
Temporal evolution modeling of hydraulic and water quality performance of permeable pavements
NASA Astrophysics Data System (ADS)
Huang, Jian; He, Jianxun; Valeo, Caterina; Chu, Angus
2016-02-01
A mathematical model for predicting hydraulic and water quality performance in both the short- and long-term is proposed based on field measurements for three types of permeable pavements: porous asphalt (PA), porous concrete (PC), and permeable inter-locking concrete pavers (PICP). The model was applied to three field-scale test sites in Calgary, Alberta, Canada. The model performance was assessed in terms of hydraulic parameters including time to peak, peak flow and water balance and a water quality variable (the removal rate of total suspended solids). A total of 20 simulated storm events were used for model calibration and verification processes. The proposed model can simulate the outflow hydrographs with a coefficient of determination (R2) ranging from 0.762 to 0.907, and normalized root-mean-square deviation (NRMSD) ranging from 13.78% to 17.83%. Comparison of the time to peak flow, peak flow, runoff volume and TSS removal rates between the measured and modeled values in model verification phase had a maximum difference of 11%. The results demonstrate that the proposed model is capable of capturing the temporal dynamics of the pavement performance. Therefore, the model has great potential as a practical modeling tool for permeable pavement design and performance assessment.
Vogel, Annike B; Kilic, Fatih; Schmidt, Falko; Rübel, Sebastian; Lapatki, Bernd G
2015-07-01
Digital jaw models offer more extensive possibilities for analysis than casts and make it easier to share and archive relevant information. The aim of this study was to compare the dimensional accuracy of scans performed on alginate impressions and on stone models to reference scans performed on underlying resin models. Precision spheres 5 mm in diameter were occlusally fitted to the sites of the first premolars and first molars on a pair of jaw models fabricated from resin. A structured-light scanner was used for digitization. Once the two reference models had been scanned, alginate impressions were taken and scanned after no later than 1 h. A third series of scans was performed on type III stone models derived from the impressions. All scans were analyzed by performing five repeated measurements to determine the distances between the various sphere centers. Compared to the reference scans, the stone-model scans were larger by a mean of 73.6 µm (maxilla) or 65.2 µm (mandible). The impression scans were only larger by 7.7 µm (maxilla) or smaller by 0.7 µm (mandible). Median standard deviations over the five repeated measurements of 1.0 µm for the reference scans, 2.35 µm for the impression scans, and 2.0 µm for the stone-model scans indicate that the values measured in this study were adequately reproducible. Alginate impressions can be suitably digitized by structured-light scanning and offer considerably better dimensional accuracy than stone models. Apparently, however, both impression scans and stone-model scans can offer adequate precision for orthodontic purposes. The main issue of impression scans (which is incomplete representation of model surfaces) is being systematically explored in a follow-up study.
Xu, Wei; Riley, Erin A; Austin, Elena; Sasakura, Miyoko; Schaal, Lanae; Gould, Timothy R; Hartin, Kris; Simpson, Christopher D; Sampson, Paul D; Yost, Michael G; Larson, Timothy V; Xiu, Guangli; Vedal, Sverre
2017-03-01
Air pollution exposure prediction models can make use of many types of air monitoring data. Fixed location passive samples typically measure concentrations averaged over several days to weeks. Mobile monitoring data can generate near continuous concentration measurements. It is not known whether mobile monitoring data are suitable for generating well-performing exposure prediction models or how they compare with other types of monitoring data in generating exposure models. Measurements from fixed site passive samplers and mobile monitoring platform were made over a 2-week period in Baltimore in the summer and winter months in 2012. Performance of exposure prediction models for long-term nitrogen oxides (NO X ) and ozone (O 3 ) concentrations were compared using a state-of-the-art approach for model development based on land use regression (LUR) and geostatistical smoothing. Model performance was evaluated using leave-one-out cross-validation (LOOCV). Models performed well using the mobile peak traffic monitoring data for both NO X and O 3 , with LOOCV R 2 s of 0.70 and 0.71, respectively, in the summer, and 0.90 and 0.58, respectively, in the winter. Models using 2-week passive samples for NO X had LOOCV R 2 s of 0.60 and 0.65 in the summer and winter months, respectively. The passive badge sampling data were not adequate for developing models for O 3 . Mobile air monitoring data can be used to successfully build well-performing LUR exposure prediction models for NO X and O 3 and are a better source of data for these models than 2-week passive badge data.
Net reclassification index at event rate: properties and relationships.
Pencina, Michael J; Steyerberg, Ewout W; D'Agostino, Ralph B
2017-12-10
The net reclassification improvement (NRI) is an attractively simple summary measure quantifying improvement in performance because of addition of new risk marker(s) to a prediction model. Originally proposed for settings with well-established classification thresholds, it quickly extended into applications with no thresholds in common use. Here we aim to explore properties of the NRI at event rate. We express this NRI as a difference in performance measures for the new versus old model and show that the quantity underlying this difference is related to several global as well as decision analytic measures of model performance. It maximizes the relative utility (standardized net benefit) across all classification thresholds and can be viewed as the Kolmogorov-Smirnov distance between the distributions of risk among events and non-events. It can be expressed as a special case of the continuous NRI, measuring reclassification from the 'null' model with no predictors. It is also a criterion based on the value of information and quantifies the reduction in expected regret for a given regret function, casting the NRI at event rate as a measure of incremental reduction in expected regret. More generally, we find it informative to present plots of standardized net benefit/relative utility for the new versus old model across the domain of classification thresholds. Then, these plots can be summarized with their maximum values, and the increment in model performance can be described by the NRI at event rate. We provide theoretical examples and a clinical application on the evaluation of prognostic biomarkers for atrial fibrillation. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
A prediction model for lift-fan simulator performance. M.S. Thesis - Cleveland State Univ.
NASA Technical Reports Server (NTRS)
Yuska, J. A.
1972-01-01
The performance characteristics of a model VTOL lift-fan simulator installed in a two-dimensional wing are presented. The lift-fan simulator consisted of a 15-inch diameter fan driven by a turbine contained in the fan hub. The performance of the lift-fan simulator was measured in two ways: (1) the calculated momentum thrust of the fan and turbine (total thrust loading), and (2) the axial-force measured on a load cell force balance (axial-force loading). Tests were conducted over a wide range of crossflow velocities, corrected tip speeds, and wing angle of attack. A prediction modeling technique was developed to help in analyzing the performance characteristics of lift-fan simulators. A multiple linear regression analysis technique is presented which calculates prediction model equations for the dependent variables.
Selecting cockpit functions for speech I/O technology
NASA Technical Reports Server (NTRS)
Simpson, C. A.
1985-01-01
A general methodology for the initial selection of functions for speech generation and speech recognition technology is discussed. The SCR (Stimulus/Central-Processing/Response) compatibility model of Wickens et al. (1983) is examined, and its application is demonstrated for a particular cockpit display problem. Some limits of the applicability of that model are illustrated in the context of predicting overall pilot-aircraft system performance. A program of system performance measurement is recommended for the evaluation of candidate systems. It is suggested that no one measure of system performance can necessarily be depended upon to the exclusion of others. Systems response time, system accuracy, and pilot ratings are all important measures. Finally, these measures must be collected in the context of the total flight task environment.
Modeling and performance assessment in QinetiQ of EO and IR airborne reconnaissance systems
NASA Astrophysics Data System (ADS)
Williams, John W.; Potter, Gary E.
2002-11-01
QinetiQ are the technical authority responsible for specifying the performance requirements for the procurement of airborne reconnaissance systems, on behalf of the UK MoD. They are also responsible for acceptance of delivered systems, overseeing and verifying the installed system performance as predicted and then assessed by the contractor. Measures of functional capability are central to these activities. The conduct of these activities utilises the broad technical insight and wide range of analysis tools and models available within QinetiQ. This paper focuses on the tools, methods and models that are applicable to systems based on EO and IR sensors. The tools, methods and models are described, and representative output for systems that QinetiQ has been responsible for is presented. The principle capability applicable to EO and IR airborne reconnaissance systems is the STAR (Simulation Tools for Airborne Reconnaissance) suite of models. STAR generates predictions of performance measures such as GRD (Ground Resolved Distance) and GIQE (General Image Quality) NIIRS (National Imagery Interpretation Rating Scales). It also generates images representing sensor output, using the scene generation software CAMEO-SIM and the imaging sensor model EMERALD. The simulated image 'quality' is fully correlated with the predicted non-imaging performance measures. STAR also generates image and table data that is compliant with STANAG 7023, which may be used to test ground station functionality.
Impact of spatial variability and sampling design on model performance
NASA Astrophysics Data System (ADS)
Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes
2017-04-01
Many environmental physical and chemical parameters as well as species distributions display a spatial variability at different scales. In case measurements are very costly in labour time or money a choice has to be made between a high sampling resolution at small scales and a low spatial cover of the study area or a lower sampling resolution at the small scales resulting in local data uncertainties with a better spatial cover of the whole area. This dilemma is often faced in the design of field sampling campaigns for large scale studies. When the gathered field data are subsequently used for modelling purposes the choice of sampling design and resulting data quality influence the model performance criteria. We studied this influence with a virtual model study based on a large dataset of field information on spatial variation of earthworms at different scales. Therefore we built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg in Germany). First of all the field scale abundance of earthworms was estimated using a catchment scale model based on 65 field measurements. Subsequently the high small scale variability was added using semi-variograms, based on five fields with a total of 430 measurements divided in a spatially nested sampling design over these fields, to estimate the nugget, range and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one up to 50 random points per field. We then used these data to rebuild the catchment scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The results of the models show clearly that a large part of the non-explained deviance of the models is due to the very high small scale variability in earthworm abundance: the models based on single virtual sampling points on average obtain an explained deviance of 0.20 and a correlation coefficient of 0.64. With increasing sampling points per field, we averaged the measured abundance of the sampling within each field to obtain a more representative value of the field average. Doubling the samplings per field strongly improved the model performance criteria (explained deviance 0.38 and correlation coefficient 0.73). With 50 sampling points per field the performance criteria were 0.91 and 0.97 respectively for explained deviance and correlation coefficient. The relationship between number of samplings and performance criteria can be described with a saturation curve. Beyond five samples per field the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at sampling scale on model performance and the implications for sampling design and assessment of model results as well as ecological inferences.
Liu, Chuan-Fen; Sales, Anne E; Sharp, Nancy D; Fishman, Paul; Sloan, Kevin L; Todd-Stenberg, Jeff; Nichol, W Paul; Rosen, Amy K; Loveland, Susan
2003-01-01
Objective To compare the rankings for health care utilization performance measures at the facility level in a Veterans Health Administration (VHA) health care delivery network using pharmacy- and diagnosis-based case-mix adjustment measures. Data Sources/Study Setting The study included veterans who used inpatient or outpatient services in Veterans Integrated Service Network (VISN) 20 during fiscal year 1998 (October 1997 to September 1998; N=126,076). Utilization and pharmacy data were extracted from VHA national databases and the VISN 20 data warehouse. Study Design We estimated concurrent regression models using pharmacy or diagnosis information in the base year (FY1998) to predict health service utilization in the same year. Utilization measures included bed days of care for inpatient care and provider visits for outpatient care. Principal Findings Rankings of predicted utilization measures across facilities vary by case-mix adjustment measure. There is greater consistency within the diagnosis-based models than between the diagnosis- and pharmacy-based models. The eight facilities were ranked differently by the diagnosis- and pharmacy-based models. Conclusions Choice of case-mix adjustment measure affects rankings of facilities on performance measures, raising concerns about the validity of profiling practices. Differences in rankings may reflect differences in comparability of data capture across facilities between pharmacy and diagnosis data sources, and unstable estimates due to small numbers of patients in a facility. PMID:14596393
Data Envelopment Analysis (DEA) Model in Operation Management
NASA Astrophysics Data System (ADS)
Malik, Meilisa; Efendi, Syahril; Zarlis, Muhammad
2018-01-01
Quality management is an effective system in operation management to develops, maintains, and improves quality from groups of companies that allow marketing, production, and service at the most economycal level as well as ensuring customer satisfication. Many companies are practicing quality management to improve their bussiness performance. One of performance measurement is through measurement of efficiency. One of the tools can be used to assess efficiency of companies performance is Data Envelopment Analysis (DEA). The aim of this paper is using Data Envelopment Analysis (DEA) model to assess efficiency of quality management. In this paper will be explained CCR, BCC, and SBM models to assess efficiency of quality management.
Effect Size Measures for Differential Item Functioning in a Multidimensional IRT Model
ERIC Educational Resources Information Center
Suh, Youngsuk
2016-01-01
This study adapted an effect size measure used for studying differential item functioning (DIF) in unidimensional tests and extended the measure to multidimensional tests. Two effect size measures were considered in a multidimensional item response theory model: signed weighted P-difference and unsigned weighted P-difference. The performance of…
Final Technical Report: Advanced Measurement and Analysis of PV Derate Factors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Bruce Hardison; Burton, Patrick D.; Hansen, Clifford
2015-12-01
The Advanced Measurement and Analysis of PV Derate Factors project focuses on improving the accuracy and reducing the uncertainty of PV performance model predictions by addressing a common element of all PV performance models referred to as “derates”. Widespread use of “rules of thumb”, combined with significant uncertainty regarding appropriate values for these factors contribute to uncertainty in projected energy production.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram
Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empiricallymore » measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.« less
ERIC Educational Resources Information Center
Ahmed, Wondimu; Bruinsma, Marjon
2006-01-01
The purpose of this study was to propose and test a motivational model of performance by integrating constructs from self-concept and self-determination theories and to explore cultural group differences in the model. To this end, self-report measures of global self-esteem, academic self-concept, academic motivation and academic performance were…
NASA Astrophysics Data System (ADS)
Wong, David W. C.; Choy, K. L.; Chow, Harry K. H.; Lin, Canhong
2014-06-01
For the most rapidly growing economic entity in the world, China, a new logistics operation called the indirect cross-border supply chain model has recently emerged. The primary idea of this model is to reduce logistics costs by storing goods at a bonded warehouse with low storage cost in certain Chinese regions, such as the Pearl River Delta (PRD). This research proposes a performance measurement system (PMS) framework to assess the direct and indirect cross-border supply chain models. The PMS covers four categories including cost, time, quality and flexibility in the assessment of the performance of direct and indirect models. Furthermore, a survey was conducted to investigate the logistics performance of third party logistics (3PLs) at the PRD regions, including Guangzhou, Shenzhen and Hong Kong. The significance of the proposed PMS framework allows 3PLs accurately pinpoint the weakness and strengths of it current operations policy at four major performance measurement categories. Hence, this helps 3PLs further enhance the competitiveness and operations efficiency through better resources allocation at the area of warehousing and transportation.
Performance assessment of a compressive sensing single-pixel imaging system
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Preece, Bradley L.
2017-04-01
Conventional sensors measure the light incident at each pixel in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene, and then using a companion process to recover the image. CS has the potential to acquire imagery with equivalent information content to a large format array while using smaller, cheaper, and lower bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance between physical considerations, reconstruction accuracy, and reconstruction speed to meet operational requirements. Performance modeling of CS imagers is challenging due to the complexity and nonlinearity of the system and reconstruction algorithm. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of a two-handheld object target set was collected using an shortwave infrared single-pixel CS camera for various ranges and number of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled by mapping the nonlinear degradations to an equivalent linear shift invariant model. Finally, the limitations of CS modeling techniques are discussed.
Identification of the numerical model of FEM in reference to measurements in situ
NASA Astrophysics Data System (ADS)
Jukowski, Michał; Bec, Jarosław; Błazik-Borowa, Ewa
2018-01-01
The paper deals with the verification of various numerical models in relation to the pilot-phase measurements of a rail bridge subjected to dynamic loading. Three types of FEM models were elaborated for this purpose. Static, modal and dynamic analyses were performed. The study consisted of measuring the acceleration values of the structural components of the object at the moment of the train passing. Based on this, FFT analysis was performed, the main natural frequencies of the bridge were determined, the structural damping ratio and the dynamic amplification factor (DAF) were calculated and compared with the standard values. Calculations were made using Autodesk Simulation Multiphysics (Algor).
Modeling and measurement of fault-tolerant multiprocessors
NASA Technical Reports Server (NTRS)
Shin, K. G.; Woodbury, M. H.; Lee, Y. H.
1985-01-01
The workload effects on computer performance are addressed first for a highly reliable unibus multiprocessor used in real-time control. As an approach to studing these effects, a modified Stochastic Petri Net (SPN) is used to describe the synchronous operation of the multiprocessor system. From this model the vital components affecting performance can be determined. However, because of the complexity in solving the modified SPN, a simpler model, i.e., a closed priority queuing network, is constructed that represents the same critical aspects. The use of this model for a specific application requires the partitioning of the workload into job classes. It is shown that the steady state solution of the queuing model directly produces useful results. The use of this model in evaluating an existing system, the Fault Tolerant Multiprocessor (FTMP) at the NASA AIRLAB, is outlined with some experimental results. Also addressed is the technique of measuring fault latency, an important microscopic system parameter. Most related works have assumed no or a negligible fault latency and then performed approximate analyses. To eliminate this deficiency, a new methodology for indirectly measuring fault latency is presented.
A Comparison of Modeled Pollutant Profiles With MOZAIC Aircraft Measurements
In this study, we use measurements performed under the MOZAIC program to evaluate vertical profiles of meteorological parameters, CO, and ozone that were simulated for the year 2006 with several versions of the WRF/CMAQ modeling system. Model updates, including WRF nudging strate...
Wong, Sabrina T; Yin, Delu; Bhattacharyya, Onil; Wang, Bin; Liu, Liqun; Chen, Bowen
2010-11-18
China has had no effective and systematic information system to provide guidance for strengthening PHC (Primary Health Care) or account to citizens on progress. We report on the development of the China results-based Logic Model for Community Health Facilities and Stations (CHS) and a set of relevant PHC indicators intended to measure CHS priorities. We adapted the PHC Results Based Logic Model developed in Canada and current work conducted in the community health system in China to create the China CHS Logic Model framework. We used a staged approach by first constructing the framework and indicators and then validating their content through an interactive process involving policy analysis, critical review of relevant literature and multiple stakeholder consultation. The China CHS Logic Model includes inputs, activities, outputs and outcomes with a total of 287 detailed performance indicators. In these indicators, 31 indicators measure inputs, 64 measure activities, 105 measure outputs, and 87 measure immediate (n = 65), intermediate (n = 15), or final (n = 7) outcomes. A Logic Model framework can be useful in planning, implementation, analysis and evaluation of PHC at a system and service level. The development and content validation of the China CHS Logic Model and subsequent indicators provides a means for stronger accountability and a clearer sense of overall direction and purpose needed to renew and strengthen the PHC system in China. Moreover, this work will be useful in moving towards developing a PHC information system and performance measurement across districts in urban China, and guiding the pursuit of quality in PHC.
2010-01-01
Background China has had no effective and systematic information system to provide guidance for strengthening PHC (Primary Health Care) or account to citizens on progress. We report on the development of the China results-based Logic Model for Community Health Facilities and Stations (CHS) and a set of relevant PHC indicators intended to measure CHS priorities. Methods We adapted the PHC Results Based Logic Model developed in Canada and current work conducted in the community health system in China to create the China CHS Logic Model framework. We used a staged approach by first constructing the framework and indicators and then validating their content through an interactive process involving policy analysis, critical review of relevant literature and multiple stakeholder consultation. Results The China CHS Logic Model includes inputs, activities, outputs and outcomes with a total of 287 detailed performance indicators. In these indicators, 31 indicators measure inputs, 64 measure activities, 105 measure outputs, and 87 measure immediate (n = 65), intermediate (n = 15), or final (n = 7) outcomes. Conclusion A Logic Model framework can be useful in planning, implementation, analysis and evaluation of PHC at a system and service level. The development and content validation of the China CHS Logic Model and subsequent indicators provides a means for stronger accountability and a clearer sense of overall direction and purpose needed to renew and strengthen the PHC system in China. Moreover, this work will be useful in moving towards developing a PHC information system and performance measurement across districts in urban China, and guiding the pursuit of quality in PHC. PMID:21087516
Validation of Multiple Tools for Flat Plate Photovoltaic Modeling Against Measured Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, J.; Whitmore, J.; Blair, N.
2014-08-01
This report expands upon a previous work by the same authors, published in the 40th IEEE Photovoltaic Specialists conference. In this validation study, comprehensive analysis is performed on nine photovoltaic systems for which NREL could obtain detailed performance data and specifications, including three utility-scale systems and six commercial scale systems. Multiple photovoltaic performance modeling tools were used to model these nine systems, and the error of each tool was analyzed compared to quality-controlled measured performance data. This study shows that, excluding identified outliers, all tools achieve annual errors within +/-8% and hourly root mean squared errors less than 7% formore » all systems. It is further shown using SAM that module model and irradiance input choices can change the annual error with respect to measured data by as much as 6.6% for these nine systems, although all combinations examined still fall within an annual error range of +/-8.5%. Additionally, a seasonal variation in monthly error is shown for all tools. Finally, the effects of irradiance data uncertainty and the use of default loss assumptions on annual error are explored, and two approaches to reduce the error inherent in photovoltaic modeling are proposed.« less
Neurocognitive predictors of financial capacity in traumatic brain injury.
Martin, Roy C; Triebel, Kristen; Dreer, Laura E; Novack, Thomas A; Turner, Crystal; Marson, Daniel C
2012-01-01
To develop cognitive models of financial capacity (FC) in patients with traumatic brain injury (TBI). Longitudinal design. Inpatient brain injury rehabilitation unit. Twenty healthy controls, and 24 adults with moderate-to-severe TBI were assessed at baseline (30 days postinjury) and 6 months postinjury. The FC instrument (FCI) and a neuropsychological test battery. Univariate correlation and multiple regression procedures were employed to develop cognitive models of FCI performance in the TBI group, at baseline and 6-month time follow-up. Three cognitive predictor models of FC were developed. At baseline, measures of mental arithmetic/working memory and immediate verbal memory predicted baseline FCI performance (R = 0.72). At 6-month follow-up, measures of executive function and mental arithmetic/working memory predicted 6-month FCI performance (R = 0.79), and a third model found that these 2 measures at baseline predicted 6-month FCI performance (R = 0.71). Multiple cognitive functions are associated with initial impairment and partial recovery of FC in moderate-to-severe TBI patients. In particular, arithmetic, working memory, and executive function skills appear critical to recovery of FC in TBI. The study results represent an initial step toward developing a neurocognitive model of FC in patients with TBI.
Facility-level outcome performance measures for nursing homes.
Porell, F; Caro, F G
1998-12-01
Risk-adjusted nursing home performance scores were developed for four health outcomes and five quality indicators from resident-level longitudinal case-mix reimbursement data for Medicaid residents of more than 500 nursing homes in Massachusetts. Facility performance was measured by comparing actual resident outcomes with expected outcomes derived from quarterly predictions of resident-level econometric models over a 3-year period (1991-1994). Performance measures were tightly distributed among facilities in the state. The intercorrelations among the nine outcome performance measures were relatively low and not uniformly positive. Performance measures were not highly associated with various structural facility attributes. For most outcomes, longitudinal analyses revealed only modest correlations between a facility's performance score from one time period to the next. Relatively few facilities exhibited consistent superior or inferior performance over time. The findings have implications toward the practical use of facility outcome performance measures for quality assurance and reimbursement purposes in the near future.
Modeling Verdict Outcomes Using Social Network Measures: The Watergate and Caviar Network Cases
2016-01-01
Modelling criminal trial verdict outcomes using social network measures is an emerging research area in quantitative criminology. Few studies have yet analyzed which of these measures are the most important for verdict modelling or which data classification techniques perform best for this application. To compare the performance of different techniques in classifying members of a criminal network, this article applies three different machine learning classifiers–Logistic Regression, Naïve Bayes and Random Forest–with a range of social network measures and the necessary databases to model the verdicts in two real–world cases: the U.S. Watergate Conspiracy of the 1970’s and the now–defunct Canada–based international drug trafficking ring known as the Caviar Network. In both cases it was found that the Random Forest classifier did better than either Logistic Regression or Naïve Bayes, and its superior performance was statistically significant. This being so, Random Forest was used not only for classification but also to assess the importance of the measures. For the Watergate case, the most important one proved to be betweenness centrality while for the Caviar Network, it was the effective size of the network. These results are significant because they show that an approach combining machine learning with social network analysis not only can generate accurate classification models but also helps quantify the importance social network variables in modelling verdict outcomes. We conclude our analysis with a discussion and some suggestions for future work in verdict modelling using social network measures. PMID:26824351
Infrared measurement and composite tracking algorithm for air-breathing hypersonic vehicles
NASA Astrophysics Data System (ADS)
Zhang, Zhao; Gao, Changsheng; Jing, Wuxing
2018-03-01
Air-breathing hypersonic vehicles have capabilities of hypersonic speed and strong maneuvering, and thus pose a significant challenge to conventional tracking methodologies. To achieve desirable tracking performance for hypersonic targets, this paper investigates the problems related to measurement model design and tracking model mismatching. First, owing to the severe aerothermal effect of hypersonic motion, an infrared measurement model in near space is designed and analyzed based on target infrared radiation and an atmospheric model. Second, using information from infrared sensors, a composite tracking algorithm is proposed via a combination of the interactive multiple models (IMM) algorithm, fitting dynamics model, and strong tracking filter. During the procedure, the IMMs algorithm generates tracking data to establish a fitting dynamics model of the target. Then, the strong tracking unscented Kalman filter is employed to estimate the target states for suppressing the impact of target maneuvers. Simulations are performed to verify the feasibility of the presented composite tracking algorithm. The results demonstrate that the designed infrared measurement model effectively and continuously observes hypersonic vehicles, and the proposed composite tracking algorithm accurately and stably tracks these targets.
Analysis of Advanced Rotorcraft Configurations
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2000-01-01
Advanced rotorcraft configurations are being investigated with the objectives of identifying vehicles that are larger, quieter, and faster than current-generation rotorcraft. A large rotorcraft, carrying perhaps 150 passengers, could do much to alleviate airport capacity limitations, and a quiet rotorcraft is essential for community acceptance of the benefits of VTOL operations. A fast, long-range, long-endurance rotorcraft, notably the tilt-rotor configuration, will improve rotorcraft economics through productivity increases. A major part of the investigation of advanced rotorcraft configurations consists of conducting comprehensive analyses of vehicle behavior for the purpose of assessing vehicle potential and feasibility, as well as to establish the analytical models required to support the vehicle development. The analytical work of FY99 included applications to tilt-rotor aircraft. Tilt Rotor Aeroacoustic Model (TRAM) wind tunnel measurements are being compared with calculations performed by using the comprehensive analysis tool (Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics (CAMRAD 11)). The objective is to establish the wing and wake aerodynamic models that are required for tilt-rotor analysis and design. The TRAM test in the German-Dutch Wind Tunnel (DNW) produced extensive measurements. This is the first test to encompass air loads, performance, and structural load measurements on tilt rotors, as well as acoustic and flow visualization data. The correlation of measurements and calculations includes helicopter-mode operation (performance, air loads, and blade structural loads), hover (performance and air loads), and airplane-mode operation (performance).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosby, W. R.; Jensen, B. A.
2002-05-31
In recent years there has been a trend towards storage of Irradiated Nuclear Fuel (INF) in dry conditions rather than in underwater environments. At the same time, the Department of Energy (DOE) has begun encouraging custodians of INF to perform measurements on INF for which no recent fissile contents measurement data exists. INF, in the form of spent fuel from Experimental Breeder Reactor 2 (EBR-II), has been stored in close-fitting, dry underground storage locations at the Radioactive Scrap and Waste Facility (RSWF) at Argonne National Laboratory-West (ANL-W) for many years. In Fiscal Year 2000, funding was obtained from the DOEmore » Office of Safeguards and Security Technology Development Program to develop and prepare for deployment a Shielded Measurement System (SMS) to perform fissile content measurements on INF stored in the RSWF. The SMS is equipped to lift an INF item out of its storage location, perform scanning neutron coincidence and high-resolution gamma-ray measurements, and restore the item to its storage location. The neutron and gamma-ray measurement results are compared to predictions based on isotope depletion and Monte Carlo neutral-particle transport models to provide confirmation of the accuracy of the models and hence of the fissile material contents of the item as calculated by the same models. This paper describes the SMS and discusses the results of the first calibration and validation measurements performed with the SMS.« less
NASA Technical Reports Server (NTRS)
August, Richard; Kaza, Krishna Rao V.
1988-01-01
An investigation of the vibration, performance, flutter, and forced response of the large-scale propfan, SR7L, and its aeroelastic model, SR7A, has been performed by applying available structural and aeroelastic analytical codes and then correlating measured and calculated results. Finite element models of the blades were used to obtain modal frequencies, displacements, stresses and strains. These values were then used in conjunction with a 3-D, unsteady, lifting surface aerodynamic theory for the subsequent aeroelastic analyses of the blades. The agreement between measured and calculated frequencies and mode shapes for both models is very good. Calculated power coefficients correlate well with those measured for low advance ratios. Flutter results show that both propfans are stable at their respective design points. There is also good agreement between calculated and measured blade vibratory strains due to excitation resulting from yawed flow for the SR7A propfan. The similarity of structural and aeroelastic results show that the SR7A propfan simulates the SR7L characteristics.
Microburst vertical wind estimation from horizontal wind measurements
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.
1994-01-01
The vertical wind or downdraft component of a microburst-generated wind shear can significantly degrade airplane performance. Doppler radar and lidar are two sensor technologies being tested to provide flight crews with early warning of the presence of hazardous wind shear. An inherent limitation of Doppler-based sensors is the inability to measure velocities perpendicular to the line of sight, which results in an underestimate of the total wind shear hazard. One solution to the line-of-sight limitation is to use a vertical wind model to estimate the vertical component from the horizontal wind measurement. The objective of this study was to assess the ability of simple vertical wind models to improve the hazard prediction capability of an airborne Doppler sensor in a realistic microburst environment. Both simulation and flight test measurements were used to test the vertical wind models. The results indicate that in the altitude region of interest (at or below 300 m), the simple vertical wind models improved the hazard estimate. The radar simulation study showed that the magnitude of the performance improvement was altitude dependent. The altitude of maximum performance improvement occurred at about 300 m.
Swan, Suzanne C.; Gambone, Laura J.; Van Horn, M. Lee; Snow, David L.; Sullivan, Tami P.
2013-01-01
Theories and measures of women’s aggression in intimate relationships are only beginning to be developed. This study provides a first step in conceptualizing the measurement of women’s aggression by examining how well three widely used measures perform in assessing women’s perpetration of and victimization by aggression in their intimate relationships with men (i.e., the Conflict Tactics Scales 2; Straus, Hamby, & Warren, 2003, the Sexual Experiences Survey; Koss, Gidycz, & Wisniewski, 1987, and the Psychological Maltreatment of Women Inventory; Tolman, 1999). These constructs were examined in a diverse sample of 412 African American, Latina, and White women who had all recently used physical aggression against a male intimate partner. The factor structures and psychometric properties of perpetration and victimization models using these measures were compared. Results indicate that the factor structure of women’s perpetration differs from that of women’s victimization in theoretically meaningful ways. In the victimization model, all factors performed well in contributing to the measurement of the latent victimization construct. In contrast, the perpetration model performed well in assessing women’s physical and psychological aggression, but performed poorly in assessing women’s sexual aggression, coercive control, and jealous monitoring. Findings suggest that the power and control model of intimate partner violence may apply well to women’s victimization, but not as well to their perpetration. PMID:23012348
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non Uniform Memory Access (ccNUMA) architecture. We report measurement based performance of these parallelized benchmarks from four perspectives: efficacy of parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized version of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
Is stair climb power a clinically relevant measure of leg power impairments in at-risk older adults?
Bean, Jonathan F; Kiely, Dan K; LaRose, Sharon; Alian, Joda; Frontera, Walter R
2007-05-01
To test the clinical relevance of the stair climb power test (SCPT) as a measure of leg power impairments in mobility-limited older adults. Cross-sectional analysis of baseline data from participants within a randomized controlled trial. Rehabilitation research gym. Community-dwelling older adults (N=138; mean age, 75.4 y) with mobility limitations as defined by the Short Physical Performance Battery (SPPB). Not applicable. Leg power measures included the SCPT and double leg press power measured at 40% (DLP40) and 70% (DLP70) of the 1 repetition maximum. Mobility performance tests included the SPPB and its 3 components: gait speed, chair stand time, and standing balance. Stair climb power per kilogram (SCP/kg) had correlations of moderate strength (r=.47, r=.52) with DLP40/kg and DLP70/kg, respectively. All 3 leg power measures correlated with each of the mobility performance measures with the exception of DLP40/kg (r=.11, P=.27) and DLP70/kg (r=.11, P=.18) with standing balance. Magnitudes of association, as described by the Pearson correlation coefficient, did not differ substantively among the separate power measures as they related to SPPB performance overall. Separate adjusted multivariate models evaluating the relationship between leg power and SPPB performance were all statistically significant and described equivalent amounts of the total variance (R(2)) in SPPB performance (SCP/kg, R(2)=.30; DLP40, R(2)=.32; DLP70, R(2)=.31). Analyses of the components of the SPPB show that the SCPT had stronger associations than the other leg power impairment measures with models predicting chair stand (SCP/kg, R(2)=.25; DLP40, R(2)=.12; DLP70, R(2)=.13), whereas both types of leg press power testing had stronger associations with models predicting gait speed (SCP/kg, R(2)=.16; DLP40, R(2)=.34; DLP70, R(2)=.34). Stair climb power was the only power measure that was a significant component of models predicting standing balance (SCP/kg R(2)=.20). The SCPT is a clinically relevant measure of leg power impairments. It is associated with more complex modes of testing leg power impairments and is meaningfully associated with mobility performance, making it suitable for clinical settings in which impairment-mobility relationships are of interest.
Payment models to support population health management.
Huerta, Timothy R; Hefner, Jennifer L; McAlearney, Ann Scheck
2014-01-01
To survey the policy-driven financial controls currently being used to drive physician change in the care of populations. This paper offers a review of current health care payment models and discusses the impact of each on the potential success of PHM initiatives. We present the benefits of a multi-part model, combining visit-based fee-for-service reimbursement with a monthly "care coordination payment" and a performance-based payment system. A multi-part model removes volume-based incentives and promotes efficiency. However, it is predicated on a pay-for-performance framework that requires standardized measurement. Application of this model is limited due to the current lack of standardized measurement of quality goals that are linked to payment incentives. Financial models dictated by health system payers are inextricably linked to the organization and management of health care. There is a need for better measurements and realistic targets as part of a comprehensive system of measurement assessment that focuses on practice redesign, with the goal of standardizing measurement of the structure and process of redesign. Payment reform is a necessary component of an accurate measure of the associations between practice transformation and outcomes important to both patients and society.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, a SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added in the PSO to raise the ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has a better prediction precision and a less prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
Optical modeling of volcanic ash particles using ellipsoids
NASA Astrophysics Data System (ADS)
Merikallio, Sini; Muñoz, Olga; Sundström, Anu-Maija; Virtanen, Timo H.; Horttanainen, Matti; de Leeuw, Gerrit; Nousiainen, Timo
2015-05-01
The single-scattering properties of volcanic ash particles are modeled here by using ellipsoidal shapes. Ellipsoids are expected to improve the accuracy of the retrieval of aerosol properties using remote sensing techniques, which are currently often based on oversimplified assumptions of spherical ash particles. Measurements of the single-scattering optical properties of ash particles from several volcanoes across the globe, including previously unpublished measurements from the Eyjafjallajökull and Puyehue volcanoes, are used to assess the performance of the ellipsoidal particle models. These comparisons between the measurements and the ellipsoidal particle model include consideration of the whole scattering matrix, as well as sensitivity studies on the point of view of the Advanced Along Track Scanning Radiometer (AATSR) instrument. AATSR, which flew on the ENVISAT satellite, offers two viewing directions but no information on polarization, so usually only the phase function is relevant for interpreting its measurements. As expected, ensembles of ellipsoids are able to reproduce the observed scattering matrix more faithfully than spheres. Performance of ellipsoid ensembles depends on the distribution of particle shapes, which we tried to optimize. No single specific shape distribution could be found that would perform superiorly in all situations, but all of the best-fit ellipsoidal distributions, as well as the additionally tested equiprobable distribution, improved greatly over the performance of spheres. We conclude that an equiprobable shape distribution of ellipsoidal model particles is a relatively good, yet enticingly simple, approach for modeling volcanic ash single-scattering optical properties.
What’s in a game? A systems approach to enhancing performance analysis in football
2017-01-01
Purpose Performance analysis (PA) in football is considered to be an integral component of understanding the requirements for optimal performance. Despite vast amounts of research in this area key gaps remain, including what comprises PA in football, and methods to minimise research-practitioner gaps. The aim of this study was to develop a model of the football match system in order to better describe and understand the components of football performance. Such a model could inform the design of new PA methods. Method Eight elite level football Subject Method Experts (SME’s) participated in two workshops to develop a systems model of the football match system. The model was developed using a first-of-its-kind application of Cognitive Work Analysis (CWA) in football. CWA has been used in many other non-sporting domains to analyse and understand complex systems. Result Using CWA, a model of the football match ‘system’ was developed. The model enabled identification of several PA measures not currently utilised, including communication between team members, adaptability of teams, playing at the appropriate tempo, as well as attacking and defending related measures. Conclusion The results indicate that football is characteristic of a complex sociotechnical system, and revealed potential new and unique PA measures regarded as important by SME’s, yet not currently measured. Importantly, these results have identified a gap between the current PA research and the information that is meaningful to football coaches and practitioners. PMID:28212392
Constrained optimization via simulation models for new product innovation
NASA Astrophysics Data System (ADS)
Pujowidianto, Nugroho A.
2017-11-01
We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based. This review attempts to be problem-based as decision makers may have already decided on the problem formulation. We consider constrained optimization models as there are usually constraints on secondary performance measures as trade-off in new product development. It starts by laying out different possible methods and the reasons using constrained optimization via simulation models. It is then followed by the review of different simulation optimization approach to address constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
Schneider, Eric C.; Hussey, Peter S.; Schnyer, Christopher
2011-01-01
Abstract Insurers and purchasers of health care in the United States are on the verge of potentially revolutionary changes in the approaches they use to pay for health care. Recently, purchasers and insurers have been experimenting with payment approaches that include incentives to improve quality and reduce the use of unnecessary and costly services. The Patient Protection and Affordable Care Act of 2010 is likely to accelerate payment reform based on performance measurement. This article provides details of the results of a technical report that catalogues nearly 100 implemented and proposed payment reform programs, classifies each of these programs into one of 11 payment reform models, and identifies the performance measurement needs associated with each model. A synthesis of the results suggests near-term priorities for performance measure development and identifies pertinent challenges related to the use of performance measures as a basis for payment reform. The report is also intended to create a shared framework for analysis of future performance measurement opportunities. This report is intended for the many stakeholders tasked with outlining a national quality strategy in the wake of health care reform legislation. PMID:28083159
International Space Station Model Correlation Analysis
NASA Technical Reports Server (NTRS)
Laible, Michael R.; Fitzpatrick, Kristin; Hodge, Jennifer; Grygier, Michael
2018-01-01
This paper summarizes the on-orbit structural dynamic data and the related modal analysis, model validation and correlation performed for the International Space Station (ISS) configuration ISS Stage ULF7, 2015 Dedicated Thruster Firing (DTF). The objective of this analysis is to validate and correlate the analytical models used to calculate the ISS internal dynamic loads and compare the 2015 DTF with previous tests. During the ISS configurations under consideration, on-orbit dynamic measurements were collected using the three main ISS instrumentation systems; Internal Wireless Instrumentation System (IWIS), External Wireless Instrumentation System (EWIS) and the Structural Dynamic Measurement System (SDMS). The measurements were recorded during several nominal on-orbit DTF tests on August 18, 2015. Experimental modal analyses were performed on the measured data to extract modal parameters including frequency, damping, and mode shape information. Correlation and comparisons between test and analytical frequencies and mode shapes were performed to assess the accuracy of the analytical models for the configurations under consideration. These mode shapes were also compared to earlier tests. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. In particular, results of the first fundamental mode will be discussed, nonlinear results will be shown, and accelerometer placement will be assessed.
Quality of Protection Evaluation of Security Mechanisms
Ksiezopolski, Bogdan; Zurek, Tomasz; Mokkas, Michail
2014-01-01
Recent research indicates that during the design of teleinformatic system the tradeoff between the systems performance and the system protection should be made. The traditional approach assumes that the best way is to apply the strongest possible security measures. Unfortunately, the overestimation of security measures can lead to the unreasonable increase of system load. This is especially important in multimedia systems where the performance has critical character. In many cases determination of the required level of protection and adjustment of some security measures to these requirements increase system efficiency. Such an approach is achieved by means of the quality of protection models where the security measures are evaluated according to their influence on the system security. In the paper, we propose a model for QoP evaluation of security mechanisms. Owing to this model, one can quantify the influence of particular security mechanisms on ensuring security attributes. The methodology of our model preparation is described and based on it the case study analysis is presented. We support our method by the tool where the models can be defined and QoP evaluation can be performed. Finally, we have modelled TLS cryptographic protocol and presented the QoP security mechanisms evaluation for the selected versions of this protocol. PMID:25136683
Simulation and performance of brushless dc motor actuators
NASA Astrophysics Data System (ADS)
Gerba, A., Jr.
1985-12-01
The simulation model for a Brushless D.C. Motor and the associated commutation power conditioner transistor model are presented. The necessary conditions for maximum power output while operating at steady-state speed and sinusoidally distributed air-gap flux are developed. Comparison of simulated model with the measured performance of a typical motor are done both on time response waveforms and on average performance characteristics. These preliminary results indicate good agreement. Plans for model improvement and testing of a motor-driven positioning device for model evaluation are outlined.
What Can the Diffusion Model Tell Us About Prospective Memory?
Horn, Sebastian S.; Bayen, Ute J.; Smith, Rebekah E.
2011-01-01
Cognitive process models, such as Ratcliff’s (1978) diffusion model, are useful tools for examining cost- or interference effects in event-based prospective memory (PM). The diffusion model includes several parameters that provide insight into how and why ongoing-task performance may be affected by a PM task and is ideally suited to analyze performance because both reaction time and accuracy are taken into account. Separate analyses of these measures can easily yield misleading interpretations in cases of speed-accuracy tradeoffs. The diffusion model allows us to measure possible criterion shifts and is thus an important methodological improvement over standard analyses. Performance in an ongoing lexical decision task (Smith, 2003) was analyzed with the diffusion model. The results suggest that criterion shifts play an important role when a PM task is added, but do not fully explain the cost effect on RT. PMID:21443332
Comparison of measured and modeled BRDF of natural targets
NASA Astrophysics Data System (ADS)
Boucher, Yannick; Cosnefroy, Helene; Petit, Alain D.; Serrot, Gerard; Briottet, Xavier
1999-07-01
The Bidirectional Reflectance Distribution Function (BRDF) plays a major role to evaluate or simulate the signatures of natural and artificial targets in the solar spectrum. A goniometer covering a large spectral and directional domain has been recently developed by the ONERA/DOTA. It was designed to allow both laboratory and outside measurements. The spectral domain ranges from 0.40 to 0.95 micrometer, with a resolution of 3 nm. The geometrical domain ranges 0 - 60 degrees for the zenith angle of the source and the sensor, and 0 - 180 degrees for the relative azimuth between the source and the sensor. The maximum target size for nadir measurements is 22 cm. The spatial target irradiance non-uniformity has been evaluated and then used to correct the raw measurements. BRDF measurements are calibrated thanks to a spectralon reference panel. Some BRDF measurements performed on sand and short grass and are presented here. Eight bidirectional models among the most popular models found in the literature have been tested on these measured data set. A code fitting the model parameters to the measured BRDF data has been developed. The comparative evaluation of the model performances is carried out, versus different criteria (root mean square error, root mean square relative error, correlation diagram . . .). The robustness of the models is evaluated with respect to the number of BRDF measurements, noise and interpolation.
A knowledge based search tool for performance measures in health care systems.
Beyan, Oya D; Baykal, Nazife
2012-02-01
Performance measurement is vital for improving the health care systems. However, we are still far from having accepted performance measurement models. Researchers and developers are seeking comparable performance indicators. We developed an intelligent search tool to identify appropriate measures for specific requirements by matching diverse care settings. We reviewed the literature and analyzed 229 performance measurement studies published after 2000. These studies are evaluated with an original theoretical framework and stored in the database. A semantic network is designed for representing domain knowledge and supporting reasoning. We have applied knowledge based decision support techniques to cope with uncertainty problems. As a result we designed a tool which simplifies the performance indicator search process and provides most relevant indicators by employing knowledge based systems.
Scenarios and performance measures for advanced ISDN satellite design and experiments
NASA Technical Reports Server (NTRS)
Pepin, Gerard R.
1991-01-01
Described here are the contemplated input and expected output for the Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) and Full Service ISDN Satellite (FSIS) Models. The discrete event simulations of these models are presented with specific scenarios that stress ISDN satellite parameters. Performance measure criteria are presented for evaluating the advanced ISDN communication satellite designs of the NASA Satellite Communications Research (SCAR) Program.
Specifying and Refining a Measurement Model for a Computer-Based Interactive Assessment
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
2004-01-01
The challenges of modeling students' performance in computer-based interactive assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance. This article describes a Bayesian approach to modeling and estimating cognitive models…
Measuring Information Security Performance with 10 by 10 Model for Holistic State Evaluation
2016-01-01
Organizations should measure their information security performance if they wish to take the right decisions and develop it in line with their security needs. Since the measurement of information security is generally underdeveloped in practice and many organizations find the existing recommendations too complex, the paper presents a solution in the form of a 10 by 10 information security performance measurement model. The model—ISP 10×10M is composed of ten critical success factors, 100 key performance indicators and 6 performance levels. Its content was devised on the basis of findings presented in the current research studies and standards, while its structure results from an empirical research conducted among information security professionals from Slovenia. Results of the study show that a high level of information security performance is mostly dependent on measures aimed at managing information risks, employees and information sources, while formal and environmental factors have a lesser impact. Experts believe that information security should evolve systematically, where it’s recommended that beginning steps include technical, logical and physical security controls, while advanced activities should relate predominantly strategic management activities. By applying the proposed model, organizations are able to determine the actual level of information security performance based on the weighted indexing technique. In this manner they identify the measures they ought to develop in order to improve the current situation. The ISP 10×10M is a useful tool for conducting internal system evaluations and decision-making. It may also be applied to a larger sample of organizations in order to determine the general state-of-play for research purposes. PMID:27655001
NASA Astrophysics Data System (ADS)
Porto, P.; Cogliandro, V.; Callegari, G.
2018-01-01
In this paper, long-term sediment yield data, collected in a small (1.38 ha) Calabrian catchment (W2), reafforested with eucalyptus trees (Eucalyptus occidentalis Engl.) are used to validate the performance of the SEdiment Delivery Distributed Model (SEDD) in areas with high erosion rates. At first step, the SEDD model was calibrated using field data collected in previous field campaigns undertaken during the period 1978-1994. This first phase allowed the model calibration parameter β to be calculated using direct measurements of rainfall, runoff, and sediment output. The model was then validated in its calibrated form for an independent period (2006-2016) for which new measurements of rainfall, runoff and sediment output are also available. The analysis, carried out at event and annual scale showed good agreement between measured and predicted values of sediment yield and suggested that the SEDD model can be seen as an appropriate means of evaluating erosion risk associated with manmade plantations in marginal areas. Further work is however required to test the performance of the SEDD model as a prediction tool in different geomorphic contexts.
Ice Accretions and Icing Effects for Modern Airfoils
NASA Technical Reports Server (NTRS)
Addy, Harold E., Jr.
2000-01-01
Icing tests were conducted to document ice shapes formed on three different two-dimensional airfoils and to study the effects of the accreted ice on aerodynamic performance. The models tested were representative of airfoil designs in current use for each of the commercial transport, business jet, and general aviation categories of aircraft. The models were subjected to a range of icing conditions in an icing wind tunnel. The conditions were selected primarily from the Federal Aviation Administration's Federal Aviation Regulations 25 Appendix C atmospheric icing conditions. A few large droplet icing conditions were included. To verify the aerodynamic performance measurements, molds were made of selected ice shapes formed in the icing tunnel. Castings of the ice were made from the molds and placed on a model in a dry, low-turbulence wind tunnel where precision aerodynamic performance measurements were made. Documentation of all the ice shapes and the aerodynamic performance measurements made during the icing tunnel tests is included in this report. Results from the dry, low-turbulence wind tunnel tests are also presented.
Theory of constraints for publicly funded health systems.
Sadat, Somayeh; Carter, Michael W; Golden, Brian
2013-03-01
Originally developed in the context of publicly traded for-profit companies, theory of constraints (TOC) improves system performance through leveraging the constraint(s). While the theory seems to be a natural fit for resource-constrained publicly funded health systems, there is a lack of literature addressing the modifications required to adopt TOC and define the goal and performance measures. This paper develops a system dynamics representation of the classical TOC's system-wide goal and performance measures for publicly traded for-profit companies, which forms the basis for developing a similar model for publicly funded health systems. The model is then expanded to include some of the factors that affect system performance, providing a framework to apply TOC's process of ongoing improvement in publicly funded health systems. Future research is required to more accurately define the factors affecting system performance and populate the model with evidence-based estimates for various parameters in order to use the model to guide TOC's process of ongoing improvement.
Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Sheahan, Malachi G; Shames, Murray L; Lee, Jason T; Bismuth, Jean
2015-12-01
Fundamental skills testing is now required for certification in general surgery. No model for assessing fundamental endovascular skills exists. Our objective was to develop a model that tests the fundamental endovascular skills and differentiates competent from noncompetent performance. The Fundamentals of Endovascular Surgery model was developed in silicon and virtual-reality versions. Twenty individuals (with a range of experience) performed four tasks on each model in three separate sessions. Tasks on the silicon model were performed under fluoroscopic guidance, and electromagnetic tracking captured motion metrics for catheter tip position. Image processing captured tool tip position and motion on the virtual model. Performance was evaluated using a global rating scale, blinded video assessment of error metrics, and catheter tip movement and position. Motion analysis was based on derivations of speed and position that define proficiency of movement (spectral arc length, duration of submovement, and number of submovements). Performance was significantly different between competent and noncompetent interventionalists for the three performance measures of motion metrics, error metrics, and global rating scale. The mean error metric score was 6.83 for noncompetent individuals and 2.51 for the competent group (P < .0001). Median global rating scores were 2.25 for the noncompetent group and 4.75 for the competent users (P < .0001). The Fundamentals of Endovascular Surgery model successfully differentiates competent and noncompetent performance of fundamental endovascular skills based on a series of objective performance measures. This model could serve as a platform for skills testing for all trainees. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Magnetic Measurements of the First Nb 3Sn Model Quadrupole (MQXFS) for the High-Luminosity LHC
DiMarco, J.; Ambrosio, G.; Chlachidze, G.; ...
2016-12-12
The US LHC Accelerator Research Program (LARP) and CERN are developing high-gradient Nb 3Sn magnets for the High Luminosity LHC interaction regions. Magnetic measurements of the first 1.5 m long, 150 mm aperture model quadrupole, MQXFS1, were performed during magnet assembly at LBNL, as well as during cryogenic testing at Fermilab’s Vertical Magnet Test Facility. This paper reports on the results of these magnetic characterization measurements, as well as on the performance of new probes developed for the tests.
NASA Astrophysics Data System (ADS)
Schmidt, J. B.
1985-09-01
This thesis investigates ways of improving the real-time performance of the Stockpoint Logistics Integrated Communication Environment (SPLICE). Performance evaluation through continuous monitoring activities and performance studies are the principle vehicles discussed. The method for implementing this performance evaluation process is the measurement of predefined performance indexes. Performance indexes for SPLICE are offered that would measure these areas. Existing SPLICE capability to carry out performance evaluation is explored, and recommendations are made to enhance that capability.
ERIC Educational Resources Information Center
Poole, Dennis L.; Nelson, Joan; Carnahan, Sharon; Chepenik, Nancy G.; Tubiak, Christine
2000-01-01
Developed and field tested the Performance Accountability Quality Scale (PAQS) on 191 program performance measurement systems developed by nonprofit agencies in central Florida. Preliminary findings indicate that the PAQS provides a structure for obtaining expert opinions based on a theory-driven model about the quality of proposed measurement…
Tian, Mi; Deng, Zhu; Meng, Zhaokun; Li, Rui; Zhang, Zhiyi; Qi, Wenhui; Wang, Rui; Yin, Tingting; Ji, Menghui
2018-01-01
Children's block building performances are used as indicators of other abilities in multiple domains. In the current study, we examined individual differences, types of model and social settings as influences on children's block building performance. Chinese preschoolers ( N = 180) participated in a block building activity in a natural setting, and performance was assessed with multiple measures in order to identify a range of specific skills. Using scores generated across these measures, three dependent variables were analyzed: block building skills, structural balance and structural features. An overall MANOVA showed that there were significant main effects of gender and grade level across most measures. Types of model showed no significant effect in children's block building. There was a significant main effect of social settings on structural features, with the best performance in the 5-member group, followed by individual and then the 10-member block building. These findings suggest that boys performed better than girls in block building activity. Block building performance increased significantly from 1st to 2nd year of preschool, but not from second to third. The preschoolers created more representational constructions when presented with a model made of wooden rather than with a picture. There was partial evidence that children performed better when working with peers in a small group than when working alone or working in a large group. It is suggested that future study should examine other modalities rather than the visual one, diversify the samples and adopt a longitudinal investigation.
Jürgens, Tim; Brand, Thomas
2009-11-01
This study compares the phoneme recognition performance in speech-shaped noise of a microscopic model for speech recognition with the performance of normal-hearing listeners. "Microscopic" is defined in terms of this model twofold. First, the speech recognition rate is predicted on a phoneme-by-phoneme basis. Second, microscopic modeling means that the signal waveforms to be recognized are processed by mimicking elementary parts of human's auditory processing. The model is based on an approach by Holube and Kollmeier [J. Acoust. Soc. Am. 100, 1703-1716 (1996)] and consists of a psychoacoustically and physiologically motivated preprocessing and a simple dynamic-time-warp speech recognizer. The model is evaluated while presenting nonsense speech in a closed-set paradigm. Averaged phoneme recognition rates, specific phoneme recognition rates, and phoneme confusions are analyzed. The influence of different perceptual distance measures and of the model's a-priori knowledge is investigated. The results show that human performance can be predicted by this model using an optimal detector, i.e., identical speech waveforms for both training of the recognizer and testing. The best model performance is yielded by distance measures which focus mainly on small perceptual distances and neglect outliers.
Patterson, Olga V; Forbush, Tyler B; Saini, Sameer D; Moser, Stephanie E; DuVall, Scott L
2015-01-01
In order to measure the level of utilization of colonoscopy procedures, identifying the primary indication for the procedure is required. Colonoscopies may be utilized not only for screening, but also for diagnostic or therapeutic purposes. To determine whether a colonoscopy was performed for screening, we created a natural language processing system to identify colonoscopy reports in the electronic medical record system and extract indications for the procedure. A rule-based model and three machine-learning models were created using 2,000 manually annotated clinical notes of patients cared for in the Department of Veterans Affairs. Performance of the models was measured and compared. Analysis of the models on a test set of 1,000 documents indicates that the rule-based system performance stays fairly constant as evaluated on training and testing sets. However, the machine learning model without feature selection showed significant decrease in performance. Therefore, rule-based classification system appears to be more robust than a machine-learning system in cases when no feature selection is performed.
Guiding Principles and Checklist for Population-Based Quality Metrics
Brunelli, Steven M.; Maddux, Franklin W.; Parker, Thomas F.; Johnson, Douglas; Nissenson, Allen R.; Collins, Allan; Lacson, Eduardo
2014-01-01
The Centers for Medicare and Medicaid Services oversees the ESRD Quality Incentive Program to ensure that the highest quality of health care is provided by outpatient dialysis facilities that treat patients with ESRD. To that end, Centers for Medicare and Medicaid Services uses clinical performance measures to evaluate quality of care under a pay-for-performance or value-based purchasing model. Now more than ever, the ESRD therapeutic area serves as the vanguard of health care delivery. By translating medical evidence into clinical performance measures, the ESRD Prospective Payment System became the first disease-specific sector using the pay-for-performance model. A major challenge for the creation and implementation of clinical performance measures is the adjustments that are necessary to transition from taking care of individual patients to managing the care of patient populations. The National Quality Forum and others have developed effective and appropriate population-based clinical performance measures quality metrics that can be aggregated at the physician, hospital, dialysis facility, nursing home, or surgery center level. Clinical performance measures considered for endorsement by the National Quality Forum are evaluated using five key criteria: evidence, performance gap, and priority (impact); reliability; validity; feasibility; and usability and use. We have developed a checklist of special considerations for clinical performance measure development according to these National Quality Forum criteria. Although the checklist is focused on ESRD, it could also have broad application to chronic disease states, where health care delivery organizations seek to enhance quality, safety, and efficiency of their services. Clinical performance measures are likely to become the norm for tracking performance for health care insurers. Thus, it is critical that the methodologies used to develop such metrics serve the payer and the provider and most importantly, reflect what represents the best care to improve patient outcomes. PMID:24558050
Radiation Measurements Performed with Active Detectors Relevant for Human Space Exploration
Narici, Livio; Berger, Thomas; Matthiä, Daniel; Reitz, Günther
2015-01-01
A reliable radiation risk assessment in space is a mandatory step for the development of countermeasures and long-duration mission planning in human spaceflight. Research in radiobiology provides information about possible risks linked to radiation. In addition, for a meaningful risk evaluation, the radiation exposure has to be assessed to a sufficient level of accuracy. Consequently, both the radiation models predicting the risks and the measurements used to validate such models must have an equivalent precision. Corresponding measurements can be performed both with passive and active devices. The former is easier to handle, cheaper, lighter, and smaller but they measure neither the time dependence of the radiation environment nor some of the details useful for a comprehensive radiation risk assessment. Active detectors provide most of these details and have been extensively used in the International Space Station. To easily access such an amount of data, a single point access is becoming essential. This review presents an ongoing work on the development of a tool that allows obtaining information about all relevant measurements performed with active detectors providing reliable inputs for radiation model validation. PMID:26697408
Radiation Measurements Performed with Active Detectors Relevant for Human Space Exploration.
Narici, Livio; Berger, Thomas; Matthiä, Daniel; Reitz, Günther
2015-01-01
A reliable radiation risk assessment in space is a mandatory step for the development of countermeasures and long-duration mission planning in human spaceflight. Research in radiobiology provides information about possible risks linked to radiation. In addition, for a meaningful risk evaluation, the radiation exposure has to be assessed to a sufficient level of accuracy. Consequently, both the radiation models predicting the risks and the measurements used to validate such models must have an equivalent precision. Corresponding measurements can be performed both with passive and active devices. The former is easier to handle, cheaper, lighter, and smaller but they measure neither the time dependence of the radiation environment nor some of the details useful for a comprehensive radiation risk assessment. Active detectors provide most of these details and have been extensively used in the International Space Station. To easily access such an amount of data, a single point access is becoming essential. This review presents an ongoing work on the development of a tool that allows obtaining information about all relevant measurements performed with active detectors providing reliable inputs for radiation model validation.
Does adding clinical data to administrative data improve agreement among hospital quality measures?
Hanchate, Amresh D; Stolzmann, Kelly L; Rosen, Amy K; Fink, Aaron S; Shwartz, Michael; Ash, Arlene S; Abdulkerim, Hassen; Pugh, Mary Jo V; Shokeen, Priti; Borzecki, Ann
2017-09-01
Hospital performance measures based on patient mortality and readmission have indicated modest rates of agreement. We examined if combining clinical data on laboratory tests and vital signs with administrative data leads to improved agreement with each other, and with other measures of hospital performance in the nation's largest integrated health care system. We used patient-level administrative and clinical data, and hospital-level data on quality indicators, for 2007-2010 from the Veterans Health Administration (VA). For patients admitted for acute myocardial infarction (AMI), heart failure (HF) and pneumonia we examined changes in hospital performance on 30-d mortality and 30-d readmission rates as a result of adding clinical data to administrative data. We evaluated whether this enhancement yielded improved measures of hospital quality, based on concordance with other hospital quality indicators. For 30-d mortality, data enhancement improved model performance, and significantly changed hospital performance profiles; for 30-d readmission, the impact was modest. Concordance between enhanced measures of both outcomes, and with other hospital quality measures - including Joint Commission process measures, VA Surgical Quality Improvement Program (VASQIP) mortality and morbidity, and case volume - remained poor. Adding laboratory tests and vital signs to measure hospital performance on mortality and readmission did not improve the poor rates of agreement across hospital quality indicators in the VA. Efforts to improve risk adjustment models should continue; however, evidence of validation should precede their use as reliable measures of quality. Published by Elsevier Inc.
Zhou, Xiangrong; Xu, Rui; Hara, Takeshi; Hirano, Yasushi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Kido, Shoji; Fujita, Hiroshi
2014-07-01
The shapes of the inner organs are important information for medical image analysis. Statistical shape modeling provides a way of quantifying and measuring shape variations of the inner organs in different patients. In this study, we developed a universal scheme that can be used for building the statistical shape models for different inner organs efficiently. This scheme combines the traditional point distribution modeling with a group-wise optimization method based on a measure called minimum description length to provide a practical means for 3D organ shape modeling. In experiments, the proposed scheme was applied to the building of five statistical shape models for hearts, livers, spleens, and right and left kidneys by use of 50 cases of 3D torso CT images. The performance of these models was evaluated by three measures: model compactness, model generalization, and model specificity. The experimental results showed that the constructed shape models have good "compactness" and satisfied the "generalization" performance for different organ shape representations; however, the "specificity" of these models should be improved in the future.
NASA Astrophysics Data System (ADS)
Park, K.-R.; Kim, K.-h.; Kwak, S.; Svensson, J.; Lee, J.; Ghim, Y.-c.
2017-11-01
Feasibility study of direct spectra measurements of Thomson scattered photons for fusion-grade plasmas is performed based on a forward model of the KSTAR Thomson scattering system. Expected spectra in the forward model are calculated based on Selden function including the relativistic polarization correction. Noise in the signal is modeled with photon noise and Gaussian electrical noise. Electron temperature and density are inferred using Bayesian probability theory. Based on bias error, full width at half maximum and entropy of posterior distributions, spectral measurements are found to be feasible. Comparisons between spectrometer-based and polychromator-based Thomson scattering systems are performed with varying quantum efficiency and electrical noise levels.
Measurement system and model for simultaneously measuring 6DOF geometric errors.
Zhao, Yuqiong; Zhang, Bin; Feng, Qibo
2017-09-04
A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of two measurement beam, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.
Risk assessment model for development of advanced age-related macular degeneration.
Klein, Michael L; Francis, Peter J; Ferris, Frederick L; Hamon, Sara C; Clemons, Traci E
2011-12-01
To design a risk assessment model for development of advanced age-related macular degeneration (AMD) incorporating phenotypic, demographic, environmental, and genetic risk factors. We evaluated longitudinal data from 2846 participants in the Age-Related Eye Disease Study. At baseline, these individuals had all levels of AMD, ranging from none to unilateral advanced AMD (neovascular or geographic atrophy). Follow-up averaged 9.3 years. We performed a Cox proportional hazards analysis with demographic, environmental, phenotypic, and genetic covariates and constructed a risk assessment model for development of advanced AMD. Performance of the model was evaluated using the C statistic and the Brier score and externally validated in participants in the Complications of Age-Related Macular Degeneration Prevention Trial. The final model included the following independent variables: age, smoking history, family history of AMD (first-degree member), phenotype based on a modified Age-Related Eye Disease Study simple scale score, and genetic variants CFH Y402H and ARMS2 A69S. The model did well on performance measures, with very good discrimination (C statistic = 0.872) and excellent calibration and overall performance (Brier score at 5 years = 0.08). Successful external validation was performed, and a risk assessment tool was designed for use with or without the genetic component. We constructed a risk assessment model for development of advanced AMD. The model performed well on measures of discrimination, calibration, and overall performance and was successfully externally validated. This risk assessment tool is available for online use.
Using the weighted area under the net benefit curve for decision curve analysis.
Talluri, Rajesh; Shete, Sanjay
2016-07-18
Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given or range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model as there is no readily available summary measure for evaluating the predictive performance. The key deterrent for using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need of additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches, the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power compared to the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a clinical scenario.
ERIC Educational Resources Information Center
Day, Jeanne D.; And Others
1997-01-01
Relationships between pretraining skills, learning, and posttest performance were studied in spatial and verbal tasks for 84 preschool children. The measurement model that fit the data best maintained separate verbal and spatial domains. The best structural model included paths from pretest and learning assessments to posttest performance within…
Performance-Based Service Quality Model: An Empirical Study on Japanese Universities
ERIC Educational Resources Information Center
Sultan, Parves; Wong, Ho
2010-01-01
Purpose: This paper aims to develop and empirically test the performance-based higher education service quality model. Design/methodology/approach: The study develops 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using the Cronbach's alpha.…
Effects of Prompting Multiple Solutions for Modelling Problems on Students' Performance
ERIC Educational Resources Information Center
Schukajlow, Stanislaw; Krug, André; Rakoczy, Katrin
2015-01-01
Prompting students to construct multiple solutions for modelling problems with vague conditions has been found to be an effective way to improve students' performance on interest-oriented measures. In the current study, we investigated the influence of this teaching element on students' performance. To assess the impact of prompting multiple…
2010-01-01
Background The measurement of healthcare provider performance is becoming more widespread. Physicians have been guarded about performance measurement, in part because the methodology for comparative measurement of care quality is underdeveloped. Comprehensive quality improvement will require comprehensive measurement, implying the aggregation of multiple quality metrics into composite indicators. Objective To present a conceptual framework to develop comprehensive, robust, and transparent composite indicators of pediatric care quality, and to highlight aspects specific to quality measurement in children. Methods We reviewed the scientific literature on composite indicator development, health systems, and quality measurement in the pediatric healthcare setting. Frameworks were selected for explicitness and applicability to a hospital-based measurement system. Results We synthesized various frameworks into a comprehensive model for the development of composite indicators of quality of care. Among its key premises, the model proposes identifying structural, process, and outcome metrics for each of the Institute of Medicine's six domains of quality (safety, effectiveness, efficiency, patient-centeredness, timeliness, and equity) and presents a step-by-step framework for embedding the quality of care measurement model into composite indicator development. Conclusions The framework presented offers researchers an explicit path to composite indicator development. Without a scientifically robust and comprehensive approach to measurement of the quality of healthcare, performance measurement will ultimately fail to achieve its quality improvement goals. PMID:20181129
75 FR 42760 - Statement of Organization, Functions, and Delegations of Authority
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-22
... accounting reports and invoices, and monitoring all spending. The Team develops, defends and executes the... results; performance measurement; research and evaluation methodologies; demonstration testing and model... ACF programs; strategic planning; performance measurement; program and policy evaluation; research and...
Scattering Properties of Large Irregular Cosmic Dust Particles at Visible Wavelengths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escobar-Cerezo, J.; Palmer, C.; Muñoz, O.
The effect of internal inhomogeneities and surface roughness on the scattering behavior of large cosmic dust particles is studied by comparing model simulations with laboratory measurements. The present work shows the results of an attempt to model a dust sample measured in the laboratory with simulations performed by a ray-optics model code. We consider this dust sample as a good analogue for interplanetary and interstellar dust as it shares its refractive index with known materials in these media. Several sensitivity tests have been performed for both structural cases (internal inclusions and surface roughness). Three different samples have been selected tomore » mimic inclusion/coating inhomogeneities: two measured scattering matrices of hematite and white clay, and a simulated matrix for water ice. These three matrices are selected to cover a wide range of imaginary refractive indices. The selection of these materials also seeks to study astrophysical environments of interest such as Mars, where hematite and clays have been detected, and comets. Based on the results of the sensitivity tests shown in this work, we perform calculations for a size distribution of a silicate-type host particle model with inclusions and surface roughness to reproduce the experimental measurements of a dust sample. The model fits the measurements quite well, proving that surface roughness and internal structure play a role in the scattering pattern of irregular cosmic dust particles.« less
Ashcroft, Rachelle
2014-01-01
Emphasis on quantity as the main performance measure may be posing challenges for Family Health Team (FHT) practices and organizational structures. This study asked: What healthcare practices and organizational structures are encouraged by the FHT model? An exploratory qualitative design guided by discourse analysis was used. This paper presents findings from in-depth semi-structured interviews conducted with seven policy informants and 29 FHT leaders. Participants report that performance measures value quantity and are not inclusive of the broad scope of attributes that comprise primary healthcare. Performance measures do not appear to be accurately capturing the demand for healthcare services, or the actual amount of services being provided by FHTs. RESULTS suggest that unintended consequences of performance measures may be posing challenges to access and health outcomes. It is recommended that performance measures be developed and used to measure, support and encourage FHTs to achieve the goals of PHC. Copyright © 2014 Longwoods Publishing.
Doctors or technicians: assessing quality of medical education
Hasan, Tayyab
2010-01-01
Medical education institutions usually adapt industrial quality management models that measure the quality of the process of a program but not the quality of the product. The purpose of this paper is to analyze the impact of industrial quality management models on medical education and students, and to highlight the importance of introducing a proper educational quality management model. Industrial quality management models can measure the training component in terms of competencies, but they lack the educational component measurement. These models use performance indicators to assess their process improvement efforts. Researchers suggest that the performance indicators used in educational institutions may only measure their fiscal efficiency without measuring the quality of the educational experience of the students. In most of the institutions, where industrial models are used for quality assurance, students are considered as customers and are provided with the maximum services and facilities possible. Institutions are required to fulfill a list of recommendations from the quality control agencies in order to enhance student satisfaction and to guarantee standard services. Quality of medical education should be assessed by measuring the impact of the educational program and quality improvement procedures in terms of knowledge base development, behavioral change, and patient care. Industrial quality models may focus on academic support services and processes, but educational quality models should be introduced in parallel to focus on educational standards and products. PMID:23745059
Liu, Yan; Cai, Wensheng; Shao, Xueguang
2016-12-05
Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. For most of calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, named as master and slave instrument, respectively. In this work, a method named as linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for the samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. The fact makes the coefficients of the linear models constructed by the spectra measured on different instruments are similar in profile. Therefore, by using the constrained optimization method, the coefficients of the master model can be transferred into that of the slave model with a few spectra measured on slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both the datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical uses. Copyright © 2016 Elsevier B.V. All rights reserved.
Doctors or technicians: assessing quality of medical education.
Hasan, Tayyab
2010-01-01
Medical education institutions usually adapt industrial quality management models that measure the quality of the process of a program but not the quality of the product. The purpose of this paper is to analyze the impact of industrial quality management models on medical education and students, and to highlight the importance of introducing a proper educational quality management model. Industrial quality management models can measure the training component in terms of competencies, but they lack the educational component measurement. These models use performance indicators to assess their process improvement efforts. Researchers suggest that the performance indicators used in educational institutions may only measure their fiscal efficiency without measuring the quality of the educational experience of the students. In most of the institutions, where industrial models are used for quality assurance, students are considered as customers and are provided with the maximum services and facilities possible. Institutions are required to fulfill a list of recommendations from the quality control agencies in order to enhance student satisfaction and to guarantee standard services. Quality of medical education should be assessed by measuring the impact of the educational program and quality improvement procedures in terms of knowledge base development, behavioral change, and patient care. Industrial quality models may focus on academic support services and processes, but educational quality models should be introduced in parallel to focus on educational standards and products.
Multiresolution modeling with a JMASS-JWARS HLA Federation
NASA Astrophysics Data System (ADS)
Prince, John D.; Painter, Ron D.; Pendell, Brian; Richert, Walt; Wolcott, Christopher
2002-07-01
CACI, Inc.-Federal has built, tested, and demonstrated the use of a JMASS-JWARS HLA Federation that supports multi- resolution modeling of a weapon system and its subsystems in a JMASS engineering and engagement model environment, while providing a realistic JWARS theater campaign-level synthetic battle space and operational context to assess the weapon system's value added and deployment/employment supportability in a multi-day, combined force-on-force scenario. Traditionally, acquisition analyses require a hierarchical suite of simulation models to address engineering, engagement, mission and theater/campaign measures of performance, measures of effectiveness and measures of merit. Configuring and running this suite of simulations and transferring the appropriate data between each model is both time consuming and error prone. The ideal solution would be a single simulation with the requisite resolution and fidelity to perform all four levels of acquisition analysis. However, current computer hardware technologies cannot deliver the runtime performance necessary to support the resulting extremely large simulation. One viable alternative is to integrate the current hierarchical suite of simulation models using the DoD's High Level Architecture in order to support multi- resolution modeling. An HLA integration eliminates the extremely large model problem, provides a well-defined and manageable mixed resolution simulation and minimizes VV&A issues.
Capacity utilization study for aviation security cargo inspection queuing system
NASA Astrophysics Data System (ADS)
Allgood, Glenn O.; Olama, Mohammed M.; Lake, Joe E.; Brumback, Daryl
2010-04-01
In this paper, we conduct performance evaluation study for an aviation security cargo inspection queuing system for material flow and accountability. The queuing model employed in our study is based on discrete-event simulation and processes various types of cargo simultaneously. Onsite measurements are collected in an airport facility to validate the queuing model. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered such as system capacity, residual capacity, throughput, capacity utilization, subscribed capacity utilization, resources capacity utilization, subscribed resources capacity utilization, and number of cargo pieces (or pallets) in the different queues. These metrics are performance indicators of the system's ability to service current needs and response capacity to additional requests. We studied and analyzed different scenarios by changing various model parameters such as number of pieces per pallet, number of TSA inspectors and ATS personnel, number of forklifts, number of explosives trace detection (ETD) and explosives detection system (EDS) inspection machines, inspection modality distribution, alarm rate, and cargo closeout time. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures should reduce the overall cost and shipping delays associated with new inspection requirements.
Capacity Utilization Study for Aviation Security Cargo Inspection Queuing System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgood, Glenn O; Olama, Mohammed M; Lake, Joe E
In this paper, we conduct performance evaluation study for an aviation security cargo inspection queuing system for material flow and accountability. The queuing model employed in our study is based on discrete-event simulation and processes various types of cargo simultaneously. Onsite measurements are collected in an airport facility to validate the queuing model. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered such as system capacity, residual capacity, throughput, capacity utilization, subscribed capacity utilization, resources capacity utilization, subscribed resources capacity utilization, and number ofmore » cargo pieces (or pallets) in the different queues. These metrics are performance indicators of the system s ability to service current needs and response capacity to additional requests. We studied and analyzed different scenarios by changing various model parameters such as number of pieces per pallet, number of TSA inspectors and ATS personnel, number of forklifts, number of explosives trace detection (ETD) and explosives detection system (EDS) inspection machines, inspection modality distribution, alarm rate, and cargo closeout time. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures should reduce the overall cost and shipping delays associated with new inspection requirements.« less
Solar power plant performance evaluation: simulation and experimental validation
NASA Astrophysics Data System (ADS)
Natsheh, E. M.; Albarbar, A.
2012-05-01
In this work the performance of solar power plant is evaluated based on a developed model comprise photovoltaic array, battery storage, controller and converters. The model is implemented using MATLAB/SIMULINK software package. Perturb and observe (P&O) algorithm is used for maximizing the generated power based on maximum power point tracker (MPPT) implementation. The outcome of the developed model are validated and supported by a case study carried out using operational 28.8kW grid-connected solar power plant located in central Manchester. Measurements were taken over 21 month's period; using hourly average irradiance and cell temperature. It was found that system degradation could be clearly monitored by determining the residual (the difference) between the output power predicted by the model and the actual measured power parameters. It was found that the residual exceeded the healthy threshold, 1.7kW, due to heavy snow in Manchester last winter. More important, the developed performance evaluation technique could be adopted to detect any other reasons that may degrade the performance of the P V panels such as shading and dirt. Repeatability and reliability of the developed system performance were validated during this period. Good agreement was achieved between the theoretical simulation and the real time measurement taken the online grid connected solar power plant.
From SED HI concept to Pleiades FM detection unit measurements
NASA Astrophysics Data System (ADS)
Renard, Christophe; Dantes, Didier; Neveu, Claude; Lamard, Jean-Luc; Oudinot, Matthieu; Materne, Alex
2017-11-01
The first flight model PLEIADES high resolution instrument under Thales Alenia Space development, on behalf of CNES, is currently in integration and test phases. Based on the SED HI detection unit concept, PLEIADES detection unit has been fully qualified before the integration at telescope level. The main radiometric performances have been measured on engineering and first flight models. This paper presents the results of performances obtained on the both models. After a recall of the SED HI concept, the design and performances of the main elements (charge coupled detectors, focal plane and video processing unit), detection unit radiometric performances are presented and compared to the instrument specifications for the panchromatic and multispectral bands. The performances treated are the following: - video signal characteristics, - dark signal level and dark signal non uniformity, - photo-response non uniformity, - non linearity and differential non linearity, - temporal and spatial noises regarding system definitions PLEIADES detection unit allows tuning of different functions: reference and sampling time positioning, anti-blooming level, gain value, TDI line number. These parameters are presented with their associated criteria of optimisation to achieve system radiometric performances and their sensitivities on radiometric performances. All the results of the measurements performed by Thales Alenia Space on the PLEIADES detection units demonstrate the high potential of the SED HI concept for Earth high resolution observation system allowing optimised performances at instrument and satellite levels.
Specifying and Refining a Measurement Model for a Simulation-Based Assessment. CSE Report 619.
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
2004-01-01
The challenges of modeling students' performance in simulation-based assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance in a complex assessment. This paper describes a Bayesian approach to modeling and estimating…
NASA Astrophysics Data System (ADS)
Gosman, Nathaniel
For energy utilities faced with expanded jurisdictional energy efficiency requirements and pursuing demand-side management (DSM) incentive programs in the large industrial sector, performance incentive programs can be an effective means to maximize the reliability of planned energy savings. Performance incentive programs balance the objectives of high participation rates with persistent energy savings by: (1) providing financial incentives and resources to minimize constraints to investment in energy efficiency, and (2) requiring that incentive payments be dependent on measured energy savings over time. As BC Hydro increases its DSM initiatives to meet the Clean Energy Act objective to reduce at least 66 per cent of new electricity demand with DSM by 2020, the utility is faced with a higher level of DSM risk, or uncertainties that impact the costeffective acquisition of planned energy savings. For industrial DSM incentive programs, DSM risk can be broken down into project development and project performance risks. Development risk represents the project ramp-up phase and is the risk that planned energy savings do not materialize due to low customer response to program incentives. Performance risk represents the operational phase and is the risk that planned energy savings do not persist over the effective measure life. DSM project development and performance risks are, in turn, a result of industrial economic, technological and organizational conditions, or DSM risk factors. In the BC large industrial sector, and characteristic of large industrial sectors in general, these DSM risk factors include: (1) capital constraints to investment in energy efficiency, (2) commodity price volatility, (3) limited internal staffing resources to deploy towards energy efficiency, (4) variable load, process-based energy saving potential, and (5) a lack of organizational awareness of an operation's energy efficiency over time (energy performance). This research assessed the capacity of alternative performance incentive program models to manage DSM risk in BC. Three performance incentive program models were assessed and compared to BC Hydro's current large industrial DSM incentive program, Power Smart Partners -- Transmission Project Incentives, itself a performance incentive-based program. Together, the selected program models represent a continuum of program design and implementation in terms of the schedule and level of incentives provided, the duration and rigour of measurement and verification (M&V), energy efficiency measures targeted and involvement of the private sector. A multi criteria assessment framework was developed to rank the capacity of each program model to manage BC large industrial DSM risk factors. DSM risk management rankings were then compared to program costeffectiveness, targeted energy savings potential in BC and survey results from BC industrial firms on the program models. The findings indicate that the reliability of DSM energy savings in the BC large industrial sector can be maximized through performance incentive program models that: (1) offer incentives jointly for capital and low-cost operations and maintenance (O&M) measures, (2) allow flexible lead times for project development, (3) utilize rigorous M&V methods capable of measuring variable load, process-based energy savings, (4) use moderate contract lengths that align with effective measure life, and (5) integrate energy management software tools capable of providing energy performance feedback to customers to maximize the persistence of energy savings. While this study focuses exclusively on the BC large industrial sector, the findings of this research have applicability to all energy utilities serving large, energy intensive industrial sectors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dirian, Yves; Foffa, Stefano; Kunz, Martin
We present a comprehensive and updated comparison with cosmological observations of two non-local modifications of gravity previously introduced by our group, the so called RR and RT models. We implement the background evolution and the cosmological perturbations of the models in a modified Boltzmann code, using CLASS. We then test the non-local models against the Planck 2015 TT, TE, EE and Cosmic Microwave Background (CMB) lensing data, isotropic and anisotropic Baryonic Acoustic Oscillations (BAO) data, JLA supernovae, H {sub 0} measurements and growth rate data, and we perform Bayesian parameter estimation. We then compare the RR, RT and ΛCDM models,more » using the Savage-Dickey method. We find that the RT model and ΛCDM perform equally well, while the performance of the RR model with respect to ΛCDM depends on whether or not we include a prior on H {sub 0} based on local measurements.« less
Variability-aware compact modeling and statistical circuit validation on SRAM test array
NASA Astrophysics Data System (ADS)
Qiao, Ying; Spanos, Costas J.
2016-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from bit transistor accessible SRAM test array fabricated using a collaborating foundry's 28nm FDSOI technology. Our in-house customized Monte Carlo simulation bench can incorporate these statistical compact models; and simulation results on SRAM writability performance are very close to measurements in distribution estimation. Our proposed statistical compact model parameter extraction methodology also has the potential of predicting non-Gaussian behavior in statistical circuit performances through mixtures of Gaussian distributions.
Spatial extrapolation of lysimeter results using thermal infrared imaging
NASA Astrophysics Data System (ADS)
Voortman, B. R.; Bosveld, F. C.; Bartholomeus, R. P.; Witte, J. P. M.
2016-12-01
Measuring evaporation (E) with lysimeters is costly and prone to numerous errors. By comparing the energy balance and the remotely sensed surface temperature of lysimeters with those of the undisturbed surroundings, we were able to assess the representativeness of lysimeter measurements and to quantify differences in evaporation caused by spatial variations in soil moisture content. We used an algorithm (the so called 3T model) to spatially extrapolate the measured E of a reference lysimeter based on differences in surface temperature, net radiation and soil heat flux. We tested the performance of the 3T model on measurements with multiple lysimeters (47.5 cm inner diameter) and micro lysimeters (19.2 cm inner diameter) installed in bare sand, moss and natural dry grass. We developed different scaling procedures using in situ measurements and remotely sensed surface temperatures to derive spatially distributed estimates of Rn and G and explored the physical soundness of the 3T model. Scaling of Rn and G considerably improved the performance of the 3T model for the bare sand and moss experiments (Nash-Sutcliffe efficiency (NSE) increasing from 0.45 to 0.89 and from 0.81 to 0.94, respectively). For the grass surface, the scaling procedures resulted in a poorer performance of the 3T model (NSE decreasing from 0.74 to 0.70), which was attributed to effects of shading and the difficulty to correct for differences in emissivity between dead and living biomass. The 3T model is physically unsound if the field scale average air temperature, measured at an arbitrarily chosen reference height, is used as input to the model. The proposed measurement system is relatively cheap, since it uses a zero tension (freely draining) lysimeter which results are extrapolated by the 3T model to the unaffected surroundings. The system is promising for bridging the gap between ground observations and satellite based estimates of E.
Synthesis of the Multilayer Cryogenic Insulation Modelling and Measurements
NASA Astrophysics Data System (ADS)
Polinski, J.; Chorowski, M.; Choudhury, A.; Datta, T. S.
2008-03-01
A thermodynamic approach towards insulation systems in cryogenic engineering is proposed. A mathematical model of the heat transfer through multilayer insulation (MLI) has been developed and experimentally verified. The model comprises both physical and engineering parameters determining the MLI performance and enables a complex optimization of the insulation system including the choice of the insulation location in a vacuum space. The model takes into account an interstitial (interlayer) gas pressure variation with the MLI number of layers and layers density. The paper presents the discussion of MLI performance in different conditions and provides comparison of computation results with experimental reference and measured data.
Performance comparison for Barnes model 12-1000, Exotech model 100, and Ideas Inc. Biometer Mark 2
NASA Technical Reports Server (NTRS)
Robinson, B. (Principal Investigator)
1981-01-01
Results of tests show that all channels of all instruments, except channel 3 of the Biometer Mark 2, were stable in response to input signals were linear, and were adequately stable in response to temperature changes. The Biometer Mark 2 is labelled with an inappropriate description of the units measured and the dynamic range is a inappropriate for field measurements causing unnecessarily high fractional errors. This instrument is, therefore, quantization limited. The dynamic range and noise performance of the Model 12-1000 are appropriate for remote sensing field research. The field of view and performance of the Model 100A and the Model 12-1000 are satisfactory. The Biometer Mark 2 has not, as yet, been satisfactorily equipped with an acceptable field of view determining device. Neither the widely used aperture plate nor the 24 deg cone are acceptable.
A Framework for Daylighting Optimization in Whole Buildings with OpenStudio
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016-08-12
We present a toolkit and workflow for leveraging the OpenStudio (Guglielmetti et al. 2010) platform to perform daylighting analysis and optimization in a whole building energy modeling (BEM) context. We have re-implemented OpenStudio's integrated Radiance and EnergyPlus functionality as an OpenStudio Measure. The OpenStudio Radiance Measure works within the OpenStudio Application and Parametric Analysis Tool, as well as the OpenStudio Server large scale analysis framework, allowing a rigorous daylighting simulation to be performed on a single building model or potentially an entire population of programmatically generated models. The Radiance simulation results can automatically inform the broader building energy model, andmore » provide dynamic daylight metrics as a basis for decision. Through introduction and example, this paper illustrates the utility of the OpenStudio building energy modeling platform to leverage existing simulation tools for integrated building energy performance simulation, daylighting analysis, and reportage.« less
Probabilities and predictions: modeling the development of scientific problem-solving skills.
Stevens, Ron; Johnson, David F; Soller, Amy
2005-01-01
The IMMEX (Interactive Multi-Media Exercises) Web-based problem set platform enables the online delivery of complex, multimedia simulations, the rapid collection of student performance data, and has already been used in several genetic simulations. The next step is the use of these data to understand and improve student learning in a formative manner. This article describes the development of probabilistic models of undergraduate student problem solving in molecular genetics that detailed the spectrum of strategies students used when problem solving, and how the strategic approaches evolved with experience. The actions of 776 university sophomore biology majors from three molecular biology lecture courses were recorded and analyzed. Each of six simulations were first grouped by artificial neural network clustering to provide individual performance measures, and then sequences of these performances were probabilistically modeled by hidden Markov modeling to provide measures of progress. The models showed that students with different initial problem-solving abilities choose different strategies. Initial and final strategies varied across different sections of the same course and were not strongly correlated with other achievement measures. In contrast to previous studies, we observed no significant gender differences. We suggest that instructor interventions based on early student performances with these simulations may assist students to recognize effective and efficient problem-solving strategies and enhance learning.
ERIC Educational Resources Information Center
Moosai, Susan; Walker, David A.; Floyd, Deborah L.
2011-01-01
Prediction models using graduation rate as the performance indicator were obtained for community colleges in California, Florida, and Michigan. The results of this study indicated that institutional graduation rate could be predicted effectively from an aggregate of student and institutional characteristics. A performance measure was computed, the…
Ahn, Jaeil; Morita, Satoshi; Wang, Wenyi; Yuan, Ying
2017-01-01
Analyzing longitudinal dyadic data is a challenging task due to the complicated correlations from repeated measurements and within-dyad interdependence, as well as potentially informative (or non-ignorable) missing data. We propose a dyadic shared-parameter model to analyze longitudinal dyadic data with ordinal outcomes and informative intermittent missing data and dropouts. We model the longitudinal measurement process using a proportional odds model, which accommodates the within-dyad interdependence using the concept of the actor-partner interdependence effects, as well as dyad-specific random effects. We model informative dropouts and intermittent missing data using a transition model, which shares the same set of random effects as the longitudinal measurement model. We evaluate the performance of the proposed method through extensive simulation studies. As our approach relies on some untestable assumptions on the missing data mechanism, we perform sensitivity analyses to evaluate how the analysis results change when the missing data mechanism is misspecified. We demonstrate our method using a longitudinal dyadic study of metastatic breast cancer.
NASA Technical Reports Server (NTRS)
Li, Hui; Faruque, Fazlay; Williams, Worth; Al-Hamdan, Mohammad; Luvall, Jeffrey C.; Crosson, William; Rickman, Douglas; Limaye, Ashutosh
2009-01-01
Aerosol optical depth (AOD), an indirect estimate of particle matter using satellite observations, has shown great promise in improving estimates of PM 2.5 air quality surface. Currently, few studies have been conducted to explore the optimal way to apply AOD data to improve the model accuracy of PM 2.5 surface estimation in a real-time air quality system. We believe that two major aspects may be worthy of consideration in that area: 1) the approach to integrate satellite measurements with ground measurements in the pollution estimation, and 2) identification of an optimal temporal scale to calculate the correlation of AOD and ground measurements. This paper is focused on the second aspect on the identifying the optimal temporal scale to correlate AOD with PM2.5. Five following different temporal scales were chosen to evaluate their impact on the model performance: 1) within the last 3 days, 2) within the last 10 days, 3) within the last 30 days, 4) within the last 90 days, and 5) the time period with the highest correlation in a year. The model performance is evaluated for its accuracy, bias, and errors based on the following selected statistics: the Mean Bias, the Normalized Mean Bias, the Root Mean Square Error, Normalized Mean Error, and the Index of Agreement. This research shows that the model with the temporal scale of within the last 30 days displays the best model performance in this study area using 2004 and 2005 data sets.
Maciejewski, Matthew L; Liu, Chuan-Fen; Fihn, Stephan D
2009-01-01
To compare the ability of generic comorbidity and risk adjustment measures, a diabetes-specific measure, and a self-reported functional status measure to explain variation in health care expenditures for individuals with diabetes. This study included a retrospective cohort of 3,092 diabetic veterans participating in a multisite trial. Two comorbidity measures, four risk adjusters, a functional status measure, a diabetes complication count, and baseline expenditures were constructed from administrative and survey data. Outpatient, inpatient, and total expenditure models were estimated using ordinary least squares regression. Adjusted R(2) statistics and predictive ratios were compared across measures to assess overall explanatory power and explanatory power of low- and high-cost subgroups. Administrative data-based risk adjusters performed better than the comorbidity, functional status, and diabetes-specific measures in all expenditure models. The diagnostic cost groups (DCGs) measure had the greatest predictive power overall and for the low- and high-cost subgroups, while the diabetes-specific measure had the lowest predictive power. A model with DCGs and the diabetes-specific measure modestly improved predictive power. Existing generic measures can be useful for diabetes-specific research and policy applications, but more predictive diabetes-specific measures are needed.
USDA-ARS?s Scientific Manuscript database
Availability of continuous long-term measured data for model calibration and validation is limited due to time and resources constraints. As a result, hydrologic and water quality models are calibrated and, if possible, validated when measured data is available. Past work reported on the impact of t...
A Multinomial Model of Event-Based Prospective Memory
ERIC Educational Resources Information Center
Smith, Rebekah E.; Bayen, Ute J.
2004-01-01
Prospective memory is remembering to perform an action in the future. The authors introduce the 1st formal model of event-based prospective memory, namely, a multinomial model that includes 2 separate parameters related to prospective memory processes. The 1st measures preparatory attentional processes, and the 2nd measures retrospective memory…
Leong, Siow Hoo; Ong, Seng Huat
2017-01-01
This paper considers three crucial issues in processing scaled down image, the representation of partial image, similarity measure and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoids image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan and select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from MBF will suggest domain adaptation, which is changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms are evaluated with simulated data and real images and it is shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with higher structural similarity index.
Leong, Siow Hoo
2017-01-01
This paper considers three crucial issues in processing scaled down image, the representation of partial image, similarity measure and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoids image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan and select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from MBF will suggest domain adaptation, which is changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms are evaluated with simulated data and real images and it is shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with higher structural similarity index. PMID:28686634
Gandy, William M; Coberley, Carter; Pope, James E; Rula, Elizabeth Y
2016-01-01
To compare utility of employee well-being to health risk assessment (HRA) as predictors of productivity change. Panel data from 2189 employees who completed surveys 2 years apart were used in hierarchical models comparing the influence of well-being and health risk on longitudinal changes in presenteeism and job performance. Absenteeism change was evaluated in a nonexempt subsample. Change in well-being was the most significant independent predictor of productivity change across all three measures. Comparing hierarchical models, well-being models performed significantly better than HRA models. The HRA added no incremental explanatory power over well-being in combined models. Alone, nonphysical health well-being components outperformed the HRA for all productivity measures. Well-being offers a more comprehensive measure of factors that influence productivity and can be considered preferential to HRA in understanding and addressing suboptimal productivity.
ERIC Educational Resources Information Center
Shuck, Brad; Zigarmi, Drea; Owen, Jesse
2015-01-01
Purpose: The purpose of this study was to empirically examine the utility of self-determination theory (SDT) within the engagement-performance linkage. Design/methodology/approach: Bayesian multi-measurement mediation modeling was used to estimate the relation between SDT, engagement and a proxy measure of performance (e.g. work intentions) (N =…
Comparative Performance and Model Agreement of Three Common Photovoltaic Array Configurations.
Boyd, Matthew T
2018-02-01
Three grid-connected monocrystalline silicon arrays on the National Institute of Standards and Technology (NIST) campus in Gaithersburg, MD have been instrumented and monitored for 1 yr, with only minimal gaps in the data sets. These arrays range from 73 kW to 271 kW, and all use the same module, but have different tilts, orientations, and configurations. One array is installed facing east and west over a parking lot, one in an open field, and one on a flat roof. Various measured relationships and calculated standard metrics have been used to compare the relative performance of these arrays in their different configurations. Comprehensive performance models have also been created in the modeling software pvsyst for each array, and its predictions using measured on-site weather data are compared to the arrays' measured outputs. The comparisons show that all three arrays typically have monthly performance ratios (PRs) above 0.75, but differ significantly in their relative output, strongly correlating to their operating temperature and to a lesser extent their orientation. The model predictions are within 5% of the monthly delivered energy values except during the winter months, when there was intermittent snow on the arrays, and during maintenance and other outages.
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into sub problems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this linkmore » makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is a an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.« less
Precision reconstruction of manufactured free-form components
NASA Astrophysics Data System (ADS)
Ristic, Mihailo; Brujic, Djordje; Ainsworth, Iain
2000-03-01
Manufacturing needs in many industries, especially the aerospace and the automotive, involve CAD remodeling of manufactured free-form parts using NURBS. This is typically performed as part of 'first article inspection' or 'closing the design loop.' The reconstructed model must satisfy requirements such as accuracy, compatibility with the original CAD model and adherence to various constraints. The paper outlines a methodology for realizing this task. Efficiency and quality of the results are achieved by utilizing the nominal CAD model. It is argued that measurement and remodeling steps are equally important. We explain how the measurement was optimized in terms of accuracy, point distribution and measuring speed using a CMM. Remodeling steps include registration, data segmentation, parameterization and surface fitting. Enforcement of constraints such as continuity was performed as part of the surface fitting process. It was found necessary that the relevant algorithms are able to perform in the presence of measurement noise, while making no special assumptions about regularity of data distribution. In order to deal with real life situations, a number of supporting functions for geometric modeling were required and these are described. The presented methodology was applied using real aeroengine parts and the experimental results are presented.
A Measurement and Simulation Based Methodology for Cache Performance Modeling and Tuning
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
We present a cache performance modeling methodology that facilitates the tuning of uniprocessor cache performance for applications executing on shared memory multiprocessors by accurately predicting the effects of source code level modifications. Measurements on a single processor are initially used for identifying parts of code where cache utilization improvements may significantly impact the overall performance. Cache simulation based on trace-driven techniques can be carried out without gathering detailed address traces. Minimal runtime information for modeling cache performance of a selected code block includes: base virtual addresses of arrays, virtual addresses of variables, and loop bounds for that code block. Rest of the information is obtained from the source code. We show that the cache performance predictions are as reliable as those obtained through trace-driven simulations. This technique is particularly helpful to the exploration of various "what-if' scenarios regarding the cache performance impact for alternative code structures. We explain and validate this methodology using a simple matrix-matrix multiplication program. We then apply this methodology to predict and tune the cache performance of two realistic scientific applications taken from the Computational Fluid Dynamics (CFD) domain.
Controlling flexible structures with second order actuator dynamics
NASA Technical Reports Server (NTRS)
Inman, Daniel J.; Umland, Jeffrey W.; Bellos, John
1989-01-01
The control of flexible structures for those systems with actuators that are modeled by second order dynamics is examined. Two modeling approaches are investigated. First a stability and performance analysis is performed using a low order finite dimensional model of the structure. Secondly, a continuum model of the flexible structure to be controlled, coupled with lumped parameter second order dynamic models of the actuators performing the control is used. This model is appropriate in the modeling of the control of a flexible panel by proof-mass actuators as well as other beam, plate and shell like structural numbers. The model is verified with experimental measurements.
NASA Astrophysics Data System (ADS)
Widlowski, J.-L.; Pinty, B.; Lopatka, M.; Atzberger, C.; Buzica, D.; Chelle, M.; Disney, M.; Gastellu-Etchegorry, J.-P.; Gerboles, M.; Gobron, N.; Grau, E.; Huang, H.; Kallel, A.; Kobayashi, H.; Lewis, P. E.; Qin, W.; Schlerf, M.; Stuckens, J.; Xie, D.
2013-07-01
The radiation transfer model intercomparison (RAMI) activity aims at assessing the reliability of physics-based radiative transfer (RT) models under controlled experimental conditions. RAMI focuses on computer simulation models that mimic the interactions of radiation with plant canopies. These models are increasingly used in the development of satellite retrieval algorithms for terrestrial essential climate variables (ECVs). Rather than applying ad hoc performance metrics, RAMI-IV makes use of existing ISO standards to enhance the rigor of its protocols evaluating the quality of RT models. ISO-13528 was developed "to determine the performance of individual laboratories for specific tests or measurements." More specifically, it aims to guarantee that measurement results fall within specified tolerance criteria from a known reference. Of particular interest to RAMI is that ISO-13528 provides guidelines for comparisons where the true value of the target quantity is unknown. In those cases, "truth" must be replaced by a reliable "conventional reference value" to enable absolute performance tests. This contribution will show, for the first time, how the ISO-13528 standard developed by the chemical and physical measurement communities can be applied to proficiency testing of computer simulation models. Step by step, the pre-screening of data, the identification of reference solutions, and the choice of proficiency statistics will be discussed and illustrated with simulation results from the RAMI-IV "abstract canopy" scenarios. Detailed performance statistics of the participating RT models will be provided and the role of the accuracy of the reference solutions as well as the choice of the tolerance criteria will be highlighted.
Automatic reactor model synthesis with genetic programming.
Dürrenmatt, David J; Gujer, Willi
2012-01-01
Successful modeling of wastewater treatment plant (WWTP) processes requires an accurate description of the plant hydraulics. Common methods such as tracer experiments are difficult and costly and thus have limited applicability in practice; engineers are often forced to rely on their experience only. An implementation of grammar-based genetic programming with an encoding to represent hydraulic reactor models as program trees should fill this gap: The encoding enables the algorithm to construct arbitrary reactor models compatible with common software used for WWTP modeling by linking building blocks, such as continuous stirred-tank reactors. Discharge measurements and influent and effluent concentrations are the only required inputs. As shown in a synthetic example, the technique can be used to identify a set of reactor models that perform equally well. Instead of being guided by experience, the most suitable model can now be chosen by the engineer from the set. In a second example, temperature measurements at the influent and effluent of a primary clarifier are used to generate a reactor model. A virtual tracer experiment performed on the reactor model has good agreement with a tracer experiment performed on-site.
NASA Astrophysics Data System (ADS)
Zhao, Z.; Diemant, T.; Häring, T.; Rauscher, H.; Behm, R. J.
2005-12-01
We describe the design and performance of a high-pressure reaction cell for simultaneous kinetic and in situ infrared reflection (IR) spectroscopic measurements on model catalysts at elevated pressures, between 10-3 and 103mbars, which can be operated both as batch reactor and as flow reactor with defined gas flow. The cell is attached to an ultrahigh-vacuum (UHV) system, which is used for sample preparation and also contains facilities for sample characterization. Specific for this design is the combination of a small cell volume, which allows kinetic measurements with high sensitivity under batch or continuous flow conditions, the complete isolation of the cell from the UHV part during UHV measurements, continuous temperature control during both UHV and high-pressure operation, and rapid transfer between UHV and high-pressure stage. Gas dosing is performed by a designed gas-handling system, which allows operation as flow reactor with calibrated gas flows at adjustable pressures. To study the kinetics of reactions on the model catalysts, a quadrupole mass spectrometer is connected to the high-pressure cell. IR measurements are possible in situ by polarization-modulation infrared reflection-absorption spectroscopy, which also allows measurements at elevated pressures. The performance of the setup is demonstrated by test measurements on the kinetics for CO oxidation and the CO adsorption on a Au /TiO2/Ru(0001) model catalyst film at 1-50 mbar total pressure.
NASA Astrophysics Data System (ADS)
Fukumori, Ichiro; Raghunath, Ramanujam; Fu, Lee-Lueng; Chao, Yi
1999-11-01
The feasibility of assimilating satellite altimetry data into a global ocean general circulation model is studied. Three years of TOPEX/Poseidon data are analyzed using a global, three-dimensional, nonlinear primitive equation model. The assimilation's success is examined by analyzing its consistency and reliability measured by formal error estimates with respect to independent measurements. Improvements in model solution are demonstrated, in particular, properties not directly measured. Comparisons are performed with sea level measured by tide gauges, subsurface temperatures and currents from moorings, and bottom pressure measurements. Model representation errors dictate what can and cannot be resolved by assimilation, and its identification is emphasized.
Performance and evaluation of real-time multicomputer control systems
NASA Technical Reports Server (NTRS)
Shin, K. G.
1983-01-01
New performance measures, detailed examples, modeling of error detection process, performance evaluation of rollback recovery methods, experiments on FTMP, and optimal size of an NMR cluster are discussed.
Butler, Javed; McCoin, Nicole S; Feurer, Irene D; Speroff, Theodore; Davis, Stacy F; Chomsky, Don B; Wilson, John R; Merrill, Walter H; Drinkwater, Davis C; Pierson, Richard N; Pinson, C Wright
2003-10-01
Health-related quality of life and functional performance are important outcome measures following heart transplantation. This study investigates the impact of pre-transplant functional performance and post-transplant rejection episodes, obesity and osteopenia on post-transplant health-related quality of life and functional performance. Functional performance and health-related quality of life were measured in 70 adult heart transplant recipients. A composite health-related quality of life outcome measure was computed via principal component analysis. Iterative, multiple regression-based path analysis was used to develop an integrated model of variables that affect post-transplant functional performance and health-related quality of life. Functional performance, as measured by the Karnofsky scale, improved markedly during the first 6 months post-transplant and was then sustained for up to 3 years. Rejection Grade > or =2 was negatively associated with health-related quality of life, measured by Short Form-36 and reversed Psychosocial Adjustment to Illness Scale scores. Patients with osteopenia had lower Short Form-36 physical scores and obese patients had lower functional performance. Path analysis demonstrated a negative direct effect of obesity (beta = - 0.28, p < 0.05) on post-transplant functional performance. Post-transplant functional performance had a positive direct effect on the health-related quality of life composite score (beta = 0.48, p < 0.001), and prior rejection episodes grade > or =2 had a negative direct effect on this measure (beta = -0.29, p < 0.05). Either directly or through effects mediated by functional performance, moderate-to-severe rejection, obesity and osteopenia negatively impact health-related quality of life. These findings indicate that efforts should be made to devise immunosuppressive regimens that reduce the incidence of acute rejection, weight gain and osteopenia after heart transplantation.
Airloads and Wake Geometry Calculations for an Isolated Tiltrotor Model in a Wind Tunnel
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2003-01-01
Th tiltrotor aircraft configuration has the potential to revolutionize air transportation by providing an economical combination of vertical take-off and landing capability with efficient, high-speed cruise flight. To achieve this potential it is necessary to have validated analytical tools that will support future tiltrotor aircraft development. These analytical tools must calculate tiltrotor aeromechanical behavior, including performance, structural loads, vibration, and aeroelastic stability, with an accuracy established by correlation with measured tiltrotor data. For many years such correlation has been performed for helicopter rotors (rotors designed for edgewise flight), but correlation activities for tiltrotors have been limited, in part by the absence of appropriate measured data. The recent test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, U4-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) now provides an extensive set of aeroacoustic, performance, and structural loads data. This paper will present calculations of airloads, wake geometry, and performance, including correlation with TRAM DNW measurements. The calculations were obtained using CAMRAD II, which is a modern rotorcraft comprehensive analysis, with advanced models intended for application to tiltrotor aircraft as well as helicopters. Comprehensive analyses have received extensive correlation with performance and loads measurements on helicopter rotors. The proposed paper is part of an initial effort to perform an equally extensive correlation with tiltrotor data. The correlation will establish the level of predictive capability achievable with current technology; identify the limitations of the current aerodynamic, wake, and structural models of tiltrotors; and lead to recommendations for research to extend tiltrotor aeromechanics analysis capability. The purpose of the Tilt Rotor Aeroacoustic Model (TRAM) experimental project is to provide data necessary to validate tiltrotor performance and aeroacoustic prediction methodologies and to investigate and demonstrate advanced civil tiltrotor technologies. The TRAM project is a key part of the NASA Short Haul Civil Tiltrotor (SHCT) project. The SHCT project is an element of the Aviation Systems Capacity Initiative within NASA. In April-May 1998 the TRAM was tested in the isolated rotor configuration at the Large Low-speed Facility of the German-Dutch Wind Tunnels (DNW). A preparatory test was conducted in December 1997. These tests were the first comprehensive aeroacoustic test for a tiltrotor, including not only noise and performance data, but airload and wake measurements as well. The TRAM can also be tested in a fill-span configuration, incorporating both rotors Lnd a fuselage model. The wind tunnel installation of the TRAM isolated rotor is shown. The rotor tested in the DNW was a 1/4-scale (9.5 ft diameter) model of the right-hand V-22 proprotor. The rotor and nacelle assembly was attached to an acoustically-treated, isolated rotor test stand through a mechanical pivot (the nacelle conversion axis). The TRAM was analyzed using the rotorcraft comprehensive analysis CAMRAD II. CAMRAD II is an aeromechanical analysis of helicopters and rotorcraft that incorporates a combination of advanced technologies, including multibody dynamics, nonlinear finite elements, and rotorcraft aerodynamics. The trim task finds the equilibrium solution (constant or periodic) for a steady state operating condition, in this case a rotor operating in a wind tunnel. For wind tunnel operation, the thrust and flapping are trimmed to target values. The aerodynamic model includes a wake analysis to calculate the rotor nonuniform induced-velocities, using a free wake geometry. The paper will present the results of CAMRAD II calculations compared to the TRAM DNW measurements for hover performance, helicopter mode performance, and helicopter mode airloads. An example of the hover performance results, comparing both mearements and calculations for the JVX (large scale) and TRAM (small scale) rotors, is shown. An example of the helicopter mode performance, showing the influence of the aerodynamic model (particularly the stall delay model) on the calculated power, induced power, and profile power is also shown. An example of the helicopter mode airloads, showing the influence of various wake and aerodynamic models on the calculations, is shown. Good correlation with measured airloads is obtained using the multiple-trailer wake model. The paper will present additional results, and describe and discuss the aerodynamic behavior in detail.
Shuguang Liua; Pamela Anderson; Guoyi Zhoud; Boone Kauffman; Flint Hughes; David Schimel; Vicente Watson; Joseph Tosi
2008-01-01
Objectively assessing the performance of a model and deriving model parameter values from observations are critical and challenging in landscape to regional modeling. In this paper, we applied a nonlinear inversion technique to calibrate the ecosystem model CENTURY against carbon (C) and nitrogen (N) stock measurements collected from 39 mature tropical forest sites in...
Reconsidering the measurement of ancillary service performance.
Griffin, D T; Rauscher, J A
1987-08-01
Prospective payment reimbursement systems have forced hospitals to review their costs more carefully. The result of the increased emphasis on costs is that many hospitals use costs, rather than margin, to judge the performance of ancillary services. However, arbitrary selection of performance measures for ancillary services can result in managerial decisions contrary to hospital objectives. Managerial accounting systems provide models which assist in the development of performance measures for ancillary services. Selection of appropriate performance measures provides managers with the incentive to pursue goals congruent with those of the hospital overall. This article reviews the design and implementation of managerial accounting systems, and considers the impact of prospective payment systems and proposed changes in capital reimbursement on this process.
Molshatzki, Noa; Drory, Yaacov; Myers, Vicki; Goldbourt, Uri; Benyamini, Yael; Steinberg, David M; Gerber, Yariv
2011-07-01
The relationship of risk factors to outcomes has traditionally been assessed by measures of association such as odds ratio or hazard ratio and their statistical significance from an adjusted model. However, a strong, highly significant association does not guarantee a gain in stratification capacity. Using recently developed model performance indices, we evaluated the incremental discriminatory power of individual and neighborhood socioeconomic status (SES) measures after myocardial infarction (MI). Consecutive patients aged ≤65 years (N=1178) discharged from 8 hospitals in central Israel after incident MI in 1992 to 1993 were followed-up through 2005. A basic model (demographic variables, traditional cardiovascular risk factors, and disease severity indicators) was compared with an extended model including SES measures (education, income, employment, living with a steady partner, and neighborhood SES) in terms of Harrell c statistic, integrated discrimination improvement (IDI), and net reclassification improvement (NRI). During the 13-year follow-up, 326 (28%) patients died. Cox proportional hazards models showed that all SES measures were significantly and independently associated with mortality. Furthermore, compared with the basic model, the extended model yielded substantial gains (all P<0.001) in c statistic (0.723 to 0.757), NRI (15.2%), IDI (5.9%), and relative IDI (32%). Improvement was observed both for sensitivity (classification of events) and specificity (classification of nonevents). This study illustrates the additional insights that can be gained from considering the IDI and NRI measures of model performance and suggests that, among community patients with incident MI, incorporating SES measures into a clinical-based model substantially improves long-term mortality risk prediction.
Tian, Mi; Deng, Zhu; Meng, Zhaokun; Li, Rui; Zhang, Zhiyi; Qi, Wenhui; Wang, Rui; Yin, Tingting; Ji, Menghui
2018-01-01
Children’s block building performances are used as indicators of other abilities in multiple domains. In the current study, we examined individual differences, types of model and social settings as influences on children’s block building performance. Chinese preschoolers (N = 180) participated in a block building activity in a natural setting, and performance was assessed with multiple measures in order to identify a range of specific skills. Using scores generated across these measures, three dependent variables were analyzed: block building skills, structural balance and structural features. An overall MANOVA showed that there were significant main effects of gender and grade level across most measures. Types of model showed no significant effect in children’s block building. There was a significant main effect of social settings on structural features, with the best performance in the 5-member group, followed by individual and then the 10-member block building. These findings suggest that boys performed better than girls in block building activity. Block building performance increased significantly from 1st to 2nd year of preschool, but not from second to third. The preschoolers created more representational constructions when presented with a model made of wooden rather than with a picture. There was partial evidence that children performed better when working with peers in a small group than when working alone or working in a large group. It is suggested that future study should examine other modalities rather than the visual one, diversify the samples and adopt a longitudinal investigation. PMID:29441031
Davies, Christopher E; Glonek, Gary Fv; Giles, Lynne C
2017-08-01
One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on misclassification of trajectories in commonly used models under a range of scenarios. To do this we defined a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, incorrectly specified covariance matrices could significantly bias the results but using models with a correct but more complicated than necessary covariance matrix incurred little cost.
Phan, Huy P
2008-03-01
Although extensive research has examined epistemological beliefs, reflective thinking and learning approaches, very few studies have looked at these three theoretical frameworks in their totality. This research tested two separate structural models of epistemological beliefs, learning approaches, reflective thinking and academic performance among tertiary students over a period of 12 months. Participants were first-year Arts (N=616; 271 females, 345 males) and second-year Mathematics (N=581; 241 females, 341 males) university students. Students' epistemological beliefs were measured with the Schommer epistemological questionnaire (EQ, Schommer, 1990). Reflective thinking was measured with the reflective thinking questionnaire (RTQ, Kember et al., 2000). Student learning approaches were measured with the revised study process questionnaire (R-SPQ-2F, Biggs, Kember, & Leung, 2001). LISREL 8 was used to test two structural equation models - the cross-lag model and the causal-mediating model. In the cross-lag model involving Arts students, structural equation modelling showed that epistemological beliefs influenced student learning approaches rather than the contrary. In the causal-mediating model involving Mathematics students, the results indicate that both epistemological beliefs and learning approaches predicted reflective thinking and academic performance. Furthermore, learning approaches mediated the effect of epistemological beliefs on reflective thinking and academic performance. Results of this study are significant as they integrated the three theoretical frameworks within the one study.
Relationship between pore geometric characteristics and SIP/NMR parameters observed for mudstones
NASA Astrophysics Data System (ADS)
Robinson, J.; Slater, L. D.; Keating, K.; Parker, B. L.; Robinson, T.
2017-12-01
The reliable estimation of permeability remains one of the most challenging problems in hydrogeological characterization. Cost effective, non-invasive geophysical methods such as spectral induced polarization (SIP) and nuclear magnetic resonance (NMR) offer an alternative to traditional sampling methods as they are sensitive to the mineral surfaces and pore spaces that control permeability. We performed extensive physical characterization, SIP and NMR geophysical measurements on fractured rock cores extracted from a mudstone site in an effort to compare 1) the pore size characterization determined from traditional and geophysical methods and 2) the performance of permeability models based on these methods. We focus on two physical characterizations that are well-correlated with hydraulic properties: the pore volume normalized surface area (Spor) and an interconnected pore diameter (Λ). We find the SIP polarization magnitude and relaxation time are better correlated with Spor than Λ, the best correlation of these SIP measures for our sample dataset was found with Spor divided by the electrical formation factor (F). NMR parameters are, similarly, better correlated with Spor than Λ. We implement previously proposed mechanistic and empirical permeability models using SIP and NMR parameters. A sandstone-calibrated SIP model using a polarization magnitude does not perform well while a SIP model using a mean relaxation time performs better in part by more sufficiently accounting for the effects of fluid chemistry. A sandstone-calibrated NMR permeability model using an average measure of the relaxation time does not perform well, presumably due to small pore sizes which are either not connected or contain water of limited mobility. An NMR model based on the laboratory determined portions of the bound versus mobile portions of the relaxation distribution performed reasonably well. While limitations exist, there are many opportunities to use geophysical data to predict permeability in mudstone formations.
NASA Astrophysics Data System (ADS)
Couach, O.; Balin, I.; Jimenez, R.; Quaglia, P.; Kirchner, F.; Ristori, P.; Simeonov, V.; Clappier, A.; van den Bergh, H.; Calpini, B.
In order to understand, to predict and to elaborate solutions concerning the photo- chemical and meteorological processes, which occur often in the summer time over the Grenoble city and its three surroundings valleys, both modeling and measurement approaches were considered. Two intensive air pollution and meteorological measure- ments campaigns were performed in 1998 and 1999. Ozone (O3) and other pollutants (NOx, CH2O, SO2, etc) as well as wind, temperature, solar radiation and relative hu- midity were intensively measured at surface level combined with 3D measurements range by using: an instrumented aircraft (Metair), two ozone lidars (e.g. EPFL ozone dial lidar) and wind profilers (e.g.Degreane). This poster will focus on the main results of these measurements like the 3D ozone distribution, the mixing height/planetary boundary layer evolution, the meteorological behavior, and the other pollutants evalu- ation. The paper also highlights the use of these measurements as a necessary database for comparison and checking (validation) of the model performances and thus to allow modeling solutions in predicting the air pollution events and thus permitting to build the right abatement strategies.
Lidar and radar measurements of the melting layer: observations of dark and bright band phenomena
NASA Astrophysics Data System (ADS)
Di Girolamo, P.; Summa, D.; Cacciani, M.; Norton, E. G.; Peters, G.; Dufournet, Y.
2012-05-01
Multi-wavelength lidar measurements in the melting layer revealing the presence of dark and bright bands have been performed by the University of BASILicata Raman lidar system (BASIL) during a stratiform rain event. Simultaneously radar measurements have been also performed from the same site by the University of Hamburg cloud radar MIRA 36 (35.5 GHz), the University of Hamburg dual-polarization micro rain radar (24.15 GHz) and the University of Manchester UHF wind profiler (1.29 GHz). Measurements from BASIL and the radars are illustrated and discussed in this paper for a specific case study on 23 July 2007 during the Convective and Orographically-induced Precipitation Study (COPS). Simulations of the lidar dark and bright band based on the application of concentric/eccentric sphere Lorentz-Mie codes and a melting layer model are also provided. Lidar and radar measurements and model results are also compared with measurements from a disdrometer on ground and a two-dimensional cloud (2DC) probe on-board the ATR42 SAFIRE. Measurements and model results are found to confirm and support the conceptual microphysical/scattering model elaborated by Sassen et al. (2005).
Problems With Risk Reclassification Methods for Evaluating Prediction Models
Pepe, Margaret S.
2011-01-01
For comparing the performance of a baseline risk prediction model with one that includes an additional predictor, a risk reclassification analysis strategy has been proposed. The first step is to cross-classify risks calculated according to the 2 models for all study subjects. Summary measures including the percentage of reclassification and the percentage of correct reclassification are calculated, along with 2 reclassification calibration statistics. The author shows that interpretations of the proposed summary measures and P values are problematic. The author's recommendation is to display the reclassification table, because it shows interesting information, but to use alternative methods for summarizing and comparing model performance. The Net Reclassification Index has been suggested as one alternative method. The author argues for reporting components of the Net Reclassification Index because they are more clinically relevant than is the single numerical summary measure. PMID:21555714
Application of Support Vector Machine to Forex Monitoring
NASA Astrophysics Data System (ADS)
Kamruzzaman, Joarder; Sarker, Ruhul A.
Previous studies have demonstrated superior performance of artificial neural network (ANN) based forex forecasting models over traditional regression models. This paper applies support vector machines to build a forecasting model from the historical data using six simple technical indicators and presents a comparison with an ANN based model trained by scaled conjugate gradient (SCG) learning algorithm. The models are evaluated and compared on the basis of five commonly used performance metrics that measure closeness of prediction as well as correctness in directional change. Forecasting results of six different currencies against Australian dollar reveal superior performance of SVM model using simple linear kernel over ANN-SCG model in terms of all the evaluation metrics. The effect of SVM parameter selection on prediction performance is also investigated and analyzed.
Measuring the usefulness of hidden units in Boltzmann machines with mutual information.
Berglund, Mathias; Raiko, Tapani; Cho, Kyunghyun
2015-04-01
Restricted Boltzmann machines (RBMs) and deep Boltzmann machines (DBMs) are important models in deep learning, but it is often difficult to measure their performance in general, or measure the importance of individual hidden units in specific. We propose to use mutual information to measure the usefulness of individual hidden units in Boltzmann machines. The measure is fast to compute, and serves as an upper bound for the information the neuron can pass on, enabling detection of a particular kind of poor training results. We confirm experimentally that the proposed measure indicates how much the performance of the model drops when some of the units of an RBM are pruned away. We demonstrate the usefulness of the measure for early detection of poor training in DBMs. Copyright © 2014 Elsevier Ltd. All rights reserved.
Photogrammetry-Based Automated Measurements for Tooth Shape and Occlusion Analysis
NASA Astrophysics Data System (ADS)
Knyaz, V. A.; Gaboutchian, A. V.
2016-06-01
Tooth measurements (odontometry) are performed for various scientific and practical applications, including dentistry. Present-day techniques are being increasingly based on 3D model use that provides wider prospects in comparison to measurements on real objects: teeth or their plaster copies. The main advantages emerge through application of new measurement methods which provide the needed degree of non-invasiveness, precision, convenience and details. Tooth measurements have been always regarded as a time-consuming research, even more so with use of new methods due to their wider opportunities. This is where automation becomes essential for further development and implication of measurement techniques. In our research automation in obtaining 3D models and automation of measurements provided essential data that was analysed to suggest recommendations for tooth preparation - one of the most responsible clinical procedures in prosthetic dentistry - within a comparatively short period of time. The original photogrammetric 3D reconstruction system allows to generate 3D models of dental arches, reproduce their closure, or occlusion, and to perform a set of standard measurement in automated mode.
Effect of pneumotach on measurement of vocal function
NASA Astrophysics Data System (ADS)
Walters, Gage; McPhail, Michael; Krane, Michael
2017-11-01
Aerodynamic and acoustic measurements of vocal function were performed in a physical model of the human airway with and without a pneumotach (Rothenberg mask), used by clinicians to measure vocal volume flow. The purpose of these experiments was to assess whether the device alters acoustic and aerodynamic conditions sufficiently to change phonation behavior. The airway model, which mimics acoustic behavior of an adult human airway from trachea to mouth, consists of a 31.5cm long straight duct with a 2.54cm square cross section. Model vocal folds comprised of molded silicone rubber were set into vibration by introducing airflow from a compressed air source. Measurements included transglottal pressure difference, mean volume flow, vocal fold vibratory motion, and sound pressure measured at the mouth. The experiments show that while the pneumotach imparted measurable aerodynamic and acoustic loads on the system, measurement of mean glottal resistance was not affected. Acoustic pressure levels were attenuated, however, suggesting clinical acoustic measurements of vocal function need correction when performed in conjunction with a pneumotach Acknowledge support from NIH DC R01005642-11.
NASA Technical Reports Server (NTRS)
Armstrong, Jeffrey B.; Simon, Donald L.
2012-01-01
Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulatns.
Diffusive deposition of aerosols in Phebus containment during FPT-2 test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontautas, A.; Urbonavicius, E.
2012-07-01
At present the lumped-parameter codes is the main tool to investigate the complex response of the containment of Nuclear Power Plant in case of an accident. Continuous development and validation of the codes is required to perform realistic investigation of the processes that determine the possible source term of radioactive products to the environment. Validation of the codes is based on the comparison of the calculated results with the measurements performed in experimental facilities. The most extensive experimental program to investigate fission product release from the molten fuel, transport through the cooling circuit and deposition in the containment is performedmore » in PHEBUS test facility. Test FPT-2 performed in this facility is considered for analysis of processes taking place in containment. Earlier performed investigations using COCOSYS code showed that the code could be successfully used for analysis of thermal-hydraulic processes and deposition of aerosols, but there was also noticed that diffusive deposition on the vertical walls does not fit well with the measured results. In the CPA module of ASTEC code there is implemented different model for diffusive deposition, therefore the PHEBUS containment model was transferred from COCOSYS code to ASTEC-CPA to investigate the influence of the diffusive deposition modelling. Analysis was performed using PHEBUS containment model of 16 nodes. The calculated thermal-hydraulic parameters are in good agreement with measured results, which gives basis for realistic simulation of aerosol transport and deposition processes. Performed investigations showed that diffusive deposition model has influence on the aerosol deposition distribution on different surfaces in the test facility. (authors)« less
Lightweight ZERODUR: Validation of Mirror Performance and Mirror Modeling Predictions
NASA Technical Reports Server (NTRS)
Hull, Tony; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron
2017-01-01
Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials, and also upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville Alabama, during which a 1.2 m diameter, f/1.2988% lightweighted SCHOTT lightweighted ZERODUR(TradeMark) mirror was tested for thermal stability under static loads in steps down to 230K. Test results are compared to model predictions, based upon recently published data on ZERODUR(TradeMark). In addition to monitoring the mirror surface for thermal perturbations in XRCF Thermal Vacuum tests, static load gravity deformations have been measured and compared to model predictions. Also the Modal Response(dynamic disturbance) was measured and compared to model. We will discuss the fabrication approach and optomechanical design of the ZERODUR(TradeMark) mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS). Summarize the outcome of NASA's XRCF tests and model validations
Lightweight ZERODUR®: Validation of mirror performance and mirror modeling predictions
NASA Astrophysics Data System (ADS)
Hull, Anthony B.; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron
2017-01-01
Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials, and also upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA’s XRCF chambers and laboratories in Huntsville Alabama, during which a 1.2m diameter, f/1.29 88% lightweighted SCHOTT lightweighted ZERODUR® mirror was tested for thermal stability under static loads in steps down to 230K. Test results are compared to model predictions, based upon recently published data on ZERODUR®. In addition to monitoring the mirror surface for thermal perturbations in XRCF Thermal Vacuum tests, static load gravity deformations have been measured and compared to model predictions. Also the Modal Response (dynamic disturbance) was measured and compared to model. We will discuss the fabrication approach and optomechanical design of the ZERODUR® mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA’s XRCF tests and model validations.
ERIC Educational Resources Information Center
Kahraman, Nilufer; Brown, Crystal B.
2015-01-01
Psychometric models based on structural equation modeling framework are commonly used in many multiple-choice test settings to assess measurement invariance of test items across examinee subpopulations. The premise of the current article is that they may also be useful in the context of performance assessment tests to test measurement invariance…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Forrest M; Randerson, James T; Thornton, Peter E
2009-12-01
The need to capture important climate feedbacks in general circulation models (GCMs) has resulted in efforts to include atmospheric chemistry and land and ocean biogeochemistry into the next generation of production climate models, called Earth System Models (ESMs). While many terrestrial and ocean carbon models have been coupled to GCMs, recent work has shown that such models can yield a wide range of results (Friedlingstein et al., 2006). This work suggests that a more rigorous set of global offline and partially coupled experiments, along with detailed analyses of processes and comparisons with measurements, are needed. The Carbon-Land Model Intercomparison Projectmore » (C-LAMP) was designed to meet this need by providing a simulation protocol and model performance metrics based upon comparisons against best-available satellite- and ground-based measurements (Hoffman et al., 2007). Recently, a similar effort in Europe, called the International Land Model Benchmark (ILAMB) Project, was begun to assess the performance of European land surface models. These two projects will now serve as prototypes for a proposed international land-biosphere model benchmarking activity for those models participating in the IPCC Fifth Assessment Report (AR5). Initially used for model validation for terrestrial biogeochemistry models in the NCAR Community Land Model (CLM), C-LAMP incorporates a simulation protocol for both offline and partially coupled simulations using a prescribed historical trajectory of atmospheric CO2 concentrations. Models are confronted with data through comparisons against AmeriFlux site measurements, MODIS satellite observations, NOAA Globalview flask records, TRANSCOM inversions, and Free Air CO2 Enrichment (FACE) site measurements. Both sets of experiments have been performed using two different terrestrial biogeochemistry modules coupled to the CLM version 3 in the Community Climate System Model version 3 (CCSM3): the CASA model of Fung, et al., and the carbon-nitrogen (CN) model of Thornton. Comparisons of the CLM3 offline results against observational datasets have been performed and are described in Randerson et al. (2009). CLM version 4 has been evaluated using C-LAMP, showing improvement in many of the metrics. Efforts are now underway to initiate a Nitrogen-Land Model Intercomparison Project (N-LAMP) to better constrain the effects of the nitrogen cycle in biosphere models. Presented will be new results from C-LAMP for CLM4, initial N-LAMP developments, and the proposed land-biosphere model benchmarking activity.« less
Performance measures for transform data coding.
NASA Technical Reports Server (NTRS)
Pearl, J.; Andrews, H. C.; Pratt, W. K.
1972-01-01
This paper develops performance criteria for evaluating transform data coding schemes under computational constraints. Computational constraints that conform with the proposed basis-restricted model give rise to suboptimal coding efficiency characterized by a rate-distortion relation R(D) similar in form to the theoretical rate-distortion function. Numerical examples of this performance measure are presented for Fourier, Walsh, Haar, and Karhunen-Loeve transforms.
USDA-ARS?s Scientific Manuscript database
Direct normal irradiance (DNI) is required in the performance estimation of concentrating solar energy systems. The objective of this paper is to compare measured and modeled DNI data for a site in the Texas Panhandle (Bushland, Texas) to determine the accuracy of the model and where improvements mi...
NASA Astrophysics Data System (ADS)
Moonen, P.; Gromke, C.; Dorer, V.
2013-08-01
The potential of a Large Eddy Simulation (LES) model to reliably predict near-field pollutant dispersion is assessed. To that extent, detailed time-resolved numerical simulations of coupled flow and dispersion are conducted for a street canyon with tree planting. Different crown porosities are considered. The model performance is assessed in several steps, ranging from a qualitative comparison to measured concentrations, over statistical data analysis by means of scatter plots and box plots, up to the calculation of objective validation metrics. The extensive validation effort highlights and quantifies notable features and shortcomings of the model, which would otherwise remain unnoticed. The model performance is found to be spatially non-uniform. Closer agreement with measurement data is achieved near the canyon ends than for the central part of the canyon, and typical model acceptance criteria are satisfied more easily for the leeward than for the windward canyon wall. This demonstrates the need for rigorous model evaluation. Only quality-assured models can be used with confidence to support assessment, planning and implementation of pollutant mitigation strategies.
ERIC Educational Resources Information Center
Weiss, Michael J.; May, Henry
2012-01-01
As test-based educational accountability has moved to the forefront of national and state education policy, so has the desire for better measures of school performance. No Child Left Behind's (NCLB) status and safe harbor measures have been criticized for being unfair and unreliable, respectively. In response to such criticism, in 2005 the federal…
Choice and Change of Measures in Performance-Measurement Models
2005-05-01
associated costs . 3 Discussions of many current accounting and performance-measurement issues can be...change: an exploratory study. Accounting , Organizations, and Society, 24(3), 189-204. Adimando, C., Butler, R., Malley, S ., Ravid, S . A., Shepro, R...impact of contextual and process factors on the evaluation of activity-based costing systems. Accounting , Organizations and Society, 24, 525-559. Antle
Early seizure detection in an animal model of temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
Talathi, Sachin S.; Hwang, Dong-Uk; Ditto, William; Carney, Paul R.
2007-11-01
The performance of five seizure detection schemes, i.e., Nonlinear embedding delay, Hurst scaling, Wavelet Scale, autocorrelation and gradient of accumulated energy, in their ability to detect EEG seizures close to the seizure onset time were evaluated to determine the feasibility of their application in the development of a real time closed loop seizure intervention program (RCLSIP). The criteria chosen for the performance evaluation were, high statistical robustness as determined through the predictability index, the sensitivity and the specificity of a given measure to detect an EEG seizure, the lag in seizure detection with respect to the EEG seizure onset time, as determined through visual inspection and the computational efficiency for each detection measure. An optimality function was designed to evaluate the overall performance of each measure dependent on the criteria chosen. While each of the above measures analyzed for seizure detection performed very well in terms of the statistical parameters, the nonlinear embedding delay measure was found to have the highest optimality index due to its ability to detect seizure very close to the EEG seizure onset time, thereby making it the most suitable dynamical measure in the development of RCLSIP in rat model with chronic limbic epilepsy.
Biewener, Andrew A.; Wakeling, James M.; Lee, Sabrina S.; Arnold, Allison S.
2014-01-01
We review here the use and reliability of Hill-type muscle models to predict muscle performance under varying conditions, ranging from in situ production of isometric force to in vivo dynamics of muscle length change and force in response to activation. Muscle models are frequently used in musculoskeletal simulations of movement, particularly when applied to studies of human motor performance in which surgically implanted transducers have limited use. Musculoskeletal simulations of different animal species also are being developed to evaluate comparative and evolutionary aspects of locomotor performance. However, such models are rarely validated against direct measures of fascicle strain or recordings of muscle–tendon force. Historically, Hill-type models simplify properties of whole muscle by scaling salient properties of single fibers to whole muscles, typically accounting for a muscle’s architecture and series elasticity. Activation of the model’s single contractile element (assigned the properties of homogenous fibers) is also simplified and is often based on temporal features of myoelectric (EMG) activation recorded from the muscle. Comparison of standard one-element models with a novel two-element model and with in situ and in vivo measures of EMG, fascicle strain, and force recorded from the gastrocnemius muscles of goats shows that a two-element Hill-type model, which allows independent recruitment of slow and fast units, better predicts temporal patterns of in situ and in vivo force. Recruitment patterns of slow/fast units based on wavelet decomposition of EMG activity in frequency–time space are generally correlated with the intensity spectra of the EMG signals, the strain rates of the fascicles, and the muscle–tendon forces measured in vivo, with faster units linked to greater strain rates and to more rapid forces. Using direct measures of muscle performance to further test Hill-type models, whether traditional or more complex, remains critical for establishing their accuracy and essential for verifying their applicability to scientific and clinical studies of musculoskeletal function. PMID:24928073
Profiling outcomes of ambulatory care: casemix affects perceived performance.
Berlowitz, D R; Ash, A S; Hickey, E C; Kader, B; Friedman, R; Moskowitz, M A
1998-06-01
The authors explored the role of casemix adjustment when profiling outcomes of ambulatory care. The authors reviewed the medical records of 656 patients with hypertension, diabetes, or chronic obstructive pulmonary disease (COPD) receiving care at one of three Department of Veterans Affairs medical centers. Outcomes included measures of physiological control for hypertension and diabetes, and of exacerbations for COPD. Predictors of poor outcomes, including physical examination findings, symptoms, and comorbidities, were identified and entered into regression models. Observed minus expected performance was described for each site, both before and after casemix adjustment. Risk-adjustment models were developed that were clinically plausible and had good performance properties. Differences existed among the three sites in the severity of the patients being cared for. For example, the percentage of patients expected to have poor blood pressure control were 35% at site 1, 37% at site 2, and 44% at site 3 (P < 0.01). Casemix-adjusted measures of performance were different from unadjusted measures. Sites that were outliers (P < 0.05) with one approach had observed performance no different from expected with another approach. Casemix adjustment models can be developed for outpatient medical conditions. Sites differ in the severity of patients they treat, and adjusting for these differences can alter judgments of site performance. Casemix adjustment is necessary when profiling outpatient medical conditions.
Predictive accuracy of a model of volatile anesthetic uptake.
Kennedy, R Ross; French, Richard A; Spencer, Christopher
2002-12-01
A computer program that models anesthetic uptake and distribution has been in use in our department for 20 yr as a teaching tool. New anesthesia machines that electronically measure fresh gas flow rates and vaporizer settings allowed us to assess the performance of this model during clinical anesthesia. Gas flow, vaporizer settings, and end-tidal concentrations were collected from the anesthesia machine (Datex S/5 ADU) at 10-s intervals during 30 elective anesthetics. These were entered into the uptake model. Expired anesthetic vapor concentrations were calculated and compared with actual values as measured by the patient monitor (Datex AS/3). Sevoflurane was used in 16 patients and isoflurane in 14 patients. For all patients, the median performance error was -0.24%, the median absolute performance error was 13.7%, divergence was 2.3%/h, and wobble was 3.1%. There was no significant difference between sevoflurane and isoflurane. This model predicted expired concentrations well in these patients. These results are similar to those seen when comparing calculated and actual propofol concentrations in propofol infusion systems and meet published guidelines for the accuracy of models used in target-controlled anesthesia systems. This model may be useful for predicting responses to changes in fresh gas and vapor settings. We compared measured inhaled anesthetic concentrations with those predicted by a model. The method used for comparison has been used to study models of propofol administration. Our model predicts expired isoflurane and sevoflurane concentrations at least as well as common propofol models predict arterial propofol concentrations.
Pimperl, A; Schreyögg, J; Rothgang, H; Busse, R; Glaeske, G; Hildebrandt, H
2015-12-01
Transparency of economic performance of integrated care systems (IV) is a basic requirement for the acceptance and further development of integrated care. Diverse evaluation methods are used but are seldom openly discussed because of the proprietary nature of the different business models. The aim of this article is to develop a generic model for measuring economic performance of IV interventions. A catalogue of five quality criteria is used to discuss different evaluation methods -(uncontrolled before-after-studies, control group-based approaches, regression models). On this -basis a best practice model is proposed. A regression model based on the German morbidity-based risk structure equalisation scheme (MorbiRSA) has some benefits in comparison to the other methods mentioned. In particular it requires less resources to be implemented and offers advantages concerning the relia-bility and the transparency of the method (=important for acceptance). Also validity is sound. Although RCTs and - also to a lesser -extent - complex difference-in-difference matching approaches can lead to a higher validity of the results, their feasibility in real life settings is limited due to economic and practical reasons. That is why central criticisms of a MorbiRSA-based model were addressed, adaptions proposed and incorporated in a best practice model: Population-oriented morbidity adjusted margin improvement model (P-DBV(MRSA)). The P-DBV(MRSA) approach may be used as a standardised best practice model for the economic evaluation of IV. Parallel to the proposed approach for measuring economic performance a balanced, quality-oriented performance measurement system should be introduced. This should prevent incentivising IV-players to undertake short-term cost cutting at the expense of quality. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Technical Reports Server (NTRS)
Lindensmith, Chris A.; Briggs, H. Clark; Beregovski, Yuri; Feria, V. Alfonso; Goullioud, Renaud; Gursel, Yekta; Hahn, Inseob; Kinsella, Gary; Orzewalla, Matthew; Phillips, Charles
2006-01-01
SIM Planetquest (SIM) is a large optical interferometer for making microarcsecond measurements of the positions of stars, and to detect Earth-sized planets around nearby stars. To achieve this precision, SIM requires stability of optical components to tens of picometers per hour. The combination of SIM s large size (9 meter baseline) and the high stability requirement makes it difficult and costly to measure all aspects of system performance on the ground. To reduce risks, costs and to allow for a design with fewer intermediate testing stages, the SIM project is developing an integrated thermal, mechanical and optical modeling process that will allow predictions of the system performance to be made at the required high precision. This modeling process uses commercial, off-the-shelf tools and has been validated against experimental results at the precision of the SIM performance requirements. This paper presents the description of the model development, some of the models, and their validation in the Thermo-Opto-Mechanical (TOM3) testbed which includes full scale brassboard optical components and the metrology to test them at the SIM performance requirement levels.
Li, Yuelin; Root, James C; Atkinson, Thomas M; Ahles, Tim A
2016-06-01
Patient-reported cognition generally exhibits poor concordance with objectively assessed cognitive performance. In this article, we introduce latent regression Rasch modeling and provide a step-by-step tutorial for applying Rasch methods as an alternative to traditional correlation to better clarify the relationship of self-report and objective cognitive performance. An example analysis using these methods is also included. Introduction to latent regression Rasch modeling is provided together with a tutorial on implementing it using the JAGS programming language for the Bayesian posterior parameter estimates. In an example analysis, data from a longitudinal neurocognitive outcomes study of 132 breast cancer patients and 45 non-cancer matched controls that included self-report and objective performance measures pre- and post-treatment were analyzed using both conventional and latent regression Rasch model approaches. Consistent with previous research, conventional analysis and correlations between neurocognitive decline and self-reported problems were generally near zero. In contrast, application of latent regression Rasch modeling found statistically reliable associations between objective attention and processing speed measures with self-reported Attention and Memory scores. Latent regression Rasch modeling, together with correlation of specific self-reported cognitive domains with neurocognitive measures, helps to clarify the relationship of self-report with objective performance. While the majority of patients attribute their cognitive difficulties to memory decline, the Rash modeling suggests the importance of processing speed and initial learning. To encourage the use of this method, a step-by-step guide and programming language for implementation is provided. Implications of this method in cognitive outcomes research are discussed. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Jwo, Ching-Song; Cheng, Tseng-Tang; Cho, Hung-Pin; Chiang, Wei-Tang; Chen, Sih-Li; Chen, Chien-Wei; Jian, Ling-You
2011-12-01
This paper presents a reduced fan noise method, with increased fan-benefit analysis of various performances. The experimental approach adopts changes in the outlet in the form of two fans (flat tongue and a V-Type tongue plate) in order to measure the noise under the two forms of value and volume of supply air fan, shaft power consumption, operating current, and static pressure. The results showed that the tongue plate and the V-plane tongue plate noise between the value of the measurement location of 6.7 in the tongue plate in the plane below the noise level is about V-tongue plate 1 ~ 1.5dB (A). Air flow rate testing showed that the flat plate and the V-Type tongue plate between the tongue plate V-Type flow rate value, the measurement location of 3.4 in the tongue plate in the plane was more than the V-Type flow rate tongue plate 5 to 5.5%. Shaft power testing of measurement model 3, and measurement model 4, showed that the tongue plate in the plane V-tongue plate was more than 8%, 5%. The measurement models 3 and 4 and 5 showed more than the V-Type plane tongue plate 1%, 2.7%, and 2.6%. The measurement models 6 and 8 showed that, the flat tongue plate is less than the V-tongue plate of 2.9% and 2.3%. Static pressure testing showed that the flat tongue plate in particular measurement models (3,4,8,9), the static value of V-tongue plate than the 11.1% higher, respectively, 9%, 4.3%, and 3.7%. The results summarized above suggest that, in the specific measurement points, when parallel to the tongue plate the V-tongue board has better performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piotr A. Domanski; W. Vance Payne
2002-10-31
The main goal of this project was to investigate and compare the performance of an R410A air conditioner to that of an R22 air conditioner, with specific interest in performance at high ambient temperatures at which the condenser of the R410A system may be operating above the refrigerant's critical point. Part 1 of this project consisted of conducting comprehensive measurements of thermophysical for refrigerant R125 and refrigerant blends R410A and R507A and developing new equation of state formulations and mixture models for predicting thermophysical properties of HFC refrigerant blends. Part 2 of this project conducted performance measurements of split-system, 3-tonmore » R22 and R410A residential air conditioners in the 80 to 135 F (27.8 to 57.2 C) outdoor temperature range and development of a system performance model. The performance data was used in preparing a beta version of EVAP-COND, a windows-based simulation package for predicting performance of finned-tube evaporators and condensers. The modeling portion of this project also included the formulation of a model for an air-conditioner equipped with a thermal expansion valve (TXV). Capacity and energy efficiency ratio (EER) were measured and compared. The R22 system's performance was measured over the outdoor ambient temperature range of 80 to 135 F (27.8 to 57.2 C). The same test range was planned for the R410A system. However, the compressor's safety system cut off the compressor at the 135.0 F (57.2 C) test temperature. The highest measurement on this system was at 130.0 F (54.4 C). Subsequently, a custom-manufactured R410A compressor with a disabled safety system and a more powerful motor was installed and performance was measured at outdoor temperatures up to 155.0 F (68.3 C). Both systems had similar capacity and EER performance at 82.0 F (27.8 C). The capacity and EER degradation of both systems were nearly linearly dependent with rising ambient outdoor ambient test temperatures. The performance degradation of R410A at higher temperatures was greater than R22. However, the R22 and R410A systems both operated normally during all tests. Visual observations of the R410A system provided no indication of vibrations or TXV hunting at high ambient outdoor test conditions with the compressor operating in the transcritical regime.« less
Mengoni, Marlène; Kayode, Oluwasegun; Sikora, Sebastien N F; Zapata-Cornelio, Fernando Y; Gregory, Diane E; Wilcox, Ruth K
2017-08-01
The development of current surgical treatments for intervertebral disc damage could benefit from virtual environment accounting for population variations. For such models to be reliable, a relevant description of the mechanical properties of the different tissues and their role in the functional mechanics of the disc is of major importance. The aims of this work were first to assess the physiological hoop strain in the annulus fibrosus in fresh conditions ( n = 5) in order to extract a functional behaviour of the extrafibrillar matrix; then to reverse-engineer the annulus fibrosus fibrillar behaviour ( n = 6). This was achieved by performing both direct and global controlled calibration of material parameters, accounting for the whole process of experimental design and in silico model methodology. Direct-controlled models are specimen-specific models representing controlled experimental conditions that can be replicated and directly comparing measurements. Validation was performed on another six specimens and a sensitivity study was performed. Hoop strains were measured as 17 ± 3% after 10 min relaxation and 21 ± 4% after 20-25 min relaxation, with no significant difference between the two measurements. The extrafibrillar matrix functional moduli were measured as 1.5 ± 0.7 MPa. Fibre-related material parameters showed large variability, with a variance above 0.28. Direct-controlled calibration and validation provides confidence that the model development methodology can capture the measurable variation within the population of tested specimens.
Kayode, Oluwasegun; Sikora, Sebastien N. F.; Zapata-Cornelio, Fernando Y.; Gregory, Diane E.; Wilcox, Ruth K.
2017-01-01
The development of current surgical treatments for intervertebral disc damage could benefit from virtual environment accounting for population variations. For such models to be reliable, a relevant description of the mechanical properties of the different tissues and their role in the functional mechanics of the disc is of major importance. The aims of this work were first to assess the physiological hoop strain in the annulus fibrosus in fresh conditions (n = 5) in order to extract a functional behaviour of the extrafibrillar matrix; then to reverse-engineer the annulus fibrosus fibrillar behaviour (n = 6). This was achieved by performing both direct and global controlled calibration of material parameters, accounting for the whole process of experimental design and in silico model methodology. Direct-controlled models are specimen-specific models representing controlled experimental conditions that can be replicated and directly comparing measurements. Validation was performed on another six specimens and a sensitivity study was performed. Hoop strains were measured as 17 ± 3% after 10 min relaxation and 21 ± 4% after 20–25 min relaxation, with no significant difference between the two measurements. The extrafibrillar matrix functional moduli were measured as 1.5 ± 0.7 MPa. Fibre-related material parameters showed large variability, with a variance above 0.28. Direct-controlled calibration and validation provides confidence that the model development methodology can capture the measurable variation within the population of tested specimens. PMID:28879014
Scoring Methods in the International Land Benchmarking (ILAMB) Package
NASA Astrophysics Data System (ADS)
Collier, N.; Hoffman, F. M.; Keppel-Aleks, G.; Lawrence, D. M.; Mu, M.; Riley, W. J.; Randerson, J. T.
2017-12-01
The International Land Model Benchmarking (ILAMB) project is a model-data intercomparison and integration project designed to improve the performance of the land component of Earth system models. This effort is disseminated in the form of a python package which is openly developed (https://bitbucket.org/ncollier/ilamb). ILAMB is more than a workflow system that automates the generation of common scalars and plot comparisons to observational data. We aim to provide scientists and model developers with a tool to gain insight into model behavior. Thus, a salient feature of the ILAMB package is our synthesis methodology, which provides users with a high-level understanding of model performance. Within ILAMB, we calculate a non-dimensional score of a model's performance in a given dimension of the physics, chemistry, or biology with respect to an observational dataset. For example, we compare the Fluxnet-MTE Gross Primary Productivity (GPP) product against model output in the corresponding historical period. We compute common statistics such as the bias, root mean squared error, phase shift, and spatial distribution. We take these measures and find relative errors by normalizing the values, and then use the exponential to map this relative error to the unit interval. This allows for the scores to be combined into an overall score representing multiple aspects of model performance. In this presentation we give details of this process as well as a proposal for tuning the exponential mapping to make scores more cross comparable. However, as many models are calibrated using these scalar measures with respect to observational datasets, we also score the relationships among relevant variables in the model. For example, in the case of GPP, we also consider its relationship to precipitation, evapotranspiration, and temperature. We do this by creating a mean response curve and a two-dimensional distribution based on the observational data and model results. The response curves are then scored using a relative measure of the root mean squared error and the exponential as before. The distributions are scored using the so-called Hellinger distance, a statistical measure for how well one distribution is represented by another, and included in the model's overall score.
Measuring the ROI on Knowledge Management Systems.
ERIC Educational Resources Information Center
Wickhorst, Vickie
2002-01-01
Defines knowledge management and corporate portals and provides a model that can be applied to assessing return on investment (ROI) for a knowledge management solution. Highlights include leveraging knowledge in an organization; assessing the value of human capital; and the Intellectual Capital Performance Measurement Model. (LRW)
NASA Astrophysics Data System (ADS)
Robinson, S.; Julyan, P. J.; Hastings, D. L.; Zweit, J.
2004-12-01
The key performance measures of resolution, count rate, sensitivity and scatter fraction are predicted for a dedicated BGO block detector patient PET scanner (GE Advance) in 2D mode for imaging with the non-pure positron-emitting radionuclides 124I, 55Co, 61Cu, 62Cu, 64Cu and 76Br. Model calculations including parameters of the scanner, decay characteristics of the radionuclides and measured parameters in imaging the pure positron-emitter 18F are used to predict performance according to the National Electrical Manufacturers Association (NEMA) NU 2-1994 criteria. Predictions are tested with measurements made using 124I and show that, in comparison with 18F, resolution degrades by 1.2 mm radially and tangentially throughout the field-of-view (prediction: 1.2 mm), count-rate performance reduces considerably and in close accordance with calculations, sensitivity decreases to 23.4% of that with 18F (prediction: 22.9%) and measured scatter fraction increases from 10.0% to 14.5% (prediction: 14.7%). Model predictions are expected to be equally accurate for other radionuclides and may be extended to similar scanners. Although performance is worse with 124I than 18F, imaging is not precluded in 2D mode. The viability of 124I imaging and performance in a clinical context compared with 18F is illustrated with images of a patient with recurrent thyroid cancer acquired using both [124I]-sodium iodide and [18F]-2-fluoro-2-deoxyglucose.
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
The accuracy-based model performance metrics not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use the information theory-based metrics to see whether they can be used as complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight model of different complexity were applied. The information-theory based metrics were obtained after representing the time series as strings of symbols where different symbols corresponded to different quantiles of the probability distribution of streamflow. The symbol alphabet was used. Three metrics were computed for those strings - mean information gain that measures the randomness of the signal, effective measure complexity that characterizes predictability and fluctuation complexity that characterizes the presence of a pattern in the signal. The observed streamflow time series has smaller information content and larger complexity metrics than the precipitation time series. Watersheds served as information filters and and streamflow time series were less random and more complex than the ones of precipitation. This is reflected the fact that the watershed acts as the information filter in the hydrologic conversion process from precipitation to streamflow. The Nash Sutcliffe efficiency metric increased as the complexity of models increased, but in many cases several model had this efficiency values not statistically significant from each other. In such cases, ranking models by the closeness of the information-theory based parameters in simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
Abbott, M.L.; Susong, D.D.; Krabbenhoft, D.P.; Rood, A.S.
2002-01-01
Mercury (total and methyl) was evaluated in snow samples collected near a major mercury emission source on the Idaho National Engineering and Environmental Laboratory (INEEL) in southeastern Idaho and 160 km downwind in Teton Range in western Wyoming. The sampling was done to assess near-field (<12 km) deposition rates around the source, compare them to those measured in a relatively remote, pristine downwind location, and to use the measurements to develop improved, site-specific model input parameters for precipitation scavenging coefficient and the fraction of Hg emissions deposited locally. Measured snow water concentrations (ng L-1) were converted to deposition (ug m-2) using the sample location snow water equivalent. The deposition was then compared to that predicted using the ISC3 air dispersion/deposition model which was run with a range of particle and vapor scavenging coefficient input values. Accepted model statistical performance measures (fractional bias and normalized mean square error) were calculated for the different modeling runs, and the best model performance was selected. Measured concentrations close to the source (average = 5.3 ng L-1) were about twice those measured in the Teton Range (average = 2.7 ng L-1) which were within the expected range of values for remote background areas. For most of the sampling locations, the ISC3 model predicted within a factor of two of the observed deposition. The best modeling performance was obtained using a scavenging coefficient value for 0.25 ??m diameter particulate and the assumption that all of the mercury is reactive Hg(II) and subject to local deposition. A 0.1 ??m particle assumption provided conservative overprediction of the data, while a vapor assumption resulted in highly variable predictions. Partitioning a fraction of the Hg emissions to elemental Hg(0) (a U.S. EPA default assumption for combustion facility risk assessments) would have underpredicted the observed fallout.
Cognitive load predicts point-of-care ultrasound simulator performance.
Aldekhyl, Sara; Cavalcanti, Rodrigo B; Naismith, Laura M
2018-02-01
The ability to maintain good performance with low cognitive load is an important marker of expertise. Incorporating cognitive load measurements in the context of simulation training may help to inform judgements of competence. This exploratory study investigated relationships between demographic markers of expertise, cognitive load measures, and simulator performance in the context of point-of-care ultrasonography. Twenty-nine medical trainees and clinicians at the University of Toronto with a range of clinical ultrasound experience were recruited. Participants answered a demographic questionnaire then used an ultrasound simulator to perform targeted scanning tasks based on clinical vignettes. Participants were scored on their ability to both acquire and interpret ultrasound images. Cognitive load measures included participant self-report, eye-based physiological indices, and behavioural measures. Data were analyzed using a multilevel linear modelling approach, wherein observations were clustered by participants. Experienced participants outperformed novice participants on ultrasound image acquisition. Ultrasound image interpretation was comparable between the two groups. Ultrasound image acquisition performance was predicted by level of training, prior ultrasound training, and cognitive load. There was significant convergence between cognitive load measurement techniques. A marginal model of ultrasound image acquisition performance including prior ultrasound training and cognitive load as fixed effects provided the best overall fit for the observed data. In this proof-of-principle study, the combination of demographic and cognitive load measures provided more sensitive metrics to predict ultrasound simulator performance. Performance assessments which include cognitive load can help differentiate between levels of expertise in simulation environments, and may serve as better predictors of skill transfer to clinical practice.
Scholey, J J; Wilcox, P D; Wisnom, M R; Friswell, M I
2009-06-01
A model for quantifying the performance of acoustic emission (AE) systems on plate-like structures is presented. Employing a linear transfer function approach the model is applicable to both isotropic and anisotropic materials. The model requires several inputs including source waveforms, phase velocity and attenuation. It is recognised that these variables may not be readily available, thus efficient measurement techniques are presented for obtaining phase velocity and attenuation in a form that can be exploited directly in the model. Inspired by previously documented methods, the application of these techniques is examined and some important implications for propagation characterisation in plates are discussed. Example measurements are made on isotropic and anisotropic plates and, where possible, comparisons with numerical solutions are made. By inputting experimentally obtained data into the model, quantitative system metrics are examined for different threshold values and sensor locations. By producing plots describing areas of hit success and source location error, the ability to measure the performance of different AE system configurations is demonstrated. This quantitative approach will help to place AE testing on a more solid foundation, underpinning its use in industrial AE applications.
A model of clutter for complex, multivariate geospatial displays.
Lohrenz, Maura C; Trafton, J Gregory; Beck, R Melissa; Gendron, Marlin L
2009-02-01
A novel model of measuring clutter in complex geospatial displays was compared with human ratings of subjective clutter as a measure of convergent validity. The new model is called the color-clustering clutter (C3) model. Clutter is a known problem in displays of complex data and has been shown to affect target search performance. Previous clutter models are discussed and compared with the C3 model. Two experiments were performed. In Experiment 1, participants performed subjective clutter ratings on six classes of information visualizations. Empirical results were used to set two free parameters in the model. In Experiment 2, participants performed subjective clutter ratings on aeronautical charts. Both experiments compared and correlated empirical data to model predictions. The first experiment resulted in a .76 correlation between ratings and C3. The second experiment resulted in a .86 correlation, significantly better than results from a model developed by Rosenholtz et al. Outliers to our correlation suggest further improvements to C3. We suggest that (a) the C3 model is a good predictor of subjective impressions of clutter in geospatial displays, (b) geospatial clutter is a function of color density and saliency (primary C3 components), and (c) pattern analysis techniques could further improve C3. The C3 model could be used to improve the design of electronic geospatial displays by suggesting when a display will be too cluttered for its intended audience.
Lepper, Paul A; D'Spain, Gerald L
2007-08-01
The performance of traditional techniques of passive localization in ocean acoustics such as time-of-arrival (phase differences) and amplitude ratios measured by multiple receivers may be degraded when the receivers are placed on an underwater vehicle due to effects of scattering. However, knowledge of the interference pattern caused by scattering provides a potential enhancement to traditional source localization techniques. Results based on a study using data from a multi-element receiving array mounted on the inner shroud of an autonomous underwater vehicle show that scattering causes the localization ambiguities (side lobes) to decrease in overall level and to move closer to the true source location, thereby improving localization performance, for signals in the frequency band 2-8 kHz. These measurements are compared with numerical modeling results from a two-dimensional time domain finite difference scheme for scattering from two fluid-loaded cylindrical shells. Measured and numerically modeled results are presented for multiple source aspect angles and frequencies. Matched field processing techniques quantify the source localization capabilities for both measurements and numerical modeling output.
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Three real-time architectures - A study using reward models
NASA Technical Reports Server (NTRS)
Sjogren, J. A.; Smith, R. M.
1990-01-01
Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the evolutionary behavior of the computer system by a continuous-time Markov chain, and a reward rate is associated with each state. In reliability/availability models, upstates have reward rate 1, and down states have reward rate zero associated with them. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Steady-state expected reward rate and expected instantaneous reward rate are clearly useful measures which can be extracted from the Markov reward model. The diversity of areas where Markov reward models may be used is illustrated with a comparative study of three examples of interest to the fault tolerant computing community.
Distributed multi-criteria model evaluation and spatial association analysis
NASA Astrophysics Data System (ADS)
Scherer, Laura; Pfister, Stephan
2015-04-01
Model performance, if evaluated, is often communicated by a single indicator and at an aggregated level; however, it does not embrace the trade-offs between different indicators and the inherent spatial heterogeneity of model efficiency. In this study, we simulated the water balance of the Mississippi watershed using the Soil and Water Assessment Tool (SWAT). The model was calibrated against monthly river discharge at 131 measurement stations. Its time series were bisected to allow for subsequent validation at the same gauges. Furthermore, the model was validated against evapotranspiration which was available as a continuous raster based on remote sensing. The model performance was evaluated for each of the 451 sub-watersheds using four different criteria: 1) Nash-Sutcliffe efficiency (NSE), 2) percent bias (PBIAS), 3) root mean square error (RMSE) normalized to standard deviation (RSR), as well as 4) a combined indicator of the squared correlation coefficient and the linear regression slope (bR2). Conditions that might lead to a poor model performance include aridity, a very flat and steep relief, snowfall and dams, as indicated by previous research. In an attempt to explain spatial differences in model efficiency, the goodness of the model was spatially compared to these four phenomena by means of a bivariate spatial association measure which combines Pearson's correlation coefficient and Moran's index for spatial autocorrelation. In order to assess the model performance of the Mississippi watershed as a whole, three different averages of the sub-watershed results were computed by 1) applying equal weights, 2) weighting by the mean observed river discharge, 3) weighting by the upstream catchment area and the square root of the time series length. Ratings of model performance differed significantly in space and according to efficiency criterion. The model performed much better in the humid Eastern region than in the arid Western region which was confirmed by the high spatial association with the aridity index (ratio of mean annual precipitation to mean annual potential evapotranspiration). This association was still significant when controlling for slopes which manifested the second highest spatial association. In line with these findings, overall model efficiency of the entire Mississippi watershed appeared better when weighted with mean observed river discharge. Furthermore, the model received the highest rating with regards to PBIAS and was judged worst when considering NSE as the most comprehensive indicator. No universal performance indicator exists that considers all aspects of a hydrograph. Therefore, sound model evaluation must take into account multiple criteria. Since model efficiency varies in space which is masked by aggregated ratings spatially explicit model goodness should be communicated as standard praxis - at least as a measure of spatial variability of indicators. Furthermore, transparent documentation of the evaluation procedure also with regards to weighting of aggregated model performance is crucial but often lacking in published research. Finally, the high spatial association between model performance and aridity highlights the need to improve modelling schemes for arid conditions as priority over other aspects that might weaken model goodness.
Acoustic results of the Boeing model 360 whirl tower test
NASA Astrophysics Data System (ADS)
Watts, Michael E.; Jordan, David
1990-09-01
An evaluation is presented for whirl tower test results of the Model 360 helicopter's advanced, high-performance four-bladed composite rotor system intended to facilitate over-200-knot flight. During these performance measurements, acoustic data were acquired by seven microphones. A comparison of whirl-tower tests with theory indicate that theoretical prediction accuracies vary with both microphone position and the inclusion of ground reflection. Prediction errors varied from 0 to 40 percent of the measured signal-to-peak amplitude.
NASA Astrophysics Data System (ADS)
Zimmermann, Jesko; Jones, Michael
2016-04-01
Agriculture can be significant contributor to greenhouse gas emissions, this is especially prevalent in Ireland where the agricultural sector accounts for a third of total emissions. The high emissions are linked to both the importance of agriculture in the Irish economy and the focus on dairy and beef production. In order to reduce emissions three main categories are explored: (1) reduction of methane emissions from cattle, (2) reduction of nitrous oxide emissions from fertilisation, and (3) fostering the carbon sequestration potential of soils. The presented research focuses on the latter two categories, especially changes in fertiliser amount and composition. Soil properties and climate conditions measured at the four experimental sites (two silage and two spring barley) were used to parameterise four biogeochemical models (DayCent, ECOSSE, DNDC 9.4, and DNDC 9.5). All sites had a range of different fertiliser regimes applied. This included changes in amount (0 to 500 kg N/ha on grassland and 0 to 200 kg N/ha on arable fields), fertiliser type (calcium ammonium nitrate and urea), and added inhibitors (the nitrification inhibitor DCD, and the urease inhibitor Agrotain). Overall, 20 different treatments were applied to the grassland sites, and 17 to the arable sites. Nitrous oxide emissions, measured in 2013 and 2014 at all sites using closed chambers, were made available to validate model results for these emissions. To assess model performance for the daily measurements, the Root Mean Square Error (RMSE) was compared to the measured 95% confidence interval of the measured data (RMSE95). Bias was tested comparing the relative error (RE) the 95 % confidence interval of the relative error (RE95). Preliminary results show mixed model performance, depending on the model, site, and the fertiliser regime. However, with the exception of urea fertilisation and added inhibitors, all scenarios were reproduced by at least one model with no statistically significant total error (RMSE < RMSE95) or bias (RE< RE95). A general trend observed was that model performance declined with increased fertilisation rates. Overall, DayCent showed the best performance, however it does not provide the possibility to model the addition urease inhibitors. The results suggest that modelling changes in fertiliser regime on a large scale may require a multi-model approach to assure best performance. Ultimately, the research aims to develop a GIS based platform to apply such an approach on a regional scale.
Influence of Wake Models on Calculated Tiltrotor Aerodynamics
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
The tiltrotor aircraft configuration has the potential to revolutionize air transportation by providing an economical combination of vertical take-off and landing capability with efficient, high-speed cruise flight. To achieve this potential it is necessary to have validated analytical tools that will support future tiltrotor aircraft development. These analytical tools must calculate tiltrotor aeromechanical behavior, including performance, structural loads, vibration, and aeroelastic stability, with an accuracy established by correlation with measured tiltrotor data. The recent test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single,l/4-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. This paper will examine the influence of wake models on calculated tiltrotor aerodynamics, comparing calculations of performance and airloads with TRAM DNW measurements. The calculations will be performed using the comprehensive analysis CAMRAD II.
The Effect of Major Organizational Policy on Employee Attitudes Toward Graduate Degrees
2006-03-01
on the type of intention being assessed - measure of intention and measure of estimate ( Fishbein & Ajzen , 1975). The former is used to predict...motivated to pursue graduate degrees. Therefore, the Model of Reasoned Action’s measurement of estimate for goal achievement ( Fishbein & Ajzen , 1975...Five Years The measurement of intention from the Model of Reasoned Action for predicting the performance of a behavior ( Fishbein & Ajzen , 1975) was
Electric Motor Thermal Management R&D; NREL (National Renewable Energy Laboratory)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennion, Kevin
2015-06-09
Thermal constraints place significant limitations on how electric motors ultimately perform. Without the ability to remove heat, the motor cannot operate without sacrificing performance, efficiency, and reliability. Finite element analysis and computational fluid dynamics modeling approaches are being increasingly utilized in the design and analysis of electric motors. As the models become more sophisticated, it is important to have detailed and accurate knowledge of both the passive thermal performance and the active cooling performance. In this work, we provide an overview of research characterizing both passive and active thermal elements related to electric motor thermal management. To better characterize themore » passive thermal performance, work is being performed to measure motor material thermal properties and thermal contact resistances. The active cooling performance of automatic transmission fluid (ATF) jets is also being measured to better understand the heat transfer coefficients of ATF impinging on motor copper windings.« less
Modeling and optimum time performance for concurrent processing
NASA Technical Reports Server (NTRS)
Mielke, Roland R.; Stoughton, John W.; Som, Sukhamoy
1988-01-01
The development of a new graph theoretic model for describing the relation between a decomposed algorithm and its execution in a data flow environment is presented. Called ATAMM, the model consists of a set of Petri net marked graphs useful for representing decision-free algorithms having large-grained, computationally complex primitive operations. Performance time measures which determine computing speed and throughput capacity are defined, and the ATAMM model is used to develop lower bounds for these times. A concurrent processing operating strategy for achieving optimum time performance is presented and illustrated by example.
User's Manual for Data for Validating Models for PV Module Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marion, W.; Anderberg, A.; Deline, C.
2014-04-01
This user's manual describes performance data measured for flat-plate photovoltaic (PV) modules installed in Cocoa, Florida, Eugene, Oregon, and Golden, Colorado. The data include PV module current-voltage curves and associated meteorological data for approximately one-year periods. These publicly available data are intended to facilitate the validation of existing models for predicting the performance of PV modules, and for the development of new and improved models. For comparing different modeling approaches, using these public data will provide transparency and more meaningful comparisons of the relative benefits.
Evaluating hydrological model performance using information theory-based metrics
USDA-ARS?s Scientific Manuscript database
The accuracy-based model performance metrics not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use the information theory-based metrics to see whether they can be used as complementary tool for hydrologic m...
Loudspeakers: Modeling and control
NASA Astrophysics Data System (ADS)
Al-Ali, Khalid Mohammad
This thesis documented a comprehensive study of loudspeaker modeling and control. A lumped-parameter model for a voice-coil loudspeaker in a vented enclosure was presented that derived from a consideration of physical principles. In addition, a low-frequency (20 Hz to 100 Hz), feedback control method designed to improve the nonlinear performance of the loudspeaker and a suitable performance measure for use in design and evaluation were proposed. Data from experiments performed on a variety of actual loudspeakers confirmed the practicality of the theory developed in this work. The lumped-parameter loudspeaker model, although simple, captured much of the nonlinear behavior of the loudspeaker. In addition, the model formulation allowed a straightforward application of modern control system methods and lent itself well to modern parametric identification techniques. The nonlinear performance of the loudspeaker system was evaluated using a suitable distortion measure that was proposed and compared with other distortion measures currently used in practice. Furthermore, the linearizing effect of feedback using a linear controller (both static and dynamic) was studied on a class of nonlinear systems. The results illustrated that the distortion reduction was potentially significant and a useful upper bound on the closed-loop distortion was found based on the sensitivity function of the system's linearization. A feedback scheme based on robust control theory was chosen for application to the loudspeaker system. Using the pressure output of the loudspeaker system for feedback, the technique offered significant advantages over those previously attempted. Illustrative examples were presented that proved the applicability of the theory developed in this dissertation to a variety of loudspeaker systems. The examples included a vented loudspeaker model and actual loudspeakers enclosed in both vented and sealed configurations. In each example, predictable and measurable distortion reduction at the output of the closed-loop system was recorded.
Liu, Hon-Man; Chen, Shan-Kai; Chen, Ya-Fang; Lee, Chung-Wei; Yeh, Lee-Ren
2016-01-01
Purpose To assess the inter session reproducibility of automatic segmented MRI-derived measures by FreeSurfer in a group of subjects with normal-appearing MR images. Materials and Methods After retrospectively reviewing a brain MRI database from our institute consisting of 14,758 adults, those subjects who had repeat scans and had no history of neurodegenerative disorders were selected for morphometry analysis using FreeSurfer. A total of 34 subjects were grouped by MRI scanner model. After automatic segmentation using FreeSurfer, label-wise comparison (involving area, thickness, and volume) was performed on all segmented results. An intraclass correlation coefficient was used to estimate the agreement between sessions. Wilcoxon signed rank test was used to assess the population mean rank differences across sessions. Mean-difference analysis was used to evaluate the difference intervals across scanners. Absolute percent difference was used to estimate the reproducibility errors across the MRI models. Kruskal-Wallis test was used to determine the across-scanner effect. Results The agreement in segmentation results for area, volume, and thickness measurements of all segmented anatomical labels was generally higher in Signa Excite and Verio models when compared with Sonata and TrioTim models. There were significant rank differences found across sessions in some labels of different measures. Smaller difference intervals in global volume measurements were noted on images acquired by Signa Excite and Verio models. For some brain regions, significant MRI model effects were observed on certain segmentation results. Conclusions Short-term scan-rescan reliability of automatic brain MRI morphometry is feasible in the clinical setting. However, since repeatability of software performance is contingent on the reproducibility of the scanner performance, the scanner performance must be calibrated before conducting such studies or before using such software for retrospective reviewing. PMID:26812647
Pitman, A; Jones, D N; Stuart, D; Lloydhope, K; Mallitt, K; O'Rourke, P
2009-10-01
The study reports on the evolution of the Australian radiologist relative value unit (RVU) model of measuring radiologist reporting workloads in teaching hospital departments, and aims to outline a way forward for the development of a broad national safety, quality and performance framework that enables value mapping, measurement and benchmarking. The Radiology International Benchmarking Project of Queensland Health provided a suitable high-level national forum where the existing Pitman-Jones RVU model was applied to contemporaneous data, and its shortcomings and potential avenues for future development were analysed. Application of the Pitman-Jones model to Queensland data and also a Victorian benchmark showed that the original recommendation of 40,000 crude RVU per full-time equivalent consultant radiologist (97-98 baseline level) has risen only moderately, to now lie around 45,000 crude RVU/full-time equivalent. Notwithstanding this, the model has a number of weaknesses and is becoming outdated, as it cannot capture newer time-consuming examinations particularly in CT. A significant re-evaluation of the value of medical imaging is required, and is now occurring. We must rethink how we measure, benchmark, display and continually improve medical imaging safety, quality and performance, throughout the imaging care cycle and beyond. It will be necessary to ensure alignment with patient needs, as well as clinical and organisational objectives. Clear recommendations for the development of an updated national reporting workload RVU system are available, and an opportunity now exists for developing a much broader national model. A more sophisticated and balanced multidimensional safety, quality and performance framework that enables measurement and benchmarking of all important elements of health-care service is needed.
Fatigue models for applied research in warfighting.
Hursh, Steven R; Redmond, Daniel P; Johnson, Michael L; Thorne, David R; Belenky, Gregory; Balkin, Thomas J; Storm, William F; Miller, James C; Eddy, Douglas R
2004-03-01
The U.S. Department of Defense (DOD) has long pursued applied research concerning fatigue in sustained and continuous military operations. In 1996, Hursh developed a simple homeostatic fatigue model and programmed the model into an actigraph to give a continuous indication of performance. Based on this initial work, the Army conducted a study of 1 wk of restricted sleep in 66 subjects with multiple measures of performance, termed the Sleep Dose-Response Study (SDR). This study provided numerical estimation of parameters for the Walter Reed Army Institute of Research Sleep Performance Model (SPM) and elucidated the relationships among several sleep-related performance measures. Concurrently, Hursh extended the original actigraph modeling structure and software expressions for use in other practical applications. The model became known as the Sleep, Activity, Fatigue, and Task Effectiveness (SAFTE) Model, and Hursh has applied it in the construction of a Fatigue Avoidance Scheduling Tool. This software is designed to help optimize the operational management of aviation ground and flight crews, but is not limited to that application. This paper describes the working fatigue model as it is being developed by the DOD laboratories, using the conceptual framework, vernacular, and notation of the SAFTE Model. At specific points where the SPM may differ from SAFTE, this is discussed. Extensions of the SAFTE Model to incorporate dynamic phase adjustment for both transmeridian relocation and shift work are described. The unexpected persistence of performance effects following chronic sleep restriction found in the SDR study necessitated some revisions of the SAFTE Model that are also described. The paper concludes with a discussion of several important modeling issues that remain to be addressed.
Statistical properties of four effect-size measures for mediation models.
Miočević, Milica; O'Rourke, Holly P; MacKinnon, David P; Brown, Hendricks C
2018-02-01
This project examined the performance of classical and Bayesian estimators of four effect size measures for the indirect effect in a single-mediator model and a two-mediator model. Compared to the proportion and ratio mediation effect sizes, standardized mediation effect-size measures were relatively unbiased and efficient in the single-mediator model and the two-mediator model. Percentile and bias-corrected bootstrap interval estimates of ab/s Y , and ab(s X )/s Y in the single-mediator model outperformed interval estimates of the proportion and ratio effect sizes in terms of power, Type I error rate, coverage, imbalance, and interval width. For the two-mediator model, standardized effect-size measures were superior to the proportion and ratio effect-size measures. Furthermore, it was found that Bayesian point and interval summaries of posterior distributions of standardized effect-size measures reduced excessive relative bias for certain parameter combinations. The standardized effect-size measures are the best effect-size measures for quantifying mediated effects.
Intercomparison of the community multiscale air quality model and CALGRID using process analysis.
O'Neill, Susan M; Lamb, Brian K
2005-08-01
This study was designed to examine the similarities and differences between two advanced photochemical air quality modeling systems: EPA Models-3/CMAQ and CALGRID/CALMET. Both modeling systems were applied to an ozone episode that occurred along the I-5 urban corridor in western Washington and Oregon during July 11-14, 1996. Both models employed the same modeling domain and used the same detailed gridded emission inventory. The CMAQ model was run using both the CB-IV and RADM2 chemical mechanisms, while CALGRID was used with the SAPRC-97 chemical mechanism. Outputfrom the Mesoscale Meteorological Model (MM5) employed with observational nudging was used in both models. The two modeling systems, representing three chemical mechanisms and two sets of meteorological inputs, were evaluated in terms of statistical performance measures for both 1- and 8-h average observed ozone concentrations. The results showed that the different versions of the systems were more similar than different, and all versions performed well in the Portland region and downwind of Seattle but performed poorly in the more rural region north of Seattle. Improving the meteorological input into the CALGRID/CALMET system with planetary boundary layer (PBL) parameters from the Models-3/CMAQ meteorology preprocessor (MCIP) improved the performance of the CALGRID/CALMET system. The 8-h ensemble case was often the best performer of all the cases indicating that the models perform better over longer analysis periods. The 1-h ensemble case, derived from all runs, was not necessarily an improvement over the five individual cases, but the standard deviation about the mean provided a measure of overall modeling uncertainty. Process analysis was applied to examine the contribution of the individual processes to the species conservation equation. The process analysis results indicated that the two modeling systems arrive at similar solutions by very different means. Transport rates are faster and exhibit greater fluctuations in the CMAQ cases than in the CALGRID cases, which lead to different placement of the urban ozone plumes. The CALGRID cases, which rely on the SAPRC97 chemical mechanism, exhibited a greater diurnal production/loss cycle of ozone concentrations per hour compared to either the RADM2 or CBIV chemical mechanisms in the CMAQ cases. These results demonstrate the need for specialized process field measurements to confirm whether we are modeling ozone with valid processes.
Medicaid plan, health centers reveal secrets to boosting HEDIS scores, quality of care.
1999-07-01
How to do well on HEDIS measurement and boost quality of care for your Medicaid members. Neighborhood Health Plan in Boston, MA, attributes its top performance on Medicaid HEDIS measures to providers' care models, a commitment to quality, and the quest for performance data.
Guiding principles and checklist for population-based quality metrics.
Krishnan, Mahesh; Brunelli, Steven M; Maddux, Franklin W; Parker, Thomas F; Johnson, Douglas; Nissenson, Allen R; Collins, Allan; Lacson, Eduardo
2014-06-06
The Centers for Medicare and Medicaid Services oversees the ESRD Quality Incentive Program to ensure that the highest quality of health care is provided by outpatient dialysis facilities that treat patients with ESRD. To that end, Centers for Medicare and Medicaid Services uses clinical performance measures to evaluate quality of care under a pay-for-performance or value-based purchasing model. Now more than ever, the ESRD therapeutic area serves as the vanguard of health care delivery. By translating medical evidence into clinical performance measures, the ESRD Prospective Payment System became the first disease-specific sector using the pay-for-performance model. A major challenge for the creation and implementation of clinical performance measures is the adjustments that are necessary to transition from taking care of individual patients to managing the care of patient populations. The National Quality Forum and others have developed effective and appropriate population-based clinical performance measures quality metrics that can be aggregated at the physician, hospital, dialysis facility, nursing home, or surgery center level. Clinical performance measures considered for endorsement by the National Quality Forum are evaluated using five key criteria: evidence, performance gap, and priority (impact); reliability; validity; feasibility; and usability and use. We have developed a checklist of special considerations for clinical performance measure development according to these National Quality Forum criteria. Although the checklist is focused on ESRD, it could also have broad application to chronic disease states, where health care delivery organizations seek to enhance quality, safety, and efficiency of their services. Clinical performance measures are likely to become the norm for tracking performance for health care insurers. Thus, it is critical that the methodologies used to develop such metrics serve the payer and the provider and most importantly, reflect what represents the best care to improve patient outcomes. Copyright © 2014 by the American Society of Nephrology.
Ertas, Gokhan
2018-07-01
To assess the value of joint evaluation of diffusion tensor imaging (DTI) measures by using logistic regression modelling to detect high GS risk group prostate tumors. Fifty tumors imaged using DTI on a 3 T MRI device were analyzed. Regions of interests focusing on the center of tumor foci and noncancerous tissue on the maps of mean diffusivity (MD) and fractional anisotropy (FA) were used to extract the minimum, the maximum and the mean measures. Measure ratio was computed by dividing tumor measure by noncancerous tissue measure. Logistic regression models were fitted for all possible pair combinations of the measures using 5-fold cross validation. Systematic differences are present for all MD measures and also for all FA measures in distinguishing the high risk tumors [GS ≥ 7(4 + 3)] from the low risk tumors [GS ≤ 7(3 + 4)] (P < 0.05). Smaller value for MD measures and larger value for FA measures indicate the high risk. The models enrolling the measures achieve good fits and good classification performances (R 2 adj = 0.55-0.60, AUC = 0.88-0.91), however the models using the measure ratios perform better (R 2 adj = 0.59-0.75, AUC = 0.88-0.95). The model that employs the ratios of minimum MD and maximum FA accomplishes the highest sensitivity, specificity and accuracy (Se = 77.8%, Sp = 96.9% and Acc = 90.0%). Joint evaluation of MD and FA diffusion tensor imaging measures is valuable to detect high GS risk group peripheral zone prostate tumors. However, use of the ratios of the measures improves the accuracy of the detections substantially. Logistic regression modelling provides a favorable solution for the joint evaluations easily adoptable in clinical practice. Copyright © 2018 Elsevier Inc. All rights reserved.
Connaughton, Veronica M; Amiruddin, Azhani; Clunies-Ross, Karen L; French, Noel; Fox, Allison M
2017-05-01
A major model of the cerebral circuits that underpin arithmetic calculation is the triple-code model of numerical processing. This model proposes that the lateralization of mathematical operations is organized across three circuits: a left-hemispheric dominant verbal code; a bilateral magnitude representation of numbers and a bilateral Arabic number code. This study simultaneously measured the blood flow of both middle cerebral arteries using functional transcranial Doppler ultrasonography to assess hemispheric specialization during the performance of both language and arithmetic tasks. The propositions of the triple-code model were assessed in a non-clinical adult group by measuring cerebral blood flow during the performance of multiplication and subtraction problems. Participants were 17 adults aged between 18-27 years. We obtained laterality indices for each type of mathematical operation and compared these in participants with left-hemispheric language dominance. It was hypothesized that blood flow would lateralize to the left hemisphere during the performance of multiplication operations, but would not lateralize during the performance of subtraction operations. Hemispheric blood flow was significantly left lateralized during the multiplication task, but was not lateralized during the subtraction task. Compared to high spatial resolution neuroimaging techniques previously used to measure cerebral lateralization, functional transcranial Doppler ultrasonography is a cost-effective measure that provides a superior temporal representation of arithmetic cognition. These results provide support for the triple-code model of arithmetic processing and offer complementary evidence that multiplication operations are processed differently in the adult brain compared to subtraction operations. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Posselt, D.; L'Ecuyer, T.; Matsui, T.
2009-05-01
Cloud resolving models are typically used to examine the characteristics of clouds and precipitation and their relationship to radiation and the large-scale circulation. As such, they are not required to reproduce the exact location of each observed convective system, much less each individual cloud. Some of the most relevant information about clouds and precipitation is provided by instruments located on polar-orbiting satellite platforms, but these observations are intermittent "snapshots" in time, making assessment of model performance challenging. In contrast to direct comparison, model results can be evaluated statistically. This avoids the requirement for the model to reproduce the observed systems, while returning valuable information on the performance of the model in a climate-relevant sense. The focus of this talk is a model evaluation study, in which updates to the microphysics scheme used in a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model are evaluated using statistics of observed clouds, precipitation, and radiation. We present the results of multiday (non-equilibrium) simulations of organized deep convection using single- and double-moment versions of a the model's cloud microphysical scheme. Statistics of TRMM multi-sensor derived clouds, precipitation, and radiative fluxes are used to evaluate the GCE results, as are simulated TRMM measurements obtained using a sophisticated instrument simulator suite. We present advantages and disadvantages of performing model comparisons in retrieval and measurement space and conclude by motivating the use of data assimilation techniques for analyzing and improving model parameterizations.
Using performance measurement to drive improvement: a road map for change.
Galvin, Robert S; McGlynn, Elizabeth A
2003-01-01
Performance measures and reporting have not been adopted throughout the US health care system despite their central role in encouraging increased participation by consumers in decision-making. Understanding whether the failure of measurement and reporting to diffuse throughout the health system can be overcome is critical for determining future policy in this area. To create a conceptual framework for analyzing the current rate of adoption and evaluating alternatives for accelerating adoption, and to recommend a set of concrete steps that can be taken to increase the use of performance measurement and reporting. Review of three theoretic models (Rogers, Prochaska/DiClemente, Gladwell), examination of the literature on previous experiences with quality measurement and reporting, and interviews with select stakeholders. The three theoretic models provide a valuable framework for understanding why the use of performance measures is stalled ("the circle of unaccountability") and for generating ideas about concrete steps that could be taken to accelerate adoption. Six steps are recommended: (1) raise public awareness, (2) redesign measures and reports, (3) make the delivery of information timely, (4) require public reporting, (5) develop and implement systems to reward quality, and (6) actively court leaders. The recommended six steps are interconnected; action on all will be required to drive significant acceleration in rates of adoption of performance measurement and reporting. Leadership and coordination are necessary to ensure these steps are taken and that they work in concert with one another.
Cardinal rules: Visual orientation perception reflects knowledge of environmental statistics
Girshick, Ahna R.; Landy, Michael S.; Simoncelli, Eero P.
2011-01-01
Humans are remarkably good at performing visual tasks, but experimental measurements reveal substantial biases in the perception of basic visual attributes. An appealing hypothesis is that these biases arise through a process of statistical inference, in which information from noisy measurements is fused with a probabilistic model of the environment. But such inference is optimal only if the observer’s internal model matches the environment. Here, we provide evidence that this is the case. We measured performance in an orientation-estimation task, demonstrating the well-known fact that orientation judgements are more accurate at cardinal (horizontal and vertical) orientations, along with a new observation that judgements made under conditions of uncertainty are strongly biased toward cardinal orientations. We estimate observers’ internal models for orientation and find that they match the local orientation distribution measured in photographs. We also show how a neural population could embed probabilistic information responsible for such biases. PMID:21642976
Moon, Brianna F; Jones, Kyle M; Chen, Liu Qi; Liu, Peilu; Randtke, Edward A; Howison, Christine M; Pagel, Mark D
2015-01-01
Acidosis within tumor and kidney tissues has previously been quantitatively measured using a molecular imaging technique known as acidoCEST MRI. The previous studies used iopromide and iopamidol, two iodinated contrast agents that are approved for clinical CT diagnoses and have been repurposed for acidoCEST MRI studies. We aimed to compare the performance of the two agents for measuring pH by optimizing image acquisition conditions, correlating pH with a ratio of CEST effects from an agent, and evaluating the effects of concentration, endogenous T1 relaxation time and temperature on the pH-CEST ratio correlation for each agent. These results showed that the two agents had similar performance characteristics, although iopromide produced a pH measurement with a higher dynamic range while iopamidol produced a more precise pH measurement. We then compared the performance of the two agents to measure in vivo extracellular pH (pHe) within xenograft tumor models of Raji lymphoma and MCF-7 breast cancer. Our results showed that the pHe values measured with each agent were not significantly different. Also, iopromide consistently measured a greater region of the tumor relative to iopamidol in both tumor models. Therefore, an iodinated contrast agent for acidoCEST MRI should be selected based on the measurement properties needed for a specific biomedical study and the pharmacokinetic properties of a specific tumor model. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Borup, Morten; Grum, Morten; Linde, Jens Jørgen; Mikkelsen, Peter Steen
2016-08-01
Numerous studies have shown that radar rainfall estimates need to be adjusted against rain gauge measurements in order to be useful for hydrological modelling. In the current study we investigate if adjustment can improve radar rainfall estimates to the point where they can be used for modelling overflows from urban drainage systems, and we furthermore investigate the importance of the aggregation period of the adjustment scheme. This is done by continuously adjusting X-band radar data based on the previous 5-30 min of rain data recorded by multiple rain gauges and propagating the rainfall estimates through a hydraulic urban drainage model. The model is built entirely from physical data, without any calibration, to avoid bias towards any specific type of rainfall estimate. The performance is assessed by comparing measured and modelled water levels at a weir downstream of a highly impermeable, well defined, 64 ha urban catchment, for nine overflow generating rain events. The dynamically adjusted radar data perform best when the aggregation period is as small as 10-20 min, in which case it performs much better than static adjusted radar data and data from rain gauges situated 2-3 km away.
Energy Performance Measurement and Simulation Modeling of Tactical Soft-Wall Shelters
2015-07-01
was too low to measure was on the order of 5 hours. Because the research team did not have access to the site between 1700 and 0500 hours the...Basic for Applications ( VBA ). The objective function was the root mean square (RMS) errors between modeled and measured heating load and the modeled...References Phase Change Energy Solutions. (2013). BioPCM web page, http://phasechange.com/index.php/en/about/our-material. Accessed 16 September
Improving competitiveness through performance-measurement systems.
Stewart, L J; Lockamy, A
2001-12-01
Parallels exist between the competitive pressures felt by U.S. manufacturers over the past 30 years and those experienced by healthcare providers today. Increasing market deregulation, changing government policies, and growing consumerism have altered the healthcare arena. Responding to similar pressures, manufacturers adopted a strategic orientation driven by customer needs and expectations that led them to achieve high performance levels and surpass their competition. The adoption of integrated performance-measurement systems was instrumental in these firms' success. An integrated performance-measurement model for healthcare organizations can help to blend the organization's strategy with the demands of the contemporary healthcare environment. Performance-measurement systems encourage healthcare organizations to focus on their mission and vision by aligning their strategic objectives and resource-allocation decisions with customer requirements.
Dependability and performability analysis
NASA Technical Reports Server (NTRS)
Trivedi, Kishor S.; Ciardo, Gianfranco; Malhotra, Manish; Sahner, Robin A.
1993-01-01
Several practical issues regarding specifications and solution of dependability and performability models are discussed. Model types with and without rewards are compared. Continuous-time Markov chains (CTMC's) are compared with (continuous-time) Markov reward models (MRM's) and generalized stochastic Petri nets (GSPN's) are compared with stochastic reward nets (SRN's). It is shown that reward-based models could lead to more concise model specifications and solution of a variety of new measures. With respect to the solution of dependability and performability models, three practical issues were identified: largeness, stiffness, and non-exponentiality, and a variety of approaches are discussed to deal with them, including some of the latest research efforts.
Maciejewski, Matthew L.; Liu, Chuan-Fen; Fihn, Stephan D.
2009-01-01
OBJECTIVE—To compare the ability of generic comorbidity and risk adjustment measures, a diabetes-specific measure, and a self-reported functional status measure to explain variation in health care expenditures for individuals with diabetes. RESEARCH DESIGN AND METHODS—This study included a retrospective cohort of 3,092 diabetic veterans participating in a multisite trial. Two comorbidity measures, four risk adjusters, a functional status measure, a diabetes complication count, and baseline expenditures were constructed from administrative and survey data. Outpatient, inpatient, and total expenditure models were estimated using ordinary least squares regression. Adjusted R2 statistics and predictive ratios were compared across measures to assess overall explanatory power and explanatory power of low- and high-cost subgroups. RESULTS—Administrative data–based risk adjusters performed better than the comorbidity, functional status, and diabetes-specific measures in all expenditure models. The diagnostic cost groups (DCGs) measure had the greatest predictive power overall and for the low- and high-cost subgroups, while the diabetes-specific measure had the lowest predictive power. A model with DCGs and the diabetes-specific measure modestly improved predictive power. CONCLUSIONS—Existing generic measures can be useful for diabetes-specific research and policy applications, but more predictive diabetes-specific measures are needed. PMID:18945927
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners which recommend the number and types of test samples required to yield a statistically significant result.
Lu, Tao
2017-01-01
The joint modeling of mean and variance for longitudinal data is an active research area. This type of model has the advantage of accounting for heteroscedasticity commonly observed in between and within subject variations. Most of researches focus on improving the estimating efficiency but ignore many data features frequently encountered in practice. In this article, we develop a mixed-effects location scale joint model that concurrently accounts for longitudinal data with multiple features. Specifically, our joint model handles heterogeneity, skewness, limit of detection, measurement errors in covariates which are typically observed in the collection of longitudinal data from many studies. We employ a Bayesian approach for making inference on the joint model. The proposed model and method are applied to an AIDS study. Simulation studies are performed to assess the performance of the proposed method. Alternative models under different conditions are compared.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jahandideh, Sepideh; Jahandideh, Samad; Asadabadi, Ebrahim Barzegari
2009-11-15
Prediction of the amount of hospital waste production will be helpful in the storage, transportation and disposal of hospital waste management. Based on this fact, two predictor models including artificial neural networks (ANNs) and multiple linear regression (MLR) were applied to predict the rate of medical waste generation totally and in different types of sharp, infectious and general. In this study, a 5-fold cross-validation procedure on a database containing total of 50 hospitals of Fars province (Iran) were used to verify the performance of the models. Three performance measures including MAR, RMSE and R{sup 2} were used to evaluate performancemore » of models. The MLR as a conventional model obtained poor prediction performance measure values. However, MLR distinguished hospital capacity and bed occupancy as more significant parameters. On the other hand, ANNs as a more powerful model, which has not been introduced in predicting rate of medical waste generation, showed high performance measure values, especially 0.99 value of R{sup 2} confirming the good fit of the data. Such satisfactory results could be attributed to the non-linear nature of ANNs in problem solving which provides the opportunity for relating independent variables to dependent ones non-linearly. In conclusion, the obtained results showed that our ANN-based model approach is very promising and may play a useful role in developing a better cost-effective strategy for waste management in future.« less
Biases and power for groups comparison on subjective health measurements.
Hamel, Jean-François; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Roquelaure, Yves; Sébille, Véronique
2012-01-01
Subjective health measurements are increasingly used in clinical research, particularly for patient groups comparisons. Two main types of analytical strategies can be used for such data: so-called classical test theory (CTT), relying on observed scores and models coming from Item Response Theory (IRT) relying on a response model relating the items responses to a latent parameter, often called latent trait. Whether IRT or CTT would be the most appropriate method to compare two independent groups of patients on a patient reported outcomes measurement remains unknown and was investigated using simulations. For CTT-based analyses, groups comparison was performed using t-test on the scores. For IRT-based analyses, several methods were compared, according to whether the Rasch model was considered with random effects or with fixed effects, and the group effect was included as a covariate or not. Individual latent traits values were estimated using either a deterministic method or by stochastic approaches. Latent traits were then compared with a t-test. Finally, a two-steps method was performed to compare the latent trait distributions, and a Wald test was performed to test the group effect in the Rasch model including group covariates. The only unbiased IRT-based method was the group covariate Wald's test, performed on the random effects Rasch model. This model displayed the highest observed power, which was similar to the power using the score t-test. These results need to be extended to the case frequently encountered in practice where data are missing and possibly informative.
Inferring brain-computational mechanisms with models of activity measurements
Diedrichsen, Jörn
2016-01-01
High-resolution functional imaging is providing increasingly rich measurements of brain activity in animals and humans. A major challenge is to leverage such data to gain insight into the brain's computational mechanisms. The first step is to define candidate brain-computational models (BCMs) that can perform the behavioural task in question. We would then like to infer which of the candidate BCMs best accounts for measured brain-activity data. Here we describe a method that complements each BCM by a measurement model (MM), which simulates the way the brain-activity measurements reflect neuronal activity (e.g. local averaging in functional magnetic resonance imaging (fMRI) voxels or sparse sampling in array recordings). The resulting generative model (BCM-MM) produces simulated measurements. To avoid having to fit the MM to predict each individual measurement channel of the brain-activity data, we compare the measured and predicted data at the level of summary statistics. We describe a novel particular implementation of this approach, called probabilistic representational similarity analysis (pRSA) with MMs, which uses representational dissimilarity matrices (RDMs) as the summary statistics. We validate this method by simulations of fMRI measurements (locally averaging voxels) based on a deep convolutional neural network for visual object recognition. Results indicate that the way the measurements sample the activity patterns strongly affects the apparent representational dissimilarities. However, modelling of the measurement process can account for these effects, and different BCMs remain distinguishable even under substantial noise. The pRSA method enables us to perform Bayesian inference on the set of BCMs and to recognize the data-generating model in each case. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’. PMID:27574316
Comparison of COSMIC measurements with the IRI-2007 model over the eastern Mediterranean region.
Vryonides, P; Haralambous, H
2013-05-01
This paper presents a comparison of the International Reference Ionosphere (IRI-2007) model over the eastern Mediterranean region with peak ionospheric characteristics (foF2-hmF2) and electron density profiles measured by FORMOSAT-3/COSMIC satellites in terms of GPS radio occultation technique and the Cyprus digisonde. In the absence of systematic ionosonde measurements over this area, COSMIC measurements provide an opportunity to perform such a study by considering observations for year 2010 to investigate the behaviour of the IRI-2007 model over the eastern Mediterranean area.
NASA Astrophysics Data System (ADS)
Franck, Charmaine C.; Lee, Dave; Espinola, Richard L.; Murrill, Steven R.; Jacobs, Eddie L.; Griffin, Steve T.; Petkie, Douglas T.; Reynolds, Joe
2007-04-01
This paper describes the design and performance of the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate's (NVESD), active 0.640-THz imaging testbed, developed in support of the Defense Advanced Research Project Agency's (DARPA) Terahertz Imaging Focal-Plane Technology (TIFT) program. The laboratory measurements and standoff images were acquired during the development of a NVESD and Army Research Laboratory terahertz imaging performance model. The imaging testbed is based on a 12-inch-diameter Off-Axis Elliptical (OAE) mirror designed with one focal length at 1 m and the other at 10 m. This paper will describe the design considerations of the OAE-mirror, dual-capability, active imaging testbed, as well as measurement/imaging results used to further develop the model.
ERIC Educational Resources Information Center
Guarino, Cassandra M.
2013-01-01
The push for accountability in public schooling has extended to the measurement of teacher performance, accelerated by federal efforts through Race to the Top. Currently, a large number of states and districts across the country are computing measures of teacher performance based on the standardized test scores of their students and using them in…
Parallel performance of TORT on the CRAY J90: Model and measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, A.; Azmy, Y.Y.
1997-10-01
A limitation on the parallel performance of TORT on the CRAY J90 is the amount of extra work introduced by the multitasking algorithm itself. The extra work beyond that of the serial version of the code, called overhead, arises from the synchronization of the parallel tasks and the accumulation of results by the master task. The goal of recent updates to TORT was to reduce the time consumed by these activities. To help understand which components of the multitasking algorithm contribute significantly to the overhead, a parallel performance model was constructed and compared to measurements of actual timings of themore » code.« less
International Space Station Future Correlation Analysis Improvements
NASA Technical Reports Server (NTRS)
Laible, Michael R.; Pinnamaneni, Murthy; Sugavanam, Sujatha; Grygier, Michael
2018-01-01
Ongoing modal analyses and model correlation are performed on different configurations of the International Space Station (ISS). These analyses utilize on-orbit dynamic measurements collected using four main ISS instrumentation systems: External Wireless Instrumentation System (EWIS), Internal Wireless Instrumentation System (IWIS), Space Acceleration Measurement System (SAMS), and Structural Dynamic Measurement System (SDMS). Remote Sensor Units (RSUs) are network relay stations that acquire flight data from sensors. Measured data is stored in the Remote Sensor Unit (RSU) until it receives a command to download data via RF to the Network Control Unit (NCU). Since each RSU has its own clock, it is necessary to synchronize measurements before analysis. Imprecise synchronization impacts analysis results. A study was performed to evaluate three different synchronization techniques: (i) measurements visually aligned to analytical time-response data using model comparison, (ii) Frequency Domain Decomposition (FDD), and (iii) lag from cross-correlation to align measurements. This paper presents the results of this study.
Performance analysis of Supply Chain Management with Supply Chain Operation reference model
NASA Astrophysics Data System (ADS)
Hasibuan, Abdurrozzaq; Arfah, Mahrani; Parinduri, Luthfi; Hernawati, Tri; Suliawati; Harahap, Bonar; Rahmah Sibuea, Siti; Krianto Sulaiman, Oris; purwadi, Adi
2018-04-01
This research was conducted at PT. Shamrock Manufacturing Corpora, the company is required to think creatively to implement competition strategy by producing goods/services that are more qualified, cheaper. Therefore, it is necessary to measure the performance of Supply Chain Management in order to improve the competitiveness. Therefore, the company is required to optimize its production output to meet the export quality standard. This research begins with the creation of initial dimensions based on Supply Chain Management process, ie Plan, Source, Make, Delivery, and Return with hierarchy based on Supply Chain Reference Operation that is Reliability, Responsiveness, Agility, Cost, and Asset. Key Performance Indicator identification becomes a benchmark in performance measurement whereas Snorm De Boer normalization serves to equalize Key Performance Indicator value. Analiytical Hierarchy Process is done to assist in determining priority criteria. Measurement of Supply Chain Management performance at PT. Shamrock Manufacturing Corpora produces SC. Responsiveness (0.649) has higher weight (priority) than other alternatives. The result of performance analysis using Supply Chain Reference Operation model of Supply Chain Management performance at PT. Shamrock Manufacturing Corpora looks good because its monitoring system between 50-100 is good.
NASA Astrophysics Data System (ADS)
Mölg, Thomas; Cullen, Nicolas J.; Kaser, Georg
Broadband radiation schemes (parameterizations) are commonly used tools in glacier mass-balance modelling, but their performance at high altitude in the tropics has not been evaluated in detail. Here we take advantage of a high-quality 2 year record of global radiation (G) and incoming longwave radiation (L↓) measured on Kersten Glacier, Kilimanjaro, East Africa, at 5873 m a.s.l., to optimize parameterizations of G and L↓. We show that the two radiation terms can be related by an effective cloud-cover fraction neff, so G or L↓ can be modelled based on neff derived from measured L↓ or G, respectively. At neff = 1, G is reduced to 35% of clear-sky G, and L↓ increases by 45-65% (depending on altitude) relative to clear-sky L↓. Validation for a 1 year dataset of G and L↓ obtained at 4850 m on Glaciar Artesonraju, Peruvian Andes, yields a satisfactory performance of the radiation scheme. Whether this performance is acceptable for mass-balance studies of tropical glaciers is explored by applying the data from Glaciar Artesonraju to a physically based mass-balance model, which requires, among others, G and L↓ as forcing variables. Uncertainties in modelled mass balance introduced by the radiation parameterizations do not exceed those that can be caused by errors in the radiation measurements. Hence, this paper provides a tool for inclusion in spatially distributed mass-balance modelling of tropical glaciers and/or extension of radiation data when only G or L↓ is measured.
NASA Astrophysics Data System (ADS)
Gao, X.; Li, T.; Zhang, X.; Geng, X.
2018-04-01
In this paper, we proposed the stochastic model of InSAR height measurement by considering the interferometric geometry of InSAR height measurement. The model directly described the relationship between baseline error and height measurement error. Then the simulation analysis in combination with TanDEM-X parameters was implemented to quantitatively evaluate the influence of baseline error to height measurement. Furthermore, the whole emulation validation of InSAR stochastic model was performed on the basis of SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement were fully evaluated.
ERIC Educational Resources Information Center
Aristovnik, Aleksander
2012-01-01
The purpose of the paper is to review some previous researches examining ICT efficiency and the impact of ICT on educational output/outcome as well as different conceptual and methodological issues related to performance measurement. Moreover, a definition, measurements and the empirical application of a model measuring the efficiency of ICT use…
Comparisons of Aquarius Measurements over Oceans with Radiative Transfer Models at L-Band
NASA Technical Reports Server (NTRS)
Dinnat, E.; LeVine, D.; Abraham, S.; DeMattheis, P.; Utku, C.
2012-01-01
The Aquarius/SAC-D spacecraft includes three L-band (1.4 GHz) radiometers dedicated to measuring sea surface salinity. It was launched in June 2011 by NASA and CONAE (Argentine space agency). We report detailed comparisons of Aquarius measurements with radiative transfer model predictions. These comparisons are used as part of the initial assessment of Aquarius data and to estimate the radiometer calibration bias and stability. Comparisons are also being performed to assess the performance of models used in the retrieval algorithm for correcting the effect of various sources of geophysical "noise" (e.g. Faraday rotation, surface roughness). Such corrections are critical in bringing the error in retrieved salinity down to the required 0.2 practical salinity unit on monthly global maps at 150 km by 150 km resolution.
Galmarini, Stefano; Koffi, Brigitte; Solazzo, Efisio; Keating, Terry; Hogrefe, Christian; Schulz, Michael; Benedictow, Anna; Griesfeller, Jan Jurgen; Janssens-Maenhout, Greet; Carmichael, Greg; Fu, Joshua; Dentener, Frank
2017-01-31
We present an overview of the coordinated global numerical modelling experiments performed during 2012-2016 by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP), the regional experiments by the Air Quality Model Evaluation International Initiative (AQMEII) over Europe and North America, and the Model Intercomparison Study for Asia (MICS-Asia). To improve model estimates of the impacts of intercontinental transport of air pollution on climate, ecosystems, and human health and to answer a set of policy-relevant questions, these three initiatives performed emission perturbation modelling experiments consistent across the global, hemispheric, and continental/regional scales. In all three initiatives, model results are extensively compared against monitoring data for a range of variables (meteorological, trace gas concentrations, and aerosol mass and composition) from different measurement platforms (ground measurements, vertical profiles, airborne measurements) collected from a number of sources. Approximately 10 to 25 modelling groups have contributed to each initiative, and model results have been managed centrally through three data hubs maintained by each initiative. Given the organizational complexity of bringing together these three initiatives to address a common set of policy-relevant questions, this publication provides the motivation for the modelling activity, the rationale for specific choices made in the model experiments, and an overview of the organizational structures for both the modelling and the measurements used and analysed in a number of modelling studies in this special issue.
NASA Astrophysics Data System (ADS)
Galmarini, Stefano; Koffi, Brigitte; Solazzo, Efisio; Keating, Terry; Hogrefe, Christian; Schulz, Michael; Benedictow, Anna; Griesfeller, Jan Jurgen; Janssens-Maenhout, Greet; Carmichael, Greg; Fu, Joshua; Dentener, Frank
2017-01-01
We present an overview of the coordinated global numerical modelling experiments performed during 2012-2016 by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP), the regional experiments by the Air Quality Model Evaluation International Initiative (AQMEII) over Europe and North America, and the Model Intercomparison Study for Asia (MICS-Asia). To improve model estimates of the impacts of intercontinental transport of air pollution on climate, ecosystems, and human health and to answer a set of policy-relevant questions, these three initiatives performed emission perturbation modelling experiments consistent across the global, hemispheric, and continental/regional scales. In all three initiatives, model results are extensively compared against monitoring data for a range of variables (meteorological, trace gas concentrations, and aerosol mass and composition) from different measurement platforms (ground measurements, vertical profiles, airborne measurements) collected from a number of sources. Approximately 10 to 25 modelling groups have contributed to each initiative, and model results have been managed centrally through three data hubs maintained by each initiative. Given the organizational complexity of bringing together these three initiatives to address a common set of policy-relevant questions, this publication provides the motivation for the modelling activity, the rationale for specific choices made in the model experiments, and an overview of the organizational structures for both the modelling and the measurements used and analysed in a number of modelling studies in this special issue.
Galmarini, Stefano; Koffi, Brigitte; Solazzo, Efisio; Keating, Terry; Hogrefe, Christian; Schulz, Michael; Benedictow, Anna; Griesfeller, Jan Jurgen; Janssens-Maenhout, Greet; Carmichael, Greg; Fu, Joshua; Dentener, Frank
2018-01-01
We present an overview of the coordinated global numerical modelling experiments performed during 2012–2016 by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP), the regional experiments by the Air Quality Model Evaluation International Initiative (AQMEII) over Europe and North America, and the Model Intercomparison Study for Asia (MICS-Asia). To improve model estimates of the impacts of intercontinental transport of air pollution on climate, ecosystems, and human health and to answer a set of policy-relevant questions, these three initiatives performed emission perturbation modelling experiments consistent across the global, hemispheric, and continental/regional scales. In all three initiatives, model results are extensively compared against monitoring data for a range of variables (meteorological, trace gas concentrations, and aerosol mass and composition) from different measurement platforms (ground measurements, vertical profiles, airborne measurements) collected from a number of sources. Approximately 10 to 25 modelling groups have contributed to each initiative, and model results have been managed centrally through three data hubs maintained by each initiative. Given the organizational complexity of bringing together these three initiatives to address a common set of policy-relevant questions, this publication provides the motivation for the modelling activity, the rationale for specific choices made in the model experiments, and an overview of the organizational structures for both the modelling and the measurements used and analysed in a number of modelling studies in this special issue. PMID:29541091
National Centers for Environmental Prediction
Modeling Center continuously monitors its NWP model performance against different performance measures, and AIRCFT GFS SSI and forecast fits to RAOBS for last 7 days spatial bias maps for different regions different regions GFS SSI and forecast fits to RAOBS for calendar months (time series, spatial and vertical
Modeling Age-Related Differences in Immediate Memory Using SIMPLE
ERIC Educational Resources Information Center
Surprenant, Aimee M.; Neath, Ian; Brown, Gordon D. A.
2006-01-01
In the SIMPLE model (Scale Invariant Memory and Perceptual Learning), performance on memory tasks is determined by the locations of items in multidimensional space, and better performance is associated with having fewer close neighbors. Unlike most previous simulations with SIMPLE, the ones reported here used measured, rather than assumed,…
USDA-ARS?s Scientific Manuscript database
Watershed models typically are evaluated solely through comparison of in-stream water and nutrient fluxes with measured data using established performance criteria, whereas processes and responses within the interior of the watershed that govern these global fluxes often are neglected. Due to the l...
NASA Astrophysics Data System (ADS)
Echevin, V.; Levy, M.; Memery, L.
The assimilation of two dimensional sea color data fields into a 3 dimensional coupled dynamical-biogeochemical model is performed using a 4DVAR algorithm. The biogeochemical model includes description of nitrates, ammonium, phytoplancton, zooplancton, detritus and dissolved organic matter. A subset of the biogeochemical model poorly known parameters (for example,phytoplancton growth, mortality,grazing) are optimized by minimizing a cost function measuring misfit between the observations and the model trajectory. Twin experiments are performed with an eddy resolving model of 5 km resolution in an academic configuration. Starting from oligotrophic conditions, an initially unstable baroclinic anticyclone splits into several eddies. Strong vertical velocities advect nitrates into the euphotic zone and generate a phytoplancton bloom. Biogeochemical parameters are perturbed to generate surface pseudo-observations of chlorophyll,which are assimilated in the model in order to retrieve the correct parameter perturbations. The impact of the type of measurement (quasi-instantaneous, daily mean, weekly mean) onto the retrieved set of parameters is analysed. Impacts of additional subsurface measurements and of errors in the circulation are also presented.
Continuous performance measurement in flight systems. [sequential control model
NASA Technical Reports Server (NTRS)
Connelly, E. M.; Sloan, N. A.; Zeskind, R. M.
1975-01-01
The desired response of many man machine control systems can be formulated as a solution to an optimal control synthesis problem where the cost index is given and the resulting optimal trajectories correspond to the desired trajectories of the man machine system. Optimal control synthesis provides the reference criteria and the significance of error information required for performance measurement. The synthesis procedure described provides a continuous performance measure (CPM) which is independent of the mechanism generating the control action. Therefore, the technique provides a meaningful method for online evaluation of man's control capability in terms of total man machine performance.
Klepper, C C; Isler, R C; Hillairet, J; Martin, E H; Colas, L; Ekedahl, A; Goniche, M; Harris, J H; Hillis, D L; Panayotis, S; Pegourié, B; Lotte, Ph; Colledani, G; Martin, V
2013-05-24
Fully dynamic Stark effect visible spectroscopy was used for the first time to directly measure the local rf electric field in the boundary plasma near a high-power antenna in high-performance, magnetically confined, fusion energy experiment. The measurement was performed in the superconducting tokamak Tore Supra, in the near field of a 1–3 MW, lower-hybrid, 3.7 GHz wave-launch antenna, and combined with modeling of neutral atom transport to estimate the local rf electric field amplitude (as low as 1–2 kV/cm) and direction in this region. The measurement was then shown to be consistent with the predicted values from a 2D full-wave propagation model. Notably the measurement confirmed that the electric field direction deviates substantially from the direction in which it is launched by the waveguides as it penetrates only a few cm radially inward into the plasma from the waveguides, consistent with the model.
Assessing Continuous Operator Workload With a Hybrid Scaffolded Neuroergonomic Modeling Approach.
Borghetti, Brett J; Giametta, Joseph J; Rusnock, Christina F
2017-02-01
We aimed to predict operator workload from neurological data using statistical learning methods to fit neurological-to-state-assessment models. Adaptive systems require real-time mental workload assessment to perform dynamic task allocations or operator augmentation as workload issues arise. Neuroergonomic measures have great potential for informing adaptive systems, and we combine these measures with models of task demand as well as information about critical events and performance to clarify the inherent ambiguity of interpretation. We use machine learning algorithms on electroencephalogram (EEG) input to infer operator workload based upon Improved Performance Research Integration Tool workload model estimates. Cross-participant models predict workload of other participants, statistically distinguishing between 62% of the workload changes. Machine learning models trained from Monte Carlo resampled workload profiles can be used in place of deterministic workload profiles for cross-participant modeling without incurring a significant decrease in machine learning model performance, suggesting that stochastic models can be used when limited training data are available. We employed a novel temporary scaffold of simulation-generated workload profile truth data during the model-fitting process. A continuous workload profile serves as the target to train our statistical machine learning models. Once trained, the workload profile scaffolding is removed and the trained model is used directly on neurophysiological data in future operator state assessments. These modeling techniques demonstrate how to use neuroergonomic methods to develop operator state assessments, which can be employed in adaptive systems.
Use of model calibration to achieve high accuracy in analysis of computer networks
Frogner, Bjorn; Guarro, Sergio; Scharf, Guy
2004-05-11
A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.
A Bayesian hierarchical diffusion model decomposition of performance in Approach–Avoidance Tasks
Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan
2015-01-01
Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach–Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data-sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight in latent psychological processes of interest. PMID:25491372
Katoh, Masakazu; Hamajima, Fumiyasu; Ogasawara, Takahiro; Hata, Ken-Ichiro
2009-06-01
A validation study of an in vitro skin irritation testing method using a reconstructed human skin model has been conducted by the European Centre for the Validation of Alternative Methods (ECVAM), and a protocol using EpiSkin (SkinEthic, France) has been approved. The structural and performance criteria of skin models for testing are defined in the ECVAM Performance Standards announced along with the approval. We have performed several evaluations of the new reconstructed human epidermal model LabCyte EPI-MODEL, and confirmed that it is applicable to skin irritation testing as defined in the ECVAM Performance Standards. We selected 19 materials (nine irritants and ten non-irritants) available in Japan as test chemicals among the 20 reference chemicals described in the ECVAM Performance Standard. A test chemical was applied to the surface of the LabCyte EPI-MODEL for 15 min, after which it was completely removed and the model then post-incubated for 42 hr. Cell v iability was measured by MTT assay and skin irritancy of the test chemical evaluated. In addition, interleukin-1 alpha (IL-1alpha) concentration in the culture supernatant after post-incubation was measured to provide a complementary evaluation of skin irritation. Evaluation of the 19 test chemicals resulted in 79% accuracy, 78% sensitivity and 80% specificity, confirming that the in vitro skin irritancy of the LabCyte EPI-MODEL correlates highly with in vivo skin irritation. These results suggest that LabCyte EPI-MODEL is applicable to the skin irritation testing protocol set out in the ECVAM Performance Standards.
Albright, Benjamin B.; Lewis, Valerie A.; Ross, Joseph S.; Colla, Carrie H.
2015-01-01
Background Accountable Care Organizations (ACOs) are a delivery and payment model aiming to coordinate care, control costs, and improve quality. Medicare ACOs are responsible for eight measures of preventive care quality. Objectives To create composite measures of preventive care quality and examine associations of ACO characteristics with performance. Design Cross-sectional study of Medicare Shared Savings Program and Pioneer participants. We linked quality performance to descriptive data from the National Survey of ACOs. We created composite measures using exploratory factor analysis, and used regression to assess associations with organizational characteristics. Results Of 252 eligible ACOs, 246 reported on preventive care quality, 177 of which completed the survey (response rate=72%). In their first year, ACOs lagged behind PPO performance on the majority of comparable measures. We identified two underlying factors among eight measures and created composites for each: disease prevention, driven by vaccines and cancer screenings, and wellness screening, driven by annual health screenings. Participation in the Advanced Payment Model, having fewer specialists, and having more Medicare ACO beneficiaries per primary care provider were associated with significantly better performance on both composites. Better performance on disease prevention was also associated with inclusion of a hospital, greater electronic health record capabilities, a larger primary care workforce, and fewer minority beneficiaries. Conclusions ACO preventive care quality performance is related to provider composition and benefitted by upfront investment. Vaccine and cancer screening quality performance is more dependent on organizational structure and characteristics than performance on annual wellness screenings, likely due to greater complexity in eligibility determination and service administration. PMID:26759974
Albright, Benjamin B; Lewis, Valerie A; Ross, Joseph S; Colla, Carrie H
2016-03-01
Accountable Care Organizations (ACOs) are a delivery and payment model aiming to coordinate care, control costs, and improve quality. Medicare ACOs are responsible for 8 measures of preventive care quality. To create composite measures of preventive care quality and examine associations of ACO characteristics with performance. This is a cross-sectional study of Medicare Shared Savings Program and Pioneer participants. We linked quality performance to descriptive data from the National Survey of ACOs. We created composite measures using exploratory factor analysis, and used regression to assess associations with organizational characteristics. Of 252 eligible ACOs, 246 reported on preventive care quality, 177 of which completed the survey (response rate=72%). In their first year, ACOs lagged behind PPO performance on the majority of comparable measures. We identified 2 underlying factors among 8 measures and created composites for each: disease prevention, driven by vaccines and cancer screenings, and wellness screening, driven by annual health screenings. Participation in the Advanced Payment Model, having fewer specialists, and having more Medicare ACO beneficiaries per primary care provider were associated with significantly better performance on both composites. Better performance on disease prevention was also associated with inclusion of a hospital, greater electronic health record capabilities, a larger primary care workforce, and fewer minority beneficiaries. ACO preventive care quality performance is related to provider composition and benefitted by upfront investment. Vaccine and cancer screening quality performance is more dependent on organizational structure and characteristics than performance on annual wellness screenings, likely due to greater complexity in eligibility determination and service administration.
A new empirical model to estimate hourly diffuse photosynthetic photon flux density
NASA Astrophysics Data System (ADS)
Foyo-Moreno, I.; Alados, I.; Alados-Arboledas, L.
2018-05-01
Knowledge of the photosynthetic photon flux density (Qp) is critical in different applications dealing with climate change, plant physiology, biomass production, and natural illumination in greenhouses. This is particularly true regarding its diffuse component (Qpd), which can enhance canopy light-use efficiency and thereby boost carbon uptake. Therefore, diffuse photosynthetic photon flux density is a key driving factor of ecosystem-productivity models. In this work, we propose a model to estimate this component, using a previous model to calculate Qp and furthermore divide it into its components. We have used measurements in urban Granada (southern Spain), of global solar radiation (Rs) to study relationships between the ratio Qpd/Rs with different parameters accounting for solar position, water-vapour absorption and sky conditions. The model performance has been validated with experimental measurements from sites having varied climatic conditions. The model provides acceptable results, with the mean bias error and root mean square error varying between - 0.3 and - 8.8% and between 9.6 and 20.4%, respectively. Direct measurements of this flux are very scarce so that modelling simulations are needed, this is particularly true regarding its diffuse component. We propose a new parameterization to estimate this component using only measured data of solar global irradiance, which facilitates its use for the construction of long-term data series of PAR in regions where continuous measurements of PAR are not yet performed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandt, Riley E.; Mangan, Niall M.; Li, Jian V.
2016-11-21
In novel photovoltaic absorbers, it is often difficult to assess the root causes of low open-circuit voltages, which may be due to bulk recombination or sub-optimal contacts. In the present work, we discuss the role of temperature- and illumination-dependent device electrical measurements in quantifying and distinguishing these performance losses - in particular, for determining bounds on interface recombination velocities, band alignment, and minority carrier lifetime. We assess the accuracy of this approach by direct comparison to photoelectron spectroscopy. Then, we demonstrate how more computationally intensive model parameter fitting approaches can draw more insights from this broad measurement space. We applymore » this measurement and modeling approach to high-performance III-V and thin-film chalcogenide devices.« less
Kendall, G M; Wakeford, R; Athanson, M; Vincent, T J; Carter, E J; McColl, N P; Little, M P
2016-03-01
Gamma radiation from natural sources (including directly ionising cosmic rays) is an important component of background radiation. In the present paper, indoor measurements of naturally occurring gamma rays that were undertaken as part of the UK Childhood Cancer Study are summarised, and it is shown that these are broadly compatible with an earlier UK National Survey. The distribution of indoor gamma-ray dose rates in Great Britain is approximately normal with mean 96 nGy/h and standard deviation 23 nGy/h. Directly ionising cosmic rays contribute about one-third of the total. The expanded dataset allows a more detailed description than previously of indoor gamma-ray exposures and in particular their geographical variation. Various strategies for predicting indoor natural background gamma-ray dose rates were explored. In the first of these, a geostatistical model was fitted, which assumes an underlying geologically determined spatial variation, superimposed on which is a Gaussian stochastic process with Matérn correlation structure that models the observed tendency of dose rates in neighbouring houses to correlate. In the second approach, a number of dose-rate interpolation measures were first derived, based on averages over geologically or administratively defined areas or using distance-weighted averages of measurements at nearest-neighbour points. Linear regression was then used to derive an optimal linear combination of these interpolation measures. The predictive performances of the two models were compared via cross-validation, using a randomly selected 70 % of the data to fit the models and the remaining 30 % to test them. The mean square error (MSE) of the linear-regression model was lower than that of the Gaussian-Matérn model (MSE 378 and 411, respectively). The predictive performance of the two candidate models was also evaluated via simulation; the OLS model performs significantly better than the Gaussian-Matérn model.
Conceptualizing and communicating ecological river restoration: Chapter 2
Jacobson, Robert B.; Berkley, Jim
2011-01-01
We present a general conceptual model for communicating aspects of river restoration and management. The model is generic and adaptable to most riverine settings, independent of size. The model has separate categories of natural and social-economic drivers, and management actions are envisioned as modifiers of naturally dynamic systems. The model includes a decision-making structure in which managers, stakeholders, and scientists interact to define management objectives and performance evaluation. The model depicts a stress to the riverine ecosystem as either (1) deviation in the regimes (flow, sediment, temperature, light, biogeochemical, and genetic) by altering the frequency, magnitude, duration, timing, or rate of change of the fluxes or (2) imposition of a hard structural constraint on channel form. Restoration is depicted as naturalization of those regimes or removal of the constraint. The model recognizes the importance of river history in conditioning future responses. Three hierarchical tiers of essential ecosystem characteristics (EECs) illustrate how management actions typically propagate through physical/chemical processes to habitat to biotic responses. Uncertainty and expense in modeling or measuring responses increase in moving from tiers 1 to 3. Social-economic characteristics are shown in a parallel structure that emphasizes the need to quantify trade-offs between ecological and social-economic systems. Performance measures for EECs are also hierarchical, showing that selection of measures depend on participants’ willingness to accept uncertainty. The general form is of an adaptive management loop in which the performance measures are compared to reference conditions or success criteria and the information is fed back into the decision-making process.
NASA Astrophysics Data System (ADS)
Gallagher, C. B.; Ferraro, A.
2018-05-01
A possible alternative to the standard model of measurement-based quantum computation (MBQC) is offered by the sequential model of MBQC—a particular class of quantum computation via ancillae. Although these two models are equivalent under ideal conditions, their relative resilience to noise in practical conditions is not yet known. We analyze this relationship for various noise models in the ancilla preparation and in the entangling-gate implementation. The comparison of the two models is performed utilizing both the gate infidelity and the diamond distance as figures of merit. Our results show that in the majority of instances the sequential model outperforms the standard one in regard to a universal set of operations for quantum computation. Further investigation is made into the performance of sequential MBQC in experimental scenarios, thus setting benchmarks for possible cavity-QED implementations.
Multi-linear model set design based on the nonlinearity measure and H-gap metric.
Shaghaghi, Davood; Fatehi, Alireza; Khaki-Sedigh, Ali
2017-05-01
This paper proposes a model bank selection method for a large class of nonlinear systems with wide operating ranges. In particular, nonlinearity measure and H-gap metric are used to provide an effective algorithm to design a model bank for the system. Then, the proposed model bank is accompanied with model predictive controllers to design a high performance advanced process controller. The advantage of this method is the reduction of excessive switch between models and also decrement of the computational complexity in the controller bank that can lead to performance improvement of the control system. The effectiveness of the method is verified by simulations as well as experimental studies on a pH neutralization laboratory apparatus which confirms the efficiency of the proposed algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Organizational and Market Influences on Physician Performance on Patient Experience Measures
Rodriguez, Hector P; von Glahn, Ted; Rogers, William H; Safran, Dana Gelb
2009-01-01
Objective To examine the extent to which medical group and market factors are related to individual primary care physician (PCP) performance on patient experience measures. Data Sources This study employs Clinician and Group CAHPS survey data (n=105,663) from 2,099 adult PCPs belonging to 34 diverse medical groups across California. Medical group directors were interviewed to assess the magnitude and nature of financial incentives directed at individual physicians and the adoption of patient experience improvement strategies. Primary care services area (PCSA) data were used to characterize the market environment of physician practices. Study Design We used multilevel models to estimate the relationship between medical group and market factors and physician performance on each Clinician and Group CAHPS measure. Models statistically controlled for respondent characteristics and accounted for the clustering of respondents within physicians, physicians within medical groups, and medical groups within PCSAs using random effects. Principal Findings Compared with physicians belonging to independent practice associations, physicians belonging to integrated medical groups had better performance on the communication (p=.007) and care coordination (p=.03) measures. Physicians belonging to medical groups with greater numbers of PCPs had better performance on all measures. The use of patient experience improvement strategies was not associated with performance. Greater emphasis on productivity and efficiency criteria in individual physician financial incentive formulae was associated with worse access to care (p=.04). Physicians located in PCSAs with higher area-level deprivation had worse performance on the access to care (p=.04) and care coordination (p<.001) measures. Conclusions Physicians from integrated medical groups and groups with greater numbers of PCPs performed better on several patient experience measures, suggesting that organized care processes adopted by these groups may enhance patients' experiences. Physicians practicing in markets with high concentrations of vulnerable populations may be disadvantaged by constraints that affect performance. Future studies should clarify the extent to which performance deficits associated with area-level deprivation are modifiable. PMID:19674429
Organizational and market influences on physician performance on patient experience measures.
Rodriguez, Hector P; von Glahn, Ted; Rogers, William H; Safran, Dana Gelb
2009-06-01
To examine the extent to which medical group and market factors are related to individual primary care physician (PCP) performance on patient experience measures. This study employs Clinician and Group CAHPS survey data (n=105,663) from 2,099 adult PCPs belonging to 34 diverse medical groups across California. Medical group directors were interviewed to assess the magnitude and nature of financial incentives directed at individual physicians and the adoption of patient experience improvement strategies. Primary care services area (PCSA) data were used to characterize the market environment of physician practices. We used multilevel models to estimate the relationship between medical group and market factors and physician performance on each Clinician and Group CAHPS measure. Models statistically controlled for respondent characteristics and accounted for the clustering of respondents within physicians, physicians within medical groups, and medical groups within PCSAs using random effects. Compared with physicians belonging to independent practice associations, physicians belonging to integrated medical groups had better performance on the communication ( p=.007) and care coordination ( p=.03) measures. Physicians belonging to medical groups with greater numbers of PCPs had better performance on all measures. The use of patient experience improvement strategies was not associated with performance. Greater emphasis on productivity and efficiency criteria in individual physician financial incentive formulae was associated with worse access to care ( p=.04). Physicians located in PCSAs with higher area-level deprivation had worse performance on the access to care ( p=.04) and care coordination ( p<.001) measures. Physicians from integrated medical groups and groups with greater numbers of PCPs performed better on several patient experience measures, suggesting that organized care processes adopted by these groups may enhance patients' experiences. Physicians practicing in markets with high concentrations of vulnerable populations may be disadvantaged by constraints that affect performance. Future studies should clarify the extent to which performance deficits associated with area-level deprivation are modifiable.
Numerical prediction of Pelton turbine efficiency
NASA Astrophysics Data System (ADS)
Jošt, D.; Mežnar, P.; Lipej, A.
2010-08-01
This paper presents a numerical analysis of flow in a 2 jet Pelton turbine with horizontal axis. The analysis was done for the model at several operating points in different operating regimes. The results were compared to the results of a test of the model. Analysis was performed using ANSYS CFX-12.1 computer code. A k-ω SST turbulent model was used. Free surface flow was modelled by two-phase homogeneous model. At first, a steady state analysis of flow in the distributor with two injectors was performed for several needle strokes. This provided us with data on flow energy losses in the distributor and the shape and velocity of jets. The second step was an unsteady analysis of the runner with jets. Torque on the shaft was then calculated from pressure distribution data. Averaged torque values are smaller than measured ones. Consequently, calculated turbine efficiency is also smaller than the measured values, the difference is about 4 %. The shape of the efficiency diagram conforms well to the measurements.
Manual control of yaw motion with combined visual and vestibular cues
NASA Technical Reports Server (NTRS)
Zacharias, G. L.; Young, L. R.
1977-01-01
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation was modelled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A correction to the frequency responses is provided by a separate measurement of manual control performance in an analogous visual pursuit nulling task. The resulting dual-input describing function for motion perception dependence on combined cue presentation supports the complementary model, in which vestibular cues dominate sensation at frequencies above 0.05 Hz. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
An Introduction to the Partial Credit Model for Developing Nursing Assessments.
ERIC Educational Resources Information Center
Fox, Christine
1999-01-01
Demonstrates how the partial credit model, a variation of the Rasch Measurement Model, can be used to develop performance-based assessments for nursing education. Applies the model using the Practical Knowledge Inventory for Nurses. (SK)
A comparison of the Injury Severity Score and the Trauma Mortality Prediction Model.
Cook, Alan; Weddle, Jo; Baker, Susan; Hosmer, David; Glance, Laurent; Friedman, Lee; Osler, Turner
2014-01-01
Performance benchmarking requires accurate measurement of injury severity. Despite its shortcomings, the Injury Severity Score (ISS) remains the industry standard 40 years after its creation. A new severity measure, the Trauma Mortality Prediction Model (TMPM), uses either the Abbreviated Injury Scale (AIS) or DRG International Classification of Diseases-9th Rev. (ICD-9) lexicons and may better quantify injury severity compared with ISS. We compared the performance of TMPM with ISS and other measures of injury severity in a single cohort of patients. We included 337,359 patient records with injuries reliably described in both the AIS and the ICD-9 lexicons from the National Trauma Data Bank. Five injury severity measures (ISS, maximum AIS score, New Injury Severity Score [NISS], ICD-9-Based Injury Severity Score [ICISS], TMPM) were computed using either the AIS or ICD-9 codes. These measures were compared for discrimination (area under the receiver operating characteristic curve), an estimate of proximity to a model that perfectly predicts the outcome (Akaike information criterion), and model calibration curves. TMPM demonstrated superior receiver operating characteristic curve, Akaike information criterion, and calibration using either the AIS or ICD-9 lexicons. Calibration plots demonstrate the monotonic characteristics of the TMPM models contrasted by the nonmonotonic features of the other prediction models. Severity measures were more accurate with the AIS lexicon rather than ICD-9. NISS proved superior to ISS in either lexicon. Since NISS is simpler to compute, it should replace ISS when a quick estimate of injury severity is required for AIS-coded injuries. Calibration curves suggest that the nonmonotonic nature of ISS may undermine its performance. TMPM demonstrated superior overall mortality prediction compared with all other models including ISS whether the AIS or ICD-9 lexicons were used. Because TMPM provides an absolute probability of death, it may allow clinicians to communicate more precisely with one another and with patients and families. Disagnostic study, level I; prognostic study, level II.
Mathew, P J; Sailam, S; Sivasailam, R; Thingnum, S K S; Puri, G D
2016-01-01
We compared the performance of a propofol target-controlled infusion (TCI) using Marsh versus PGIMER models in patients undergoing open heart surgery, in terms of measured plasma levels of propofol and objective pharmacodynamic effect. Twenty-three, ASA II/III adult patients aged 18-65 years and scheduled for elective open heart surgery received Marsh or PGIMER (Postgraduate Institute of Medical Education and Research) pharmacokinetic models of TCI for the induction and maintenance of anaesthesia with propofol in a randomized, active-controlled, non-inferiority trial. The plasma levels of propofol were measured at specified time points before, during and after bypass. The performances of both the models were similar, as determined by the error (%) in maintaining the target plasma concentrations: MDPE of -5.0 (-12.0, 5.0) in the PGIMER group vs -6.4 (-7.7 to 0.5) in the Marsh group and MDAPE of 9.1 (5, 15) in the PGIMER group vs 8 (6.7, 10.1) in the Marsh group. These values indicate that both models over-predicted the plasma propofol concentration. The new pharmacokinetic model based on data from Indian patients is comparable in performance to the commercially available Marsh pharmacokinetic model. © The Author(s) 2015.
Suzuki, Shigeru; Machida, Haruhiko; Tanaka, Isao; Ueno, Eiko
2012-11-01
To compare the performance of model-based iterative reconstruction (MBIR) with that of standard filtered back projection (FBP) for measuring vascular wall attenuation. After subjecting 9 vascular models (actual attenuation value of wall, 89 HU) with wall thickness of 0.5, 1.0, or 1.5 mm that we filled with contrast material of 275, 396, or 542 HU to scanning using 64-detector computed tomography (CT), we reconstructed images using MBIR and FBP (Bone, Detail kernels) and measured wall attenuation at the center of the wall for each model. We performed attenuation measurements for each model and additional supportive measurements by a differentiation curve. We analyzed statistics using analyzes of variance with repeated measures. Using the Bone kernel, standard deviation of the measurement exceeded 30 HU in most conditions. In measurements at the wall center, the attenuation values obtained using MBIR were comparable to or significantly closer to the actual wall attenuation than those acquired using Detail kernel. Using differentiation curves, we could measure attenuation for models with walls of 1.0- or 1.5-mm thickness using MBIR but only those of 1.5-mm thickness using Detail kernel. We detected no significant differences among the attenuation values of the vascular walls of either thickness (MBIR, P=0.1606) or among the 3 densities of intravascular contrast material (MBIR, P=0.8185; Detail kernel, P=0.0802). Compared with FBP, MBIR reduces both reconstruction blur and image noise simultaneously, facilitates recognition of vascular wall boundaries, and can improve accuracy in measuring wall attenuation. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
ISS Plasma Interaction: Measurements and Modeling
NASA Technical Reports Server (NTRS)
Barsamian, H.; Mikatarian, R.; Alred, J.; Minow, J.; Koontz, S.
2004-01-01
Ionospheric plasma interaction effects on the International Space Station are discussed in the following paper. The large structure and high voltage arrays of the ISS represent a complex system interacting with LEO plasma. Discharge current measurements made by the Plasma Contactor Units and potential measurements made by the Floating Potential Probe delineate charging and magnetic induction effects on the ISS. Based on theoretical and physical understanding of the interaction phenomena, a model of ISS plasma interaction has been developed. The model includes magnetic induction effects, interaction of the high voltage solar arrays with ionospheric plasma, and accounts for other conductive areas on the ISS. Based on these phenomena, the Plasma Interaction Model has been developed. Limited verification of the model has been performed by comparison of Floating Potential Probe measurement data to simulations. The ISS plasma interaction model will be further tested and verified as measurements from the Floating Potential Measurement Unit become available, and construction of the ISS continues.
Van Driel, Robin; Trask, Catherine; Johnson, Peter W; Callaghan, Jack P; Koehoorn, Mieke; Teschke, Kay
2013-01-01
Measuring trunk posture in the workplace commonly involves subjective observation or self-report methods or the use of costly and time-consuming motion analysis systems (current gold standard). This work compared trunk inclination measurements using a simple data-logging inclinometer with trunk flexion measurements using a motion analysis system, and evaluated adding measures of subject anthropometry to exposure prediction models to improve the agreement between the two methods. Simulated lifting tasks (n=36) were performed by eight participants, and trunk postures were simultaneously measured with each method. There were significant differences between the two methods, with the inclinometer initially explaining 47% of the variance in the motion analysis measurements. However, adding one key anthropometric parameter (lower arm length) to the inclinometer-based trunk flexion prediction model reduced the differences between the two systems and accounted for 79% of the motion analysis method's variance. Although caution must be applied when generalizing lower-arm length as a correction factor, the overall strategy of anthropometric modeling is a novel contribution. In this lifting-based study, by accounting for subject anthropometry, a single, simple data-logging inclinometer shows promise for trunk posture measurement and may have utility in larger-scale field studies where similar types of tasks are performed.
The dissociation of subjective measures of mental workload and performance
NASA Technical Reports Server (NTRS)
Yeh, Y. H.; Wickens, C. D.
1984-01-01
Dissociation between performance and subjective workload measures was investigated in the theoretical framework of the multiple resources model. Subjective measures do not preserve the vector characteristics in the multidimensional space described by the model. A theory of dissociation was proposed to locate the sources that may produce dissociation between the two workload measures. According to the theory, performance is affected by every aspect of processing whereas subjective workload is sensitive to the amount of aggregate resource investment and is dominated by the demands on the perceptual/central resources. The proposed theory was tested in three experiments. Results showed that performance improved but subjective workload was elevated with an increasing amount of resource investment. Furthermore, subjective workload was not as sensitive as was performance to differences in the amount of resource competition between two tasks. The demand on perceptual/central resources was found to be the most salient component of subjective workload. Dissociation occurred when the demand on this component was increased by the number of concurrent tasks or by the number of display elements. However, demands on response resources were weighted in subjective introspection as much as demands on perceptual/central resources. The implications of these results for workload practitioners are described.
Loyola Briceno, Ana Carolina; Kawatu, Jennifer; Saul, Katie; DeAngelis, Katie; Frederiksen, Brittni; Moskosky, Susan B; Gavin, Lorrie
2017-09-01
The objective was to describe a Performance Measure Learning Collaborative (PMLC) designed to help Title X family planning grantees use new clinical performance measures for contraceptive care. Twelve Title X grantee-service site teams participated in an 8-month PMLC from November 2015 to June 2016; baseline was assessed in October 2015. Each team documented their selected best practices and strategies to improve performance, and calculated the contraceptive care performance measures at baseline and for each of the subsequent 8 months. PMLC sites implemented a mix of best practices: (a) ensuring access to a broad range of methods (n=7 sites), (b) supporting women through client-centered counseling and reproductive life planning (n=8 sites), (c) developing systems for same-day provision of all methods (n=10 sites) and (d) utilizing diverse payment options to reduce cost as a barrier (n=4 sites). Ten sites (83%) observed an increase in the clinical performance measures focused on most and moderately effective methods (MME), with a median percent change of 6% for MME (from a median of 73% at baseline to 77% post-PMLC). Evidence suggests that the PMLC model is an approach that can be used to improve the quality of contraceptive care offered to clients in some settings. Further replication of the PMLC among other groups and beyond the Title X network will help strengthen the current model through lessons learned. Using the performance measures in the context of a learning collaborative may be a useful strategy for other programs (e.g., Federally Qualified Health Centers, Medicaid, private health plans) that provide contraceptive care. Expanded use of the measures may help increase access to contraceptive care to achieve national goals for family planning. Published by Elsevier Inc.
MODELING AND PERFORMANCE EVALUATION FOR AVIATION SECURITY CARGO INSPECTION QUEUING SYSTEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgood, Glenn O; Olama, Mohammed M; Rose, Terri A
Beginning in 2010, the U.S. will require that all cargo loaded in passenger aircraft be inspected. This will require more efficient processing of cargo and will have a significant impact on the inspection protocols and business practices of government agencies and the airlines. In this paper, we conduct performance evaluation study for an aviation security cargo inspection queuing system for material flow and accountability. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered such as system capacity, residual capacity, and throughput. These metrics aremore » performance indicators of the system s ability to service current needs and response capacity to additional requests. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures will reduce the overall cost and shipping delays associated with the new inspection requirements.« less
Effects of distributed database modeling on evaluation of transaction rollbacks
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. The effect is studied of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks, in a partitioned distributed database system. Six probabilistic models and expressions are developed for the numbers of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results so obtained are compared to results from simulation. From here, it is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughout is also grossly undermined when such models are employed.
Effects of distributed database modeling on evaluation of transaction rollbacks
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. Here, researchers investigate the effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system. The researchers developed six probabilistic models and expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results obtained are compared to results from simulation. It was concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly undermined when such models are employed.
Gruber-Baldini, Ann L.; Hicks, Gregory; Ostir, Glen; Klinedinst, N. Jennifer; Orwig, Denise; Magaziner, Jay
2015-01-01
Background Measurement of physical function post hip fracture has been conceptualized using multiple different measures. Purpose This study tested a comprehensive measurement model of physical function. Design This was a descriptive secondary data analysis including 168 men and 171 women post hip fracture. Methods Using structural equation modeling, a measurement model of physical function which included grip strength, activities of daily living, instrumental activities of daily living and performance was tested for fit at 2 and 12 months post hip fracture and among male and female participants and validity of the measurement model of physical function was evaluated based on how well the model explained physical activity, exercise and social activities post hip fracture. Findings The measurement model of physical function fit the data. The amount of variance the model or individual factors of the model explained varied depending on the activity. Conclusion Decisions about the ideal way in which to measure physical function should be based on outcomes considered and participant Clinical Implications The measurement model of physical function is a reliable and valid method to comprehensively measure physical function across the hip fracture recovery trajectory. Practical but useful assessment of function should be considered and monitored over the recovery trajectory post hip fracture. PMID:26492866
Operator Performance Measures for Assessing Voice Communication Effectiveness
1989-07-01
performance and work- load assessment techniques have been based.I Broadbent (1958) described a limited capacity filter model of human information...INFORMATION PROCESSING 20 3.1.1. Auditory Attention 20 3.1.2. Auditory Memory 24 3.2. MODELS OF INFORMATION PROCESSING 24 3.2.1. Capacity Theories 25...Learning 0 Attention * Language Specialization • Decision Making• Problem Solving Auditory Information Processing Models of Processing Ooemtor
Development of a Navy Job-Specific Vocational Interest Model
2006-12-01
The role of job satisfaction in absence behavior. Organizational Behavior and Human Performance , 19, 148-161. Jackofsky, E. F., & Peters, L. H. (1983...Guidance Quarterly, (December), 160-165. Spencer, D. G., & Steers, R. M. (1981). Performance as a moderator of the job- satisfaction -turnover relationship...Application of Process Model to Measurement of Career Choice Satisfaction .............. 9 Content Model of Vocational Interests: Constructs and Structures
Installation effects on performance of multiple model V/STOL lift fans
NASA Technical Reports Server (NTRS)
Diedrich, J. H.; Clough, N.; Lieblein, S.
1972-01-01
An experimental program was performed in which the individual performance of multiple VTOL model lift fans was measured. The model tested consisted of three 5.5 in. diameter tip-turbine driven model VTOL lift fans mounted chordwise in a two-dimensional wing to simulate a pod-type array. The performance data provided significant insight into possible thrust variations and losses caused by the presence of cover doors, adjacent fuselage panels, and adjacent fans. The effect of a partial loss of drive air supply (simulated gas generator failure) on fan performance was also investigated. The results of the tests demonstrated that lift fan installation variables and hardware can have a significant effect on the thrust of the individual fans.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Advanced Actuation Systems Development. Volume 2
1989-08-01
and unloaded performance characteristics of a test specimen produced by General Dynamics Corporation as a feasibility model. The actuation system for...changing the camber of the test specimen is unique and was evaluated with a series of input/output measurements. The testing verified the general ...MAWS General ’rest Procedure........................................6 General Performance Measurements .................................... 10 Test
Leadership in a Performative Context: A Framework for Decision-Making
ERIC Educational Resources Information Center
Chitpin, Stephanie; Jones, Ken
2015-01-01
This paper examines a model of decision-making within the context of current and emerging regimes of accountability being proposed and implemented for school systems in a number of jurisdictions. These approaches to accountability typically involve the use of various measurable student learning outcomes as well as other measures of performance to…
Radiative Transport Modelling of Thermal Barrier Coatings
2017-03-24
of being able to extract useful data. To account for this deficiency, the purpose of this project is to improve models for use in OCT measurements ...PROJECT NUMBER 5e. TASK NUMBER 5f. WORK UNIT NUMBER 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) Lumium optical precision measurement solutions...coefficients. Further, the model will need to take into account the effects of interface reflections and a multilayer structure. Such a model is of
SOFIA 2 model telescope wind tunnel test report
NASA Technical Reports Server (NTRS)
Keas, Paul
1995-01-01
This document outlines the tests performed to make aerodynamic force and torque measurements on the SOFIA wind tunnel model telescope. These tests were performed during the SOFIA 2 wind tunnel test in the 14 ft wind tunnel during the months of June through August 1994. The test was designed to measure the dynamic cross elevation moment acting on the SOFIA model telescope due to aerodynamic loading. The measurements were taken with the telescope mounted in an open cavity in the tail section of the SOFIA model 747. The purpose of the test was to obtain an estimate of the full scale aerodynamic disturbance spectrum, by scaling up the wind tunnel results (taking into account differences in sail area, air density, cavity dimension, etc.). An estimate of the full scale cross elevation moment spectrum was needed to help determine the impact this disturbance would have on the telescope positioning system requirements. A model of the telescope structure, made of a light weight composite material, was mounted in the open cavity of the SOFIA wind tunnel model. This model was mounted via a force balance to the cavity bulkhead. Despite efforts to use a 'stiff' balance, and a lightweight model, the balance/telescope system had a very low resonant frequency (37 Hz) compared to the desired measurement bandwidth (1000 Hz). Due to this mechanical resonance of the balance/telescope system, the balance alone could not provide an accurate measure of applied aerodynamic force at the high frequencies desired. A method of measurement was developed that incorporated accelerometers in addition to the balance signal, to calculate the aerodynamic force.
Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A
2018-04-15
For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
Cubature/ Unscented/ Sigma Point Kalman Filtering with Angular Measurement Models
2015-07-06
Cubature/ Unscented/ Sigma Point Kalman Filtering with Angular Measurement Models David Frederic Crouse Naval Research Laboratory 4555 Overlook Ave...measurement and process non- linearities, such as the cubature Kalman filter , can perform ex- tremely poorly in many applications involving angular... Kalman filtering is a realization of the best linear unbiased estimator (BLUE) that evaluates certain integrals for expected values using different forms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duangthongsuk, Weerapun; Fluid Mechanics, Thermal Engineering and Multiphase Flow Research Laboratory; Wongwises, Somchai
2010-07-15
This article reports a comparison of the differences between using measured and computed thermophysical properties to describe the heat transfer performance of TiO{sub 2}-water nanofluids. In this study, TiO{sub 2} nanoparticles with average diameters of 21 nm and a particle volume fraction of 0.2-1 vol.% are used. The thermal conductivity and viscosity of nanofluids were measured by using transient hot-wire apparatus and a Bohlin rotational rheometer, respectively. The well-known correlations for calculating the thermal conductivity and viscosity of nanofluids were used for describing the Nusselt number of nanofluids and compared with the results from the measured data. The results showmore » that use of the models of thermophysical properties for calculating the Nusselt number of nanofluids gave similar results to use of the measured data. Where there is a lack of measured data on thermophysical properties, the most appropriate models for computing the thermal conductivity and viscosity of the nanofluids are the models of Yu and Choi and Wang et al., respectively. (author)« less
Dissociation of performance and subjective measures of workload
NASA Technical Reports Server (NTRS)
Yeh, Yei-Yu; Wickens, Christopher D.
1988-01-01
A theory is presented to identify sources that produce dissociations between performance and subjective measures of workload. The theory states that performance is determined by (1) amount of resources invested, (2) resource efficiency, and (3) degree of competition for common resources in a multidimensional space described in the multiple-resources model. Subjective perception of workload, multidimensional in nature, increases with greater amounts of resource investment and with greater demands on working memory. Performance and subjective workload measures dissociate when greater resources are invested to improve performance of a resource-limited task; when demands on working memory are increased by time-sharing between concurrent tasks or between display elements; and when performance is sensitive to resource competition and subjective measures are more sensitive to total investment. These dissociation findings and their implications are discussed and directions for future research are suggested.
Liu, S.; Anderson, P.; Zhou, G.; Kauffman, B.; Hughes, F.; Schimel, D.; Watson, Vicente; Tosi, Joseph
2008-01-01
Objectively assessing the performance of a model and deriving model parameter values from observations are critical and challenging in landscape to regional modeling. In this paper, we applied a nonlinear inversion technique to calibrate the ecosystem model CENTURY against carbon (C) and nitrogen (N) stock measurements collected from 39 mature tropical forest sites in seven life zones in Costa Rica. Net primary productivity from the Moderate-Resolution Imaging Spectroradiometer (MODIS), C and N stocks in aboveground live biomass, litter, coarse woody debris (CWD), and in soils were used to calibrate the model. To investigate the resolution of available observations on the number of adjustable parameters, inversion was performed using nine setups of adjustable parameters. Statistics including observation sensitivity, parameter correlation coefficient, parameter sensitivity, and parameter confidence limits were used to evaluate the information content of observations, resolution of model parameters, and overall model performance. Results indicated that soil organic carbon content, soil nitrogen content, and total aboveground biomass carbon had the highest information contents, while measurements of carbon in litter and nitrogen in CWD contributed little to the parameter estimation processes. The available information could resolve the values of 2-4 parameters. Adjusting just one parameter resulted in under-fitting and unacceptable model performance, while adjusting five parameters simultaneously led to over-fitting. Results further indicated that the MODIS NPP values were compressed as compared with the spatial variability of net primary production (NPP) values inferred from inverse modeling. Using inverse modeling to infer NPP and other sensitive model parameters from C and N stock observations provides an opportunity to utilize data collected by national to regional forest inventory systems to reduce the uncertainties in the carbon cycle and generate valuable databases to validate and improve MODIS NPP algorithms.
Status of Air Quality in Central California and Needs for Further Study
NASA Astrophysics Data System (ADS)
Tanrikulu, S.; Beaver, S.; Soong, S.; Tran, C.; Jia, Y.; Matsuoka, J.; McNider, R. T.; Biazar, A. P.; Palazoglu, A.; Lee, P.; Wang, J.; Kang, D.; Aneja, V. P.
2012-12-01
Ozone and PM2.5 levels frequently exceed NAAQS in central California (CC). Additional emission reductions are needed to attain and maintain the standards there. Agencies are developing cost-effective emission control strategies along with complementary incentive programs to reduce emissions when exceedances are forecasted. These approaches require accurate modeling and forecasting capabilities. A variety of models have been rigorously applied (MM5, WRF, CMAQ, CAMx) over CC. Despite the vast amount of land-based measurements from special field programs and significant effort, models have historically exhibited marginal performance. Satellite data may improve model performance by: establishing IC/BC over outlying areas of the modeling domain having unknown conditions; enabling FDDA over the Pacific Ocean to characterize important marine inflows and pollutant outflows; and filling in the gaps of the land-based monitoring network. BAAQMD, in collaboration with the NASA AQAST, plans to conduct four studies that include satellite-based data in CC air quality analysis and modeling: The first project enhances and refines weather patterns, especially aloft, impacting summer ozone formation. Surface analyses were unable to characterize the strong attenuating effect of the complex terrain to steer marine winds impinging on the continent. The dense summer clouds and fog over the Pacific Ocean form spatial patterns that can be related to the downstream air flows through polluted areas. The goal of this project is to explore, characterize, and quantify these relationships using cloud cover data. Specifically, cloud agreement statistics will be developed using satellite data and model clouds. Model skin temperature predictions will be compared to both MODIS and GOES skin temperatures. The second project evaluates and improves the initial and simulated fields of meteorological models that provide inputs to air quality models. The study will attempt to determine whether a cloud dynamical adjustment developed by UAHuntsville can improve model performance for maritime stratus and whether a moisture adjustment scheme in the Pleim-Xiu boundary layer scheme can use satellite data in place of coarse surface air temperature measurements. The goal is to improve meteorological model performance that leads to improved air quality model performance. The third project evaluates and improves forecasting skills of the National Air Quality Forecasting Model in CC by using land-based routine measurements as well as satellite data. Local forecasts are mostly based on surface meteorological and air quality measurements and weather charts provided by NWS. The goal is to improve the average accuracy in forecasting exceedances, which is around 60%. The fourth project uses satellite data for monitoring trends in fine particulate matter (PM2.5) in the San Francisco Bay Area. It evaluates the effectiveness of a rule adopted in 2008 that restricts household wood burning on days forecasted to have high PM2.5 levels. The goal is to complement current analyses based on surface data covering the largest sub-regions and population centers. The overall goal is to use satellite data to overcome limitations of land-based measurements. The outcomes will be further conceptual understanding of pollutant formation, improved regulatory model performance, and better optimized forecasting programs.
NASA Astrophysics Data System (ADS)
Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang
2014-05-01
Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted with their posterior probability to be the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. This rigorous procedure does, however, not yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may suffer a lack of robustness related to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new statistical concept to account for measurement noise as source of uncertainty for the weights in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term into the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit in performance of the model set which is imposed by measurement noise. As an extension to the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide, whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length. We illustrate our suggested approach with an application to model selection between different soil-plant models following up on a study by Wöhling et al. (2013). Results show that measurement noise compromises the reliability of model ranking and causes a significant amount of weighting uncertainty, if the calibration data time series is not long enough to compensate for its noisiness. This additional contribution to the overall predictive uncertainty is neglected without our approach. Thus, we strongly advertise to include our suggested upgrade in the Bayesian model averaging routine.
Effective UV radiation from model calculations and measurements
NASA Technical Reports Server (NTRS)
Feister, Uwe; Grewe, Rolf
1994-01-01
Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer as well as turbidity were used as input to the model calculation. The performance of the model was tested by spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation to the biosphere and to air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space in space time. The comparability of action spectra and their uncertainties are also addressed.
Detailed performance and environmental monitoring of aquifer heating and cooling systems
NASA Astrophysics Data System (ADS)
Acuna, José; Ahlkrona, Malva; Zandin, Hanna; Singh, Ashutosh
2016-04-01
The project intends to quantify the performance and environmental impact of large scale aquifer thermal energy storage, as well as point at recommendations for operating and estimating the environmental footprint of future systems. Field measurements, test of innovative equipment as well as advanced modelling work and analysis will be performed. The following aspects are introduced and covered in the presentation: -Thermal, chemical and microbiological influence of akvifer thermal energy storage systems: measurement and evaluation of real conditions and the influence of one system in operation. -Follow up of energy extraction from aquifer as compared to projected values, recommendations for improvements. -Evaluation of the most used thermal modeling tool for design and calculation of groundwater temperatures, calculations with MODFLOW/MT3DMS -Test and evaluation of optical fiber cables as a way to measure temperatures in aquifer thermal energy storages
Rothrauff-Laschober, Tanja C; Eby, Lillian Turner de Tormes; Sauer, Julia B
2013-01-01
When mental health counselors have limited and/or inadequate training in substance use disorders (SUDs), effective clinical supervision (ECS) may advance their professional development. The purpose of the current study was to investigate whether ECS is related to the job performance of SUD counselors. Data were obtained in person via paper-and-pencil surveys from 392 matched SUD counselor-clinical supervisor dyads working in 27 SUD treatment organizations across the United States. ECS was rated by counselors and measured with five multi-item scales (i.e., sponsoring counselors' careers, providing challenging assignments, role modeling, accepting/confirming counselors' competence, overall supervisor task proficiency). Clinical supervisors rated counselors' job performance, which was measured with two multi-item scales (i.e., task performance, performance within supervisory relationship). Using mixed-effects models, we found that most aspects of ECS are related to SUD counselor job performance. Thus, ECS may indeed enhance counselors' task performance and performance within the supervisory relationship, and, as a consequence, offset limited formal SUD training.
2013-01-01
When mental health counselors have limited and/or inadequate training in substance use disorders (SUDs), effective clinical supervision (ECS) may advance their professional development. The purpose of the current study was to investigate whether ECS is related to the job performance of SUD counselors. Data were obtained in person via paper-and-pencil surveys from 392 matched SUD counselor-clinical supervisor dyads working in 27 SUD treatment organizations across the United States. ECS was rated by counselors and measured with five multi-item scales (i.e., sponsoring counselors’ careers, providing challenging assignments, role modeling, accepting/confirming counselors’ competence, overall supervisor task proficiency). Clinical supervisors rated counselors’ job performance, which was measured with two multi-item scales (i.e., task performance, performance within supervisory relationship). Using mixed-effects models, we found that most aspects of ECS are related to SUD counselor job performance. Thus, ECS may indeed enhance counselors’ task performance and performance within the supervisory relationship, and, as a consequence, offset limited formal SUD training. PMID:25061265
Lane, Andrew M.; Terry, Peter C.; Devonport, Tracey J.; Friesen, Andrew P.; Totterdell, Peter A.
2017-01-01
The present study tested and extended Lane and Terry (2000) conceptual model of mood-performance relationships using a large dataset from an online experiment. Methodological and theoretical advances included testing a more balanced model of pleasant and unpleasant emotions, and evaluating relationships among emotion regulation traits, states and beliefs, psychological skills use, perceptions of performance, mental preparation, and effort exerted during competition. Participants (N = 73,588) completed measures of trait emotion regulation, emotion regulation beliefs, regulation efficacy, use of psychological skills, and rated their anger, anxiety, dejection, excitement, energy, and happiness before completing a competitive concentration task. Post-competition, participants completed measures of effort exerted, beliefs about the quality of mental preparation, and subjective performance. Results showed that dejection associated with worse performance with the no-dejection group performing 3.2% better. Dejection associated with higher anxiety and anger scores and lower energy, excitement, and happiness scores. The proposed moderating effect of dejection was supported for the anxiety-performance relationship but not the anger-performance relationship. In the no-dejection group, participants who reported moderate or high anxiety outperformed those reporting low anxiety by about 1.6%. Overall, results showed partial support for Lane and Terry’s model. In terms of extending the model, results showed dejection associated with greater use of suppression, less frequent use of re-appraisal and psychological skills, lower emotion regulation beliefs, and lower emotion regulation efficacy. Further, dejection associated with greater effort during performance, beliefs that pre-competition emotions did not assist goal achievement, and low subjective performance. Future research is required to investigate the role of intense emotions in emotion regulation and performance. PMID:28458641
Lane, Andrew M; Terry, Peter C; Devonport, Tracey J; Friesen, Andrew P; Totterdell, Peter A
2017-01-01
The present study tested and extended Lane and Terry (2000) conceptual model of mood-performance relationships using a large dataset from an online experiment. Methodological and theoretical advances included testing a more balanced model of pleasant and unpleasant emotions, and evaluating relationships among emotion regulation traits, states and beliefs, psychological skills use, perceptions of performance, mental preparation, and effort exerted during competition. Participants ( N = 73,588) completed measures of trait emotion regulation, emotion regulation beliefs, regulation efficacy, use of psychological skills, and rated their anger, anxiety, dejection, excitement, energy, and happiness before completing a competitive concentration task. Post-competition, participants completed measures of effort exerted, beliefs about the quality of mental preparation, and subjective performance. Results showed that dejection associated with worse performance with the no-dejection group performing 3.2% better. Dejection associated with higher anxiety and anger scores and lower energy, excitement, and happiness scores. The proposed moderating effect of dejection was supported for the anxiety-performance relationship but not the anger-performance relationship. In the no-dejection group, participants who reported moderate or high anxiety outperformed those reporting low anxiety by about 1.6%. Overall, results showed partial support for Lane and Terry's model. In terms of extending the model, results showed dejection associated with greater use of suppression, less frequent use of re-appraisal and psychological skills, lower emotion regulation beliefs, and lower emotion regulation efficacy. Further, dejection associated with greater effort during performance, beliefs that pre-competition emotions did not assist goal achievement, and low subjective performance. Future research is required to investigate the role of intense emotions in emotion regulation and performance.
Model identification of signal transduction networks from data using a state regulator problem.
Gadkar, K G; Varner, J; Doyle, F J
2005-03-01
Advances in molecular biology provide an opportunity to develop detailed models of biological processes that can be used to obtain an integrated understanding of the system. However, development of useful models from the available knowledge of the system and experimental observations still remains a daunting task. In this work, a model identification strategy for complex biological networks is proposed. The approach includes a state regulator problem (SRP) that provides estimates of all the component concentrations and the reaction rates of the network using the available measurements. The full set of the estimates is utilised for model parameter identification for the network of known topology. An a priori model complexity test that indicates the feasibility of performance of the proposed algorithm is developed. Fisher information matrix (FIM) theory is used to address model identifiability issues. Two signalling pathway case studies, the caspase function in apoptosis and the MAP kinase cascade system, are considered. The MAP kinase cascade, with measurements restricted to protein complex concentrations, fails the a priori test and the SRP estimates are poor as expected. The apoptosis network structure used in this work has moderate complexity and is suitable for application of the proposed tools. Using a measurement set of seven protein concentrations, accurate estimates for all unknowns are obtained. Furthermore, the effects of measurement sampling frequency and quality of information in the measurement set on the performance of the identified model are described.
NASA Astrophysics Data System (ADS)
Ferrero, Enrico; Alessandrini, Stefano; Vandenberghe, Francois
2018-03-01
We tested several planetary-boundary-layer (PBL) schemes available in the Weather Research and Forecasting (WRF) model against measured wind speed and direction, temperature and turbulent kinetic energy (TKE) at three levels (5, 9, 25 m). The Urban Turbulence Project dataset, gathered from the outskirts of Turin, Italy and used for the comparison, provides measurements made by sonic anemometers for more than 1 year. In contrast to other similar studies, which have mainly focused on short-time periods, we considered 2 months of measurements (January and July) representing both the seasonal and the daily variabilities. To understand how the WRF-model PBL schemes perform in an urban environment, often characterized by low wind-speed conditions, we first compared six PBL schemes against observations taken by the highest anemometer located in the inertial sub-layer. The availability of the TKE measurements allows us to directly evaluate the performances of the model; results of the model evaluation are presented in terms of quantile versus quantile plots and statistical indices. Secondly, we considered WRF-model PBL schemes that can be coupled to the urban-surface exchange parametrizations and compared the simulation results with measurements from the two lower anemometers located inside the canopy layer. We find that the PBL schemes accounting for TKE are more accurate and the model representation of the roughness sub-layer improves when the urban model is coupled to each PBL scheme.
Maeng, Daniel D; Scanlon, Dennis P; Chernew, Michael E; Gronniger, Tim; Wodchis, Walter P; McLaughlin, Catherine G
2010-01-01
Objective To examine the extent to which health plan quality measures capture physician practice patterns rather than plan characteristics. Data Source We gathered and merged secondary data from the following four sources: a private firm that collected information on individual physicians and their health plan affiliations, The National Committee for Quality Assurance, InterStudy, and the Dartmouth Atlas. Study Design We constructed two measures of physician network overlap for all health plans in our sample and linked them to selected measures of plan performance. Two linear regression models were estimated to assess the relationship between the measures of physician network overlap and the plan performance measures. Principal Findings The results indicate that in the presence of a higher degree of provider network overlap, plan performance measures tend to converge to a lower level of quality. Conclusions Standard health plan performance measures reflect physician practice patterns rather than plans' effort to improve quality. This implies that more provider-oriented measurement, such as would be possible with accountable care organizations or medical homes, may facilitate patient decision making and provide further incentives to improve performance. PMID:20403064
The Muon $g$-$2$ Experiment at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gohn, Wesley
A new measurement of the anomalous magnetic moment of the muon,more » $$a_{\\mu} \\equiv (g-2)/2$$, will be performed at the Fermi National Accelerator Laboratory with data taking beginning in 2017. The most recent measurement, performed at Brookhaven National Laboratory (BNL) and completed in 2001, shows a 3.5 standard deviation discrepancy with the standard model value of $$a_\\mu$$. The new measurement will accumulate 21 times the BNL statistics using upgraded magnet, detector, and storage ring systems, enabling a measurement of $$a_\\mu$$ to 140 ppb, a factor of 4 improvement in the uncertainty the previous measurement. This improvement in precision, combined with recent improvements in our understanding of the QCD contributions to the muon $g$-$2$, could provide a discrepancy from the standard model greater than 7$$\\sigma$$ if the central value is the same as that measured by the BNL experiment, which would be a clear indication of new physics.« less
Segmentation and determination of joint space width in foot radiographs
NASA Astrophysics Data System (ADS)
Schenk, O.; de Muinck Keizer, D. M.; Bernelot Moens, H. J.; Slump, C. H.
2016-03-01
Joint damage in rheumatoid arthritis is frequently assessed using radiographs of hands and feet. Evaluation includes measurements of the joint space width (JSW) and detection of erosions. Current visual scoring methods are timeconsuming and subject to inter- and intra-observer variability. Automated measurement methods avoid these limitations and have been fairly successful in hand radiographs. This contribution aims at foot radiographs. Starting from an earlier proposed automated segmentation method we have developed a novel model based image analysis algorithm for JSW measurements. This method uses active appearance and active shape models to identify individual bones. The model compiles ten submodels, each representing a specific bone of the foot (metatarsals 1-5, proximal phalanges 1-5). We have performed segmentation experiments using 24 foot radiographs, randomly selected from a large database from the rheumatology department of a local hospital: 10 for training and 14 for testing. Segmentation was considered successful if the joint locations are correctly determined. Segmentation was successful in only 14%. To improve results a step-by-step analysis will be performed. We performed JSW measurements on 14 randomly selected radiographs. JSW was successfully measured in 75%, mean and standard deviation are 2.30+/-0.36mm. This is a first step towards automated determination of progression of RA and therapy response in feet using radiographs.
Test anxiety and academic performance in chiropractic students.
Zhang, Niu; Henderson, Charles N R
2014-01-01
Objective : We assessed the level of students' test anxiety, and the relationship between test anxiety and academic performance. Methods : We recruited 166 third-quarter students. The Test Anxiety Inventory (TAI) was administered to all participants. Total scores from written examinations and objective structured clinical examinations (OSCEs) were used as response variables. Results : Multiple regression analysis shows that there was a modest, but statistically significant negative correlation between TAI scores and written exam scores, but not OSCE scores. Worry and emotionality were the best predictive models for written exam scores. Mean total anxiety and emotionality scores for females were significantly higher than those for males, but not worry scores. Conclusion : Moderate-to-high test anxiety was observed in 85% of the chiropractic students examined. However, total test anxiety, as measured by the TAI score, was a very weak predictive model for written exam performance. Multiple regression analysis demonstrated that replacing total anxiety (TAI) with worry and emotionality (TAI subscales) produces a much more effective predictive model of written exam performance. Sex, age, highest current academic degree, and ethnicity contributed little additional predictive power in either regression model. Moreover, TAI scores were not found to be statistically significant predictors of physical exam skill performance, as measured by OSCEs.
NASA Astrophysics Data System (ADS)
Janpaule, Inese; Haritonova, Diana; Balodis, Janis; Zarins, Ansis; Silabriedis, Gunars; Kaminskis, Janis
2015-03-01
Development of a digital zenith telescope prototype, improved zenith camera construction and analysis of experimental vertical deflection measurements for the improvement of the Latvian geoid model has been performed at the Institute of Geodesy and Geoinformatics (GGI), University of Latvia. GOCE satellite data was used to compute geoid model for the Riga region, and European gravimetric geoid model EGG97 and 102 data points of GNSS/levelling were used as input data in the calculations of Latvian geoid model.
Experimental measurement and modeling of snow accumulation and snowmelt in a mountain microcatchment
NASA Astrophysics Data System (ADS)
Danko, Michal; Krajčí, Pavel; Hlavčo, Jozef; Kostka, Zdeněk; Holko, Ladislav
2016-04-01
Fieldwork is a very useful source of data in all geosciences. This naturally applies also to the snow hydrology. Snow accumulation and snowmelt are spatially very heterogeneous especially in non-forested, mountain environments. Direct field measurements provide the most accurate information about it. Quantification and understanding of processes, that cause these spatial differences are crucial in prediction and modelling of runoff volumes in spring snowmelt period. This study presents possibilities of detailed measurement and modeling of snow cover characteristics in a mountain experimental microcatchment located in northern part of Slovakia in Western Tatra mountains. Catchment area is 0.059 km2 and mean altitude is 1500 m a.s.l. Measurement network consists of 27 snow poles, 3 small snow lysimeters, discharge measurement device and standard automatic weather station. Snow depth and snow water equivalent (SWE) were measured twice a month near the snow poles. These measurements were used to estimate spatial differences in accumulation of SWE. Snowmelt outflow was measured by small snow lysimeters. Measurements were performed in winter 2014/2015. Snow water equivalent variability was very high in such a small area. Differences between particular measuring points reached 600 mm in time of maximum SWE. The results indicated good performance of a snow lysimeter in case of snowmelt timing identification. Increase of snowmelt measured by the snow lysimeter had the same timing as increase in discharge at catchment's outlet and the same timing as the increase in air temperature above the freezing point. Measured data were afterwards used in distributed rainfall-runoff model MIKE-SHE. Several methods were used for spatial distribution of precipitation and snow water equivalent. The model was able to simulate snow water equivalent and snowmelt timing in daily step reasonably well. Simulated discharges were slightly overestimated in later spring.
1992-09-01
abilities is fit along with the autoregressive process. Initially, the influences on search performance of within-group age and sex were included as control...Results: PerformanceLAbility Structure Measurement Model: Ability Structure The correlations between all the ability measures, age, and sex are...subsequent analyses for young adults. Age and sex were included as control variables. There was an age range of 15 years; this range is sufficiently large that
Performance assessment in complex individual and team tasks
NASA Technical Reports Server (NTRS)
Eddy, Douglas R.
1992-01-01
Described here is an eclectic, performance based approach to assessing cognitive performance from multiple perspectives. The experience gained from assessing the effects of antihistamines and scenario difficulty on C (exp 2) decision making performance in Airborne Warning and Control Systems (AWACS) weapons director (WD) teams can serve as a model for realistic simulations in space operations. Emphasis is placed on the flexibility of measurement, hierarchical organization of measurement levels, data collection from multiple perspectives, and the difficulty of managing large amounts of data.
NASA Technical Reports Server (NTRS)
Swift, C. T.; Goodberlet, M. A.; Wilkerson, J. C.
1990-01-01
The Defence Meteorological Space Program's (DMSP) Special Sensor Microwave/Imager (SSM/I), an operational wind speed algorithm was developed. The algorithm is based on the D-matrix approach which seeks a linear relationship between measured SSM/I brightness temperatures and environmental parameters. D-matrix performance was validated by comparing algorithm derived wind speeds with near-simultaneous and co-located measurements made by off-shore ocean buoys. Other topics include error budget modeling, alternate wind speed algorithms, and D-matrix performance with one or more inoperative SSM/I channels.
Modeling returns volatility: Realized GARCH incorporating realized risk measure
NASA Astrophysics Data System (ADS)
Jiang, Wei; Ruan, Qingsong; Li, Jianfeng; Li, Ye
2018-06-01
This study applies realized GARCH models by introducing several risk measures of intraday returns into the measurement equation, to model the daily volatility of E-mini S&P 500 index futures returns. Besides using the conventional realized measures, realized volatility and realized kernel as our benchmarks, we also use generalized realized risk measures, realized absolute deviation, and two realized tail risk measures, realized value-at-risk and realized expected shortfall. The empirical results show that realized GARCH models using the generalized realized risk measures provide better volatility estimation for the in-sample and substantial improvement in volatility forecasting for the out-of-sample. In particular, the realized expected shortfall performs best for all of the alternative realized measures. Our empirical results reveal that future volatility may be more attributable to present losses (risk measures). The results are robust to different sample estimation windows.
Stutter-Step Models of Performance in School
ERIC Educational Resources Information Center
Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Kentucky; Weeden, Kim A.
2013-01-01
To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…
Khan, Taimoor; De, Asok
2014-01-01
In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting response in extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results.
De, Asok
2014-01-01
In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting response in extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results. PMID:27382616
NASA Astrophysics Data System (ADS)
Languy, Fabian; Vandenrijt, Jean-François; Saint-Georges, Philippe; Georges, Marc P.
2017-06-01
The manufacture of mirrors for space application is expensive and the requirements on the optical performance increase over years. To achieve higher performance, larger mirrors are manufactured but the larger the mirror the higher the sensitivity to temperature variation and therefore the higher the degradation of optical performances. To avoid the use of an expensive thermal regulation, we need to develop tools able to predict how optics behaves with thermal constraints. This paper presents the comparison between experimental surface mirror deformation and theoretical results from a multiphysics model. The local displacements of the mirror surface have been measured with the use of electronic speckle pattern interferometry (ESPI) and the deformation itself has been calculated by subtracting the rigid body motion. After validation of the mechanical model, experimental and numerical wave front errors are compared.
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong
2011-12-01
Recently, the health monitoring system of major gas path components of gas turbine uses mostly the model based method like the Gas Path Analysis (GPA). This method is to find quantity changes of component performance characteristic parameters such as isentropic efficiency and mass flow parameter by comparing between measured engine performance parameters such as temperatures, pressures, rotational speeds, fuel consumption, etc. and clean engine performance parameters without any engine faults which are calculated by the base engine performance model. Currently, the expert engine diagnostic systems using the artificial intelligent methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) have been studied to improve the model based method. Among them the NNs are mostly used to the engine fault diagnostic system due to its good learning performance, but it has a drawback due to low accuracy and long learning time to build learning data base if there are large amount of learning data. In addition, it has a very complex structure for finding effectively single type faults or multiple type faults of gas path components. This work builds inversely a base performance model of a turboprop engine to be used for a high altitude operation UAV using measured performance data, and proposes a fault diagnostic system using the base engine performance model and the artificial intelligent methods such as Fuzzy logic and Neural Network. The proposed diagnostic system isolates firstly the faulted components using Fuzzy Logic, then quantifies faults of the identified components using the NN leaned by fault learning data base, which are obtained from the developed base performance model. In leaning the NN, the Feed Forward Back Propagation (FFBP) method is used. Finally, it is verified through several test examples that the component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.
Monte Carlo simulation of Ray-Scan 64 PET system and performance evaluation using GATE toolkit
NASA Astrophysics Data System (ADS)
Li, Suying; Zhang, Qiushi; Vuletic, Ivan; Xie, Zhaoheng; Yang, Kun; Ren, Qiushi
2017-02-01
In this study, we aimed to develop a GATE model for the simulation of Ray-Scan 64 PET scanner and model its performance characteristics. A detailed implementation of system geometry and physical process were included in the simulation model. Then we modeled the performance characteristics of Ray-Scan 64 PET system for the first time, based on National Electrical Manufacturers Association (NEMA) NU-2 2007 protocols and validated the model against experimental measurement, including spatial resolution, sensitivity, counting rates and noise equivalent count rate (NECR). Moreover, an accurate dead time module was investigated to simulate the counting rate performance. Overall results showed reasonable agreement between simulation and experimental data. The validation results showed the reliability and feasibility of the GATE model to evaluate major performance of Ray-Scan 64 PET system. It provided a useful tool for a wide range of research applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Forrest M; Randerson, Jim; Thornton, Peter E
2009-01-01
The need to capture important climate feebacks in general circulation models (GCMs) has resulted in new efforts to include atmospheric chemistry and land and ocean biogeochemistry into the next generation of production climate models, now often referred to as Earth System Models (ESMs). While many terrestrial and ocean carbon models have been coupled to GCMs, recent work has shown that such models can yield a wide range of results, suggesting that a more rigorous set of offline and partially coupled experiments, along with detailed analyses of processes and comparisons with measurements, are warranted. The Carbon-Land Model Intercomparison Project (C-LAMP) providesmore » a simulation protocol and model performance metrics based upon comparisons against best-available satellite- and ground-based measurements (Hoffman et al., 2007). C-LAMP provides feedback to the modeling community regarding model improvements and to the measurement community by suggesting new observational campaigns. C-LAMP Experiment 1 consists of a set of uncoupled simulations of terrestrial carbon models specifically designed to examine the ability of the models to reproduce surface carbon and energy fluxes at multiple sites and to exhibit the influence of climate variability, prescribed atmospheric carbon dioxide (CO{sub 2}), nitrogen (N) deposition, and land cover change on projections of terrestrial carbon fluxes during the 20th century. Experiment 2 consists of partially coupled simulations of the terrestrial carbon model with an active atmosphere model exchanging energy and moisture fluxes. In all experiments, atmospheric CO{sub 2} follows the prescribed historical trajectory from C{sup 4}MIP. In Experiment 2, the atmosphere model is forced with prescribed sea surface temperatures (SSTs) and corresponding sea ice concentrations from the Hadley Centre; prescribed CO{sub 2} is radiatively active; and land, fossil fuel, and ocean CO{sub 2} fluxes are advected by the model. Both sets of experiments have been performed using two different terrestrial biogeochemistry modules coupled to the Community Land Model version 3 (CLM3) in the Community Climate System Model version 3 (CCSM3): The CASA model of Fung, et al., and the carbon-nitrogen (CN) model of Thornton. Comparisons against Ameriflus site measurements, MODIS satellite observations, NOAA flask records, TRANSCOM inversions, and Free Air CO{sub 2} Enrichment (FACE) site measurements, and other datasets have been performed and are described in Randerson et al. (2009). The C-LAMP diagnostics package was used to validate improvements to CASA and CN for use in the next generation model, CLM4. It is hoped that this effort will serve as a prototype for an international carbon-cycle model benchmarking activity for models being used for the Inter-governmental Panel on Climate Change (IPCC) Fifth Assessment Report. More information about C-LAMP, the experimental protocol, performance metrics, output standards, and model-data comparisons from the CLM3-CASA and CLM3-CN models are available at http://www.climatemodeling.org/c-lamp.« less
NASA Astrophysics Data System (ADS)
Slater, L. D.; Robinson, J.; Weller, A.; Keating, K.; Robinson, T.; Parker, B. L.
2017-12-01
Geophysical length scales determined from complex conductivity (CC) measurements can be used to estimate permeability k when the electrical formation factor F describing the ratio between tortuosity and porosity is known. Two geophysical length scales have been proposed: [1] the imaginary conductivity σ" normalized by the specific polarizability cp; [2] the time constant τ multiplied by a diffusion coefficient D+. The parameters cp and D+ account for the control of fluid chemistry and/or varying minerology on the geophysical length scale. We evaluated the predictive capability of two recently presented CC permeability models: [1] an empirical formulation based on σ"; [2] a mechanistic formulation based on τ;. The performance of the CC models was evaluated against measured permeability; this performance was also compared against that of well-established k estimation equations that use geometric length scales to represent the pore scale properties controlling fluid flow. Both CC models predict permeability within one order of magnitude for a database of 58 sandstone samples, with the exception of those samples characterized by high pore volume normalized surface area Spor and more complex mineralogy including significant dolomite. Variations in cp and D+ likely contribute to the poor performance of the models for these high Spor samples. The ultimate value of such geophysical models for permeability prediction lies in their application to field scale geophysical datasets. Two observations favor the implementation of the σ" based model over the τ based model for field-scale estimation: [1] the limited range of variation in cp relative to D+; [2] σ" is readily measured using field geophysical instrumentation (at a single frequency) whereas τ requires broadband spectral measurements that are extremely challenging and time consuming to accurately measure in the field. However, the need for a reliable estimate of F remains a major obstacle to the field-scale implementation of either of the CC permeability models for k estimation.
Icing Encounter Duration Sensitivity Study
NASA Technical Reports Server (NTRS)
Addy, Harold E., Jr.; Lee, Sam
2011-01-01
This paper describes a study performed to investigate how aerodynamic performance degradation progresses with time throughout an exposure to icing conditions. It is one of the first documented studies of the effects of ice contamination on aerodynamic performance at various points in time throughout an icing encounter. Both a 1.5 and 6 ft chord, two-dimensional, NACA-23012 airfoils were subjected to icing conditions in the NASA Icing Research Tunnel for varying lengths of time. At the end of each run, lift, drag, and pitching moment measurements were made. Measurements with the 1.5 ft chord model showed that maximum lift and pitching moment degraded more rapidly early in the exposure and degraded more slowly as time progressed. Drag for the 1.5 ft chord model degraded more linearly with time, although drag for very short exposure durations was slightly higher than expected. Only drag measurements were made with the 6 ft chord airfoil. Here, drag for the long exposures was higher than expected. Novel comparison of drag measurements versus an icing scaling parameter, accumulation parameter times collection efficiency was used to compare the data from the two different size model. The comparisons provided a means of assessing the level of fidelity needed for accurate icing simulation.
The relationship between frequency of performance and perceived importance of health behaviours.
Nudelman, Gabriel; Ivanova, Eliza
2018-04-01
The relationship between performance of health behaviours and their perceived importance was examined among 250 adults. Frequency of performance and perceived importance of 21 health behaviours, self-assessed health and the Big Five personality traits were measured. As expected, importance and performance were positively correlated. Self-assessed health was more strongly associated with performance than importance, and a model wherein importance affects performance, which in turn affects self-assessed health, was superior to a model wherein performance affects importance. The Big Five significantly explained performance, particularly conscientiousness, and importance explained performance beyond this effect. Consequently, importance perceptions should be considered when developing behavioural interventions.
NASA Technical Reports Server (NTRS)
Hansen, S. B.; Fournier, K. B.; Finkenthal, M. J.; Smith, R.; Puetterich, T.; Neu, R.
2006-01-01
High-resolution measurements of K-shell emission from O, F, and Ne have been performed at the ASDEX Upgrade tokamak in Garching, Germany. Independently measured temperature and density profiles of the plasma provide a unique test bed for model validation. We present comparisons of measured spectra with calculations based on transport and collisional-radiative models and discuss the reliability of commonly used diagnostic line ratios.
Performance assessment of a single-pixel compressive sensing imaging system
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Preece, Bradley L.
2016-05-01
Conventional electro-optical and infrared (EO/IR) systems capture an image by measuring the light incident at each of the millions of pixels in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene, and then using a companion process known as sparse reconstruction to recover the image as if a fully populated array that satisfies the Nyquist criteria was used. Therefore, CS operates under the assumption that signal acquisition and data compression can be accomplished simultaneously. CS has the potential to acquire an image with equivalent information content to a large format array while using smaller, cheaper, and lower bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance between physical considerations (SWaP-C), reconstruction accuracy, and reconstruction speed to meet operational requirements. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of the two-handheld object target set at range was collected using a passive SWIR single-pixel CS camera for various ranges, mirror resolution, and number of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled with the Night Vision Integrated Performance Model (NV-IPM) by mapping the nonlinear degradations to an equivalent linear shift invariant model. Finally, the limitations of CS modeling techniques will be discussed.
Nozza, R J
1987-06-01
Performance of infants in a speech-sound discrimination task (/ba/ vs /da/) was measured at three stimulus intensity levels (50, 60, and 70 dB SPL) using the operant head-turn procedure. The procedure was modified so that data could be treated as though from a single-interval (yes-no) procedure, as is commonly done, as well as if from a sustained attention (vigilance) task. Discrimination performance changed significantly with increase in intensity, suggesting caution in the interpretation of results from infant discrimination studies in which only single stimulus intensity levels within this range are used. The assumptions made about the underlying methodological model did not change the performance-intensity relationships. However, infants demonstrated response decrement, typical of vigilance tasks, which supports the notion that the head-turn procedure is represented best by the vigilance model. Analysis then was done according to a method designed for tasks with undefined observation intervals [C. S. Watson and T. L. Nichols, J. Acoust. Soc. Am. 59, 655-668 (1976)]. Results reveal that, while group data are reasonably well represented across levels of difficulty by the fixed-interval model, there is a variation in performance as a function of time following trial onset that could lead to underestimation of performance in some cases.
Argonne Bubble Experiment Thermal Model Development II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buechler, Cynthia Eileen
2016-07-01
This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at three beam power levels, 6, 12 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was observed. This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiations.more » The previous report described an initial analysis performed on a geometry that had not been updated to reflect the as-built solution vessel. Here, the as-built geometry is used. Monte-Carlo N-Particle (MCNP) calculations were performed on the updated geometry, and these results were used to define the power deposition profile for the CFD analyses, which were performed using Fluent, Ver. 16.2. CFD analyses were performed for the 12 and 15 kW irradiations, and further improvements to the model were incorporated, including the consideration of power deposition in nearby vessel components, gas mixture composition, and bubble size distribution. The temperature results of the CFD calculations are compared to experimental measurements.« less
1990-08-01
Distribution Unlimited Accession Number: 3539 Publication Date: Aug 01, 1990 Title: AIDA Model 1.0 Final Report Corporate Author Or Publisher: Software...Part: 1 Author: D.R.Sloggett Date: 27.7.90 Issue: 23 C.J.Slim Title: AIDA Model 1.0 Final Report i Doc. Ref.: AIDA/3/26/01 U Cross Ref.: AIDA/1/06/01...functionality and integrity. These tests also provided initial performance measures for the AIDA Model 1.0 system. The results from theI baseline runs performed
Age-class separation of blue-winged ducks
Hohman, W.L.; Moore, J.L.; Twedt, D.J.; Mensik, John G.; Logerwell, E.
1995-01-01
Accurate determination of age is of fundamental importance to population and life history studies of waterfowl and their management. Therefore, we developed quantitative methods that separate adult and immature blue-winged teal (Anas discors), cinnamon teal (A. cyanoptera), and northern shovelers (A. clypeata) during spring and summer. To assess suitability of discriminant models using 9 remigial measurements, we compared model performance (% agreement between predicted age and age assigned to birds on the basis of definitive cloacal or rectral feather characteristics) in different flyways (Mississippi and Pacific) and between years (1990-91 and 1991-92). We also applied age-classification models to wings obtained from U.S. Fish and Wildlife Service harvest surveys in the Mississippi and Central-Pacific flyways (wing-bees) for which age had been determined using qualitative characteristics (i.e., remigial markings, shape, or wear). Except for male northern shovelers, models correctly aged lt 90% (range 70-86%) of blue-winged ducks. Model performance varied among species and differed between sexes and years. Proportions of individuals that were correctly aged were greater for males (range 63-86%) than females (range 39-69%). Models for northern shovelers performed better in flyway comparisons within year (1991-92, La. model applied to Calif. birds, and Calif. model applied to La. birds: 90 and 94% for M, and 89 and 76% for F, respectively) than in annual comparisons within the Mississippi Flyway (1991-92 model applied to 1990-91 data: 79% for M, 50% for F). Exclusion of measurements that varied by flyway or year did not improve model performance. Quantitative methods appear to be of limited value for age separation of female blue-winged ducks. Close agreement between predicted age and age assigned to wings from the wing-bees suggests that qualitative and quantitative methods may be equally accurate for age separation of male blue-winged ducks. We interpret annual and flyway differences in remigial measurements and reduced performance of age classification models as evidence of high variability in size of blue-winged ducks' remiges. Variability in remigial size of these and other small-bodied waterfowl may be related to nutrition during molt.
NASA Astrophysics Data System (ADS)
Javaherchi, Teymour; Stelzenmuller, Nick; Seydel, Joseph; Aliseda, Alberto
2014-11-01
The performance, turbulent wake evolution and interaction of multiple Horizontal Axis Hydrokinetic Turbines (HAHT) is analyzed in a 45:1 scale model setup. We combine experimental measurements with different RANS-based computational simulations that model the turbines with sliding-mesh, rotating reference frame and blame element theory strategies. The influence of array spacing and Tip Speed Ratio on performance and wake velocity structure is investigated in three different array configurations: Two coaxial turbines at different downstream spacing (5d to 14d), Three coaxial turbines with 5d and 7d downstream spacing, and Three turbines with lateral offset (0.5d) and downstream spacing (5d & 7d). Comparison with experimental measurements provides insights into the dynamics of HAHT arrays, and by extension to closely packed HAWT arrays. The experimental validation process also highlights the influence of the closure model used (k- ω SST and k- ɛ) and the flow Reynolds number (Re=40,000 to 100,000) on the computational predictions of devices' performance and characteristics of the flow field inside the above-mentioned arrays, establishing the strengths and limitations of existing numerical models for use in industrially-relevant settings (computational cost and time). Supported by DOE through the National Northwest Marine Renewable Energy Center (NNMREC).
IONSIV(R) IE-911 Performance in Savannah River Site Radioactive Waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, D.D.
2001-06-04
This report describes cesium sorption from high-level radioactive waste solutions onto IONSIV(R) IE-911 at ambient temperature. Researchers characterized six radioactive waste samples from five high-level waste tanks in the Savannah River Site tank farm, diluted the wastes to 5.6 M Na+, and made equilibrium and kinetic measurements of cesium sorption. The equilibrium measurements were compared to ZAM (Zheng, Anthony, and Martin) model predictions. The kinetic measurements were compared to simulant solutions whose column performance has been measured.
Deflection Shape Reconstructions of a Rotating Five-blade Helicopter Rotor from TLDV Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fioretti, A.; Castellini, P.; Tomasini, E. P.
2010-05-28
Helicopters are aircraft machines which are subjected to high level of vibrations, mainly due to spinning rotors. These are made of two or more blades attached by hinges to a central hub, which can make the dynamic behaviour difficult to study. However, they share some common dynamic properties with the ones expected in bladed discs, thereby the analytical modelling of rotors can be performed using some assumptions as the ones adopted for the bladed discs. This paper presents results of a vibrations study performed on a scaled helicopter rotor model which was rotating at a fix rotational speed and excitedmore » by an air jet. A simplified analytical model of that rotor was also produced to help the identifications of the vibration patterns measured using a single point tracking-SLDV measurement method.« less
Modeling and experimental study on near-field acoustic levitation by flexural mode.
Liu, Pinkuan; Li, Jin; Ding, Han; Cao, Wenwu
2009-12-01
Near-field acoustic levitation (NFAL) has been used in noncontact handling and transportation of small objects to avoid contamination. We have performed a theoretical analysis based on nonuniform vibrating surface to quantify the levitation force produced by the air film and also conducted experimental tests to verify our model. Modal analysis was performed using ANSYS on the flexural plate radiator to obtain its natural frequency of desired mode, which is used to design the measurement system. Then, the levitation force was calculated as a function of levitation distance based on squeeze gas film theory using measured amplitude and phase distributions on the vibrator surface. Compared with previous fluid-structural analyses using a uniform piston motion, our model based on the nonuniform radiating surface of the vibrator is more realistic and fits better with experimentally measured levitation force.
Validation of a new modal performance measure for flexible controllers design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simo, J.B.; Tahan, S.A.; Kamwa, I.
1996-05-01
A new modal performance measure for power system stabilizer (PSS) optimization is proposed in this paper. The new method is based on modifying the square envelopes of oscillating modes, in order to take into account their damping ratios while minimizing the performance index. This criteria is applied to flexible controllers optimal design, on a multi-input-multi-output (MIMO) reduced-order model of a prototype power system. The multivariable model includes four generators, each having one input and one output. Linear time-response simulation and transient stability analysis with a nonlinear package confirm the superiority of the proposed criteria and illustrate its effectiveness in decentralizedmore » control.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Irminger, Philip; Starke, Michael R; Dimitrovski, Aleksandar D
2014-01-01
Power system equipment manufacturers and researchers continue to experiment with novel overhead electric conductor designs that support better conductor performance and address congestion issues. To address the technology gap in testing these novel designs, Oak Ridge National Laboratory constructed the Powerline Conductor Accelerated Testing (PCAT) facility to evaluate the performance of novel overhead conductors in an accelerated fashion in a field environment. Additionally, PCAT has the capability to test advanced sensors and measurement methods for accessing overhead conductor performance and condition. Equipped with extensive measurement and monitoring devices, PCAT provides a platform to improve/validate conductor computer models and assess themore » performance of novel conductors. The PCAT facility and its testing capabilities are described in this paper.« less
Design principles and field performance of a solar spectral irradiance meter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tatsiankou, V.; Hinzer, K.; Haysom, J.
2016-08-01
A solar spectral irradiance meter (SSIM), designed for measuring the direct normal irradiance (DNI) in six wavelength bands, has been combined with models to determine key atmospheric transmittances and the resulting spectral irradiance distribution of DNI under all sky conditions. The design principles of the SSIM, implementation of a parameterized transmittance model, and field performance comparisons of modeled solar spectra with reference radiometer measurements are presented. Two SSIMs were tested and calibrated at the National Renewable Energy Laboratory (NREL) against four spectroradiometers and an absolute cavity radiometer. The SSIMs' DNI was on average within 1% of the DNI values reportedmore » by one of NREL's primary absolute cavity radiometers. An additional SSIM was installed at the SUNLAB Outdoor Test Facility in September 2014, with ongoing collection of environmental and spectral data. The SSIM's performance in Ottawa was compared against a commercial pyrheliometer and a spectroradiometer over an eight month study. The difference in integrated daily spectral irradiance between the SSIM and the ASD spectroradiometer was found to be less than 1%. The cumulative energy density collected by the SSIM over this duration agreed with that measured by an Eppley model NIP pyrheliometer to within 0.5%. No degradation was observed.« less
Total atmospheric ozone determined from spectral measurements of direct solar UV irradiance
NASA Astrophysics Data System (ADS)
Huber, Martin; Blumthaler, Mario; Ambach, Walter; Staehelin, Johannes
1995-01-01
With a double monochromator, high resolution spectral measurements of direct solar UV-irradiance were performed in Arosa during February and March, 1993. Total atmospheric ozone amount is determined by fitting model calculations to the measured spectra. The results are compared with the operationally performed measurements of a Dobson and a Brewer spectrometer. The total ozone amount determined from spectral measurements differs from the results of the Dobson instrument by -1.1±0.9% and from those of the Brewer instrument by -0.4±0.7%.
Development of a model to assess environmental performance, concerning HSE-MS principles.
Abbaspour, M; Hosseinzadeh Lotfi, F; Karbassi, A R; Roayaei, E; Nikoomaram, H
2010-06-01
The main objective of the present study was to develop a valid and appropriate model to evaluate companies' efficiency and environmental performance, concerning health, safety, and environmental management system principles. The proposed model overcomes the shortcomings of the previous models developed in this area. This model has been designed on the basis of a mathematical method known as Data Envelopment Analysis (DEA). In order to differentiate high-performing companies from weak ones, one of DEA nonradial models named as enhanced Russell graph efficiency measure has been applied. Since some of the environmental performance indicators cannot be controlled by companies' managers, it was necessary to develop the model in a way that it could be applied when discretionary and/or nondiscretionary factors were involved. The model, then, has been modified on a real case that comprised 12 oil and gas general contractors. The results showed the relative efficiency, inefficiency sources, and the rank of contractors.
Evaluating Curriculum-Based Measurement from a Behavioral Assessment Perspective
ERIC Educational Resources Information Center
Ardoin, Scott P.; Roof, Claire M.; Klubnick, Cynthia; Carfolite, Jessica
2008-01-01
Curriculum-based measurement Reading (CBM-R) is an assessment procedure used to evaluate students' relative performance compared to peers and to evaluate their growth in reading. Within the response to intervention (RtI) model, CBM-R data are plotted in time series fashion as a means modeling individual students' response to varying levels of…
Measuring Learning Progressions Using Bayesian Modeling in Complex Assessments
ERIC Educational Resources Information Center
Rutstein, Daisy Wise
2012-01-01
This research examines issues regarding model estimation and robustness in the use of Bayesian Inference Networks (BINs) for measuring Learning Progressions (LPs). It provides background information on LPs and how they might be used in practice. Two simulation studies are performed, along with real data examples. The first study examines the case…
Hypoxia: Exposure Time Until Significant Performance Effects
2016-03-07
arterial oxygen saturation (SpO2) from the temporal artery. Datex-Ohmeda 3900 P Pulse Oximeter . The Datex-Ohmeda 3900 P pulse oximeter measured SpO2 at...flight helmet. Nonin ® model 8000 R Ear Cup Sensor. The Nonin ® model 8000 R in-helmet ear cup reflectance sensor is an oximeter that measures
Data envelopment analysis in service quality evaluation: an empirical study
NASA Astrophysics Data System (ADS)
Najafi, Seyedvahid; Saati, Saber; Tavana, Madjid
2015-09-01
Service quality is often conceptualized as the comparison between service expectations and the actual performance perceptions. It enhances customer satisfaction, decreases customer defection, and promotes customer loyalty. Substantial literature has examined the concept of service quality, its dimensions, and measurement methods. We introduce the perceived service quality index (PSQI) as a single measure for evaluating the multiple-item service quality construct based on the SERVQUAL model. A slack-based measure (SBM) of efficiency with constant inputs is used to calculate the PSQI. In addition, a non-linear programming model based on the SBM is proposed to delineate an improvement guideline and improve service quality. An empirical study is conducted to assess the applicability of the method proposed in this study. A large number of studies have used DEA as a benchmarking tool to measure service quality. These models do not propose a coherent performance evaluation construct and consequently fail to deliver improvement guidelines for improving service quality. The DEA models proposed in this study are designed to evaluate and improve service quality within a comprehensive framework and without any dependency on external data.
Biases and Power for Groups Comparison on Subjective Health Measurements
Hamel, Jean-François; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Roquelaure, Yves; Sébille, Véronique
2012-01-01
Subjective health measurements are increasingly used in clinical research, particularly for patient groups comparisons. Two main types of analytical strategies can be used for such data: so-called classical test theory (CTT), relying on observed scores and models coming from Item Response Theory (IRT) relying on a response model relating the items responses to a latent parameter, often called latent trait. Whether IRT or CTT would be the most appropriate method to compare two independent groups of patients on a patient reported outcomes measurement remains unknown and was investigated using simulations. For CTT-based analyses, groups comparison was performed using t-test on the scores. For IRT-based analyses, several methods were compared, according to whether the Rasch model was considered with random effects or with fixed effects, and the group effect was included as a covariate or not. Individual latent traits values were estimated using either a deterministic method or by stochastic approaches. Latent traits were then compared with a t-test. Finally, a two-steps method was performed to compare the latent trait distributions, and a Wald test was performed to test the group effect in the Rasch model including group covariates. The only unbiased IRT-based method was the group covariate Wald’s test, performed on the random effects Rasch model. This model displayed the highest observed power, which was similar to the power using the score t-test. These results need to be extended to the case frequently encountered in practice where data are missing and possibly informative. PMID:23115620
Flowfield characterization and model development in detonation tubes
NASA Astrophysics Data System (ADS)
Owens, Zachary Clark
A series of experiments and numerical simulations are performed to advance the understanding of flowfield phenomena and impulse generation in detonation tubes. Experiments employing laser-based velocimetry, high-speed schlieren imaging and pressure measurements are used to construct a dataset against which numerical models can be validated. The numerical modeling culminates in the development of a two-dimensional, multi-species, finite-rate-chemistry, parallel, Navier-Stokes solver. The resulting model is specifically designed to assess unsteady, compressible, reacting flowfields, and its utility for studying multidimensional detonation structure is demonstrated. A reduced, quasi-one-dimensional model with source terms accounting for wall losses is also developed for rapid parametric assessment. Using these experimental and numerical tools, two primary objectives are pursued. The first objective is to gain an understanding of how nozzles affect unsteady, detonation flowfields and how they can be designed to maximize impulse in a detonation based propulsion system called a pulse detonation engine. It is shown that unlike conventional, steady-flow propulsion systems where converging-diverging nozzles generate optimal performance, unsteady detonation tube performance during a single-cycle is maximized using purely diverging nozzles. The second objective is to identify the primary underlying mechanisms that cause velocity and pressure measurements to deviate from idealized theory. An investigation of the influence of non-ideal losses including wall heat transfer, friction and condensation leads to the development of improved models that reconcile long-standing discrepancies between predicted and measured detonation tube performance. It is demonstrated for the first time that wall condensation of water vapor in the combustion products can cause significant deviations from ideal theory.
Robertson, Amy N.; Wendt, Fabian; Jonkman, Jason M.; ...
2017-10-01
This paper summarizes the findings from Phase II of the Offshore Code Comparison, Collaboration, Continued, with Correlation project. The project is run under the International Energy Agency Wind Research Task 30, and is focused on validating the tools used for modeling offshore wind systems through the comparison of simulated responses of select system designs to physical test data. Validation activities such as these lead to improvement of offshore wind modeling tools, which will enable the development of more innovative and cost-effective offshore wind designs. For Phase II of the project, numerical models of the DeepCwind floating semisubmersible wind system weremore » validated using measurement data from a 1/50th-scale validation campaign performed at the Maritime Research Institute Netherlands offshore wave basin. Validation of the models was performed by comparing the calculated ultimate and fatigue loads for eight different wave-only and combined wind/wave test cases against the measured data, after calibration was performed using free-decay, wind-only, and wave-only tests. The results show a decent estimation of both the ultimate and fatigue loads for the simulated results, but with a fairly consistent underestimation in the tower and upwind mooring line loads that can be attributed to an underestimation of wave-excitation forces outside the linear wave-excitation region, and the presence of broadband frequency excitation in the experimental measurements from wind. Participant results showed varied agreement with the experimental measurements based on the modeling approach used. Modeling attributes that enabled better agreement included: the use of a dynamic mooring model; wave stretching, or some other hydrodynamic modeling approach that excites frequencies outside the linear wave region; nonlinear wave kinematics models; and unsteady aerodynamics models. Also, it was observed that a Morison-only hydrodynamic modeling approach could create excessive pitch excitation and resulting tower loads in some frequency bands.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Amy N.; Wendt, Fabian; Jonkman, Jason M.
This paper summarizes the findings from Phase II of the Offshore Code Comparison, Collaboration, Continued, with Correlation project. The project is run under the International Energy Agency Wind Research Task 30, and is focused on validating the tools used for modeling offshore wind systems through the comparison of simulated responses of select system designs to physical test data. Validation activities such as these lead to improvement of offshore wind modeling tools, which will enable the development of more innovative and cost-effective offshore wind designs. For Phase II of the project, numerical models of the DeepCwind floating semisubmersible wind system weremore » validated using measurement data from a 1/50th-scale validation campaign performed at the Maritime Research Institute Netherlands offshore wave basin. Validation of the models was performed by comparing the calculated ultimate and fatigue loads for eight different wave-only and combined wind/wave test cases against the measured data, after calibration was performed using free-decay, wind-only, and wave-only tests. The results show a decent estimation of both the ultimate and fatigue loads for the simulated results, but with a fairly consistent underestimation in the tower and upwind mooring line loads that can be attributed to an underestimation of wave-excitation forces outside the linear wave-excitation region, and the presence of broadband frequency excitation in the experimental measurements from wind. Participant results showed varied agreement with the experimental measurements based on the modeling approach used. Modeling attributes that enabled better agreement included: the use of a dynamic mooring model; wave stretching, or some other hydrodynamic modeling approach that excites frequencies outside the linear wave region; nonlinear wave kinematics models; and unsteady aerodynamics models. Also, it was observed that a Morison-only hydrodynamic modeling approach could create excessive pitch excitation and resulting tower loads in some frequency bands.« less
Zhan, Xiaobin; Jiang, Shulan; Yang, Yili; Liang, Jian; Shi, Tielin; Li, Xiwen
2015-09-18
This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. Firstly, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of features. Secondly, the LS-SVM model is tuned, trained and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least square (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measuring the particle concentrations in multicomponent suspensions and the measurement accuracy is sufficiently high for industrial application. Furthermore, the proposed method is applicable to the modeling of the nonlinear system dynamically and provides a feasible way to monitor industrial processes.
Simplifying BRDF input data for optical signature modeling
NASA Astrophysics Data System (ADS)
Hallberg, Tomas; Pohl, Anna; Fagerström, Jan
2017-05-01
Scene simulations of optical signature properties using signature codes normally requires input of various parameterized measurement data of surfaces and coatings in order to achieve realistic scene object features. Some of the most important parameters are used in the model of the Bidirectional Reflectance Distribution Function (BRDF) and are normally determined by surface reflectance and scattering measurements. Reflectance measurements of the spectral Directional Hemispherical Reflectance (DHR) at various incident angles can normally be performed in most spectroscopy labs, while measuring the BRDF is more complicated or may not be available at all in many optical labs. We will present a method in order to achieve the necessary BRDF data directly from DHR measurements for modeling software using the Sandford-Robertson BRDF model. The accuracy of the method is tested by modeling a test surface by comparing results from using estimated and measured BRDF data as input to the model. These results show that using this method gives no significant loss in modeling accuracy.
NASA Astrophysics Data System (ADS)
Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-01
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
Rise time and response measurements on a LiSOCl2 cell
NASA Technical Reports Server (NTRS)
Bastien, Caroline; Lecomte, Eric J.
1992-01-01
Dynamic impedance tests were performed on a 180 Ah LiSOCl2 cell in the frame of a short term work contract awarded by Aerospatiale as part of the Hermes Space Plane development work. These tests consisted of rise time and response measurements. The rise time test was performed to show the ability to deliver 4 KW, in the nominal voltage range (75-115 V), within less than 100 microseconds, and after a period at rest of 13 days. The response measurements test consisted of step response and frequency response tests. The frequency response test was performed to characterize the response of the LiSOCl2 cell to a positive or negative load step of 10 A starting from various currents. The test was performed for various depths of discharge and various temperatures. The test results were used to build a mathematical, electrical model of the LiSOCl2 cell which are also presented. The test description, test results, electrical modelization description, and conclusions are presented.
Prediction of Coronary Artery Disease Risk Based on Multiple Longitudinal Biomarkers
Yang, Lili; Yu, Menggang; Gao, Sujuan
2016-01-01
In the last decade, few topics in the area of cardiovascular disease (CVD) research have received as much attention as risk prediction. One of the well documented risk factors for CVD is high blood pressure (BP). Traditional CVD risk prediction models consider BP levels measured at a single time and such models form the basis for current clinical guidelines for CVD prevention. However, in clinical practice, BP levels are often observed and recorded in a longitudinal fashion. Information on BP trajectories can be powerful predictors for CVD events. We consider joint modeling of time to coronary artery disease and individual longitudinal measures of systolic and diastolic BPs in a primary care cohort with up to 20 years of follow-up. We applied novel prediction metrics to assess the predictive performance of joint models. Predictive performances of proposed joint models and other models were assessed via simulations and illustrated using the primary care cohort. PMID:26439685
Nallikuzhy, Jiss J; Dandapat, S
2017-06-01
In this work, a new patient-specific approach to enhance the spatial resolution of ECG is proposed and evaluated. The proposed model transforms a three-lead ECG into a standard twelve-lead ECG thereby enhancing its spatial resolution. The three leads used for prediction are obtained from the standard twelve-lead ECG. The proposed model takes advantage of the improved inter-lead correlation in wavelet domain. Since the model is patient-specific, it also selects the optimal predictor leads for a given patient using a lead selection algorithm. The lead selection algorithm is based on a new diagnostic similarity score which computes the diagnostic closeness between the original and the spatially enhanced leads. Standard closeness measures are used to assess the performance of the model. The similarity in diagnostic information between the original and the spatially enhanced leads are evaluated using various diagnostic measures. Repeatability and diagnosability are performed to quantify the applicability of the model. A comparison of the proposed model is performed with existing models that transform a subset of standard twelve-lead ECG into the standard twelve-lead ECG. From the analysis of the results, it is evident that the proposed model preserves diagnostic information better compared to other models. Copyright © 2017 Elsevier Ltd. All rights reserved.
CAD and CAE Analysis for Siphon Jet Toilet
NASA Astrophysics Data System (ADS)
Wang, Yuhua; Xiu, Guoji; Tan, Haishu
The high precision 3D laser scanner with the dual CCD technology was used to measure the original design sample of a siphon jet toilet. The digital toilet model was constructed from the cloud data measured with the curve and surface fitting technology and the CAD/CAE systems. The Realizable k - ɛ double equation model of the turbulence viscosity coefficient method and the VOF multiphase flow model were used to simulate the flushing flow in the toilet digital model. Through simulating and analyzing the distribution of the flushing flow's total pressure, the flow speed at the toilet-basin surface and the siphoning bent tube, the toilet performance can be evaluated efficiently and conveniently. The method of "establishing digital model, flushing flow simulating, performances evaluating, function shape modifying" would provide a high efficiency approach to develop new water-saving toilets.
Benoit, Gaëlle; Heinkélé, Christophe; Gourdon, Emmanuel
2013-12-01
This paper deals with a numerical procedure to identify the acoustical parameters of road pavement from surface impedance measurements. This procedure comprises three steps. First, a suitable equivalent fluid model for the acoustical properties porous media is chosen, the variation ranges for the model parameters are set, and a sensitivity analysis for this model is performed. Second, this model is used in the parameter inversion process, which is performed with simulated annealing in a selected frequency range. Third, the sensitivity analysis and inversion process are repeated to estimate each parameter in turn. This approach is tested on data obtained for porous bituminous concrete and using the Zwikker and Kosten equivalent fluid model. This work provides a good foundation for the development of non-destructive in situ methods for the acoustical characterization of road pavements.
Evaluation of Preduster in Cement Industry Based on Computational Fluid Dynamic
NASA Astrophysics Data System (ADS)
Septiani, E. L.; Widiyastuti, W.; Djafaar, A.; Ghozali, I.; Pribadi, H. M.
2017-10-01
Ash-laden hot air from clinker in cement industry is being used to reduce water contain in coal, however it may contain large amount of ash even though it was treated by a preduster. This study investigated preduster performance as a cyclone separator in the cement industry by Computational Fluid Dynamic method. In general, the best performance of cyclone is it have relatively high efficiency with the low pressure drop. The most accurate and simple turbulence model, Reynold Average Navier Stokes (RANS), standard k-ε, and combination with Lagrangian model as particles tracking model were used to solve the problem. The measurement in simulation result are flow pattern in the cyclone, pressure outlet and collection efficiency of preduster. The applied model well predicted by comparing with the most accurate empirical model and pressure outlet in experimental measurement.
Al-Chokhachy, Robert K.; Wegner, Seth J.; Isaak, Daniel J.; Kershner, Jeffrey L.
2013-01-01
Understanding a species’ thermal niche is becoming increasingly important for management and conservation within the context of global climate change, yet there have been surprisingly few efforts to compare assessments of a species’ thermal niche across methods. To address this uncertainty, we evaluated the differences in model performance and interpretations of a species’ thermal niche when using different measures of stream temperature and surrogates for stream temperature. Specifically, we used a logistic regression modeling framework with three different indicators of stream thermal conditions (elevation, air temperature, and stream temperature) referenced to a common set of Brook Trout Salvelinus fontinalis distribution data from the Boise River basin, Idaho. We hypothesized that stream temperature predictions that were contemporaneous with fish distribution data would have stronger predictive performance than composite measures of stream temperature or any surrogates for stream temperature. Across the different indicators of thermal conditions, the highest measure of accuracy was found for the model based on stream temperature predictions that were contemporaneous with fish distribution data (percent correctly classified = 71%). We found considerable differences in inferences across models, with up to 43% disagreement in the amount of stream habitat that was predicted to be suitable. The differences in performance between models support the growing efforts in many areas to develop accurate stream temperature models for investigations of species’ thermal niches.
ERIC Educational Resources Information Center
Harrison, David J.; Saito, Laurel; Markee, Nancy; Herzog, Serge
2017-01-01
To examine the impact of a hybrid-flipped model utilising active learning techniques, the researchers inverted one section of an undergraduate fluid mechanics course, reduced seat time, and engaged in active learning sessions in the classroom. We compared this model to the traditional section on four performance measures. We employed a propensity…
Multivariate Modelling of the Career Intent of Air Force Personnel.
1980-09-01
index (HOPP) was used as a measure of current job satisfaction . As with the Vroom and Fishbein/Graen models, two separate validations were accom...34 Organizational Behavior and Human Performance , 23: 251-267, 1979. Lewis, Logan M. "Expectancy Theory as a Predictive Model of Career Intent, Job Satisfaction ...W. Albright. "Expectancy Theory Predictions of the Satisfaction , Effort, Performance , and Retention of Naval Aviation Officers," Organizational
Mechanistic materials modeling for nuclear fuel performance
Tonks, Michael R.; Andersson, David; Phillpot, Simon R.; ...
2017-03-15
Fuel performance codes are critical tools for the design, certification, and safety analysis of nuclear reactors. However, their ability to predict fuel behavior under abnormal conditions is severely limited by their considerable reliance on empirical materials models correlated to burn-up (a measure of the number of fission events that have occurred, but not a unique measure of the history of the material). In this paper, we propose a different paradigm for fuel performance codes to employ mechanistic materials models that are based on the current state of the evolving microstructure rather than burn-up. In this approach, a series of statemore » variables are stored at material points and define the current state of the microstructure. The evolution of these state variables is defined by mechanistic models that are functions of fuel conditions and other state variables. The material properties of the fuel and cladding are determined from microstructure/property relationships that are functions of the state variables and the current fuel conditions. Multiscale modeling and simulation is being used in conjunction with experimental data to inform the development of these models. Finally, this mechanistic, microstructure-based approach has the potential to provide a more predictive fuel performance capability, but will require a team of researchers to complete the required development and to validate the approach.« less
A practical measure of workplace resilience: developing the resilience at work scale.
Winwood, Peter C; Colon, Rochelle; McEwen, Kath
2013-10-01
To develop an effective measure of resilience at work for use in individual work-related performance and emotional distress contexts. Two separate cross-sectional studies investigated: (1) exploratory factor analysis of 45 items putatively underpinning workplace resilience among 397 participants and (2) confirmatory factor analysis of resilience measure derived from Study 1 demonstrating a credible model of interaction, with performance outcome variables among 194 participants. A 20-item scale explaining 67% of variance, measuring seven aspects of workplace resilience, which are teachable and capable of conscious development, was achieved. A credible model of relationships with work engagement, sleep, stress recovery, and physical health was demonstrated in the expected directions. The new scale shows considerable promise as a reliable instrument for use in the area of employee support and development.
NASA Astrophysics Data System (ADS)
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.
2017-02-01
This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or way-out initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
McAuley, Edward; Morris, Katherine S; Doerksen, Shawna E; Motl, Robert W; Liang, Hu; White, Siobhan M; Wójcicki, Thomas R; Rosengren, Karl
2007-12-01
To examine the hypothesis that changes in self-efficacy and functional performance mediate, in part, the beneficial effect of physical activity on functional limitations over time. Prospective, observational study. Community-based. Two hundred forty-nine community-dwelling older women. Participants completed measures of self-reported physical activity, functional limitations, and self-efficacy. Four measures of physical function performance were also assessed. Measures were completed at baseline and 24 months. Data were analyzed using a panel model within a covariance modeling framework. Results indicated that increases in physical activity over time were associated with greater improvements in self-efficacy, which was associated in turn with improved physical function performance, both of which mediated the association between physical activity and functional limitations. Fewer functional limitations at baseline were also associated with higher levels of self-efficacy at 24 months. Age, race, and health status covariates did not significantly change these relationships. The findings support the mediating roles of self-efficacy and physical function performance in the relationship between longitudinal changes in physical activity and functional limitations in older women.
NASA Technical Reports Server (NTRS)
OsowskiNeil, Doreen; Yee, Jeng-Hwa; Boldt, John; Edwards, David
2010-01-01
We present the progress toward an analytical performance model of a 2.3 micron infrared correlation radiometer (IRCRg) prototype subsystem for a future geostationary space-borne instrument. The prototype is designed specifically to measure carbon monoxide (CO) from geostationary orbit. NASA's Geostationary Coastal and Air Pollution Events (GEO-CAPE) mission, one of the United States Earth Science and Applications Decadal Survey missions, specifies the use of infrared correlation radiometry to measure CO in two spectral regions for this mission. GEO-CAPE will use the robust IRCR measurement technique at geostationary orbit, nearly 50 times farther away than the Terra/MOPITT orbit, to determine hourly changes in CO across a continental domain. The abundance of CO in Earth's troposphere directly affects the concentration of hydroxyl, which regulates the lifetimes of many tropospheric pollutants. In addition, CO is a precursor to ozone formation; CO is used as a tracer to study the transport of global and regional pollutants; and CO is used as an indicator of both natural and anthropogenic air pollution sources and sinks. We have structured our development project to enable rapid evaluation of future spaceborne instrument designs. The project is part of NASA's Instrument Incubator Program. We describe the architecture of the performance model and the planned evaluation of the performance model using laboratory test data.
Incentive regulation and performance measurement of the Portuguese solid waste management services.
Marques, Rui Cunha; Simões, Pedro
2009-03-01
Measuring the performance of solid waste management services usually uncovers very high potential for gains in efficiency and productivity. This circumstance occurs, naturally, due to the fact that these services are outside the market and because they are subjected to various market failures in their organizational framework. The aim of this study was to examine the Portuguese regulatory model and to measure the performance of the Portuguese solid waste management services in order to identify the major reforms carried out and their outcomes. As a first objective, the sunshine regulatory approach adopted in Portugal, in which performance comparison and its public discussion are the main tools, was investigated. The second objective was to compute the efficiency of the Portuguese solid waste management services by means of the non-parametric technique of data envelopment analysis (DEA), evaluating the Portuguese regulatory model and the existing market structure, as well as the influence of the operational environment on efficiency. The benchmarking frontier technique of DEA is particularly useful in the efficiency measurement of public utilities, in which knowledge of the production function is relatively scarce. Several DEA models were used and they all depicted significant inefficiency. The study also proved that efficiency did not depend on ownership (public or private) and that there was no difference in efficiency between the players, irrespective of whether they were regulated or not.
Ackerman, S B; Kelley, E A
1983-01-01
The performance of a fiberoptic probe colorimeter (model PC800; Brinkmann Instruments, Inc., Westbury, N.Y.) for quantitating enzymatic or colorimetric assays in 96-well microtiter plates was compared with the performances of a spectrophotometer (model 240; Gilford Instrument Laboratories, Inc., Oberlin, Ohio) and a commercially available enzyme immunoassay reader (model MR590; Dynatech Laboratories, Inc., Alexandria, Va.). Alkaline phosphatase-p-nitrophenyl phosphate in 3 M NaOH was used as the chromophore source. Six types of plates were evaluated for use with the probe colorimeter; they generated reproducibility values (100% coefficient of variation) ranging from 91 to 98% when one individual made 24 independent measurements on the same dilution of chromophore on each plate. Eleven individuals each performed 24 measurements with the colorimeter on either a visually light (absorbance of 0.10 at 420 nm) or a dark (absorbance of 0.80 at 420 nm) dilution of chromophore; reproducibilities averaged 87% for the light dilution and 97% for the dark dilution. When one individual measured the same chromophore sample at least 20 times in the colorimeter, in the spectrophotometer or in the enzyme immunoassay reader, reproducibility for each instrument was greater than 99%. Measurements of a dilution series of chromophore in a fixed volume indicated that the optical responses of each instrument were linear in a range of 0.05 to 1.10 absorbance units. Images PMID:6341399
Ackerman, S B; Kelley, E A
1983-03-01
The performance of a fiberoptic probe colorimeter (model PC800; Brinkmann Instruments, Inc., Westbury, N.Y.) for quantitating enzymatic or colorimetric assays in 96-well microtiter plates was compared with the performances of a spectrophotometer (model 240; Gilford Instrument Laboratories, Inc., Oberlin, Ohio) and a commercially available enzyme immunoassay reader (model MR590; Dynatech Laboratories, Inc., Alexandria, Va.). Alkaline phosphatase-p-nitrophenyl phosphate in 3 M NaOH was used as the chromophore source. Six types of plates were evaluated for use with the probe colorimeter; they generated reproducibility values (100% coefficient of variation) ranging from 91 to 98% when one individual made 24 independent measurements on the same dilution of chromophore on each plate. Eleven individuals each performed 24 measurements with the colorimeter on either a visually light (absorbance of 0.10 at 420 nm) or a dark (absorbance of 0.80 at 420 nm) dilution of chromophore; reproducibilities averaged 87% for the light dilution and 97% for the dark dilution. When one individual measured the same chromophore sample at least 20 times in the colorimeter, in the spectrophotometer or in the enzyme immunoassay reader, reproducibility for each instrument was greater than 99%. Measurements of a dilution series of chromophore in a fixed volume indicated that the optical responses of each instrument were linear in a range of 0.05 to 1.10 absorbance units.
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding in which the maximum information entropy probability model of prediction error variances plays an important role and it is Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness in the updating of the structural model especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seem in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structure model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results especially when the number of measurement is small.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
The BepiColombo Laser Altimeter (BELA): Scientific Performance at Mercury
NASA Astrophysics Data System (ADS)
Hussmann, H.; Steinbrügge, G.; Stark, A.; Oberst, J.; Thomas, N.; Lara, L.-M.
2018-05-01
We discuss the expected scientific performance of BELA in Mercury orbit. Based on a performance model, we present the measurement accuracy of global and local topography, surface slopes and roughness, as well as the tidal Love number h2.
In situ observations of the isotopic composition of methane at the Cabauw tall tower site
NASA Astrophysics Data System (ADS)
Röckmann, Thomas; Eyer, Simon; van der Veen, Carina; Popa, Maria E.; Tuzson, Béla; Monteil, Guillaume; Houweling, Sander; Harris, Eliza; Brunner, Dominik; Fischer, Hubertus; Zazzeri, Giulia; Lowry, David; Nisbet, Euan G.; Brand, Willi A.; Necki, Jaroslav M.; Emmenegger, Lukas; Mohn, Joachim
2016-08-01
High-precision analyses of the isotopic composition of methane in ambient air can potentially be used to discriminate between different source categories. Due to the complexity of isotope ratio measurements, such analyses have generally been performed in the laboratory on air samples collected in the field. This poses a limitation on the temporal resolution at which the isotopic composition can be monitored with reasonable logistical effort. Here we present the performance of a dual isotope ratio mass spectrometric system (IRMS) and a quantum cascade laser absorption spectroscopy (QCLAS)-based technique for in situ analysis of the isotopic composition of methane under field conditions. Both systems were deployed at the Cabauw Experimental Site for Atmospheric Research (CESAR) in the Netherlands and performed in situ, high-frequency (approx. hourly) measurements for a period of more than 5 months. The IRMS and QCLAS instruments were in excellent agreement with a slight systematic offset of (+0.25 ± 0.04) ‰ for δ13C and (-4.3 ± 0.4) ‰ for δD. This was corrected for, yielding a combined dataset with more than 2500 measurements of both δ13C and δD. The high-precision and high-temporal-resolution dataset not only reveals the overwhelming contribution of isotopically depleted agricultural CH4 emissions from ruminants at the Cabauw site but also allows the identification of specific events with elevated contributions from more enriched sources such as natural gas and landfills. The final dataset was compared to model calculations using the global model TM5 and the mesoscale model FLEXPART-COSMO. The results of both models agree better with the measurements when the TNO-MACC emission inventory is used in the models than when the EDGAR inventory is used. This suggests that high-resolution isotope measurements have the potential to further constrain the methane budget when they are performed at multiple sites that are representative for the entire European domain.
In-situ observations of the isotopic composition of methane at the Cabauw tall tower site
NASA Astrophysics Data System (ADS)
Röckmann, Thomas; Eyer, Simon; van der Veen, Carina; E Popa, Maria; Tuzson, Béla; Monteil, Guillaume; Houweling, Sander; Harris, Eliza; Brunner, Dominik; Fischer, Hubertus; Zazzeri, Giulia; Lowry, David; Nisbet, Euan G.; Brand, Willi A.; Necki, Jaroslav M.; Emmenegger, Lukas; Mohn, Joachim
2017-04-01
High precision analyses of the isotopic composition of methane in ambient air can potentially be used to discriminate between different source categories. Due to the complexity of isotope ratio measurements, such analyses have generally been performed in the laboratory on air samples collected in the field. This poses a limitation on the temporal resolution at which the isotopic composition can be monitored with reasonable logistical effort. Here we present the performance of a dual isotope ratio mass spectrometric system (IRMS) and a quantum cascade laser absorption spectroscopy (QCLAS) based technique for in-situ analysis of the isotopic composition of methane under field conditions. Both systems were deployed at the Cabauw experimental site for atmospheric research (CESAR) in the Netherlands and performed in-situ, high-frequency (approx. hourly) measurements for a period of more than 5 months. The IRMS and QCLAS instruments were in excellent agreement with a slight systematic offset of +0.05 ± 0.03 ‰ for δ13C-CH4 and -3.6 ± 0.4 ‰ for δD-CH4. This was corrected for, yielding a combined dataset with more than 2500 measurements of both δ13C and δD. The high precision and temporal resolution dataset does not only reveal the overwhelming contribution of isotopically depleted agricultural CH4 emissions from ruminants at the Cabauw site, but also allows the identification of specific events with elevated contributions from more enriched sources such as natural gas and landfills. The final dataset was compared to model calculations using the global model TM5 and the mesoscale model FLEXPART-COSMO. The results of both models agree better with the measurements when the TNO-MACC emission inventory is used in the models than when the EDGAR inventory is used. This suggests that high-resolution isotope measurements have the potential to further constrain the methane budget, when they are performed at multiple sites that are representative for the entire European domain.
Predicting Performance in Medical Education Continuum: Toward Better Use of Conventional Measures.
ERIC Educational Resources Information Center
Williams, Albert P.
Medical school admissions and performance in 10 medical schools were assessed in relation to prediction using conventional measures. The origin of the research was an attempt to determine the effects of affirmative action on academic medicine. For the 10 schools, admissions decisions were analyzed, and an attempt was made to model statistically…
Measuring School Performance To Improve Student Achievement and To Reward Effective Programs.
ERIC Educational Resources Information Center
Heistad, Dave; Spicuzza, Rick
This paper describes the method that the Minneapolis Public School system (MPS), Minnesota, uses to measure school and student performance. MPS uses a multifaceted system that both captures and accounts for the complexity of a large urban school district. The system incorporates: (1) a hybrid model of critical indicators that report on level of…
Modeling the erythemal surface diffuse irradiance fraction for Badajoz, Spain
NASA Astrophysics Data System (ADS)
Sanchez, Guadalupe; Serrano, Antonio; Cancillo, María Luisa
2017-10-01
Despite its important role on the human health and numerous biological processes, the diffuse component of the erythemal ultraviolet irradiance (UVER) is scarcely measured at standard radiometric stations and therefore needs to be estimated. This study proposes and compares 10 empirical models to estimate the UVER diffuse fraction. These models are inspired from mathematical expressions originally used to estimate total diffuse fraction, but, in this study, they are applied to the UVER case and tested against experimental measurements. In addition to adapting to the UVER range the various independent variables involved in these models, the total ozone column has been added in order to account for its strong impact on the attenuation of ultraviolet radiation. The proposed models are fitted to experimental measurements and validated against an independent subset. The best-performing model (RAU3) is based on a model proposed by Ruiz-Arias et al. (2010) and shows values of r2 equal to 0.91 and relative root-mean-square error (rRMSE) equal to 6.1 %. The performance achieved by this entirely empirical model is better than those obtained by previous semi-empirical approaches and therefore needs no additional information from other physically based models. This study expands on previous research to the ultraviolet range and provides reliable empirical models to accurately estimate the UVER diffuse fraction.
Nesmith, Jonathan C. B.; Das, Adrian J.; O'Hara, Kevin L.; van Mantgem, Phillip J.
2015-01-01
Tree mortality is a vital component of forest management in the context of prescribed fires; however, few studies have examined the effect of prefire tree health on postfire mortality. This is especially relevant for sugar pine (Pinus lambertiana Douglas), a species experiencing population declines due to a suite of anthropogenic factors. Using data from an old-growth mixed-conifer forest in Sequoia National Park, we evaluated the effects of fire, tree size, prefire radial growth, and crown condition on postfire mortality. Models based only on tree size and measures of fire damage were compared with models that included tree size, fire damage, and prefire tree health (e.g., measures of prefire tree radial growth or crown condition). Immediately following the fire, the inclusion of different metrics of prefire tree health produced variable improvements over the models that included only tree size and measures of fire damage, as models that included measures of crown condition performed better than fire-only models, but models that included measures of prefire radial growth did not perform better. However, 5 years following the fire, sugar pine mortality was best predicted by models that included measures of both fire damage and prefire tree health, specifically, diameter at breast height (DBH, 1.37 m), crown scorch, 30-year mean growth, and the number of sharp declines in growth over a 30-year period. This suggests that factors that influence prefire tree health (e.g., drought, competition, pathogens, etc.) may partially determine postfire mortality, especially when accounting for delayed mortality following fire.
A model to measure fluid outflow in rabbit capsules post glaucoma implant surgery.
Nguyen, Dan Q; Ross, Craig M; Li, Yu Qin; Pandav, Surinder; Gardiner, Bruce; Smith, David; How, Alicia C; Crowston, Jonathan G; Coote, Michael A
2012-10-05
Prior models of glaucoma filtration surgery assess bleb morphology, which does not always reflect function. Our aim is to establish a model that directly measures tissue hydraulic conductivity of postsurgical outflow in rabbit bleb capsules following experimental glaucoma filtration surgery. Nine rabbits underwent insertion of a single-plate pediatric Molteno implant into the anterior chamber of their left eye. Right eyes were used as controls. The rabbits were then allocated to one of two groups. Group one had outflow measurements performed at 1 week after surgery (n = 5), and group two had measurements performed at 4 weeks (n = 4). Measurements were performed by cannulating the drainage tube ostium in situ with a needle attached to a pressure transducer and a fluid column at 15 mm Hg. The drop in the fluid column was measured every minute for 5 minutes. For the control eyes (n = 6), the anterior chamber of the unoperated fellow eye was cannulated. Animals were euthanized with the implant and its surrounding capsule dissected and fixed in 4% paraformaldehyde, and embedded in paraffin before 6-μm sections were cut for histologic staining. By 7 days after surgery, tube outflow was 0.117 ± 0.036 μL/min/mm Hg at 15 mm Hg (mean ± SEM), whereas at 28 days, it was 0.009 ± 0.003 μL/min/mm Hg. Control eyes had an outflow of 0.136 ± 0.007 μL/min/mm Hg (P = 0.004, one-way ANOVA). Hematoxylin and eosin staining demonstrated a thinner and looser arrangement of collagenous tissue in the capsules at 1 week compared with that at 4 weeks, which had thicker and more densely arranged collagen. We describe a new model to directly measure hydraulic conductivity in a rabbit glaucoma surgery implant model. The principal physiologic endpoint of glaucoma surgery can be reliably quantified and consistently measured with this model. At 28 days post glaucoma filtration surgery, a rabbit bleb capsule has significantly reduced tissue hydraulic conductivity, in line with loss of implant outflow facility, and increased thickness and density of fibrous encapsulation.
Modeling of video compression effects on target acquisition performance
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Preece, Bradley; Espinola, Richard L.
2009-05-01
The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation on task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for NVESD target acquisition performance model suite.
Error Modeling of Multibaseline Optical Truss: Part 1: Modeling of System Level Performance
NASA Technical Reports Server (NTRS)
Milman, Mark H.; Korechoff, R. E.; Zhang, L. D.
2004-01-01
Global astrometry is the measurement of stellar positions and motions. These are typically characterized by five parameters, including two position parameters, two proper motion parameters, and parallax. The Space Interferometry Mission (SIM) will derive these parameters for a grid of approximately 1300 stars covering the celestial sphere to an accuracy of approximately 4uas, representing a two orders of magnitude improvement over the most precise current star catalogues. Narrow angle astrometry will be performed to a 1uas accuracy. A wealth of scientific information will be obtained from these accurate measurements encompassing many aspects of both galactic (and extragalactic science. SIM will be subject to a number of instrument errors that can potentially degrade performance. Many of these errors are systematic in that they are relatively static and repeatable with respect to the time frame and direction of the observation. This paper and its companion define the modeling of the, contributing factors to these errors and the analysis of how they impact SIM's ability to perform astrometric science.
ERIC Educational Resources Information Center
Elder, Catherine; McNamara, Tim; Congdon, Peter
2003-01-01
Used Rasch analytic procedures to study item bias or differential item functioning in both dichotomous and scalar items on a test of English for academic purposes. Results for 139 college students on a pilot English language test model the approach and illustrate the measurement challenges posed by a diagnostic instrument to measure English…
Clary, Christelle; Lewis, Daniel J; Flint, Ellen; Smith, Neil R; Kestens, Yan; Cummins, Steven
2016-12-01
Studies that explore associations between the local food environment and diet routinely use global regression models, which assume that relationships are invariant across space, yet such stationarity assumptions have been little tested. We used global and geographically weighted regression models to explore associations between the residential food environment and fruit and vegetable intake. Analyses were performed in 4 boroughs of London, United Kingdom, using data collected between April 2012 and July 2012 from 969 adults in the Olympic Regeneration in East London Study. Exposures were assessed both as absolute densities of healthy and unhealthy outlets, taken separately, and as a relative measure (proportion of total outlets classified as healthy). Overall, local models performed better than global models (lower Akaike information criterion). Locally estimated coefficients varied across space, regardless of the type of exposure measure, although changes of sign were observed only when absolute measures were used. Despite findings from global models showing significant associations between the relative measure and fruit and vegetable intake (β = 0.022; P < 0.01) only, geographically weighted regression models using absolute measures outperformed models using relative measures. This study suggests that greater attention should be given to nonstationary relationships between the food environment and diet. It further challenges the idea that a single measure of exposure, whether relative or absolute, can reflect the many ways the food environment may shape health behaviors. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Observability and synchronization of neuron models.
Aguirre, Luis A; Portes, Leonardo L; Letellier, Christophe
2017-10-01
Observability is the property that enables recovering the state of a dynamical system from a reduced number of measured variables. In high-dimensional systems, it is therefore important to make sure that the variable recorded to perform the analysis conveys good observability of the system dynamics. The observability of a network of neuron models depends nontrivially on the observability of the node dynamics and on the topology of the network. The aim of this paper is twofold. First, to perform a study of observability using four well-known neuron models by computing three different observability coefficients. This not only clarifies observability properties of the models but also shows the limitations of applicability of each type of coefficients in the context of such models. Second, to study the emergence of phase synchronization in networks composed of neuron models. This is done performing multivariate singular spectrum analysis which, to the best of the authors' knowledge, has not been used in the context of networks of neuron models. It is shown that it is possible to detect phase synchronization: (i) without having to measure all the state variables, but only one (that provides greatest observability) from each node and (ii) without having to estimate the phase.
Wind tunnel model surface gauge for measuring roughness
NASA Technical Reports Server (NTRS)
Vorburger, T. V.; Gilsinn, D. E.; Teague, E. C.; Giauque, C. H. W.; Scire, F. E.; Cao, L. X.
1987-01-01
The optical inspection of surface roughness research has proceeded along two different lines. First, research into a quantitative understanding of light scattering from metal surfaces and into the appropriate models to describe the surfaces themselves. Second, the development of a practical instrument for the measurement of rms roughness of high performance wind tunnel models with smooth finishes. The research is summarized, with emphasis on the second avenue of research.
A system performance throughput model applicable to advanced manned telescience systems
NASA Technical Reports Server (NTRS)
Haines, Richard F.
1990-01-01
As automated space systems become more complex, autonomous, and opaque to the flight crew, it becomes increasingly difficult to determine whether the total system is performing as it should. Some of the complex and interrelated human performance measurement issues are addressed that are related to total system validation. An evaluative throughput model is presented which can be used to generate a human operator-related benchmark or figure of merit for a given system which involves humans at the input and output ends as well as other automated intelligent agents. The concept of sustained and accurate command/control data information transfer is introduced. The first two input parameters of the model involve nominal and off-nominal predicted events. The first of these calls for a detailed task analysis while the second is for a contingency event assessment. The last two required input parameters involving actual (measured) events, namely human performance and continuous semi-automated system performance. An expression combining these four parameters was found using digital simulations and identical, representative, random data to yield the smallest variance.
PAH concentrations simulated with the AURAMS-PAH chemical transport model over Canada and the USA
NASA Astrophysics Data System (ADS)
Galarneau, E.; Makar, P. A.; Zheng, Q.; Narayan, J.; Zhang, J.; Moran, M. D.; Bari, M. A.; Pathela, S.; Chen, A.; Chlumsky, R.
2013-07-01
The off-line Eulerian AURAMS chemical transport model was adapted to simulate the atmospheric fate of seven PAHs: phenanthrene, anthracene, fluoranthene, pyrene, benz[a]anthracene, chrysene + triphenylene, and benzo[a]pyrene. The model was then run for the year 2002 with hourly output on a~grid covering southern Canada and the continental USA with 42 km horizontal grid spacing. Model predictions were compared to ~ 5000 24 h average PAH measurements from 45 sites, eight of which also provided data on particle/gas partitioning which had been modelled using two alternative schemes. This is the first known regional modelling study for PAHs over a North American domain and the first modelling study at any scale to compare alternative particle/gas partitioning schemes against paired field measurements. Annual average modelled total (gas + particle) concentrations were statistically indistinguishable from measured values for fluoranthene, pyrene and benz[a]anthracene whereas the model underestimated concentrations of phenanthrene, anthracene and chrysene + triphenylene. Significance for benzo[a]pyrene performance was close to the statistical threshold and depended on the particle/gas partitioning scheme employed. On a day-to-day basis, the model simulated total PAH concentrations to the correct order of magnitude the majority of the time. Model performance differed substantially between measurement locations and the limited available evidence suggests that the model spatial resolution was too coarse to capture the distribution of concentrations in densely populated areas. A more detailed analysis of the factors influencing modelled particle/gas partitioning is warranted based on the findings in this study.
Phillips, Charles D; Hawes, Catherine; Lieberman, Trudy; Koren, Mary Jane
2007-06-25
Nursing home performance measurement systems are practically ubiquitous. The vast majority of these systems aspire to rank order all nursing homes based on quantitative measures of quality. However, the ability of such systems to identify homes differing in quality is hampered by the multidimensional nature of nursing homes and their residents. As a result, the authors doubt the ability of many nursing home performance systems to truly help consumers differentiate among homes providing different levels of quality. We also argue that, for consumers, performance measurement models are better at identifying problem facilities than potentially good homes. In response to these concerns we present a proposal for a less ambitious approach to nursing home performance measurement than previously used. We believe consumers can make better informed choice using a simpler system designed to pinpoint poor-quality nursing homes, rather than one designed to rank hundreds of facilities based on differences in quality-of-care indicators that are of questionable importance. The suggested performance model is based on five principles used in the development of the Consumers Union 2006 Nursing Home Quality Monitor. We can best serve policy-makers and consumers by eschewing nursing home reporting systems that present information about all the facilities in a city, a state, or the nation on a website or in a report. We argue for greater modesty in our efforts and a focus on identifying only the potentially poorest or best homes. In the end, however, it is important to remember that information from any performance measurement website or report is no substitute for multiple visits to a home at different times of the day to personally assess quality.
Generation of calibrated tungsten target x-ray spectra: modified TBC model.
Costa, Paulo R; Nersissian, Denise Y; Salvador, Fernanda C; Rio, Patrícia B; Caldas, Linda V E
2007-01-01
In spite of the recent advances in the experimental detection of x-ray spectra, theoretical or semi-empirical approaches for determining realistic x-ray spectra in the range of diagnostic energies are important tools for planning experiments, estimating radiation doses in patients, and formulating radiation shielding models. The TBC model is one of the most useful approaches since it allows for straightforward computer implementation, and it is able to accurately reproduce the spectra generated by tungsten target x-ray tubes. However, as originally presented, the TBC model fails in situations where the determination of x-ray spectra produced by an arbitrary waveform or the calculation of realistic values of air kerma for a specific x-ray system is desired. In the present work, the authors revisited the assumptions used in the original paper published by . They proposed a complementary formulation for taking into account the waveform and the representation of the calculated spectra in a dosimetric quantity. The performance of the proposed model was evaluated by comparing values of air kerma and first and second half value layers from calculated and measured spectra by using different voltages and filtrations. For the output, the difference between experimental and calculated data was better then 5.2%. First and second half value layers presented differences of 23.8% and 25.5% in the worst case. The performance of the model in accurately calculating these data was better for lower voltage values. Comparisons were also performed with spectral data measured using a CZT detector. Another test was performed by the evaluation of the model when considering a waveform distinct of a constant potential. In all cases the model results can be considered as a good representation of the measured data. The results from the modifications to the TBC model introduced in the present work reinforce the value of the TBC model for application of quantitative evaluations in radiation physics.
NASA Astrophysics Data System (ADS)
Bérubé, Charles L.; Chouteau, Michel; Shamsipour, Pejman; Enkin, Randolph J.; Olivo, Gema R.
2017-08-01
Spectral induced polarization (SIP) measurements are now widely used to infer mineralogical or hydrogeological properties from the low-frequency electrical properties of the subsurface in both mineral exploration and environmental sciences. We present an open-source program that performs fast multi-model inversion of laboratory complex resistivity measurements using Markov-chain Monte Carlo simulation. Using this stochastic method, SIP parameters and their uncertainties may be obtained from the Cole-Cole and Dias models, or from the Debye and Warburg decomposition approaches. The program is tested on synthetic and laboratory data to show that the posterior distribution of a multiple Cole-Cole model is multimodal in particular cases. The Warburg and Debye decomposition approaches yield unique solutions in all cases. It is shown that an adaptive Metropolis algorithm performs faster and is less dependent on the initial parameter values than the Metropolis-Hastings step method when inverting SIP data through the decomposition schemes. There are no advantages in using an adaptive step method for well-defined Cole-Cole inversion. Finally, the influence of measurement noise on the recovered relaxation time distribution is explored. We provide the geophysics community with a open-source platform that can serve as a base for further developments in stochastic SIP data inversion and that may be used to perform parameter analysis with various SIP models.
Echo State Networks for data-driven downhole pressure estimation in gas-lift oil wells.
Antonelo, Eric A; Camponogara, Eduardo; Foss, Bjarne
2017-01-01
Process measurements are of vital importance for monitoring and control of industrial plants. When we consider offshore oil production platforms, wells that require gas-lift technology to yield oil production from low pressure oil reservoirs can become unstable under some conditions. This undesirable phenomenon is usually called slugging flow, and can be identified by an oscillatory behavior of the downhole pressure measurement. Given the importance of this measurement and the unreliability of the related sensor, this work aims at designing data-driven soft-sensors for downhole pressure estimation in two contexts: one for speeding up first-principle model simulation of a vertical riser model; and another for estimating the downhole pressure using real-world data from an oil well from Petrobras based only on topside platform measurements. Both tasks are tackled by employing Echo State Networks (ESN) as an efficient technique for training Recurrent Neural Networks. We show that a single ESN is capable of robustly modeling both the slugging flow behavior and a steady state based only on a square wave input signal representing the production choke opening in the vertical riser. Besides, we compare the performance of a standard network to the performance of a multiple timescale hierarchical architecture in the second task and show that the latter architecture performs better in modeling both large irregular transients and more commonly occurring small oscillations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Szlejf, Claudia; Parra-Rodríguez, Lorena; Rosas-Carrasco, Oscar
2017-08-01
The aims of this study were to determine the prevalence of osteosarcopenic obesity (OSO) and to investigate its association with frailty and physical performance in Mexican community-dwelling middle-aged and older women. Cross-sectional analysis of a prospective cohort. The FraDySMex study, a 2-round evaluation of community-dwelling adults from 2 municipalities in Mexico City. Participants were 434 women aged 50 years or older, living in the designated area in Mexico City. Body composition was measured with dual-energy X-ray absorptiometry and OSO was defined by the coexistence of sarcopenia, osteopenia, or osteoporosis and obesity. Information regarding demographic characteristics; comorbidities; mental status; nutritional status; and history of falls, fractures, and hospitalization was obtained from questionnaires. Objective measurements of muscle strength and function were grip strength using a hand dynamometer, 6-meter gait speed using a GAIT Rite instrumented walkway, and lower extremity functioning measured by the Short Physical Performance Battery (SPPB). Frailty was assessed using the Frailty Phenotype (Fried criteria), the Gerontopole Frailty Screening Tool (GFST), and the FRAIL scale, to build 3 logistic regression models. The prevalence of OSO was 19% (n = 81). Frailty (according to the Frailty Phenotype and the GFST) and poor physical performance measured by the SPPB were independently associated with OSO, controlled by age. In the logistic regression model assessing frailty with the Frailty Phenotype, the odds ratio (95% confidence interval) for frailty was 4.86 (2.47-9.55), and for poor physical performance it was 2.11 (1.15-3.89). In the model assessing frailty with the GFST, it was 2.12 (1.10-4.11), and for poor physical performance it was 2.15 (1.18-3.92). Finally, in the model with the FRAIL scale, it was 1.69 (0.85-3.36) for frailty and 2.29 (1.27-4.15) for poor physical performance. OSO is a frequent condition in middle-aged and older women, and it is independently associated with frailty and poor physical performance. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Relationships between cognitive performance, neuroimaging, and vascular disease: the DHS-Mind Study
Hsu, Fang-Chi; Raffield, Laura M.; Hugenschmidt, Christina E.; Cox, Amanda; Xu, Jianzhao; Carr, J. Jeffery; Freedman, Barry I.; Maldjian, Joseph A.; Williamson, Jeff D.; Bowden, Donald W.
2015-01-01
Background Type 2 diabetes mellitus increases risk for cognitive decline and dementia; elevated burdens of vascular disease are hypothesized to contribute to this risk. These relationships were examined in the Diabetes Heart Study-Mind using a battery of cognitive tests, neuroimaging measures, and subclinical cardiovascular disease (CVD) burden assessed by coronary artery calcified plaque (CAC). We hypothesized that CAC would attenuate the association between neuroimaging measures and cognition performance. Methods Associations were examined using marginal models in this family-based cohort of 572 European Americans from 263 families. All models were adjusted for age, gender, education, type 2 diabetes, and hypertension, with some neuroimaging measures additionally adjusted for intracranial volume. Results Higher total brain volume (TBV) was associated with better performance on the Digit Symbol Substitution Task (DSST) and Semantic Fluency (both p≤7.0 x 10−4). Higher gray matter volume (GMV) was associated with better performance on the Modified Mini-Mental State Examination and Semantic Fluency (both p≤9.0 x 10−4). Adjusting for CAC caused minimal changes to the results. Conclusions Relationships exist between neuroimaging measures and cognitive performance in a type 2 diabetes-enriched European American cohort. Associations were minimally attenuated after adjusting for subclinical CVD. Additional work is needed to understand how subclinical CVD burden interacts with other factors and impacts relationships between neuroimaging and cognitive testing measures. PMID:26185004