Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework
NASA Astrophysics Data System (ADS)
Baker, N. C.; Taylor, P. C.
2014-12-01
The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than others. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and the Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. The weighted ensemble for this metric indicates lower simulated precipitation (up to 0.7 mm/day) in tropical regions than the unweighted ensemble: since CMIP5 models have been shown to overproduce precipitation, this result could indicate that the metric is effective in identifying models which simulate more realistic precipitation. Ultimately, the goal of the framework is to identify performance metrics that inform better methods for ensemble averaging models and create better climate predictions.
Creating "Intelligent" Ensemble Averages Using a Process-Based Framework
NASA Astrophysics Data System (ADS)
Baker, Noel; Taylor, Patrick
2014-05-01
The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and the Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and several radiative forcing Representative Concentration Pathway (RCP) scenarios. Ultimately, the goal of the framework is to advise better methods for ensemble averaging models and create better climate predictions.
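As an illustration of the kind of unequal weighting the two abstracts above describe, the sketch below scores each model by the overlap between its histogram of some diagnostic variable and the observed histogram, then forms a weighted ensemble mean. This is a minimal sketch for illustration only, not the authors' actual procedure; the histogram-overlap score, the toy data, and all variable names are assumptions.

```python
import numpy as np

def histogram_overlap(model_ts, obs_ts, bins=30):
    """Skill score in [0, 1]: overlap of the normalized histograms of a
    model time series and the observed time series (assumed score)."""
    lo = min(model_ts.min(), obs_ts.min())
    hi = max(model_ts.max(), obs_ts.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(model_ts, bins=edges)
    q, _ = np.histogram(obs_ts, bins=edges)
    return np.minimum(p / p.sum(), q / q.sum()).sum()

def weighted_ensemble(projections, scores):
    """Combine per-model projections with weights proportional to skill."""
    w = np.asarray(scores, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.asarray(projections), axes=1)

# toy example: 3 models, one observed record, and future projections
rng = np.random.default_rng(0)
obs = rng.normal(0.0, 1.0, 500)
hist = [rng.normal(b, s, 500) for b, s in [(0.1, 1.0), (0.5, 1.5), (0.0, 0.9)]]
future = np.array([1.8, 2.4, 1.6])          # hypothetical projected change per model
scores = [histogram_overlap(m, obs) for m in hist]
print("weighted:", weighted_ensemble(future, scores), "equal-weight:", future.mean())
```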
Testing Strategies for Model-Based Development
NASA Technical Reports Server (NTRS)
Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.
2006-01-01
This report presents an approach for testing artifacts generated in a model-based development process. The approach divides the traditional testing process into two parts: requirements-based testing (validation testing), which determines whether the model implements the high-level requirements, and model-based testing (conformance testing), which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly, and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.
Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen
2017-06-01
The article proposes a set of metrics for evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, according to whether the evaluation employs the raw measurements of patient-performed motions or is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
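A minimal sketch of two of the model-less metrics named above: the root-mean-square distance, and a Kullback-Leibler divergence computed under an assumed per-joint Gaussian approximation, which is a simplification for illustration rather than the article's formulation. Both sequences are assumed to be equally sampled arrays of joint coordinates.

```python
import numpy as np

def rms_distance(patient, reference):
    """Root-mean-square distance between two equally sampled motion
    sequences of shape (n_frames, n_joints)."""
    diff = np.asarray(patient) - np.asarray(reference)
    return np.sqrt(np.mean(diff ** 2))

def gaussian_kl(patient, reference):
    """KL divergence between per-joint Gaussian fits of the two sequences
    (illustrative simplification), summed over joints."""
    p_mu, p_sd = patient.mean(0), patient.std(0) + 1e-9
    r_mu, r_sd = reference.mean(0), reference.std(0) + 1e-9
    return np.sum(np.log(r_sd / p_sd)
                  + (p_sd ** 2 + (p_mu - r_mu) ** 2) / (2 * r_sd ** 2) - 0.5)

rng = np.random.default_rng(1)
ref = np.sin(np.linspace(0, 2 * np.pi, 100))[:, None] * np.ones((100, 3))
pat = ref + rng.normal(0, 0.05, ref.shape)      # noisy patient repetition
print(rms_distance(pat, ref), gaussian_kl(pat, ref))
```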
Lin, Meihua; Li, Haoli; Zhao, Xiaolei; Qin, Jiheng
2013-01-01
Genome-wide analysis of gene-gene interactions has been recognized as a powerful avenue to identify the missing genetic components that cannot be detected by current single-point association analysis. Recently, several model-free methods (e.g., the commonly used information-based metrics and several logistic regression-based metrics) were developed for detecting non-linear dependence between genetic loci, but they are potentially at risk of inflated false positive error, in particular when the main effects at one or both loci are salient. In this study, we proposed two conditional entropy-based metrics to address this limitation. Extensive simulations demonstrated that, provided the disease is rare, the two proposed metrics could maintain a consistently correct false positive rate. In scenarios for a common disease, our proposed metrics achieved better or comparable control of false positive error compared to four previously proposed model-free metrics. In terms of power, our methods outperformed several competing metrics in a range of common disease models. Furthermore, in real data analyses, both metrics succeeded in detecting interactions and were competitive with the originally reported results or the logistic regression approaches. In conclusion, the proposed conditional entropy-based metrics are promising alternatives to current model-based approaches for detecting genuine epistatic effects. PMID:24339984
Evaluating hydrological model performance using information theory-based metrics
USDA-ARS?s Scientific Manuscript database
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...
NASA Astrophysics Data System (ADS)
Boé, Julien; Terray, Laurent
2014-05-01
Ensemble approaches for climate change projections have become ubiquitous. Because of large model-to-model variations and, generally, a lack of rationale for choosing a particular climate model over others, it is widely accepted that future climate change and its impacts should not be estimated based on a single climate model. Generally, as a default approach, the multi-model ensemble mean (MMEM) is considered to provide the best estimate of climate change signals. The MMEM approach is based on the implicit hypothesis that all the models provide equally credible projections of future climate change. This hypothesis is unlikely to be true, and ideally one would want to give more weight to more realistic models. A major issue with this alternative approach lies in assessing the relative credibility of future climate projections from different climate models, as they can only be evaluated against present-day observations: which present-day metric(s) should be used to decide which models are "good" and which models are "bad" in the future climate? Once a supposedly informative metric has been found, other issues arise. What is the best statistical method to combine multiple model results taking into account their relative credibility as measured by a given metric? How can one be sure in the end that the metric-based estimate of future climate change is not in fact less realistic than the MMEM? It is impossible to provide strict answers to those questions in the climate change context. Yet, in this presentation, we propose a methodological approach based on a perfect model framework that could bring some useful elements of answer to the questions previously mentioned. The basic idea is to take a random climate model in the ensemble and treat it as if it were the truth (results of this model, in both past and future climate, are called "synthetic observations"). Then, all the other members of the multi-model ensemble are used to derive, through a metric-based approach, a posterior estimate of climate change based on the synthetic observation of the metric. Finally, it is possible to compare the posterior estimate to the synthetic observation of future climate change to evaluate the skill of the method. The main objective of this presentation is to describe and apply this perfect model framework to test different methodological issues associated with non-uniform model weighting and similar metric-based approaches. The methodology presented is general, but will be applied to the specific case of summer temperature change in France, for which previous work has suggested potentially useful metrics associated with soil-atmosphere and cloud-temperature interactions. The relative performance of different simple statistical approaches to combining multiple model results based on metrics will be tested. The impact of ensemble size, observational errors, internal variability, and model similarity will be characterized. The potential improvement of metric-based approaches over the MMEM in terms of errors and uncertainties will be quantified.
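The perfect-model loop described above can be sketched in a few lines. This is a schematic illustration, not the authors' implementation: each model in turn serves as synthetic truth, the remaining models are weighted by the closeness of their present-day metric to the synthetic observation (the Gaussian kernel and its width are assumptions), and the weighted change estimate is compared with both the synthetic "true" change and the plain multi-model mean.

```python
import numpy as np

def perfect_model_test(metric_present, change_future, sigma=1.0):
    """For each pseudo-truth model, weight the others by closeness of their
    present-day metric to the synthetic observation, then compare the
    weighted change estimate (and the MMEM) with the synthetic truth."""
    metric_present = np.asarray(metric_present, float)
    change_future = np.asarray(change_future, float)
    err_weighted, err_mmem = [], []
    for i in range(len(metric_present)):
        others = np.arange(len(metric_present)) != i
        d = metric_present[others] - metric_present[i]
        w = np.exp(-0.5 * (d / sigma) ** 2)          # assumed weighting kernel
        w /= w.sum()
        posterior = np.sum(w * change_future[others])
        err_weighted.append(abs(posterior - change_future[i]))
        err_mmem.append(abs(change_future[others].mean() - change_future[i]))
    return np.mean(err_weighted), np.mean(err_mmem)

rng = np.random.default_rng(2)
metric = rng.normal(0, 1, 20)                        # present-day metric per model
change = 2.0 + 1.2 * metric + rng.normal(0, 0.3, 20) # metric informs the change
print(perfect_model_test(metric, change, sigma=0.5))
```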
A guide to calculating habitat-quality metrics to inform conservation of highly mobile species
Bieri, Joanna A.; Sample, Christine; Thogmartin, Wayne E.; Diffendorfer, James E.; Earl, Julia E.; Erickson, Richard A.; Federico, Paula; Flockhart, D. T. Tyler; Nicol, Sam; Semmens, Darius J.; Skraber, T.; Wiederholt, Ruscena; Mattsson, Brady J.
2018-01-01
Many metrics exist for quantifying the relative value of habitats and pathways used by highly mobile species. Properly selecting and applying such metrics requires substantial background in mathematics and understanding the relevant management arena. To address this multidimensional challenge, we demonstrate and compare three measurements of habitat quality: graph-, occupancy-, and demographic-based metrics. Each metric provides insights into system dynamics, at the expense of increasing amounts and complexity of data and models. Our descriptions and comparisons of diverse habitat-quality metrics provide means for practitioners to overcome the modeling challenges associated with management or conservation of such highly mobile species. Whereas previous guidance for applying habitat-quality metrics has been scattered in diversified tracks of literature, we have brought this information together into an approachable format including accessible descriptions and a modeling case study for a typical example that conservation professionals can adapt for their own decision contexts and focal populations.
Considerations for Resource Managers:
- Management objectives, proposed actions, data availability and quality, and model assumptions are all relevant considerations when applying and interpreting habitat-quality metrics.
- Graph-based metrics answer questions related to habitat centrality and connectivity, are suitable for populations with any movement pattern, quantify basic spatial and temporal patterns of occupancy and movement, and require the least data.
- Occupancy-based metrics answer questions about likelihood of persistence or colonization, are suitable for populations that undergo localized extinctions, quantify spatial and temporal patterns of occupancy and movement, and require a moderate amount of data.
- Demographic-based metrics answer questions about relative or absolute population size, are suitable for populations with any movement pattern, quantify demographic processes and population dynamics, and require the most data.
- More real-world examples applying occupancy-based, agent-based, and continuous-based metrics to seasonally migratory species are needed to better understand challenges and opportunities for applying these metrics more broadly.
Geospace Environment Modeling 2008-2009 Challenge: Ground Magnetic Field Perturbations
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Kuznetsova, M.; Ridley, A.; Raeder, J.; Vapirev, A.; Weimer, D.; Weigel, R. S.; Wiltberger, M.; Millward, G.; Rastatter, L.;
2011-01-01
Acquiring quantitative metrics-based knowledge about the performance of various space physics modeling approaches is central for the space weather community. Quantification of the performance helps the users of the modeling products to better understand the capabilities of the models and to choose the approach that best suits their specific needs. Further, metrics-based analyses are important for addressing the differences between various modeling approaches and for measuring and guiding the progress in the field. In this paper, the metrics-based results of the ground magnetic field perturbation part of the Geospace Environment Modeling 2008-2009 Challenge are reported. Predictions made by 14 different models, including an ensemble model, are compared to geomagnetic observatory recordings from 12 different northern hemispheric locations. Five different metrics are used to quantify the model performances for four storm events. It is shown that the ranking of the models is strongly dependent on the type of metric used to evaluate the model performance. None of the models rank near or at the top systematically for all used metrics. Consequently, one cannot pick the "absolute winner": the choice of the best model depends on the characteristics of the signal one is interested in. Model performances also vary from event to event. This is particularly clear for the root-mean-square difference and utility metric-based analyses. Further, analyses indicate that for some of the models, increasing the global magnetohydrodynamic model spatial resolution and including ring current dynamics improve the models' capability to generate more realistic ground magnetic field fluctuations.
Multi-objective optimization for generating a weighted multi-model ensemble
NASA Astrophysics Data System (ADS)
Lee, H.
2017-12-01
Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric: when considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, it confronts a significant challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is clear that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combine multiple performance metrics for global climate models and their dynamically downscaled regional climate simulations over North America and to generate a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly, with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
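For reference, the conventional single-metric weighting described above (weights proportional to skill, or inversely proportional to error) can be written in a few lines; the multi-objective optimization itself is beyond a short snippet. The error values and projections below are hypothetical.

```python
import numpy as np

def inverse_error_weights(errors, power=1.0):
    """Conventional single-metric weighting: weight_i proportional to
    1 / error_i**power, normalized to sum to one."""
    e = np.asarray(errors, dtype=float)
    w = 1.0 / np.maximum(e, 1e-12) ** power
    return w / w.sum()

rmse = np.array([0.8, 1.2, 2.5, 0.9])            # one error metric per model (toy values)
projections = np.array([2.1, 2.6, 3.4, 1.9])     # per-model future change (toy values)
w = inverse_error_weights(rmse)
# with several conflicting metrics there is no single such ranking,
# which is the trade-off problem the multi-objective approach addresses
print(w, np.sum(w * projections), projections.mean())
```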
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols from a finite alphabet, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: watersheds served as information filters, and streamflow time series were less random and more complex than those of precipitation. This reflects the fact that the watershed acts as an information filter in the hydrologic conversion process from precipitation to streamflow. The Nash-Sutcliffe efficiency metric increased as the complexity of the models increased, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based parameters of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
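A minimal sketch of the symbolization step and one of the listed metrics, mean information gain, computed here as the block-entropy difference H(L+1) - H(L) with a history of one symbol; the alphabet size, history length, and toy precipitation-to-streamflow data are assumptions for illustration.

```python
import numpy as np
from collections import Counter

def symbolize(series, n_symbols=4):
    """Map a time series to symbols 0..n_symbols-1 by quantile bins."""
    qs = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.searchsorted(qs, series)

def block_entropy(symbols, length):
    """Shannon entropy (bits) of words of the given length."""
    words = [tuple(symbols[i:i + length]) for i in range(len(symbols) - length + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mean_information_gain(symbols, history=1):
    """Conditional entropy of the next symbol given `history` previous
    symbols: H(history+1) - H(history). Lower values mean less randomness."""
    return block_entropy(symbols, history + 1) - block_entropy(symbols, history)

rng = np.random.default_rng(3)
precip = rng.gamma(0.6, 2.0, 3650)                                       # noisy daily driver
flow = np.convolve(precip, np.exp(-np.arange(30) / 7.0), mode="same")    # smoothed response
print(mean_information_gain(symbolize(precip)), mean_information_gain(symbolize(flow)))
```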
A no-reference video quality assessment metric based on ROI
NASA Astrophysics Data System (ADS)
Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan
2015-01-01
A no-reference video quality assessment metric based on the region of interest (ROI) is proposed in this paper. In the metric, objective video quality is evaluated by integrating the quality of two compression artifacts, i.e., blurring distortion and blocking distortion. A Gaussian kernel function was used to extract human density maps of the H.264-coded videos from subjective eye-tracking data. An objective bottom-up ROI extraction model was built based on the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model, and a frequency saliency model based on spectral residual. Only the objective saliency maps were then used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction metric has a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics, which measure all frames of the video, the metric proposed in this paper not only decreases computational complexity but also improves the correlation between subjective mean opinion scores (MOS) and objective scores.
Information Geometry for Landmark Shape Analysis: Unifying Shape Representation and Deformation
Peter, Adrian M.; Rangarajan, Anand
2010-01-01
Shape matching plays a prominent role in the comparison of similar structures. We present a unifying framework for shape matching that uses mixture models to couple both the shape representation and deformation. The theoretical foundation is drawn from information geometry wherein information matrices are used to establish intrinsic distances between parametric densities. When a parameterized probability density function is used to represent a landmark-based shape, the modes of deformation are automatically established through the information matrix of the density. We first show that given two shapes parameterized by Gaussian mixture models (GMMs), the well-known Fisher information matrix of the mixture model is also a Riemannian metric (actually, the Fisher-Rao Riemannian metric) and can therefore be used for computing shape geodesics. The Fisher-Rao metric has the advantage of being an intrinsic metric and invariant to reparameterization. The geodesic—computed using this metric—establishes an intrinsic deformation between the shapes, thus unifying both shape representation and deformation. A fundamental drawback of the Fisher-Rao metric is that it is not available in closed form for the GMM. Consequently, shape comparisons are computationally very expensive. To address this, we develop a new Riemannian metric based on generalized ϕ-entropy measures. In sharp contrast to the Fisher-Rao metric, the new metric is available in closed form. Geodesic computations using the new metric are considerably more efficient. We validate the performance and discriminative capabilities of these new information geometry-based metrics by pairwise matching of corpus callosum shapes. We also study the deformations of fish shapes that have various topological properties. A comprehensive comparative analysis is also provided using other landmark-based distances, including the Hausdorff distance, the Procrustes metric, landmark-based diffeomorphisms, and the bending energies of the thin-plate (TPS) and Wendland splines. PMID:19110497
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven Karl
This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
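A short sketch of the recommended measures, under the definitions commonly used for them (median log accuracy ratio as the median of ln(prediction/observation); median symmetric accuracy as 100*(exp(median|ln(prediction/observation)|) - 1)); the toy data illustrate why MAPE penalizes over- and under-prediction asymmetrically while the symmetric accuracy does not.

```python
import numpy as np

def mape(pred, obs):
    """Mean absolute percentage error (%); undefined when obs == 0."""
    return 100.0 * np.mean(np.abs((pred - obs) / obs))

def median_log_accuracy_ratio(pred, obs):
    """Median of ln(pred/obs); ~0 for unbiased predictions (bias measure)."""
    return np.median(np.log(pred / obs))

def median_symmetric_accuracy(pred, obs):
    """100*(exp(median|ln(pred/obs)|) - 1): percentage-like accuracy that
    treats over- and under-prediction symmetrically."""
    return 100.0 * (np.exp(np.median(np.abs(np.log(pred / obs)))) - 1.0)

obs = np.array([10.0, 20.0, 40.0, 80.0])
over = obs * 2.0      # factor-of-two over-prediction
under = obs / 2.0     # factor-of-two under-prediction
print(mape(over, obs), mape(under, obs))                       # 100 vs 50: asymmetric
print(median_symmetric_accuracy(over, obs),
      median_symmetric_accuracy(under, obs))                   # 100 vs 100: symmetric
print(median_log_accuracy_ratio(over, obs),
      median_log_accuracy_ratio(under, obs))                   # +ln2 vs -ln2: signed bias
```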
NASA Technical Reports Server (NTRS)
Rastaetter, L.; Kuznetsova, M.; Hesse, M.; Pulkkinen, A.; Glocer, A.; Yu, Y.; Meng, X.; Raeder, J.; Wiltberger, M.; Welling, D.;
2011-01-01
In this paper, the metrics-based results of the Dst part of the 2008-2009 GEM Metrics Challenge are reported. The Metrics Challenge asked modelers to submit results for 4 geomagnetic storm events and 5 different types of observations that can be modeled by statistical, climatological, or physics-based (e.g., MHD) models of the magnetosphere-ionosphere system. We present the results of over 25 model settings that were run at the Community Coordinated Modeling Center (CCMC) and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of one-hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of one-minute model data with the one-minute Dst index calculated by the United States Geological Survey (USGS).
Decision-relevant evaluation of climate models: A case study of chill hours in California
NASA Astrophysics Data System (ADS)
Jagannathan, K. A.; Jones, A. D.; Kerr, A. C.
2017-12-01
The past decade has seen a proliferation of different climate datasets, with over 60 climate models currently in use. Comparative evaluation and validation of models can help practitioners choose the most appropriate models for adaptation planning. However, such assessments are usually conducted for 'climate metrics' such as seasonal temperature, while sectoral decisions are often based on 'decision-relevant outcome metrics' such as growing degree days or chill hours. Since climate models predict different metrics with varying skill, the goal of this research is to conduct a bottom-up evaluation of model skill for 'outcome-based' metrics. Using chill hours (the number of hours in winter months during which the temperature is below 45 deg F) in Fresno, CA as a case, we assess how well different GCMs predict the historical mean and slope of chill hours, and whether and to what extent projections differ based on model selection. We then compare our results with other climate-based evaluations of the region to identify similarities and differences. For the model skill evaluation, historically observed chill hours were compared with simulations from 27 GCMs (and multiple ensembles). Model skill scores were generated based on a statistical hypothesis test of the comparative assessment. Future projections from RCP 8.5 runs were evaluated, and a simple bias correction was also conducted. Our analysis indicates that model skill in predicting the chill hour slope depends on skill in predicting mean chill hours, which results from the non-linear nature of the chill metric. However, there was no clear relationship between the models that performed well for the chill hour metric and those that performed well in other temperature-based evaluations (such as winter minimum temperature or diurnal temperature range). Further, contrary to conclusions from other studies, we also found that the multi-model mean or large ensemble mean results may not always be most appropriate for this outcome metric. Our assessment sheds light on key differences between global versus local skill, and broad versus specific skill, of climate models, highlighting that decision-relevant model evaluation may be crucial for providing practitioners with the best available climate information for their specific needs.
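The chill-hours outcome metric itself is straightforward to compute from an hourly temperature series, as in the sketch below; the choice of November-February as the winter window and the synthetic hourly series are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def chill_hours(hourly_temp_f, months=(11, 12, 1, 2), threshold_f=45.0):
    """Count winter hours below the threshold in an hourly temperature
    series (deg F) indexed by timestamp."""
    winter = hourly_temp_f[hourly_temp_f.index.month.isin(months)]
    return int((winter < threshold_f).sum())

# toy hourly series for one winter; real use would draw on station
# observations and bias-corrected GCM output
idx = pd.date_range("2000-11-01", "2001-03-01", freq="h")
temps = pd.Series(50 + 15 * np.sin(2 * np.pi * idx.dayofyear / 365)
                  - 8 * np.cos(2 * np.pi * idx.hour / 24), index=idx)
print(chill_hours(temps))
```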
A software quality model and metrics for risk assessment
NASA Technical Reports Server (NTRS)
Hyatt, L.; Rosenberg, L.
1996-01-01
A software quality model and its associated attributes are defined and used as the basis for a discussion of risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.
On the use of hidden Markov models for gaze pattern modeling
NASA Astrophysics Data System (ADS)
Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph
2016-05-01
Some of the conventional metrics derived from gaze patterns (on computer screens) to study visual attention, engagement, and fatigue are saccade counts, the nearest neighbor index (NNI), and the duration of dwells/fixations. Each of these metrics has drawbacks in modeling the behavior of gaze patterns; one such drawback comes from the fact that some portions of the screen are more important than others. This is addressed by computing the eye gaze metrics corresponding to important areas of interest (AOI) on the screen. There are some challenges in developing accurate AOI-based metrics: firstly, the definition of an AOI is always fuzzy; secondly, it is possible that the AOI may change adaptively over time. Hence, there is a need to introduce eye-gaze metrics that are aware of the AOI in the field of view; at the same time, the new metrics should be able to automatically select the AOI based on the nature of the gazes. In this paper, we propose a novel way of computing the NNI based on continuous hidden Markov models (HMM) that model the gazes as 2D Gaussian observations (x-y coordinates of the gaze) with the mean at the center of the AOI and covariance related to the concentration of gazes. The proposed modeling allows us to accurately compute the NNI metric in the presence of multiple, undefined AOI on the screen and in the presence of intermittent casual gazing, which is modeled as random gazes on the screen.
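A simplified stand-in for the approach described above: a Gaussian mixture (rather than a full continuous HMM, which would also model temporal transitions) locates candidate AOIs from raw gaze coordinates, and a Clark-Evans nearest neighbor index is then computed within each cluster. The two synthetic AOIs and all parameters are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.spatial import cKDTree

def nni(points):
    """Clark-Evans nearest neighbor index: observed mean NN distance over
    the expectation for a random pattern in the points' bounding box."""
    pts = np.asarray(points, float)
    d, _ = cKDTree(pts).query(pts, k=2)          # k=2: self + nearest neighbor
    observed = d[:, 1].mean()
    area = np.ptp(pts[:, 0]) * np.ptp(pts[:, 1])
    expected = 0.5 * np.sqrt(area / len(pts))
    return observed / expected

rng = np.random.default_rng(4)
aoi_a = rng.normal([200, 300], 15, (300, 2))     # gazes clustered on one AOI
aoi_b = rng.normal([800, 500], 25, (200, 2))     # gazes on a second AOI
gaze = np.vstack([aoi_a, aoi_b])

gmm = GaussianMixture(n_components=2, random_state=0).fit(gaze)
labels = gmm.predict(gaze)
for k in range(2):
    print(f"AOI {k}: center={gmm.means_[k].round(1)}, NNI={nni(gaze[labels == k]):.2f}")
```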
Evaluating Modeled Impact Metrics for Human Health, Agriculture Growth, and Near-Term Climate
NASA Astrophysics Data System (ADS)
Seltzer, K. M.; Shindell, D. T.; Faluvegi, G.; Murray, L. T.
2017-12-01
Simulated metrics that assess impacts on human health, agriculture growth, and near-term climate were evaluated using ground-based and satellite observations. The NASA GISS ModelE2 and GEOS-Chem models were used to simulate the near-present chemistry of the atmosphere. A suite of simulations that varied by model, meteorology, horizontal resolution, emissions inventory, and emissions year were performed, enabling an analysis of metric sensitivities to various model components. All simulations utilized consistent anthropogenic global emissions inventories (ECLIPSE V5a or CEDS), and an evaluation of simulated results was carried out for 2004-2006 and 2009-2011 over the United States and 2014-2015 over China. Results for O3- and PM2.5-based metrics featured minor differences due to the model resolutions considered here (2.0° × 2.5° and 0.5° × 0.666°), and model, meteorology, and emissions inventory each played larger roles in the variances. Surface metrics related to O3 were consistently high biased, though to varying degrees, demonstrating the need to evaluate particular modeling frameworks before O3 impacts are quantified. Surface metrics related to PM2.5 were diverse, indicating that a multimodel mean with robust results is a valuable tool in predicting PM2.5-related impacts. Oftentimes, the configuration that captured the change of a metric best over time differed from the configuration that best captured the magnitude of the same metric, demonstrating the challenge in skillfully simulating impacts. These results highlight the strengths and weaknesses of these models in simulating impact metrics related to air quality and near-term climate. With such information, the reliability of historical and future simulations can be better understood.
Sakieh, Yousef; Salmanmahiny, Abdolrassoul
2016-03-01
Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index were employed. The model takes advantage of three decision rules including neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting indices demonstrated insignificant differences between the spatial patterns of the ground truth and simulated layers, there was a considerable inconsistency between simulation results and the real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate the number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics employ the maximum potential of observed and simulated layers for a performance evaluation procedure, provide a basis for more robust interpretation of a calibration process, and also deepen modeler insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.
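Two of the configuration metrics used above (number of patches and largest patch index) can be computed from a binary developed/undeveloped raster as sketched below; the 8-neighbour connectivity rule and the toy observed/simulated patterns are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def patch_metrics(binary_map):
    """Number of patches and largest patch index (% of landscape area)
    for a binary developed/undeveloped raster, 8-neighbour connectivity."""
    labeled, n_patches = ndimage.label(binary_map, structure=np.ones((3, 3)))
    sizes = ndimage.sum(binary_map, labeled, index=np.arange(1, n_patches + 1))
    largest_patch_index = 100.0 * sizes.max() / binary_map.size
    return n_patches, largest_patch_index

rng = np.random.default_rng(5)
observed = (rng.random((200, 200)) < 0.15).astype(int)     # fragmented pattern
simulated = np.zeros((200, 200), int)
simulated[60:140, 60:140] = 1                              # compact, simplistic pattern
print("observed:", patch_metrics(observed))
print("simulated:", patch_metrics(simulated))
```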
Localized Multi-Model Extremes Metrics for the Fourth National Climate Assessment
NASA Astrophysics Data System (ADS)
Thompson, T. R.; Kunkel, K.; Stevens, L. E.; Easterling, D. R.; Biard, J.; Sun, L.
2017-12-01
We have performed localized analysis of scenario-based datasets for the Fourth National Climate Assessment (NCA4). These datasets include CMIP5-based Localized Constructed Analogs (LOCA) downscaled simulations at daily temporal resolution and 1/16th-degree spatial resolution. Over 45 temperature and precipitation extremes metrics have been processed using LOCA data, including threshold, percentile, and degree-days calculations. The localized analysis calculates trends in the temperature and precipitation extremes metrics for relatively small regions such as counties, metropolitan areas, climate zones, administrative areas, or economic zones. For NCA4, we are currently addressing metropolitan areas as defined by U.S. Census Bureau Metropolitan Statistical Areas. Such localized analysis provides essential information for adaptation planning at scales relevant to local planning agencies and businesses. Nearly 30 such regions have been analyzed to date. Each locale is defined by a closed polygon that is used to extract LOCA-based extremes metrics specific to the area. For each metric, single-model data at each LOCA grid location are first averaged over several 30-year historical and future periods. Then, for each metric, the spatial average across the region is calculated using model weights based on both model independence and reproducibility of current climate conditions. The range of single-model results is also captured on the same localized basis, and then combined with the weighted ensemble average for each region and each metric. For example, Boston-area cooling degree days and maximum daily temperature are shown for the RCP8.5 (red) and RCP4.5 (blue) scenarios. We also discuss inter-regional comparison of these metrics, as well as their relevance to risk analysis for adaptation planning.
On the new metrics for IMRT QA verification.
Garcia-Romero, Alejandro; Hernandez-Vitoria, Araceli; Millan-Cebrian, Esther; Alba-Escorihuela, Veronica; Serrano-Zabaleta, Sonia; Ortega-Pardina, Pablo
2016-11-01
The aim of this work is to search for new metrics that could give more reliable acceptance/rejection criteria for the IMRT verification process and to offer solutions to the discrepancies found among different conventional metrics. Therefore, besides conventional metrics, new ones are proposed and evaluated with new tools to find correlations among them. These new metrics are based on the processing of the dose-volume histogram information, evaluating the absorbed dose differences, the dose constraint fulfillment, or modified biomathematical treatment outcome models such as tumor control probability (TCP) and normal tissue complication probability (NTCP). An additional purpose is to establish whether the new metrics yield the same acceptance/rejection plan distribution as the conventional ones. Fifty-eight treatment plans concerning several patient locations are analyzed. All of them were verified prior to treatment using conventional metrics, and retrospectively after treatment with the new metrics. These new metrics include the definition of three continuous functions, based on dose-volume histograms resulting from measurements evaluated with a reconstructed dose system and also with a redundant Monte Carlo calculation. The 3D gamma function for every volume of interest is also calculated. The information is also processed to obtain ΔTCP or ΔNTCP for the considered volumes of interest. These biomathematical treatment outcome models have been modified to increase their sensitivity to dose changes. A robustness index from a radiobiological point of view is defined to classify plans by their robustness against dose changes. Dose difference metrics can be condensed into a single parameter, the dose difference global function, with an optimal cutoff that can be determined from a receiver operating characteristic (ROC) analysis of the metric. It is not always possible to correlate differences in biomathematical treatment outcome models with dose difference metrics. This is because the dose constraint is often far from the dose that has an actual impact on the radiobiological model, and therefore biomathematical treatment outcome models are insensitive to large dose differences between the verification system and the treatment planning system. As an alternative, the use of modified radiobiological models, which provide a better correlation, is proposed. In any case, it is better to choose plans that are robust from a radiobiological point of view. The robustness index defined in this work is a good predictor of the plan rejection probability according to metrics derived from modified radiobiological models. The global 3D gamma-based metric calculated for each plan volume shows a good correlation with the dose difference metrics and performs well in the acceptance/rejection process. Some discrepancies have been found in dose reconstruction depending on the algorithm employed. Significant and unavoidable discrepancies were found between the conventional metrics and the new ones. The dose difference global function and the 3D gamma for each plan volume are good classifiers with regard to dose difference metrics. ROC analysis is useful to evaluate the predictive power of the new metrics. The correlation between biomathematical treatment outcome models and the dose difference-based metrics is enhanced by using modified TCP and NTCP functions that take into account the dose constraints for each plan. The robustness index is useful to evaluate whether a plan is likely to be rejected.
Conventional verification should be replaced by the new metrics, which are clinically more relevant.
The data quality analyzer: a quality control program for seismic data
Ringler, Adam; Hagerty, M.T.; Holland, James F.; Gonzales, A.; Gee, Lind S.; Edwards, J.D.; Wilson, David; Baker, Adam
2015-01-01
The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a “grade” for each station. The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.
NASA Astrophysics Data System (ADS)
Shao, G.; Gallion, J.; Fei, S.
2016-12-01
Sound forest aboveground biomass estimation is required to monitor diverse forest ecosystems and their impacts on the changing climate. Lidar-based regression models have provided promising biomass estimates in most forest ecosystems. However, considerable uncertainties in biomass estimates have been reported in temperate hardwood and hardwood-dominated mixed forests. Varied site productivity in temperate hardwood forests diversifies height and diameter growth rates, which significantly reduces the correlation between tree height and diameter at breast height (DBH) in mature and complex forests. It is, therefore, difficult to use height-based lidar metrics to predict DBH-based field-measured biomass through a simple regression model that ignores the variation in site productivity. In this study, we established a multi-dimensional nonlinear regression model incorporating lidar metrics and site productivity classes derived from soil features. In the regression model, lidar metrics provided horizontal and vertical structural information, and productivity classes differentiated good and poor forest sites. The selection and combination of lidar metrics are discussed. Multiple regression models were employed and compared. An uncertainty analysis was applied to the best-fit model. The effects of site productivity on the lidar-based biomass model are also addressed.
A condition metric for Eucalyptus woodland derived from expert evaluations.
Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D
2018-02-01
The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
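A minimal sketch of the metric-building step described above: an ensemble of 30 bagged regression trees is fit to expert quality scores as a function of site variables and then applied to a new site. The training data here are synthetic stand-ins for the expert survey responses; only the general workflow, not the study's actual data or model settings, is shown.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

# Hypothetical training data: rows are expert-scored hypothetical sites,
# columns are 13 site variables (e.g., shrub cover, native forb richness, ...)
rng = np.random.default_rng(6)
n_sites, n_vars = 400, 13
X = rng.random((n_sites, n_vars))
# stand-in for expert quality scores (the real target comes from the expert survey)
y = (10 * (0.4 * X[:, 0] + 0.3 * np.sqrt(X[:, 1]) + 0.3 * X[:, 2] * X[:, 3])
     + rng.normal(0, 0.5, n_sites))

# ensemble of 30 bagged regression trees, as in the described metric
metric_model = BaggingRegressor(n_estimators=30, random_state=0).fit(X, y)

new_site = rng.random((1, n_vars))       # 13 field-measured variables for one site
print("predicted woodland quality:", metric_model.predict(new_site)[0])
```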
Information risk and security modeling
NASA Astrophysics Data System (ADS)
Zivic, Predrag
2005-03-01
This research paper presentation will feature current frameworks for addressing risk and security modeling and metrics. The paper will analyze technical-level risk and security metrics of Common Criteria/ISO15408, Centre for Internet Security guidelines, and NSA configuration guidelines, and the metrics used at this level. The view of IT operational standards on security metrics, such as GMITS/ISO13335 and ITIL/ITMS, and architectural guidelines such as ISO7498-2 will be explained. Business process level standards such as ISO17799, COSO, and CobiT will be presented with their control approach to security metrics. At the top level, maturity standards such as SSE-CMM/ISO21827, NSA Infosec Assessment, and CobiT will be explored and reviewed. For each defined level of security metrics, the research presentation will explore the appropriate usage of these standards. The paper will discuss the standards' approaches to conducting risk and security metrics. The research findings will demonstrate the need for a common baseline for both risk and security metrics. This paper will show the relation between the attribute-based common baseline and corporate assets and controls for risk and security metrics. It will be shown that such an approach spans all mentioned standards. The proposed approach's 3D visual presentation and the development of the Information Security Model will be analyzed and postulated. The presentation will clearly demonstrate the benefits of the proposed attribute-based approach and the defined risk and security space for modeling and measuring.
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.
2009-05-01
Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment Canada, is a coupled land-surface and hydrologic model. Results will demonstrate the conclusions a modeller might make regarding the value of additional watershed spatial discretization under both an aggregated (single-objective) and multi-objective model comparison framework.
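The bi-objective comparison described above rests on identifying the non-dominated (Pareto-optimal) parameter sets for each model. A minimal sketch, assuming both objectives are to be maximized (e.g., high-flow and low-flow NS); the candidate parameter sets and their scores are synthetic, and comparing two models would amount to computing and overlaying each model's front.

```python
import numpy as np

def nondominated(objectives):
    """Boolean mask of Pareto-optimal rows, assuming every objective is to
    be maximized (e.g., high-flow NS and low-flow NS)."""
    obj = np.asarray(objectives, float)
    keep = np.ones(len(obj), dtype=bool)
    for i, row in enumerate(obj):
        dominated = np.any(np.all(obj >= row, axis=1) &
                           np.any(obj > row, axis=1))
        keep[i] = not dominated
    return keep

rng = np.random.default_rng(7)
# toy NS pairs for many candidate parameter sets of one model
ns_high = rng.uniform(0.3, 0.9, 200)
ns_low = 1.1 - ns_high + rng.normal(0, 0.05, 200)   # conflicting objectives
front = nondominated(np.column_stack([ns_high, ns_low]))
print(f"{front.sum()} non-dominated parameter sets out of {len(front)}")
```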
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.
2017-04-01
Here, we implement a variance-based distance metric (Dn) to objectively assess the skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observation and model data pairs on common spatial and temporal grids, improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The Dn metric is a gamma-distributed statistic that is more general than the χ2 statistic commonly used to assess model fit, which requires the assumption that the model is unbiased and can only incorporate observational error in the analysis. The Dn statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step toward establishing a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the Dn metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE), encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.
Performance Metrics, Error Modeling, and Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling
2016-01-01
A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
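A worked sketch of the central point above under the stated linear, additive, Gaussian error model y = a + b*x + eps: bias, mean square error, and correlation computed directly from data agree with the values reconstructed from the error-model parameters (a, b, sigma) and the reference statistics. The synthetic data and parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.gamma(2.0, 3.0, 10_000)                 # reference (e.g., observations)
a, b, sigma = 0.5, 1.2, 1.0                     # parameters of the linear error model
y = a + b * x + rng.normal(0, sigma, x.size)    # "measurements" with linear, additive error

# conventional metrics computed directly from the data
bias = np.mean(y - x)
mse = np.mean((y - x) ** 2)
corr = np.corrcoef(x, y)[0, 1]

# the same metrics reconstructed from the error-model parameters (a, b, sigma)
mu, var = x.mean(), x.var()
bias_m = a + (b - 1) * mu
mse_m = bias_m ** 2 + (b - 1) ** 2 * var + sigma ** 2
corr_m = b * np.sqrt(var) / np.sqrt(b ** 2 * var + sigma ** 2)

print(bias, bias_m)
print(mse, mse_m)
print(corr, corr_m)
```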
Using Geometry-Based Metrics as Part of Fitness-for-Purpose Evaluations of 3D City Models
NASA Astrophysics Data System (ADS)
Wong, K.; Ellul, C.
2016-10-01
Three-dimensional geospatial information is being increasingly used in a range of tasks beyond visualisation. 3D datasets, however, are often being produced without exact specifications and at mixed levels of geometric complexity. This leads to variations within the models' geometric and semantic complexity as well as the degree of deviation from the corresponding real world objects. Existing descriptors and measures of 3D data such as CityGML's level of detail are perhaps only partially sufficient in communicating data quality and fitness-for-purpose. This study investigates whether alternative, automated, geometry-based metrics describing the variation of complexity within 3D datasets could provide additional relevant information as part of a process of fitness-for-purpose evaluation. The metrics include: mean vertex/edge/face counts per building; vertex/face ratio; minimum 2D footprint area; and minimum feature length. Each metric was tested on six 3D city models from international locations. The results show that geometry-based metrics can provide additional information on 3D city models as part of fitness-for-purpose evaluations. The metrics, while they cannot be used in isolation, may provide a complement to enhance existing data descriptors if backed up with local knowledge, where possible.
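A sketch of how several of these geometry-based metrics could be computed from a simple in-memory mesh representation; the dictionary-based data structure, the closed quad faces of the toy building, and the omission of the 2D footprint area are assumptions made for brevity, not the study's implementation.

```python
import numpy as np

def geometry_metrics(buildings):
    """Per-dataset metrics from a list of building meshes, where each building
    is a dict with 'vertices' (Nx3 array) and 'faces' (vertex-index tuples)."""
    v_counts = np.array([len(b["vertices"]) for b in buildings])
    f_counts = np.array([len(b["faces"]) for b in buildings])
    # edge lengths along each (closed) face loop
    edge_lengths = np.concatenate([
        np.linalg.norm(b["vertices"][list(f)]
                       - np.roll(b["vertices"][list(f)], -1, axis=0), axis=1)
        for b in buildings for f in b["faces"]])
    return {"mean_vertices_per_building": v_counts.mean(),
            "mean_faces_per_building": f_counts.mean(),
            "vertex_face_ratio": v_counts.sum() / f_counts.sum(),
            "min_feature_length": edge_lengths[edge_lengths > 0].min()}

# a single toy "building": a unit cube (8 vertices, 6 quad faces)
cube = {"vertices": np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float),
        "faces": [(0, 1, 3, 2), (4, 5, 7, 6), (0, 1, 5, 4),
                  (2, 3, 7, 6), (0, 2, 6, 4), (1, 3, 7, 5)]}
print(geometry_metrics([cube]))
```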
Fusion set selection with surrogate metric in multi-atlas based image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Tingting; Ruan, Dan
2016-02-01
Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts with investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogates and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.
Measures of model performance based on the log accuracy ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.
Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature; we highlight the benefits and drawbacks of MAPE and of proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
Measures of model performance based on the log accuracy ratio
Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.
2018-01-03
Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature; we highlight the benefits and drawbacks of MAPE and of proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
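As a rough illustration of the quantities named above, the sketch below computes MAPE, the log accuracy ratio, the median symmetric accuracy, and the symmetric signed percentage bias for synthetic positive-valued data. The formulas follow my reading of the abstract's definitions and should be checked against the paper; the data are made up.

```python
import numpy as np

def mape(obs, pred):
    """Mean absolute percentage error (in percent)."""
    return 100.0 * np.mean(np.abs((pred - obs) / obs))

def median_symmetric_accuracy(obs, pred):
    """Median symmetric accuracy based on the log accuracy ratio Q = pred/obs."""
    log_q = np.log(pred / obs)
    return 100.0 * (np.exp(np.median(np.abs(log_q))) - 1.0)

def symmetric_signed_percentage_bias(obs, pred):
    """Symmetric signed percentage bias based on the median log accuracy ratio."""
    m = np.median(np.log(pred / obs))
    return 100.0 * np.sign(m) * (np.exp(np.abs(m)) - 1.0)

# Illustrative positive-valued data (e.g., electron fluxes); values are made up.
obs = np.array([1.0e4, 2.5e4, 5.0e3, 8.0e4, 1.2e5])
pred = np.array([1.4e4, 2.0e4, 6.5e3, 5.0e4, 2.0e5])

print(f"MAPE = {mape(obs, pred):.1f}%")
print(f"Median symmetric accuracy = {median_symmetric_accuracy(obs, pred):.1f}%")
print(f"Symmetric signed percentage bias = {symmetric_signed_percentage_bias(obs, pred):+.1f}%")
```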
Using a safety forecast model to calculate future safety metrics.
DOT National Transportation Integrated Search
2017-05-01
This research sought to identify a process to improve long-range planning prioritization by using forecasted safety metrics in place of the existing Utah Department of Transportation Safety Index, a metric based on historical crash data. The res...
2016-01-01
Background: The price of food has long been considered one of the major factors that affects food choices. However, the price metric (e.g., the price of food per calorie or the price of food per gram) that individuals predominantly use when making food choices is unclear. Understanding which price metric is used is especially important for studying individuals with severe budget constraints because food price then becomes even more important in food choice. Objective: We assessed which price metric is used by low-income individuals in deciding what to eat. Methods: With the use of data from NHANES and the USDA Food and Nutrient Database for Dietary Studies, we created an agent-based model that simulated an environment representing the US population, wherein individuals were modeled as agents with a specific weight, age, and income. In our model, agents made dietary food choices while meeting their budget limits with the use of 1 of 3 different metrics for decision making: energy cost (price per calorie), unit price (price per gram), and serving price (price per serving). The food consumption patterns generated by our model were compared to 3 independent data sets. Results: The food choice behaviors observed in 2 of the data sets were found to be closest to the simulated dietary patterns generated by the price per calorie metric. The behaviors observed in the third data set were equidistant from the patterns generated by price per calorie and price per serving metrics, whereas results generated by the price per gram metric were further away. Conclusions: Our simulations suggest that dietary food choice based on price per calorie best matches actual consumption patterns and may therefore be the most salient price metric for low-income populations. PMID:27655757
Beheshti, Rahmatollah; Igusa, Takeru; Jones-Smith, Jessica
2016-11-01
The price of food has long been considered one of the major factors that affects food choices. However, the price metric (e.g., the price of food per calorie or the price of food per gram) that individuals predominantly use when making food choices is unclear. Understanding which price metric is used is especially important for studying individuals with severe budget constraints because food price then becomes even more important in food choice. We assessed which price metric is used by low-income individuals in deciding what to eat. With the use of data from NHANES and the USDA Food and Nutrient Database for Dietary Studies, we created an agent-based model that simulated an environment representing the US population, wherein individuals were modeled as agents with a specific weight, age, and income. In our model, agents made dietary food choices while meeting their budget limits with the use of 1 of 3 different metrics for decision making: energy cost (price per calorie), unit price (price per gram), and serving price (price per serving). The food consumption patterns generated by our model were compared to 3 independent data sets. The food choice behaviors observed in 2 of the data sets were found to be closest to the simulated dietary patterns generated by the price per calorie metric. The behaviors observed in the third data set were equidistant from the patterns generated by price per calorie and price per serving metrics, whereas results generated by the price per gram metric were further away. Our simulations suggest that dietary food choice based on price per calorie best matches actual consumption patterns and may therefore be the most salient price metric for low-income populations. © 2016 American Society for Nutrition.
Kireeva, Natalia V; Ovchinnikova, Svetlana I; Kuznetsov, Sergey L; Kazennov, Andrey M; Tsivadze, Aslan Yu
2014-02-01
This study concerns the large margin nearest neighbor classifier and its multi-metric extension as efficient approaches to metric learning, which aims to learn an appropriate distance/similarity function for the considered case studies. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering, and retrieval tasks. The paper describes the application of metric learning to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; their in silico assessment is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to the in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and on the predictive performance of the developed models has been analyzed, and the learned metric has been used in support vector machines. The metric learning results are illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metric affected nearest neighbor relations and the descriptor space.
NASA Astrophysics Data System (ADS)
Kireeva, Natalia V.; Ovchinnikova, Svetlana I.; Kuznetsov, Sergey L.; Kazennov, Andrey M.; Tsivadze, Aslan Yu.
2014-02-01
This study concerns the large margin nearest neighbor classifier and its multi-metric extension as efficient approaches to metric learning, which aims to learn an appropriate distance/similarity function for the considered case studies. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve performance in classification, clustering, and retrieval tasks. The paper describes the application of metric learning to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; their in silico assessment is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied to the in silico assessment of chemical liabilities, the impact of metric learning on structure-activity landscapes and on the predictive performance of the developed models has been analyzed, and the learned metric has been used in support vector machines. The metric learning results are illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metric affected nearest neighbor relations and the descriptor space.
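A hedged sketch of distance-metric learning for a nearest-neighbor classifier. The paper uses the large margin nearest neighbor (LMNN) method, which lives in third-party packages (e.g., metric-learn); the related Neighborhood Components Analysis from scikit-learn stands in here, and the descriptor data are synthetic placeholders rather than real chemical-liability data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data: rows = compounds described by numeric descriptors, y = liability class.
X, y = make_classification(n_samples=400, n_features=30, n_informative=10, random_state=0)

knn_plain = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn_learned = make_pipeline(
    StandardScaler(),
    NeighborhoodComponentsAnalysis(n_components=10, random_state=0),  # learns a linear metric
    KNeighborsClassifier(n_neighbors=5),
)

print("Euclidean k-NN accuracy:", cross_val_score(knn_plain, X, y, cv=5).mean())
print("Learned-metric k-NN accuracy:", cross_val_score(knn_learned, X, y, cv=5).mean())
```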
NASA Astrophysics Data System (ADS)
Gide, Milind S.; Karam, Lina J.
2016-08-01
With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated by using performance evaluation metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, they have notable shortcomings. In this work, we discuss shortcomings in existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density to overcome these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a 5-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. Additionally, the constructed database and corresponding subjective ratings provide an insight into which of the existing metrics and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark.
A neural net-based approach to software metrics
NASA Technical Reports Server (NTRS)
Boetticher, G.; Srinivas, Kankanahalli; Eichmann, David A.
1992-01-01
Software metrics provide an effective method for characterizing software. Metrics have traditionally been composed through the definition of an equation. This approach is limited by the requirement that all the interrelationships among the parameters be fully understood. This paper explores an alternative, neural network approach to modeling metrics. Experiments performed on two widely accepted metrics, McCabe and Halstead, indicate that the approach is sound, thus serving as the groundwork for further exploration into the analysis and design of software metrics.
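To make the idea concrete, here is a small hedged sketch of learning a software metric with a neural network: an MLP regressor maps per-module code features to a McCabe-like complexity target. The features, target, and network size are invented placeholders, not the setup used in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-module features: lines of code, branch count, operator count, operand count.
X = rng.uniform(0, 1, size=(300, 4)) * [500, 40, 300, 200]
# Stand-in complexity target loosely mimicking a McCabe-like measure (branches + 1) plus noise.
y = X[:, 1] + 1 + rng.normal(0, 2, 300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on held-out modules:", model.score(X_test, y_test))
```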
Newsome, Seth D.; Yeakel, Justin D.; Wheatley, Patrick V.; Tinker, M. Tim
2012-01-01
Ecologists are increasingly using stable isotope analysis to inform questions about variation in resource and habitat use from the individual to community level. In this study we investigate data sets from 2 California sea otter (Enhydra lutris nereis) populations to illustrate the advantages and potential pitfalls of applying various statistical and quantitative approaches to isotopic data. We have subdivided these tools, or metrics, into 3 categories: IsoSpace metrics, stable isotope mixing models, and DietSpace metrics. IsoSpace metrics are used to quantify the spatial attributes of isotopic data that are typically presented in bivariate (e.g., δ13C versus δ15N) 2-dimensional space. We review IsoSpace metrics currently in use and present a technique by which uncertainty can be included to calculate the convex hull area of consumers or prey, or both. We then apply a Bayesian-based mixing model to quantify the proportion of potential dietary sources to the diet of each sea otter population and compare this to observational foraging data. Finally, we assess individual dietary specialization by comparing a previously published technique, variance components analysis, to 2 novel DietSpace metrics that are based on mixing model output. As the use of stable isotope analysis in ecology continues to grow, the field will need a set of quantitative tools for assessing isotopic variance at the individual to community level. Along with recent advances in Bayesian-based mixing models, we hope that the IsoSpace and DietSpace metrics described here will provide another set of interpretive tools for ecologists.
The data quality analyzer: A quality control program for seismic data
NASA Astrophysics Data System (ADS)
Ringler, A. T.; Hagerty, M. T.; Holland, J.; Gonzales, A.; Gee, L. S.; Edwards, J. D.; Wilson, D.; Baker, A. M.
2015-03-01
The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several initiatives underway to enhance and track the quality of data produced from ASL seismic stations and to improve communication about data problems to the user community. The Data Quality Analyzer (DQA) is one such development and is designed to characterize seismic station data quality in a quantitative and automated manner. The DQA consists of a metric calculator, a PostgreSQL database, and a Web interface: The metric calculator, SEEDscan, is a Java application that reads and processes miniSEED data and generates metrics based on a configuration file. SEEDscan compares hashes of metadata and data to detect changes in either and performs subsequent recalculations as needed. This ensures that the metric values are up to date and accurate. SEEDscan can be run as a scheduled task or on demand. The PostgreSQL database acts as a central hub where metric values and limited station descriptions are stored at the channel level with one-day granularity. The Web interface dynamically loads station data from the database and allows the user to make requests for time periods of interest, review specific networks and stations, plot metrics as a function of time, and adjust the contribution of various metrics to the overall quality grade of the station. The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a "grade" for each station. The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.
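The aggregation step described above, in which metric values are combined into an adjustable station "grade", can be illustrated with a simple weighted average. The metric names follow the abstract; the scores, weights, and 0-100 scaling are assumptions for illustration only.

```python
# Hypothetical per-station metric scores, each already scaled to 0-100
# (names follow the metrics listed in the abstract; weights are arbitrary examples).
metrics = {
    "availability": 98.0,
    "gap_count": 90.0,
    "timing_quality": 95.0,
    "global_noise_model_deviation": 80.0,
    "station_noise_model_deviation": 85.0,
    "coherence_colocated": 92.0,
    "event_synthetic_comparison": 75.0,
}
weights = {
    "availability": 2.0,
    "gap_count": 1.0,
    "timing_quality": 1.0,
    "global_noise_model_deviation": 1.5,
    "station_noise_model_deviation": 1.5,
    "coherence_colocated": 1.0,
    "event_synthetic_comparison": 1.0,
}

# Weighted average of the contributing metrics gives the overall station grade.
grade = sum(metrics[k] * weights[k] for k in metrics) / sum(weights.values())
print(f"station grade: {grade:.1f}")
```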
Spatial modelling of landscape aesthetic potential in urban-rural fringes.
Sahraoui, Yohan; Clauzel, Céline; Foltête, Jean-Christophe
2016-10-01
The aesthetic potential of landscape has to be modelled to provide tools for land-use planning. This involves identifying landscape attributes and revealing individuals' landscape preferences. Landscape aesthetic judgments of individuals (n = 1420) were studied by means of a photo-based survey. A set of landscape visibility metrics was created to measure landscape composition and configuration in each photograph using spatial data. These metrics were used as explanatory variables in multiple linear regressions to explain aesthetic judgments. We demonstrate that landscape aesthetic judgments may be synthesized into three consensus groups. The statistical results obtained show that landscape visibility metrics have good explanatory power. Ultimately, we propose a spatial model of landscape aesthetic potential based on these results combined with systematic computation of visibility metrics. Copyright © 2016 Elsevier Ltd. All rights reserved.
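A hedged sketch of the core statistical step above, multiple linear regression of aesthetic judgments on visibility metrics, using invented metric columns and a synthetic score; the study's actual metrics and survey data are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical visibility metrics per photograph: % visible woodland, % visible built area,
# number of visible patches, visible skyline length (all made-up columns, scaled 0-1).
X = rng.uniform(0, 1, size=(200, 4))
# Stand-in mean aesthetic judgment per photo (roughly a 1-10 scale) with an assumed structure.
scores = 5 + 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 200)

reg = LinearRegression().fit(X, scores)
print("explained variance (R^2):", round(reg.score(X, scores), 2))
print("metric coefficients:", np.round(reg.coef_, 2))
```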
An, Ming-Wen; Mandrekar, Sumithra J; Branda, Megan E; Hillman, Shauna L; Adjei, Alex A; Pitot, Henry C; Goldberg, Richard M; Sargent, Daniel J
2011-10-15
The categorical definition of response assessed via the Response Evaluation Criteria in Solid Tumors has documented limitations. We sought to identify alternative metrics for tumor response that improve prediction of overall survival. Individual patient data from three North Central Cancer Treatment Group trials (N0026, n = 117; N9741, n = 1,109; and N9841, n = 332) were used. Continuous metrics of tumor size based on longitudinal tumor measurements were considered in addition to a trichotomized response (TriTR: response [complete or partial] vs. stable disease vs. progression). Cox proportional hazards models, adjusted for treatment arm and baseline tumor burden, were used to assess the impact of the metrics on subsequent overall survival, using a landmark analysis approach at 12, 16, and 24 weeks postbaseline. Model discrimination was evaluated by the concordance (c) index. The overall best response rates for the three trials were 26%, 45%, and 25%, respectively. Although nearly all metrics were statistically significantly associated with overall survival at the different landmark time points, the concordance indices (c-index) for the traditional response metrics ranged from 0.59 to 0.65; for the continuous metrics from 0.60 to 0.66; and for the TriTR metrics from 0.64 to 0.69. The c-indices for TriTR at 12 weeks were comparable with those at 16 and 24 weeks. Continuous tumor measurement-based metrics provided no predictive improvement over traditional response-based metrics or TriTR; TriTR had better predictive ability than best TriTR or confirmed response. If confirmed, TriTR represents a promising endpoint for future phase II trials. ©2011 AACR.
An, Ming-Wen; Mandrekar, Sumithra J.; Branda, Megan E.; Hillman, Shauna L.; Adjei, Alex A.; Pitot, Henry; Goldberg, Richard M.; Sargent, Daniel J.
2011-01-01
Purpose The categorical definition of response assessed via the Response Evaluation Criteria in Solid Tumors has documented limitations. We sought to identify alternative metrics for tumor response that improve prediction of overall survival. Experimental Design Individual patient data from three North Central Cancer Treatment Group trials (N0026, n=117; N9741, n=1109; N9841, n=332) were used. Continuous metrics of tumor size based on longitudinal tumor measurements were considered in addition to a trichotomized response (TriTR: Response vs. Stable vs. Progression). Cox proportional hazards models, adjusted for treatment arm and baseline tumor burden, were used to assess the impact of the metrics on subsequent overall survival, using a landmark analysis approach at 12-, 16- and 24-weeks post baseline. Model discrimination was evaluated using the concordance (c) index. Results The overall best response rates for the three trials were 26%, 45%, and 25% respectively. While nearly all metrics were statistically significantly associated with overall survival at the different landmark time points, the c-indices for the traditional response metrics ranged from 0.59-0.65; for the continuous metrics from 0.60-0.66 and for the TriTR metrics from 0.64-0.69. The c-indices for TriTR at 12-weeks were comparable to those at 16- and 24-weeks. Conclusions Continuous tumor-measurement-based metrics provided no predictive improvement over traditional response based metrics or TriTR; TriTR had better predictive ability than best TriTR or confirmed response. If confirmed, TriTR represents a promising endpoint for future Phase II trials. PMID:21880789
Partially supervised speaker clustering.
Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S
2012-05-01
Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment to the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm—linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the “bag of acoustic features” representation and statistical model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
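A small hedged sketch of the distance-metric comparison the abstract advocates: hierarchical clustering of stand-in "GMM mean supervectors" using euclidean versus cosine distance. The synthetic vectors exaggerate the directional-scattering idea (same direction per speaker, varying magnitude); the real pipeline (feature transformation, universal prior model, LSDA) is not reproduced.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

# Stand-in GMM mean supervectors: one high-dimensional vector per utterance, with two
# synthetic "speakers" whose utterances differ mainly in direction, not magnitude.
speaker_a = rng.normal(0, 1, 512)
speaker_b = rng.normal(0, 1, 512)
utterances = np.vstack(
    [s * rng.uniform(0.5, 2.0) + rng.normal(0, 0.3, 512)
     for s in ([speaker_a] * 10 + [speaker_b] * 10)]
)

# Cluster with each distance metric and compare the resulting utterance labels.
for metric in ("euclidean", "cosine"):
    z = linkage(pdist(utterances, metric=metric), method="average")
    labels = fcluster(z, t=2, criterion="maxclust")
    print(metric, "cluster labels:", labels)
```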
MESUR: USAGE-BASED METRICS OF SCHOLARLY IMPACT
DOE Office of Scientific and Technical Information (OSTI.GOV)
BOLLEN, JOHAN; RODRIGUEZ, MARKO A.; VAN DE SOMPEL, HERBERT
2007-01-30
The evaluation of scholarly communication items is now largely a matter of expert opinion or metrics derived from citation data. Both approaches can fail to take into account the myriad of factors that shape scholarly impact. Usage data has emerged as a promising complement to existing methods of assessment, but the formal groundwork to reliably and validly apply usage-based metrics of scholarly impact is lacking. The Andrew W. Mellon Foundation funded MESUR project constitutes a systematic effort to define, validate and cross-validate a range of usage-based metrics of scholarly impact by creating a semantic model of the scholarly communication process. The constructed model will serve as the basis for creating a large-scale semantic network that seamlessly relates citation, bibliographic and usage data from a variety of sources. A subsequent program that uses the established semantic network as a reference data set will determine the characteristics and semantics of a variety of usage-based metrics of scholarly impact. This paper outlines the architecture and methodology adopted by the MESUR project and its future direction.
Empirical Evaluation of Hunk Metrics as Bug Predictors
NASA Astrophysics Data System (ADS)
Ferzund, Javed; Ahsan, Syed Nadeem; Wotawa, Franz
Reducing the number of bugs is a crucial issue during software development and maintenance. Software process and product metrics are good indicators of software complexity. These metrics have been used to build bug predictor models to help developers maintain the quality of software. In this paper we empirically evaluate the use of hunk metrics as predictors of bugs. We present a technique for bug prediction that works at the smallest units of code change, called hunks. We build bug prediction models using random forests, which is an efficient machine learning classifier. Hunk metrics are used to train the classifier and each hunk metric is evaluated for its bug prediction capabilities. Our classifier can classify individual hunks as buggy or bug-free with 86% accuracy, 83% buggy hunk precision and 77% buggy hunk recall. We find that history-based and change-level hunk metrics are better predictors of bugs than code-level hunk metrics.
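A hedged sketch of the modeling step: a random forest classifier trained on hunk-level metrics and scored with accuracy, precision, and recall, plus per-metric importances. The feature matrix is synthetic; the paper's actual hunk metrics and mined labels are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Stand-in hunk metrics (e.g., lines added/deleted, hunk size, author experience, file history);
# the real feature set and buggy/bug-free labels come from mining a version-control history.
X, y = make_classification(n_samples=2000, n_features=12, weights=[0.7, 0.3], random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy :", round(accuracy_score(y_test, pred), 2))
print("precision:", round(precision_score(y_test, pred), 2))
print("recall   :", round(recall_score(y_test, pred), 2))
print("per-metric importance:", clf.feature_importances_.round(2))
```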
Modeling Mediterranean forest structure using airborne laser scanning data
NASA Astrophysics Data System (ADS)
Bottalico, Francesca; Chirici, Gherardo; Giannini, Raffaello; Mele, Salvatore; Mura, Matteo; Puxeddu, Michele; McRoberts, Ronald E.; Valbuena, Ruben; Travaglini, Davide
2017-05-01
The conservation of biological diversity is recognized as a fundamental component of sustainable development, and forests contribute greatly to its preservation. Structural complexity increases the potential biological diversity of a forest by creating multiple niches that can host a wide variety of species. To facilitate greater understanding of the contributions of forest structure to forest biological diversity, we modeled relationships between 14 forest structure variables and airborne laser scanning (ALS) data for two Italian study areas representing two common Mediterranean forests, conifer plantations and coppice oaks subjected to irregular intervals of unplanned and non-standard silvicultural interventions. The objectives were twofold: (i) to compare model prediction accuracies when using two types of ALS metrics, echo-based metrics and canopy height model (CHM)-based metrics, and (ii) to construct inferences in the form of confidence intervals for large area structural complexity parameters. Our results showed that the effects of the two study areas on accuracies were greater than the effects of the two types of ALS metrics. In particular, accuracies were less for the more complex study area in terms of species composition and forest structure. However, accuracies achieved using the echo-based metrics were only slightly greater than when using the CHM-based metrics, thus demonstrating that both options yield reliable and comparable results. Accuracies were greatest for dominant height (Hd) (R2 = 0.91; RMSE% = 8.2%) and mean height weighted by basal area (R2 = 0.83; RMSE% = 10.5%) when using the echo-based metrics (99th percentile of the echo height distribution and interquantile distance). For the forested area, the generalized regression (GREG) estimate of mean Hd was similar to the simple random sampling (SRS) estimate, 15.5 m for GREG and 16.2 m for SRS. Further, the GREG estimator with standard error of 0.10 m was considerably more precise than the SRS estimator with standard error of 0.69 m.
Woskie, Susan R; Bello, Dhimiter; Gore, Rebecca J; Stowe, Meredith H; Eisen, Ellen A; Liu, Youcheng; Sparer, Judy A; Redlich, Carrie A; Cullen, Mark R
2008-09-01
Because many occupational epidemiologic studies use exposure surrogates rather than quantitative exposure metrics, the UMass Lowell and Yale study of autobody shop workers provided an opportunity to evaluate the relative utility of surrogates and quantitative exposure metrics in an exposure response analysis of cross-week change in respiratory function. A task-based exposure assessment was used to develop several metrics of inhalation exposure to isocyanates. The metrics included the surrogates, job title, counts of spray painting events during the day, counts of spray and bystander exposure events, and a quantitative exposure metric that incorporated exposure determinant models based on task sampling and a personal workplace protection factor for respirator use, combined with a daily task checklist. The result of the quantitative exposure algorithm was an estimate of the daily time-weighted average respirator-corrected total NCO exposure (µg/m³). In general, these four metrics were found to be variable in agreement using measures such as weighted kappa and Spearman correlation. A logistic model for 10% drop in FEV1 from Monday morning to Thursday morning was used to evaluate the utility of each exposure metric. The quantitative exposure metric was the most favorable, producing the best model fit, as well as the greatest strength and magnitude of association. This finding supports the reports of others that reducing exposure misclassification can improve risk estimates that otherwise would be biased toward the null. Although detailed and quantitative exposure assessment can be more time consuming and costly, it can improve exposure-disease evaluations and is more useful for risk assessment purposes. The task-based exposure modeling method successfully produced estimates of daily time-weighted average exposures in the complex and changing autobody shop work environment. The ambient TWA exposures of all of the office workers and technicians and 57% of the painters were found to be below the current U.K. Health and Safety Executive occupational exposure limit (OEL) for total NCO of 20 µg/m³. When respirator use was incorporated, all personal daily exposures were below the U.K. OEL.
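The daily metric described above is essentially a respirator-corrected time-weighted average. The sketch below shows that arithmetic for a hypothetical task checklist; the task list, concentrations, durations, and protection factor are all invented for illustration.

```python
# Hypothetical one-day task checklist for a painter: (task, minutes, airborne NCO in ug/m3,
# respirator worn). Numbers are illustrative only, not values from the study.
tasks = [
    ("spray painting", 45, 300.0, True),
    ("bystander near booth", 30, 60.0, False),
    ("mixing/prep", 60, 20.0, False),
    ("office/paperwork", 345, 0.5, False),
]
PROTECTION_FACTOR = 10.0  # assumed workplace protection factor for the respirator

total_minutes = sum(minutes for _, minutes, _, _ in tasks)
weighted_sum = 0.0
for _, minutes, conc, respirator in tasks:
    effective = conc / PROTECTION_FACTOR if respirator else conc
    weighted_sum += effective * minutes

twa = weighted_sum / total_minutes  # respirator-corrected daily TWA, ug/m3
print(f"daily TWA total NCO: {twa:.1f} ug/m3 (UK OEL reference: 20 ug/m3)")
```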
Algal bioassessment metrics for wadeable streams and rivers of Maine, USA
Danielson, Thomas J.; Loftin, Cynthia S.; Tsomides, Leonidas; DiFranco, Jeanne L.; Connors, Beth
2011-01-01
Many state water-quality agencies use biological assessment methods based on lotic fish and macroinvertebrate communities, but relatively few states have incorporated algal multimetric indices into monitoring programs. Algae are good indicators for monitoring water quality because they are sensitive to many environmental stressors. We evaluated benthic algal community attributes along a landuse gradient affecting wadeable streams and rivers in Maine, USA, to identify potential bioassessment metrics. We collected epilithic algal samples from 193 locations across the state. We computed weighted-average optima for common taxa for total P, total N, specific conductance, % impervious cover, and % developed watershed, which included all land use that is no longer forest or wetland. We assigned Maine stream tolerance values and categories (sensitive, intermediate, tolerant) to taxa based on their optima and responses to watershed disturbance. We evaluated performance of algal community metrics used in multimetric indices from other regions and novel metrics based on Maine data. Metrics specific to Maine data, such as the relative richness of species characterized as being sensitive in Maine, were more correlated with % developed watershed than most metrics used in other regions. Few community-structure attributes (e.g., species richness) were useful metrics in Maine. Performance of algal bioassessment models would be improved if metrics were evaluated with attributes of local data before inclusion in multimetric indices or statistical models. ?? 2011 by The North American Benthological Society.
A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output
Stevanovic, Stefan; Pervan, Boris
2018-01-01
We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS phase-lock loop (PLL) linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250
NASA Technical Reports Server (NTRS)
Hochhalter, Jake D.; Littlewood, David J.; Christ, Robert J., Jr.; Veilleux, M. G.; Bozek, J. E.; Ingraffea, A. R.; Maniatty, Antionette M.
2010-01-01
The objective of this paper is to develop further a framework for computationally modeling microstructurally small fatigue crack growth in AA 7075-T651 [1]. The focus is on the nucleation event, when a crack extends from within a second-phase particle into a surrounding grain, since this has been observed to be an initiating mechanism for fatigue crack growth in this alloy. It is hypothesized that nucleation can be predicted by computing a non-local nucleation metric near the crack front. The hypothesis is tested by employing a combination of experimentation and finite element modeling in which various slip-based and energy-based nucleation metrics are tested for validity, where each metric is derived from a continuum crystal plasticity formulation. To investigate each metric, a non-local procedure is developed for the calculation of nucleation metrics in the neighborhood of a crack front. Initially, an idealized baseline model consisting of a single grain containing a semi-ellipsoidal surface particle is studied to investigate the dependence of each nucleation metric on lattice orientation, number of load cycles, and non-local regularization method. This is followed by a comparison of experimental observations and computational results for microstructural models constructed by replicating the observed microstructural geometry near second-phase particles in fatigue specimens. It is found that orientation strongly influences the direction of slip localization and, as a result, influences the nucleation mechanism. Also, the baseline models, replication models, and past experimental observation consistently suggest that a set of particular grain orientations is most likely to nucleate fatigue cracks. It is found that a continuum crystal plasticity model and a non-local nucleation metric can be used to predict the nucleation event in AA 7075-T651. However, nucleation metric threshold values that correspond to various nucleation governing mechanisms must be calibrated.
Visual salience metrics for image inpainting
NASA Astrophysics Data System (ADS)
Ardis, Paul A.; Singhal, Amit
2009-01-01
Quantitative metrics for successful image inpainting currently do not exist, with researchers instead relying upon qualitative human comparisons to evaluate their methodologies and techniques. In an attempt to rectify this situation, we propose two new metrics to capture the notions of noticeability and visual intent in order to evaluate inpainting results. The proposed metrics use a quantitative measure of visual salience based upon a computational model of human visual attention. We demonstrate how these two metrics repeatably correlate with qualitative opinion in a human observer study, correctly identify the optimum uses for exemplar-based inpainting (as specified in the original publication), and match qualitative opinion in published examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendell, Mark J.; Lei, Quanhong; Cozen, Myrna O.
2003-10-01
Metrics of culturable airborne microorganisms for either total organisms or suspected harmful subgroups have generally not been associated with symptoms among building occupants. However, the visible presence of moisture damage or mold in residences and other buildings has consistently been associated with respiratory symptoms and other health effects. This relationship is presumably caused by adverse but uncharacterized exposures to moisture-related microbiological growth. In order to assess this hypothesis, we studied relationships in U.S. office buildings between the prevalence of respiratory and irritant symptoms, the concentrations of airborne microorganisms that require moist surfaces on which to grow, and the presence of visible water damage. For these analyses we used data on buildings, indoor environments, and occupants collected from a representative sample of 100 U.S. office buildings in the U.S. Environmental Protection Agency's Building Assessment Survey and Evaluation (EPA BASE) study. We created 19 alternate metrics, using scales ranging from 3-10 units, that summarized the concentrations of airborne moisture-indicating microorganisms (AMIMOs) as indicators of moisture in buildings. Two were constructed to resemble a metric previously reported to be associated with lung function changes in building occupants; the others were based on another metric from the same group of Finnish researchers, concentration cutpoints from other studies, and professional judgment. We assessed three types of associations: between AMIMO metrics and symptoms in office workers, between evidence of water damage and symptoms, and between water damage and AMIMO metrics. We estimated (as odds ratios (ORs) with 95% confidence intervals) the unadjusted and adjusted associations between the 19 metrics and two types of weekly, work-related symptoms (lower respiratory and mucous membrane) using logistic regression models. Analyses used the original AMIMO metrics and were repeated with simplified dichotomized metrics. The multivariate models adjusted for other potential confounding variables associated with respondents, occupied spaces, buildings, or ventilation systems. Models excluded covariates for moisture-related risks hypothesized to increase AMIMO levels. We also estimated the association of water damage (using variables for specific locations in the study space or building, or summary variables) with the two symptom outcomes. Finally, using selected AMIMO metrics as outcomes, we constructed logistic regression models with observations at the building level to estimate unadjusted and adjusted associations of evident water damage with AMIMO metrics. All original AMIMO metrics showed little overall pattern of unadjusted or adjusted association with either symptom outcome. The 3-category metric resembling that previously used by others, which of all constructed metrics had the largest number of buildings in its top category, was not associated with symptoms in these buildings. However, most metrics with few buildings in their highest category showed increased risk for both symptoms in that category, especially metrics using cutpoints of >100 but <500 colony-forming units (CFU)/m³ for concentration of total culturable fungi. With AMIMO metrics dichotomized to compare the highest category with all lower categories combined, four metrics had unadjusted ORs between 1.4 and 1.6 for both symptom outcomes. The same four metrics had adjusted ORs of 1.7-2.1 for both symptom outcomes.
In models of water damage and symptoms, several specific locations of past water damage had significant associations with outcomes, with ORs ranging from 1.4 to 1.6. In bivariate models of water damage and selected AMIMO metrics, a number of specific types of water damage and several summary variables for water damage were very strongly associated with AMIMO metrics (significant ORs ranging above 15). Multivariate modeling with the dichotomous AMIMO metrics was not possible due to limited numbers of observations.
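For readers unfamiliar with the reporting above, the sketch below shows how an odds ratio and its 95% confidence interval come out of a logistic regression of a binary symptom outcome on a dichotomized exposure metric. The data are simulated with an assumed effect size; nothing here reproduces the BASE analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Stand-in occupant-level data: binary AMIMO indicator (highest category vs. lower)
# and a binary weekly work-related symptom outcome. Values are simulated, not BASE data.
n = 2000
amimo_high = rng.binomial(1, 0.15, n)
p_symptom = 1 / (1 + np.exp(-(-1.5 + 0.6 * amimo_high)))  # assumed true log-odds
symptom = rng.binomial(1, p_symptom)

X = sm.add_constant(amimo_high)
fit = sm.Logit(symptom, X).fit(disp=0)

odds_ratio = np.exp(fit.params[1])          # OR = exp(logistic coefficient)
ci_low, ci_high = np.exp(fit.conf_int()[1])  # 95% CI on the OR scale
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```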
Surface metrics: An alternative to patch metrics for the quantification of landscape structure
Kevin McGarigal; Sermin Tagil; Samuel A. Cushman
2009-01-01
Modern landscape ecology is based on the patch mosaic paradigm, in which landscapes are conceptualized and analyzed as mosaics of discrete patches. While this model has been widely successful, there are many situations where it is more meaningful to model landscape structure based on continuous rather than discrete spatial heterogeneity. The growing field of surface...
Process-oriented Observational Metrics for CMIP6 Climate Model Assessments
NASA Astrophysics Data System (ADS)
Jiang, J. H.; Su, H.
2016-12-01
Observational metrics based on satellite observations have been developed and effectively applied during post-CMIP5 model evaluation and improvement projects. As new physics and parameterizations continue to be included in models for the upcoming CMIP6, it is important to continue objective comparisons between observations and model results. This talk will summarize the process-oriented observational metrics and methodologies for constraining climate models with A-Train satellite observations and support CMIP6 model assessments. We target parameters and processes related to atmospheric clouds and water vapor, which are critically important for Earth's radiative budget, climate feedbacks, and water and energy cycles, and thus reduce uncertainties in climate models.
Liegl, Gregor; Wahl, Inka; Berghöfer, Anne; Nolte, Sandra; Pieh, Christoph; Rose, Matthias; Fischer, Felix
2016-03-01
To investigate the validity of a common depression metric in independent samples. We applied a common metrics approach based on item-response theory for measuring depression to four German-speaking samples that completed the Patient Health Questionnaire (PHQ-9). We compared the PHQ item parameters reported for this common metric to reestimated item parameters that derived from fitting a generalized partial credit model solely to the PHQ-9 items. We calibrated the new model on the same scale as the common metric using two approaches (estimation with shifted prior and Stocking-Lord linking). By fitting a mixed-effects model and using Bland-Altman plots, we investigated the agreement between latent depression scores resulting from the different estimation models. We found different item parameters across samples and estimation methods. Although differences in latent depression scores between different estimation methods were statistically significant, these were clinically irrelevant. Our findings provide evidence that it is possible to estimate latent depression scores by using the item parameters from a common metric instead of reestimating and linking a model. The use of common metric parameters is simple, for example, using a Web application (http://www.common-metrics.org) and offers a long-term perspective to improve the comparability of patient-reported outcome measures. Copyright © 2016 Elsevier Inc. All rights reserved.
Quality assessment for color reproduction using a blind metric
NASA Astrophysics Data System (ADS)
Bringier, B.; Quintard, L.; Larabi, M.-C.
2007-01-01
This paper deals with image quality assessment, a field that nowadays plays an important role in various image processing applications. A number of objective image quality metrics, correlating to varying degrees with subjective quality, have been developed during the last decade. Two categories of metrics can be distinguished: full-reference and no-reference. A full-reference metric evaluates the distortion introduced to an image with respect to a reference, while a no-reference approach attempts to model the judgment of image quality in a blind way. Unfortunately, a universal image quality model is not on the horizon, and empirical models established through psychophysical experimentation are generally used. In this paper, we focus only on the second category to evaluate the quality of color reproduction; a blind metric based on human visual system modeling is introduced. The objective results are validated by single-media and cross-media subjective tests.
NASA Astrophysics Data System (ADS)
Camp, H. A.; Moyer, Steven; Moore, Richard K.
2010-04-01
The Night Vision and Electronic Sensors Directorate's current time-limited search (TLS) model, which makes use of the targeting task performance (TTP) metric to describe image quality, does not explicitly account for the effects of visual clutter on observer performance. The TLS model is currently based on empirical fits to describe human performance for a time of day, spectrum and environment. Incorporating a clutter metric into the TLS model may reduce the number of these empirical fits needed. The masked target transform volume (MTTV) clutter metric has been previously presented and compared to other clutter metrics. Using real infrared imagery of rural images with varying levels of clutter, NVESD is currently evaluating the appropriateness of the MTTV metric. NVESD had twenty subject matter experts (SMEs) rank the amount of clutter in each scene in a series of pair-wise comparisons. MTTV metric values were calculated and then compared to the SME observers' rankings. The MTTV metric ranked the clutter in a similar manner to the SME evaluation, suggesting that the MTTV metric may emulate SME response. This paper is a first step in quantifying clutter and measuring its agreement with subjective human evaluation.
Learning Compositional Shape Models of Multiple Distance Metrics by Information Projection.
Luo, Ping; Lin, Liang; Liu, Xiaobai
2016-07-01
This paper presents a novel compositional contour-based shape model by incorporating multiple distance metrics to account for varying shape distortions or deformations. Our approach contains two key steps: 1) contour feature generation and 2) generative model pursuit. For each category, we first densely sample an ensemble of local prototype contour segments from a few positive shape examples and describe each segment using three different types of distance metrics. These metrics are diverse and complementary to each other to capture various shape deformations. We regard the parameterized contour segment plus an additive residual ϵ as a basic subspace, namely, an ϵ-ball, in the sense that it represents local shape variance under a certain distance metric. Using these ϵ-balls as features, we then propose a generative learning algorithm to pursue the compositional shape model, which greedily selects the most representative features under the information projection principle. In experiments, we evaluate our model on several challenging public data sets, and demonstrate that the integration of multiple shape distance metrics is capable of dealing with various shape deformations, articulations, and background clutter, hence boosting system performance.
Research on quality metrics of wireless adaptive video streaming
NASA Astrophysics Data System (ADS)
Li, Xuefei
2018-04-01
With the development of wireless networks and intelligent terminals, video traffic has increased dramatically. Adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, good wireless network QoS (Quality of Service) does not always guarantee that all customers have a good experience. Thus, new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate the quality metrics of wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC and RMSE), which compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity and accuracy of these quality metrics can be observed.
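A minimal sketch of the three evaluation measures named above, computed with SciPy on a handful of made-up MOS/prediction pairs.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Stand-in subjective MOS and model-predicted scores for a few test sequences (made up).
mos = np.array([4.5, 3.8, 2.1, 1.5, 3.2, 4.0, 2.8])
predicted = np.array([4.2, 3.5, 2.4, 1.8, 3.0, 4.3, 2.5])

srocc = spearmanr(mos, predicted).correlation    # monotonicity
plcc = pearsonr(mos, predicted)[0]               # linearity
rmse = np.sqrt(np.mean((mos - predicted) ** 2))  # accuracy

print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}  RMSE={rmse:.3f}")
```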
Prediction of Hydrologic Characteristics for Ungauged Catchments to Support Hydroecological Modeling
NASA Astrophysics Data System (ADS)
Bond, Nick R.; Kennard, Mark J.
2017-11-01
Hydrologic variability is a fundamental driver of ecological processes and species distribution patterns within river systems, yet the paucity of gauges in many catchments means that streamflow data are often unavailable for ecological survey sites. Filling this data gap is an important challenge in hydroecological research. To address this gap, we first test the ability to spatially extrapolate hydrologic metrics calculated from gauged streamflow data to ungauged sites as a function of stream distance and catchment area. Second, we examine the ability of statistical models to predict flow regime metrics based on climate and catchment physiographic variables. Our assessment focused on Australia's largest catchment, the Murray-Darling Basin (MDB). We found that hydrologic metrics were predictable only between sites within ˜25 km of one another. Beyond this, correlations between sites declined quickly. We found less than 40% of fish survey sites from a recent basin-wide monitoring program (n = 777 sites) to fall within this 25 km range, thereby greatly limiting the ability to utilize gauge data for direct spatial transposition of hydrologic metrics to biological survey sites. In contrast, statistical model-based transposition proved effective in predicting ecologically relevant aspects of the flow regime (including metrics describing central tendency, high- and low-flows intermittency, seasonality, and variability) across the entire gauge network (median R2 ˜ 0.54, range 0.39-0.94). Modeled hydrologic metrics thus offer a useful alternative to empirical data when examining biological survey data from ungauged sites. More widespread use of these statistical tools and modeled metrics could expand our understanding of flow-ecology relationships.
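A hedged sketch of the model-based transposition step: a random forest regression predicts a hydrologic metric from catchment climate and physiographic attributes, is cross-validated at gauged sites, and is then applied at ungauged sites. The predictors, metric, and data are synthetic stand-ins, not the MDB data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in gauged catchments: columns = climate/physiographic predictors
# (e.g., mean annual rainfall, PET, catchment area, slope, % forest); values are synthetic.
n_gauges = 300
X = rng.uniform(0, 1, size=(n_gauges, 5))
# Stand-in hydrologic metric at each gauge (e.g., a flow variability index) with an assumed dependence.
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 3] + rng.normal(0, 0.05, n_gauges)

model = RandomForestRegressor(n_estimators=300, random_state=0)
r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", r2.round(2), "median:", round(float(np.median(r2)), 2))

# Once trained on all gauges, the model can predict the metric at ungauged biological sites.
model.fit(X, y)
X_ungauged = rng.uniform(0, 1, size=(10, 5))
print("predicted metric at ungauged sites:", model.predict(X_ungauged).round(2))
```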
Improving Climate Projections Using "Intelligent" Ensembles
NASA Technical Reports Server (NTRS)
Baker, Noel C.; Taylor, Patrick C.
2015-01-01
Recent changes in the climate system have led to growing concern, especially in communities which are highly vulnerable to resource shortages and weather extremes. There is an urgent need for better climate information to develop solutions and strategies for adapting to a changing climate. Climate models provide excellent tools for studying the current state of climate and making future projections. However, these models are subject to biases created by structural uncertainties. Performance metrics (the systematic determination of model biases) succinctly quantify aspects of climate model behavior. Efforts to standardize climate model experiments and collect simulation data, such as the Coupled Model Intercomparison Project (CMIP), provide the means to directly compare and assess model performance. Performance metrics have been used to show that some models reproduce present-day climate better than others. Simulation data from multiple models are often used to add value to projections by creating a consensus projection from the model ensemble, in which each model is given an equal weight. It has been shown that the ensemble mean generally outperforms any single model. It is possible to use unequal weights to produce ensemble means, in which models are weighted based on performance (called "intelligent" ensembles). Can performance metrics be used to improve climate projections? Previous work introduced a framework for comparing the utility of model performance metrics, showing that the best metrics are related to the variance of top-of-atmosphere outgoing longwave radiation. These metrics improve present-day climate simulations of Earth's energy budget using the "intelligent" ensemble method. The current project identifies several approaches for testing whether performance metrics can be applied to future simulations to create "intelligent" ensemble-mean climate projections. It is shown that certain performance metrics test key climate processes in the models, and that these metrics can be used to evaluate model quality in both current and future climate states. This information will be used to produce new consensus projections and provide communities with improved climate projections for urgent decision-making.
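A hedged, generic sketch of the unequal-weighting idea: each model's weight derives from a performance metric (here, simply inverse RMSE against a synthetic present-day observation), and the weighted ensemble mean is compared with the equal-weight mean. This illustrates the concept only; the study's actual metrics and weighting scheme are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 'obs' is an observed present-day field (flattened), 'sims' holds the same
# field from several models with differing error levels. Values are synthetic.
n_grid, n_models = 1000, 8
obs = rng.normal(240, 15, n_grid)  # e.g., OLR in W m-2
sims = obs + rng.normal(0, rng.uniform(2, 12, (n_models, 1)), (n_models, n_grid))

# Performance metric per model: RMSE against observations of the present-day field.
rmse = np.sqrt(np.mean((sims - obs) ** 2, axis=1))

# Equal-weight ensemble mean vs. an "intelligent" inverse-RMSE-weighted mean.
equal_mean = sims.mean(axis=0)
weights = (1.0 / rmse) / np.sum(1.0 / rmse)
weighted_mean = np.average(sims, axis=0, weights=weights)

print("model RMSEs:", rmse.round(1))
print("weights    :", weights.round(2))
print("equal-weight ensemble RMSE:", round(float(np.sqrt(np.mean((equal_mean - obs) ** 2))), 2))
print("weighted ensemble RMSE    :", round(float(np.sqrt(np.mean((weighted_mean - obs) ** 2))), 2))
```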
Cantuaria, Manuella Lech; Suh, Helen; Løfstrøm, Per; Blanes-Vidal, Victoria
2016-11-01
The assignment of exposure is one of the main challenges faced by environmental epidemiologists. However, misclassification of exposures has not been explored in population epidemiological studies on air pollution from biodegradable wastes. The objective of this study was to investigate the use of different approaches for assessing exposure to air pollution from biodegradable wastes by analyzing (1) the misclassification of exposure committed by using these surrogates, (2) the existence of differential misclassification, (3) the effects that misclassification may have on health effect estimates and the interpretation of epidemiological results, and (4) the ability of the exposure measures to predict health outcomes using 10-fold cross validation. Four different exposure assessment approaches were studied: ammonia concentrations at the residence (Metric I), distance to the closest source (Metric II), number of sources within certain distances from the residence (Metric IIIa,b) and location in a specific region (Metric IV). Exposure-response models based on Metric I provided the highest predictive ability (72.3%) and goodness-of-fit, followed by IV, III and II. When compared to Metric I, Metric IV yielded the best results for exposure misclassification analysis and interpretation of health effect estimates, followed by Metric IIIb, IIIa and II. The study showed that modelled NH3 concentrations provide more accurate estimations of true exposure than distance-based surrogates, and that distance-based surrogates (especially those based on distance to the closest point source) are imprecise methods to identify exposed populations, although they may be useful for initial studies. Copyright © 2016 Elsevier GmbH. All rights reserved.
Value-based metrics and Internet-based enterprises
NASA Astrophysics Data System (ADS)
Gupta, Krishan M.
2001-10-01
Within the last few years, a host of value-based metrics like EVA, MVA, TBR, CFORI, and TSR have evolved. This paper attempts to analyze the validity and applicability of EVA and the Balanced Scorecard for Internet-based organizations. Despite the collapse of the dot-com model, firms engaged in e-commerce continue to struggle to find new ways to account for customer base, technology, employees, knowledge, etc., as part of the value of the firm. While some metrics, like the Balanced Scorecard, are geared towards internal use, others, like EVA, are for external use. Value-based metrics are used for performing internal audits as well as for comparing firms against one another, and can also be effectively utilized by individuals outside the firm looking to determine if the firm is creating value for its stakeholders.
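As a brief worked example of one of the metrics named above, EVA is commonly computed as net operating profit after taxes minus a capital charge (WACC times invested capital). The figures below are hypothetical.

```python
# Illustrative EVA (Economic Value Added) calculation; all figures are hypothetical.
nopat = 12_000_000            # net operating profit after taxes
invested_capital = 80_000_000
wacc = 0.11                   # weighted average cost of capital

capital_charge = wacc * invested_capital
eva = nopat - capital_charge
print(f"capital charge: ${capital_charge:,.0f}")
print(f"EVA:            ${eva:,.0f}")  # positive EVA -> value created for stakeholders
```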
NASA Astrophysics Data System (ADS)
Shrigley, Robert L.
This study was based on Hovland's four-part statement, Who says what to whom with what effect, the rationale for persuasive communication, a theoretical model for modifying attitudes. Part I was a survey of 139 preservice elementary teachers from which were generated the more credible characteristics of metric instructors, a central element in the who component of Hovland's model. They were: (1) background in mathematics and science, (2) fluency in metrics, (3) capability of thinking metrically, (4) a record of excellent teaching, (5) previous teaching of metric measurement to children, (6) responsibility for teaching metric content in methods courses, and (7) an open enthusiasm for metric conversion. Part II was a survey of 45 mathematics educators where belief statements were synthesized for the what component of Hovland's model. It found that math educators support metric measurement because: (1) it is consistent with our monetary system; (2) the conversion of units is easier into metric than English; (3) it is easier to teach and easier to learn than English measurement; there is less need for common fractions; (4) most nations use metric measurement; scientists have used it for decades; (5) American industry has begun to use it; (6) metric measurement will facilitate world trade and communication; and (7) American children will need it as adults; educational agencies are mandating it.
Nicol, Sam; Wiederholt, Ruscena; Diffendorfer, James E.; Mattsson, Brady; Thogmartin, Wayne E.; Semmens, Darius J.; Laura Lopez-Hoffman,; Norris, Ryan
2016-01-01
Mobile species with complex spatial dynamics can be difficult to manage because their population distributions vary across space and time, and because the consequences of managing particular habitats are uncertain when evaluated at the level of the entire population. Metrics to assess the importance of habitats and pathways connecting habitats in a network are necessary to guide a variety of management decisions. Given the many metrics developed for spatially structured models, it can be challenging to select the most appropriate one for a particular decision. To guide the management of spatially structured populations, we define three classes of metrics describing habitat and pathway quality based on their data requirements (graph-based, occupancy-based, and demographic-based metrics) and synopsize the ecological literature relating to these classes. Applying the first steps of a formal decision-making approach (problem framing, objectives, and management actions), we assess the utility of metrics for particular types of management decisions. Our framework can help managers with problem framing, with choosing metrics of habitat and pathway quality, and with elucidating the data needs for a particular metric. Our goal is to help managers narrow the range of suitable metrics for a management project and make the best use of limited resources in decision-making.
New exposure-based metric approach for evaluating O3 risk to North American aspen forests
K.E. Percy; M. Nosal; W. Heilman; T. Dann; J. Sober; A.H. Legge; D.F. Karnosky
2007-01-01
The United States and Canada currently use exposure-based metrics to protect vegetation from O3. Using 5 years (1999-2003) of co-measured O3, meteorology and growth response, we have developed exposure-based regression models that predict Populus tremuloides growth change within the North American ambient...
NASA Astrophysics Data System (ADS)
Stisen, S.; Demirel, C.; Koch, J.
2017-12-01
Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. The hydrological modelling community has a comprehensive and well-tested toolbox of metrics for assessing temporal model performance. In contrast, experience with evaluating spatial performance does not match the wide availability of spatial observations or the sophistication of model codes simulating the spatial variability of complex hydrological processes. This study aims to make a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and the fractions skill score, are tested in a spatial-pattern-oriented calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three SPAEF components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics, which allow comparing variables that are related but may differ in unit, in order to optimally exploit spatial observations made available by remote sensing platforms. We see great potential for SPAEF across environmental disciplines dealing with spatially distributed modelling.
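The abstract names the three SPAEF components (correlation, coefficient of variation, histogram overlap) but not their exact combination; the sketch below follows the commonly published SPAEF form, 1 − sqrt((α−1)² + (β−1)² + (γ−1)²), with the histogram overlap computed on z-scored fields. The 100-bin choice and the z-scoring details are assumptions.

```python
import numpy as np

def spaef(obs, sim, bins=100):
    """SPAEF-style spatial efficiency from correlation, CV ratio and histogram overlap.

    obs, sim : 1-D NumPy arrays of the observed and simulated spatial pattern
               (e.g., flattened maps with NaNs removed).
    """
    alpha = np.corrcoef(obs, sim)[0, 1]                                  # co-location (Pearson r)
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))   # ratio of coefficients of variation
    z_obs = (obs - obs.mean()) / obs.std()                               # z-score to remove bias and units
    z_sim = (sim - sim.mean()) / sim.std()
    edges = np.histogram_bin_edges(np.concatenate([z_obs, z_sim]), bins=bins)
    h_obs, _ = np.histogram(z_obs, bins=edges)
    h_sim, _ = np.histogram(z_sim, bins=edges)
    gamma = np.minimum(h_obs, h_sim).sum() / h_obs.sum()                 # histogram overlap
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
```

A value of 1 indicates a perfect spatial match; the score decreases as any of the three components degrades.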
Metrics, models and data for assessment of resilience of urban infrastructure systems : final report
DOT National Transportation Integrated Search
2016-12-01
This document is a summary of findings based on this research as presented in several conferences during the : course of the project. The research focused on identifying the basic metrics and models that can be used to : develop representations of pe...
Updating stand-level forest inventories using airborne laser scanning and Landsat time series data
NASA Astrophysics Data System (ADS)
Bolton, Douglas K.; White, Joanne C.; Wulder, Michael A.; Coops, Nicholas C.; Hermosilla, Txomin; Yuan, Xiaoping
2018-04-01
Vertical forest structure can be mapped over large areas by combining samples of airborne laser scanning (ALS) data with wall-to-wall spatial data, such as Landsat imagery. Here, we use samples of ALS data and Landsat time-series metrics to produce estimates of top height, basal area, and net stem volume for two timber supply areas near Kamloops, British Columbia, Canada, using an imputation approach. Both single-year and time-series metrics were calculated from annual, gap-free Landsat reflectance composites representing 1984-2014. Metrics included long-term means of vegetation indices, as well as measures of the variance and slope of the indices through time. Terrain metrics, generated from a 30 m digital elevation model, were also included as predictors. We found that imputation models improved with the inclusion of Landsat time-series metrics when compared to single-year Landsat metrics (relative RMSE decreased from 22.8% to 16.5% for top height, from 32.1% to 23.3% for basal area, and from 45.6% to 34.1% for net stem volume). Landsat metrics that characterized 30 years of stand history resulted in more accurate models (for all three structural attributes) than Landsat metrics that characterized only the most recent 10 or 20 years of stand history. To test model transferability, we compared imputed attributes against ALS-based estimates in nearby forest blocks (>150,000 ha) that were not included in model training or testing. Landsat-imputed attributes correlated strongly with ALS-based estimates in these blocks (R2 = 0.62 and relative RMSE = 13.1% for top height, R2 = 0.75 and relative RMSE = 17.8% for basal area, and R2 = 0.67 and relative RMSE = 26.5% for net stem volume), indicating model transferability. These findings suggest that in areas with spatially limited ALS data acquisitions, imputation models driven by Landsat time-series and terrain metrics can be effectively used to produce wall-to-wall estimates of key inventory attributes, providing an opportunity to update estimates of forest attributes in areas where inventory information is either out of date or non-existent.
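As a hedged sketch of the general workflow (not the study's actual imputation method or data), the snippet below fits a simple nearest-neighbour model that predicts a structural attribute from stand-in Landsat and terrain metrics and reports the relative RMSE used above. All variable names and the synthetic relationship are assumptions.

```python
# Hedged sketch: nearest-neighbour imputation of a forest attribute from
# synthetic "Landsat + terrain" predictors, evaluated by relative RMSE.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1500
X = rng.normal(size=(n, 5))                                        # stand-in time-series + terrain metrics
top_height = 20 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 2, n)  # synthetic ALS-derived response

X_tr, X_te, y_tr, y_te = train_test_split(X, top_height, test_size=0.3, random_state=1)
knn = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)           # simple kNN imputation model
pred = knn.predict(X_te)
rel_rmse = 100 * np.sqrt(np.mean((pred - y_te) ** 2)) / y_te.mean()
print(f"relative RMSE = {rel_rmse:.1f}%")
```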
Evaluating true BCI communication rate through mutual information and language models.
Speier, William; Arnold, Corey; Pouratian, Nader
2013-01-01
Brain-computer interface (BCI) systems are a promising means for restoring communication to patients suffering from "locked-in" syndrome. Research to improve system performance primarily focuses on means to overcome the low signal-to-noise ratio of electroencephalographic (EEG) recordings. However, the literature and methods are difficult to compare due to the array of evaluation metrics and the assumptions underlying them, including that: 1) all characters are equally probable, 2) character selection is memoryless, and 3) errors occur completely at random. The standardization of evaluation metrics that more accurately reflect the amount of information contained in BCI language output is critical for making progress. We present a mutual information-based metric that incorporates prior information and a model of systematic errors. The parameters of a system used in one study were re-optimized, showing that the metric used in optimization significantly affects the parameter values chosen and the resulting system performance. The results of 11 BCI communication studies were then evaluated using different metrics, including those previously used in the BCI literature and the newly advocated metric. Six studies' results varied based on the metric used for evaluation, and the proposed metric produced results that differed from those originally published in two of the studies. Standardizing metrics to accurately reflect the rate of information transmission is critical to properly evaluate and compare BCI communication systems and advance the field in an unbiased manner.
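To make the contrast concrete, the sketch below compares the widely used Wolpaw bit-rate formula, which assumes equiprobable characters and random symmetric errors, with a mutual-information estimate computed from a joint distribution that allows a non-uniform character prior and systematic confusions. The 3-symbol prior and confusion matrix are invented for illustration and are not the paper's data or exact metric.

```python
import numpy as np

def wolpaw_bits(n_classes, p_correct):
    """Wolpaw bits/selection: equiprobable symbols, symmetric random errors."""
    p = p_correct
    return (np.log2(n_classes) + p * np.log2(p)
            + (1 - p) * np.log2((1 - p) / (n_classes - 1)))

def mutual_information_bits(joint):
    """Mutual information (bits) from a joint distribution P(intended, selected)."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz]))

# Toy 3-symbol example with a non-uniform prior and systematic confusions.
prior = np.array([0.6, 0.3, 0.1])                 # e.g., character probabilities from a language model
confusion = np.array([[0.90, 0.08, 0.02],         # rows: P(selected | intended)
                      [0.10, 0.85, 0.05],
                      [0.05, 0.15, 0.80]])
joint = prior[:, None] * confusion
print("Wolpaw rate:        ", wolpaw_bits(3, np.trace(joint)), "bits/selection")
print("Mutual information: ", mutual_information_bits(joint), "bits/selection")
```

The two numbers diverge precisely when the three listed assumptions are violated, which is the motivation for the mutual-information-based metric.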
A New Metric for Quantifying Performance Impairment on the Psychomotor Vigilance Test
2012-01-01
used the coefficient of determination (R2) and the P-values based on Bartels' test of randomness of the residual error to quantify the goodness-of-fit ... we used the goodness-of-fit between each metric and the corresponding individualized two-process model output (Rajaraman et al., 2008, 2009) to assess ... individualized two-process model fits for each of the 12 subjects using the five metrics. The P-values are for Bartels'
NASA Astrophysics Data System (ADS)
Demirel, Mehmet C.; Mai, Juliane; Mendiguren, Gorka; Koch, Julian; Samaniego, Luis; Stisen, Simon
2018-02-01
Satellite-based earth observations offer great opportunities to improve spatial model predictions by means of spatial-pattern-oriented model evaluations. In this study, observed spatial patterns of actual evapotranspiration (AET) are utilised for spatial model calibration tailored to target the pattern performance of the model. The proposed calibration framework combines temporally aggregated observed spatial patterns with a new spatial performance metric and a flexible spatial parameterisation scheme. The mesoscale hydrologic model (mHM) is used to simulate streamflow and AET and has been selected due to its soil parameter distribution approach based on pedo-transfer functions and its built-in multi-scale parameter regionalisation. In addition, two new spatial parameter distribution options have been incorporated in the model in order to increase the flexibility of the root fraction coefficient and potential evapotranspiration correction parameterisations, based on soil type and vegetation density. These parameterisations are utilised as they are most relevant for the simulated AET patterns from the hydrologic model. Due to the fundamental challenges encountered when evaluating spatial pattern performance using standard metrics, we developed a simple but highly discriminative spatial metric, i.e. one comprised of three easily interpretable components measuring co-location, variation and distribution of the spatial data. The study shows that with flexible spatial model parameterisation used in combination with the appropriate objective functions, the simulated spatial patterns of actual evapotranspiration become substantially more similar to the satellite-based estimates. Overall, 26 parameters are identified for calibration through a sequential screening approach based on a combination of streamflow and spatial pattern metrics. The robustness of the calibrations is tested using an ensemble of nine calibrations based on different seed numbers using the shuffled complex evolution optimiser. The calibration results reveal a limited trade-off between streamflow dynamics and spatial patterns, illustrating the benefit of combining separate observation types and objective functions. At the same time, the simulated spatial patterns of AET improved significantly when an objective function based on observed AET patterns and a novel spatial performance metric was included, compared to traditional streamflow-only calibration. Since the overall water balance is usually a crucial goal in hydrologic modelling, spatial-pattern-oriented optimisation should always be accompanied by traditional discharge measurements. In such a multi-objective framework, the current study promotes the use of a novel bias-insensitive spatial pattern metric, which exploits the key information contained in the observed patterns while allowing the water balance to be informed by discharge observations.
NASA Astrophysics Data System (ADS)
Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao
2015-02-01
Designing an objective quality assessment for color-fused images is a demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a varying contrast sensitivity filter (CSF) with color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics are in good agreement with subjective perception.
Validation of Metrics as Error Predictors
NASA Astrophysics Data System (ADS)
Mendling, Jan
In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.
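The regression step described for Section 5.3 can be sketched generically: fit a logistic regression with model metrics as inputs and a binary error flag as the target, then cross-validate it. The two metrics below (model size and connector count) and the synthetic error-generating relationship are invented for illustration; they are not the chapter's actual EPC sample or metric set.

```python
# Hedged sketch: logistic regression predicting error probability from
# hypothetical process-model metrics, with 10-fold cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 600
size = rng.integers(5, 120, n)               # hypothetical model-size metric (number of nodes)
connectors = rng.integers(0, 40, n)          # hypothetical connector-count metric
# synthetic ground truth: larger, more connector-heavy models are more error-prone
p_error = 1 / (1 + np.exp(-(0.03 * size + 0.08 * connectors - 4)))
has_error = rng.binomial(1, p_error)

X = np.column_stack([size, connectors])
model = LogisticRegression().fit(X, has_error)
print("coefficients:", model.coef_[0])
print("10-fold CV accuracy:", cross_val_score(LogisticRegression(), X, has_error, cv=10).mean())
```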
Considering the difficulty in measuring restoration success for nonpoint source pollutants, nutrient assimilative capacity (NAS) offers an attractive systems-based metric. Here NAS was defined using an impulse-response model of nitrate fate and transport. Eleven parameters were e...
A New Metric for Land-Atmosphere Coupling Strength: Applications on Observations and Modeling
NASA Astrophysics Data System (ADS)
Tang, Q.; Xie, S.; Zhang, Y.; Phillips, T. J.; Santanello, J. A., Jr.; Cook, D. R.; Riihimaki, L.; Gaustad, K.
2017-12-01
A new metric is proposed to quantify the land-atmosphere (LA) coupling strength and is elaborated by correlating the surface evaporative fraction with impacting land and atmosphere variables (e.g., soil moisture, vegetation, and radiation). Based upon multiple linear regression, this approach simultaneously considers multiple factors and thus represents complex LA coupling mechanisms better than existing single-variable metrics. The standardized regression coefficients quantify the relative contributions from individual drivers in a consistent manner, avoiding the potential inconsistency in relative influence of conventional metrics. Moreover, the uniquely expandable nature of the new method allows us to verify and explore potentially important coupling mechanisms. Our observation-based application of the new metric shows moderate coupling with large spatial variations at the U.S. Southern Great Plains. The relative importance of soil moisture vs. vegetation varies by location. We also show that LA coupling strength is generally underestimated by single-variable methods due to their incompleteness. We also apply this new metric to evaluate the representation of LA coupling in the Accelerated Climate Modeling for Energy (ACME) V1 Contiguous United States (CONUS) regionally refined model (RRM). This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-734201
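A minimal sketch of the regression idea described above: regress the evaporative fraction on standardized candidate drivers, read the standardized coefficients as relative contributions, and take the multiple correlation as an overall coupling strength. The synthetic data, driver names, and coefficient values are assumptions, not the study's observations.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 500
soil_moisture = rng.normal(size=n)
vegetation = rng.normal(size=n)
radiation = rng.normal(size=n)
# synthetic evaporative fraction with soil moisture as the dominant driver
ef = 0.6 * soil_moisture + 0.3 * vegetation + 0.1 * radiation + rng.normal(0, 0.5, n)

X = np.column_stack([soil_moisture, vegetation, radiation])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize drivers
yz = (ef - ef.mean()) / ef.std()                    # standardize response
fit = LinearRegression().fit(Xz, yz)
print("standardized coefficients (relative contributions):", fit.coef_)
print("coupling strength (multiple R):", np.sqrt(fit.score(Xz, yz)))
```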
Person Re-Identification via Distance Metric Learning With Latent Variables.
Sun, Chong; Wang, Dong; Lu, Huchuan
2017-01-01
In this paper, we propose an effective person re-identification method with latent variables, which represents a pedestrian as the mixture of a holistic model and a number of flexible models. Three types of latent variables are introduced to model uncertain factors in the re-identification problem, including vertical misalignments, horizontal misalignments and leg posture variations. The distance between two pedestrians can be determined by minimizing a given distance function with respect to the latent variables, and then be used to conduct the re-identification task. In addition, we develop a latent metric learning method for learning an effective metric matrix, which can be solved in an iterative manner: once the latent information is specified, the metric matrix can be obtained using standard metric learning methods; with the computed metric matrix, the latent variables can be determined by searching the state space exhaustively. Finally, extensive experiments are conducted on seven databases to evaluate the proposed method. The experimental results demonstrate that our method achieves better performance than other competing algorithms.
A Comprehensive Validation Methodology for Sparse Experimental Data
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Blattnig, Steve R.
2010-01-01
A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klawikowski, S; Christian, J; Schott, D
Purpose: Pilot study developing a CT-texture based model for early assessment of treatment response during the delivery of chemoradiation therapy (CRT) for pancreatic cancer. Methods: Daily CT data acquired for 24 pancreatic head cancer patients using CT-on-rails, during the routine CT-guided CRT delivery with a radiation dose of 50.4 Gy in 28 fractions, were analyzed. The pancreas head was contoured on each daily CT. Texture analysis was performed within the pancreas head contour using a research tool (IBEX). Over 1300 texture metrics including grey level co-occurrence, run-length, histogram, neighborhood intensity difference, and geometrical shape features were calculated for each daily CT. Metric-trend information was established by finding the best fit of either a linear, quadratic, or exponential function for each metric value versus accumulated dose. Thus all the daily CT texture information was consolidated into a best-fit trend type for a given patient and texture metric. Linear correlation was performed between the patient histological response vector (good, medium, poor) and all combinations of 23 patient subgroups (statistical jackknife), determining which metrics were most correlated with response and repeatedly reliable across most patients. Control correlations against CT scanner, reconstruction kernel, and gated/nongated CT images were also calculated. A Euclidean distance measure was used to group/sort patient vectors based on the data of these trend-response metrics. Results: We found four specific trend-metrics (Gray Level Coocurence Matrix311-1InverseDiffMomentNorm, Gray Level Coocurence Matrix311-1InverseDiffNorm, Gray Level Coocurence Matrix311-1 Homogeneity2, and Intensity Direct Local StdMean) that were highly correlated with patient response and repeatedly reliable. Our four trend-metric model successfully ordered our pilot response dataset (p=0.00070). We found no significant correlation to our control parameters: gating (p=0.7717), scanner (p=0.9741), and kernel (p=0.8586). Conclusion: We have successfully created a CT-texture based early treatment response prediction model using the CTs acquired during the delivery of chemoradiation therapy for pancreatic cancer. Future testing is required to validate the model with more patient data.
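The trend-consolidation step can be sketched as follows: fit linear, quadratic, and exponential functions of accumulated dose to one texture-metric series and keep the family with the lowest residual sum of squares. The synthetic metric series, the initial guesses, and the RSS selection criterion are assumptions; the abstract does not state how the best fit was chosen.

```python
import numpy as np
from scipy.optimize import curve_fit

dose = np.linspace(1.8, 50.4, 28)                      # accumulated dose over 28 fractions (Gy)
rng = np.random.default_rng(4)
metric = 1.2 * np.exp(-0.03 * dose) + rng.normal(0, 0.02, dose.size)   # synthetic texture-metric series

# candidate trend families: (function, initial parameter guess)
models = {
    "linear":      (lambda d, a, b: a * d + b, (1.0, 0.0)),
    "quadratic":   (lambda d, a, b, c: a * d ** 2 + b * d + c, (0.0, 1.0, 0.0)),
    "exponential": (lambda d, a, k, c: a * np.exp(k * d) + c, (1.0, -0.01, 0.0)),
}

best, best_rss = None, np.inf
for name, (f, p0) in models.items():
    try:
        popt, _ = curve_fit(f, dose, metric, p0=p0, maxfev=10000)
        rss = np.sum((metric - f(dose, *popt)) ** 2)    # residual sum of squares
        if rss < best_rss:
            best, best_rss = name, rss
    except RuntimeError:
        continue                                        # skip families that fail to converge
print("best-fit trend type:", best)
```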
Simplified process model discovery based on role-oriented genetic mining.
Zhao, Weidong; Liu, Xi; Dai, Weihui
2014-01-01
Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show, by comparison with related studies, that the proposed method is more effective for streamlining the process.
Validation metrics for turbulent plasma transport
Holland, C.
2016-06-22
Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. Furthermore, the utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak, as part of a multi-year transport model validation activity.
Correlation between centrality metrics and their application to the opinion model
NASA Astrophysics Data System (ADS)
Li, Cong; Li, Qian; Van Mieghem, Piet; Stanley, H. Eugene; Wang, Huijuan
2015-03-01
In recent decades, a number of centrality metrics describing network properties of nodes have been proposed to rank the importance of nodes. In order to understand the correlations between centrality metrics and to approximate a high-complexity centrality metric by a strongly correlated low-complexity metric, we first study the correlation between centrality metrics in terms of their Pearson correlation coefficient and their similarity in ranking of nodes. In addition to considering the widely used centrality metrics, we introduce a new centrality measure, the degree mass. The mth-order degree mass of a node is the sum of the weighted degree of the node and its neighbors no further than m hops away. We find that the betweenness, the closeness, and the components of the principal eigenvector of the adjacency matrix are strongly correlated with the degree, the 1st-order degree mass and the 2nd-order degree mass, respectively, in both network models and real-world networks. We then theoretically prove that the Pearson correlation coefficient between the principal eigenvector and the 2nd-order degree mass is larger than that between the principal eigenvector and a lower order degree mass. Finally, we investigate the effect of the inflexible contrarians selected based on different centrality metrics in helping one opinion to compete with another in the inflexible contrarian opinion (ICO) model. Interestingly, we find that selecting the inflexible contrarians based on the leverage, the betweenness, or the degree is more effective in opinion-competition than using other centrality metrics in all types of networks. This observation is supported by our previous observations, i.e., that there is a strong linear correlation between the degree and the betweenness, as well as a high centrality similarity between the leverage and the degree.
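As a small, hedged illustration of the degree-mass idea (simplified here to an unweighted 1st-order version: a node's degree plus the degrees of its immediate neighbours), the snippet below computes it on a random graph and reports its Pearson correlation with betweenness alongside that of plain degree. The graph size and density are arbitrary choices, not the paper's network models.

```python
import networkx as nx
import numpy as np
from scipy.stats import pearsonr

G = nx.erdos_renyi_graph(n=500, p=0.02, seed=5)        # toy network model
deg = dict(G.degree())

def degree_mass_1(G, node):
    """Simplified (unweighted) 1st-order degree mass: the node's degree
    plus the degrees of its immediate neighbours."""
    return deg[node] + sum(deg[v] for v in G.neighbors(node))

nodes = list(G.nodes())
dm1 = np.array([degree_mass_1(G, v) for v in nodes])
btw_dict = nx.betweenness_centrality(G)                # high-complexity metric to approximate
btw = np.array([btw_dict[v] for v in nodes])

r_deg, _ = pearsonr([deg[v] for v in nodes], btw)
r_dm1, _ = pearsonr(dm1, btw)
print(f"Pearson r (degree, betweenness):          {r_deg:.2f}")
print(f"Pearson r (1st-order degree mass, betw.): {r_dm1:.2f}")
```

This is only meant to show how such correlations are computed; the paper's conclusions about which metric pairs correlate most strongly come from its own network models and real-world networks.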
Patrick, Christopher J; Yuan, Lester L
2017-07-01
Flow alteration is widespread in streams, but current understanding of the effects of differences in flow characteristics on stream biological communities is incomplete. We tested hypotheses about the effect of variation in hydrology on stream communities by using generalized additive models to relate watershed information to the values of different flow metrics at gauged sites. Flow models accounted for 54-80% of the spatial variation in flow metric values among gauged sites. We then used these models to predict flow metrics in 842 ungauged stream sites in the mid-Atlantic United States that were sampled for fish, macroinvertebrates, and environmental covariates. Fish and macroinvertebrate assemblages were characterized in terms of a suite of metrics that quantified aspects of community composition, diversity, and functional traits that were expected to be associated with differences in flow characteristics. We related modeled flow metrics to biological metrics in a series of stressor-response models. Our analyses identified both drying and base flow instability as explaining 30-50% of the observed variability in fish and invertebrate community composition. Variations in community composition were related to variations in the prevalence of dispersal traits in invertebrates and trophic guilds in fish. The results demonstrate that we can use statistical models to predict hydrologic conditions at bioassessment sites, which, in turn, we can use to estimate relationships between flow conditions and biological characteristics. This analysis provides an approach to quantify the effects of spatial variation in flow metrics using readily available biomonitoring data. © 2017 by the Ecological Society of America.
Standardized reporting of functioning information on ICF-based common metrics.
Prodinger, Birgit; Tennant, Alan; Stucki, Gerold
2018-02-01
In clinical practice and research, a variety of clinical data collection tools are used to collect information on people's functioning for clinical care, research, and national health information systems. Reporting on ICF-based common metrics enables standardized documentation of functioning information in national health information systems. The objective of this methodological note on applying the ICF in rehabilitation is to demonstrate how to report functioning information collected with a data collection tool on ICF-based common metrics. We first specify the requirements for the standardized reporting of functioning information. Secondly, we introduce the methods needed for transforming functioning data to ICF-based common metrics. Finally, we provide an example. The requirements for standardized reporting are as follows: 1) a common conceptual framework to enable content comparability across health information; and 2) a measurement framework so that scores from two or more clinical data collection tools can be directly compared. The methods needed to achieve these requirements are the ICF Linking Rules and the Rasch measurement model. Using data collected with the 36-item Short Form Health Survey (SF-36), the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0), and the Stroke Impact Scale 3.0 (SIS 3.0), the application of standardized reporting based on common metrics is demonstrated. A subset of items from the three tools linked to common chapters of the ICF (d4 Mobility, d5 Self-care and d6 Domestic life) were entered as "super items" into the Rasch model. Good fit was achieved with no residual local dependency and a unidimensional metric. A transformation table allows for comparison between scales, and between a scale and the reporting common metric. Being able to report functioning information collected with commonly used clinical data collection tools on ICF-based common metrics enables clinicians and researchers to continue using their tools while still being able to compare and aggregate the information within and across tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owen, D; Anderson, C; Mayo, C
Purpose: To extend the functionality of a commercial treatment planning system (TPS) to support (i) direct use of quantitative image-based metrics within treatment plan optimization and (ii) evaluation of dose-functional volume relationships to assist in functional image adaptive radiotherapy. Methods: A script was written that interfaces with a commercial TPS via an Application Programming Interface (API). The script executes a program that performs dose-functional volume analyses. Written in C#, the script reads the dose grid and correlates it with image data on a voxel-by-voxel basis through API extensions that can access registration transforms. A user interface was designed through WinForms to input parameters and display results. To test the performance of this program, image- and dose-based metrics computed from perfusion SPECT images aligned to the treatment planning CT were generated, validated, and compared. Results: The integration of image analysis information was successfully implemented as a plug-in to a commercial TPS. Perfusion SPECT images were used to validate the calculation and display of image-based metrics as well as dose-intensity metrics and histograms for defined structures on the treatment planning CT. Various biological dose correction models, custom image-based metrics, dose-intensity computations, and dose-intensity histograms were applied to analyze the image-dose profile. Conclusion: It is possible to add image analysis features to commercial TPSs through custom scripting applications. A tool was developed to enable the evaluation of image-intensity-based metrics in the context of functional targeting and avoidance. In addition to providing dose-intensity metrics and histograms that can be easily extracted from a plan database and correlated with outcomes, the system can also be extended to a plug-in optimization system, which can directly use the computed metrics for optimization of post-treatment tumor or normal tissue response models. Supported by NIH - P01 - CA059827.
ARM Data-Oriented Metrics and Diagnostics Package for Climate Model Evaluation Value-Added Product
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chengzhu; Xie, Shaocheng
A Python-based metrics and diagnostics package is currently being developed by the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Infrastructure Team at Lawrence Livermore National Laboratory (LLNL) to facilitate the use of long-term, high-frequency measurements from the ARM Facility in evaluating the regional climate simulation of clouds, radiation, and precipitation. This metrics and diagnostics package computes climatological means of targeted climate model simulations and generates tables and plots for comparing the model simulation with ARM observational data. The Coupled Model Intercomparison Project (CMIP) model data sets are also included in the package to enable model intercomparison as demonstrated in Zhang et al. (2017). The CMIP multi-model mean can serve as a reference for individual models. Basic performance metrics are computed to measure the accuracy of the mean state and variability of climate models. The evaluated physical quantities include cloud fraction, temperature, relative humidity, cloud liquid water path, total column water vapor, precipitation, sensible and latent heat fluxes, and radiative fluxes, with plans to extend to more fields, such as aerosol and microphysics properties. Process-oriented diagnostics focusing on individual cloud- and precipitation-related phenomena are also being developed for the evaluation and development of specific model physical parameterizations. The version 1.0 package is designed based on data collected at ARM's Southern Great Plains (SGP) Research Facility, with the plan to extend to other ARM sites. The metrics and diagnostics package is currently built upon standard Python libraries and additional Python packages developed by DOE (such as CDMS and CDAT). The ARM metrics and diagnostics package is available publicly with the hope that it can serve as an easy entry point for climate modelers to compare their models with ARM data. In this report, we first present the input data, which constitutes the core content of the metrics and diagnostics package (section 2), followed by a user's guide documenting the workflow and structure of the version 1.0 code, including step-by-step instructions for running the package (section 3).
Adaptive distance metric learning for diffusion tensor image segmentation.
Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W
2014-01-01
High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.
MJO simulation in CMIP5 climate models: MJO skill metrics and process-oriented diagnosis
NASA Astrophysics Data System (ADS)
Ahn, Min-Seop; Kim, Daehyun; Sperber, Kenneth R.; Kang, In-Sik; Maloney, Eric; Waliser, Duane; Hendon, Harry
2017-12-01
The Madden-Julian Oscillation (MJO) simulation diagnostics developed by the MJO Working Group and the process-oriented MJO simulation diagnostics developed by the MJO Task Force are applied to 37 Coupled Model Intercomparison Project phase 5 (CMIP5) models in order to assess model skill in representing amplitude, period, and coherent eastward propagation of the MJO, and to establish a link between MJO simulation skill and parameterized physical processes. Process-oriented diagnostics include the Relative Humidity Composite based on Precipitation (RHCP), Normalized Gross Moist Stability (NGMS), and the Greenhouse Enhancement Factor (GEF). Numerous scalar metrics are developed to quantify the results. Most CMIP5 models underestimate MJO amplitude, especially when outgoing longwave radiation (OLR) is used in the evaluation, and exhibit phase speeds that are too fast while lacking coherence between eastward propagation of precipitation/convection and the wind field. The RHCP-metric, indicative of the sensitivity of simulated convection to low-level environmental moisture, and the NGMS-metric, indicative of the efficiency of a convective atmosphere for exporting moist static energy out of the column, show robust correlations with a large number of MJO skill metrics. The GEF-metric, indicative of the strength of the column-integrated longwave radiative heating due to cloud-radiation interaction, is also correlated with the MJO skill metrics, but shows relatively lower correlations compared to the RHCP- and NGMS-metrics. Our results suggest that modifications to processes associated with moisture-convection coupling and the gross moist stability might be the most fruitful for improving simulations of the MJO. Though the GEF-metric exhibits lower correlations with the MJO skill metrics, the longwave radiation feedback is highly relevant for simulating the weak precipitation anomaly regime that may be important for the establishment of shallow convection and the transition to deep convection.
A Linearized Model for Flicker and Contrast Thresholds at Various Retinal Illuminances
NASA Technical Reports Server (NTRS)
Ahumada, Albert; Watson, Andrew
2015-01-01
We previously proposed a flicker visibility metric for bright displays, based on psychophysical data collected at a high mean luminance. Here we extend the metric to other mean luminances. This extension relies on a linear relation between log sensitivity and critical fusion frequency, and a linear relation between critical fusion frequency and log retinal illuminance. Consistent with our previous metric, the extended flicker visibility metric is measured in just-noticeable differences (JNDs).
Towards a Visual Quality Metric for Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1998-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
Purpose: The growing size and heterogeneity of training atlases necessitate sophisticated schemes to identify only the most relevant atlases for a specific multi-atlas-based image segmentation problem. This study aims to develop a model to infer the inaccessible oracle geometric relevance metric from surrogate image similarity metrics, and, based on such a model, provide guidance for atlas selection in multi-atlas-based image segmentation. Methods: We relate the oracle geometric relevance metric in label space to the surrogate metric in image space by a monotonically non-decreasing function with additive random perturbations. Subsequently, a surrogate's ability to prognosticate the oracle order for atlas subset selection is quantified probabilistically. Finally, important insights and guidance are provided for the design of fusion set size, balancing the competing demands to include the most relevant atlases and to exclude the most irrelevant ones. A systematic solution is derived based on an optimization framework. Model verification and performance assessment are performed based on clinical prostate MR images. Results: The proposed surrogate model was exemplified by a linear map with normally distributed perturbation, and verified with several commonly used surrogates, including MSD, NCC and (N)MI. The derived behaviors of different surrogates in atlas selection and their corresponding performance in the ultimate label estimate were validated. The performance of NCC and (N)MI was similarly superior to MSD, with a 10% higher atlas selection probability and a segmentation performance increase in DSC by 0.10 with the first and third quartiles of (0.83, 0.89), compared to (0.81, 0.89). The derived optimal fusion set size, valued at 7/8/8/7 for MSD/NCC/MI/NMI, agreed well with the appropriate range [4, 9] from empirical observation. Conclusion: This work has developed an efficacious probabilistic model to characterize the image-based surrogate metric on atlas selection. Analytical insights lead to valid guiding principles on fusion set size design.
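The linear-map-plus-noise surrogate model lends itself to a quick simulation: generate an oracle relevance score per atlas, map it to a noisy surrogate, and estimate how often the surrogate's top-k atlases coincide with the oracle's. The slope, noise level, atlas count, and fusion set size below are arbitrary assumptions, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(6)
n_atlases, k, n_trials = 40, 8, 2000
slope, noise_sd = 1.0, 0.5                 # assumed linear map and perturbation scale

hits = 0
for _ in range(n_trials):
    oracle = rng.normal(size=n_atlases)                               # inaccessible geometric relevance
    surrogate = slope * oracle + rng.normal(0, noise_sd, n_atlases)   # observable image similarity
    top_oracle = set(np.argsort(oracle)[-k:])                         # truly most relevant atlases
    top_surrogate = set(np.argsort(surrogate)[-k:])                   # atlases the surrogate would select
    hits += len(top_oracle & top_surrogate)
print("mean fraction of correctly selected atlases:", hits / (k * n_trials))
```

Repeating the experiment over a grid of noise levels and fusion set sizes gives the kind of selection-probability curves that motivate the optimal fusion set size discussed in the abstract.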
A causal examination of the effects of confounding factors on multimetric indices
Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Mitchell, Brian R.; Guntenspergen, Glenn R.
2013-01-01
The development of multimetric indices (MMIs) as a means of providing integrative measures of ecosystem condition is becoming widespread. An increasingly recognized problem for the interpretability of MMIs is controlling for the potentially confounding influences of environmental covariates. Most common approaches to handling covariates are based on simple notions of statistical control, leaving the causal implications of covariates and their adjustment unstated. In this paper, we use graphical models to examine some of the potential impacts of environmental covariates on the observed signals between human disturbance and potential response metrics. Using simulations based on various causal networks, we show how environmental covariates can both obscure and exaggerate the effects of human disturbance on individual metrics. We then examine, from a causal interpretation standpoint, the common practice of adjusting ecological metrics for environmental influences using only the set of sites deemed to be in reference condition. We present and examine the performance of an alternative approach to metric adjustment that uses the whole set of sites and models both environmental and human disturbance effects simultaneously. The findings from our analyses indicate that failing to model and adjust metrics can result in a systematic bias towards those metrics in which environmental covariates function to artificially strengthen the metric–disturbance relationship, resulting in MMIs that do not accurately measure impacts of human disturbance. We also find that a “whole-set modeling approach” requires fewer assumptions and is more efficient with the given information than the more commonly applied “reference-set” approach.
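The obscuring effect of a covariate can be reproduced in a few lines. In this hedged simulation (the coefficients and variable names are invented, and this is not the paper's causal-network setup), a covariate drives both disturbance and the metric; regressing the metric on disturbance alone attenuates the true effect, while a whole-set model that includes the covariate recovers it.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 1000
covariate = rng.normal(size=n)                          # e.g., catchment size or elevation
disturbance = 0.5 * covariate + rng.normal(size=n)      # disturbance is itself correlated with the covariate
metric = -1.0 * disturbance + 1.5 * covariate + rng.normal(size=n)   # true disturbance effect = -1

naive = LinearRegression().fit(disturbance.reshape(-1, 1), metric)
whole_set = LinearRegression().fit(np.column_stack([disturbance, covariate]), metric)
print("naive disturbance effect:    ", naive.coef_[0])       # biased (attenuated) by the covariate
print("whole-set disturbance effect:", whole_set.coef_[0])   # close to the true value of -1
```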
Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Sheahan, Malachi G; Shames, Murray L; Lee, Jason T; Bismuth, Jean
2015-12-01
Fundamental skills testing is now required for certification in general surgery. No model for assessing fundamental endovascular skills exists. Our objective was to develop a model that tests the fundamental endovascular skills and differentiates competent from noncompetent performance. The Fundamentals of Endovascular Surgery model was developed in silicone and virtual-reality versions. Twenty individuals (with a range of experience) performed four tasks on each model in three separate sessions. Tasks on the silicone model were performed under fluoroscopic guidance, and electromagnetic tracking captured motion metrics for catheter tip position. Image processing captured tool tip position and motion on the virtual model. Performance was evaluated using a global rating scale, blinded video assessment of error metrics, and catheter tip movement and position. Motion analysis was based on derivations of speed and position that define proficiency of movement (spectral arc length, duration of submovements, and number of submovements). Performance was significantly different between competent and noncompetent interventionalists for the three performance measures: motion metrics, error metrics, and the global rating scale. The mean error metric score was 6.83 for noncompetent individuals and 2.51 for the competent group (P < .0001). Median global rating scores were 2.25 for the noncompetent group and 4.75 for the competent users (P < .0001). The Fundamentals of Endovascular Surgery model successfully differentiates competent and noncompetent performance of fundamental endovascular skills based on a series of objective performance measures. This model could serve as a platform for skills testing for all trainees. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
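Of the motion metrics listed, the spectral arc length is the least self-explanatory. The function below is a simplified, hedged sketch of a SPARC-style smoothness measure (arc length of the normalised Fourier magnitude spectrum of the speed profile up to an assumed 20 Hz cutoff); the published SPARC additionally uses an adaptive amplitude threshold, which is omitted here, and this is not necessarily the exact computation used in the study.

```python
import numpy as np

def spectral_arc_length(speed, fs, f_max=20.0, n_pad=4):
    """Simplified SPARC-style smoothness metric for a movement speed profile.

    speed : 1-D array of speed samples
    fs    : sampling frequency (Hz)
    f_max : assumed frequency cutoff (Hz); more negative output = less smooth
    """
    n_fft = int(2 ** (np.ceil(np.log2(len(speed))) + n_pad))   # zero-padded FFT length
    freqs = np.arange(n_fft // 2) * fs / n_fft
    mag = np.abs(np.fft.fft(speed, n_fft))[: n_fft // 2]
    mag = mag / mag.max()                                      # normalise magnitude spectrum
    sel = freqs <= f_max                                       # keep the low-frequency band only
    f_sel, m_sel = freqs[sel] / f_max, mag[sel]                # scale frequency axis to [0, 1]
    return -np.sum(np.sqrt(np.diff(f_sel) ** 2 + np.diff(m_sel) ** 2))

# toy usage: a smooth bell-shaped speed profile sampled at 100 Hz
t = np.linspace(0, 2, 200)
speed = np.exp(-0.5 * ((t - 1) / 0.25) ** 2)
print("spectral arc length:", spectral_arc_length(speed, fs=100.0))
```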
Human-centric predictive model of task difficulty for human-in-the-loop control tasks
Majewicz Fey, Ann
2018-01-01
Quantitatively measuring the difficulty of a manipulation task in human-in-the-loop control systems is ill-defined. Currently, systems are typically evaluated through task-specific performance measures and post-experiment user surveys; however, these methods do not capture the real-time experience of human users. In this study, we propose to analyze and predict the difficulty of a bivariate pointing task, with a haptic device interface, using human-centric measurement data in terms of cognition, physical effort, and motion kinematics. Noninvasive sensors were used to record the multimodal responses of 14 human subjects performing the task. A data-driven approach for predicting task difficulty was implemented based on several task-independent metrics. We compare four possible models for predicting task difficulty to evaluate the roles of the various types of metrics, including: (I) a movement time model, (II) a fusion model using both physiological and kinematic metrics, (III) a model only with kinematic metrics, and (IV) a model only with physiological metrics. The results show significant correlation between task difficulty and the user sensorimotor response. The fusion model, integrating user physiology and motion kinematics, provided the best estimate of task difficulty (R2 = 0.927), followed by a model using only kinematic metrics (R2 = 0.921). Both models were better predictors of task difficulty than the movement time model (R2 = 0.847), derived from Fitts' law, a well-studied difficulty model for human psychomotor control. PMID:29621301
Measuring distance “as the horse runs”: Cross-scale comparison of terrain-based metrics
Buttenfield, Barbara P.; Ghandehari, M; Leyk, S; Stanislawski, Larry V.; Brantley, M E; Qiang, Yi
2016-01-01
Distance metrics play significant roles in spatial modeling tasks, such as flood inundation (Tucker and Hancock 2010), stream extraction (Stanislawski et al. 2015), power line routing (Kiessling et al. 2003) and analysis of surface pollutants such as nitrogen (Harms et al. 2009). Avalanche risk is based on slope, aspect, and curvature, all directly computed from distance metrics (Gutiérrez 2012). Distance metrics anchor variogram analysis, kernel estimation, and spatial interpolation (Cressie 1993). Several approaches are employed to measure distance. Planar metrics measure straight-line distance between two points (“as the crow flies”) and are simple and intuitive, but suffer from uncertainties. Planar metrics assume that Digital Elevation Model (DEM) pixels are rigid and flat, like tiny facets of ceramic tile approximating a continuous terrain surface. In truth, terrain can bend, twist and undulate within each pixel. Working with Light Detection and Ranging (lidar) data or high-resolution topography to achieve precise measurements presents challenges, as filtering can eliminate or distort significant features (Passalacqua et al. 2015). The current availability of lidar data is far from comprehensive in developed nations, and non-existent in many rural and undeveloped regions. Notwithstanding computational advances, distance estimation on DEMs has never been systematically assessed, due to assumptions that improvements are so small that surface adjustment is unwarranted. For individual pixels inaccuracies may be small, but additive effects can propagate dramatically, especially in regional models (e.g., disaster evacuation) or global models (e.g., sea level rise) where pixels span dozens to hundreds of kilometers (Usery et al. 2003). Such models are increasingly common, lending compelling reasons to understand shortcomings in the use of planar distance metrics. Researchers have studied curvature-based terrain modeling. Jenny et al. (2011) use curvature to generate hierarchical terrain models. Schneider (2001) creates a ‘plausibility’ metric for DEM-extracted structure lines. d’Oleire-Oltmanns et al. (2014) adopt object-based image processing as an alternative to working with DEMs, acknowledging that the pre-processing involved in converting terrain into an object model is computationally intensive and likely infeasible for some applications. This paper compares planar distance with surface-adjusted distance, evolving from distance “as the crow flies” to distance “as the horse runs”. Several methods are compared for DEMs spanning a range of resolutions for the study area and validated against a 3 meter (m) lidar data benchmark. Error magnitudes vary with pixel size and with the method of surface adjustment. The rate of error increase may also vary with landscape type (terrain roughness, precipitation regimes and land settlement patterns). Cross-scale analysis for a single study area is reported here. Additional areas will be presented at the conference.
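The difference between the two distance notions is easy to see on a single DEM profile. The sketch below uses an invented 30 m posting and hypothetical elevations; it is only meant to illustrate how surface-adjusted ("as the horse runs") distance accumulates the vertical relief that planar ("as the crow flies") distance ignores.

```python
import numpy as np

# Toy elevation profile between two points, sampled at an assumed 30 m DEM posting.
dx = 30.0                                                            # horizontal spacing (m)
elev = np.array([350., 362., 371., 365., 380., 402., 398., 410.])    # hypothetical elevations (m)

planar = dx * (len(elev) - 1)                                        # "as the crow flies" distance
surface = np.sum(np.sqrt(dx ** 2 + np.diff(elev) ** 2))              # "as the horse runs" along the surface
print(f"planar distance : {planar:.1f} m")
print(f"surface distance: {surface:.1f} m  (+{100 * (surface / planar - 1):.2f}%)")
```

On a per-pixel basis the difference is small, but summed along long routes or over coarse pixels it can grow into the additive errors the abstract warns about.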
Michael E. Goerndt; Vincente J. Monleon; Hailemariam Temesgen
2010-01-01
Three sets of linear models were developed to predict several forest attributes, using stand-level and single-tree remote sensing (STRS) light detection and ranging (LiDAR) metrics as predictor variables. The first used only area-level metrics (ALM) associated with first-return height distribution, percentage of cover, and canopy transparency. The second alternative...
Comparison of Highly Resolved Model-Based Exposure ...
Human exposure to air pollution in many studies is represented by ambient concentrations from space-time kriging of observed values. Space-time kriging techniques based on a limited number of ambient monitors may fail to capture the concentration from local sources. Further, because people spend more time indoors, using ambient concentration to represent exposure may cause error. To quantify the associated exposure error, we computed a series of six different hourly-based exposure metrics at 16,095 Census blocks of three counties in North Carolina for CO, NOx, PM2.5, and elemental carbon (EC) during 2012. These metrics include ambient background concentration from space-time ordinary kriging (STOK), ambient on-road concentration from the Research LINE source dispersion model (R-LINE), a hybrid concentration combining STOK and R-LINE, and their associated indoor concentrations from an indoor infiltration mass balance model. Using a hybrid-based indoor concentration as the standard, the comparison showed that outdoor STOK metrics yielded large error at both the population level (67% to 93%) and the individual level (average bias between −10% and 95%). For pollutants with significant contribution from on-road emission (EC and NOx), the on-road based indoor metric performs the best at the population level (error less than 52%). At the individual level, however, the STOK-based indoor concentration performs the best (average bias below 30%). For PM2.5, due to the relatively low co
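A minimal sketch of a steady-state indoor infiltration mass balance of the general kind used to translate ambient concentrations into indoor metrics; the penetration, air-exchange, and loss-rate values are illustrative assumptions, not parameters from the study.

```python
def indoor_concentration(c_ambient, penetration=1.0, air_exchange=0.5, decay=0.2):
    """
    Steady-state infiltration factor: F_inf = P * a / (a + k),
    where a is the air-exchange rate (1/h) and k the loss rate (1/h).
    All parameter values here are illustrative, not from the study.
    """
    f_inf = penetration * air_exchange / (air_exchange + decay)
    return f_inf * c_ambient

# Hypothetical hourly ambient PM2.5 values (ug/m^3)
ambient = [12.0, 15.5, 9.8, 20.1]
indoor = [indoor_concentration(c) for c in ambient]
print(indoor)
```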
SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric and even a standard to compare amongst candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics’ quality, based on a novel perspective of the metric as surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates’ ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates’ quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with surrogate metric exemplified by several widely-used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates’ behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with an eCNR of 0.12 resulted in statistically better segmentation (mean DSC of about 0.85, first and third quartiles of 0.83 and 0.89) than MSD with an eCNR of 0.10 (mean DSC of 0.84, first and third quartiles of 0.81 and 0.89). Conclusion: The designed eCNR is capable of characterizing surrogate metrics’ quality in prognosticating the oracle relevance value. It has been demonstrated to be correlated with the performance of relevant atlas selection and ultimate label fusion.
NASA Astrophysics Data System (ADS)
Bulliner, E. A., IV; Lindner, G. A.; Bouska, K.; Paukert, C.; Jacobson, R. B.
2017-12-01
Within large-river ecosystems, floodplains serve a variety of important ecological functions. A recent survey of 80 managers of floodplain conservation lands along the Upper and Middle Mississippi and Lower Missouri Rivers in the central United States found that the most critical information needed to improve floodplain management centered on metrics for characterizing depth, extent, frequency, duration, and timing of inundation. These metrics can be delivered to managers efficiently through cloud-based interactive maps. To calculate these metrics, we interpolated an existing one-dimensional hydraulic model for the Lower Missouri River, which simulated water surface elevations at cross sections spaced closely enough (<1 km) to sufficiently characterize water surface profiles along an approximately 800 km stretch upstream from the confluence with the Mississippi River over an 80-year record at a daily time step. To translate these water surface elevations to inundation depths, we subtracted a merged terrain model consisting of floodplain LIDAR and bathymetric surveys of the river channel. This approach resulted in a 29,000+ day time series of inundation depths across the floodplain using grid cells with 30 m spatial resolution. Initially, we used these data on a local workstation to calculate a suite of nine spatially distributed inundation metrics for the entire model domain. These metrics are calculated on a per pixel basis and encompass a variety of temporal criteria generally relevant to flora and fauna of interest to floodplain managers, including, for example, the average number of days inundated per year within a growing season. Using a local workstation, calculating these metrics for the entire model domain requires several hours. However, for the needs of individual floodplain managers working at site scales, these metrics may be too general and inflexible. Instead of creating a priori a suite of inundation metrics able to satisfy all user needs, we present the usage of Google's cloud-based Earth Engine API to allow users to define and query their own inundation metrics from our dataset and produce maps nearly instantaneously. This approach allows users to select the time periods and inundation depths germane to managing local species, potentially facilitating conservation of floodplain ecosystems.
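A minimal sketch of one of the per-pixel metrics named above (average number of days inundated per year within a growing season), computed from a synthetic daily depth series; the dates, depth threshold, and growing-season window are assumptions.

```python
import numpy as np
import pandas as pd

# Hypothetical daily inundation depths (m) for a single 30 m pixel
dates = pd.date_range("1930-01-01", "2009-12-31", freq="D")
rng = np.random.default_rng(0)
depth = np.clip(rng.normal(loc=-0.5, scale=0.8, size=len(dates)), 0.0, None)

def mean_growing_season_inundated_days(depth, dates, start_month=4, end_month=9,
                                        min_depth=0.0):
    """Average number of days per year with depth > min_depth inside the growing season."""
    in_season = (dates.month >= start_month) & (dates.month <= end_month)
    inundated = (depth > min_depth) & in_season
    per_year = pd.Series(inundated, index=dates).groupby(dates.year).sum()
    return per_year.mean()

print(mean_growing_season_inundated_days(depth, dates))
```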
Modeling nutrient in-stream processes at the watershed scale using Nutrient Spiralling metrics
NASA Astrophysics Data System (ADS)
Marcé, R.; Armengol, J.
2009-01-01
One of the fundamental problems of using large-scale biogeochemical models is the uncertainty involved in aggregating the components of fine-scale deterministic models in watershed applications, and in extrapolating the results of field-scale measurements to larger spatial scales. Although spatial or temporal lumping may reduce the problem, information obtained during fine-scale research may not apply to lumped categories. Thus, the use of knowledge gained through fine-scale studies to predict coarse-scale phenomena is not straightforward. In this study, we used the nutrient uptake metrics defined in the Nutrient Spiralling concept to formulate the equations governing total phosphorus in-stream fate in a watershed-scale biogeochemical model. The rationale of this approach relies on the fact that the working unit for the nutrient in-stream processes of most watershed-scale models is the reach, the same unit used in field research based on the Nutrient Spiralling concept. Automatic calibration of the model using data from the study watershed confirmed that the Nutrient Spiralling formulation is a convenient simplification of the biogeochemical transformations involved in total phosphorus in-stream fate. Following calibration, the model was used as a heuristic tool in two ways. First, we compared the Nutrient Spiralling metrics obtained during calibration with results obtained during field-based research in the study watershed. The simulated and measured metrics were similar, suggesting that information collected at the reach scale during research based on the Nutrient Spiralling concept can be directly incorporated into models, without the problems associated with upscaling results from fine-scale studies. Second, we used results from our model to examine some patterns observed in several reports on Nutrient Spiralling metrics measured in impaired streams. Although these two exercises involve circular reasoning and, consequently, cannot validate any hypothesis, this is a powerful example of how models can work as heuristic tools to compare hypotheses and stimulate research in ecology.
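As a hedged illustration of how an uptake metric from the Nutrient Spiralling concept can be applied at the reach scale, the sketch below attenuates a phosphorus load exponentially with the uptake length S_w; the reach length, load, and S_w values are assumptions, not results from the study watershed.

```python
import numpy as np

def reach_outflow_load(load_in, reach_length_m, uptake_length_m):
    """
    First-order in-stream retention: the load decays exponentially with distance,
    with the Nutrient Spiralling uptake length S_w as the e-folding scale.
    """
    return load_in * np.exp(-reach_length_m / uptake_length_m)

# Hypothetical reach: 2 km long, total phosphorus uptake length S_w = 10 km
print(reach_outflow_load(load_in=5.0, reach_length_m=2000.0, uptake_length_m=10000.0))
```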
Tang, Tao; Stevenson, R Jan; Infante, Dana M
2016-10-15
Regional variation in both natural environment and human disturbance can influence performance of ecological assessments. In this study we calculated 5 types of benthic diatom multimetric indices (MMIs) with 3 different approaches to account for variation in ecological assessments. We used: site groups defined by ecoregions or diatom typologies; the same or different sets of metrics among site groups; and unmodeled or modeled MMIs, where models accounted for natural variation in metrics within site groups by calculating an expected reference condition for each metric and each site. We used data from the USEPA's National Rivers and Streams Assessment to calculate the MMIs and evaluate changes in MMI performance. MMI performance was evaluated with indices of precision, bias, responsiveness, sensitivity and relevancy which were respectively measured as MMI variation among reference sites, effects of natural variables on MMIs, difference between MMIs at reference and highly disturbed sites, percent of highly disturbed sites properly classified, and relation of MMIs to human disturbance and stressors. All 5 types of MMIs showed considerable discrimination ability. Using different metrics among ecoregions sometimes reduced precision, but it consistently increased responsiveness, sensitivity, and relevancy. Site specific metric modeling reduced bias and increased responsiveness. Combined use of different metrics among site groups and site specific modeling significantly improved MMI performance irrespective of site grouping approach. Compared to ecoregion site classification, grouping sites based on diatom typologies improved precision, but did not improve overall performance of MMIs if we accounted for natural variation in metrics with site specific models. We conclude that using different metrics among ecoregions and site specific metric modeling improve MMI performance, particularly when used together. Applications of these MMI approaches in ecological assessments introduced a tradeoff with assessment consistency when metrics differed across site groups, but they justified the convenient and consistent use of ecoregions. Copyright © 2016 Elsevier B.V. All rights reserved.
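A minimal sketch of the site-specific metric modeling idea: regress a metric on natural covariates at reference sites and score each site as observed minus expected; the covariates, sample sizes, and coefficients are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reference-site data: natural covariates and one diatom metric
X_ref = rng.normal(size=(120, 2))   # e.g., catchment area, elevation (standardized)
metric_ref = 2.0 + 0.8 * X_ref[:, 0] - 0.3 * X_ref[:, 1] + rng.normal(scale=0.5, size=120)

# Fit the expected reference condition E[metric | natural covariates]
A = np.column_stack([np.ones(len(X_ref)), X_ref])
coef, *_ = np.linalg.lstsq(A, metric_ref, rcond=None)

def site_specific_score(x_site, metric_obs):
    """Observed minus expected metric value; negative values suggest degradation."""
    expected = coef[0] + x_site @ coef[1:]
    return metric_obs - expected

print(site_specific_score(np.array([0.5, -1.0]), metric_obs=1.2))
```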
Validation metrics for turbulent plasma transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, C., E-mail: chholland@ucsd.edu
Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations are reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. The utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak [J. L. Luxon, Nucl. Fusion 42, 614 (2002)], as part of a multi-year transport model validation activity.
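A minimal sketch of a normalized discrepancy of the general form favored in such validation metrics, where the simulation-observation difference is scaled by the combined uncertainties; the flux values and error bars are placeholders, not the paper's exact metric.

```python
import numpy as np

def normalized_discrepancy(sim, sim_err, obs, obs_err):
    """|simulation - observation| in units of the combined 1-sigma uncertainty."""
    sim, sim_err, obs, obs_err = map(np.asarray, (sim, sim_err, obs, obs_err))
    return np.abs(sim - obs) / np.sqrt(sim_err ** 2 + obs_err ** 2)

# Hypothetical heat-flux comparison at three radial locations (arbitrary units)
d = normalized_discrepancy(sim=[1.2, 0.9, 0.5], sim_err=[0.2, 0.15, 0.1],
                           obs=[1.0, 1.1, 0.45], obs_err=[0.1, 0.1, 0.05])
print(d)   # values near or below ~1 indicate agreement within uncertainties
```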
Turbulence Hazard Metric Based on Peak Accelerations for Jetliner Passengers
NASA Technical Reports Server (NTRS)
Stewart, Eric C.
2005-01-01
Calculations are made of the approximate hazard due to peak normal accelerations of an airplane flying through a simulated vertical wind field associated with a convective frontal system. The calculations are based on a hazard metric developed from a systematic application of a generic math model to 1-cosine discrete gusts of various amplitudes and gust lengths. The math model simulates the three-degree-of-freedom longitudinal rigid-body response to vertical gusts and includes (1) fuselage flexibility, (2) the lag in the downwash from the wing to the tail, (3) gradual lift effects, (4) a simplified autopilot, and (5) motion of an unrestrained passenger in the rear cabin. Airplane and passenger response contours are calculated for a matrix of gust amplitudes and gust lengths. The airplane response contours are used to develop an approximate hazard metric of peak normal accelerations as a function of gust amplitude and gust length. The hazard metric is then applied to a two-dimensional simulated vertical wind field of a convective frontal system. The variations of the hazard metric with gust length and airplane heading are demonstrated.
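A minimal sketch of the 1-cosine discrete gust profile that underlies the gust-amplitude/gust-length matrix described above; the amplitude and gust length are placeholder values.

```python
import numpy as np

def one_minus_cosine_gust(x, amplitude, gust_length):
    """Vertical gust velocity for a 1-cosine discrete gust of given amplitude and length."""
    w = np.zeros_like(x, dtype=float)
    inside = (x >= 0.0) & (x <= gust_length)
    w[inside] = 0.5 * amplitude * (1.0 - np.cos(2.0 * np.pi * x[inside] / gust_length))
    return w

x = np.linspace(-50.0, 350.0, 9)   # distance along the flight path (m)
print(one_minus_cosine_gust(x, amplitude=10.0, gust_length=300.0))
```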
Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.
2018-01-01
The Space Launch System, NASA's new large launch vehicle for long range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for calculation of interface loads and natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of the program and schedule constraints, a single modal test of the SLS will be performed while bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be performed to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique to develop a quantitative set of validation metrics that is based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UQ and UP and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules-of-thumb;" this research seeks to come up with more reason-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so therefore that same uncertainty can be used in propagating the fixed-test configuration to the free-free actual configuration. The second main technique applied here is the usage of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation metric will provide a quantified confidence and probability of success for the final SLS dynamics model, which will be critical for a successful launch program, and can be applied in the many other industries where an accurate dynamic model is required.
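As a hedged illustration of the limit-state formulation mentioned above, the sketch below propagates assumed parameter dispersions through a toy single-degree-of-freedom frequency response and estimates the probability of violating a hypothetical requirement; none of the numbers come from the SLS model.

```python
import numpy as np

rng = np.random.default_rng(42)

def first_natural_frequency(k, m):
    """Natural frequency (Hz) of a single-DOF spring-mass system."""
    return np.sqrt(k / m) / (2.0 * np.pi)

# Hypothetical dispersions: stiffness and mass as lognormal/normal random variables
k = rng.lognormal(mean=np.log(1.0e6), sigma=0.05, size=100_000)   # N/m
m = rng.normal(loc=250.0, scale=5.0, size=100_000)                # kg

freq = first_natural_frequency(k, m)
requirement_hz = 9.5               # hypothetical lower bound from a GNC requirement
g = freq - requirement_hz          # limit-state function: failure when g < 0
print("P(failure) ~", np.mean(g < 0.0))
```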
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
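A minimal sketch of a DCT-domain error measure in the spirit of the still-image metrics being extended here, using a crude placeholder weighting rather than the calibrated visual thresholds described in the paper.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct2(block):
    """2-D type-II DCT of an 8x8 block (orthonormal)."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def dct_weighted_error(ref, test, weights):
    """Pool per-coefficient DCT errors, weighted by a (placeholder) visual sensitivity."""
    h, w = ref.shape
    err = 0.0
    for i in range(0, h - 7, 8):
        for j in range(0, w - 7, 8):
            d = block_dct2(test[i:i+8, j:j+8] - ref[i:i+8, j:j+8])
            err += np.sum((weights * d) ** 2)
    return np.sqrt(err / (h * w))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
test = ref + rng.normal(scale=0.02, size=ref.shape)
weights = 1.0 / (1.0 + np.add.outer(np.arange(8), np.arange(8)))   # crude low-pass weighting
print(dct_weighted_error(ref, test, weights))
```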
Stability and Performance Metrics for Adaptive Flight Control
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Nguyen, Nhan; VanEykeren, Luarens
2009-01-01
This paper addresses the problem of verifying adaptive control techniques for enabling safe flight in the presence of adverse conditions. Since the adaptive systems are non-linear by design, the existing control verification metrics are not applicable to adaptive controllers. Moreover, these systems are in general highly uncertain. Hence, the system's characteristics cannot be evaluated by relying on the available dynamical models. This necessitates the development of control verification metrics based on the system's input-output information. From this point of view, a set of metrics is introduced that compares the uncertain aircraft's input-output behavior under the action of an adaptive controller to that of a closed-loop linear reference model to be followed by the aircraft. This reference model is constructed for each specific maneuver using the exact aerodynamic and mass properties of the aircraft to meet the stability and performance requirements commonly accepted in flight control. The proposed metrics are unified in the sense that they are model independent and not restricted to any specific adaptive control methods. As an example, we present simulation results for a wing-damaged generic transport aircraft with several existing adaptive controllers.
New t-gap insertion-deletion-like metrics for DNA hybridization thermodynamic modeling.
D'yachkov, Arkadii G; Macula, Anthony J; Pogozelski, Wendy K; Renz, Thomas E; Rykov, Vyacheslav V; Torney, David C
2006-05-01
We discuss the concept of t-gap block isomorphic subsequences and use it to describe new abstract string metrics that are similar to the Levenshtein insertion-deletion metric. Some of the metrics that we define can be used to model a thermodynamic distance function on single-stranded DNA sequences. Our model captures a key aspect of the nearest neighbor thermodynamic model for hybridized DNA duplexes. One version of our metric gives the maximum number of stacked pairs of hydrogen bonded nucleotide base pairs that can be present in any secondary structure in a hybridized DNA duplex without pseudoknots. Thermodynamic distance functions are important components in the construction of DNA codes, and DNA codes are important components in biomolecular computing, nanotechnology, and other biotechnical applications that employ DNA hybridization assays. We show how our new distances can be calculated by using a dynamic programming method, and we derive a Varshamov-Gilbert-like lower bound on the size of some of the codes using these distance functions as constraints. We also discuss software implementation of our DNA code design methods.
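A minimal sketch of the dynamic-programming pattern referred to above, shown for the ordinary insertion-deletion (indel) distance; the paper's t-gap block-isomorphic metrics use a different recurrence, but the same tabulation technique applies.

```python
def insertion_deletion_distance(s, t):
    """Levenshtein-style distance with insertions and deletions only (no substitutions)."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                d[i][j] = d[i - 1][j - 1]
            else:
                d[i][j] = 1 + min(d[i - 1][j], d[i][j - 1])
    return d[m][n]

print(insertion_deletion_distance("ACGTACGT", "ACGTTCGA"))
```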
[Perceptual sharpness metric for visible and infrared color fusion images].
Gao, Shao-Shu; Jin, Wei-Qi; Wang, Xia; Wang, Ling-Xue; Luo, Yuan
2012-12-01
For visible and infrared color fusion images, an objective sharpness assessment model is proposed to measure the clarity of detail and edge definition of the fusion image. Firstly, the contrast sensitivity function (CSF) of the human visual system is used to reduce insensitive frequency components under certain viewing conditions. Secondly, a perceptual contrast model, which takes the human luminance masking effect into account, is proposed based on a local band-limited contrast model. Finally, the perceptual contrast is calculated in the region of interest (containing image details and edges) in the fusion image to evaluate image perceptual sharpness. Experimental results show that the proposed perceptual sharpness metric provides predictions that are more closely matched to human perceptual evaluations than five existing sharpness (blur) metrics for color images. The proposed perceptual sharpness metric can evaluate the perceptual sharpness of color fusion images effectively.
Orientation estimation of anatomical structures in medical images for object recognition
NASA Astrophysics Data System (ADS)
Bağci, Ulaş; Udupa, Jayaram K.; Chen, Xinjian
2011-03-01
Recognition of anatomical structures is an important step in model-based medical image segmentation. It provides pose estimation of objects and information about "where" roughly the objects are in the image, distinguishing them from other object-like entities. In [1], we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model, as even small differences in orientation parameters can lead to inappropriate recognition. Motivated by the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate the orientation of anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than other Euclidean and non-Euclidean metrics.
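A minimal sketch of one of the non-Euclidean metrics listed above, the Log-Euclidean distance between symmetric positive-definite orientation descriptors; the 3x3 matrices are placeholders.

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(A, B):
    """Frobenius norm of the difference of matrix logarithms of two SPD matrices."""
    return float(np.linalg.norm(logm(A) - logm(B), ord="fro"))

# Hypothetical 3x3 SPD orientation/scatter matrices for two objects
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 2.0]])
B = np.array([[3.5, 0.8, 0.1], [0.8, 3.2, 0.4], [0.1, 0.4, 2.4]])
print(log_euclidean_distance(A, B))
```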
Multi-Dimensional Calibration of Impact Dynamic Models
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.
2011-01-01
NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test data at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work, impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time-based metrics and multi-dimensional orthogonality metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.
Up Periscope! Designing a New Perceptual Metric for Imaging System Performance
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
2016-01-01
Modern electronic imaging systems include optics, sensors, sampling, noise, processing, compression, transmission and display elements, and are viewed by the human eye. Many of these elements cannot be assessed by traditional imaging system metrics such as the MTF. More complex metrics such as NVTherm do address these elements, but do so largely through parametric adjustment of an MTF-like metric. The parameters are adjusted through subjective testing of human observers identifying specific targets in a set of standard images. We have designed a new metric that is based on a model of human visual pattern classification. In contrast to previous metrics, ours simulates the human observer identifying the standard targets. One application of this metric is to quantify performance of modern electronic periscope systems on submarines.
Automated Assessment of Visual Quality of Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)
1997-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
NASA Technical Reports Server (NTRS)
Shim, J. S.; Kuznetsova, M.; Rastatter, L.; Hesse, M.; Bilitza, D.; Butala, M.; Codrescu, M.; Emery, B.; Foster, B.; Fuller-Rowell, T.;
2011-01-01
Objective quantification of model performance based on metrics helps us evaluate the current state of space physics modeling capability, address differences among various modeling approaches, and track model improvements over time. The Coupling, Energetics, and Dynamics of Atmospheric Regions (CEDAR) Electrodynamics Thermosphere Ionosphere (ETI) Challenge was initiated in 2009 to assess the accuracy of various ionosphere/thermosphere models in reproducing ionosphere and thermosphere parameters. A total of nine events and five physical parameters were selected for comparison between model outputs and observations. The nine events included two strong and one moderate geomagnetic storm events from GEM Challenge events and three moderate storms and three quiet periods from the first half of the International Polar Year (IPY) campaign, which lasted for 2 years, from March 2007 to March 2009. The five physical parameters selected were NmF2 and hmF2 from ISRs and LEO satellites such as CHAMP and COSMIC, vertical drifts at Jicamarca, and electron and neutral densities along the track of the CHAMP satellite. For this study, four different metrics and up to 10 models were used. In this paper, we focus on preliminary results of the study using ground-based measurements, which include NmF2 and hmF2 from Incoherent Scatter Radars (ISRs), and vertical drifts at Jicamarca. The results show that model performance strongly depends on the type of metric used, and thus no single model ranks best across all of the metrics. The analysis further indicates that model performance also varies with latitude and geomagnetic activity level.
Voxel-based statistical analysis of uncertainties associated with deformable image registration
NASA Astrophysics Data System (ADS)
Li, Shunshan; Glide-Hurst, Carri; Lu, Mei; Kim, Jinkoo; Wen, Ning; Adams, Jeffrey N.; Gordon, James; Chetty, Indrin J.; Zhong, Hualiang
2013-09-01
Deformable image registration (DIR) algorithms have inherent uncertainties in their displacement vector fields (DVFs). The purpose of this study is to develop an optimal metric to estimate DIR uncertainties. Six computational phantoms have been developed from the CT images of lung cancer patients using a finite element method (FEM). The FEM generated DVFs were used as a standard for registrations performed on each of these phantoms. A mechanics-based metric, unbalanced energy (UE), was developed to evaluate these registration DVFs. The potential correlation between UE and DIR errors was explored using multivariate analysis, and the results were validated by a landmark approach and compared with two other error metrics: DVF inverse consistency (IC) and image intensity difference (ID). Landmark-based validation was performed using the POPI model. The results show that the Pearson correlation coefficient between UE and DIR error is r(UE, error) = 0.50. This is higher than r(IC, error) = 0.29 for IC and DIR error and r(ID, error) = 0.37 for ID and DIR error. The Pearson correlation coefficient between UE and the product of the DIR displacements and errors is r(UE, error × DVF) = 0.62 for the six patients and r(UE, error × DVF) = 0.73 for the POPI-model data. It has been demonstrated that UE has a strong correlation with DIR errors, and the UE metric outperforms the IC and ID metrics in estimating DIR uncertainties. The quantified UE metric can be a useful tool for adaptive treatment strategies, including probability-based adaptive treatment planning.
Perceptual video quality assessment in H.264 video coding standard using objective modeling.
Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu
2014-01-01
Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra and inter prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into subjective quality estimation is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur, and jerkiness), compared to the existing bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metric is compared against popular full-reference objective methods as defined by VQEG.
Shilling Attacks Detection in Recommender Systems Based on Target Item Analysis
Zhou, Wei; Wen, Junhao; Koh, Yun Sing; Xiong, Qingyu; Gao, Min; Dobbie, Gillian; Alam, Shafiq
2015-01-01
Recommender systems are highly vulnerable to shilling attacks, both by individuals and groups. Attackers who introduce biased ratings in order to affect recommendations have been shown to negatively affect collaborative filtering (CF) algorithms. Previous research focuses only on the differences between genuine profiles and attack profiles, ignoring the group characteristics in attack profiles. In this paper, we study the use of statistical metrics to detect rating patterns of attackers and group characteristics in attack profiles. A further issue is that most existing detection methods are model-specific. Two metrics, Rating Deviation from Mean Agreement (RDMA) and Degree of Similarity with Top Neighbors (DegSim), are used for analyzing rating patterns between malicious profiles and genuine profiles in attack models. Building upon this, we also propose and evaluate a detection structure called RD-TIA for detecting shilling attacks in recommender systems using a statistical approach. In order to detect more complicated attack models, we propose a novel metric called DegSim’ based on DegSim. The experimental results show that our detection model based on target item analysis is an effective approach for detecting shilling attacks. PMID:26222882
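A minimal sketch of the two detection features named above, RDMA and DegSim, computed from a toy rating matrix; the formulas follow their commonly cited definitions in the shilling-attack literature and the ratings are hypothetical.

```python
import numpy as np

# Hypothetical rating matrix (users x items), 0 = unrated
R = np.array([[5, 4, 0, 1, 0],
              [4, 5, 3, 0, 1],
              [1, 0, 4, 5, 5],
              [5, 5, 5, 1, 1]], dtype=float)
rated = R > 0

def rdma(u):
    """Rating Deviation from Mean Agreement for user u."""
    item_counts = rated.sum(axis=0)
    item_means = np.where(item_counts > 0, R.sum(axis=0) / np.maximum(item_counts, 1), 0.0)
    items = np.where(rated[u])[0]
    return np.mean(np.abs(R[u, items] - item_means[items]) / item_counts[items])

def degsim(u, k=2):
    """Average Pearson similarity between user u and its k most similar users."""
    sims = []
    for v in range(R.shape[0]):
        if v == u:
            continue
        common = rated[u] & rated[v]
        if common.sum() < 2:
            continue
        a, b = R[u, common], R[v, common]
        if a.std() == 0 or b.std() == 0:
            continue
        sims.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(sorted(sims, reverse=True)[:k])) if sims else 0.0

print(rdma(0), degsim(0))
```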
A Validation of Object-Oriented Design Metrics as Quality Indicators
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio
1997-01-01
This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development processes.
A Validation of Object-Oriented Design Metrics
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel; Melo, Walcelio L.
1995-01-01
This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber and Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li and Henry, 1993], where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed and suggestions for improvement are provided. Several of Chidamber and Kemerer's OO metrics appear to be adequate to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development processes.
Application of Support Vector Machine to Forex Monitoring
NASA Astrophysics Data System (ADS)
Kamruzzaman, Joarder; Sarker, Ruhul A.
Previous studies have demonstrated superior performance of artificial neural network (ANN) based forex forecasting models over traditional regression models. This paper applies support vector machines to build a forecasting model from historical data using six simple technical indicators and presents a comparison with an ANN-based model trained by the scaled conjugate gradient (SCG) learning algorithm. The models are evaluated and compared on the basis of five commonly used performance metrics that measure closeness of prediction as well as correctness in directional change. Forecasting results for six different currencies against the Australian dollar reveal superior performance of the SVM model using a simple linear kernel over the ANN-SCG model in terms of all the evaluation metrics. The effect of SVM parameter selection on prediction performance is also investigated and analyzed.
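A minimal sketch of the comparison setup, under the assumption of a linear-kernel support vector regressor and synthetic lagged-return features standing in for the six technical indicators; it reports two of the usual performance metrics (closeness and directional correctness).

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)

# Synthetic weekly exchange-rate series (stand-in for AUD cross rates)
rate = np.cumsum(rng.normal(scale=0.01, size=300)) + 0.75
returns = np.diff(rate)

# Simple lagged-return features (stand-ins for the six technical indicators)
lags = 5
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = returns[lags:]
split = 200
model = SVR(kernel="linear", C=1.0, epsilon=0.001).fit(X[:split], y[:split])
pred = model.predict(X[split:])

rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
directional_accuracy = np.mean(np.sign(pred) == np.sign(y[split:]))
print(f"RMSE={rmse:.5f}, directional accuracy={directional_accuracy:.2f}")
```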
Bever, Aaron J.; MacWilliams, Michael L.; Herbold, Bruce; Brown, Larry R.; Feyrer, Frederick V.
2016-01-01
Long-term fish sampling data from the San Francisco Estuary were combined with detailed three-dimensional hydrodynamic modeling to investigate the relationship between historical fish catch and hydrodynamic complexity. Delta Smelt catch data at 45 stations from the Fall Midwater Trawl (FMWT) survey in the vicinity of Suisun Bay were used to develop a quantitative catch-based station index. This index was used to rank stations based on historical Delta Smelt catch. The correlations between historical Delta Smelt catch and 35 quantitative metrics of environmental complexity were evaluated at each station. Eight metrics of environmental conditions were derived from FMWT data and 27 metrics were derived from model predictions at each FMWT station. To relate the station index to conceptual models of Delta Smelt habitat, the metrics were used to predict the station ranking based on the quantified environmental conditions. Salinity, current speed, and turbidity metrics were used to predict the relative ranking of each station for Delta Smelt catch. Including a measure of the current speed at each station improved predictions of the historical ranking for Delta Smelt catch relative to similar predictions made using only salinity and turbidity. Current speed was also found to be a better predictor of historical Delta Smelt catch than water depth. The quantitative approach developed using the FMWT data was validated using the Delta Smelt catch data from the San Francisco Bay Study. Complexity metrics in Suisun Bay were evaluated during 2010 and 2011. This analysis indicated that a key to historical Delta Smelt catch is the overlap of low salinity, low maximum velocity, and low Secchi depth regions. This overlap occurred in Suisun Bay during 2011, and may have contributed to higher Delta Smelt abundance in 2011 than in 2010 when the favorable ranges of the metrics did not overlap in Suisun Bay.
NASA Astrophysics Data System (ADS)
Yang, Zhongming; Dou, Jiantai; Du, Jinyu; Gao, Zhishan
2018-03-01
Non-null interferometry can be used to measure the radius of curvature (ROC); we previously presented a virtual quadratic Newton rings phase-shifting moiré-fringes measurement method for large ROC measurement (Yang et al., 2016). In this paper, we propose a large ROC measurement method based on the evaluation of an interferogram-quality metric with a non-null interferometer. With the multi-configuration model of the non-null interferometric system in ZEMAX, the retrace errors and the phase introduced by the test surface are reconstructed. The interferogram-quality metric is obtained from the normalized phase-shifted testing Newton rings with the spherical surface model in the non-null interferometric system. The radius of curvature of the test spherical surface is obtained when the minimum of the interferogram-quality metric is found. Simulations and experimental results verify the feasibility of our proposed method. For a spherical mirror with a ROC of 41,400 mm, the measurement accuracy is better than 0.13%.
Carroll, Carlos; Roberts, David R; Michalak, Julia L; Lawler, Joshua J; Nielsen, Scott E; Stralberg, Diana; Hamann, Andreas; Mcrae, Brad H; Wang, Tongli
2017-11-01
As most regions of the earth transition to altered climatic conditions, new methods are needed to identify refugia and other areas whose conservation would facilitate persistence of biodiversity under climate change. We compared several common approaches to conservation planning focused on climate resilience over a broad range of ecological settings across North America and evaluated how commonalities in the priority areas identified by different methods varied with regional context and spatial scale. Our results indicate that priority areas based on different environmental diversity metrics differed substantially from each other and from priorities based on spatiotemporal metrics such as climatic velocity. Refugia identified by diversity or velocity metrics were not strongly associated with the current protected area system, suggesting the need for additional conservation measures including protection of refugia. Despite the inherent uncertainties in predicting future climate, we found that variation among climatic velocities derived from different general circulation models and emissions pathways was less than the variation among the suite of environmental diversity metrics. To address uncertainty created by this variation, planners can combine priorities identified by alternative metrics at a single resolution and downweight areas of high variation between metrics. Alternately, coarse-resolution velocity metrics can be combined with fine-resolution diversity metrics in order to leverage the respective strengths of the two groups of metrics as tools for identification of potential macro- and microrefugia that in combination maximize both transient and long-term resilience to climate change. Planners should compare and integrate approaches that span a range of model complexity and spatial scale to match the range of ecological and physical processes influencing persistence of biodiversity and identify a conservation network resilient to threats operating at multiple scales. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
Research on cardiovascular disease prediction based on distance metric learning
NASA Astrophysics Data System (ADS)
Ni, Zhuang; Liu, Kui; Kang, Guixia
2018-04-01
Distance metric learning algorithms have been widely applied to medical diagnosis and have exhibited their strengths in classification problems. The k-nearest neighbour (KNN) algorithm is an efficient method that treats each feature equally. Large margin nearest neighbour classification (LMNN) improves the accuracy of KNN by learning a global distance metric, but it does not consider the locality of data distributions. In this paper, we propose a new distance metric algorithm adopting a cosine metric and LMNN, named COS-SUBLMNN, which pays more attention to local features of the data to overcome this shortcoming of LMNN and improve classification accuracy. The proposed methodology is verified on CVD patient vectors derived from real-world medical data. The experimental results show that our method provides higher accuracy than KNN and LMNN, which demonstrates the effectiveness of the CVD risk prediction model based on COS-SUBLMNN.
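A minimal sketch of the cosine-metric nearest-neighbour idea underlying COS-SUBLMNN, shown as a plain cosine-distance KNN classifier on hypothetical patient vectors; the learned transformation of the full method is not reproduced here.

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity between two feature vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k training samples closest in cosine distance."""
    d = np.array([cosine_distance(x, xt) for xt in X_train])
    nearest = y_train[np.argsort(d)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# Hypothetical patient feature vectors and CVD labels (0 = low risk, 1 = high risk)
X_train = np.array([[1.0, 0.2, 0.1], [0.9, 0.3, 0.2], [0.1, 1.0, 0.9], [0.2, 0.8, 1.0]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.15, 0.9, 0.95])))
```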
Towards the XML schema measurement based on mapping between XML and OO domain
NASA Astrophysics Data System (ADS)
Rakić, Gordana; Budimac, Zoran; Heričko, Marjan; Pušnik, Maja
2017-07-01
Measuring the quality of IT solutions is a priority in software engineering. Although numerous metrics for measuring object-oriented code already exist, measuring the quality of UML models or XML Schemas is still developing. One of the research questions in the overall research guided by the ideas described in this paper is whether already defined object-oriented design metrics can be applied to XML schemas based on predefined mappings. In this paper, basic ideas for this mapping are presented. The mapping is a prerequisite for a future approach to measuring XML schema quality with object-oriented metrics.
Metrics for Radiologists in the Era of Value-based Health Care Delivery.
Sarwar, Ammar; Boland, Giles; Monks, Annamarie; Kruskal, Jonathan B
2015-01-01
Accelerated by the Patient Protection and Affordable Care Act of 2010, health care delivery in the United States is poised to move from a model that rewards the volume of services provided to one that rewards the value provided by such services. Radiology department operations are currently managed by an array of metrics that assess various departmental missions, but many of these metrics do not measure value. Regulators and other stakeholders also influence what metrics are used to assess medical imaging. Metrics such as the Physician Quality Reporting System are increasingly being linked to financial penalties. In addition, metrics assessing radiology's contribution to cost or outcomes are currently lacking. In fact, radiology is widely viewed as a contributor to health care costs without an adequate understanding of its contribution to downstream cost savings or improvement in patient outcomes. The new value-based system of health care delivery and reimbursement will measure a provider's contribution to reducing costs and improving patient outcomes with the intention of making reimbursement commensurate with adherence to these metrics. The authors describe existing metrics and their application to the practice of radiology, discuss the so-called value equation, and suggest possible metrics that will be useful for demonstrating the value of radiologists' services to their patients. (©)RSNA, 2015.
On the Tradeoff Between Altruism and Selfishness in MANET Trust Management
2016-04-07
to discourage selfish behaviors, using a hidden Markov model (HMM) to quantitatively measure the trustworthiness of nodes. Adams et al. [18...based reliability metric to predict trust-based system survivability. Section 4 analyzes numerical results obtained through the evaluation of our SPN...concepts in MANETs, trust management for MANETs should consider the following design features: trust metrics must be customizable, evaluation of
On Learning: Metrics Based Systems for Countering Asymmetric Threats
2006-05-25
of self-education. While this alone cannot guarantee organizational learning, no organization can learn without a spirit of individual learning ...held views on globalization and the impact of Information Age technology. The emerging environment conceptually links to the learning model as part...On Learning: Metrics Based Systems for Countering Asymmetric Threats A Monograph by MAJ Rafael Lopez U.S. Army School of Advanced Military
Geospace environment modeling 2008–2009 challenge: Dst index
Rastätter, L.; Kuznetsova, M.M.; Glocer, A.; Welling, D.; Meng, X.; Raeder, J.; Wittberger, M.; Jordanova, V.K.; Yu, Y.; Zaharia, S.; Weigel, R.S.; Sazykin, S.; Boynton, R.; Wei, H.; Eccles, V.; Horton, W.; Mays, M.L.; Gannon, J.
2013-01-01
This paper reports the metrics-based results of the Dst index part of the 2008–2009 GEM Metrics Challenge. The 2008–2009 GEM Metrics Challenge asked modelers to submit results for four geomagnetic storm events and five different types of observations that can be modeled by statistical, climatological or physics-based models of the magnetosphere-ionosphere system. We present the results of 30 model settings that were run at the Community Coordinated Modeling Center and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of 1 hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of 1 minute model data with the 1 minute Dst index calculated by the United States Geological Survey. The latter index can be used to calculate spectral variability of model outputs in comparison to the index. We find that model rankings vary widely by skill score used. None of the models consistently perform best for all events. We find that empirical models perform well in general. Magnetohydrodynamics-based models of the global magnetosphere with inner magnetosphere physics (ring current model) included and stand-alone ring current models with properly defined boundary conditions perform well and are able to match or surpass results from empirical models. Unlike in similar studies, the statistical models used in this study found their challenge in the weakest events rather than the strongest events.
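A minimal sketch of two skill scores of the kind used to rank models in such challenges, RMSE and prediction efficiency, applied to a synthetic hourly Dst comparison; these are illustrative, not the challenge's exact score definitions.

```python
import numpy as np

def rmse(model, obs):
    return float(np.sqrt(np.mean((np.asarray(model) - np.asarray(obs)) ** 2)))

def prediction_efficiency(model, obs):
    """1 - MSE / variance of the observations; 1 is perfect, <= 0 indicates no skill."""
    model, obs = np.asarray(model), np.asarray(obs)
    return float(1.0 - np.mean((model - obs) ** 2) / np.var(obs))

# Synthetic storm-time Dst (nT), hourly averages
obs = np.array([-20, -45, -90, -140, -120, -95, -70, -50], dtype=float)
model = np.array([-25, -50, -80, -120, -130, -100, -60, -45], dtype=float)
print(rmse(model, obs), prediction_efficiency(model, obs))
```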
DeJournett, Jeremy; DeJournett, Leon
2017-11-01
Effective glucose control in the intensive care unit (ICU) setting has the potential to decrease morbidity and mortality rates and thereby decrease health care expenditures. To evaluate what constitutes effective glucose control, typically several metrics are reported, including time in range, time in mild and severe hypoglycemia, coefficient of variation, and others. To date, there is no one metric that combines all of these individual metrics to give a number indicative of overall performance. We proposed a composite metric that combines 5 commonly reported metrics, and we used this composite metric to compare 6 glucose controllers. We evaluated the following controllers: Ideal Medical Technologies (IMT) artificial-intelligence-based controller, Yale protocol, Glucommander, Wintergerst et al PID controller, GRIP, and NICE-SUGAR. We evaluated each controller across 80 simulated patients, 4 clinically relevant exogenous dextrose infusions, and one nonclinical infusion as a test of the controller's ability to handle difficult situations. This gave a total of 2400 5-day simulations, and 585 604 individual glucose values for analysis. We used a random walk sensor error model that gave a 10% MARD. For each controller, we calculated severe hypoglycemia (<40 mg/dL), mild hypoglycemia (40-69 mg/dL), normoglycemia (70-140 mg/dL), hyperglycemia (>140 mg/dL), and coefficient of variation (CV), as well as our novel controller metric. For the controllers tested, we achieved the following median values for our novel controller scoring metric: IMT: 88.1, YALE: 46.7, GLUC: 47.2, PID: 50, GRIP: 48.2, NICE: 46.4. The novel scoring metric employed in this study shows promise as a means for evaluating new and existing ICU-based glucose controllers, and it could be used in the future to compare results of glucose control studies in critical care. The IMT AI-based glucose controller demonstrated the most consistent performance results based on this new metric.
Target Scattering Metrics: Model-Model and Model-Data Comparisons
2017-12-13
measured synthetic aperture sonar (SAS) data or from numerical models is investigated. Metrics are needed for quantitative comparisons for signals...candidate metrics for model-model comparisons are examined here with a goal to consider raw data prior to its reduction to data products, which may...be suitable for input to classification schemes. The investigated metrics are then applied to model-data comparisons. INTRODUCTION Metrics for
Quality metrics in high-dimensional data visualization: an overview and systematization.
Bertini, Enrico; Tatu, Andrada; Keim, Daniel
2011-12-01
In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research. © 2010 IEEE
Modelling the B2C Marketplace: Evaluation of a Reputation Metric for e-Commerce
NASA Astrophysics Data System (ADS)
Gutowska, Anna; Sloane, Andrew
This paper evaluates a recently developed novel and comprehensive reputation metric designed for a distributed multi-agent reputation system for Business-to-Consumer (B2C) e-commerce applications. To do that, an agent-based simulation framework was implemented which models different types of behaviour in the marketplace. The trustworthiness of different types of providers is investigated to establish whether the simulation models the behaviour of B2C e-commerce systems as they are expected to behave in real life.
NASA Astrophysics Data System (ADS)
Russell, J. L.
2014-12-01
The exchange of heat and carbon dioxide between the atmosphere and ocean are major controls on Earth's climate under conditions of anthropogenic forcing. The Southern Ocean south of 30°S, occupying just over ¼ of the surface ocean area, accounts for a disproportionate share of the vertical exchange of properties between the deep and surface waters of the ocean and between the surface ocean and the atmosphere; thus this region can be disproportionately influential on the climate system. Despite the crucial role of the Southern Ocean in the climate system, understanding of the particular mechanisms involved remains inadequate, and the model studies underlying many of these results are highly controversial. As part of the overall goal of working toward reducing uncertainties in climate projections, we present an analysis using new data/model metrics based on a unified framework of theory, quantitative datasets, and numerical modeling. These new metrics quantify the mechanisms, processes, and tendencies relevant to the role of the Southern Ocean in climate.
Model-based color halftoning using direct binary search.
Agar, A Ufuk; Allebach, Jan P
2005-12-01
In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous tone original color image and the color halftone image. We exploit the differences in how the human viewers respond to luminance and chrominance information and use the total squared error in a luminance/chrominance based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement based color printer dot interaction model to prevent the artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in resulting halftones. We present the color halftones which demonstrate the efficacy of our method.
Miller, Vonda H; Jansen, Ben H
2008-12-01
Computer algorithms that match human performance in recognizing written text or spoken conversation remain elusive. The reasons why the human brain far exceeds any existing recognition scheme to date in the ability to generalize and to extract invariant characteristics relevant to category matching are not clear. However, it has been postulated that the dynamic distribution of brain activity (spatiotemporal activation patterns) is the mechanism by which stimuli are encoded and matched to categories. This research focuses on supervised learning using a trajectory-based distance metric for category discrimination in an oscillatory neural network model. Classification is accomplished using this trajectory-based distance metric. Since the distance metric is differentiable, a supervised learning algorithm based on gradient descent is demonstrated. Classification of spatiotemporal frequency transitions and their relation to a priori assessed categories is shown, along with the improved classification results after supervised training. The results indicate that this spatiotemporal representation of stimuli and the associated distance metric are useful for simple pattern recognition tasks and that supervised learning improves classification results.
Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.
Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel
2017-07-28
New challenges have been brought about by the emergence of 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), owing to its applications in remote surveillance, remote education, and other fields based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn a wide range of researchers' attention. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. Existing assessment metrics, however, do not render human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method compared with prevailing full-, reduced- and no-reference models.
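A minimal sketch of the core idea, the AR prediction residual as a distortion cue, is given below; it fits a single global linear predictor over 8-neighborhoods rather than the local AR models of the paper, and it omits the saliency weighting, so it should be read only as an illustration.

import numpy as np

def ar_residual_map(img, radius=1):
    # Fit one global linear AR model that predicts each pixel from its
    # neighborhood, then return the absolute prediction error per pixel.
    h, w = img.shape
    feats, targets, coords = [], [], []
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            patch = img[i - radius:i + radius + 1, j - radius:j + radius + 1].ravel()
            center = patch[patch.size // 2]
            neigh = np.delete(patch, patch.size // 2)
            feats.append(neigh)
            targets.append(center)
            coords.append((i, j))
    A, y = np.asarray(feats), np.asarray(targets)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = np.abs(A @ coef - y)
    err = np.zeros_like(img, dtype=float)
    for (i, j), r in zip(coords, resid):
        err[i, j] = r
    return err

img = np.random.default_rng(2).random((64, 64))        # stand-in for a synthesized view
quality_score = ar_residual_map(img).mean()            # larger errors suggest geometric distortion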
Model-based metrics of human-automation function allocation in complex work environments
NASA Astrophysics Data System (ADS)
Kim, So Young
Function allocation is the design decision that assigns work functions to all agents in a team, both human and automated. Efforts to guide function allocation systematically have been studied in many fields, such as engineering, human factors, team and organization design, management science, and cognitive systems engineering. Each field focuses on certain aspects of function allocation, but not all; thus, an independent discussion of each does not address all necessary issues with function allocation. Four distinctive perspectives emerged from a review of these fields: technology-centered, human-centered, team-oriented, and work-oriented. Each perspective focuses on different aspects of function allocation: capabilities and characteristics of agents (automation or human), team structure and processes, and work structure and the work environment. Together, these perspectives identify the following eight issues with function allocation: 1) workload, 2) incoherency in function allocations, 3) mismatches between responsibility and authority, 4) interruptive automation, 5) automation boundary conditions, 6) function allocation preventing human adaptation to context, 7) function allocation destabilizing the humans' work environment, and 8) mission performance. Addressing these issues systematically requires formal models and simulations that include all necessary aspects of human-automation function allocation: the work environment, the dynamics inherent to the work, the agents, and the relationships among them. Addressing these issues also requires not only a (static) model but also a (dynamic) simulation that captures temporal aspects of work, such as the timing of actions and their impact on the agents' work. Therefore, with work properly modeled in terms of the work environment, the dynamics inherent to the work, the agents, and the relationships among them, the modeling framework developed by this thesis, which includes static work models and dynamic simulation, can capture the issues with function allocation. Based on the eight issues, eight types of metrics are then established. The purpose of these metrics is to assess the extent to which each issue exists for a given function allocation. Specifically, the eight types of metrics assess workload, coherency of a function allocation, mismatches between responsibility and authority, interruptive automation, automation boundary conditions, human adaptation to context, stability of the human's work environment, and mission performance. Finally, to validate the modeling framework and the metrics, a case study was conducted modeling four different function allocations between a pilot and flight deck automation during the arrival and approach phases of flight. A range of pilot cognitive control modes and maximum human taskload limits were also included in the model. The metrics were assessed for these four function allocations and analyzed to validate the capability of the metrics to identify important issues in given function allocations. In addition, the design insights provided by the metrics are highlighted. This thesis concludes with a discussion of mechanisms for further validating the modeling framework and function allocation metrics developed here, and highlights where these developments can be applied in research and in the design of function allocations in complex work environments such as aviation operations.
Automated map sharpening by maximization of detail and connectivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.
2018-05-18
An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with a high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement parameters; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
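A rough sketch of such a metric is shown below using scikit-image and SciPy; the iso-contour level is set by a volume quantile, and the way the surface-area and connectivity terms are combined (including the region_weight constant) is a hypothetical choice for illustration, not the weighting used by the authors.

import numpy as np
from scipy.ndimage import gaussian_filter, label
from skimage import measure

def adjusted_surface_area(density, volume_fraction=0.2, region_weight=10.0):
    # Iso-contour level chosen so the enclosed region holds a fixed fraction of the map volume.
    level = np.quantile(density, 1.0 - volume_fraction)
    verts, faces, _, _ = measure.marching_cubes(density, level=level)
    surface = measure.mesh_surface_area(verts, faces)   # "detail" term
    _, n_regions = label(density >= level)              # "connectivity" term
    # Hypothetical combination: reward detail, penalize fragmentation
    # (the authors' exact weighting is not reproduced here).
    return surface - region_weight * n_regions

rng = np.random.default_rng(1)
raw = gaussian_filter(rng.standard_normal((48, 48, 48)), 3.0)   # toy density map
for b_sharpen in (0.0, 0.5, 1.0):
    # Crude stand-in for B-factor sharpening: boost high frequencies by unsharp masking.
    sharpened = raw + b_sharpen * (raw - gaussian_filter(raw, 1.5))
    print(b_sharpen, adjusted_surface_area(sharpened))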
Distance Metric Learning via Iterated Support Vector Machines.
Zuo, Wangmeng; Wang, Faqiang; Zhang, David; Lin, Liang; Huang, Yuchi; Meng, Deyu; Zhang, Lei
2017-07-11
Distance metric learning aims to learn from the given training data a valid distance metric, with which the similarity between data samples can be more effectively evaluated for classification. Metric learning is often formulated as a convex or nonconvex optimization problem, while most existing methods are based on customized optimizers and become inefficient for large scale problems. In this paper, we formulate metric learning as a kernel classification problem with the positive semi-definite constraint, and solve it by iterated training of support vector machines (SVMs). The new formulation is easy to implement and efficient in training with the off-the-shelf SVM solvers. Two novel metric learning models, namely Positive-semidefinite Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the global optimality of their solutions. Experiments are conducted on general classification, face verification and person re-identification to evaluate our methods. Compared with the state-of-the-art approaches, our methods can achieve comparable classification accuracy and are efficient in training.
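The following is a much-simplified sketch of the underlying idea, posing metric learning as linear classification over vectorized outer products of pair differences and then projecting the learned matrix onto the positive semi-definite cone; it uses a single scikit-learn SVM fit plus an eigenvalue-clipping step rather than the iterated PCML/NCML training described in the paper.

import numpy as np
from sklearn.svm import LinearSVC

def learn_metric(X, y, n_pairs=500, seed=0):
    # Pose metric learning as classification of pairs: dissimilar pairs should
    # have a larger Mahalanobis distance than similar pairs.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    feats, labels = [], []
    for _ in range(n_pairs):
        i, j = rng.integers(0, len(X), size=2)
        diff = X[i] - X[j]
        feats.append(np.outer(diff, diff).ravel())
        labels.append(-1 if y[i] == y[j] else +1)
    svc = LinearSVC(C=1.0, max_iter=5000).fit(np.array(feats), np.array(labels))
    M = svc.coef_.reshape(d, d)
    M = 0.5 * (M + M.T)
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.clip(w, 0, None)) @ V.T      # PSD projection

X = np.random.default_rng(1).standard_normal((200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
M = learn_metric(X, y)          # distance: (a - b) @ M @ (a - b)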
A reservoir morphology database for the conterminous United States
Rodgers, Kirk D.
2017-09-13
The U.S. Geological Survey, in cooperation with the Reservoir Fisheries Habitat Partnership, combined multiple national databases to create one comprehensive national reservoir database and to calculate new morphological metrics for 3,828 reservoirs. These new metrics include, but are not limited to, shoreline development index, index of basin permanence, development of volume, and other descriptive metrics based on established morphometric formulas. The new database also contains modeled chemical and physical metrics. Because of the nature of the existing databases used to compile the Reservoir Morphology Database and the inherent missing data, some metrics were not populated. One comprehensive database will assist water-resource managers in their understanding of local reservoir morphology and water chemistry characteristics throughout the continental United States.
Control algorithms and applications of the wavefront sensorless adaptive optics
NASA Astrophysics Data System (ADS)
Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen
2017-10-01
Compared with a conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods for WFSless AO are divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on a model-free control algorithm typically treats the performance metric as a function of the control parameters and then uses an optimization algorithm to improve that metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms, and control algorithms based on geometrical optics. After a brief description of these typical control algorithms, hybrid methods combining model-free and model-based control algorithms are summarized. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free-space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques, and applications to extended objects.
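As an illustration of the model-free branch, below is a minimal sketch of stochastic parallel gradient descent (SPGD), one widely used metric-optimization algorithm for WFSless AO; the quadratic toy metric stands in for an image-sharpness or far-field intensity metric, and the gain and perturbation values are arbitrary.

import numpy as np

def spgd_optimize(metric, n_params, gain=0.5, perturb=0.05, n_iter=200, seed=0):
    # Apply random +/- perturbations to all control channels at once and step
    # along the estimated gradient of the performance metric.
    rng = np.random.default_rng(seed)
    u = np.zeros(n_params)                      # control voltages / modal coefficients
    for _ in range(n_iter):
        delta = perturb * rng.choice([-1.0, 1.0], size=n_params)
        dJ = metric(u + delta) - metric(u - delta)
        u = u + gain * dJ * delta               # ascend the performance metric
    return u

# Toy performance metric: peaks when the controls cancel a fixed aberration.
aberration = np.array([0.3, -0.7, 0.5, 0.1, -0.2])
metric = lambda u: -np.sum((u + aberration) ** 2)
u_opt = spgd_optimize(metric, n_params=aberration.size)   # should approach -aberration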
Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J
2017-05-01
Existing approaches to derive decision models from plaintext clinical data frequently depend on medical dictionaries as the sources of potential features. Prior research suggests that decision models developed using non-dictionary based feature sourcing approaches and "off the shelf" tools could predict cancer with performance metrics between 80% and 90%. We sought to compare non-dictionary based models to models built using features derived from medical dictionaries. We evaluated the detection of cancer cases from free text pathology reports using decision models built with combinations of dictionary or non-dictionary based feature sourcing approaches, 4 feature subset sizes, and 5 classification algorithms. Each decision model was evaluated using the following performance metrics: sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristics (ROC) curve. Decision models parameterized using dictionary and non-dictionary feature sourcing approaches produced performance metrics between 70 and 90%. The source of features and feature subset size had no impact on the performance of a decision model. Our study suggests there is little value in leveraging medical dictionaries for extracting features for decision model building. Decision models built using features extracted from the plaintext reports themselves achieve comparable results to those built using medical dictionaries. Overall, this suggests that existing "off the shelf" approaches can be leveraged to perform accurate cancer detection using less complex Named Entity Recognition (NER) based feature extraction, automated feature selection and modeling approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Buzan, J. R.; Oleson, K.; Huber, M.
2014-08-01
We implement and analyze 13 different metrics (4 moist thermodynamic quantities and 9 heat stress metrics) in the Community Land Model (CLM4.5), the land surface component of the Community Earth System Model (CESM). We call these routines the HumanIndexMod. The heat stress metrics embody three philosophical approaches: comfort, physiology, and empirically based algorithms. The metrics are directly connected to the CLM4.5 BareGroundFluxesMod, CanopyFluxesMod, SLakeFluxesMod, and UrbanMod modules in order to differentiate between the distinct regimes even within one gridcell. This allows CLM4.5 to calculate the instantaneous heat stress at every model time step, for every land surface type, capturing all aspects of non-linearity in moisture-temperature covariance. Secondary modules for initialization and archiving are modified to generate the metrics as standard output. All of the metrics implemented depend on the covariance of near-surface atmospheric variables: temperature, pressure, and humidity. Accurate wet-bulb temperatures are critical for quantifying heat stress (they are used by 5 of the 9 heat stress metrics). Unfortunately, the moist thermodynamic calculations needed for accurate wet-bulb temperatures are not in CLM4.5. To remedy this, we incorporated comprehensive water vapor calculations into CLM4.5. The three advantages of adding these metrics to CLM4.5 are (1) improved thermodynamic calculations within climate models, (2) quantifying human heat stress, and (3) that these metrics may be applied to other animals as well as to industrial applications. Additionally, an offline version of the HumanIndexMod is available for applications with weather and climate datasets. Examples of such applications are the high-temporal-resolution CMIP5 archived data, weather and research forecasting models, CLM4.5 flux tower simulations (or other land surface model validation studies), and local weather station data analysis. To demonstrate the capabilities of the HumanIndexMod, we analyze the top 1% of heat stress events from 1901-2010 at a 4× daily resolution from a global CLM4.5 simulation. We cross-compare these events with the input moisture and temperature conditions and with each metric. Our results show that heat stress may be divided into two regimes: arid and non-arid. The highest heat stress values are in areas with strong convection (±30° latitude). Equatorial regions have low variability in heat stress values (±20° latitude). Arid regions have large variability in extreme heat stress as compared to the low latitudes.
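For readers who want a feel for the wet-bulb calculation, here is a minimal offline sketch using the Stull (2011) empirical approximation together with a simplified (indoor/shaded) wet-bulb globe temperature; this is only an approximation for near-surface, near-sea-level conditions and is not the comprehensive moist thermodynamics implemented in the HumanIndexMod.

import numpy as np

def wet_bulb_stull(t_c, rh_pct):
    # Stull (2011) empirical wet-bulb approximation (deg C) from air temperature
    # (deg C) and relative humidity (%); valid roughly for RH > 5% near sea level.
    return (t_c * np.arctan(0.151977 * np.sqrt(rh_pct + 8.313659))
            + np.arctan(t_c + rh_pct) - np.arctan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * np.arctan(0.023101 * rh_pct)
            - 4.686035)

def simplified_wbgt(t_c, rh_pct):
    # Indoor/shaded simplification of wet-bulb globe temperature: 0.7*Tw + 0.3*Ta.
    return 0.7 * wet_bulb_stull(t_c, rh_pct) + 0.3 * t_c

print(wet_bulb_stull(35.0, 60.0), simplified_wbgt(35.0, 60.0))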
The Death of Socrates: Managerialism, Metrics and Bureaucratisation in Universities
ERIC Educational Resources Information Center
Orr, Yancey; Orr, Raymond
2016-01-01
Neoliberalism extols the ability of unregulated markets to optimise human relations. Yet, as David Graeber has recently illustrated, it is paradoxically built on rigorous systems of rules, metrics and managers. The potential transition to a market-based tuition and research-funding model for higher education in Australia has, not surprisingly,…
Temporal Variability of Daily Personal Magnetic Field Exposure Metrics in Pregnant Women
Lewis, Ryan C.; Evenson, Kelly R.; Savitz, David A.; Meeker, John D.
2015-01-01
Recent epidemiology studies of power-frequency magnetic fields and reproductive health have characterized exposures using data collected from personal exposure monitors over a single day, possibly resulting in exposure misclassification due to temporal variability in daily personal magnetic field exposure metrics, but relevant data in adults are limited. We assessed the temporal variability of daily central tendency (time-weighted average, median) and peak (upper percentiles, maximum) personal magnetic field exposure metrics over seven consecutive days in 100 pregnant women. When exposure was modeled as a continuous variable, central tendency metrics had substantial reliability, whereas peak metrics had fair (maximum) to moderate (upper percentiles) reliability. The predictive ability of a single day metric to accurately classify participants into exposure categories based on a weeklong metric depended on the selected exposure threshold, with sensitivity decreasing with increasing exposure threshold. Consistent with the continuous measures analysis, sensitivity was higher for central tendency metrics than for peak metrics. If there is interest in peak metrics, more than one day of measurement is needed over the window of disease susceptibility to minimize measurement error, but one day may be sufficient for central tendency metrics. PMID:24691007
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
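The following toy sketch illustrates the general recipe, a parametric (Gaussian) likelihood approximation built from repeated stochastic simulations and embedded in a Metropolis-Hastings sampler; the two-parameter "model" and its summary statistic are invented stand-ins for FORMIND and its inventory summaries.

import numpy as np

rng = np.random.default_rng(42)

def simulate(theta, n_rep=50):
    # Toy stochastic model: summary statistic is a noisy exponential response;
    # stands in for summaries of simulated forest inventories.
    growth, noise = theta
    return 100 * np.exp(growth) + noise * rng.standard_normal(n_rep)

def synthetic_loglik(theta, observed_summary):
    # Parametric (Gaussian) likelihood approximation built from simulations.
    sims = simulate(theta)
    mu, sd = sims.mean(), sims.std(ddof=1) + 1e-6
    return -0.5 * ((observed_summary - mu) / sd) ** 2 - np.log(sd)

def mcmc(observed_summary, n_iter=2000, step=0.05):
    theta = np.array([0.1, 5.0])
    ll = synthetic_loglik(theta, observed_summary)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(2)
        if prop[1] <= 0:                    # keep the noise parameter positive
            chain.append(theta.copy())
            continue
        ll_prop = synthetic_loglik(prop, observed_summary)
        if np.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain.append(theta.copy())
    return np.array(chain)

obs = 100 * np.exp(0.2)                     # "virtual inventory" summary
posterior = mcmc(obs)
print(posterior[len(posterior) // 2:].mean(axis=0))   # posterior mean after burn-in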
NASA Astrophysics Data System (ADS)
Dhungel, S.; Barber, M. E.
2016-12-01
The objectives of this paper are to use an automated satellite-based remote sensing evapotranspiration (ET) model to assist in parameterization of a cropping system model (CropSyst) and to examine the variability of consumptive water use of various crops across the watershed. The remote sensing model is a modified version of the Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC™) energy balance model. We present the application of an automated python-based implementation of METRIC to estimate ET as consumptive water use for agricultural areas in three watersheds in Eastern Washington - Walla Walla, Lower Yakima and Okanogan. We used these ET maps with USDA crop data to identify the variability of crop growth and water use for the major crops in these three watersheds. Some crops, such as grapes and alfalfa, showed high variability in water use in the watershed while others, such as corn, had comparatively less variability. The results helped us to estimate the range and variability of various crop parameters that are used in CropSyst. The paper also presents a systematic approach to estimate parameters of CropSyst for a crop in a watershed using METRIC results. Our initial application of this approach was used to estimate irrigation application rate for CropSyst for a selected farm in Walla Walla and was validated by comparing crop growth (as Leaf Area Index - LAI) and consumptive water use (ET) from METRIC and CropSyst. This coupling of METRIC with CropSyst will allow for more robust parameters in CropSyst and will enable accurate predictions of changes in irrigation practices and crop rotation, which are a challenge in many cropping system models.
Consumer Neuroscience-Based Metrics Predict Recall, Liking and Viewing Rates in Online Advertising.
Guixeres, Jaime; Bigné, Enrique; Ausín Azofra, Jose M; Alcañiz Raya, Mariano; Colomer Granero, Adrián; Fuentes Hurtado, Félix; Naranjo Ornedo, Valery
2017-01-01
The purpose of the present study is to investigate whether the effectiveness of a new ad on digital channels (YouTube) can be predicted by using neural networks and neuroscience-based metrics (brain response, heart rate variability and eye tracking). Neurophysiological records were collected from 35 participants who were exposed to 8 relevant TV Super Bowl commercials. Correlations between the neurophysiological-based metrics, ad recall, ad liking, the ACE Metrix score and the number of views on YouTube during a year were investigated. Our findings suggest a significant correlation between the neuroscience metrics and both self-reported ad effectiveness and the direct number of views on the YouTube channel. In addition, using an artificial neural network based on the neuroscience metrics, the model classifies ads (82.9% average accuracy) and estimates the number of online views (mean error of 0.199). The results highlight the validity of neuromarketing-based techniques for predicting the success of advertising responses. Practitioners can consider the proposed methodology at the design stages of advertising content, thus enhancing advertising effectiveness. The study pioneers the use of neurophysiological methods in predicting advertising success in a digital context. This is the first article to examine whether these measures can actually be used for predicting views for advertising on YouTube.
A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birchfield, Adam; Schweitzer, Eran; Athari, Mir
2017-08-19
Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.
Performance metrics for the assessment of satellite data products: an ocean color case study
Seegers, Bridget N.; Stumpf, Richard P.; Schaeffer, Blake A.; Loftin, Keith A.; Werdell, P. Jeremy
2018-01-01
Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coefficient of determination (r2), root mean square error, and regression slopes, are most appropriate for Gaussian distributions without outliers and, therefore, are often not ideal for ocean color algorithm performance assessment, which is often limited by sample availability. In contrast, metrics based on simple deviations, such as bias and mean absolute error, as well as pair-wise comparisons, often provide more robust and straightforward quantities for evaluating ocean color algorithms with non-Gaussian distributions and outliers. This study uses a SeaWiFS chlorophyll-a validation data set to demonstrate a framework for satellite data product assessment and recommends a multi-metric and user-dependent approach that can be applied within science, modeling, and resource management communities. PMID:29609296
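A minimal sketch of such simple-deviation metrics for a chlorophyll-a match-up set is given below, computing bias and mean absolute error as multiplicative factors in log10 space alongside r2 for comparison; the input arrays are illustrative placeholders, not real match-up data.

import numpy as np

def log_metrics(observed, modeled):
    # Bias and mean absolute error computed in log10 space, reported as
    # multiplicative factors, as is common for chlorophyll-a validation.
    d = np.log10(modeled) - np.log10(observed)
    bias = 10 ** np.mean(d)          # 1.0 = no bias; 1.5 = 50% high on average
    mae = 10 ** np.mean(np.abs(d))   # typical multiplicative error
    r = np.corrcoef(np.log10(observed), np.log10(modeled))[0, 1]
    return {"bias": bias, "mae": mae, "r2": r ** 2}

obs = np.array([0.05, 0.2, 0.8, 3.0, 12.0])      # in situ chlorophyll-a (mg m-3), toy values
sat = np.array([0.07, 0.18, 1.1, 2.4, 20.0])     # satellite retrievals, toy values
print(log_metrics(obs, sat))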
Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A
2015-01-01
Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as the modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown to be capable of overcoming these limitations and are able to comprehensively evaluate medical imaging systems. We present a spatial-domain channelized Hotelling observer model that calculates the detectability index (DI) of disks of different sizes and use it to compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
Teschke, Kay; Spierings, Judith; Marion, Stephen A; Demers, Paul A; Davies, Hugh W; Kennedy, Susan M
2004-12-01
In a study of wood dust exposure and lung function, we tested the effect on the exposure-response relationship of six different exposure metrics, using the mean measured exposure of each subject versus the mean exposure based on various methods of grouping subjects, including job-based groups and groups based on an empirical model of the determinants of exposure. Multiple linear regression was used to examine the association between wood dust concentration and forced expiratory volume in 1 s (FEV1), adjusting for age, sex, height, race, pediatric asthma, and smoking. Stronger point estimates of the exposure-response relationships were observed when exposures were based on increasing levels of aggregation, allowing the relationships to be found statistically significant in four of the six metrics. The strongest point estimates were found when exposures were based on the determinants-of-exposure model. Determinants-of-exposure modeling offers the potential for improvement in risk estimation equivalent to or beyond that from job-based exposure grouping.
Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.
Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier
2017-07-10
A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.
NASA Astrophysics Data System (ADS)
Gulliver, John; de Hoogh, Kees; Fecht, Daniela; Vienneau, Danielle; Briggs, David
2011-12-01
The development of geographical information system techniques has opened up a wide array of methods for air pollution exposure assessment. The extent to which these provide reliable estimates of air pollution concentrations is nevertheless not clearly established. Nor is it clear which methods or metrics should be preferred in epidemiological studies. This paper compares the performance of ten different methods and metrics in terms of their ability to predict mean annual PM10 concentrations across 52 monitoring sites in London, UK. Metrics analysed include indicators (distance to nearest road, traffic volume on nearest road, heavy duty vehicle (HDV) volume on nearest road, road density within 150 m, traffic volume within 150 m and HDV volume within 150 m) and four modelling approaches: based on the nearest monitoring site, kriging, dispersion modelling and land use regression (LUR). Measures were computed in a GIS, and the resulting metrics calibrated and validated against monitoring data using a form of grouped jack-knife analysis. The results show that PM10 concentrations across London show little spatial variation. As a consequence, most methods can predict the average without serious bias. Few of the approaches, however, show good correlations with monitored PM10 concentrations, and most predict no better than a simple classification based on site type. Only land use regression reaches acceptable levels of correlation (R2 = 0.47), though this can be improved by also including information on site type. This might therefore be taken as a recommended approach in many studies, though care is needed in developing meaningful land use regression models, and like any method they need to be validated against local data before their application as part of epidemiological studies.
Quantifying seascape structure: Extending terrestrial spatial pattern metrics to the marine realm
Wedding, L.M.; Christopher, L.A.; Pittman, S.J.; Friedlander, A.M.; Jorgensen, S.
2011-01-01
Spatial pattern metrics have routinely been applied to characterize and quantify structural features of terrestrial landscapes and have demonstrated great utility in landscape ecology and conservation planning. The important role of spatial structure in ecology and management is now commonly recognized, and recent advances in marine remote sensing technology have facilitated the application of spatial pattern metrics to the marine environment. However, it is not yet clear whether concepts, metrics, and statistical techniques developed for terrestrial ecosystems are relevant for marine species and seascapes. To address this gap in our knowledge, we reviewed, synthesized, and evaluated the utility and application of spatial pattern metrics in the marine science literature over the past 30 yr (1980 to 2010). In total, 23 studies characterized seascape structure, of which 17 quantified spatial patterns using a 2-dimensional patch-mosaic model and 5 used a continuously varying 3-dimensional surface model. Most seascape studies followed terrestrial-based studies in their search for ecological patterns and applied or modified existing metrics. Only 1 truly unique metric was found (hydrodynamic aperture applied to Pacific atolls). While there are still relatively few studies using spatial pattern metrics in the marine environment, they have suffered from similar misuse as reported for terrestrial studies, such as the lack of a priori considerations or the problem of collinearity between metrics. Spatial pattern metrics offer great potential for ecological research and environmental management in marine systems, and future studies should focus on (1) the dynamic boundary between the land and sea; (2) quantifying 3-dimensional spatial patterns; and (3) assessing and monitoring seascape change. © Inter-Research 2011.
Research and development on performance models of thermal imaging systems
NASA Astrophysics Data System (ADS)
Wang, Ji-hui; Jin, Wei-qi; Wang, Xia; Cheng, Yi-nan
2009-07-01
Traditional ACQUIRE models perform the discrimination tasks of detection, target orientation, recognition, and identification for military targets based upon the minimum resolvable temperature difference (MRTD) and the Johnson criteria for thermal imaging systems (TIS). With the development of focal plane array (FPA) detectors and digital image processing technology, the Johnson criteria are generally pessimistic for performance prediction of sampled imagers. The triangle orientation discrimination threshold (TOD) model, the minimum temperature difference perceived (MTDP)/thermal range model (TRM3), and the target task performance (TTP) metric have been developed to predict the performance of sampled imagers; the TTP metric in particular can provide better accuracy than the Johnson criteria. In this paper, the performance models above are described; channel width metrics are presented to describe synthesis performance, including modulation transfer function (MTF) channel width for high signal-to-noise ratio (SNR) optoelectronic imaging systems and MRTD channel width for low-SNR TIS; the unresolved questions for performance assessment of TIS are indicated; finally, the development directions of performance models for TIS are discussed.
Qu, Xin; Hall, Alex; DeAngelis, Anthony M.; ...
2018-01-11
Differences among climate models in equilibrium climate sensitivity (ECS; the equilibrium surface temperature response to a doubling of atmospheric CO2) remain a significant barrier to the accurate assessment of societally important impacts of climate change. Relationships between ECS and observable metrics of the current climate in model ensembles, so-called emergent constraints, have been used to constrain ECS. Here a statistical method (including a backward selection process) is employed to achieve a better statistical understanding of the connections between four recently proposed emergent constraint metrics and individual feedbacks influencing ECS. The relationship between each metric and ECS is largely attributable to a statistical connection with shortwave low cloud feedback, the leading cause of intermodel ECS spread. This result bolsters confidence in some of the metrics, which had assumed such a connection in the first place. Additional analysis is conducted with a few thousand artificial metrics that are randomly generated but are well correlated with ECS. The relationships between the contrived metrics and ECS can also be linked statistically to shortwave cloud feedback. Thus, any proposed or forthcoming ECS constraint based on the current generation of climate models should be viewed as a potential constraint on shortwave cloud feedback, and physical links with that feedback should be investigated to verify that the constraint is real. Additionally, any proposed ECS constraint should not be taken at face value since other factors influencing ECS besides shortwave cloud feedback could be systematically biased in the models.
Automation of Endmember Pixel Selection in SEBAL/METRIC Model
NASA Astrophysics Data System (ADS)
Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.
2015-12-01
The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC) models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.
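As a simple baseline for what automation of this step can look like, the sketch below ranks hot/cold candidate pixels from NDVI and land surface temperature grids using percentile thresholds; this heuristic is an assumption for illustration and is not the machine-learning and search approach developed in the paper.

import numpy as np

def candidate_endmembers(ndvi, lst, n=10):
    # Cold candidates: well-vegetated and cool; hot candidates: bare and warm.
    cold_mask = (ndvi > np.nanpercentile(ndvi, 95)) & (lst < np.nanpercentile(lst, 20))
    hot_mask = (ndvi < np.nanpercentile(ndvi, 10)) & (lst > np.nanpercentile(lst, 80))
    cold_idx = np.flatnonzero(cold_mask.ravel())
    hot_idx = np.flatnonzero(hot_mask.ravel())
    # Rank candidates by extremeness of surface temperature and keep the top n.
    cold = cold_idx[np.argsort(lst.ravel()[cold_idx])][:n]
    hot = hot_idx[np.argsort(-lst.ravel()[hot_idx])][:n]
    return cold, hot           # flat pixel indices

rng = np.random.default_rng(3)
ndvi = rng.uniform(0.05, 0.9, size=(100, 100))                 # toy NDVI grid
lst = 330 - 25 * ndvi + rng.normal(0, 1.5, size=(100, 100))    # cooler where vegetated (K)
cold_px, hot_px = candidate_endmembers(ndvi, lst)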
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
NASA Astrophysics Data System (ADS)
Sullivan, F.; Ollinger, S. V.; Palace, M. W.; Ouimette, A.; Sanders-DeMott, R.; Lepine, L. C.
2017-12-01
The correlation between near-infrared reflectance and forest canopy nitrogen concentration has been demonstrated at varying scales using a range of optical sensors on airborne and satellite platforms. Although the mechanism underpinning the relationship is unclear, at its basis are biologically-driven functional relationships of multiple plant traits that affect canopy chemistry and structure. The link between near-infrared reflectance and canopy nitrogen has been hypothesized to be partially driven by covariation of canopy nitrogen with canopy structure. In this study, we used a combination of airborne LiDAR data and field measured leaf and canopy chemical and structural traits to explore interrelationships between canopy nitrogen, near-infrared reflectance, and canopy structure on plots at Bartlett Experimental Forest in the White Mountain National Forest, New Hampshire. Over each plot, we developed a 1-meter resolution canopy height profile and a 1-meter resolution canopy height model. From canopy height profiles and canopy height models, we calculated a set of metrics describing the plot-level variability, breadth, depth, and arrangement of LiDAR returns. This combination of metrics was used to describe both vertical and horizontal variation in structure. In addition, we developed and measured several field-based metrics of leaf and canopy structure at the plot scale by directly measuring the canopy or by weighting leaf-level metrics by species leaf area contribution. We assessed relationships between leaf and structural metrics, near-infrared reflectance and canopy nitrogen concentration using multiple linear regression and mixed effects modeling. Consistent with our hypothesis, we found moderately strong links between both near-infrared reflectance and canopy nitrogen concentration with LiDAR-derived structural metrics, and we additionally found that leaf-level metrics scaled to the plot level share an important role in canopy reflectance. We suggest that canopy structure has a governing role in canopy reflectance, reducing maximum potential reflectance as structural complexity increases, and therefore also influences the relationship between canopy nitrogen and NIR reflectance.
A biologically plausible computational model for auditory object recognition.
Larson, Eric; Billimoria, Cyrus P; Sen, Kamal
2009-01-01
Object recognition is a task of fundamental importance for sensory systems. Although this problem has been intensively investigated in the visual system, relatively little is known about the recognition of complex auditory objects. Recent work has shown that spike trains from individual sensory neurons can be used to discriminate between and recognize stimuli. Multiple groups have developed spike similarity or dissimilarity metrics to quantify the differences between spike trains. Using a nearest-neighbor approach the spike similarity metrics can be used to classify the stimuli into groups used to evoke the spike trains. The nearest prototype spike train to the tested spike train can then be used to identify the stimulus. However, how biological circuits might perform such computations remains unclear. Elucidating this question would facilitate the experimental search for such circuits in biological systems, as well as the design of artificial circuits that can perform such computations. Here we present a biologically plausible model for discrimination inspired by a spike distance metric using a network of integrate-and-fire model neurons coupled to a decision network. We then apply this model to the birdsong system in the context of song discrimination and recognition. We show that the model circuit is effective at recognizing individual songs, based on experimental input data from field L, the avian primary auditory cortex analog. We also compare the performance and robustness of this model to two alternative models of song discrimination: a model based on coincidence detection and a model based on firing rate.
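For intuition, here is a minimal sketch of spike-train recognition with a van Rossum-style distance (exponential filtering followed by an L2 comparison) and a nearest-template decision; the integrate-and-fire network and decision circuit of the model are not reproduced, and the "songs" are random toy spike trains.

import numpy as np

def filter_spikes(spike_times, t_max=1.0, dt=0.001, tau=0.01):
    # Convolve a spike train with a causal exponential kernel.
    t = np.arange(0.0, t_max, dt)
    trace = np.zeros_like(t)
    for s in spike_times:
        mask = t >= s
        trace[mask] += np.exp(-(t[mask] - s) / tau)
    return trace

def spike_distance(a, b, **kw):
    fa, fb = filter_spikes(a, **kw), filter_spikes(b, **kw)
    return np.sqrt(np.sum((fa - fb) ** 2))

def recognize(test_train, templates):
    # Nearest-template decision, standing in for the decision network.
    d = {name: spike_distance(test_train, tpl) for name, tpl in templates.items()}
    return min(d, key=d.get)

rng = np.random.default_rng(7)
templates = {"song_A": np.sort(rng.uniform(0, 1, 30)),
             "song_B": np.sort(rng.uniform(0, 1, 30))}
jittered = np.clip(templates["song_A"] + rng.normal(0, 0.002, 30), 0, 0.999)
print(recognize(jittered, templates))     # expected: "song_A"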
Multistressor predictive models of invertebrate condition in the Corn Belt, USA
Waite, Ian R.; Van Metre, Peter C.
2017-01-01
Understanding the complex relations between multiple environmental stressors and ecological conditions in streams can help guide resource-management decisions. During 14 weeks in spring/summer 2013, personnel from the US Geological Survey and the US Environmental Protection Agency sampled 98 wadeable streams across the Midwest Corn Belt region of the USA for water and sediment quality, physical and habitat characteristics, and ecological communities. We used these data to develop independent predictive disturbance models for 3 macroinvertebrate metrics and a multimetric index. We developed the models based on boosted regression trees (BRT) for 3 stressor categories, land use/land cover (geographic information system [GIS]), all in-stream stressors combined (nutrients, habitat, and contaminants), and for GIS plus in-stream stressors. The GIS plus in-stream stressor models had the best overall performance with an average cross-validation R2 across all models of 0.41. The models were generally consistent in the explanatory variables selected within each stressor group across the 4 invertebrate metrics modeled. Variables related to riparian condition, substrate size or embeddedness, velocity and channel shape, nutrients (primarily NH3), and contaminants (pyrethroid degradates) were important descriptors of the invertebrate metrics. Models based on all measured in-stream stressors performed comparably to models based on GIS landscape variables, suggesting that the in-stream stressor characterization reasonably represents the dominant factors affecting invertebrate communities and that GIS variables are acting as surrogates for in-stream stressors that directly affect in-stream biota.
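A minimal boosted-regression-tree sketch in scikit-learn is shown below on synthetic plot-level data; the predictor names, sample size and hyperparameters are placeholders and do not correspond to the study's actual dataset or BRT settings.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n = 98                                   # number of sampled stream sites
X = rng.random((n, 5))                   # e.g., riparian cover, embeddedness, NH3, ... (toy)
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.3, n)   # invertebrate metric (toy)

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01,
                                max_depth=3, subsample=0.75, random_state=0)
cv_r2 = cross_val_score(brt, X, y, cv=5, scoring="r2")
print(cv_r2.mean())                      # cross-validated R2, analogous to model performance

brt.fit(X, y)
print(brt.feature_importances_)          # relative influence of each stressor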
Advanced Life Support Research and Technology Development Metric
NASA Technical Reports Server (NTRS)
Hanford, A. J.
2004-01-01
The Metric is one of several measures employed by NASA to assess the Agency's progress as mandated by the United States Congress and the Office of Management and Budget. Because any measure must have a reference point, whether explicitly defined or implied, the Metric is a comparison between a selected ALS Project life support system and an equivalently detailed life support system using technology from the Environmental Control and Life Support System (ECLSS) for the International Space Station (ISS). This document provides the official calculation of the Advanced Life Support (ALS) Research and Technology Development Metric (the Metric) for Fiscal Year 2004. The values are primarily based on Systems Integration, Modeling, and Analysis (SIMA) Element approved software tools or reviewed and approved reference documents. For Fiscal Year 2004, the Advanced Life Support Research and Technology Development Metric value is 2.03 for an Orbiting Research Facility and 1.62 for an Independent Exploration Mission.
NASA Astrophysics Data System (ADS)
Safari, A.; Sohrabi, H.
2016-06-01
The role of forests as a carbon reservoir has prompted the need for timely and reliable estimation of aboveground carbon stocks. Since field measurement of the aboveground carbon stocks of forests is a destructive, costly and time-consuming activity, aerial and satellite remote sensing techniques have attracted much attention in this field. Although using aerial data to predict aboveground carbon stocks has been proved to be a highly accurate method, there are challenges related to high acquisition costs, small area coverage, and limited availability of these data. These challenges are more critical for non-commercial forests located in low-income countries. The Landsat program provides repetitive acquisition of high-resolution multispectral data, which are freely available. The aim of this study was to assess the potential of multispectral Landsat 8 Operational Land Imager (OLI) derived texture metrics in quantifying the aboveground carbon stocks of coppice Oak forests in the Zagros Mountains, Iran. We used four different window sizes (3×3, 5×5, 7×7, and 9×9) and four different offsets ([0,1], [1,1], [1,0], and [1,-1]) to derive nine texture metrics (angular second moment, contrast, correlation, dissimilarity, entropy, homogeneity, inverse difference, mean, and variance) from four bands (blue, green, red, and infrared). In total, 124 sample plots in two different forests were measured and carbon was calculated using species-specific allometric models. Stepwise regression analysis was applied to estimate biomass from the derived metrics. Results showed that, in general, larger windows for deriving texture metrics resulted in models with better fitting parameters. In addition, the correlation of the spectral bands for deriving texture metrics in the regression models was ranked as b4>b3>b2>b5. The best offset was [1,-1]. Amongst the different metrics, mean and entropy entered most of the regression models. Overall, the different models based on derived texture metrics were able to explain about half of the variation in aboveground carbon stocks. These results demonstrate that Landsat 8 derived texture metrics can be applied for mapping the aboveground carbon stocks of coppice Oak forests over large areas.
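A small sketch of deriving such gray-level co-occurrence (GLCM) texture metrics from a single band window with scikit-image follows; the quantization to 32 levels, the window indexing and the use of a single standard GLCM angle are illustrative assumptions, and entropy, mean and variance are computed directly from the normalized co-occurrence matrix.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_metrics(window, levels=32):
    # Quantize the reflectance window to a small number of gray levels.
    q = (np.digitize(window, np.linspace(window.min(), window.max(), levels)) - 1).astype(np.uint8)
    # One co-occurrence matrix at distance 1; the four offsets tested in the
    # study correspond to the four standard GLCM angles (0, 45, 90, 135 degrees).
    glcm = graycomatrix(q, distances=[1], angles=[np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    out = {prop: float(graycoprops(glcm, prop)[0, 0])
           for prop in ("ASM", "contrast", "correlation", "dissimilarity", "homogeneity")}
    out["entropy"] = float(-np.sum(p[p > 0] * np.log(p[p > 0])))
    i = np.arange(levels)
    out["mean"] = float(np.sum(i[:, None] * p))
    out["variance"] = float(np.sum((i[:, None] - out["mean"]) ** 2 * p))
    return out

band = np.random.default_rng(5).random((90, 90))   # stand-in for one Landsat 8 OLI band
print(texture_metrics(band[0:9, 0:9]))             # metrics for a single 9x9 window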
McShane, Ryan R.; Driscoll, Katelyn P.; Sando, Roy
2017-09-27
Many approaches have been developed for measuring or estimating actual evapotranspiration (ETa), and research over many years has led to the development of remote sensing methods that are reliably reproducible and effective in estimating ETa. Several remote sensing methods can be used to estimate ETa at the high spatial resolution of agricultural fields and the large extent of river basins. More complex remote sensing methods apply an analytical approach to ETa estimation using physically based models of varied complexity that require a combination of ground-based and remote sensing data, and are grounded in the theory behind the surface energy balance model. This report, funded through cooperation with the International Joint Commission, provides an overview of selected remote sensing methods used for estimating water consumed through ETa and focuses on Mapping Evapotranspiration at High Resolution with Internalized Calibration (METRIC) and Operational Simplified Surface Energy Balance (SSEBop), two energy balance models for estimating ETa that are currently applied successfully in the United States. The METRIC model can produce maps of ETa at high spatial resolution (30 meters using Landsat data) for specific areas smaller than several hundred square kilometers in extent, an improvement in practice over methods used more generally at larger scales. Many studies validating METRIC estimates of ETa against measurements from lysimeters have shown model accuracies on daily to seasonal time scales ranging from 85 to 95 percent. The METRIC model is accurate, but the greater complexity of METRIC results in greater data requirements, and the internalized calibration of METRIC leads to greater skill required for implementation. In contrast, SSEBop is a simpler model, having reduced data requirements and greater ease of implementation without a substantial loss of accuracy in estimating ETa. The SSEBop model has been used to produce maps of ETa over very large extents (the conterminous United States) using lower spatial resolution (1 kilometer) Moderate Resolution Imaging Spectroradiometer (MODIS) data. Model accuracies ranging from 80 to 95 percent on daily to annual time scales have been shown in numerous studies that validated ETa estimates from SSEBop against eddy covariance measurements. The METRIC and SSEBop models can incorporate low and high spatial resolution data from MODIS and Landsat, but the high spatiotemporal resolution of ETa estimates using Landsat data over large extents takes immense computing power. Cloud computing is providing an opportunity for processing an increasing amount of geospatial "big data" in a decreasing period of time. For example, Google Earth Engine™ has been used to implement METRIC with automated calibration for regional-scale estimates of ETa using Landsat data. The U.S. Geological Survey also is using Google Earth Engine™ to implement SSEBop for estimating ETa in the United States at a continental scale using Landsat data.
NASA Astrophysics Data System (ADS)
Schwabe, O.; Shehab, E.; Erkoyuncu, J.
2015-08-01
The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates, based on a literature review, an evaluation of publicly funded projects such as those of the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art), and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and on the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and the number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of theoretically grounded guidance for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis for future work in this field.
ERIC Educational Resources Information Center
Hinrichs, Roy S., Comp.
The manual provides industrial arts instructors with information necessary to introduce and teach the metric system to their students. The instructional unit is based on a project, the building of a model automobile racer propelled by a carbon dioxide cartridge. To add interest and enthusiasm, Statewide racing competition in which students may…
A Metric Model for Intranet Portal Business Requirements
2003-12-01
Describes a means to calculate return on intranet metrics investment (ROIMI) with a common unit of analysis for both aggregate and sub-corporate levels, through forms of the Knowledge Value Added (KVA) and Activity Based... approaches.
NASA Astrophysics Data System (ADS)
Jaber, Salahuddin M.
Soil organic carbon (SOC) sequestration is a component of larger strategies to control the accumulation of greenhouse gases that may be causing global warming. To implement this approach, it is necessary to improve the methods of measuring SOC content. Among these methods are indirect remote sensing and geographic information systems (GIS) techniques that are required to provide non-intrusive, low cost, and spatially continuous information that covers large areas on a repetitive basis. The main goal of this study is to evaluate the effects of using Hyperion hyperspectral data on improving the existing remote sensing and GIS-based methodologies for rapidly, efficiently, and accurately measuring SOC content on farmland. The study area is Big Creek Watershed (BCW) in Southern Illinois. The methodology consists of compiling a GIS database (consisting of remote sensing and soil variables) for 303 composite soil samples collected from representative pixels along the Hyperion coverage area of the watershed. Stepwise procedures were used to calibrate and validate linear multiple regression models where SOC was regarded as the response and the other remote sensing and soil variables as the predictors. Two models were selected: the best all-variables model and the best raster-only variables model. Map algebra was implemented to extrapolate the best raster-only variables model and produce a SOC map for the BCW. This study concluded that Hyperion data marginally improved the predictability of the existing SOC statistical models based on multispectral satellite remote sensing sensors, with a correlation coefficient of 0.37 and a root mean square error of 3.19 metric tons/hectare to a 15-cm depth. The total SOC pool of the study area is about 225,232 metric tons to 15-cm depth. The nonforested wetlands contained the highest SOC density (34.3 metric tons/hectare/15cm) with total SOC content of about 2,003.5 metric tons to 15-cm depth, whereas croplands had the lowest SOC density (21.6 metric tons/hectare/15cm) with total SOC content of about 44,571.2 metric tons to 15-cm depth.
Resilience Metrics for the Electric Power System: A Performance-Based Approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vugrin, Eric D.; Castillo, Andrea R; Silva-Monroy, Cesar Augusto
Grid resilience is a concept related to a power system's ability to continue operating and delivering power even in the event that low probability, high-consequence disruptions such as hurricanes, earthquakes, and cyber-attacks occur. Grid resilience objectives focus on managing and, ideally, minimizing potential consequences that occur as a result of these disruptions. Currently, no formal grid resilience definitions, metrics, or analysis methods have been universally accepted. This document describes an effort to develop and describe grid resilience metrics and analysis methods. The metrics and methods described herein extend upon the Resilience Analysis Process (RAP) developed by Watson et al. for the 2015 Quadrennial Energy Review. The extension allows for both outputs from system models and for historical data to serve as the basis for creating grid resilience metrics and informing grid resilience planning and response decision-making. This document describes the grid resilience metrics and analysis methods. Demonstration of the metrics and methods is shown through a set of illustrative use cases.
Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L
2016-07-01
Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. The dose-volume-based models (standard) performed equally to those incorporating spatial information. Discrimination was similar between models, but the RFC-standard model had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d.=0.09) and 3.9 (s.d.=2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. The RFC-standard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
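To make the kind of internal validation described above concrete, here is a minimal sketch on synthetic data (not the study's dose metrics or code): repeated stratified cross-validation of a random forest classifier, reporting AUC and a calibration slope obtained by refitting a logistic regression on the predicted log-odds. Sample size, feature count, and the number of repeats are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=200, n_features=12, random_state=0)  # stand-in for dose/clinical data
aucs, slopes = [], []
for seed in range(20):                                     # e.g. 20 repeats of 5-fold CV
    for tr, te in StratifiedKFold(5, shuffle=True, random_state=seed).split(X, y):
        p = RandomForestClassifier(random_state=seed).fit(X[tr], y[tr]).predict_proba(X[te])[:, 1]
        aucs.append(roc_auc_score(y[te], p))
        logit = np.log(np.clip(p, 1e-6, 1 - 1e-6) / np.clip(1 - p, 1e-6, 1))
        # Calibration slope: coefficient of the predicted log-odds in a logistic refit.
        slopes.append(LogisticRegression().fit(logit.reshape(-1, 1), y[te]).coef_[0, 0])
print(f"AUC {np.mean(aucs):.2f} +/- {np.std(aucs):.2f}, calibration slope {np.mean(slopes):.2f}")
```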
Designing Industrial Networks Using Ecological Food Web Metrics.
Layton, Astrid; Bras, Bert; Weissburg, Marc
2016-10-18
Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) traditionally has focused on a unit operation and single product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost and emissions based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics were also superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization ranged generally from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and represent guidelines for designing sustainable industry networks.
NASA Astrophysics Data System (ADS)
Schneider, C. A.; Aggett, G. R.; Nevo, A.; Babel, N. C.; Hattendorf, M. J.
2008-12-01
The western United States faces an increasing threat from drought, along with the social, economic, and environmental impacts that come with it. The combination of diminished water supplies and increasing demand for urban and other uses is rapidly depleting surface and ground water reserves traditionally allocated for agricultural use. Quantification of water consumptive use is increasingly important as water resources are placed under growing pressure by increasing numbers of users and interests. Scarce water supplies can be managed more efficiently through use of information and prediction tools accessible via the internet. METRIC (Mapping ET at high Resolution with Internalized Calibration) represents a maturing technology for deriving a remote sensing-based surface energy balance for estimating ET from the earth's surface. This technology has the potential to become widely adopted and used by water resources communities, providing critical support to a host of water decision support tools. ET images created using METRIC or similar remote-sensing-based processing systems could be routinely used as input to operational and planning models for water demand forecasting, reservoir operations, ground-water management, irrigation water supply planning, water rights regulation, and for the improvement, validation, and use of hydrological models. The ET modeling and subsequent validation and distribution of results via the web presented here provides a vehicle through which METRIC ET parameters can be made more accessible to hydrologic modelers. It will enable users of the data to assess the results of the spatially distributed ET modeling and compare them with results from conventional ET estimation methods prior to assimilation in surface and ground water models. In addition, this ET-Server application will provide rapid and transparent access to the data, enabling quantification of uncertainties due to errors in temporal sampling and METRIC modeling, while the GIS-based analytical tools will facilitate quality assessments associated with the selected spatio-temporal scale of interest.
Application of constrained k-means clustering in ground motion simulation validation
NASA Astrophysics Data System (ADS)
Khoshnevis, N.; Taborda, R.
2017-12-01
The validation of ground motion synthetics has received increased attention over the last few years due to the advances in physics-based deterministic and hybrid simulation methods. Unlike for low frequency simulations (f ≤ 0.5 Hz), for which it has become reasonable to expect a good match between synthetics and data, in the case of high-frequency simulations (f ≥ 1 Hz) it is not possible to match results on a wiggle-by-wiggle basis. This is mostly due to the various complexities and uncertainties involved in earthquake ground motion modeling. Therefore, in order to compare synthetics with data, we turn to different time series metrics, which are used as a means to characterize how the synthetics match the data in a qualitative and statistical sense. In general, these metrics provide goodness-of-fit (GOF) scores that measure the level of similarity in the time and frequency domains. It is common for these scores to be scaled from 0 to 10, with 10 representing a perfect match. Although using individual metrics for particular applications is considered more appropriate, there is no consensus or a unified method to classify the comparison between a set of synthetic and recorded seismograms when the various metrics offer different scores. We study the relationship among these metrics through a constrained k-means clustering approach. We define four hypothetical stations with scores of 3, 5, 7, and 9 for all metrics and place them under cannot-link constraints. We generate the dataset through the validation of the results from a deterministic (physics-based) ground motion simulation for a moderate magnitude earthquake in the greater Los Angeles basin using three velocity models. The maximum frequency of the simulation is 4 Hz. The dataset involves over 300 stations and 11 metrics, or features, as they are understood in the clustering process, where the metrics form a multi-dimensional space. We address the high-dimensional feature effects with a subspace-clustering analysis, generate a final labeled dataset of stations, and discuss the within-class statistical characteristics of each metric. Labeling these stations is the first step towards developing a unified metric to evaluate ground motion simulations in an application-independent manner.
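A stripped-down sketch of the clustering idea follows. It is not the authors' constrained k-means: instead of enforcing cannot-link constraints it simply seeds the four cluster centres with the hypothetical anchor stations (uniform scores of 3, 5, 7, and 9 across all metrics), which tends to keep the anchors in separate clusters. The station-by-metric GOF scores are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_stations, n_metrics = 300, 11
gof = np.clip(rng.normal(6.0, 2.0, (n_stations, n_metrics)), 0, 10)  # synthetic GOF scores

# Four anchor stations with uniform scores of 3, 5, 7, and 9 seed the cluster centres.
anchors = np.array([[s] * n_metrics for s in (3, 5, 7, 9)], dtype=float)
km = KMeans(n_clusters=4, init=anchors, n_init=1, random_state=0).fit(gof)

for label, score in zip(range(4), (3, 5, 7, 9)):
    members = gof[km.labels_ == label]
    print(f"cluster seeded at {score}: {len(members)} stations, mean GOF {members.mean():.2f}")
```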
Development of a human cadaver model for training in laparoscopic donor nephrectomy.
Sutton, Erica R H; Billeter, Adrian; Druen, Devin; Roberts, Henry; Rice, Jonathan
2017-06-01
The organ procurement network recommends a surgeon record 15 cases as surgeon or assistant for laparoscopic donor nephrectomies (LDN) prior to independent practice. The literature suggests that the learning curve for improved perioperative and patient outcomes is closer to 35 cases. In this article, we describe our development of a model utilizing fresh tissue and objective, quantifiable endpoints to document surgical progress and efficiency in each of the major steps involved in LDN. Phase I of model development focused on the modifications necessary to maintain visualization for laparoscopic surgery in a human cadaver. Phase II tested proposed learner-based metrics of procedural competency for multiport LDN by timing procedural steps of LDN in a novice learner. Phases I and II required 12 and 9 cadavers, respectively, with a total of 35 kidneys utilized. The following metrics improved with trial number for multiport LDN: time taken for dissection of the gonadal vein, ureter, renal hilum, adrenal and lumbar veins, simulated warm ischemic time (WIT), and operative time. Human cadavers can be used for training in LDN as evidenced by improvements in timed learner-based metrics. This simulation-based model fills a gap in available training options for surgeons. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Riato, Luisa; Leira, Manel; Della Bella, Valentina; Oberholster, Paul J
2018-01-15
Acid mine drainage (AMD) from coal mining in the Mpumalanga Highveld region of South Africa has caused severe chemical and biological degradation of aquatic habitats, specifically depressional wetlands, as mines use these wetlands for storage of AMD. Diatom-based multimetric indices (MMIs) to assess wetland condition have mostly been developed to assess agricultural and urban land use impacts. No diatom MMI of wetland condition has been developed to assess AMD impacts related to mining activities. Previous approaches to diatom-based MMI development in wetlands have not accounted for natural variability. Natural variability among depressional wetlands may influence the accuracy of MMIs. Epiphytic diatom MMIs sensitive to AMD were developed for a range of depressional wetland types to account for natural variation in biological metrics. For this, we classified wetland types based on diatom typologies. A range of 4-15 final metrics were selected from a pool of ~140 candidate metrics to develop the MMIs based on their (1) broad range, (2) high separation power, and (3) low correlation with other metrics. Final metrics were selected from three categories: similarity to reference sites, functional groups, and taxonomic composition, which represent different aspects of diatom assemblage structure and function. MMI performances were evaluated according to their precision in distinguishing reference sites, responsiveness to discriminate reference and disturbed sites, sensitivity to human disturbances and relevancy to AMD-related stressors. Each MMI showed excellent discriminatory power, whether or not it accounted for natural variation. However, accounting for variation by grouping sites based on diatom typologies improved overall performance of MMIs. Our study highlights the usefulness of diatom-based metrics and provides a model for the biological assessment of depressional wetland condition in South Africa and elsewhere. Copyright © 2017 Elsevier B.V. All rights reserved.
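A toy version of that three-step screening might look like the following; the site-by-metric table, the reference/disturbed labels, and the thresholds are invented, and the Mann-Whitney test stands in for whatever separation-power statistic was actually used.

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_sites = 80
is_reference = rng.random(n_sites) < 0.4
impact = np.where(is_reference, 0.0, 1.5)          # disturbed sites shifted on some metrics

candidates = pd.DataFrame({f"metric_{i}": rng.normal(0, 1, n_sites) + (i % 2) * impact
                           for i in range(20)})

selected = []
for name, values in candidates.items():
    if values.max() - values.min() < 1.0:                                  # (1) broad range
        continue
    _, p = mannwhitneyu(values[is_reference], values[~is_reference])
    if p > 0.05:                                                           # (2) separation power
        continue
    if any(abs(values.corr(candidates[s])) > 0.8 for s in selected):       # (3) redundancy check
        continue
    selected.append(name)
print("final metrics:", selected)
```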
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated compared with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. Results obtained demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) The Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII based sampling, and their Pareto-optimal values are better than those of LHS, which means better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
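For orientation, the sketch below shows the GLUE step that either sampler feeds, using plain Latin hypercube sampling from SciPy's QMC module: draw parameter sets, score each against more than one metric, and keep the behavioral sets that pass thresholds on all metrics. The exponential toy model, the Nash-Sutcliffe metrics, and the 0.7 threshold are invented stand-ins for the Xinanjiang model and the study's actual criteria.

```python
import numpy as np
from scipy.stats import qmc

def toy_model(params, t):                         # stand-in for the hydrological model
    a, b = params
    return a * np.exp(-b * t)

t = np.linspace(0, 10, 50)
observed = toy_model((2.0, 0.3), t) + np.random.default_rng(1).normal(0, 0.05, t.size)

sample = qmc.LatinHypercube(d=2, seed=0).random(1000)
params = qmc.scale(sample, l_bounds=[0.5, 0.05], u_bounds=[5.0, 1.0])

def nse(sim, obs):                                # Nash-Sutcliffe efficiency
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

scores = np.array([[nse(toy_model(p, t), observed),
                    nse(toy_model(p, t)[:10], observed[:10])]   # a second, early-time metric
                   for p in params])
behavioral = params[(scores > 0.7).all(axis=1)]   # keep sets acceptable on both metrics
print(f"{len(behavioral)} behavioral parameter sets out of {len(params)}")
```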
Performance Benchmarks for Scholarly Metrics Associated with Fisheries and Wildlife Faculty
Swihart, Robert K.; Sundaram, Mekala; Höök, Tomas O.; DeWoody, J. Andrew; Kellner, Kenneth F.
2016-01-01
Research productivity and impact are often considered in professional evaluations of academics, and performance metrics based on publications and citations increasingly are used in such evaluations. To promote evidence-based and informed use of these metrics, we collected publication and citation data for 437 tenure-track faculty members at 33 research-extensive universities in the United States belonging to the National Association of University Fisheries and Wildlife Programs. For each faculty member, we computed 8 commonly used performance metrics based on numbers of publications and citations, and recorded covariates including academic age (time since Ph.D.), sex, percentage of appointment devoted to research, and the sub-disciplinary research focus. Standardized deviance residuals from regression models were used to compare faculty after accounting for variation in performance due to these covariates. We also aggregated residuals to enable comparison across universities. Finally, we tested for temporal trends in citation practices to assess whether the “law of constant ratios”, used to enable comparison of performance metrics between disciplines that differ in citation and publication practices, applied to fisheries and wildlife sub-disciplines when mapped to Web of Science Journal Citation Report categories. Our regression models reduced deviance by ¼ to ½. Standardized residuals for each faculty member, when combined across metrics as a simple average or weighted via factor analysis, produced similar results in terms of performance based on percentile rankings. Significant variation was observed in scholarly performance across universities, after accounting for the influence of covariates. In contrast to findings for other disciplines, normalized citation ratios for fisheries and wildlife sub-disciplines increased across years. Increases were comparable for all sub-disciplines except ecology. We discuss the advantages and limitations of our methods, illustrate their use when applied to new data, and suggest future improvements. Our benchmarking approach may provide a useful tool to augment detailed, qualitative assessment of performance. PMID:27152838
Indicators and metrics for the assessment of climate engineering
NASA Astrophysics Data System (ADS)
Oschlies, A.; Held, H.; Keller, D.; Keller, K.; Mengis, N.; Quaas, M.; Rickels, W.; Schmidt, H.
2017-01-01
Selecting appropriate indicators is essential to aggregate the information provided by climate model outputs into a manageable set of relevant metrics on which assessments of climate engineering (CE) can be based. From all the variables potentially available from climate models, indicators need to be selected that are able to inform scientists and society on the development of the Earth system under CE, as well as on possible impacts and side effects of various ways of deploying CE or not. However, the indicators used so far have been largely identical to those used in climate change assessments and do not visibly reflect the fact that indicators for assessing CE (and thus the metrics composed of these indicators) may be different from those used to assess global warming. Until now, there has been little dedicated effort to identifying specific indicators and metrics for assessing CE. We here propose that such an effort should be facilitated by a more decision-oriented approach and an iterative procedure in close interaction between academia, decision makers, and stakeholders. Specifically, synergies and trade-offs between social objectives reflected by individual indicators, as well as decision-relevant uncertainties should be considered in the development of metrics, so that society can take informed decisions about climate policy measures under the impression of the options available, their likely effects and side effects, and the quality of the underlying knowledge base.
Context and meter enhance long-range planning in music performance
Mathias, Brian; Pfordresher, Peter Q.; Palmer, Caroline
2015-01-01
Neural responses demonstrate evidence of resonance, or oscillation, during the production of periodic auditory events. Music contains periodic auditory events that give rise to a sense of beat, which in turn generates a sense of meter on the basis of multiple periodicities. Metrical hierarchies may aid memory for music by facilitating similarity-based associations among sequence events at different periodic distances that unfold in longer contexts. A fundamental question is how metrical associations arising from a musical context influence memory during music performance. Longer contexts may facilitate metrical associations at higher hierarchical levels more than shorter contexts, a prediction of the range model, a formal model of planning processes in music performance (Palmer and Pfordresher, 2003; Pfordresher et al., 2007). Serial ordering errors, in which intended sequence events are produced in incorrect sequence positions, were measured as skilled pianists performed musical pieces that contained excerpts embedded in long or short musical contexts. Pitch errors arose from metrically similar positions and further sequential distances more often when the excerpt was embedded in long contexts compared to short contexts. Musicians’ keystroke intensities and error rates also revealed influences of metrical hierarchies, which differed for performances in long and short contexts. The range model accounted for contextual effects and provided better fits to empirical findings when metrical associations between sequence events were included. Longer sequence contexts may facilitate planning during sequence production by increasing conceptual similarity between hierarchically associated events. These findings are consistent with the notion that neural oscillations at multiple periodicities may strengthen metrical associations across sequence events during planning. PMID:25628550
Cho, Woon; Jang, Jinbeum; Koschan, Andreas; Abidi, Mongi A; Paik, Joonki
2016-11-28
A fundamental limitation of hyperspectral imaging is the inter-band misalignment correlated with subject motion during data acquisition. One way of resolving this problem is to assess the alignment quality of hyperspectral image cubes derived from the state-of-the-art alignment methods. In this paper, we present an automatic selection framework for the optimal alignment method to improve the performance of face recognition. Specifically, we develop two qualitative prediction models based on: 1) a principal curvature map for evaluating the similarity index between sequential target bands and a reference band in the hyperspectral image cube as a full-reference metric; and 2) the cumulative probability of target colors in the HSV color space for evaluating the alignment index of a single sRGB image rendered using all of the bands of the hyperspectral image cube as a no-reference metric. We verify the efficacy of the proposed metrics on a new large-scale database, demonstrating a higher prediction accuracy in determining improved alignment compared to two full-reference and five no-reference image quality metrics. We also validate the ability of the proposed framework to improve hyperspectral face recognition.
Integrated framework for developing search and discrimination metrics
NASA Astrophysics Data System (ADS)
Copeland, Anthony C.; Trivedi, Mohan M.
1997-06-01
This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while he studies a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking upon the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.
Regime-based evaluation of cloudiness in CMIP5 models
NASA Astrophysics Data System (ADS)
Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin
2017-01-01
The concept of cloud regimes (CRs) is used to develop a framework for evaluating the cloudiness of 12 models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). Reference CRs come from existing global International Satellite Cloud Climatology Project (ISCCP) weather states. The evaluation is made possible by the implementation in several CMIP5 models of the ISCCP simulator, which generates in each grid cell daily joint histograms of cloud optical thickness and cloud top pressure. Model performance is assessed with several metrics such as CR global cloud fraction (CF), CR relative frequency of occurrence (RFO), their product [long-term average total cloud amount (TCA)], cross-correlations of CR RFO maps, and a metric of resemblance between model and ISCCP CRs. In terms of CR global RFO, arguably the most fundamental metric, the models perform unsatisfactorily overall, except for CRs representing thick storm clouds. Because model CR CF is internally constrained by our method, RFO discrepancies also yield substantial TCA errors. Our results support previous findings that CMIP5 models underestimate cloudiness. The multi-model mean performs well in matching observed RFO maps for many CRs, but is still not the best for this or other metrics. When overall performance across all CRs is assessed, some models, despite shortcomings, apparently outperform Moderate Resolution Imaging Spectroradiometer (MODIS) cloud observations when the latter are evaluated against ISCCP as if they were another model's output. Lastly, contrasting cloud simulation performance with each model's equilibrium climate sensitivity, in order to gain insight into whether good cloud simulation pairs with particular values of this parameter, yields no clear conclusions.
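The sketch below illustrates, with synthetic regime-label maps rather than CMIP5 or ISCCP output, how three of the metrics named above can be computed: per-regime relative frequency of occurrence (RFO), long-term total cloud amount as the RFO-weighted sum of assumed regime cloud fractions, and the spatial cross-correlation of model and reference RFO maps.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regimes, n_days, n_lat, n_lon = 8, 365, 45, 90
regime_cf = np.linspace(0.95, 0.2, n_regimes)             # assumed mean cloud fraction per regime

def rfo_maps(labels):
    """Fraction of days each grid cell is assigned to each cloud regime."""
    return np.stack([(labels == k).mean(axis=0) for k in range(n_regimes)])

obs = rng.integers(0, n_regimes, (n_days, n_lat, n_lon))   # daily regime labels (reference)
mod = rng.integers(0, n_regimes, (n_days, n_lat, n_lon))   # daily regime labels (model)

rfo_obs, rfo_mod = rfo_maps(obs), rfo_maps(mod)
tca_mod = np.sum(rfo_mod * regime_cf[:, None, None], axis=0)   # long-term total cloud amount
print("global-mean TCA (model):", round(float(tca_mod.mean()), 3))
for k in range(n_regimes):
    r = np.corrcoef(rfo_obs[k].ravel(), rfo_mod[k].ravel())[0, 1]
    print(f"regime {k}: RFO map correlation = {r:.2f}")
```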
Groundwater modelling in decision support: reflections on a unified conceptual framework
NASA Astrophysics Data System (ADS)
Doherty, John; Simmons, Craig T.
2013-11-01
Groundwater models are commonly used as a basis for environmental decision-making. There has been discussion and debate in recent times regarding the issue of model simplicity and complexity. This paper contributes to this ongoing discourse. The selection of an appropriate level of model structural and parameterization complexity is not a simple matter. Although the metrics on which such selection should be based are simple, there are many competing, and often unquantifiable, considerations which must be taken into account as these metrics are applied. A unified conceptual framework is introduced and described which is intended to underpin groundwater modelling in decision support, with a direct focus on matters regarding model simplicity and complexity.
Favazza, Christopher P.; Fetterly, Kenneth A.; Hangiandreou, Nicholas J.; Leng, Shuai; Schueler, Beth A.
2015-01-01
Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial domain-based channelized Hotelling observer model to calculate the detectability index (DI) of differently sized disk objects and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks. PMID:26158086
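A compact numerical sketch of a standard channelized Hotelling observer calculation is given below, using synthetic images and illustrative Gaussian channels rather than the authors' phantom, channel set, or software. The detectability index is the CHO signal-to-noise ratio, DI² = Δvᵀ S⁻¹ Δv, where Δv is the difference of the mean channel outputs and S the average channel covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_img = 32 * 32, 200

# A few radially symmetric Gaussian channels form the channel matrix U (pixels x channels).
yy, xx = np.mgrid[-16:16, -16:16]
r = np.hypot(xx, yy).ravel()
U = np.column_stack([np.exp(-0.5 * (r / s) ** 2) for s in (1.5, 3.0, 6.0, 12.0)])

signal = np.exp(-0.5 * (r / 2.0) ** 2)                      # disk-like signal profile
absent = rng.normal(0, 1, (n_img, n_pix))                   # signal-absent noise images
present = absent + signal                                   # signal-present images

v_a, v_p = absent @ U, present @ U                          # channel outputs
S = 0.5 * (np.cov(v_a, rowvar=False) + np.cov(v_p, rowvar=False))
dv = v_p.mean(axis=0) - v_a.mean(axis=0)
di = np.sqrt(dv @ np.linalg.solve(S, dv))                   # detectability index (CHO SNR)
print(f"DI = {di:.2f}")
```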
Predicting streamflow regime metrics for ungauged streams in Colorado, Washington, and Oregon
NASA Astrophysics Data System (ADS)
Sanborn, Stephen C.; Bledsoe, Brian P.
2006-06-01
Streamflow prediction in ungauged basins provides essential information for water resources planning and management and ecohydrological studies yet remains a fundamental challenge to the hydrological sciences. A methodology is presented for stratifying streamflow regimes of gauged locations, classifying the regimes of ungauged streams, and developing models for predicting a suite of ecologically pertinent streamflow metrics for these streams. Eighty-four streamflow metrics characterizing various flow regime attributes were computed along with physical and climatic drainage basin characteristics for 150 streams with little or no streamflow modification in Colorado, Washington, and Oregon. The diverse hydroclimatology of the study area necessitates flow regime stratification and geographically independent clusters were identified and used to develop separate predictive models for each flow regime type. Multiple regression models for flow magnitude, timing, and rate of change metrics were quite accurate with many adjusted R2 values exceeding 0.80, while models describing streamflow variability did not perform as well. Separate stratification schemes for high, low, and average flows did not considerably improve models for metrics describing those particular aspects of the regime over a scheme based on the entire flow regime. Models for streams identified as 'snowmelt' type were improved if sites in Colorado and the Pacific Northwest were separated to better stratify the processes driving streamflow in these regions thus revealing limitations of geographically independent streamflow clusters. This study demonstrates that a broad suite of ecologically relevant streamflow characteristics can be accurately modeled across large heterogeneous regions using this framework. Applications of the resulting models include stratifying biomonitoring sites and quantifying linkages between specific aspects of flow regimes and aquatic community structure. In particular, the results bode well for modeling ecological processes related to high-flow magnitude, timing, and rate of change such as the recruitment of fish and riparian vegetation across large regions.
Identifying Seizure Onset Zone From the Causal Connectivity Inferred Using Directed Information
NASA Astrophysics Data System (ADS)
Malladi, Rakesh; Kalamangalam, Giridhar; Tandon, Nitin; Aazhang, Behnaam
2016-10-01
In this paper, we developed a model-based and a data-driven estimator for directed information (DI) to infer the causal connectivity graph between electrocorticographic (ECoG) signals recorded from the brain and to identify the seizure onset zone (SOZ) in epileptic patients. Directed information, an information theoretic quantity, is a general metric to infer causal connectivity between time series and is not restricted to a particular class of models, unlike the popular metrics based on Granger causality or transfer entropy. The proposed estimators are shown to be almost surely convergent. Causal connectivity between ECoG electrodes in five epileptic patients is inferred using the proposed DI estimators, after validating their performance on simulated data. We then proposed a model-based and a data-driven SOZ identification algorithm to identify the SOZ from the causal connectivity inferred using the model-based and data-driven DI estimators, respectively. The data-driven SOZ identification outperforms the model-based SOZ identification algorithm when benchmarked against visual analysis by a neurologist, the current clinical gold standard. The causal connectivity analysis presented here is the first step towards developing novel non-surgical treatments for epilepsy.
Tribby, Calvin P.; Miller, Harvey J.; Brown, Barbara B.; Smith, Ken R.; Werner, Carol M.
2017-01-01
There is growing international evidence that supportive built environments encourage active travel such as walking. An unsettled question is the role of geographic regions for analyzing the relationship between the built environment and active travel. This paper examines the geographic region question by assessing walking trip models that use two different regions: walking activity spaces and self-defined neighborhoods. We also use two types of built environment metrics, perceived and audit data, and two types of study design, cross-sectional and longitudinal, to assess these regions. We find that the built environment associations with walking are dependent on the type of metric and the type of model. Audit measures summarized within walking activity spaces better explain walking trips compared to audit measures within self-defined neighborhoods. Perceived measures summarized within self-defined neighborhoods have mixed results. Finally, results differ based on study design. This suggests that results may not be comparable among different regions, metrics and designs; researchers need to consider carefully these choices when assessing active travel correlates. PMID:28237743
Gouda, Hebe N; Critchley, Julia; Powles, John; Capewell, Simon
2012-01-28
Reasons for the widespread declines in coronary heart disease (CHD) mortality in high income countries are controversial. Here we explore how the type of metric chosen for the analyses of these declines affects the answer obtained. The analyses we reviewed were performed using IMPACT, a large Excel based model of the determinants of temporal change in mortality from CHD. Assessments of the decline in CHD mortality in the USA between 1980 and 2000 served as the central case study. Analyses based in the metric of number of deaths prevented attributed about half the decline to treatments (including preventive medications) and half to favourable shifts in risk factors. However, when mortality change was expressed in the metric of life-years-gained, the share attributed to risk factor change rose to 65%. This happened because risk factor changes were modelled as slowing disease progression, such that the hypothetical deaths averted resulted in longer average remaining lifetimes gained than the deaths averted by better treatments. This result was robust to a range of plausible assumptions on the relative effect sizes of changes in treatments and risk factors. Time-based metrics (such as life years) are generally preferable because they direct attention to the changes in the natural history of disease that are produced by changes in key health determinants. The life-years attached to each death averted will also weight deaths in a way that better reflects social preferences.
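The shift in attribution can be reproduced with back-of-the-envelope arithmetic. The numbers below are invented purely to illustrate the mechanism described above (equal deaths averted, but longer average remaining lifetimes for the deaths averted by risk factor change); they are not outputs of the IMPACT model.

```python
# Hypothetical illustration: same number of deaths averted by treatments and by risk
# factor change, but different average remaining lifetimes per death averted.
deaths_treatment, years_per_death_treatment = 500, 8.0
deaths_riskfactor, years_per_death_riskfactor = 500, 15.0

total_deaths = deaths_treatment + deaths_riskfactor
ly_treatment = deaths_treatment * years_per_death_treatment
ly_riskfactor = deaths_riskfactor * years_per_death_riskfactor

print(f"share to risk factors by deaths prevented: {deaths_riskfactor / total_deaths:.0%}")
print(f"share to risk factors by life-years gained: {ly_riskfactor / (ly_treatment + ly_riskfactor):.0%}")
```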
Shaikh, Faiq; Hendrata, Kenneth; Kolowitz, Brian; Awan, Omer; Shrestha, Rasu; Deible, Christopher
2017-06-01
In the era of value-based healthcare, many aspects of medical care are being measured and assessed to improve quality and reduce costs. Radiology adds enormously to health care costs and is under pressure to adopt a more efficient system that incorporates essential metrics to assess its value and impact on outcomes. Most current systems tie radiologists' incentives and evaluations to RVU-based productivity metrics and peer-review-based quality metrics. In a new potential model, a radiologist's performance will have to increasingly depend on a number of parameters that define "value," beginning with peer review metrics that include referrer satisfaction and feedback from radiologists to the referring physician that evaluates the potency and validity of clinical information provided for a given study. These new dimensions of value measurement will directly impact the cascade of further medical management. We share our continued experience with this project that had two components: RESP (Referrer Evaluation System Pilot) and FRACI (Feedback from Radiologist Addressing Confounding Issues), which were introduced to the clinical radiology workflow in order to capture referrer-based and radiologist-based feedback on radiology reporting. We also share our insight into the principles of design thinking as applied in its planning and execution.
Decomposition-based transfer distance metric learning for image classification.
Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao
2014-09-01
Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. As a result, far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information, and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method.
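The following is a heavily simplified sketch of the decomposition idea, not the paper's optimization: base metrics are formed from the top eigenvectors of two synthetic source metrics, and a sparse, nonnegative set of combination coefficients is fit from a handful of target-task pairwise constraints by regressing pair labels on base-metric distances (a Lasso stands in for the actual DTDML objective). All data, dimensions, and the regularization strength are invented.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, n_pairs = 10, 60

def random_psd(dim):
    a = rng.normal(size=(dim, dim))
    return a @ a.T / dim

# Base metrics: rank-one matrices built from the top eigenvectors of two source metrics.
bases = []
for M in (random_psd(d), random_psd(d)):
    w, V = np.linalg.eigh(M)
    bases += [np.outer(V[:, k], V[:, k]) for k in range(d - 3, d)]

# Target-task side information: pairs (x, y) labeled 1 = dissimilar, 0 = similar.
X1, X2 = rng.normal(size=(n_pairs, d)), rng.normal(size=(n_pairs, d))
labels = rng.integers(0, 2, n_pairs)

# Feature k of a pair = its squared distance under base metric k.
feats = np.array([[(x - y) @ B @ (x - y) for B in bases] for x, y in zip(X1, X2)])
coef = Lasso(alpha=0.05, positive=True).fit(feats, labels).coef_   # sparse, nonnegative weights
target_metric = sum(c * B for c, B in zip(coef, bases))            # combined (PSD) target metric
print("nonzero base metrics:", int(np.count_nonzero(coef)))
```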
Predicting the Overall Spatial Quality of Automotive Audio Systems
NASA Astrophysics Data System (ADS)
Koya, Daisuke
The spatial quality of automotive audio systems is often compromised due to their less-than-ideal listening environments. Automotive audio systems need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with similar reliability to formal listening tests but take less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted to collect results to be compared against predictions by the model. Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, those that were proposed from the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those that are interaural cross-correlation (IACC) based, those that relate to localisation of the frontal audio scene, and those that account for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems. The resulting model predicts the overall spatial quality of 2- and 5-channel automotive audio systems with a cross-validation performance of R² = 0.85 and root-mean-square error (RMSE) = 11.03%.
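As an illustration of the IACC-based family of metrics mentioned above, the sketch below computes a basic interaural cross-correlation value: the maximum normalized cross-correlation of left- and right-ear signals over a ±1 ms lag range. The synthetic signals, sampling rate, and lag window are assumptions; the metrics actually used in the adapted QESTRAL model operate on binaural auralisations and are more elaborate.

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Maximum normalized cross-correlation of two ear signals over +/- max_lag_ms."""
    lags = int(fs * max_lag_ms / 1000)
    l = (left - left.mean()) / np.std(left)
    r = (right - right.mean()) / np.std(right)
    corr = [np.mean(l[max(0, -k):len(l) - max(0, k)] * r[max(0, k):len(r) - max(0, -k)])
            for k in range(-lags, lags + 1)]
    return max(corr)

fs = 48000
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.default_rng(0).normal(size=fs)
right = np.roll(left, 12)                      # roughly a 0.25 ms interaural delay
print(f"IACC = {iacc(left, right, fs):.2f}")
```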
New two-metric theory of gravity with prior geometry
NASA Technical Reports Server (NTRS)
Lightman, A. P.; Lee, D. L.
1973-01-01
A Lagrangian-based metric theory of gravity is developed with three adjustable constants and two tensor fields, one of which is a nondynamic 'flat space metric' eta. With a suitable cosmological model and a particular choice of the constants, the 'post-Newtonian limit' of the theory agrees, in the current epoch, with that of general relativity theory (GRT); consequently the theory is consistent with current gravitation experiments. Because of the role of eta, the gravitational 'constant' G is time-dependent and gravitational waves travel null geodesics of eta rather than the physical metric g. Gravitational waves possess six degrees of freedom. The general exact static spherically-symmetric solution is a four-parameter family. Future experimental tests of the theory are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrissey, Elmer; O'Donnell, James; Keane, Marcus
2004-03-29
Minimizing building life cycle energy consumption is becoming of paramount importance. Performance metrics tracking offers a clear and concise manner of relating design intent in a quantitative form. A methodology is discussed for storage and utilization of these performance metrics through an Industry Foundation Classes (IFC) instantiated Building Information Model (BIM). The paper focuses on storage of three sets of performance data from three distinct sources. An example of a performance metrics programming hierarchy is displayed for a heat pump and a solar array. Utilizing the sets of performance data, two discrete performance effectiveness ratios may be computed, thus offering an accurate method of quantitatively assessing building performance.
Conceptual model of comprehensive research metrics for improved human health and environment.
Engel-Cox, Jill A; Van Houten, Bennett; Phelps, Jerry; Rose, Shyanika W
2008-05-01
Federal, state, and private research agencies and organizations have faced increasing administrative and public demand for performance measurement. Historically, performance measurement predominantly consisted of near-term outputs measured through bibliometrics. The recent focus is on accountability for investment based on long-term outcomes. Developing measurable outcome-based metrics for research programs has been particularly challenging, because of difficulty linking research results to spatially and temporally distant outcomes. Our objective in this review is to build a logic model and associated metrics through which to measure the contribution of environmental health research programs to improvements in human health, the environment, and the economy. We used expert input and literature research on research impact assessment. With these sources, we developed a logic model that defines the components and linkages between extramural environmental health research grant programs and the outputs and outcomes related to health and social welfare, environmental quality and sustainability, economics, and quality of life. The logic model focuses on the environmental health research portfolio of the National Institute of Environmental Health Sciences (NIEHS) Division of Extramural Research and Training. The model delineates pathways for contributions by five types of institutional partners in the research process: NIEHS, other government (federal, state, and local) agencies, grantee institutions, business and industry, and community partners. The model is being applied to specific NIEHS research applications and the broader research community. We briefly discuss two examples and discuss the strengths and limits of outcome-based evaluation of research programs.
General relativity: An erfc metric
NASA Astrophysics Data System (ADS)
Plamondon, Réjean
2018-06-01
This paper proposes an erfc potential to incorporate into a symmetric metric. One key feature of this model is that it relies on the existence of an intrinsic physical constant σ, a star-specific proper length that scales all its surroundings. Based thereon, the new metric is used to study the space-time geometry of a static symmetric massive object, as seen from its interior. The analytical solutions to the Einstein equation are presented, highlighting the absence of singularities and discontinuities in such a model. The geodesics are derived in their second- and first-order differential formats. Recalling the slight impact of the new model on the classical general relativity tests in the solar system, a number of facts and open problems are briefly revisited on the basis of a heuristic definition of σ. Special attention is given to gravitational collapses and non-singular black holes.
Bessems, Jos G M; Paini, Alicia; Gajewska, Monika; Worth, Andrew
2017-12-01
Route-to-route extrapolation is a common part of human risk assessment. Data from oral animal toxicity studies are commonly used to assess the safety of various but specific human dermal exposure scenarios. Using theoretical examples of various user scenarios, it was concluded that delineation of a generally applicable human dermal limit value is not a practicable approach, due to the wide variety of possible human exposure scenarios, including its consequences for internal exposure. This paper uses physiologically based kinetic (PBK) modelling approaches to predict animal as well as human internal exposure dose metrics and for the first time, introduces the concept of Margin of Internal Exposure (MOIE) based on these internal dose metrics. Caffeine was chosen to illustrate this approach. It is a substance that is often found in cosmetics and for which oral repeated dose toxicity data were available. A rat PBK model was constructed in order to convert the oral NOAEL to rat internal exposure dose metrics, i.e. the area under the curve (AUC) and the maximum concentration (Cmax), both in plasma. A human oral PBK model was constructed and calibrated using human volunteer data and adapted to accommodate dermal absorption following human dermal exposure. Use of the MOIE approach based on internal dose metrics predictions provides excellent opportunities to investigate the consequences of variations in human dermal exposure scenarios. It can accommodate within-day variation in plasma concentrations and is scientifically more robust than assuming just an exposure in mg/kg bw/day. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
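To show how internal dose metrics feed a MOIE, here is a toy one-compartment kinetic sketch rather than the paper's rat or human PBK models: it integrates a simple first-order absorption/elimination model, extracts Cmax and AUC, and forms MOIE as the animal dose metric at the NOAEL divided by the predicted human dose metric. Every parameter value and dose below is invented.

```python
import numpy as np
from scipy.integrate import odeint, trapezoid

def one_compartment(dose_mg_kg, ka, ke, vd_l_kg, hours=24, n=500):
    """Return (Cmax, AUC over 0-`hours`) for a first-order absorption/elimination model."""
    t = np.linspace(0, hours, n)
    def rhs(y, t):
        gut, plasma = y
        return [-ka * gut, ka * gut / vd_l_kg - ke * plasma]
    conc = odeint(rhs, [dose_mg_kg, 0.0], t)[:, 1]      # plasma concentration, mg/L
    return conc.max(), trapezoid(conc, t)

# Hypothetical "animal at NOAEL" vs "human dermal scenario" runs; all values invented.
rat_cmax, rat_auc = one_compartment(dose_mg_kg=50.0, ka=1.5, ke=0.6, vd_l_kg=0.8)
hum_cmax, hum_auc = one_compartment(dose_mg_kg=2.0, ka=0.3, ke=0.12, vd_l_kg=0.6)
print(f"MOIE (Cmax) = {rat_cmax / hum_cmax:.1f},  MOIE (AUC) = {rat_auc / hum_auc:.1f}")
```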
Investigation into Text Classification With Kernel Based Schemes
2010-03-01
Documents are represented as term-document matrices; common evaluation metrics and the indexing capabilities of the Text to Matrix Generator (TMG) toolbox are introduced.
A Metric on Phylogenetic Tree Shapes
Plazzotta, G.
2018-01-01
The shapes of evolutionary trees are influenced by the nature of the evolutionary process but comparisons of trees from different processes are hindered by the challenge of completely describing tree shape. We present a full characterization of the shapes of rooted branching trees in a form that lends itself to natural tree comparisons. We use this characterization to define a metric, in the sense of a true distance function, on tree shapes. The metric distinguishes trees from random models known to produce different tree shapes. It separates trees derived from tropical versus USA influenza A sequences, which reflect the differing epidemiology of tropical and seasonal flu. We describe several metrics based on the same core characterization, and illustrate how to extend the metric to incorporate trees’ branch lengths or other features such as overall imbalance. Our approach allows us to construct addition and multiplication on trees, and to create a convex metric on tree shapes which formally allows computation of average tree shapes. PMID:28472435
Zhen, Chen; Brissette, Ian F.; Ruff, Ryan R.
2014-01-01
The obesity epidemic and excessive consumption of sugar-sweetened beverages have led to proposals of economics-based interventions to promote healthy eating in the United States. Targeted food and beverage taxes and subsidies are prominent examples of such potential intervention strategies. This paper examines the differential effects of taxing sugar-sweetened beverages by calories and by ounces on beverage demand. To properly measure the extent of substitution and complementarity between beverage products, we developed a fully modified distance metric model of differentiated product demand that endogenizes the cross-price effects. We illustrated the proposed methodology in a linear approximate almost ideal demand system, although other flexible demand systems can also be used. In the empirical application using supermarket scanner data, the product-level demand model consists of 178 beverage products with combined market share of over 90%. The novel demand model outperformed the conventional distance metric model in non-nested model comparison tests and in terms of the economic significance of model predictions. In the fully modified model, a calorie-based beverage tax was estimated to cost $1.40 less in compensating variation than an ounce-based tax per 3,500 beverage calories reduced. This difference in welfare cost estimates between two tax strategies is more than three times as much as the difference estimated by the conventional distance metric model. If applied to products purchased from all sources, a 0.04-cent per kcal tax on sugar-sweetened beverages is predicted to reduce annual per capita beverage intake by 5,800 kcal. PMID:25414517
Selection Metric for Photovoltaic Materials Screening Based on Detailed-Balance Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blank, Beatrix; Kirchartz, Thomas; Lany, Stephan
The success of recently discovered absorber materials for photovoltaic applications has been generating increasing interest in systematic materials screening over the last years. However, the key for a successful materials screening is a suitable selection metric that goes beyond the Shockley-Queisser theory that determines the thermodynamic efficiency limit of an absorber material solely by its band-gap energy. Here, we develop a selection metric to quantify the potential photovoltaic efficiency of a material. Our approach is compatible with detailed balance and applicable in computational and experimental materials screening. We use the complex refractive index to calculate radiative and nonradiative efficiency limits and the respective optimal thickness in the high mobility limit. We also compare our model to the widely applied selection metric by Yu and Zunger [Phys. Rev. Lett. 108, 068701 (2012)] with respect to their dependence on thickness, internal luminescence quantum efficiency, and refractive index. Finally, the model is applied to complex refractive indices calculated via electronic structure theory.
Selection Metric for Photovoltaic Materials Screening Based on Detailed-Balance Analysis
Blank, Beatrix; Kirchartz, Thomas; Lany, Stephan; ...
2017-08-31
The success of recently discovered absorber materials for photovoltaic applications has been generating increasing interest in systematic materials screening over the last years. However, the key for a successful materials screening is a suitable selection metric that goes beyond the Shockley-Queisser theory that determines the thermodynamic efficiency limit of an absorber material solely by its band-gap energy. Here, we develop a selection metric to quantify the potential photovoltaic efficiency of a material. Our approach is compatible with detailed balance and applicable in computational and experimental materials screening. We use the complex refractive index to calculate radiative and nonradiative efficiency limits and the respective optimal thickness in the high mobility limit. We also compare our model to the widely applied selection metric by Yu and Zunger [Phys. Rev. Lett. 108, 068701 (2012)] with respect to their dependence on thickness, internal luminescence quantum efficiency, and refractive index. Finally, the model is applied to complex refractive indices calculated via electronic structure theory.
Evaluation of diffusion kurtosis imaging in ex vivo hypomyelinated mouse brains.
Kelm, Nathaniel D; West, Kathryn L; Carson, Robert P; Gochberg, Daniel F; Ess, Kevin C; Does, Mark D
2016-01-01
Diffusion tensor imaging (DTI), diffusion kurtosis imaging (DKI), and DKI-derived white matter tract integrity metrics (WMTI) were experimentally evaluated ex vivo through comparisons to histological measurements and established magnetic resonance imaging (MRI) measures of myelin in two knockout mouse models with varying degrees of hypomyelination. DKI metrics of mean and radial kurtosis were found to be better indicators of myelin content than conventional DTI metrics. The biophysical WMTI model based on the DKI framework reported on axon water fraction with good accuracy in cases with near normal axon density, but did not provide additional specificity to myelination. Overall, DKI provided additional information regarding white matter microstructure compared with DTI, making it an attractive method for future assessments of white matter development and pathology. Copyright © 2015 Elsevier Inc. All rights reserved.
Laurent, Olivier; Wu, Jun; Li, Lianfa; Chung, Judith; Bartell, Scott
2013-02-17
Exposure to air pollution is frequently associated with reductions in birth weight but results of available studies vary widely, possibly in part because of differences in air pollution metrics. Further insight is needed to identify the air pollution metrics most strongly and consistently associated with birth weight. We used a hospital-based obstetric database of more than 70,000 births to study the relationships between air pollution and the risk of low birth weight (LBW, <2,500 g), as well as birth weight as a continuous variable, in term-born infants. Complementary metrics capturing different aspects of air pollution were used (measurements from ambient monitoring stations, predictions from land use regression models and from a Gaussian dispersion model, traffic density, and proximity to roads). Associations between air pollution metrics and birth outcomes were investigated using generalized additive models, adjusting for maternal age, parity, race/ethnicity, insurance status, poverty, gestational age and sex of the infants. Increased risks of LBW were associated with ambient O3 concentrations as measured by monitoring stations, as well as traffic density and proximity to major roadways. LBW was not significantly associated with other air pollution metrics, except that a decreased risk was associated with ambient NO2 concentrations as measured by monitoring stations. When birth weight was analyzed as a continuous variable, small increases in mean birth weight were associated with most air pollution metrics (<40 g per inter-quartile range in air pollution metrics). No such increase was observed for traffic density or proximity to major roadways, and a significant decrease in mean birth weight was associated with ambient O3 concentrations. We found contrasting results according to the different air pollution metrics examined. Unmeasured confounders and/or measurement errors might have produced spurious positive associations between birth weight and some air pollution metrics. Despite this, ambient O3 was associated with a decrement in mean birth weight and significant increases in the risk of LBW were associated with traffic density, proximity to roads and ambient O3. This suggests that in our study population, these air pollution metrics are more likely related to increased risks of LBW than the other metrics we studied. Further studies are necessary to assess the consistency of such patterns across populations.
2013-01-01
Background Exposure to air pollution is frequently associated with reductions in birth weight but results of available studies vary widely, possibly in part because of differences in air pollution metrics. Further insight is needed to identify the air pollution metrics most strongly and consistently associated with birth weight. Methods We used a hospital-based obstetric database of more than 70,000 births to study the relationships between air pollution and the risk of low birth weight (LBW, <2,500 g), as well as birth weight as a continuous variable, in term-born infants. Complementary metrics capturing different aspects of air pollution were used (measurements from ambient monitoring stations, predictions from land use regression models and from a Gaussian dispersion model, traffic density, and proximity to roads). Associations between air pollution metrics and birth outcomes were investigated using generalized additive models, adjusting for maternal age, parity, race/ethnicity, insurance status, poverty, gestational age and sex of the infants. Results Increased risks of LBW were associated with ambient O3 concentrations as measured by monitoring stations, as well as traffic density and proximity to major roadways. LBW was not significantly associated with other air pollution metrics, except that a decreased risk was associated with ambient NO2 concentrations as measured by monitoring stations. When birth weight was analyzed as a continuous variable, small increases in mean birth weight were associated with most air pollution metrics (<40 g per inter-quartile range in air pollution metrics). No such increase was observed for traffic density or proximity to major roadways, and a significant decrease in mean birth weight was associated with ambient O3 concentrations. Conclusions We found contrasting results according to the different air pollution metrics examined. Unmeasured confounders and/or measurement errors might have produced spurious positive associations between birth weight and some air pollution metrics. Despite this, ambient O3 was associated with a decrement in mean birth weight and significant increases in the risk of LBW were associated with traffic density, proximity to roads and ambient O3. This suggests that in our study population, these air pollution metrics are more likely related to increased risks of LBW than the other metrics we studied. Further studies are necessary to assess the consistency of such patterns across populations. PMID:23413962
[Predictive model based multimetric index of macroinvertebrates for river health assessment].
Chen, Kai; Yu, Hai Yan; Zhang, Ji Wei; Wang, Bei Xin; Chen, Qiu Wen
2017-06-18
Improving the stability of the index of biotic integrity (IBI; i.e., multi-metric indices, MMI) across temporal and spatial scales is one of the most important issues in water ecosystem integrity bioassessment and water environment management. Using datasets of field-based macroinvertebrate and physicochemical variables and GIS-based natural predictors (e.g., geomorphology and climate) and land use variables collected at 227 river sites from 2004 to 2011 across the Zhejiang Province, China, we used random forests (RF) to adjust the effects of natural variations at temporal and spatial scales on macroinvertebrate metrics. We then developed natural-variation-adjusted (predictive) and unadjusted (null) MMIs and compared performance between them. The core metrics selected for the predictive and null MMIs were different from each other, and the natural variation within core metrics of the predictive MMI explained by RF models ranged between 11.4% and 61.2%. The predictive MMI was more precise and accurate, but less responsive and sensitive than the null MMI. The multivariate nearest-neighbor test determined that 9 test sites and 1 most degraded site were flagged outside of the environmental space of the reference site network. We found that the combination of the predictive MMI (developed using the predictive model) and the nearest-neighbor test performed best and decreased the risks of inferring type I (designating a water body as being in poor biological condition, when it was actually in good condition) and type II (designating a water body as being in good biological condition, when it was actually in poor condition) errors. Our results provide an effective method to improve the stability and performance of the index of biotic integrity.
Super Ensemble-based Aviation Turbulence Guidance (SEATG) for Air Traffic Management (ATM)
NASA Astrophysics Data System (ADS)
Kim, Jung-Hoon; Chan, William; Sridhar, Banavar; Sharman, Robert
2014-05-01
Super Ensemble (an ensemble of ten turbulence metrics from time-lagged ensemble members of weather forecast data)-based Aviation Turbulence Guidance (SEATG) is developed using the Weather Research and Forecasting (WRF) model and in-situ eddy dissipation rate (EDR) observations from instrumented commercial aircraft over the contiguous United States. SEATG is a sequence of five procedures including weather modeling, calculating turbulence metrics, mapping EDR-scale, evaluating metrics, and producing the final SEATG forecast. It uses a methodology similar to the operational Graphic Turbulence Guidance (GTG), with three major improvements. First, SEATG uses a higher-resolution (3-km) WRF model to capture cloud-resolving scale phenomena. Second, SEATG computes turbulence metrics for multiple forecasts that are combined at the same valid time, resulting in a time-lagged ensemble of multiple turbulence metrics. Third, SEATG provides both deterministic and probabilistic turbulence forecasts to take into account weather uncertainties and user demands. It is found that the SEATG forecasts match well with observed radar reflectivity along a surface front as well as convectively induced turbulence outside the clouds on 7-8 Sep 2012. Overall, the performance of deterministic SEATG against the observed EDR data during this period is superior to that of any single turbulence metric. Finally, probabilistic SEATG is used as an example application of turbulence forecasting for air-traffic management. In this study, a simple Wind-Optimal Route (WOR) passing through areas of turbulence potential in the probabilistic SEATG and a Lateral Turbulence Avoidance Route (LTAR) that takes the SEATG into account are calculated at z = 35,000 ft (z = 12 km) from Los Angeles to John F. Kennedy international airports. As a result, the WOR takes a total of 239 minutes, including 16 minutes within SEATG areas with 40% moderate-turbulence potential, while the LTAR takes a total of 252 minutes of travel time, so that about 5% more fuel would be consumed to entirely avoid the moderate SEATG regions.
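A minimal sketch of the ensemble idea behind SEATG (not the operational code): several EDR-mapped turbulence metrics from time-lagged forecasts are combined into a deterministic (ensemble mean) and a probabilistic (threshold exceedance) product. The ensemble size, grid shape, and the moderate-turbulence threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, ny, nx = 10, 50, 60          # hypothetical ensemble size and forecast grid

# Stand-in for ten EDR-mapped turbulence metrics from time-lagged forecasts
edr_members = rng.gamma(shape=2.0, scale=0.08, size=(n_members, ny, nx))

moderate_edr = 0.22                      # assumed "moderate turbulence" EDR threshold

deterministic = edr_members.mean(axis=0)                    # ensemble-mean EDR
probabilistic = (edr_members >= moderate_edr).mean(axis=0)  # exceedance probability

print("max ensemble-mean EDR:", round(float(deterministic.max()), 3))
print("area fraction with >=40% moderate-turbulence probability:",
      round(float((probabilistic >= 0.4).mean()), 3))
```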
NASA Astrophysics Data System (ADS)
Li, T.; Wang, Z.; Peng, J.
2018-04-01
Aboveground biomass (AGB) estimation is critical for quantifying carbon stocks and essential for evaluating the carbon cycle. In recent years, airborne LiDAR has shown great ability for high-precision AGB estimation. Most studies estimate AGB from feature metrics extracted from the canopy height distribution of the point cloud, which is calculated based on a precise digital terrain model (DTM). However, if forest canopy density is high, the probability of the LiDAR signal penetrating the canopy is lower, so there are not enough ground points to establish the DTM. The distribution of forest canopy height is then imprecise, and critical feature metrics that correlate strongly with biomass, such as percentiles, maxima, means, and standard deviations of the canopy point cloud, can hardly be extracted correctly. In order to address this issue, we propose a strategy of first reconstructing the LiDAR feature metrics through an Auto-Encoder neural network and then using the reconstructed feature metrics to estimate AGB. To assess the prediction ability of the reconstructed feature metrics, both original and reconstructed feature metrics were regressed against field-observed AGB using multiple stepwise regression (MS) and partial least squares regression (PLS), respectively. The results showed that the estimation models using reconstructed feature metrics improved R2 by 5.44% and 18.09%, decreased RMSE by 10.06% and 22.13%, and reduced RMSEcv by 10.00% and 21.70% for AGB, respectively. Therefore, reconstructing LiDAR point feature metrics has potential for addressing the AGB estimation challenge in dense canopy areas.
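A minimal sketch (not the authors' implementation) of the general idea: a small autoencoder-style network reconstructs degraded canopy-height feature metrics before they are used in an AGB regression. All arrays are synthetic placeholders, and sklearn's MLPRegressor is used here as a stand-in for a dedicated Auto-Encoder; in practice the metrics (percentiles, mean, standard deviation, etc.) would come from the normalized point cloud of each plot.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_plots, n_metrics = 200, 8
clean = rng.normal(size=(n_plots, n_metrics))                # "true" feature metrics
degraded = clean + rng.normal(scale=0.5, size=clean.shape)   # dense-canopy distortion
agb = clean @ rng.uniform(5, 20, n_metrics) + rng.normal(scale=5, size=n_plots)

X_tr, X_te, c_tr, c_te, y_tr, y_te = train_test_split(
    degraded, clean, agb, test_size=0.3, random_state=0)

# Denoising-autoencoder-style mapping: degraded metrics -> clean metrics
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
ae.fit(X_tr, c_tr)

# Compare AGB regressions built on degraded vs. reconstructed metrics
reg_degraded = LinearRegression().fit(X_tr, y_tr)
reg_recon = LinearRegression().fit(ae.predict(X_tr), y_tr)
print("R^2 with degraded metrics:     ", round(reg_degraded.score(X_te, y_te), 3))
print("R^2 with reconstructed metrics:", round(reg_recon.score(ae.predict(X_te), y_te), 3))
```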
Regime-Based Evaluation of Cloudiness in CMIP5 Models
NASA Technical Reports Server (NTRS)
Jin, Daeho; Oraiopoulos, Lazaros; Lee, Dong Min
2016-01-01
The concept of Cloud Regimes (CRs) is used to develop a framework for evaluating the cloudiness of 12 models from the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Reference CRs come from existing global International Satellite Cloud Climatology Project (ISCCP) weather states. The evaluation is made possible by the implementation in several CMIP5 models of the ISCCP simulator, which generates daily joint histograms of cloud optical thickness and cloud top pressure for each gridcell. Model performance is assessed with several metrics such as CR global cloud fraction (CF), CR relative frequency of occurrence (RFO), their product (long-term average total cloud amount [TCA]), cross-correlations of CR RFO maps, and a metric of resemblance between model and ISCCP CRs. In terms of CR global RFO, arguably the most fundamental metric, the models perform unsatisfactorily overall, except for CRs representing thick storm clouds. Because model CR CF is internally constrained by our method, RFO discrepancies also yield substantial TCA errors. Our findings support previous studies showing that CMIP5 models underestimate cloudiness. The multi-model mean performs well in matching observed RFO maps for many CRs, but is not the best for this or other metrics. When overall performance across all CRs is assessed, some models, despite their shortcomings, apparently outperform Moderate Resolution Imaging Spectroradiometer (MODIS) cloud observations evaluated against ISCCP as if they were another model output. Lastly, cloud simulation performance is contrasted with each model's equilibrium climate sensitivity (ECS) in order to gain insight on whether good cloud simulation pairs with particular values of this parameter.
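A minimal sketch of the regime-based bookkeeping described above, using synthetic regime assignments; real inputs would be daily ISCCP-simulator histograms classified into cloud regimes (CRs). The number of regimes, days, and gridcells are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_days, n_cells, n_regimes = 365, 1000, 8

regime = rng.integers(0, n_regimes, size=(n_days, n_cells))   # daily CR assignment per gridcell
cloud_fraction = rng.uniform(0, 1, size=(n_days, n_cells))    # daily gridcell cloud fraction

# Relative frequency of occurrence and mean cloud fraction per CR
rfo = np.array([(regime == k).mean() for k in range(n_regimes)])
cf = np.array([cloud_fraction[regime == k].mean() for k in range(n_regimes)])
tca = float(np.sum(rfo * cf))   # long-term average total cloud amount

print("RFO:", np.round(rfo, 3))
print("CR mean CF:", np.round(cf, 3))
print("Total cloud amount:", round(tca, 3))
```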
Predicting the natural flow regime: Models for assessing hydrological alteration in streams
Carlisle, D.M.; Falcone, J.; Wolock, D.M.; Meador, M.R.; Norris, R.H.
2009-01-01
Understanding the extent to which natural streamflow characteristics have been altered is an important consideration for ecological assessments of streams. Assessing hydrologic condition requires that we quantify the attributes of the flow regime that would be expected in the absence of anthropogenic modifications. The objective of this study was to evaluate whether selected streamflow characteristics could be predicted at regional and national scales using geospatial data. Long-term, gaged river basins distributed throughout the contiguous US that had streamflow characteristics representing least disturbed or near pristine conditions were identified. Thirteen metrics of the magnitude, frequency, duration, timing and rate of change of streamflow were calculated using a 20-50 year period of record for each site. We used random forests (RF), a robust statistical modelling approach, to develop models that predicted the value for each streamflow metric using natural watershed characteristics. We compared the performance (i.e. bias and precision) of national- and regional-scale predictive models to that of models based on landscape classifications, including major river basins, ecoregions and hydrologic landscape regions (HLR). For all hydrologic metrics, landscape stratification models produced estimates that were less biased and more precise than a null model that accounted for no natural variability. Predictive models at the national and regional scale performed equally well, and substantially improved predictions of all hydrologic metrics relative to landscape stratification models. Prediction error rates ranged from 15 to 40%, but were 25% for most metrics. We selected three gaged, non-reference sites to illustrate how predictive models could be used to assess hydrologic condition. These examples show how the models accurately estimate predisturbance conditions and are sensitive to changes in streamflow variability associated with long-term land-use change. We also demonstrate how the models can be applied to predict expected natural flow characteristics at ungaged sites. © 2009 John Wiley & Sons, Ltd.
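A minimal sketch of the modelling strategy described above: one streamflow metric is predicted from natural watershed characteristics with a random forest and compared with a "null" model that predicts the overall mean. The predictor names and the synthetic data are illustrative placeholders, not the study's variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n_sites = 300
X = np.column_stack([
    rng.uniform(100, 3000, n_sites),    # e.g. drainage area (km^2)
    rng.uniform(200, 2500, n_sites),    # mean annual precipitation (mm)
    rng.uniform(0, 30, n_sites),        # basin slope (%)
])
flow_metric = 0.002 * X[:, 0] + 0.004 * X[:, 1] + rng.normal(scale=1.0, size=n_sites)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
pred = cross_val_predict(rf, X, flow_metric, cv=10)   # out-of-sample RF predictions

null_error = np.abs(flow_metric - flow_metric.mean())   # null model: no natural variability explained
rf_error = np.abs(flow_metric - pred)
print("median |error|, null model:    ", round(float(np.median(null_error)), 2))
print("median |error|, random forest: ", round(float(np.median(rf_error)), 2))
```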
The Massachusetts Community College Performance-Based Funding Formula: A New Model for New England?
ERIC Educational Resources Information Center
Salomon-Fernandez, Yves
2014-01-01
The Massachusetts community college system is entering a second year with funding for each of its 15 schools determined using a new performance-based formula. Under the new model, 50% of each college's allocation is based on performance on metrics related to enrollment and student success, with added incentives for "at-risk" students…
Physics in space-time with scale-dependent metrics
NASA Astrophysics Data System (ADS)
Balankin, Alexander S.
2013-10-01
We construct the three-dimensional space R^3_γ with a scale-dependent metric and the corresponding Minkowski space-time M^4_{γ,β} with scale-dependent fractal (D_H) and spectral (D_S) dimensions. The local derivatives based on scale-dependent metrics are defined and differential vector calculus in R^3_γ is developed. We state that M^4_{γ,β} provides a unified phenomenological framework for the dimensional flow observed in quite different models of quantum gravity. Nevertheless, the main attention is focused on the special case of flat space-time M^4_{1/3,1} with a scale-dependent Cantor-dust-like distribution of admissible states, such that D_H increases from D_H = 2 at scales ≪ ℓ_0 to D_H = 4 in the infrared limit ≫ ℓ_0, where ℓ_0 is the characteristic length (e.g. the Planck length, or the characteristic size of multi-fractal features in a heterogeneous medium), whereas D_S ≡ 4 at all scales. Possible applications of the approach based on the scale-dependent metric to systems of different nature are briefly discussed.
Assessment of six dissimilarity metrics for climate analogues
NASA Astrophysics Data System (ADS)
Grenier, Patrick; Parent, Annie-Claude; Huard, David; Anctil, François; Chaumont, Diane
2013-04-01
Spatial analogue techniques consist of identifying locations whose recent-past climate is similar in some aspects to the future climate anticipated at a reference location. When identifying analogues, one key step is the quantification of the dissimilarity between two climates separated in time and space, which involves the choice of a metric. In this communication, spatial analogues and their usefulness are briefly discussed. Next, six metrics are presented (the standardized Euclidean distance, the Kolmogorov-Smirnov statistic, the nearest-neighbor distance, the Zech-Aslan energy statistic, the Friedman-Rafsky runs statistic and the Kullback-Leibler divergence), along with a set of criteria used for their assessment. The related case study involves the use of numerical simulations performed with the Canadian Regional Climate Model (CRCM-v4.2.3), from which three annual indicators (total precipitation, heating degree-days and cooling degree-days) are calculated over 30-year periods (1971-2000 and 2041-2070). Results indicate that the six metrics identify comparable analogue regions at a relatively large scale, but best analogues may differ substantially. For best analogues, it is also shown that the uncertainty stemming from the metric choice generally does not exceed that stemming from the simulation or model choice. A synthesis of the advantages and drawbacks of each metric is finally presented, in which the Zech-Aslan energy statistic stands out as the most recommended metric for analogue studies, whereas the Friedman-Rafsky runs statistic is the least recommended, based on this case study.
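A minimal sketch of two of the six dissimilarity metrics named above (the standardized Euclidean distance and the Kolmogorov-Smirnov statistic), applied to 30-year samples of a single annual indicator at a reference location and a candidate analogue. The data and the variance-scaling choice are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from scipy.stats import ks_2samp
from scipy.spatial.distance import seuclidean

rng = np.random.default_rng(4)
ref_future = rng.normal(loc=950, scale=80, size=30)      # e.g. total precipitation (mm), 2041-2070
candidate_past = rng.normal(loc=900, scale=90, size=30)  # same indicator, 1971-2000, candidate site

# Standardized Euclidean distance between climatological means,
# scaled by the pooled interannual variance (single indicator here)
var = np.var(np.concatenate([ref_future, candidate_past]), ddof=1)
d_seucl = seuclidean([ref_future.mean()], [candidate_past.mean()], [var])

# Kolmogorov-Smirnov statistic between the two 30-year distributions
d_ks = ks_2samp(ref_future, candidate_past).statistic

print("standardized Euclidean distance:", round(float(d_seucl), 3))
print("KS statistic:", round(float(d_ks), 3))
```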
NASA Astrophysics Data System (ADS)
Khobragade, P.; Fan, Jiahua; Rupcich, Franco; Crotty, Dominic J.; Gilat Schmidt, Taly
2016-03-01
This study quantitatively evaluated the performance of the exponential transformation of the free-response operating characteristic curve (EFROC) metric, with the Channelized Hotelling Observer (CHO) as a reference. The CHO has been used for image quality assessment of reconstruction algorithms and imaging systems, and it is often applied to cases in which the signal location is known. The CHO also requires a large set of images to estimate the covariance matrix. For clinical applications, this assumption and requirement may be unrealistic. The newly developed location-unknown EFROC detectability metric is estimated from the confidence scores reported by a model observer. Unlike the CHO, EFROC does not require a channelization step and is a non-parametric detectability metric. There are few quantitative studies available on application of the EFROC metric, most of which are based on simulation data. This study investigated the EFROC metric using experimental CT data. A phantom with four low-contrast objects (3 mm at 14 HU, 5 mm at 7 HU, 7 mm at 5 HU, and 10 mm at 3 HU) was scanned at dose levels ranging from 25 mAs to 270 mAs and reconstructed using filtered backprojection. The area under the curve values for CHO (AUC) and EFROC (AFE) were plotted with respect to different dose levels. The number of images required to estimate the non-parametric AFE metric was calculated for varying tasks and found to be less than the number of images required for parametric CHO estimation. The AFE metric was found to be more sensitive to changes in dose than the CHO metric. This increased sensitivity and the assumption of unknown signal location may be useful for investigating and optimizing CT imaging methods. Future work is required to validate the AFE metric against human observers.
Metrics for evaluating performance and uncertainty of Bayesian network models
Bruce G. Marcot
2012-01-01
This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...
Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.
2012-01-01
The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric to conduct sensor placement to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to the expected variability using model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.
Schmidt, J.M.; Light, T.D.; Drew, L.J.; Wilson, Frederic H.; Miller, M.L.; Saltus, R.W.
2007-01-01
The Bay Resource Management Plan (RMP) area in southwestern Alaska, north and northeast of Bristol Bay contains significant potential for undiscovered locatable mineral resources of base and precious metals, in addition to metallic mineral deposits that are already known. A quantitative probabilistic assessment has identified 24 tracts of land that are permissive for 17 mineral deposit model types likely to be explored for within the next 15 years in this region. Commodities we discuss in this report that have potential to occur in the Bay RMP area are Ag, Au, Cr, Cu, Fe, Hg, Mo, Pb, Sn, W, Zn, and platinum-group elements. Geoscience data for the region are sufficient to make quantitative estimates of the number of undiscovered deposits only for porphyry copper, epithermal vein, copper skarn, iron skarn, hot-spring mercury, placer gold, and placer platinum-deposit models. A description of a group of shallow- to intermediate-level intrusion-related gold deposits is combined with grade and tonnage data from 13 deposits of this type to provide a quantitative estimate of undiscovered deposits of this new type. We estimate that significant resources of Ag, Au, Cu, Fe, Hg, Mo, Pb, and Pt occur in the Bay Resource Management Plan area in these deposit types. At the 10th percentile probability level, the Bay RMP area is estimated to contain 10,067 metric tons silver, 1,485 metric tons gold, 12.66 million metric tons copper, 560 million metric tons iron, 8,100 metric tons mercury, 500,000 metric tons molybdenum, 150 metric tons lead, and 17 metric tons of platinum in undiscovered deposits of the eight quantified deposit types. At the 90th percentile probability level, the Bay RMP area is estimated to contain 89 metric tons silver, 14 metric tons gold, 911,215 metric tons copper, 330,000 metric tons iron, 1 metric ton mercury, 8,600 metric tons molybdenum and 1 metric ton platinum in undiscovered deposits of the eight deposit types. Other commodities, which may occur in the Bay RMP area, include Cr, Sn, W, Zn, and other platinum-group elements such as Ir, Os, and Pd. We define 13 permissive tracts for 9 additional deposit model types. These are: Besshi- and Cyprus, and Kuroko-volcanogenic massive sulfides, hot spring gold, low sulfide gold veins, Mississippi-Valley Pb-Zn, tin greisen, zinc skarn and Alaskan-type zoned ultramafic platinum-group element deposits. Resources in undiscovered deposits of these nine types have not been quantified, and would be in addition to those in known deposits and the undiscovered resources listed above. Additional mineral resources also may occur in the Bay RMP area in deposit types, which were not considered here.
Threat driven modeling framework using petri nets for e-learning system.
Khamparia, Aditya; Pandey, Babita
2016-01-01
Vulnerabilities at various levels are the main cause of security risks in e-learning systems. This paper presents a modified threat-driven modeling framework to identify, after risk assessment, the threats that require mitigation and how to mitigate them. Aspect-oriented stochastic Petri nets are used to model those threat mitigations. The paper includes security metrics based on vulnerabilities present in the e-learning system. The Common Vulnerability Scoring System, designed to provide a normalized method for rating vulnerabilities, is used as the basis for metric definitions and calculations. A case study is also presented, showing the need for and feasibility of using aspect-oriented stochastic Petri net models for threat modeling, which improves the reliability, consistency and robustness of the e-learning system.
NASA Astrophysics Data System (ADS)
Pôças, Isabel; Nogueira, António; Paço, Teresa A.; Sousa, Adélia; Valente, Fernanda; Silvestre, José; Andrade, José A.; Santos, Francisco L.; Pereira, Luís S.; Allen, Richard G.
2013-04-01
Satellite-based surface energy balance models have been successfully applied to estimate and map evapotranspiration (ET). The METRIC™ model, Mapping EvapoTranspiration at high Resolution using Internalized Calibration, is one such model. METRIC has been widely used over an extensive range of vegetation types and applications, mostly focusing on annual crops. In the current study, the single-layer-blended METRIC model was applied to Landsat5 TM and Landsat7 ETM+ images to produce estimates of evapotranspiration (ET) in a super intensive olive orchard in Southern Portugal. In sparse woody canopies such as olive orchards, some adjustments in METRIC application related to the estimation of vegetation temperature and of momentum roughness length and sensible heat flux (H) for tall vegetation must be considered. To minimize biases in H estimates due to uncertainties in the definition of momentum roughness length, the Perrier function based on leaf area index and tree canopy architecture, associated with an adjusted estimation of crop height, was used to obtain momentum roughness length estimates. Additionally, to minimize the biases in surface temperature simulations due to soil and shadow effects, the computation of radiometric temperature considered a three-source condition, where Ts = fc·Tc + fshadow·Tshadow + fsunlit·Tsunlit. As such, the surface temperature (Ts), derived from the thermal band of the Landsat images, integrates the temperature of the canopy (Tc), the temperature of the shaded ground surface (Tshadow), and the temperature of the sunlit ground surface (Tsunlit), according to the relative fraction of vegetation (fc), shadow (fshadow) and sunlit (fsunlit) ground surface, respectively. As the sunlit canopies are the primary source of energy exchange, the effective temperature for the canopy was estimated by solving the three-source condition equation for Tc. To evaluate METRIC's performance in estimating ET over the olive grove, several parameters derived from the algorithm were tested against data collected in the field, including eddy covariance ET, surface temperature over the canopy and soil temperature in shaded and sunlit conditions. Additionally, the results were also compared with results published in the literature. The information obtained so far revealed very interesting perspectives for the use of METRIC in the estimation and mapping of ET in super intensive olive orchards. Therefore, this approach might constitute a useful tool towards the improvement of the efficiency of irrigation water management in this crop. The study described is still under way, and thus further applications of the METRIC algorithm to a larger number of images and to olive groves with different tree densities are planned.
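A minimal sketch of solving the three-source temperature relation quoted above, Ts = fc·Tc + fshadow·Tshadow + fsunlit·Tsunlit, for the canopy temperature Tc. The numerical values are illustrative, not measurements from the study.

```python
# Radiometric surface temperature from the Landsat thermal band (K) and
# assumed fractions of canopy, shaded ground, and sunlit ground in the pixel
Ts = 305.0
fc, fshadow, fsunlit = 0.45, 0.20, 0.35
Tshadow, Tsunlit = 298.0, 318.0          # assumed ground temperatures (K)

# Solve Ts = fc*Tc + fshadow*Tshadow + fsunlit*Tsunlit for Tc
Tc = (Ts - fshadow * Tshadow - fsunlit * Tsunlit) / fc
print("canopy temperature Tc (K):", round(Tc, 2))   # 298.0 K with these numbers
```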
Anderson, Donald D; Kilburg, Anthony T; Thomas, Thaddeus P; Marsh, J Lawrence
2016-01-01
Post-traumatic osteoarthritis (PTOA) is common after intra-articular fractures of the tibial plafond. An objective CT-based measure of fracture severity was previously found to reliably predict whether PTOA developed following surgical treatment of such fractures. However, the extended time required to obtain the fracture energy metric and its reliance upon an intact contralateral limb CT limited its clinical applicability. The objective of this study was to establish an expedited fracture severity metric that provided comparable PTOA predictive ability without the prior limitations. An expedited fracture severity metric was computed from the CT scans of 30 tibial plafond fractures using textural analysis to quantify disorder in CT images. The expedited method utilized an intact surrogate model to enable severity assessment without requiring a contralateral limb CT. Agreement between the expedited fracture severity metric and the Kellgren-Lawrence (KL) radiographic OA score at two-year follow-up was assessed using concordance. The ability of the metric to differentiate between patients that did or did not develop PTOA was assessed using the Wilcoxon Ranked Sum test. The expedited severity metric agreed well (75.2% concordance) with the KL scores. The initial fracture severity of cases that developed PTOA differed significantly (p = 0.004) from those that did not. Receiver operating characteristic analysis showed that the expedited severity metric could accurately predict PTOA outcome in 80% of the cases. The time required to obtain the expedited severity metric averaged 14.9 minutes/case, and the metric was obtained without using an intact contralateral CT. The expedited CT-based methods for fracture severity assessment present a solution to issues limiting the utility of prior methods. In a relatively short amount of time, the expedited methodology provided a severity score capable of predicting PTOA risk, without needing to have the intact contralateral limb included in the CT scan. The described methods provide surgeons an objective, quantitative representation of the severity of a fracture. Obtained prior to the surgery, it provides a reasonable alternative to current subjective classification systems. The expedited severity metric offers surgeons an objective means for factoring severity of joint insult into treatment decision-making.
Impact of region contouring variability on image-based focal therapy evaluation
NASA Astrophysics Data System (ADS)
Gibson, Eli; Donaldson, Ian A.; Shah, Taimur T.; Hu, Yipeng; Ahmed, Hashim U.; Barratt, Dean C.
2016-03-01
Motivation: Focal therapy is an emerging low-morbidity treatment option for low-intermediate risk prostate cancer; however, challenges remain in accurately delivering treatment to specified targets and determining treatment success. Registered multi-parametric magnetic resonance imaging (MPMRI) acquired before and after treatment can support focal therapy evaluation and optimization; however, contouring variability, when defining the prostate, the clinical target volume (CTV) and the ablation region in images, reduces the precision of quantitative image-based focal therapy evaluation metrics. To inform the interpretation and clarify the limitations of such metrics, we investigated inter-observer contouring variability and its impact on four metrics. Methods: Pre-therapy and 2-week-post-therapy standard-of-care MPMRI were acquired from 5 focal cryotherapy patients. Two clinicians independently contoured, on each slice, the prostate (pre- and post-treatment) and the dominant index lesion CTV (pre-treatment) in the T2-weighted MRI, and the ablated region (post-treatment) in the dynamic-contrast- enhanced MRI. For each combination of clinician contours, post-treatment images were registered to pre-treatment images using a 3D biomechanical-model-based registration of prostate surfaces, and four metrics were computed: the proportion of the target tissue region that was ablated and the target:ablated region volume ratio for each of two targets (the CTV and an expanded planning target volume). Variance components analysis was used to measure the contribution of each type of contour to the variance in the therapy evaluation metrics. Conclusions: 14-23% of evaluation metric variance was attributable to contouring variability (including 6-12% from ablation region contouring); reducing this variability could improve the precision of focal therapy evaluation metrics.
NASA Technical Reports Server (NTRS)
Bole, Brian; Goebel, Kai; Vachtsevanos, George
2012-01-01
This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of prognostics-based control adaptation. A metric representing the relative deviation between the nominal output of a system and the net output that is actually enacted by an implemented prognostics-based control routine will be used to define the action space of the formulated Markov process. The state space of the Markov process will be defined in terms of an abstracted metric representing the relative health remaining in each of the system's components. The proposed formulation of component fault dynamics will conveniently relate feasible system output performance modifications to predictions of future component health deterioration.
Baseline and Target Values for PV Forecasts: Toward Improved Solar Power Forecasting: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jie; Hodge, Bri-Mathias; Lu, Siyuan
2015-08-05
Accurate solar power forecasting allows utilities to get the most out of the solar resources on their systems. To truly measure the improvements that any new solar forecasting methods can provide, it is important to first develop (or determine) baseline and target solar forecasting at different spatial and temporal scales. This paper aims to develop baseline and target values for solar forecasting metrics. These were informed by close collaboration with utility and independent system operator partners. The baseline values are established based on state-of-the-art numerical weather prediction models and persistence models. The target values are determined based on the reduction in the amount of reserves that must be held to accommodate the uncertainty of solar power output.
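A minimal sketch of the kind of persistence baseline used to set baseline forecasting-metric values: tomorrow's hourly solar power is forecast as today's observed power at the same hour, and simple error metrics are computed. The diurnal shape and data below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
hours = np.arange(48)                                                  # two days of hourly data
clear_sky = np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None)    # idealized diurnal shape
observed = clear_sky * rng.uniform(0.6, 1.0, size=hours.size)          # cloud-modulated output

persistence_forecast = observed[:24]    # day-1 observations reused as the day-2 forecast
day2_observed = observed[24:]

rmse = np.sqrt(np.mean((persistence_forecast - day2_observed) ** 2))
mae = np.mean(np.abs(persistence_forecast - day2_observed))
print("persistence RMSE:", round(float(rmse), 3), "MAE:", round(float(mae), 3))
```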
Spatial statistical network models for stream and river temperature in New England, USA
NASA Astrophysics Data System (ADS)
Detenbeck, Naomi E.; Morrison, Alisa C.; Abele, Ralph W.; Kopp, Darin A.
2016-08-01
Watershed managers are challenged by the need for predictive temperature models with sufficient accuracy and geographic breadth for practical use. We described thermal regimes of New England rivers and streams based on a reduced set of metrics for the May-September growing season (July or August median temperature, diurnal rate of change, and magnitude and timing of growing season maximum) chosen through principal component analysis of 78 candidate metrics. We then developed and assessed spatial statistical models for each of these metrics, incorporating spatial autocorrelation based on both distance along the flow network and Euclidean distance between points. Calculation of spatial autocorrelation based on travel or retention time in place of network distance yielded tighter-fitting Torgegrams with less scatter but did not improve overall model prediction accuracy. We predicted monthly median July or August stream temperatures as a function of median air temperature, estimated urban heat island effect, shaded solar radiation, main channel slope, watershed storage (percent lake and wetland area), percent coarse-grained surficial deposits, and presence or maximum depth of a lake immediately upstream, with an overall root-mean-square prediction error of 1.4 and 1.5°C, respectively. Growing season maximum water temperature varied as a function of air temperature, local channel slope, shaded August solar radiation, imperviousness, and watershed storage. Predictive models for July or August daily range, maximum daily rate of change, and timing of growing season maximum were statistically significant but explained a much lower proportion of variance than the above models (5-14% of total).
Perimal-Lewis, Lua; Teubner, David; Hakendorf, Paul; Horwood, Chris
2016-12-01
Effective and accurate use of routinely collected health data to produce Key Performance Indicator reporting is dependent on the underlying data quality. In this research, Process Mining methodology and tools were leveraged to assess the data quality of time-based Emergency Department data sourced from electronic health records. This research was done working closely with the domain experts to validate the process models. The hospital patient journey model was used to assess flow abnormalities which resulted from incorrect timestamp data used in time-based performance metrics. The research demonstrated process mining as a feasible methodology to assess data quality of time-based hospital performance metrics. The insight gained from this research enabled appropriate corrective actions to be put in place to address the data quality issues. © The Author(s) 2015.
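A minimal sketch of the kind of timestamp-consistency check that process mining of an ED patient journey makes explicit: events within each presentation should follow the expected order (arrival, triage, seen, departure). The column names and records below are illustrative assumptions, not the hospital's schema.

```python
import pandas as pd

events = pd.DataFrame({
    "presentation_id": [1, 1, 1, 1, 2, 2, 2, 2],
    "activity": ["arrival", "triage", "seen", "departure"] * 2,
    "timestamp": pd.to_datetime([
        "2015-03-01 10:00", "2015-03-01 10:05", "2015-03-01 10:40", "2015-03-01 13:00",
        "2015-03-01 11:00", "2015-03-01 10:55", "2015-03-01 11:30", "2015-03-01 12:10",
    ]),
})

expected_order = ["arrival", "triage", "seen", "departure"]
events["rank"] = events["activity"].map({a: i for i, a in enumerate(expected_order)})

# Flag presentations whose timestamps are not monotonic along the expected process
flags = (events.sort_values("rank")
               .groupby("presentation_id")["timestamp"]
               .apply(lambda ts: not ts.is_monotonic_increasing))
print("presentations with timestamp anomalies:", list(flags[flags].index))  # [2]
```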
Metrics for Evaluation of Student Models
ERIC Educational Resources Information Center
Pelanek, Radek
2015-01-01
Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…
No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.
Li, Xuelong; Guo, Qun; Lu, Xiaoqiang
2016-05-13
It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types, which are often not known in practical applications. A further deficiency is that the spatial and temporal information of videos is hardly considered simultaneously. In this paper, we propose a new NR-VQA metric based on spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features is first extracted based on the statistical analysis of 3D-DCT coefficients to characterize the spatiotemporal statistics of videos in different views. These features are then used to predict the perceived video quality via an efficient linear support vector regression (SVR) model. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; 3) the proposed method is universal for multiple types of distortions and robust to different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with the state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.
Rekik, Islem; Li, Gang; Lin, Weili; Shen, Dinggang
2016-02-01
Longitudinal neuroimaging analysis methods have remarkably advanced our understanding of early postnatal brain development. However, learning predictive models to trace forth the evolution trajectories of both normal and abnormal cortical shapes remains broadly absent. To fill this critical gap, we pioneered the first prediction model for longitudinal developing cortical surfaces in infants using a spatiotemporal current-based learning framework solely from the baseline cortical surface. In this paper, we detail this prediction model and even further improve its performance by introducing two key variants. First, we use the varifold metric to overcome the limitations of the current metric for surface registration that was used in our preliminary study. We also extend the conventional varifold-based surface registration model for pairwise registration to a spatiotemporal surface regression model. Second, we propose a morphing process of the baseline surface using its topographic attributes such as normal direction and principal curvature sign. Specifically, our method learns from longitudinal data both the geometric (vertices positions) and dynamic (temporal evolution trajectories) features of the infant cortical surface, comprising a training stage and a prediction stage. In the training stage, we use the proposed varifold-based shape regression model to estimate geodesic cortical shape evolution trajectories for each training subject. We then build an empirical mean spatiotemporal surface atlas. In the prediction stage, given an infant, we select the best learnt features from training subjects to simultaneously predict the cortical surface shapes at all later timepoints, based on similarity metrics between this baseline surface and the learnt baseline population average surface atlas. We used a leave-one-out cross validation method to predict the inner cortical surface shape at 3, 6, 9 and 12 months of age from the baseline cortical surface shape at birth. Our method attained a higher prediction accuracy and better captured the spatiotemporal dynamic change of the highly folded cortical surface than the previous proposed prediction method. Copyright © 2015 Elsevier B.V. All rights reserved.
A Metric on Phylogenetic Tree Shapes.
Colijn, C; Plazzotta, G
2018-01-01
The shapes of evolutionary trees are influenced by the nature of the evolutionary process but comparisons of trees from different processes are hindered by the challenge of completely describing tree shape. We present a full characterization of the shapes of rooted branching trees in a form that lends itself to natural tree comparisons. We use this characterization to define a metric, in the sense of a true distance function, on tree shapes. The metric distinguishes trees from random models known to produce different tree shapes. It separates trees derived from tropical versus USA influenza A sequences, which reflect the differing epidemiology of tropical and seasonal flu. We describe several metrics based on the same core characterization, and illustrate how to extend the metric to incorporate trees' branch lengths or other features such as overall imbalance. Our approach allows us to construct addition and multiplication on trees, and to create a convex metric on tree shapes which formally allows computation of average tree shapes. © The Author(s) 2017. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
Rezaeian, Sanaz; Zhong, Peng; Hartzell, Stephen; Zareian, Farzin
2015-01-01
Simulated earthquake ground motions can be used in many recent engineering applications that require time series as input excitations. However, applicability and validation of simulations are subjects of debate in the seismological and engineering communities. We propose a validation methodology at the waveform level and directly based on characteristics that are expected to influence most structural and geotechnical response parameters. In particular, three time-dependent validation metrics are used to evaluate the evolving intensity, frequency, and bandwidth of a waveform. These validation metrics capture nonstationarities in intensity and frequency content of waveforms, making them ideal to address nonlinear response of structural systems. A two-component error vector is proposed to quantify the average and shape differences between these validation metrics for a simulated and recorded ground-motion pair. Because these metrics are directly related to the waveform characteristics, they provide easily interpretable feedback to seismologists for modifying their ground-motion simulation models. To further simplify the use and interpretation of these metrics for engineers, it is shown how six scalar key parameters, including duration, intensity, and predominant frequency, can be extracted from the validation metrics. The proposed validation methodology is a step forward in paving the road for utilization of simulated ground motions in engineering practice and is demonstrated using examples of recorded and simulated ground motions from the 1994 Northridge, California, earthquake.
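A minimal sketch of one plausible reading of these ideas, not the authors' exact definitions: the evolving intensity is taken here as the cumulative squared acceleration, and a two-component error vector separates an average-level difference from a normalized-shape difference between the simulated and recorded intensity curves. All waveforms are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 20, 2001)
recorded = np.exp(-0.2 * (t - 6) ** 2) * rng.normal(size=t.size)
simulated = 1.2 * np.exp(-0.15 * (t - 7) ** 2) * rng.normal(size=t.size)

def evolving_intensity(acc, t):
    # cumulative squared acceleration as a time-dependent intensity measure
    return np.cumsum(acc ** 2) * (t[1] - t[0])

ia_rec = evolving_intensity(recorded, t)
ia_sim = evolving_intensity(simulated, t)

average_error = (ia_sim[-1] - ia_rec[-1]) / ia_rec[-1]                              # total-level difference
shape_error = np.sqrt(np.mean((ia_sim / ia_sim[-1] - ia_rec / ia_rec[-1]) ** 2))    # normalized-shape misfit

print("error vector (average, shape):", (round(float(average_error), 3), round(float(shape_error), 3)))
```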
Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei
2012-12-01
Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
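A minimal sketch of the log-euclidean view of symmetric positive definite (SPD) matrices used above: covariance descriptors of image features are mapped to the vector space of matrix logarithms, where ordinary Euclidean operations (distances, means, subspace learning) apply. The matrices here are random stand-ins for feature covariance descriptors.

```python
import numpy as np

def random_spd(d, rng):
    a = rng.normal(size=(d, d))
    return a @ a.T + d * np.eye(d)    # guaranteed symmetric positive definite

def spd_log(c):
    # matrix logarithm of an SPD matrix via its eigendecomposition
    w, v = np.linalg.eigh(c)
    return (v * np.log(w)) @ v.T

rng = np.random.default_rng(7)
c1, c2 = random_spd(5, rng), random_spd(5, rng)
dist = np.linalg.norm(spd_log(c1) - spd_log(c2), ord="fro")   # log-Euclidean distance

covs = [random_spd(5, rng) for _ in range(10)]
log_mean = np.mean([spd_log(c) for c in covs], axis=0)        # mean computed in log-space

print("log-Euclidean distance:", round(float(dist), 3))
print("log-space mean is symmetric:", np.allclose(log_mean, log_mean.T))
```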
Statistical rice yield modeling using blended MODIS-Landsat based crop phenology metrics in Taiwan
NASA Astrophysics Data System (ADS)
Chen, C. R.; Chen, C. F.; Nguyen, S. T.; Lau, K. V.
2015-12-01
Taiwan is a populated island with a majority of residents settled in the western plains where soils are suitable for rice cultivation. Rice is not only the most important commodity, but also plays a critical role for agricultural and food marketing. Information on rice production is thus important for policymakers to devise timely plans for ensuring sustainable socioeconomic development. Because rice fields in Taiwan are generally small and crop monitoring requires information on crop phenology matched to the spatiotemporal resolution of satellite data, this study used Landsat-MODIS fusion data for rice yield modeling in Taiwan. We processed the data for the first crop (Feb-Mar to Jun-Jul) and the second (Aug-Sep to Nov-Dec) in 2014 through five main steps: (1) data pre-processing to account for geometric and radiometric errors of Landsat data, (2) Landsat-MODIS data fusion using the spatial-temporal adaptive reflectance fusion model, (3) construction of the smooth time-series enhanced vegetation index 2 (EVI2), (4) rice yield modeling using EVI2-based crop phenology metrics, and (5) error verification. A comparison between EVI2 derived from the fusion image and that from the reference Landsat image indicated close agreement between the two datasets (R2 > 0.8). We analysed smooth EVI2 curves to extract phenology metrics or phenological variables for establishment of rice yield models. The results indicated that the established yield models significantly explained more than 70% variability in the data (p-value < 0.001). The comparison results between the estimated yields and the government's yield statistics for the first and second crops indicated a close significant relationship between the two datasets (R2 > 0.8), in both cases. The root mean square error (RMSE) and mean absolute error (MAE) used to measure the model accuracy revealed the consistency between the estimated yields and the government's yield statistics. This study demonstrates advantages of using EVI2-based phenology metrics (derived from Landsat-MODIS fusion data) for rice yield estimation in Taiwan prior to the harvest period.
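A minimal sketch of step (4): simple phenology metrics are derived from a smooth EVI2 time series for each field and used in a yield regression. The metric choices (peak EVI2, area under the seasonal curve, season length), the greenness threshold, and all data are illustrative assumptions, not the exact variables used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
days = np.arange(0, 120, 8)                      # 8-day composites over one crop season

def season_curve(peak, width):
    return peak * np.exp(-((days - 60) ** 2) / (2 * width ** 2))

n_fields = 150
peaks = rng.uniform(0.4, 0.8, n_fields)
widths = rng.uniform(15, 30, n_fields)
evi2 = np.array([season_curve(p, w) for p, w in zip(peaks, widths)])   # smooth EVI2 per field

# Phenology metrics per field
peak_evi2 = evi2.max(axis=1)
auc = evi2.sum(axis=1) * 8.0                      # approximate area under the curve (8-day steps)
season_length = (evi2 > 0.2).sum(axis=1) * 8.0    # days above a greenness threshold

X = np.column_stack([peak_evi2, auc, season_length])
yield_t_ha = 3.0 + 4.0 * peak_evi2 + 0.01 * auc + rng.normal(scale=0.3, size=n_fields)

model = LinearRegression().fit(X, yield_t_ha)
print("R^2 of the phenology-metric yield model:", round(model.score(X, yield_t_ha), 3))
```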
"Development of Model-Based Air Pollution Exposure Metrics for use in Epidemiologic Studies"
Population-based epidemiological studies of air pollution have traditionally relied upon imperfect surrogates of personal exposures, such as area-wide ambient air pollution levels based on readily available concentrations from central monitoring sites. U.S. EPA in collaboration w...
DEVELOPMENT OF MODEL-BASED AIR POLLUTION EXPOSURE METRICS FOR USE IN EPIDEMIOLOGIC STUDIES
Population-based epidemiological studies of air pollution have traditionally relied upon imperfect surrogates of personal exposures, such as area-wide ambient air pollution levels based on readily available concentrations from central monitoring sites. U.S. EPA in collaboration w...
Mutual Information in Frequency and Its Application to Measure Cross-Frequency Coupling in Epilepsy
NASA Astrophysics Data System (ADS)
Malladi, Rakesh; Johnson, Don H.; Kalamangalam, Giridhar P.; Tandon, Nitin; Aazhang, Behnaam
2018-06-01
We define a metric, mutual information in frequency (MI-in-frequency), to detect and quantify the statistical dependence between different frequency components in the data, referred to as cross-frequency coupling, and apply it to electrophysiological recordings from the brain to infer cross-frequency coupling. The current metrics used to quantify cross-frequency coupling in neuroscience cannot detect if two frequency components in non-Gaussian brain recordings are statistically independent or not. Our MI-in-frequency metric, based on Shannon's mutual information between the Cramér representations of stochastic processes, overcomes this shortcoming and can detect statistical dependence in frequency between non-Gaussian signals. We then describe two data-driven estimators of MI-in-frequency: one based on kernel density estimation and the other based on the nearest neighbor algorithm, and validate their performance on simulated data. We then use MI-in-frequency to estimate mutual information between two data streams that are dependent across time, without making any parametric model assumptions. Finally, we use the MI-in-frequency metric to investigate the cross-frequency coupling in the seizure onset zone from electrocorticographic recordings during seizures. The inferred cross-frequency coupling characteristics are essential to optimize the spatial and spectral parameters of electrical-stimulation-based treatments of epilepsy.
Fu, Lawrence D.; Aphinyanaphongs, Yindalon; Wang, Lily; Aliferis, Constantin F.
2011-01-01
Evaluating the biomedical literature and health-related websites for quality are challenging information retrieval tasks. Current commonly used methods include impact factor for journals, PubMed’s clinical query filters and machine learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering the topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods for a variety of topics. Impact factor, clinical query filters, and PageRank vary widely across different topics while a topic-specific impact factor and machine learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average but struggle when used on a number of narrower topics. Topic adjusted metrics and other topic robust methods have an advantage in such situations. Users of traditional topic-sensitive metrics should be aware of their limitations. PMID:21419864
Voice based gender classification using machine learning
NASA Astrophysics Data System (ADS)
Raahul, A.; Sapthagiri, R.; Pankaj, K.; Vijayarajan, V.
2017-11-01
Gender identification is one of the major problems in speech analysis today: the gender of a speaker is traced from acoustic data such as pitch, median, and frequency. Machine learning gives promising results for classification problems across research domains, and several performance metrics are available to evaluate the algorithms of a given area. We present a comparative model for evaluating five different machine learning algorithms on eight different metrics for gender classification from acoustic data. The algorithms compared are Linear Discriminant Analysis (LDA), K-Nearest Neighbour (KNN), Classification and Regression Trees (CART), Random Forest (RF), and Support Vector Machine (SVM). The main consideration in evaluating any algorithm is its performance: the misclassification rate must be low in classification problems, which means the accuracy rate must be high. Location and gender of a person have become very crucial in economic markets in the form of AdSense. With this comparative model we assess the different ML algorithms and find the best fit for gender classification of acoustic data.
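A minimal sketch of this kind of comparison with scikit-learn is shown below; the synthetic feature matrix stands in for the acoustic features, and the cross-validation setup and the particular subset of scoring metrics are assumptions, not the paper's exact protocol.

```python
# Compare five classifiers on several metrics via cross-validation (illustrative setup).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate

# Placeholder for acoustic features (pitch, median frequency, ...) and gender labels.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=8, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(),
    "CART": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(random_state=0),
}
scoring = ["accuracy", "precision", "recall", "f1", "roc_auc"]  # a subset of the eight metrics

for name, model in models.items():
    cv = cross_validate(model, X, y, cv=10, scoring=scoring)
    summary = ", ".join(f"{m}={cv['test_' + m].mean():.3f}" for m in scoring)
    print(f"{name}: {summary}")
```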
Measures of node centrality in mobile social networks
NASA Astrophysics Data System (ADS)
Gao, Zhenxiang; Shi, Yan; Chen, Shanzhi
2015-02-01
Mobile social networks exploit human mobility and the consequent device-to-device contacts to opportunistically create data paths over time. Because links in mobile social networks are time-varying and strongly shaped by human mobility, discovering influential nodes is an important issue for efficient information propagation in such networks. Although traditional centrality definitions give metrics to identify the nodes with central positions in static binary networks, they cannot effectively identify the influential nodes for information propagation in mobile social networks. In this paper, we address the problem of discovering influential nodes in mobile social networks. We first use a temporal evolution graph model that more accurately captures the topology dynamics of the mobile social network over time. Based on this model, we explore human social relations and mobility patterns to redefine three common centrality metrics: degree centrality, closeness centrality and betweenness centrality. We then employ empirical traces to evaluate the benefits of the proposed centrality metrics, and discuss the predictability of nodes' global centrality ranking from nodes' local centrality ranking. Results demonstrate the efficiency of the proposed centrality metrics.
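The sketch below illustrates the general idea with NetworkX by averaging the three classical centralities over a sequence of contact-graph snapshots; the snapshot construction and the simple time-averaging are assumptions made for illustration and are not the redefinitions proposed in the paper.

```python
# Time-averaged degree, closeness, and betweenness centrality over contact snapshots.
import networkx as nx

nodes = range(20)
# Hypothetical contact snapshots, one random graph per time window.
snapshots = [nx.gnp_random_graph(len(nodes), 0.15, seed=t) for t in range(24)]

def time_averaged(centrality_fn, graphs):
    total = {n: 0.0 for n in nodes}
    for g in graphs:
        c = centrality_fn(g)
        for n in nodes:
            total[n] += c.get(n, 0.0)
    return {n: v / len(graphs) for n, v in total.items()}

degree = time_averaged(nx.degree_centrality, snapshots)
closeness = time_averaged(nx.closeness_centrality, snapshots)
betweenness = time_averaged(nx.betweenness_centrality, snapshots)

# Rank nodes by time-averaged betweenness as a crude proxy for influence.
ranking = sorted(betweenness, key=betweenness.get, reverse=True)
print("top 5 candidate influential nodes:", ranking[:5])
```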
NEW CATEGORICAL METRICS FOR AIR QUALITY MODEL EVALUATION
Traditional categorical metrics used in model evaluations are "clear-cut" measures in that the model's ability to predict an exceedance is defined by a fixed threshold concentration and the metrics are defined by observation-forecast sets that are paired both in space and time. T...
Leung, Michael; Bassani, Diego G; Racine-Poon, Amy; Goldenberg, Anna; Ali, Syed Asad; Kang, Gagandeep; Premkumar, Prasanna S; Roth, Daniel E
2017-09-10
Conditioning child growth measures on baseline accounts for regression to the mean (RTM). Here, we present the "conditional random slope" (CRS) model, based on a linear-mixed effects model that incorporates a baseline-time interaction term that can accommodate multiple data points for a child while also directly accounting for RTM. In two birth cohorts, we applied five approaches to estimate child growth velocities from 0 to 12 months to assess the effect of increasing data density (number of measures per child) on the magnitude of RTM of unconditional estimates, and the correlation and concordance between the CRS and four alternative metrics. Further, we demonstrated the differential effect of the choice of velocity metric on the magnitude of the association between infant growth and stunting at 2 years. RTM was minimally attenuated by increasing data density for unconditional growth modeling approaches. CRS and classical conditional models gave nearly identical estimates with two measures per child. Compared to the CRS estimates, unconditional metrics had moderate correlation (r = 0.65-0.91), but poor agreement in the classification of infants with relatively slow growth (kappa = 0.38-0.78). Estimates of the velocity-stunting association were the same for CRS and classical conditional models but differed substantially between conditional versus unconditional metrics. The CRS can leverage the flexibility of linear mixed models while addressing RTM in longitudinal analyses. © 2017 The Authors American Journal of Human Biology Published by Wiley Periodicals, Inc.
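A minimal sketch of a conditional-random-slope-style fit with statsmodels is given below; the column names, the simulated data, and the exact formula are assumptions based on the description (a linear mixed-effects model with a baseline-time interaction and a random slope per child), not the authors' code.

```python
# Linear mixed-effects model with a baseline-by-time interaction and a random slope per child.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for c in range(200):
    baseline = rng.normal(0, 1)                      # baseline anthropometric z-score
    slope = 0.05 - 0.02 * baseline + rng.normal(0, 0.02)   # RTM-like: lower baseline, faster growth
    for age_mo in np.linspace(0, 12, 4):             # four visits from 0 to 12 months
        rows.append({"child": c, "age_mo": age_mo, "baseline": baseline,
                     "z": baseline + slope * age_mo + rng.normal(0, 0.1)})
df = pd.DataFrame(rows)

# Fixed effects: time, baseline, and their interaction (the RTM adjustment);
# random effects: intercept and slope on time, grouped by child.
model = smf.mixedlm("z ~ age_mo * baseline", df, groups=df["child"], re_formula="~age_mo")
fit = model.fit()
print(fit.summary())

# A child's conditional random slope: fixed time effect plus that child's random slope deviation.
random_slopes = fit.fe_params["age_mo"] + pd.Series(
    {child: re["age_mo"] for child, re in fit.random_effects.items()})
print(random_slopes.sort_values().head())
```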
Model Performance Evaluation and Scenario Analysis ...
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components, and the reconstruction back to time series, provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the too
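For reference, the sketch below computes a few of the named goodness-of-fit measures (RMSE, coefficient of determination, and Nash-Sutcliffe efficiency) for an observed/simulated pair; it illustrates the standard formulas only and is not taken from the MPESA tool itself.

```python
# Standard goodness-of-fit measures for an observed vs. simulated time series.
import numpy as np

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def r_squared(obs, sim):
    # Coefficient of determination as the squared Pearson correlation.
    return float(np.corrcoef(obs, sim)[0, 1] ** 2)

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0, 3.3])
sim = np.array([1.0, 3.0, 3.1, 4.6, 4.4, 3.0])
print(rmse(obs, sim), r_squared(obs, sim), nash_sutcliffe(obs, sim))
```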
Darrow, Lyndsey A; Klein, Mitchel; Sarnat, Jeremy A; Mulholland, James A; Strickland, Matthew J; Sarnat, Stefanie E; Russell, Armistead G; Tolbert, Paige E
2011-01-01
Various temporal metrics of daily pollution levels have been used to examine the relationships between air pollutants and acute health outcomes. However, daily metrics of the same pollutant have rarely been systematically compared within a study. In this analysis, we describe the variability of effect estimates attributable to the use of different temporal metrics of daily pollution levels. We obtained hourly measurements of ambient particulate matter (PM₂.₅), carbon monoxide (CO), nitrogen dioxide (NO₂), and ozone (O₃) from air monitoring networks in the 20-county Atlanta area for the time period 1993-2004. For each pollutant, we created (1) a daily 1-h maximum; (2) a 24-h average; (3) a commute average; (4) a daytime average; (5) a nighttime average; and (6) a daily 8-h maximum (only for O₃). Using Poisson generalized linear models, we examined associations between daily counts of respiratory emergency department visits and the previous day's pollutant metrics. Variability was greatest across the O₃ metrics, with the 8-h maximum, 1-h maximum, and daytime metrics yielding strong positive associations and the nighttime O₃ metric yielding a negative association (likely reflecting confounding by air pollutants oxidized by O₃). With the exception of the daytime metric, all of the CO and NO₂ metrics were positively associated with respiratory emergency department visits. Differences in observed associations with respiratory emergency room visits among temporal metrics of the same pollutant were influenced by the diurnal patterns of the pollutant, the spatial representativeness of the metrics, and the correlation between each metric and copollutant concentrations. Overall, the use of metrics based on the US National Ambient Air Quality Standards (for example, the use of a daily 8-h maximum O₃ as opposed to a 24-h average metric) was supported by this analysis. Comparative analysis of temporal metrics also provided insight into underlying relationships between specific air pollutants and respiratory health.
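The daily metrics described here are straightforward to derive from an hourly series; the sketch below shows one way with pandas, where the commute, daytime, and nighttime hour windows are assumptions for illustration rather than the study's exact definitions.

```python
# Derive daily exposure metrics from an hourly pollutant series.
import numpy as np
import pandas as pd

idx = pd.date_range("2000-01-01", periods=24 * 365, freq="h")
hourly = pd.Series(np.random.default_rng(0).gamma(2.0, 15.0, idx.size), index=idx)  # e.g. O3 in ppb

daily_1h_max = hourly.resample("D").max()
daily_24h_avg = hourly.resample("D").mean()
# Daily maximum 8-h running mean (O3-style metric).
daily_8h_max = hourly.rolling(8, min_periods=6).mean().resample("D").max()
# Assumed hour windows: commute 7-9 and 16-18, daytime 8-19, nighttime 0-6.
commute = pd.concat([hourly.between_time("07:00", "09:59"),
                     hourly.between_time("16:00", "18:59")]).sort_index()
daily_commute_avg = commute.resample("D").mean()
daily_daytime_avg = hourly.between_time("08:00", "19:59").resample("D").mean()
daily_nighttime_avg = hourly.between_time("00:00", "06:59").resample("D").mean()

metrics = pd.DataFrame({"max1h": daily_1h_max, "avg24h": daily_24h_avg,
                        "max8h": daily_8h_max, "commute": daily_commute_avg,
                        "daytime": daily_daytime_avg, "night": daily_nighttime_avg})
print(metrics.head())
```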
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it simultaneously generates three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales, the VARS approach), (2) variance-based total-order effects (the Sobol approach), and (3) derivative-based elementary effects (the Morris approach). VARS-TOOL also includes two novel features. The first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results with sample sizes (numbers of model runs) 1-2 orders of magnitude smaller than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.
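As a rough illustration of the variogram idea behind IVARS (not the toolbox's star-based sampling scheme), the sketch below estimates a directional variogram of a model response along each parameter and integrates it up to a scale H; the test function, lag range, and clipping at the parameter bounds are assumptions made for the example.

```python
# Directional variogram of a model response along one parameter, and its integral (IVARS-like).
import numpy as np

def model(x):
    # Hypothetical test function of three parameters on [0, 1]^3.
    return np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

def directional_variogram(f, dim, n_base=2000, lags=np.linspace(0.02, 0.5, 25), seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n_base, 3))
    y = f(x)
    gamma = []
    for h in lags:
        x_shift = x.copy()
        x_shift[:, dim] = np.clip(x[:, dim] + h, 0, 1)   # perturb one parameter by lag h (clipped)
        gamma.append(0.5 * np.mean((f(x_shift) - y) ** 2))
    return lags, np.array(gamma)

for d in range(3):
    lags, gamma = directional_variogram(model, d)
    ivars = float(np.sum(gamma) * (lags[1] - lags[0]))   # crude integral of the variogram up to H = 0.5
    print(f"parameter {d}: IVARS-like score = {ivars:.4f}")
```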
Software risk management through independent verification and validation
NASA Technical Reports Server (NTRS)
Callahan, John R.; Zhou, Tong C.; Wood, Ralph
1995-01-01
Software project managers need tools to estimate and track project goals in a continuous fashion before, during, and after development of a system. In addition, they need an ability to compare the current project status with past project profiles to validate management intuition, identify problems, and then direct appropriate resources to the sources of problems. This paper describes a measurement-based approach to calculating the risk inherent in meeting project goals that leverages past project metrics and existing estimation and tracking models. We introduce the IV&V Goal/Questions/Metrics model, explain its use in the software development life cycle, and describe our attempts to validate the model through the reverse engineering of existing projects.
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, L.; Leung, L. R.; Lin, G.; Lu, J.; Gao, Y.; Zhang, Y.
2017-12-01
Projecting precipitation changes is challenging because of incomplete understanding of the climate system and biases and uncertainty in climate models. In East Asia, where summer precipitation is dominantly influenced by the monsoon circulation, the global models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) give widely varying projections of precipitation change for the 21st century. It is critical for the community to know which models' projections are more reliable in response to natural and anthropogenic forcings. In this study we defined multi-dimensional metrics measuring model performance in simulating the present-day large-scale circulation, regional precipitation, and the relationship between them. The large-scale circulation features examined in this study include the lower-tropospheric southwesterly winds, the western North Pacific subtropical high, the South China Sea subtropical high, and the East Asian westerly jet in the upper troposphere. Each of these circulation features transports moisture to East Asia, enhancing the moist static energy and strengthening the Meiyu moisture front that is the primary mechanism for precipitation generation in eastern China. Based on these metrics, 30 models in the CMIP5 ensemble are classified into three groups. Models in the top performing group projected regional precipitation patterns that are more similar to each other than those in the bottom or middle performing groups, and consistently projected statistically significant increasing trends in two of the large-scale circulation indices and in precipitation. In contrast, models in the bottom or middle performing groups projected small drying or no trends in precipitation. We also find that models that only reasonably reproduce the observed precipitation climatology do not guarantee more reliable projections of future precipitation, because good simulation skill could be achieved through compensating errors from multiple sources. Herein the potential for more robust projections of precipitation changes at the regional scale is demonstrated through the use of discriminating metrics to subsample the multi-model ensemble. The results from this study provide insights into how to select models from the CMIP ensemble to project regional climate and hydrological cycle changes.
NASA Astrophysics Data System (ADS)
Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon
2018-05-01
The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the growing availability of spatial observations or with the sophistication of model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous in order to achieve the complex task of comparing spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics which further allow for a comparison of variables that are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
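A compact sketch of a SPAEF-style score for two spatial fields is shown below; it follows the commonly cited formulation (correlation, ratio of coefficients of variation, and histogram overlap of z-scored fields, combined as a Euclidean distance from the ideal point), but the exact definition should be checked against the paper before use, and the example fields are synthetic.

```python
# SPAEF-style spatial efficiency between an observed and a simulated 2-D field.
import numpy as np

def spaef(obs, sim, bins=100):
    obs, sim = np.ravel(obs).astype(float), np.ravel(sim).astype(float)
    alpha = np.corrcoef(obs, sim)[0, 1]                                  # pattern correlation
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))   # ratio of coefficients of variation
    # Histogram overlap of z-scored fields (fraction of shared counts).
    zo = (obs - obs.mean()) / obs.std()
    zs = (sim - sim.mean()) / sim.std()
    edges = np.linspace(min(zo.min(), zs.min()), max(zo.max(), zs.max()), bins + 1)
    ho, _ = np.histogram(zo, bins=edges)
    hs, _ = np.histogram(zs, bins=edges)
    gamma = np.minimum(ho, hs).sum() / ho.sum()
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

rng = np.random.default_rng(0)
observed = rng.gamma(2.0, 1.0, (50, 50))
simulated = observed * 1.1 + rng.normal(0, 0.2, (50, 50))                # a slightly biased simulation
print(f"SPAEF = {spaef(observed, simulated):.3f}")                       # 1 indicates a perfect match
```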
NASA Astrophysics Data System (ADS)
Williams, Richard; Measures, Richard; Hicks, Murray; Brasington, James
2017-04-01
Advances in geomatics technologies have transformed the monitoring of reach-scale (10⁰-10¹ km) river morphodynamics. Hyperscale Digital Elevation Models (DEMs) can now be acquired at temporal intervals that are commensurate with the frequencies of high-flow events that force morphological change. The low vertical errors associated with such DEMs enable DEMs of Difference (DoDs) to be generated to quantify patterns of erosion and deposition, and derive sediment budgets using the morphological approach. In parallel with reach-scale observational advances, high-resolution, two-dimensional, physics-based numerical morphodynamic models are now computationally feasible for unsteady, reach-scale simulations. In light of this observational and predictive progress, there is a need to identify appropriate metrics that can be extracted from DEMs and DoDs to assess model performance. Nowhere is this more pertinent than in braided river environments, where numerous mobile channels that intertwine around mid-channel bars result in complex patterns of erosion and deposition, thus making model assessment particularly challenging. This paper identifies and evaluates a range of morphological and morphological-change metrics that can be used to assess predictions of braided river morphodynamics at the timescale of single storm events. A depth-averaged, mixed-grainsize Delft3D morphodynamic model was used to simulate morphological change during four discrete high-flow events, ranging from 91 to 403 m³ s⁻¹, along a 2.5 x 0.7 km reach of the braided, gravel-bed Rees River, New Zealand. Pre- and post-event topographic surveys, using a fusion of Terrestrial Laser Scanning and optical-empirical bathymetric mapping, were used to produce 0.5 m resolution DEMs and DoDs. The pre- and post-event DEMs for a moderate (227 m³ s⁻¹) high-flow event were used to calibrate the model. DEMs and DoDs from the other three high-flow events were used for model assessment using two approaches. First, "morphological" metrics were applied to compare observed and predicted post-event DEMs. These metrics include measures of confluence and bifurcation node density, bar shape, braiding intensity, and topographic comparisons using a form of the Brier Skill Score and cumulative frequency distributions of rugosity. Second, "morphological change" metrics were used to compare observed and predicted morphological change. These metrics included the extent of the morphologically active area, pairwise comparisons of morphological change (using kappa and fuzzy kappa statistics), and comparisons between vertical morphological change magnitude and elevation distribution. Results indicate that those metrics that assess characteristic features of braiding, rather than making direct comparisons, are most useful for assessing reach-scale braided river morphodynamic models. Together, the metrics indicate that there was a general affinity between observed and predicted braided river morphodynamics, both during small and large magnitude high-flow events. These results thus demonstrate how high-resolution, reach-scale, natural experiment datasets can be used to assess the efficacy of morphological models in predicting realistic patterns of erosion and deposition. This lays the foundation for the development and assessment of decadal scale morphodynamic models and their use in adaptive river basin management.
Funding Ohio Community Colleges: An Analysis of the Performance Funding Model
ERIC Educational Resources Information Center
Krueger, Cynthia A.
2013-01-01
This study examined Ohio's community college performance funding model that is based on seven student success metrics. A percentage of the regular state subsidy is withheld from institutions; funding is earned back based on the three-year average of success points achieved in comparison to other community colleges in the state. Analysis of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMillan, K; Bostani, M; Cagnon, C
Purpose: AAPM Task Group 204 described size specific dose estimates (SSDE) for body scans. The purpose of this work is to use a similar approach to develop patient-specific, scanner-independent organ dose estimates for head CT exams using an attenuation-based size metric. Methods: For eight patient models from the GSF family of voxelized phantoms, dose to brain and lens of the eye was estimated using Monte Carlo simulations of contiguous axial scans for 64-slice MDCT scanners from four major manufacturers. Organ doses were normalized by scanner-specific 16 cm CTDIvol values and averaged across all scanners to obtain scanner-independent CTDIvol-to-organ-dose conversion coefficients for each patient model. Head size was measured at the first slice superior to the eyes; patient perimeter and effective diameter (ED) were measured directly from the GSF data. Because the GSF models use organ identification codes instead of Hounsfield units, water equivalent diameter (WED) was estimated indirectly. Using the image data from 42 patients ranging from 2 weeks old to adult, the perimeter, ED and WED size metrics were obtained and correlations between each metric were established. Applying these correlations to the GSF perimeter and ED measurements, WED was calculated for each model. The relationship between the various patient size metrics and CTDIvol-to-organ-dose conversion coefficients was then described. Results: The analysis of patient images demonstrated the correlation between WED and ED across a wide range of patient sizes. When applied to the GSF patient models, an exponential relationship between CTDIvol-to-organ-dose conversion coefficients and the WED size metric was observed with correlation coefficients of 0.93 and 0.77 for the brain and lens of the eye, respectively. Conclusion: Strong correlation exists between CTDIvol normalized brain dose and WED. For the lens of the eye, a lower correlation is observed, primarily due to surface dose variations. Funding Support: Siemens-UCLA Radiology Master Research Agreement; Disclosures - Michael McNitt-Gray: Institutional Research Agreement, Siemens AG; Research Support, Siemens AG; Consultant, Flaherty Sensabaugh Bonasso PLLC; Consultant, Fulbright and Jaworski.
The paper presents a hybrid air quality modeling approach and its application in NEXUS in order to provide spatial and temporally varying exposure estimates and identification of the mobile source contribution to the total pollutant exposure. Model-based exposure metrics, associa...
Meszlényi, Regina J.; Buza, Krisztian; Vidnyánszky, Zoltán
2017-01-01
Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. It follows from this flexibility that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network. PMID:29089883
Stikic, Maja; Berka, Chris; Levendowski, Daniel J.; Rubio, Roberto F.; Tan, Veasna; Korszen, Stephanie; Barba, Douglas; Wurzer, David
2014-01-01
The objective of this study was to investigate the feasibility of physiological metrics such as ECG-derived heart rate and EEG-derived cognitive workload and engagement as potential predictors of performance on different training tasks. An unsupervised approach based on self-organizing neural network (NN) was utilized to model cognitive state changes over time. The feature vector comprised EEG-engagement, EEG-workload, and heart rate metrics, all self-normalized to account for individual differences. During the competitive training process, a linear topology was developed where the feature vectors similar to each other activated the same NN nodes. The NN model was trained and auto-validated on combat marksmanship training data from 51 participants that were required to make “deadly force decisions” in challenging combat scenarios. The trained NN model was cross validated using 10-fold cross-validation. It was also validated on a golf study in which additional 22 participants were asked to complete 10 sessions of 10 putts each. Temporal sequences of the activated nodes for both studies followed the same pattern of changes, demonstrating the generalization capabilities of the approach. Most node transition changes were local, but important events typically caused significant changes in the physiological metrics, as evidenced by larger state changes. This was investigated by calculating a transition score as the sum of subsequent state transitions between the activated NN nodes. Correlation analysis demonstrated statistically significant correlations between the transition scores and subjects' performances in both studies. This paper explored the hypothesis that temporal sequences of physiological changes comprise the discriminative patterns for performance prediction. These physiological markers could be utilized in future training improvement systems (e.g., through neurofeedback), and applied across a variety of training environments. PMID:25414629
Normalized distance aggregation of discriminative features for person reidentification
NASA Astrophysics Data System (ADS)
Hou, Li; Han, Kang; Wan, Wanggen; Hwang, Jenq-Neng; Yao, Haiyan
2018-03-01
We propose an effective person reidentification method based on normalized distance aggregation of discriminative features. Our framework is built on the integration of three high-performance discriminative feature extraction models, including local maximal occurrence (LOMO), feature fusion net (FFN), and a concatenation of LOMO and FFN called LOMO-FFN, through two fast and discriminant metric learning models, i.e., cross-view quadratic discriminant analysis (XQDA) and large-scale similarity learning (LSSL). More specifically, we first represent all the cross-view person images using LOMO, FFN, and LOMO-FFN, respectively, and then apply each extracted feature representation to train XQDA and LSSL, respectively, to obtain the optimized individual cross-view distance metric. Finally, the cross-view person matching is computed as the sum of the optimized individual cross-view distance metric through the min-max normalization. Experimental results have shown the effectiveness of the proposed algorithm on three challenging datasets (VIPeR, PRID450s, and CUHK01).
Gravitational waves from extreme mass ratio inspirals around bumpy black holes
NASA Astrophysics Data System (ADS)
Moore, Christopher J.; Chua, Alvin J. K.; Gair, Jonathan R.
2017-10-01
The space based interferometer LISA will be capable of detecting the gravitational waves emitted by stellar mass black holes or neutron stars slowly inspiralling into the supermassive black holes found in the centre of most galaxies. The gravitational wave signal from such an extreme mass ratio inspiral (EMRI) event will provide a unique opportunity to test whether the spacetime metric around the central black hole is well described by the Kerr solution. In this paper a variant of the well studied ‘analytic kludge’ model for EMRIs around Kerr black holes is extended to a family of parametrically deformed bumpy black holes which preserve the basic symmetries of the Kerr metric. The new EMRI model is then used to quantify the constraints that LISA observations of EMRIs may be able to place on the deviations, or bumps, on the Kerr metric.
NASA Astrophysics Data System (ADS)
Rutishauser, This; Stöckli, Reto; Jeanneret, François; Peñuelas, Josep
2010-05-01
Changes in the seasonality of life cycles of plants as recorded in phenological observations have been widely analysed at the species level, with data available for many decades back in time. At the same time, seasonality changes in satellite-based observations and prognostic phenology models comprise information at the pixel-size or landscape scale. Change analysis of satellite-based records is restricted by relatively short satellite records that further include gaps, while model-based analyses are biased due to current model deficiencies. At 30 selected sites across Europe, we analysed three different sources of plant seasonality during the 1971-2000 period. Data consisted of (1) species-specific development stages of flowering and leaf-out, with different species observed at each site; (2) a synthetic phenological metric that integrates the common interannual phenological signal across all species at one site; and (3) daily Leaf Area Index estimated with a prognostic phenology model. The prior uncertainties of the model's empirical parameter space are constrained by assimilating the Fraction of Photosynthetically Active Radiation absorbed by vegetation (FPAR) and Leaf Area Index (LAI) from the MODerate Resolution Imaging Spectroradiometer (MODIS). We extracted the day of year when the 25%, 50% and 75% thresholds were passed each spring. The question arises of how the three phenological signals compare and correlate across climate zones in Europe. Is there a match between single species observations, species-based ground-observed metrics and the landscape-scale prognostic model? Are there single key species across Europe that best represent a landscape-scale measure from the prognostic model? Can one source substitute another and serve as proxy data? What can we learn from potential mismatches? Focusing on changes in spring, this contribution presents first results of an ongoing comparison study from a number of European test sites that will be extended to the pan-European phenological database Cost725 and PEP725.
A Bridge Role Metric Model for Nodes in Software Networks
Li, Bo; Feng, Yanli; Ge, Shiyu; Li, Dashe
2014-01-01
A bridge role metric model is put forward in this paper. Compared with previous metric models, our approach of treating a large-scale object-oriented software system as a complex network is inherently more realistic. To acquire nodes and links in an undirected network, a new model is presented that captures the crucial connectivity of a module or hub, rather than only centrality as in previous metric models. Two previous metric models are described for comparison. In addition, the relationship between the metric results and node degrees is well fitted by a power law. The model represents many realistic characteristics of actual software structures, and a hydropower simulation system is taken as an example. This paper makes additional contributions to an accurate understanding of the module design of software systems and is expected to be beneficial to software engineering practices. PMID:25364938
Three validation metrics for automated probabilistic image segmentation of brain tumours
Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.
2005-01-01
The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined the segmentation accuracy based on three two-sample validation metrics against the estimated composite latent gold standard, which was derived from several experts’ manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information, over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performances of these validation metrics were also investigated via Monte Carlo simulation. Extensions incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
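As a small illustration of threshold selection against a gold standard (not the authors' beta-mixture framework), the sketch below sweeps decision thresholds over a probabilistic segmentation and reports the Dice coefficient and mutual information at each; the probability map and gold standard are synthetic placeholders.

```python
# Sweep decision thresholds over a probabilistic segmentation and score against a gold standard.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
gold = np.zeros((128, 128), dtype=int)
gold[40:90, 30:80] = 1                                          # synthetic "tumour" region
prob = np.clip(gold * 0.8 + rng.normal(0.1, 0.15, gold.shape), 0, 1)  # noisy probability map

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

best = None
for thr in np.linspace(0.05, 0.95, 19):
    seg = (prob >= thr).astype(int)
    d = dice(seg, gold)
    mi = mutual_info_score(gold.ravel(), seg.ravel())           # MI between label maps, in nats
    if best is None or d > best[1]:
        best = (thr, d, mi)

thr, d, mi = best
print(f"Dice-optimal threshold = {thr:.2f} (Dice = {d:.3f}, MI = {mi:.3f})")
```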
Climate Classification is an Important Factor in Assessing Hospital Performance Metrics
NASA Astrophysics Data System (ADS)
Boland, M. R.; Parhi, P.; Gentine, P.; Tatonetti, N. P.
2017-12-01
Context/Purpose: Climate is a known modulator of disease, but its impact on hospital performance metrics remains unstudied. Methods: We assess the relationship between Köppen-Geiger climate classification and hospital performance metrics, specifically 30-day mortality, as reported in Hospital Compare and collected for the period July 2013 through June 2014 (7/1/2013 - 06/30/2014). A hospital-level multivariate linear regression analysis was performed while controlling for known socioeconomic factors to explore the relationship between all-cause mortality and climate. Hospital performance scores were obtained from 4,524 hospitals belonging to 15 distinct Köppen-Geiger climates and 2,373 unique counties. Results: Model results revealed that hospital performance metrics for mortality showed significant climate dependence (p<0.001) after adjusting for socioeconomic factors. Interpretation: Currently, hospitals are reimbursed by government agencies using 30-day mortality rates along with 30-day readmission rates. These metrics allow government agencies to rank hospitals according to their "performance" along these metrics. Various socioeconomic factors are taken into consideration when determining an individual hospital's performance. However, no climate-based adjustment is made within the existing framework. Our results indicate that climate-based variability in 30-day mortality rates does exist even after socioeconomic confounder adjustment. Use of standardized high-level climate classification systems (such as Köppen-Geiger) would be useful to incorporate in future metrics. Conclusion: Climate is a significant factor in evaluating hospital 30-day mortality rates. These results demonstrate that climate classification is an important factor when comparing hospital performance across the United States.
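A minimal sketch of this kind of adjusted regression with statsmodels is shown below; the covariate names and the synthetic data are placeholders, since the study's actual socioeconomic variables are not listed in the abstract.

```python
# Hospital-level regression of 30-day mortality on climate class plus socioeconomic covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4524
climates = rng.choice(["Cfa", "Dfa", "BSk", "Csb", "Dfb"], size=n)   # subset of Koppen-Geiger classes
df = pd.DataFrame({
    "mortality_30d": rng.normal(12.0, 1.5, n),                       # percent, placeholder values
    "climate": climates,
    "median_income": rng.normal(55_000, 12_000, n),                  # hypothetical covariates
    "pct_over_65": rng.normal(16.0, 4.0, n),
})

# Climate enters as a categorical factor; socioeconomic variables as continuous controls.
fit = smf.ols("mortality_30d ~ C(climate) + median_income + pct_over_65", data=df).fit()
print(fit.summary())
```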
Using Remote Sensing to Estimate Crop Water Use to Improve Irrigation Water Management
NASA Astrophysics Data System (ADS)
Reyes-Gonzalez, Arturo
Irrigation water is scarce. Hence, accurate estimation of crop water use is necessary for proper irrigation management and water conservation. Satellite-based remote sensing is a tool that can estimate crop water use efficiently. Several models have been developed to estimate crop water requirement or actual evapotranspiration (ETa) using remote sensing. One of them is the Mapping EvapoTranspiration at High Resolution using Internalized Calibration (METRIC) model. This model has been compared with other methods for ET estimation, including weighing lysimeters, pan evaporation, the Bowen Ratio Energy Balance System (BREBS), Eddy Covariance (EC), and sap flow. However, comparison of METRIC model outputs to an atmometer for ETa estimation has not yet been attempted in eastern South Dakota. The results showed a good relationship between ETa estimated by the METRIC model and ETa estimated with the atmometer (r² = 0.87 and RMSE = 0.65 mm day⁻¹). However, ETa values from the atmometer were consistently lower than ETa values from METRIC. The verification of remotely sensed estimates of surface variables is essential for any remote-sensing study. The relationships between LAI, Ts, and ETa estimated using the remote sensing-based METRIC model and in-situ measurements were established. The results showed good agreement between the variables measured in situ and estimated by the METRIC model: LAI showed r² = 0.76 and RMSE = 0.59 m² m⁻², Ts had r² = 0.87 and RMSE = 1.24 °C, and ETa presented r² = 0.89 and RMSE = 0.71 mm day⁻¹. Estimation of ETa using the energy balance method can be challenging and time consuming. Thus, there is a need to develop a simple and fast method to estimate ETa using minimum input parameters. Two methods were used, namely 1) an energy balance method (EB method) that used as input parameters the Landsat image, weather data, a digital elevation map, and a land cover map, and 2) a Kc-NDVI method that uses two input parameters: the Landsat image and weather data. A strong relationship was found between the two methods, with r² of 0.97 and RMSE of 0.37 mm day⁻¹. Hence, the Kc-NDVI method performed well for ETa estimation, indicating that it can be a robust and reliable method to estimate ETa in a short period of time. Crop evapotranspiration (ETc) was also estimated using a satellite remote sensing-based vegetation index, the Normalized Difference Vegetation Index (NDVI), calculated from the near-infrared and red wavebands. The relationship between NDVI and tabulated Kc values was used to generate Kc maps, and ETc maps were developed by multiplying the Kc maps by reference evapotranspiration (ETr). Daily ETc maps helped to explain the variability of crop water use during the growing season. Based on the results we can conclude that ETc maps developed from remotely sensed multispectral vegetation indices are a useful tool for quantifying crop water use at regional and field scales.
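The Kc-NDVI chain described here reduces to a few array operations; the sketch below shows it with NumPy, where the linear Kc-NDVI coefficients and the reflectance arrays are hypothetical stand-ins rather than values from this work.

```python
# NDVI from red/NIR reflectance, a linear Kc-NDVI relation, and daily ETc = Kc * ETr.
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.03, 0.15, (100, 100))        # surface reflectance, red band (placeholder)
nir = rng.uniform(0.25, 0.55, (100, 100))        # surface reflectance, NIR band (placeholder)
etr = 6.2                                        # reference ET for the day, mm/day (placeholder)

ndvi = (nir - red) / (nir + red)

# Hypothetical linear crop-coefficient relation; real coefficients are crop- and region-specific.
kc = np.clip(1.25 * ndvi + 0.10, 0.15, 1.20)

etc = kc * etr                                   # daily crop ET map, mm/day
print(f"NDVI range: {ndvi.min():.2f}-{ndvi.max():.2f}, mean ETc: {etc.mean():.2f} mm/day")
```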
A Discrete Velocity Kinetic Model with Food Metric: Chemotaxis Traveling Waves.
Choi, Sun-Ho; Kim, Yong-Jung
2017-02-01
We introduce a mesoscopic scale chemotaxis model for traveling wave phenomena which is induced by food metric. The organisms of this simplified kinetic model have two discrete velocity modes, [Formula: see text] and a constant tumbling rate. The main feature of the model is that the speed of organisms is constant [Formula: see text] with respect to the food metric, not the Euclidean metric. The uniqueness and the existence of the traveling wave solution of the model are obtained. Unlike the classical logarithmic model case there exist traveling waves under super-linear consumption rates and infinite population pulse-type traveling waves are obtained. Numerical simulations are also provided.
Sample sizes and model comparison metrics for species distribution models
B.B. Hanberry; H.S. He; D.C. Dey
2012-01-01
Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....
Text Authorship Identified Using the Dynamics of Word Co-Occurrence Networks
Akimushkin, Camilo; Amancio, Diego Raphael; Oliveira, Osvaldo Novais
2017-01-01
Automatic identification of authorship in disputed documents has benefited from complex network theory as this approach does not require human expertise or detailed semantic knowledge. Networks modeling entire books can be used to discriminate texts from different sources and understand network growth mechanisms, but only a few studies have probed the suitability of networks in modeling small chunks of text to grasp stylistic features. In this study, we introduce a methodology based on the dynamics of word co-occurrence networks representing written texts to classify a corpus of 80 texts by 8 authors. The texts were divided into sections with equal number of linguistic tokens, from which time series were created for 12 topological metrics. Since 73% of all series were stationary (ARIMA(p, 0, q)) and the remaining were integrable of first order (ARIMA(p, 1, q)), probability distributions could be obtained for the global network metrics. The metrics exhibit bell-shaped non-Gaussian distributions, and therefore distribution moments were used as learning attributes. With an optimized supervised learning procedure based on a nonlinear transformation performed by Isomap, 71 out of 80 texts were correctly classified using the K-nearest neighbors algorithm, i.e. a remarkable 88.75% author matching success rate was achieved. Hence, purely dynamic fluctuations in network metrics can characterize authorship, thus paving the way for a robust description of large texts in terms of small evolving networks. PMID:28125703
Steuer, Jeffrey J.
2010-01-01
It is widely recognized that urbanization can affect ecological conditions in aquatic systems; numerous studies have identified impervious surface cover as an indicator of urban intensity and as an index of development at the watershed, regional, and national scale. Watershed percent imperviousness, a commonly understood urban metric, was used as the basis for a generalized watershed disturbance metric that, when applied in conjunction with weighted percent agriculture and percent grassland, predicted stream biotic conditions based on Ephemeroptera, Plecoptera, and Trichoptera (EPT) richness across a wide range of environmental settings. Data were collected in streams that encompassed a wide range of watershed area (4.4-1,714 km²), precipitation (38-204 cm/yr), and elevation (31-2,024 m) conditions. Nevertheless, the simple 3-landcover disturbance metric accounted for 58% of the variability in EPT richness based on the 261 nationwide sites. On the metropolitan area scale, relationship r ranged from 0.04 to 0.74. At disturbance values 15. Future work may incorporate watershed management practices within the disturbance metric, further increasing the management applicability of the relation. Such relations developed on a regional or metropolitan area scale are likely to be stronger than geographically generalized models, as found in these EPT richness relations. However, broad spatial models are able to provide much needed understanding in unmonitored areas and provide initial guidance for stream potential.
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2017-04-01
Machine learning (ML) is considered to be a promising approach to hydrological processes forecasting. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods, models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, among which 9 are the ML methods. 12 simulation experiments are performed, while each of them uses 2 000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics, that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that there is not a uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts compared to simpler methods. It is pointed out that the ML methods do not differ dramatically from the stochastic methods, while it is interesting that the NN, RF and SVM algorithms used in this study offer potentially very good performance in terms of accuracy. It should be noted that, although this study focuses on hydrological processes, the results are of general scientific interest. Another important point in this study is the use of several methods and metrics. Using fewer methods and fewer metrics would have led to a very different overall picture, particularly if those fewer metrics corresponded to fewer criteria. For this reason, we consider that the proposed methodology is appropriate for the evaluation of forecasting methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Lu, Peiyun
2015-02-15
Purpose: Task Group 204 introduced effective diameter (ED) as the patient size metric used to correlate size-specific dose estimates. However, this size metric fails to account for patient attenuation properties and has been suggested to be replaced by an attenuation-based size metric, water equivalent diameter (DW). The purpose of this study is to investigate different size metrics, effective diameter and water equivalent diameter, in combination with regional descriptions of scanner output to establish the most appropriate size metric to be used as a predictor for organ dose in tube current modulated CT exams. Methods: 101 thoracic and 82 abdomen/pelvis scans from clinically indicated CT exams were collected retrospectively from a multidetector row CT (Sensation 64, Siemens Healthcare) with Institutional Review Board approval to generate voxelized patient models. Fully irradiated organs (lung and breasts in thoracic scans and liver, kidneys, and spleen in abdominal scans) were segmented and used as tally regions in Monte Carlo simulations for reporting organ dose. Along with image data, raw projection data were collected to obtain tube current information for simulating tube current modulation scans using Monte Carlo methods. Additionally, previously described patient size metrics [ED, DW, and approximated water equivalent diameter (DWa)] were calculated for each patient and reported in three different ways: a single value averaged over the entire scan, a single value averaged over the region of interest, and a single value from a location in the middle of the scan volume. Organ doses were normalized by an appropriate mAs-weighted CTDIvol to reflect regional variation of tube current. Linear regression analysis was used to evaluate the correlations between normalized organ doses and each size metric. Results: For the abdominal organs, the correlations between normalized organ dose and size metric were overall slightly higher for all three differently (global, regional, and middle slice) reported DW and DWa than they were for ED, but the differences were not statistically significant. However, for lung dose, computed correlations using water equivalent diameter calculated in the middle of the image data (DW,middle) and averaged over the low-attenuating region of lung (DW,regional) were statistically significantly higher than correlations of normalized lung dose with ED. Conclusions: To conclude, effective diameter and water equivalent diameter are very similar in abdominal regions; however, their difference becomes noticeable in lungs. Water equivalent diameter, specifically reported as a regional average and from the middle of the scan volume, was shown to be a better predictor of lung dose. Therefore, an attenuation-based size metric (water equivalent diameter) is recommended because it is more robust across different anatomic regions. Additionally, it was observed that the regional size metric reported as a single value averaged over a region of interest and the size metric calculated from a single slice/image chosen from the middle of the scan volume are highly correlated for these specific patient models and scan types.
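Water equivalent diameter can be computed directly from a CT image's Hounsfield units; the sketch below follows the commonly used formulation (water-equivalent area from the attenuation inside the patient boundary, then the diameter of the equivalent circle), with a synthetic slice and a simple threshold-based patient mask standing in for real data.

```python
# Water equivalent diameter (DW) of a single CT slice from its Hounsfield units.
import numpy as np

def water_equivalent_diameter(hu_slice, pixel_spacing_mm, air_threshold_hu=-300):
    """DW in cm: water-equivalent area of the slice converted to an equivalent circle."""
    mask = hu_slice > air_threshold_hu                      # crude patient boundary (assumption)
    pixel_area_cm2 = (pixel_spacing_mm[0] / 10.0) * (pixel_spacing_mm[1] / 10.0)
    # Water-equivalent area: sum over patient pixels of (HU/1000 + 1) * pixel area.
    a_w = np.sum(hu_slice[mask] / 1000.0 + 1.0) * pixel_area_cm2
    return 2.0 * np.sqrt(a_w / np.pi)

# Synthetic slice: a roughly water-equivalent disc (~21 cm across) in an air background.
n, spacing = 512, (0.7, 0.7)                                # pixels, pixel spacing in mm
yy, xx = np.mgrid[:n, :n]
disc = (xx - n / 2) ** 2 + (yy - n / 2) ** 2 <= 150 ** 2
hu = np.full((n, n), -1000.0)
hu[disc] = 20.0                                             # soft-tissue-like HU

print(f"DW = {water_equivalent_diameter(hu, spacing):.1f} cm")
```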
Evaluation of Enhanced Risk Monitors for Use on Advanced Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramuhalli, Pradeep; Veeramany, Arun; Bonebrake, Christopher A.
This study provides an overview of the methodology for integrating time-dependent failure probabilities into nuclear power reactor risk monitors. This prototypic enhanced risk monitor (ERM) methodology was evaluated using a hypothetical probabilistic risk assessment (PRA) model, generated using a simplified design of a liquid-metal-cooled advanced reactor (AR). Component failure data from an industry compilation of failures of components similar to those in the simplified AR model were used to initialize the PRA model. Core damage frequency (CDF) over time was computed and analyzed. In addition, a study on alternative risk metrics for ARs was conducted. Risk metrics that quantify the normalized cost of repairs, replacements, or other operations and management (O&M) actions were defined and used, along with an economic model, to compute the likely economic risk of future actions such as deferred maintenance based on the anticipated change in CDF due to current component condition and future anticipated degradation. Such integration of conventional-risk metrics with alternate-risk metrics provides a convenient mechanism for assessing the impact of O&M decisions on the safety and economics of the plant. It is expected that, when integrated with supervisory control algorithms, such integrated-risk monitors will provide a mechanism for real-time control decision-making that ensures safety margins are maintained while operating the plant in an economically viable manner.
Extrapolation of Functions of Many Variables by Means of Metric Analysis
NASA Astrophysics Data System (ADS)
Kryanev, Alexandr; Ivanov, Victor; Romanova, Anastasiya; Sevastianov, Leonid; Udumyan, David
2018-02-01
The paper considers the problem of extrapolating functions of several variables. It is assumed that the values of a function of m variables are given at a finite number of points in some domain D of the m-dimensional space, and the value of the function must be restored at points outside the domain D. The paper proposes a fundamentally new method for extrapolating functions of several variables, built on the interpolation scheme of metric analysis. The proposed scheme consists of two stages. In the first stage, using metric analysis, the function is interpolated at points of the domain D lying on the segment of the straight line connecting the center of the domain D with the point M at which the value of the function is to be restored. In the second stage, based on an autoregression model and metric analysis, the function values are predicted along this straight-line segment beyond the domain D up to the point M. The presented numerical example demonstrates the efficiency of the method under consideration.
Metric Ranking of Invariant Networks with Belief Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Changxia; Ge, Yong; Song, Qinbao
The management of large-scale distributed information systems relies on the effective use and modeling of monitoring data collected at various points in the distributed information systems. A promising approach is to discover invariant relationships among the monitoring data and generate invariant networks, where a node is a monitoring data source (metric) and a link indicates an invariant relationship between two monitoring data. Such an invariant network representation can help system experts to localize and diagnose the system faults by examining those broken invariant relationships and their related metrics, because system faults usually propagate among the monitoring data and eventually lead to some broken invariant relationships. However, at one time, there are usually a lot of broken links (invariant relationships) within an invariant network. Without proper guidance, it is difficult for system experts to manually inspect this large number of broken links. Thus, a critical challenge is how to effectively and efficiently rank metrics (nodes) of invariant networks according to the anomaly levels of metrics. The ranked list of metrics will provide system experts with useful guidance for them to localize and diagnose the system faults. To this end, we propose to model the nodes and the broken links as a Markov Random Field (MRF), and develop an iteration algorithm to infer the anomaly of each node based on belief propagation (BP). Finally, we validate the proposed algorithm on both real-world and synthetic data sets to illustrate its effectiveness.
Distributed Space Mission Design for Earth Observation Using Model-Based Performance Evaluation
NASA Technical Reports Server (NTRS)
Nag, Sreeja; LeMoigne-Stewart, Jacqueline; Cervantes, Ben; DeWeck, Oliver
2015-01-01
Distributed Space Missions (DSMs) are gaining momentum in their application to earth observation missions owing to their unique ability to increase observation sampling in multiple dimensions. DSM design is a complex problem with many design variables, multiple objectives determining performance and cost, and emergent, often unexpected, behaviors. There are very few open-access tools available to explore the tradespace of variables, minimize cost and maximize performance for pre-defined science goals, and thereby select the most optimal design. This paper presents a software tool that can generate multiple DSM architectures based on pre-defined design variable ranges and size those architectures in terms of pre-defined science and cost metrics. The tool will help a user select Pareto optimal DSM designs based on design of experiments techniques. The tool will be applied to some earth observation examples to demonstrate its applicability in making some key decisions between different performance metrics and cost metrics early in the design lifecycle.
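Selecting Pareto-optimal designs from an enumerated tradespace is a small computation; the sketch below keeps the architectures that are non-dominated when minimizing cost and maximizing a science performance score, using made-up (cost, performance) pairs as placeholders.

```python
# Extract Pareto-optimal (non-dominated) designs: minimize cost, maximize performance.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical enumerated architectures: column 0 = cost ($M), column 1 = science score.
designs = np.column_stack([rng.uniform(50, 500, 200), rng.uniform(0, 1, 200)])

def pareto_mask(points):
    """True for designs not dominated by any other (lower-or-equal cost and higher-or-equal score,
    with at least one strict improvement)."""
    mask = np.ones(len(points), dtype=bool)
    for i, (cost_i, perf_i) in enumerate(points):
        dominated = (points[:, 0] <= cost_i) & (points[:, 1] >= perf_i) & \
                    ((points[:, 0] < cost_i) | (points[:, 1] > perf_i))
        mask[i] = not dominated.any()
    return mask

front = designs[pareto_mask(designs)]
front = front[np.argsort(front[:, 0])]           # sort the front by cost for readability
print(f"{len(front)} Pareto-optimal designs out of {len(designs)}")
print(front[:5])
```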
First International Diagnosis Competition - DXC'09
NASA Technical Reports Server (NTRS)
Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Kuhn, Lukas; de Kleer, Johan; van Gemund, Arjan; Feldman, Alexander
2009-01-01
A framework to compare and evaluate diagnosis algorithms (DAs) has been created jointly by NASA Ames Research Center and PARC. In this paper, we present the first concrete implementation of this framework as a competition called DXC 09. The goal of this competition was to evaluate and compare DAs in a common platform and to determine a winner based on diagnosis results. 12 DAs (model-based and otherwise) competed in this first year of the competition in 3 tracks that included industrial and synthetic systems. Specifically, the participants provided algorithms that communicated with the run-time architecture to receive scenario data and return diagnostic results. These algorithms were run on extended scenario data sets (different from sample set) to compute a set of pre-defined metrics. A ranking scheme based on weighted metrics was used to declare winners. This paper presents the systems used in DXC 09, description of faults and data sets, a listing of participating DAs, the metrics and results computed from running the DAs, and a superficial analysis of the results.
NASA Astrophysics Data System (ADS)
Zhang, Lulu; Liu, Jingling; Li, Yi
2015-03-01
The influence of spatial differences, which are caused by different anthropogenic disturbances, and of temporal changes, which are caused by natural conditions, on macroinvertebrate and periphyton communities in Baiyangdian Lake was compared. Periphyton and macrobenthos assemblage samples were collected simultaneously on four occasions during 2009 and 2010. Based on the physical and chemical attributes of the water and sediment, the 8 sampling sites could be divided into 5 habitat types using cluster analysis. According to coefficient of variation (CV) analysis, three primary conclusions can be drawn: (1) the metrics of the Hilsenhoff Biotic Index (HBI), Percent Tolerant Taxa (PTT), Percent Dominant Taxon (PDT), and community loss index (CLI), based on macroinvertebrates, and the metrics of algal density (AD), the proportion of chlorophyta (CHL), and the proportion of cyanophyta (CYA), based on periphyton, were mostly constant throughout our study; (2) in terms of spatial variation, the CV values of the macroinvertebrate-based metrics were lower than those of the periphyton-based metrics, which may be caused by the effects of changes in environmental factors, whereas in terms of temporal variation the CV values of the macroinvertebrate-based metrics were higher than those of the periphyton-based metrics, which may be linked to the influences of phenology and life-history patterns of the macroinvertebrate individuals; and (3) the CV values for the functional-based metrics were higher than those for the structural-based metrics. Therefore, spatial and temporal variation in the metrics should be considered when applying these biometrics for assessment.
Improved Mental Acuity Forecasting with an Individualized Quantitative Sleep Model.
Winslow, Brent D; Nguyen, Nam; Venta, Kimberly E
2017-01-01
Sleep impairment significantly alters human brain structure and cognitive function, but available evidence suggests that adults in developed nations are sleeping less. A growing body of research has sought to use sleep to forecast cognitive performance by modeling the relationship between the two, but has generally focused on vigilance rather than other cognitive constructs affected by sleep, such as reaction time, executive function, and working memory. Previous modeling efforts have also utilized subjective, self-reported sleep durations and were restricted to laboratory environments. In the current effort, we addressed these limitations by employing wearable systems and mobile applications to gather objective sleep information, assess multi-construct cognitive performance, and model/predict changes to mental acuity. Thirty participants were recruited for participation in the study, which lasted 1 week. Using the Fitbit Charge HR and a mobile version of the automated neuropsychological assessment metric called CogGauge, we gathered a series of features and utilized the unified model of performance to predict mental acuity based on sleep records. Our results suggest that individuals poorly rate their sleep duration, supporting the need for objective sleep metrics to model circadian changes to mental acuity. Participant compliance in using the wearable throughout the week and responding to the CogGauge assessments was 80%. Specific biases were identified in temporal metrics across mobile devices and operating systems and were excluded from the mental acuity metric development. Individualized prediction of mental acuity consistently outperformed group modeling. This effort indicates the feasibility of creating an individualized, mobile assessment and prediction of mental acuity, compatible with the majority of current mobile devices.
Kritikos, Nikolaos; Tsantili-Kakoulidou, Anna; Loukas, Yannis L; Dotsikas, Yannis
2015-07-17
In the current study, quantitative structure-retention relationships (QSRR) were constructed based on data obtained by an LC-(ESI)-QTOF-MS/MS method for the determination of amino acid analogues, following their derivatization via chloroformate esters. Molecules were derivatized via an n-propyl chloroformate/n-propanol mediated reaction, and the derivatives were acquired through a liquid-liquid extraction procedure. Chromatographic separation is based on gradient elution using methanol/water mixtures from a 70/30% composition to an 85/15% final one, maintaining a constant rate of change. The group of examined molecules was diverse, including mainly α-amino acids, but also β- and γ-amino acids, γ-amino acid analogues, decarboxylated and phosphorylated analogues, and dipeptides. Molecular structures were first described through the use of descriptors, and the projection to latent structures (PLS) method was selected for the construction of the QSRRs, resulting in a total of three PLS models with high cross-validated coefficients of determination Q²Y. Through stratified random sampling procedures, the 57 compounds were split into a training set and a test set. Model creation was based on multiple criteria, including principal component significance and eigenvalue, variable importance, form of residuals, etc. Validation was based on the statistical metrics R²pred, Q²extF2, and Q²extF3 for the test set and Roy's metrics r²m(Av) and r²m(δ), assessing both predictive stability and internal validity. Based on the aforementioned models, simplified equivalents were then created using a multi-linear regression (MLR) method. The MLR models were also validated with the same metrics. The suggested models are considered useful for the estimation of retention times of amino acid analogues for a series of applications. Copyright © 2015 Elsevier B.V. All rights reserved.
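A generic sketch of the PLS/Q²Y part of such a QSRR workflow using scikit-learn, with random placeholder descriptors and retention times; the descriptor set, number of latent variables, and cross-validation scheme used in the paper are not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict

# Hypothetical data: 57 compounds x 30 molecular descriptors, plus retention times.
rng = np.random.default_rng(0)
X = rng.normal(size=(57, 30))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=57)

pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, y, cv=KFold(n_splits=7, shuffle=True, random_state=0)).ravel()

# Cross-validated coefficient of determination Q2Y = 1 - PRESS / TSS
q2y = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"Cross-validated Q2Y = {q2y:.3f}")
```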
Multi-linear model set design based on the nonlinearity measure and H-gap metric.
Shaghaghi, Davood; Fatehi, Alireza; Khaki-Sedigh, Ali
2017-05-01
This paper proposes a model bank selection method for a large class of nonlinear systems with wide operating ranges. In particular, the nonlinearity measure and the H-gap metric are used to provide an effective algorithm for designing a model bank for the system. The proposed model bank is then accompanied by model predictive controllers to design a high-performance advanced process controller. The advantage of this method is the reduction of excessive switching between models and also a reduction of the computational complexity of the controller bank, which can lead to performance improvement of the control system. The effectiveness of the method is verified by simulations as well as experimental studies on a pH neutralization laboratory apparatus, which confirm the efficiency of the proposed algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason
2014-06-01
Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
Climate Data Analytics Workflow Management
NASA Astrophysics Data System (ADS)
Zhang, J.; Lee, S.; Pan, L.; Mattmann, C. A.; Lee, T. J.
2016-12-01
In this project we aim to pave a novel path toward a sustainable building block for Earth science big data analytics and knowledge sharing. Closely studying how Earth scientists conduct data analytics research in their daily work, we have developed a provenance model to record their activities and a technology to automatically generate workflows for scientists from the provenance. On top of it, we have built a prototype data-centric provenance repository and established a PDSW (People, Data, Service, Workflow) knowledge network to support workflow recommendation. To ensure the scalability and performance of the expected recommendation system, we have leveraged the Apache OODT system technology. The community-approved, metrics-based performance evaluation web service will allow a user to select a metric from a list of several community-approved metrics and to evaluate model performance using that metric as well as the reference dataset. This service will facilitate the use of reference datasets that are generated in support of model-data intercomparison projects such as Obs4MIPs and Ana4MIPs. The data-centric repository infrastructure will allow us to capture richer provenance to further facilitate knowledge sharing and scientific collaboration in the Earth science community. This project is part of the Apache incubator CMDA project.
A depth-adjusted ambient distribution approach for setting ...
We compiled and modelled macroinvertebrate assemblage data from samples collected in 1995-2014 from the estuarine portion of the St. Louis River Area of Concern (AOC) of western Lake Superior. Our objective was to create depth-adjusted cutoff values for benthos condition classes (poor, fair, reference) that can be used to plan remediation and restoration actions, and to assess progress toward achieving removal targets for the degraded benthos beneficial use impairment. The relationship between depth and benthos metrics was wedge-shaped. We therefore used 90th percentile quantile regression to define the limiting effect of depth on selected benthos metrics, including taxa richness, percent non-oligochaete individuals, percent Ephemeroptera, Trichoptera, and Odonata individuals, and density of ephemerid mayfly larvae (e.g., Hexagenia). We also created a scaled trimetric index from the first three metrics. We examined gear type (standard vs. petite Ponar sampler), exposure class (derived from fetch), geographic zone of the AOC, and substrate type for confounding effects on the limiting depth. The effect of gear type was minimal. Metric values were generally higher at more exposed locations, but we judged the exposure effect less important for model application than variation among three geographic zones, so we combined data across exposure classes and created separate models for each geographic zone of the AOC. Based on qualitative substrate data for most samples, we
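A small sketch of the 90th-percentile quantile-regression step using statsmodels, on synthetic wedge-shaped data; the variable names and the synthetic depth-richness relationship are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: a benthos metric (e.g., taxa richness) versus sampling depth,
# with a wedge-shaped scatter so that depth limits the upper envelope of the metric.
rng = np.random.default_rng(1)
depth = rng.uniform(0.5, 8.0, 200)
richness = np.maximum(0, 25 - 2.5 * depth + rng.normal(0, 4, 200))
df = pd.DataFrame({"richness": richness, "depth": depth})

# 90th percentile quantile regression defines the limiting effect of depth.
res = smf.quantreg("richness ~ depth", df).fit(q=0.9)
print(res.params)  # intercept and slope of the upper-envelope line
```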
NASA Astrophysics Data System (ADS)
Biess, Armin
2013-01-01
The study of the kinematic and dynamic features of human arm movements provides insights into the computational strategies underlying human motor control. In this paper, a differential-geometric approach to movement control is taken by endowing arm configuration space with different non-Euclidean metric structures to study the predictions of the generalized minimum-jerk (MJ) model in the resulting Riemannian manifold for different types of human arm movements. For each metric space the solution of the generalized MJ model is given by reparametrized geodesic paths. This geodesic model is applied to a variety of motor tasks, ranging from three-dimensional unconstrained movements of a four-degree-of-freedom arm between pointlike targets to constrained movements where the hand location is confined to a surface (e.g., a sphere) or a curve (e.g., an ellipse). For the latter, speed-curvature relations are derived depending on the boundary conditions imposed (periodic or nonperiodic), and the compatibility with the empirical one-third power law is shown. Based on these theoretical studies and recent experimental findings, I argue that geodesics may be an emergent property of the motor system and that the sensorimotor system may shape arm configuration space by learning metric structures through sensorimotor feedback.
Contrast-based sensorless adaptive optics for retinal imaging.
Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew
2015-09-01
Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes.
Large Footprint LiDAR Data Processing for Ground Detection and Biomass Estimation
NASA Astrophysics Data System (ADS)
Zhuang, Wei
Ground detection in large footprint waveform Light Detection And Ranging (LiDAR) data is important in calculating and estimating downstream products, especially in forestry applications. For example, tree heights are calculated as the difference between the ground peak and the first returned signal in a waveform. Forest attributes, such as aboveground biomass, are estimated based on the tree heights. This dissertation investigated new metrics and algorithms for estimating aboveground biomass and extracting ground peak location in large footprint waveform LiDAR data. In the first manuscript, an accurate and computationally efficient algorithm, named the Filtering and Clustering Algorithm (FICA), was developed based on a set of multiscale second derivative filters for automatically detecting the ground peak in a waveform from the Land, Vegetation, and Ice Sensor (LVIS). Compared to existing ground peak identification algorithms, FICA was tested in plots of different land-cover types and showed improved accuracy in ground detection for the vegetation plots and similar accuracy for the developed-area plots. Also, FICA adopted a peak identification strategy rather than following a curve-fitting process, and therefore exhibited improved efficiency. In the second manuscript, an algorithm was developed specifically for shrub waveforms. The algorithm only partially fitted the shrub canopy reflection and detected the ground peak by investigating the residual signal, which was generated by deducting a Gaussian fitting function from the raw waveform. After the deduction, the overlapping ground peak was identified as the local maximum of the residual signal. In addition, an applicability model was built for determining waveforms where the proposed PCF algorithm should be applied. In the third manuscript, a new set of metrics was developed to increase accuracy in biomass estimation models. The metrics were based on the results of Gaussian decomposition. They incorporated both waveform intensity, represented by the area covered by a Gaussian function, and its associated height, which was the centroid of the Gaussian function. By considering signal reflection of different vegetation layers, the developed metrics obtained better estimation accuracy in aboveground biomass when compared to existing metrics. In addition, the newly developed metrics showed strong correlation with other forest structural attributes, such as mean Diameter at Breast Height (DBH) and stem density. In sum, the dissertation investigated various techniques for large footprint waveform LiDAR processing for detecting the ground peak and estimating biomass. The novel techniques developed in this dissertation showed better performance than existing methods or metrics.
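An illustrative stand-in (not FICA itself) for the multiscale second-derivative filtering idea: smooth the waveform with Gaussians of several widths, sum the negative second-derivative responses, and take the last strong response as the candidate ground peak. The scales and the 0.5 threshold are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def ground_peak_index(waveform, scales=(2, 4, 8)):
    """Locate a candidate ground peak by combining smoothed second-derivative
    responses at several scales (sketch only, not the FICA algorithm)."""
    w = np.asarray(waveform, float)
    response = np.zeros_like(w)
    for s in scales:
        # Peaks produce strong negative curvature, so negate the second derivative.
        response += -gaussian_filter1d(w, s, order=2)
    # The ground return is typically the last significant peak in the waveform.
    candidates = np.where(response > 0.5 * response.max())[0]
    return int(candidates[-1]) if candidates.size else int(np.argmax(response))
```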
Task-oriented lossy compression of magnetic resonance images
NASA Astrophysics Data System (ADS)
Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques
1996-04-01
A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.
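The segmentation-similarity idea can be illustrated with a Dice-style overlap score between binary masks; the paper's actual task-oriented metric may be defined differently, so this is only a generic sketch.

```python
import numpy as np

def dice_similarity(seg_a, seg_b):
    """Dice coefficient between two binary segmentation masks (e.g., a radiologist's
    manual segmentation vs. an automated segmentation of a compressed image)."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```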
A priori discretization error metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar
2016-12-01
Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
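A hedged sketch of what an aggregation-induced information-loss measure could look like: the area-weighted absolute deviation of cell-level properties from their HRU means. The paper's subbasin and HRU error metrics are not reproduced exactly; this is an illustrative stand-in.

```python
import numpy as np

def hru_aggregation_error(values, areas, hru_ids):
    """Area-weighted information loss when cell-level properties (e.g., a soil or
    land-cover attribute) are replaced by their HRU means. Illustrative only."""
    values, areas, hru_ids = map(np.asarray, (values, areas, hru_ids))
    err = 0.0
    for hru in np.unique(hru_ids):
        m = hru_ids == hru
        hru_mean = np.average(values[m], weights=areas[m])
        err += np.sum(areas[m] * np.abs(values[m] - hru_mean))
    return err / areas.sum()
```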
Metric half-span model support system
NASA Technical Reports Server (NTRS)
Jackson, C. M., Jr.; Dollyhigh, S. M.; Shaw, D. S. (Inventor)
1982-01-01
A model support system used to support a model in a wind tunnel test section is described. The model comprises a metric, or measured, half-span supported by a nonmetric, or nonmeasured half-span which is connected to a sting support. Moments and forces acting on the metric half-span are measured without interference from the support system during a wind tunnel test.
Cognitive skills assessment during robot-assisted surgery: separating the wheat from the chaff.
Guru, Khurshid A; Esfahani, Ehsan T; Raza, Syed J; Bhat, Rohit; Wang, Katy; Hammond, Yana; Wilding, Gregory; Peabody, James O; Chowriappa, Ashirwad J
2015-01-01
To investigate the utility of cognitive assessment during robot-assisted surgery (RAS) to define skills in terms of cognitive engagement, mental workload, and mental state; while objectively differentiating between novice and expert surgeons. In all, 10 surgeons with varying operative experience were assigned to beginner (BG), combined competent and proficient (CPG), and expert (EG) groups based on the Dreyfus model. The participants performed tasks for basic, intermediate and advanced skills on the da Vinci Surgical System. Participant performance was assessed using both tool-based and cognitive metrics. Tool-based metrics showed significant differences between the BG vs CPG and the BG vs EG, in basic skills. While performing intermediate skills, there were significant differences only on the instrument-to-instrument collisions between the BG vs CPG (2.0 vs 0.2, P = 0.028), and the BG vs EG (2.0 vs 0.1, P = 0.018). There were no significant differences between the CPG and EG for both basic and intermediate skills. However, using cognitive metrics, there were significant differences between all groups for the basic and intermediate skills. In advanced skills, there were no significant differences between the CPG and the EG except time (1116 vs 599.6 s), using tool-based metrics. However, cognitive metrics revealed significant differences between both groups. Cognitive assessment of surgeons may aid in defining levels of expertise performing complex surgical tasks once competence is achieved. Cognitive assessment may be used as an adjunct to the traditional methods for skill assessment during RAS. © 2014 The Authors. BJU International © 2014 BJU International.
Advanced Life Support Research and Technology Development Metric: Fiscal Year 2003
NASA Technical Reports Server (NTRS)
Hanford, A. J.
2004-01-01
This document provides the official calculation of the Advanced Life Support (ALS) Research and Technology Development Metric (the Metric) for Fiscal Year 2003. As such, the values herein are primarily based on Systems Integration, Modeling, and Analysis (SIMA) Element approved software tools or reviewed and approved reference documents. The Metric is one of several measures employed by the National Aeronautics and Space Administration (NASA) to assess the Agency s progress as mandated by the United States Congress and the Office of Management and Budget. Because any measure must have a reference point, whether explicitly defined or implied, the Metric is a comparison between a selected ALS Project life support system and an equivalently detailed life support system using technology from the Environmental Control and Life Support System (ECLSS) for the International Space Station (ISS). More specifically, the Metric is the ratio defined by the equivalent system mass (ESM) of a life support system for a specific mission using the ISS ECLSS technologies divided by the ESM for an equivalent life support system using the best ALS technologies. As defined, the Metric should increase in value as the ALS technologies become lighter, less power intensive, and require less volume. For Fiscal Year 2003, the Advanced Life Support Research and Technology Development Metric value is 1.47 for an Orbiting Research Facility and 1.36 for an Independent Exploration Mission.
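The Metric itself is a simple ratio of equivalent system masses, as defined above; the sketch below only restates that definition, and the input ESM values are hypothetical, not the FY2003 numbers.

```python
def als_metric(esm_iss_eclss, esm_als):
    """ALS R&TD Metric: ESM of a life support system using ISS ECLSS technologies
    divided by ESM of an equivalent system using the best ALS technologies."""
    return esm_iss_eclss / esm_als

# Illustrative inputs only (kg-equivalent): a ratio > 1 means the ALS technologies
# are lighter, less power-intensive, and require less volume than the ISS baseline.
print(als_metric(66000.0, 45000.0))  # ~1.47 with these made-up numbers
```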
Liu, Feng; Tan, Chang; Lei, Pi-Feng
2014-11-01
Taking the Wugang forest farm in Xuefeng Mountain as the research object, and using airborne light detection and ranging (LiDAR) data acquired under leaf-on conditions together with field data from concomitant plots, this paper assessed the ability of LiDAR technology to estimate the aboveground biomass of mid-subtropical forest. A semi-automated individual-tree LiDAR point cloud segmentation was obtained by using conditional random fields and optimization methods. Spatial structure, waveform characteristics, and topography were calculated as LiDAR metrics from the segmented objects. Statistical models were then built between aboveground biomass from the field data and these LiDAR metrics. The individual tree recognition rates were 93%, 86% and 60% for coniferous, broadleaf and mixed forests, respectively. The adjusted coefficients of determination (R²adj) and the root mean squared errors (RMSE) for the three types of forest were 0.83, 0.81 and 0.74, and 28.22, 29.79 and 32.31 t·hm⁻², respectively. The estimation capability of the model based on canopy geometric volume, tree percentile height, slope and waveform characteristics was much better than that of a traditional regression model based on tree height. Therefore, LiDAR metrics from individual trees could facilitate better performance in biomass estimation.
Synchronization of multi-agent systems with metric-topological interactions.
Wang, Lin; Chen, Guanrong
2016-09-01
A hybrid multi-agent systems model integrating the advantages of both metric interaction and topological interaction rules, called the metric-topological model, is developed. This model describes planar motions of mobile agents, where each agent can interact with all agents within a circle of a constant radius and can furthermore interact with some more distant agents so as to reach a pre-assigned number of neighbors, if needed. Some sufficient conditions, imposed only on system parameters and agent initial states, are presented which ensure synchronization of the whole group of agents. The analysis reveals the intrinsic relationships among the interaction range, the speed, the initial heading, and the density of the group. Moreover, robustness against variations of interaction range, density, and speed is investigated by comparing the motion patterns and performances of the hybrid metric-topological interaction model with the conventional metric-only and topological-only interaction models. In practically all cases, the hybrid metric-topological interaction model has the best performance in the sense of achieving the highest frequency of synchronization, the fastest convergence rate, and the smallest heading difference.
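The neighbor rule described above (all agents within a fixed radius, topped up with the nearest more-distant agents until a pre-assigned neighbor count is reached) can be sketched directly; the parameter names and the example values are illustrative.

```python
import numpy as np

def metric_topological_neighbors(positions, i, radius, k_min):
    """Neighbors of agent i: all agents within `radius`, topped up with the nearest
    more-distant agents until at least `k_min` neighbors are reached."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    d[i] = np.inf                        # an agent is not its own neighbor
    order = np.argsort(d)
    within = [int(j) for j in order if d[j] <= radius]
    return within if len(within) >= k_min else [int(j) for j in order[:k_min]]

pos = np.random.default_rng(0).uniform(0, 10, size=(20, 2))
print(metric_topological_neighbors(pos, i=0, radius=2.0, k_min=5))
```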
Information-theoretic model comparison unifies saliency metrics
Kümmerer, Matthias; Wallis, Thomas S. A.; Bethge, Matthias
2015-01-01
Learning the properties of an image associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. There is a large literature on quantitative eye movement models that seeks to predict fixations from images (sometimes termed “saliency” prediction). A major problem known to the field is that existing model comparison metrics give inconsistent results, causing confusion. We argue that the primary reason for these inconsistencies is because different metrics and models use different definitions of what a “saliency map” entails. For example, some metrics expect a model to account for image-independent central fixation bias whereas others will penalize a model that does. Here we bring saliency evaluation into the domain of information by framing fixation prediction models probabilistically and calculating information gain. We jointly optimize the scale, the center bias, and spatial blurring of all models within this framework. Evaluating existing metrics on these rephrased models produces almost perfect agreement in model rankings across the metrics. Model performance is separated from center bias and spatial blurring, avoiding the confounding of these factors in model comparison. We additionally provide a method to show where and how models fail to capture information in the fixations on the pixel level. These methods are readily extended to spatiotemporal models of fixation scanpaths, and we provide a software package to facilitate their use. PMID:26655340
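A minimal sketch of the information-gain evaluation described above, assuming the model and baseline are supplied as per-pixel log-densities (natural log) and the fixations as integer pixel coordinates; the paper's joint optimization of scale, center bias, and spatial blurring is not reproduced here.

```python
import numpy as np

def information_gain(log_density_model, log_density_baseline, fixations):
    """Average information gain (bits per fixation) of a probabilistic fixation model
    over a baseline (e.g., center-bias) model, evaluated at the fixated pixels."""
    ys, xs = fixations[:, 0], fixations[:, 1]
    gain_nats = log_density_model[ys, xs] - log_density_baseline[ys, xs]
    return gain_nats.mean() / np.log(2)  # convert nats to bits
```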
A methodology to enable rapid evaluation of aviation environmental impacts and aircraft technologies
NASA Astrophysics Data System (ADS)
Becker, Keith Frederick
Commercial aviation has become an integral part of modern society and enables unprecedented global connectivity through rapid business, cultural, and personal exchange. In the decades following World War II, passenger travel through commercial aviation quickly grew at a rate of roughly 8% per year globally. The FAA's most recent Terminal Area Forecast predicts growth to continue at a rate of 2.5% domestically, and the market outlooks produced by Airbus and Boeing generally predict growth to continue at a rate of 5% per year globally over the next several decades, which translates into a need for up to 30,000 new aircraft produced by 2025. With such large numbers of new aircraft potentially entering service, any negative consequences of commercial aviation must undergo examination and mitigation by governing bodies so that growth may still be achieved. Options to grow while simultaneously reducing environmental impact include evolution of the commercial fleet through changes in operations, aircraft mix, and technology adoption. Methods to rapidly evaluate fleet environmental metrics are needed to enable decision makers to quickly compare the impact of different scenarios and weigh the impact of multiple policy options. As the fleet evolves, interdependencies may emerge in the form of tradeoffs between improvements in different environmental metrics as new technologies are brought into service. In order to include the impacts of these interdependencies on fleet evolution, physics-based modeling is required at the appropriate level of fidelity. Evaluation of environmental metrics in a physics-based manner can be done at the individual aircraft level, but will then not capture aggregate fleet metrics. Conversely, evaluation of environmental metrics at the fleet level is already being done for aircraft in the commercial fleet, but current tools and approaches require enhancement because they capture technology implementation through post-processing, which does not capture physical interdependencies that may arise at the aircraft level. The goal of the work conducted here was to develop a methodology for surrogate fleet approaches that leverage the capability of physics-based aircraft models, together with connectivity to fleet-level analysis tools, to enable rapid evaluation of fuel burn and emissions metrics. Instead of requiring development of an individual physics-based model for each vehicle in the fleet, the surrogate fleet approaches seek to reduce the number of such models needed while still accurately capturing the performance of the fleet. By reducing the number of models, both the development time and the execution time required to generate fleet-level results may also be reduced. The initial steps leading to surrogate fleet formulation were a characterization of the commercial fleet into groups based on capability, followed by the selection of a reference vehicle model and a reference set of operations for each group. Next, three potential surrogate fleet approaches were formulated. These approaches include the parametric correction factor approach, in which the results of a reference vehicle model are corrected to match the aggregate results of each group; the average replacement approach, in which a new vehicle model is developed to generate the aggregate results of each group; and the best-in-class replacement approach, in which results for a reference vehicle are simply substituted for the entire group.
Once candidate surrogate fleet approaches were developed, each was applied to and evaluated over the set of reference operations. Each approach was then evaluated for its ability to model variations in operations. Finally, the ability of each surrogate fleet approach to capture the implementation of different technology suites, along with the corresponding interdependencies between fuel burn and emissions, was evaluated using the concept of a virtual fleet to simulate the technology response of multiple aircraft families. The results of experimentation led to a down-selection of the best approach for rapidly and accurately characterizing the performance of the commercial fleet within the accuracy bounds accepted by current fleet evaluation methods. The parametric correction factor and average replacement approaches were shown to be successful in capturing reference fleet results as well as fleet performance with variations in operations. The best-in-class replacement approach was shown to be unacceptable as a model for the larger fleet in each of the scenarios tested. Finally, the average replacement approach was the only one that was successful in capturing the impact of technologies on a larger fleet. These results are meaningful because they show that it is possible to calculate the fuel burn and emissions of a larger fleet with a reduced number of physics-based models within acceptable bounds of accuracy. At the same time, the physics-based modeling also provides the ability to evaluate the impact of technologies on fleet-level fuel burn and emissions metrics. The value of such a capability is that multiple future fleet scenarios involving changes in both aircraft operations and technology levels may now be rapidly evaluated to inform and equip policy makers regarding the implications of changes on fleet-level metrics.
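A toy sketch of the parametric correction factor approach described above: per-metric factors map a single reference-vehicle result onto the group aggregate and are then reused for new scenarios. The metric names and numbers are hypothetical, and real factors would come from physics-based models flown over the reference operations.

```python
def parametric_correction_factors(group_aggregate, reference_results):
    """Per-metric correction factors that scale a reference-vehicle result
    to the aggregate results of its capability group (illustrative sketch)."""
    return {k: group_aggregate[k] / reference_results[k] for k in group_aggregate}

def corrected_fleet_results(reference_new_scenario, factors):
    """Apply the stored factors to reference-vehicle results for a new scenario."""
    return {k: reference_new_scenario[k] * factors[k] for k in factors}

# Hypothetical numbers for one capability group (annual fuel burn and NOx, in kg):
factors = parametric_correction_factors({"fuel": 9.2e9, "nox": 4.1e7},
                                         {"fuel": 1.1e8, "nox": 5.0e5})
print(corrected_fleet_results({"fuel": 1.0e8, "nox": 4.6e5}, factors))
```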
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kipritidis, John, E-mail: john.kipritidis@sydney.edu.au; Keall, Paul J.; Siva, Shankar
Purpose: CT ventilation imaging is a novel functional lung imaging modality based on deformable image registration. The authors present the first validation study of CT ventilation using positron emission tomography with ⁶⁸Ga-labeled nanoparticles (PET-Galligas). The authors quantify this agreement for different CT ventilation metrics and PET reconstruction parameters. Methods: PET-Galligas ventilation scans were acquired for 12 lung cancer patients using a four-dimensional (4D) PET/CT scanner. CT ventilation images were then produced by applying B-spline deformable image registration between the respiratory correlated phases of the 4D-CT. The authors test four ventilation metrics, two existing and two modified. The two existing metrics model mechanical ventilation (alveolar air-flow) based on Hounsfield unit (HU) change (V_HU) or the Jacobian determinant of deformation (V_Jac). The two modified metrics incorporate a voxel-wise tissue-density scaling (ρV_HU and ρV_Jac) and were hypothesized to better model the physiological ventilation. In order to assess the impact of PET image quality, comparisons were performed using both standard and respiratory-gated PET images, with the former exhibiting better signal. Different median filtering kernels (σ_m = 0 or 3 mm) were also applied to all images. As in previous studies, similarity metrics included the Spearman correlation coefficient r within the segmented lung volumes, and the Dice coefficient d_20 for the (0-20)th functional percentile volumes. Results: The best agreement between CT and PET ventilation was obtained comparing standard PET images to the density-scaled HU metric (ρV_HU) with σ_m = 3 mm. This leads to correlation values in the ranges 0.22 ⩽ r ⩽ 0.76 and 0.38 ⩽ d_20 ⩽ 0.68, with means of r = 0.42 ± 0.16 and d_20 = 0.52 ± 0.09 averaged over the 12 patients. Compared to Jacobian-based metrics, HU-based metrics lead to statistically significant improvements in mean r and mean d_20 (p < 0.05), with density-scaled metrics also showing higher mean r than the unscaled versions (p < 0.02). Mean r and mean d_20 were also sensitive to image quality, with statistically significant improvements using standard (as opposed to gated) PET images and with application of median filtering. Conclusions: The use of modified CT ventilation metrics, in conjunction with PET-Galligas and careful application of image filtering, has resulted in improved correlation compared to earlier studies using nuclear medicine ventilation. However, CT ventilation and PET-Galligas do not always provide the same functional information. The authors have demonstrated that the agreement can improve for CT ventilation metrics incorporating a tissue density scaling, and also with increasing PET image quality. CT ventilation imaging has clear potential for imaging regional air volume change in the lung, and further development is warranted.
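For orientation, a generic sketch of a Jacobian-determinant ventilation surrogate (local volume change ≈ det(J) − 1) computed from a displacement field, assuming unit voxel spacing; the paper's V_HU, V_Jac, and density-scaled variants, and the underlying 4D-CT registration, are not reproduced here.

```python
import numpy as np

def jacobian_ventilation(disp):
    """Jacobian-determinant ventilation surrogate from a 3-D displacement field
    disp with shape (3, nz, ny, nx); returns det(J) - 1 per voxel (unit spacing assumed)."""
    grads = np.array([np.gradient(disp[i]) for i in range(3)])   # grads[i, k] = d u_i / d x_k
    J = grads + np.eye(3)[:, :, None, None, None]                # deformation gradient
    detJ = (J[0, 0] * (J[1, 1] * J[2, 2] - J[1, 2] * J[2, 1])
            - J[0, 1] * (J[1, 0] * J[2, 2] - J[1, 2] * J[2, 0])
            + J[0, 2] * (J[1, 0] * J[2, 1] - J[1, 1] * J[2, 0]))
    return detJ - 1.0

# Synthetic displacement field for demonstration only.
disp = np.random.default_rng(0).normal(scale=0.05, size=(3, 20, 20, 20))
print(jacobian_ventilation(disp).mean())
```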
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villani, Mattia, E-mail: villani@fi.infn.it
2014-06-01
We consider the Goode-Wainwright representation of the Szekeres cosmological models and calculate the Taylor expansion of the luminosity distance in order to study the effects of the inhomogeneities on cosmographic parameters. Without making a particular choice for the arbitrary functions defining the metric, we Taylor expand up to the second order in redshift for Family I and up to the third order for Family II Szekeres metrics, under the hypothesis, based on observation, that local structure formation is over. In a conservative fashion, we also allow for the existence of a non-null cosmological constant.
NASA Astrophysics Data System (ADS)
Simon, E.; Nowicki, S.; Neumann, T.; Tyahla, L.; Saba, J. L.; Guerber, J. R.; Bonin, J. A.; DiMarzio, J. P.
2017-12-01
The Cryosphere model Comparison tool (CmCt) is a web-based ice sheet model validation tool that is being developed by NASA to facilitate direct comparison between observational data and various ice sheet models. The CmCt allows the user to take advantage of several decades' worth of observations from Greenland and Antarctica. Currently, the CmCt can be used to compare ice sheet models provided by the user with remotely sensed satellite data from ICESat (Ice, Cloud, and land Elevation Satellite) laser altimetry, the GRACE (Gravity Recovery and Climate Experiment) satellite, and radar altimetry (ERS-1, ERS-2, and Envisat). One or more models can be uploaded through the CmCt website and compared with observational data, or compared to each other or to other models. The CmCt calculates statistics on the differences between the model and observations, and other quantitative and qualitative metrics, which can be used to evaluate the different model simulations against the observations. The qualitative metrics consist of a range of visual outputs, and the quantitative metrics consist of several whole-ice-sheet scalar values that can be used to assign an overall score to a particular simulation. The comparison results from CmCt are useful in quantifying improvements within a specific model (or within a class of models) as a result of differences in model dynamics (e.g., shallow vs. higher-order dynamics approximations), model physics (e.g., representations of ice sheet rheological or basal processes), or model resolution (mesh resolution and/or changes in the spatial resolution of input datasets). The framework and metrics could also be used as a model-to-model intercomparison tool, simply by substituting the outputs of another model for the observational datasets. Future versions of the tool will include comparisons with other datasets that are of interest to the modeling community, such as ice velocity, ice thickness, and surface mass balance.
Sediment transport-based metrics of wetland stability
Ganju, Neil K.; Kirwan, Matthew L.; Dickhudt, Patrick J.; Guntenspergen, Glenn R.; Cahoon, Donald R.; Kroeger, Kevin D.
2015-01-01
Despite the importance of sediment availability on wetland stability, vulnerability assessments seldom consider spatiotemporal variability of sediment transport. Models predict that the maximum rate of sea level rise a marsh can survive is proportional to suspended sediment concentration (SSC) and accretion. In contrast, we find that SSC and accretion are higher in an unstable marsh than in an adjacent stable marsh, suggesting that these metrics cannot describe wetland vulnerability. Therefore, we propose the flood/ebb SSC differential and organic-inorganic suspended sediment ratio as better vulnerability metrics. The unstable marsh favors sediment export (18 mg L−1 higher on ebb tides), while the stable marsh imports sediment (12 mg L−1 higher on flood tides). The organic-inorganic SSC ratio is 84% higher in the unstable marsh, and stable isotopes indicate a source consistent with marsh-derived material. These simple metrics scale with sediment fluxes, integrate spatiotemporal variability, and indicate sediment sources.
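The two proposed vulnerability metrics reduce to simple statistics of tide-sorted suspended-sediment records; a short sketch under the assumption that flood/ebb and organic/inorganic concentrations have already been separated from the time series.

```python
import numpy as np

def flood_ebb_ssc_differential(ssc_flood, ssc_ebb):
    """Mean flood-tide SSC minus mean ebb-tide SSC (mg/L); positive values suggest
    net sediment import (stable marsh), negative values suggest export (unstable)."""
    return np.mean(ssc_flood) - np.mean(ssc_ebb)

def organic_inorganic_ratio(ssc_organic, ssc_inorganic):
    """Ratio of organic to inorganic suspended-sediment concentration; higher values
    point toward marsh-derived material in suspension."""
    return np.mean(ssc_organic) / np.mean(ssc_inorganic)
```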
McFarland, Tiffany Marie; van Riper, Charles
2013-01-01
Successful management practices of avian populations depend on understanding relationships between birds and their habitat, especially in rare habitats, such as riparian areas of the desert Southwest. Remote-sensing technology has become popular in habitat modeling, but most of these models focus on single species, leaving their applicability to understanding broader community structure and function largely untested. We investigated the usefulness of two Normalized Difference Vegetation Index (NDVI) habitat models to model avian abundance and species richness on the upper San Pedro River in southeastern Arizona. Although NDVI was positively correlated with our bird metrics, the amount of explained variation was low. We then investigated the addition of vegetation metrics and other remote-sensing metrics to improve our models. Although both vegetation metrics and remotely sensed metrics increased the power of our models, the overall explained variation was still low, suggesting that general avian community structure may be too complex for NDVI models.
EOID System Model Validation, Metrics, and Synthetic Clutter Generation
2003-09-30
Our long-term goal is to accurately predict the capability of the current generation of laser-based underwater imaging sensors to perform Electro-Optic Identification (EOID) against relevant targets in a variety of realistic environmental conditions. The models will predict the impact of
Jovicich, Jorge; Marizzoni, Moira; Bosch, Beatriz; Bartrés-Faz, David; Arnold, Jennifer; Benninghoff, Jens; Wiltfang, Jens; Roccatagliata, Luca; Picco, Agnese; Nobili, Flavio; Blin, Oliver; Bombois, Stephanie; Lopes, Renaud; Bordet, Régis; Chanoine, Valérie; Ranjeva, Jean-Philippe; Didic, Mira; Gros-Dagnac, Hélène; Payoux, Pierre; Zoccatelli, Giada; Alessandrini, Franco; Beltramello, Alberto; Bargalló, Núria; Ferretti, Antonio; Caulo, Massimo; Aiello, Marco; Ragucci, Monica; Soricelli, Andrea; Salvadori, Nicola; Tarducci, Roberto; Floridi, Piero; Tsolaki, Magda; Constantinidis, Manos; Drevelegas, Antonios; Rossini, Paolo Maria; Marra, Camillo; Otto, Josephin; Reiss-Zimmermann, Martin; Hoffmann, Karl-Titus; Galluzzi, Samantha; Frisoni, Giovanni B
2014-11-01
Large-scale longitudinal neuroimaging studies with diffusion imaging techniques are necessary to test and validate models of white matter neurophysiological processes that change in time, both in healthy and diseased brains. The predictive power of such longitudinal models will always be limited by the reproducibility of repeated measures acquired during different sessions. At present, there is limited quantitative knowledge about the across-session reproducibility of standard diffusion metrics in 3T multi-centric studies on subjects in stable conditions, in particular when using tract-based spatial statistics and with elderly people. In this study we implemented a multi-site brain diffusion protocol in 10 clinical 3T MRI sites distributed across 4 countries in Europe (Italy, Germany, France and Greece) using vendor-provided sequences from Siemens (Allegra, Trio Tim, Verio, Skyra, Biograph mMR), Philips (Achieva) and GE (HDxt) scanners. We acquired DTI data (2 × 2 × 2 mm³, b = 700 s/mm², 5 b0 and 30 diffusion-weighted volumes) of a group of healthy stable elderly subjects (5 subjects per site) in two separate sessions at least a week apart. For each subject and session four scalar diffusion metrics were considered: fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD) and axial diffusivity (AD). The diffusion metrics from multiple subjects and sessions at each site were aligned to their common white matter skeleton using tract-based spatial statistics. The reproducibility at each MRI site was examined by looking at group averages of absolute changes relative to the mean (%) on various parameters: i) reproducibility of the signal-to-noise ratio (SNR) of the b0 images in the centrum semiovale, ii) full-brain test-retest differences of the diffusion metric maps on the white matter skeleton, iii) reproducibility of the diffusion metrics on atlas-based white matter ROIs on the white matter skeleton. Despite the differences in MRI scanner configurations across sites (vendors, models, RF coils and acquisition sequences) we found good and consistent test-retest reproducibility. White matter b0 SNR reproducibility was on average 7 ± 1% with no significant MRI site effects. Whole-brain analysis resulted in no significant test-retest differences at any of the sites with any of the DTI metrics. The atlas-based ROI analysis showed that the mean reproducibility errors largely remained in the 2-4% range for FA and AD and 2-6% for MD and RD, averaged across ROIs. Our results show reproducibility values comparable to those reported in studies using a smaller number of MRI scanners, slightly different DTI protocols and mostly younger populations. We therefore show that the acquisition and analysis protocols used are appropriate for multi-site experimental scenarios. Copyright © 2014 Elsevier Inc. All rights reserved.
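The reproducibility measure described above ("absolute changes relative to the mean, in percent") can be written compactly; this sketch assumes paired per-subject (or per-ROI) metric values from the two sessions.

```python
import numpy as np

def test_retest_error_percent(session1, session2):
    """Absolute test-retest difference relative to the two-session mean, in percent,
    computed pairwise and averaged across subjects or ROIs."""
    s1, s2 = np.asarray(session1, float), np.asarray(session2, float)
    return np.mean(200.0 * np.abs(s1 - s2) / (s1 + s2))
```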
Metrics for linear kinematic features in sea ice
NASA Astrophysics Data System (ADS)
Levy, G.; Coon, M.; Sulsky, D.
2006-12-01
The treatment of leads as cracks or discontinuities (see Coon et al. presentation) requires some shift in the procedure of evaluation and comparison of lead-resolving models and their validation against observations. Common metrics used to evaluate ice model skill are by and large an adaptation of a least-squares "metric" adopted from operational numerical weather prediction data assimilation systems and are most appropriate for continuous fields and Eulerian systems where the observations and predictions are commensurate. However, this class of metrics suffers from some flaws in areas of sharp gradients and discontinuities (e.g., leads) and when Lagrangian treatments are more natural. After a brief review of these metrics and their performance in areas of sharp gradients, we present two new metrics specifically designed to measure model accuracy in representing linear features (e.g., leads). The indices developed circumvent the requirement that both the observations and model variables be commensurate (i.e., measured with the same units) by considering the frequencies of the features of interest/importance. We illustrate the metrics by scoring several hypothetical "simulated" discontinuity fields against the leads interpreted from RGPS observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ten Brinke, JoAnn
1995-08-01
Volatile organic compounds (VOCs) are suspected to contribute significantly to "Sick Building Syndrome" (SBS), a complex of subchronic symptoms that occurs during, and in general decreases away from, occupancy of the building in question. A new approach takes into account individual VOC potencies, as well as the highly correlated nature of the complex VOC mixtures found indoors. The new VOC metrics are statistically significant predictors of symptom outcomes from the California Healthy Buildings Study data. Multivariate logistic regression analyses were used to test the hypothesis that a summary measure of the VOC mixture, other risk factors, and covariates for each worker will lead to better prediction of symptom outcome. VOC metrics based on animal irritancy measures and principal component analysis had the most influence in the prediction of eye, dermal, and nasal symptoms. After adjustment, a water-based paints and solvents source was found to be associated with dermal and eye irritation. The more typical VOC exposure metrics used in prior analyses were not useful in symptom prediction in the adjusted model (total VOC (TVOC), or the sum of individually identified VOCs (ΣVOCi)). Also not useful were three other VOC metrics that took into account potency but did not adjust for the highly correlated nature of the data set or the presence of VOCs that were not measured. High TVOC values (2-7 mg m⁻³) due to the presence of liquid-process photocopiers observed in several study spaces significantly influenced symptoms. Analyses without the high TVOC values reduced, but did not eliminate, the ability of the VOC exposure metric based on irritancy and principal component analysis to explain symptom outcome.
A new look at mobility metrics for pyroclastic density currents: collection, interpretation, and use
NASA Astrophysics Data System (ADS)
Ogburn, S. E.; Lopes, D.; Calder, E. S.
2012-12-01
Mitigation of risk associated with pyroclastic density currents (PDCs) depends upon accurate forecasting of possible flow paths, often using empirical models that rely on mobility metrics or the stochastic application of computational flow models. Mobility metrics often inform computational models, sometimes as direct model inputs (e.g. the energy cone model), or as estimates for input parameters (e.g. the basal friction parameter in TITAN2D). These mobility metrics are often compiled from PDCs at many volcanoes, generalized to reveal empirical constants, or sampled for use in probabilistic models. In practice, however, there are often inconsistencies in how mobility metrics have been collected, reported, and used. For instance, the runout of PDCs often varies depending on the method used (e.g. manually measured from a paper map, automated using GIS software), and the distance traveled by the center of mass of PDCs is rarely reported due to the difficulty in locating it. This work reexamines the way we measure, report, and analyze PDC mobility metrics. Several metrics, such as the Heim coefficient (height dropped/runout, H/L) and the proportionality of inundated area to volume (A ∝ V^(2/3)), have been used successfully with PDC data (Sparks 1976; Nairn and Self 1977; Sheridan 1979; Hayashi and Self 1992; Calder et al. 1999; Widiwijayanti et al. 2008) in addition to the non-volcanic flows they were originally developed for. Other mobility metrics have been investigated by the debris avalanche community but have not yet been extensively applied to pyroclastic flows (e.g. the initial aspect ratio of the collapsing pile). We investigate the relative merits and suitability of contrasting mobility metrics for different types of PDCs (e.g. dome-collapse pyroclastic flows, ash-cloud surges, pumice flows), and indicate certain circumstances under which each model performs optimally. We show that these metrics can be used (with varying success) to predict the runout of a PDC of given volume, or vice versa. The problem of locating the center of mass of PDCs is also investigated by comparing field measurements, geometric centroids, linear thickness models, and computational flow models. Comparing center of mass measurements with runout provides insight into the relative roles of sliding vs. spreading in PDC emplacement. The effect of topography on mobility is explored by comparing mobility metrics to valley morphology measurements, including sinuosity, cross-sectional area, and valley slope. Lastly, we examine the problem of compiling and generalizing mobility data from worldwide databases using a hierarchical Bayes model for weighting mobility metrics for use as model inputs, which offers an improved method over simple space-filling strategies. This is especially useful for calibrating models at data-sparse volcanoes.
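The two classical mobility metrics named above are straightforward to compute once runout, drop height, inundated area, and volume are known; a small sketch (SI units assumed):

```python
def heim_coefficient(drop_height_m, runout_m):
    """Heim coefficient H/L: height dropped divided by runout distance."""
    return drop_height_m / runout_m

def area_volume_mobility(inundated_area_m2, volume_m3):
    """Mobility constant c in the A = c * V**(2/3) scaling referenced in the abstract."""
    return inundated_area_m2 / volume_m3 ** (2.0 / 3.0)

# Hypothetical dome-collapse flow: 1 km drop, 6 km runout, 4 km^2 inundated, 5e6 m^3.
print(heim_coefficient(1000.0, 6000.0), area_volume_mobility(4.0e6, 5.0e6))
```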
McAdams, Harley; AlQuraishi, Mohammed
2015-04-21
Techniques for determining values for a metric of microscale interactions include determining a mesoscale metric for a plurality of mesoscale interaction types, wherein a value of the mesoscale metric for each mesoscale interaction type is based on a corresponding function of values of the microscale metric for the plurality of the microscale interaction types. A plurality of observations that indicate the values of the mesoscale metric are determined for the plurality of mesoscale interaction types. Values of the microscale metric are determined for the plurality of microscale interaction types based on the plurality of observations and the corresponding functions and compressed sensing.
Development and Implementation of Metrics for Identifying Military Impulse Noise
2010-09-01
[Extraction residue from the report's acronym list (false negative/positive rates, FtC Fort Carson CO, GIS, GMM, Hz) and from Figure 8, "Plot of typical neuron activation" (bin-number axes), omitted.] The signal metrics and the waveform itself were saved and transmitted to the home base. There is also a provision to download the entire recorded waveform.
Systems Engineering Techniques for ALS Decision Making
NASA Technical Reports Server (NTRS)
Rodriquez, Luis F.; Drysdale, Alan E.; Jones, Harry; Levri, Julie A.
2004-01-01
The Advanced Life Support (ALS) Metric is the predominant tool for predicting the cost of ALS systems. Metric goals for the ALS Program are daunting, requiring a threefold increase in the ALS Metric by 2010. Confounding the problem, the rate at which new ALS technologies reach the maturity required for consideration in the ALS Metric and the rate at which new configurations are developed are slow, limiting the search space; as a result, the ALS Metric goals may remain elusive. This paper is a sequel to a paper published in the proceedings of the 2003 ICES conference entitled "Managing to the metric: an approach to optimizing life support costs." The conclusions of that paper state that the largest contributors to the ALS Metric should be targeted by ALS researchers and management for maximum metric reductions. Certainly, these areas potentially offer large benefits to future ALS missions; however, the ALS Metric is not the only decision-making tool available to the community. To facilitate decision-making within the ALS community, a combination of metrics should be utilized: not only the Equivalent System Mass (ESM)-based ALS Metric, but also those available through techniques such as life cycle costing and faithful consideration of the sensitivity of the assumed models and data. Often a lack of data is cited as the reason why these techniques are not considered for utilization. An existing database development effort within the ALS community, known as OPIS, may provide the opportunity to collect the necessary information to enable the proposed systems analyses. A review of these additional analysis techniques is provided, focusing on the data necessary to enable them. The discussion is concluded by proposing how the data may be utilized by analysts in the future.
Tay, Benjamin Chia-Meng; Chow, Tzu-Hao; Ng, Beng-Koon; Loh, Thomas Kwok-Seng
2012-09-01
This study investigates the autocorrelation bandwidths of the dual-window (DW) optical coherence tomography (OCT) k-space scattering profiles of different-sized microspheres and their correlation to scatterer size. A dual-bandwidth spectroscopic metric, defined as the ratio of the 10% to 90% autocorrelation bandwidths, is found to change monotonically with microsphere size and gives the best contrast enhancement for scatterer size differentiation in the resulting spectroscopic image. A simulation model supports the experimental results and reveals a tradeoff between the smallest detectable scatterer size and the maximum scatterer size in the linear range of the dual-window dual-bandwidth (DWDB) metric, which depends on the choice of the light source optical bandwidth. Spectroscopic OCT (SOCT) images of microspheres and tonsil tissue samples based on the proposed DWDB metric showed clear differentiation between different-sized scatterers as compared to those derived from conventional short-time Fourier transform metrics. The DWDB metric significantly improves the contrast in SOCT imaging and can aid the visualization and identification of dissimilar scatterer sizes in a sample. Potential applications include the early detection of cell nuclear changes in tissue carcinogenesis, the monitoring of healing tendons, and cell proliferation in tissue scaffolds.
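The DWDB metric is, at its core, a ratio of two autocorrelation bandwidths of the k-space scattering profile. The sketch below computes such a ratio for a synthetic profile; the paper's actual processing (dual-window reconstruction, windowing, and normalization details) is not reproduced, so treat this only as an illustration of the bandwidth-ratio idea.

```python
import numpy as np

def autocorr_bandwidth(profile, level):
    """Width (in samples) over which the normalized autocorrelation stays above `level`."""
    centered = profile - profile.mean()
    ac = np.correlate(centered, centered, mode="full")
    ac = ac[ac.size // 2:]          # one-sided autocorrelation
    ac = ac / ac[0]                 # normalize so the zero-lag value is 1
    below = np.flatnonzero(ac < level)
    return below[0] if below.size else ac.size

def dwdb_metric(profile):
    """Ratio of the 10% to the 90% autocorrelation bandwidth of a k-space profile."""
    return autocorr_bandwidth(profile, 0.10) / autocorr_bandwidth(profile, 0.90)

# Hypothetical k-space scattering profile (noisy oscillation).
k = np.linspace(0, 1, 512)
profile = np.cos(2 * np.pi * 18 * k) ** 2 + 0.05 * np.random.default_rng(1).normal(size=k.size)
print(f"DWDB-style bandwidth ratio: {dwdb_metric(profile):.2f}")
```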
Metric Scale Calculation for Visual Mapping Algorithms
NASA Astrophysics Data System (ADS)
Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.
2018-05-01
Visual SLAM algorithms localize the camera by mapping its environment as a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a priori known metric extents, which can be identified in the unscaled point cloud. Extents of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extents of objects, like traffic signs with a known metric size, are derived using projections of their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It has been shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to that of other recent works.
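One plausible way to fuse several individual scale estimates, each with its own uncertainty, is inverse-variance weighting; the exact fusion scheme used in this contribution may differ, so the sketch below is only a minimal, hypothetical illustration.

```python
import numpy as np

def fuse_scales(scales, sigmas):
    """Inverse-variance weighted fusion of individual metric-scale estimates.
    Returns the fused scale and its standard deviation."""
    scales, sigmas = np.asarray(scales, float), np.asarray(sigmas, float)
    w = 1.0 / sigmas ** 2
    fused = np.sum(w * scales) / np.sum(w)
    fused_sigma = np.sqrt(1.0 / np.sum(w))
    return fused, fused_sigma

# Hypothetical scale estimates, e.g. from lane width, room height, and traffic-sign size.
fused, sigma = fuse_scales([0.052, 0.048, 0.050], [0.004, 0.006, 0.002])
print(f"fused scale = {fused:.4f} ± {sigma:.4f}")
```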
Software development predictors, error analysis, reliability models and software metric analysis
NASA Technical Reports Server (NTRS)
Basili, Victor
1983-01-01
The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study on software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which it is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to some unanswered questions about testing. In studying software metrics, data collected from seven software engineering laboratory (FORTRAN) projects were examined and three effort reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.
NASA Astrophysics Data System (ADS)
Bernales, A. M.; Antolihao, J. A.; Samonte, C.; Campomanes, F.; Rojas, R. J.; dela Serna, A. M.; Silapan, J.
2016-06-01
Heat stress and other ailments related to urbanization are an increasingly prevalent threat. Measures such as installing green roofs or planting trees can lessen the effect of urbanization on surface temperature, so land use plays a role in both increasing and decreasing surface temperature. It is known that there is a relationship between land use/land cover (LULC) and land surface temperature (LST). Quantifying this relationship as a mathematical model is important because it provides a way to predict LST from LULC alone. This study aims to examine the relationship between LST and LULC and to create a model that can predict LST using class-level spatial metrics derived from LULC. LST was derived from a Landsat 8 image, and the LULC classification was derived from LiDAR and orthophoto datasets. Class-level spatial metrics were created in FRAGSTATS with the LULC and LST as inputs, and these metrics were analysed within a statistical framework. Multiple linear regression was used to create models that predict LST for each class, and the spatial metric "effective mesh size" was found to be a top predictor of LST in 6 out of 7 classes. The model can be further refined by adding a temporal aspect, analysing the LST of another farming period (for rural areas) and looking for common predictors between the LSTs of these two different farming periods.
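The modeling step described above is a multiple linear regression of LST on class-level spatial metrics. The sketch below mimics that setup with synthetic data in which "effective mesh size" is one of the predictors; the metric names, coefficients, and data are hypothetical and only illustrate the regression workflow.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Hypothetical class-level spatial metrics for one LULC class (rows = sample areas):
# columns: effective mesh size, patch density, mean patch area.
X = rng.uniform([1.0, 0.1, 0.5], [50.0, 5.0, 20.0], size=(120, 3))
lst = 32.0 - 0.08 * X[:, 0] + 0.4 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.5, 120)

model = LinearRegression().fit(X, lst)
print("R^2:", round(model.score(X, lst), 3))
print("coefficient on effective mesh size:", round(model.coef_[0], 3))
```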
Liang, Xia; Wang, Jinhui; Yan, Chaogan; Shu, Ni; Xu, Ke; Gong, Gaolang; He, Yong
2012-01-01
Graph theoretical analysis of brain networks based on resting-state functional MRI (R-fMRI) has attracted a great deal of attention in recent years. These analyses often involve the selection of correlation metrics and specific preprocessing steps. However, the influence of these factors on the topological properties of functional brain networks has not been systematically examined. Here, we investigated the influences of correlation metric choice (Pearson's correlation versus partial correlation), global signal presence (regressed or not) and frequency band selection [slow-5 (0.01-0.027 Hz) versus slow-4 (0.027-0.073 Hz)] on the topological properties of both binary and weighted brain networks derived from these data, and we employed test-retest (TRT) analyses for further guidance on how to choose the "best" network modeling strategy from the reliability perspective. Our results show significant differences in global network metrics associated with both correlation metrics and global signals. Analysis of nodal degree revealed differing hub distributions for brain networks derived from Pearson's correlation versus partial correlation. TRT analysis revealed that the reliability of both global and local topological properties is modulated by correlation metrics and the global signal, with the highest reliability observed for Pearson's-correlation-based brain networks without global signal removal (WOGR-PEAR). The nodal reliability exhibited a spatially heterogeneous distribution wherein regions in association and limbic/paralimbic cortices showed moderate TRT reliability in Pearson's-correlation-based brain networks. Moreover, we found that there were significant frequency-related differences in the topological properties of WOGR-PEAR networks, and brain networks derived in the 0.027-0.073 Hz band exhibited greater reliability than those in the 0.01-0.027 Hz band. Taken together, our results provide direct evidence regarding the influences of correlation metrics and specific preprocessing choices on both the global and nodal topological properties of functional brain networks. This study also has important implications for how to choose reliable analytical schemes in brain network studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, L; Lin, A; Ahn, P
Purpose: To utilize online CBCT scans to develop models for predicting DVH metrics in proton therapy of head and neck tumors. Methods: Nine patients with locally advanced oropharyngeal cancer were retrospectively selected in this study. Deformable image registration was applied to map the simulation CT, target volumes, and organs at risk (OARs) contours onto each weekly CBCT scan. Intensity-modulated proton therapy (IMPT) treatment plans were created on the simulation CT and forward calculated onto each corrected CBCT scan. Thirty-six potentially predictive metrics were extracted from each corrected CBCT. These features include minimum/maximum/mean over- and under-ranges at the proximal and distal surfaces of PTV volumes, and geometrical and water-equivalent distances between the PTV and each OAR. Principal component analysis (PCA) was used to reduce the dimension of the extracted features. Three principal components were found to account for over 90% of the variance in those features. Datasets from eight patients were used to train a machine learning model to fit these principal components to DVH metrics (dose to 95% and 5% of the PTV, mean dose or max dose to OARs) from the forward-calculated dose on each corrected CBCT. The accuracy of this model was verified on the dataset from the 9th patient. Results: The predicted changes of DVH metrics from the model were in good agreement with actual values calculated on corrected CBCT images. Median differences were within 1 Gy for most DVH metrics except for the larynx and constrictor mean dose. However, a large spread of the differences was observed, indicating that additional training datasets and predictive features are needed to improve the model. Conclusion: Intensity-corrected CBCT scans hold the potential to be used for online verification of proton therapy and prediction of delivered dose distributions.
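The abstract describes reducing 36 CBCT-derived features to three principal components and fitting a model that maps them to DVH metrics. The sketch below strings PCA and a linear regression together on synthetic data to illustrate that pipeline; the feature values, the choice of a linear model, and the target metric are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

# Hypothetical data: 8 training patients x several weekly CBCTs, 36 range/distance features,
# and one DVH metric (e.g., PTV D95 in Gy) per corrected CBCT.
X_train = rng.normal(size=(40, 36))
y_train = 70.0 + X_train[:, :3] @ np.array([0.8, -0.5, 0.3]) + rng.normal(0, 0.3, 40)

# Reduce the 36 features to 3 principal components, then fit a linear model.
model = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X_train, y_train)

X_test = rng.normal(size=(5, 36))      # held-out patient's weekly CBCT features
print("predicted D95 (Gy):", np.round(model.predict(X_test), 2))
```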
This EnviroAtlas dataset contains biodiversity metrics reflecting ecosystem services or other aspects of biodiversity for reptile species, based on the number of reptile species as measured by predicted habitat present within a pixel. These metrics were created from grouping national level single species habitat models created by the USGS Gap Analysis Program into smaller ecologically based, phylogeny based, or stakeholder suggested composites. The dataset includes reptile species richness metrics for all reptile species, lizards, snakes, turtles, poisonous reptiles, Natureserve-listed G1,G2, and G3 reptile species, and reptile species listed by IUCN (International Union for Conservation of Nature), PARC (Partners in Amphibian and Reptile Conservation) and SWPARC (Southwest Partners in Amphibian and Reptile Conservation). This dataset was produced by a joint effort of New Mexico State University, US EPA, and USGS to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa
Huang, Qiongyu; Swatantran, Anu; Dubayah, Ralph; Goetz, Scott J
2014-01-01
Avian diversity is under increasing pressures. It is thus critical to understand the ecological variables that contribute to large-scale spatial distribution of avian species diversity. Traditionally, studies have relied primarily on two-dimensional habitat structure to model broad-scale species richness. Vegetation vertical structure is increasingly used at local scales. However, the spatial arrangement of vegetation height has never been taken into consideration. Our goal was to examine the efficacies of three-dimensional forest structure, particularly the spatial heterogeneity of vegetation height, in improving avian richness models across forested ecoregions in the U.S. We developed novel habitat metrics to characterize the spatial arrangement of vegetation height using the National Biomass and Carbon Dataset for the year 2000 (NBCD). The height-structured metrics were compared with other habitat metrics for statistical association with richness of three forest breeding bird guilds across Breeding Bird Survey (BBS) routes: a broadly grouped woodland guild, and two forest breeding guilds with preferences for forest edge and for interior forest. Parametric and non-parametric models were built to examine the improvement of predictability. Height-structured metrics had the strongest associations with species richness, yielding improved predictive ability for the woodland guild richness models (r² ≈ 0.53 for the parametric models, 0.63 for the non-parametric models) and the forest edge guild models (r² ≈ 0.34 for the parametric models, 0.47 for the non-parametric models). All but one of the linear models incorporating height-structured metrics showed significantly higher adjusted r² values than their counterparts without additional metrics. The interior forest guild richness showed a consistently low association with height-structured metrics. Our results suggest that height heterogeneity, beyond canopy height alone, supplements habitat characterization and richness models of forest bird species. The metrics and models derived in this study demonstrate practical examples of utilizing three-dimensional vegetation data for improved characterization of spatial patterns in species richness.
Road Risk Modeling and Cloud-Aided Safety-Based Route Planning.
Li, Zhaojian; Kolmanovsky, Ilya; Atkins, Ella; Lu, Jianbo; Filev, Dimitar P; Michelini, John
2016-11-01
This paper presents a safety-based route planner that exploits vehicle-to-cloud-to-vehicle (V2C2V) connectivity. Time and road risk index (RRI) are considered as metrics to be balanced based on user preference. To evaluate road segment risk, a road and accident database from the highway safety information system is mined with a hybrid neural network model to predict RRI. Real-time factors such as time of day, day of the week, and weather are included as correction factors to the static RRI prediction. With real-time RRI and expected travel time, route planning is formulated as a multiobjective network flow problem and further reduced to a mixed-integer programming problem. A V2C2V implementation of our safety-based route planning approach is proposed to facilitate access to real-time information and computing resources. A real-world case study, route planning through the city of Columbus, Ohio, is presented. Several scenarios illustrate how the "best" route can be adjusted to favor time versus safety metrics.
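The core idea of balancing travel time against a road risk index can be illustrated with a scalarized edge cost on a small graph, as below. The paper itself formulates the problem as a multiobjective network flow reduced to a mixed-integer program, so this networkx sketch with a hypothetical four-node network only conveys the time-versus-safety trade-off.

```python
import networkx as nx

def plan_route(G, source, target, alpha):
    """Shortest path under a combined cost balancing travel time and road risk index (RRI).
    alpha = 1 favors time only; alpha = 0 favors safety only."""
    for u, v, data in G.edges(data=True):
        data["cost"] = alpha * data["time_min"] + (1.0 - alpha) * data["rri"]
    return nx.shortest_path(G, source, target, weight="cost")

# Tiny hypothetical road network: edges carry travel time (minutes) and a risk index.
G = nx.Graph()
G.add_edge("A", "B", time_min=5, rri=8.0)   # fast but risky
G.add_edge("A", "C", time_min=9, rri=1.0)   # slower but safer
G.add_edge("B", "D", time_min=4, rri=6.0)
G.add_edge("C", "D", time_min=6, rri=1.5)

print("time-weighted route:  ", plan_route(G, "A", "D", alpha=1.0))
print("safety-weighted route:", plan_route(G, "A", "D", alpha=0.2))
```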
Uncertainty in modeled upper ocean heat content change
NASA Astrophysics Data System (ADS)
Tokmakian, Robin; Challenor, Peter
2014-02-01
This paper examines the uncertainty in the change in the heat content in the ocean component of a general circulation model. We describe the design and implementation of our statistical methodology. Using an ensemble of model runs and an emulator, we produce an estimate of the full probability distribution function (PDF) for the change in upper ocean heat in an Atmosphere/Ocean General Circulation Model, the Community Climate System Model v. 3, across a multi-dimensional input space. We show how the emulator of the GCM's heat content change and hence, the PDF, can be validated and how implausible outcomes from the emulator can be identified when compared to observational estimates of the metric. In addition, the paper describes how the emulator outcomes and related uncertainty information might inform estimates of the same metric from a multi-model Coupled Model Intercomparison Project phase 3 ensemble. We illustrate how to (1) construct an ensemble based on experiment design methods, (2) construct and evaluate an emulator for a particular metric of a complex model, (3) validate the emulator using observational estimates and explore the input space with respect to implausible outcomes and (4) contribute to the understanding of uncertainties within a multi-model ensemble. Finally, we estimate the most likely value for heat content change and its uncertainty for the model, with respect to both observations and the uncertainty in the value for the input parameters.
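As a minimal stand-in for the emulator workflow described above, the sketch below fits a Gaussian process to a small, synthetic "ensemble" of model runs (input parameters mapped to a scalar heat content change) and returns predictions with uncertainty across the input space; the actual emulator construction and validation in the study may differ.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)

# Hypothetical ensemble design: 30 model runs over 3 normalized input parameters,
# each producing one scalar metric (upper-ocean heat content change).
X = rng.uniform(0, 1, size=(30, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.5 * np.sin(3 * X[:, 2]) + rng.normal(0, 0.05, 30)

emulator = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[0.3, 0.3, 0.3]),
    normalize_y=True).fit(X, y)

# Predict the metric (with uncertainty) anywhere in input space, e.g. to flag
# implausible regions relative to an observational estimate.
mean, std = emulator.predict(rng.uniform(0, 1, size=(5, 3)), return_std=True)
print(np.round(mean, 2), np.round(std, 2))
```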
NASA Astrophysics Data System (ADS)
Ring, Christoph; Pollinger, Felix; Kaspar-Ott, Irena; Hertig, Elke; Jacobeit, Jucundus; Paeth, Heiko
2017-04-01
The COMEPRO project (Comparison of Metrics for Probabilistic Climate Change Projections of Mediterranean Precipitation), funded by the Deutsche Forschungsgemeinschaft (DFG), is dedicated to the development of new evaluation metrics for state-of-the-art climate models. Further, we analyze implications for probabilistic projections of climate change. This study focuses on the results of 4-field matrix metrics. Here, six different approaches are compared. We evaluate 24 models of the Coupled Model Intercomparison Project Phase 3 (CMIP3), 40 of CMIP5 and 18 of the Coordinated Regional Downscaling Experiment (CORDEX). In addition to annual and seasonal precipitation, mean temperature is analysed. We consider both the 50-year trend and the climatological mean for the second half of the 20th century. For the probabilistic projections of climate change, the A1B and A2 (CMIP3) and RCP4.5 and RCP8.5 (CMIP5, CORDEX) scenarios are used. The eight main study areas are located in the Mediterranean. However, we apply our metrics to globally distributed regions as well. The metrics show high simulation quality of the temperature trend and of both the precipitation and temperature means for most climate models and study areas. In addition, we find high potential for model weighting in order to reduce uncertainty. These results are in line with other accepted evaluation metrics and studies. The comparison of the different 4-field approaches reveals high correlations for most metrics. The results of the metric-weighted probability density functions of climate change are heterogeneous: for different regions and seasons we find both increases and decreases in uncertainty. The analysis of the global study areas is consistent with that of the regional study areas in the Mediterranean.
Neural decoding with kernel-based metric learning.
Brockmeier, Austin J; Choi, John S; Kriminger, Evan G; Francis, Joseph T; Principe, Jose C
2014-06-01
In studies of the nervous system, the choice of metric for the neural responses is a pivotal assumption. For instance, a well-suited distance metric enables us to gauge the similarity of neural responses to various stimuli and assess the variability of responses to a repeated stimulus; these are exploratory steps in understanding how the stimuli are encoded neurally. Here we introduce an approach where the metric is tuned for a particular neural decoding task. Neural spike train metrics have been used to quantify the information content carried by the timing of action potentials. While a number of metrics for individual neurons exist, a method to optimally combine single-neuron metrics into multineuron, or population-based, metrics is lacking. We pose the problem of optimizing multineuron metrics and other metrics using centered alignment, a kernel-based dependence measure. The approach is demonstrated on invasively recorded neural data consisting of both spike trains and local field potentials. The experimental paradigm consists of decoding the location of tactile stimulation on the forepaws of anesthetized rats. We show that the optimized metrics highlight the distinguishing dimensions of the neural response, significantly increase the decoding accuracy, and improve nonlinear dimensionality reduction methods for exploratory neural analysis.
Spatial statistical network models for stream and river temperature in New England, USA
Watershed managers are challenged by the need for predictive temperature models with sufficient accuracy and geographic breadth for practical use. We described thermal regimes of New England rivers and streams based on a reduced set of metrics for the May–September growing ...
A meta-analysis of asbestos-related cancer risk that addresses fiber size and mineral type.
Berman, D Wayne; Crump, Kenny S
2008-01-01
Quantitative estimates of the risk of lung cancer or mesothelioma in humans from asbestos exposure made by the U.S. Environmental Protection Agency (EPA) make use of estimates of potency factors based on phase-contrast microscopy (PCM) and obtained from cohorts exposed to asbestos in different occupational environments. These potency factors exhibit substantial variability. The most likely reasons for this variability appear to be differences among environments in fiber size and mineralogy not accounted for by PCM. In this article, the U.S. Environmental Protection Agency (EPA) models for asbestos-related lung cancer and mesothelioma are expanded to allow the potency of fibers to depend upon their mineralogical types and sizes. This is accomplished by positing exposure metrics composed of nonoverlapping fiber categories and assigning each category its own unique potency. These category-specific potencies are estimated in a meta-analysis that fits the expanded models to potencies for lung cancer (KL's) or mesothelioma (KM's) based on PCM that were calculated for multiple epidemiological studies in our previous paper (Berman and Crump, 2008). Epidemiological study-specific estimates of exposures to fibers in the different fiber size categories of an exposure metric are estimated using distributions for fiber size based on transmission electron microscopy (TEM) obtained from the literature and matched to the individual epidemiological studies. The fraction of total asbestos exposure in a given environment respectively represented by chrysotile and amphibole asbestos is also estimated from information in the literature for that environment. Adequate information was found to allow KL's from 15 epidemiological studies and KM's from 11 studies to be included in the meta-analysis. Since the range of exposure metrics that could be considered was severely restricted by limitations in the published TEM fiber size distributions, it was decided to focus attention on four exposure metrics distinguished by fiber width: "all widths," widths > 0.2 µm, widths < 0.4 µm, and widths < 0.2 µm, each of which has historical relevance. Each such metric defined by width was composed of four categories of fibers: chrysotile or amphibole asbestos with lengths between 5 µm and 10 µm or longer than 10 µm. Using these metrics, three parameters were estimated for lung cancer and, separately, for mesothelioma: KLA, the potency of longer (length > 10 µm) amphibole fibers; rpc, the potency of pure chrysotile (uncontaminated by amphibole) relative to amphibole asbestos; and rps, the potency of shorter fibers (5 µm < length < 10 µm) relative to longer fibers. For mesothelioma, the hypothesis that chrysotile and amphibole asbestos are equally potent (rpc = 1) was strongly rejected by every metric, and the hypothesis that (pure) chrysotile is nonpotent for mesothelioma was not rejected by any metric. Best estimates for the relative potency of chrysotile ranged from zero to about 1/200th that of amphibole asbestos (depending on metric). For lung cancer, the hypothesis that chrysotile and amphibole asbestos are equally potent (rpc = 1) was rejected (p ≤ 0.05) by the two metrics based on thin fibers (widths < 0.4 µm and < 0.2 µm) but not by the metrics based on thicker fibers. The "all widths" and widths < 0.4 µm metrics provide the best fits to both the lung cancer and mesothelioma data over the other metrics evaluated, although the improvements are only marginal for lung cancer.
That these two metrics provide equivalent (for mesothelioma) and nearly equivalent (for lung cancer) fits to the data suggests that the available data sets may not be sufficiently rich (in variation of exposure characteristics) to fully evaluate the effects of fiber width on potency. Compared to the metric with widths > 0.2 µm with both rps and rpc fixed at 1 (which is nominally equivalent to the traditional PCM metric), the "all widths" and widths < 0.4 µm metrics provide substantially better fits for both lung cancer and, especially, mesothelioma. Although the best estimates of the potency of shorter fibers (5 µm < length < 10 µm) are zero for the "all widths" and widths < 0.4 µm metrics (or a small fraction of that of longer fibers for the widths > 0.2 µm metric for mesothelioma), the hypothesis that these shorter fibers were nonpotent could not be rejected for any of these metrics. Expansion of these metrics to include a category for fibers with lengths < 5 µm did not find any consistent evidence for any potency of these shortest fibers for either lung cancer or mesothelioma. Despite the substantial improvements in fit over that provided by the traditional use of PCM, neither the "all widths" nor the widths < 0.4 µm metrics (nor any of the other metrics evaluated) completely resolve the differences in potency factors estimated in different occupational studies. Unresolved in particular is the discrepancy in potency factors for lung cancer from Quebec chrysotile miners and workers at the Charleston, SC, textile mill, which mainly processed chrysotile from Quebec. A leading hypothesis for this discrepancy is limitations in the fiber size distributions available for this analysis. Dement et al. (2007) recently used TEM to analyze archived air samples from the South Carolina plant to determine a detailed distribution of fiber lengths up to lengths of 40 µm and greater. If similar data become available for Quebec, perhaps these two size distributions can be used to eliminate the discrepancy between these two studies.
Predicting the difficulty of pure, strict, epistatic models: metrics for simulated model selection.
Urbanowicz, Ryan J; Kiralis, Jeff; Fisher, Jonathan M; Moore, Jason H
2012-09-26
Algorithms designed to detect complex genetic disease associations are initially evaluated using simulated datasets. Typical evaluations vary constraints that influence the correct detection of underlying models (i.e. number of loci, heritability, and minor allele frequency). Such studies neglect to account for model architecture (i.e. the unique specification and arrangement of penetrance values comprising the genetic model), which alone can influence the detectability of a model. In order to design a simulation study which efficiently takes architecture into account, a reliable metric is needed for model selection. We evaluate three metrics as predictors of relative model detection difficulty derived from previous works: (1) Penetrance table variance (PTV), (2) customized odds ratio (COR), and (3) our own Ease of Detection Measure (EDM), calculated from the penetrance values and respective genotype frequencies of each simulated genetic model. We evaluate the reliability of these metrics across three very different data search algorithms, each with the capacity to detect epistatic interactions. We find that a model's EDM and COR are each stronger predictors of model detection success than heritability. This study formally identifies and evaluates metrics which quantify model detection difficulty. We utilize these metrics to intelligently select models from a population of potential architectures. This allows for an improved simulation study design which accounts for differences in detection difficulty attributed to model architecture. We implement the calculation and utilization of EDM and COR into GAMETES, an algorithm which rapidly and precisely generates pure, strict, n-locus epistatic models.
Citizen science: A new perspective to advance spatial pattern evaluation in hydrology.
Koch, Julian; Stisen, Simon
2017-01-01
Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, a simple visual comparison of spatial patterns is a task where humans are often considered to be more reliable than computer algorithms. In practice, however, science still largely depends on computer-based solutions, which offer benefits such as speed and the possibility to automate processes. Nevertheless, human vision can be harnessed to evaluate the reliability of algorithms which are tailored to quantify similarity in spatial patterns. We established a citizen science project that employs human perception to rate similarity and dissimilarity between simulated spatial patterns of several scenarios of a hydrological catchment model. In total, more than 2500 volunteers provided over 43,000 classifications of 1095 individual subjects. We investigate the capability of a set of advanced statistical performance metrics to mimic human perception in distinguishing between similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating human perception, but they clearly provide auxiliary information that is valuable for model diagnostics. The metrics clearly differ in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric. The obtained dataset can provide an insightful benchmark for the community to test novel spatial metrics.
Mapping suitability areas for concentrated solar power plants using remote sensing data
Omitaomu, Olufemi A.; Singh, Nagendra; Bhaduri, Budhendra L.
2015-05-14
The political push to increase power generation from renewable sources such as solar energy requires knowing the best places to site new solar power plants with respect to the applicable regulatory, operational, engineering, environmental, and socioeconomic criteria. Therefore, in this paper, we present applications of remote sensing data for mapping suitability areas for concentrated solar power plants. Our approach uses a digital elevation model derived from NASA's Shuttle Radar Topography Mission (SRTM) at a resolution of 3 arc seconds (approx. 90 m) for estimating global solar radiation for the study area. Then, we develop a computational model built on a Geographic Information System (GIS) platform that divides the study area into a grid of cells and estimates a site suitability value for each cell by computing a list of metrics based on applicable siting requirements using GIS data. The computed metrics include population density, solar energy potential, federal lands, and hazardous facilities. Overall, some 30 GIS datasets are used to compute eight metrics. The site suitability value for each cell is computed as an algebraic sum of all metrics for the cell with the assumption that all metrics have equal weight. Finally, we color each cell according to its suitability value. Furthermore, we present results for concentrated solar power that drives a steam turbine and for a parabolic mirror connected to a Stirling engine.
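The suitability calculation described above reduces to an equal-weight algebraic sum of per-cell metric layers. The sketch below illustrates that step with a few hypothetical, pre-normalized layers; the real analysis uses some 30 GIS datasets and eight metrics.

```python
import numpy as np

rng = np.random.default_rng(5)
shape = (100, 100)    # grid of cells covering the study area

# Hypothetical per-cell metric layers, each scaled to [0, 1] where 1 is most favorable
# (e.g., solar potential) or pre-screened to 0/1 exclusions (e.g., federal land).
solar_potential = rng.uniform(0.4, 1.0, shape)
low_population = rng.uniform(0.0, 1.0, shape)
outside_federal_land = rng.integers(0, 2, shape).astype(float)
far_from_hazards = rng.uniform(0.0, 1.0, shape)

layers = [solar_potential, low_population, outside_federal_land, far_from_hazards]

# Equal-weight algebraic sum of the metric layers gives the site suitability value per cell.
suitability = np.sum(layers, axis=0)
print("most suitable cell:", np.unravel_index(np.argmax(suitability), shape))
```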
Disturbance metrics predict a wetland Vegetation Index of Biotic Integrity
Stapanian, Martin A.; Mack, John; Adams, Jean V.; Gara, Brian; Micacchion, Mick
2013-01-01
Indices of biological integrity of wetlands based on vascular plants (VIBIs) have been developed in many areas in the USA. Knowledge of the best predictors of VIBIs would enable management agencies to make better decisions regarding mitigation site selection and performance monitoring criteria. We use a novel statistical technique to develop predictive models for an established index of wetland vegetation integrity (Ohio VIBI), using as independent variables 20 indices and metrics of habitat quality, wetland disturbance, and buffer area land use from 149 wetlands in Ohio, USA. For emergent and forest wetlands, predictive models explained 61% and 54% of the variability, respectively, in Ohio VIBI scores. In both cases the most important predictor of Ohio VIBI score was a metric that assessed habitat alteration and development in the wetland. Of secondary importance as a predictor was a metric that assessed microtopography, interspersion, and quality of vegetation communities in the wetland. Metrics and indices assessing disturbance and land use of the buffer area were generally poor predictors of Ohio VIBI scores. Our results suggest that vegetation integrity of emergent and forest wetlands could be most directly enhanced by minimizing substrate and habitat disturbance within the wetland. Such efforts could include reducing or eliminating any practices that disturb the soil profile, such as nutrient enrichment from adjacent farm land, mowing, grazing, or cutting or removing woody plants.
Brooks, Scott C.; Brandt, Craig C.; Griffiths, Natalie A.
2016-10-07
Nutrient spiraling is an important ecosystem process characterizing nutrient transport and uptake in streams. Various nutrient addition methods are used to estimate uptake metrics; however, uncertainty in the metrics is not often evaluated. A method was developed to quantify uncertainty in ambient and saturation nutrient uptake metrics estimated from saturating pulse nutrient additions (Tracer Additions for Spiraling Curve Characterization; TASCC). Using a Monte Carlo (MC) approach, the 95% confidence interval (CI) was estimated for ambient uptake lengths (Sw-amb) and maximum areal uptake rates (Umax) based on 100,000 datasets generated from each of four nitrogen and five phosphorus TASCC experiments conducted seasonally in a forest stream in eastern Tennessee, U.S.A. Uncertainty estimates from the MC approach were compared to the CIs estimated from the ordinary least squares (OLS) and non-linear least squares (NLS) models used to calculate Sw-amb and Umax, respectively, from the TASCC method. The CIs for Sw-amb and Umax were large, but were not consistently larger using the MC method. Despite the large CIs, significant differences (based on non-overlapping CIs) in nutrient metrics among seasons were found, with more significant differences using the OLS/NLS vs. the MC method. Lastly, we suggest that the MC approach is a robust way to estimate uncertainty, as the calculation of Sw-amb and Umax violates assumptions of OLS/NLS while the MC approach is free of these assumptions. The MC approach can be applied to other ecosystem metrics that are calculated from multiple parameters, providing a more robust estimate of these metrics and their associated uncertainties.
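The Monte Carlo idea above (resample the fitted parameters, recompute the metric, read off percentile confidence limits) can be sketched generically as below. The metric function, parameter values, and standard errors are hypothetical and do not reproduce the TASCC formulas.

```python
import numpy as np

rng = np.random.default_rng(6)

def uptake_metric(k, q, w):
    """Toy derived metric (e.g., an uptake length) computed from several parameters."""
    return q / (k * w)

# Hypothetical parameter estimates with standard errors (e.g., from a regression fit).
k_hat, k_se = 0.012, 0.003     # uptake rate coefficient
q_hat, q_se = 0.08, 0.01       # discharge
w_hat, w_se = 2.5, 0.2         # stream width

# Monte Carlo propagation: resample parameters, recompute the metric, take percentiles.
n = 100_000
samples = uptake_metric(rng.normal(k_hat, k_se, n),
                        rng.normal(q_hat, q_se, n),
                        rng.normal(w_hat, w_se, n))
lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"metric = {uptake_metric(k_hat, q_hat, w_hat):.1f}, 95% CI ≈ ({lo:.1f}, {hi:.1f})")
```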
Ren, J; Guo, X L; Lu, Z L; Zhang, J Y; Tang, J L; Chen, X; Gao, C C; Xu, C X; Xu, A Q
2016-09-07
Cardiovascular disease (CVD) is the leading cause of morbidity and mortality in the world. In 2010, a goal released by the American Heart Association (AHA) Committee focused on the primary reduction in cardiovascular risk. Data collected from 7683 men and 7667 women aged 18-69 years were analyzed. The distribution of ideal cardiovascular health metrics, based on 7 cardiovascular disease risk factors or health behaviors according to the AHA definition, was evaluated among the subjects. The association of socioeconomic factors with the prevalence of meeting 5 or more ideal cardiovascular health metrics was estimated by logistic regression analysis, and a chi-square test for categorical variables and the general linear model (GLM) procedure for continuous variables were used to compare differences in prevalences and means between genders. Seven of 15,350 participants (0.05%) met all 7 cardiovascular health metrics. Women had a higher proportion of meeting 5 or more ideal health metrics than men (32.67% vs. 14.27%). Subjects with higher education and income levels had a higher proportion of meeting 5 or more ideal health metrics than subjects with lower education and income levels. A comparison of subjects meeting 5 or more ideal cardiovascular health metrics with subjects meeting 4 or fewer reveals that the adjusted odds ratio [OR, 95% confidence interval (95% CI)] for higher education and income was 1.42 (0.95, 2.21) in men and 2.59 (1.74, 3.87) in women, respectively. The prevalence of meeting all 7 cardiovascular health metrics was low in the adult population. Women, young subjects, and those with higher levels of education or income tend to have a greater number of the ideal cardiovascular health metrics. Higher socioeconomic status was associated with an increasing prevalence of meeting 5 or more cardiovascular health metrics in women but not in men. There is an urgent need to develop comprehensive population-based interventions to improve cardiovascular risk factors in Shandong Province, China.
A foreground object features-based stereoscopic image visual comfort assessment model
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.
2014-11-01
Since stereoscopic images provide observers with both realistic and, at times, uncomfortable viewing experiences, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws most of the attention when humans observe stereoscopic images, this paper proposes a new foreground-object-based visual comfort assessment (VCA) metric. First, a suitable segmentation method is applied to the disparity map, and the foreground object is identified as the one having the largest average disparity. Second, three visual features, namely the average disparity, average width, and spatial complexity of the foreground object, are computed from the perspective of visual attention. However, an object's width and complexity do not influence the perception of visual comfort as consistently as disparity does. In accordance with this psychological phenomenon, we divide the images into four categories on the basis of disparity and width, and apply four different models to more precisely predict visual comfort. Experimental results show that the proposed VCA metric outperforms other existing metrics and can achieve a high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
Fu, Lawrence D; Aphinyanaphongs, Yindalon; Wang, Lily; Aliferis, Constantin F
2011-08-01
Evaluating the quality of the biomedical literature and of health-related websites is a challenging information retrieval task. Current commonly used methods include impact factor for journals, PubMed's clinical query filters and machine learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering the topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods for a variety of topics. Impact factor, clinical query filters, and PageRank vary widely across different topics, while a topic-specific impact factor and machine learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average but struggle when used on a number of narrower topics. Topic-adjusted metrics and other topic-robust methods have an advantage in such situations. Users of traditional topic-sensitive metrics should be aware of their limitations.
USDA-ARS?s Scientific Manuscript database
Remote sensing based evapotranspiration (ET) mapping is an important improvement for water resources management. Hourly climatic data and reference ET are crucial for implementing remote sensing based ET models such as METRIC and SEBAL. In Turkey, data on all climatic variables may not be available ...
Possible causes of data model discrepancy in the temperature history of the last Millennium.
Neukom, Raphael; Schurer, Andrew P; Steiger, Nathan J; Hegerl, Gabriele C
2018-05-15
Model simulations and proxy-based reconstructions are the main tools for quantifying pre-instrumental climate variations. For some metrics such as Northern Hemisphere mean temperatures, there is remarkable agreement between models and reconstructions. For other diagnostics, such as the regional response to volcanic eruptions, or hemispheric temperature differences, substantial disagreements between data and models have been reported. Here, we assess the potential sources of these discrepancies by comparing 1000-year hemispheric temperature reconstructions based on real-world paleoclimate proxies with climate-model-based pseudoproxies. These pseudoproxy experiments (PPE) indicate that noise inherent in proxy records and the unequal spatial distribution of proxy data are the key factors in explaining the data-model differences. For example, lower inter-hemispheric correlations in reconstructions can be fully accounted for by these factors in the PPE. Noise and data sampling also partly explain the reduced amplitude of the response to external forcing in reconstructions compared to models. For other metrics, such as inter-hemispheric differences, some, although reduced, discrepancy remains. Our results suggest that improving proxy data quality and spatial coverage is the key factor to increase the quality of future climate reconstructions, while the total number of proxy records and reconstruction methodology play a smaller role.
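A pseudoproxy experiment of the kind described above can be sketched in a few lines: take "true" model hemispheric series, degrade them with proxy-like noise and sparse sampling, and compare a diagnostic such as the inter-hemispheric correlation before and after. The series, noise levels, and proxy counts below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1000)

# Hypothetical "true" hemispheric mean temperatures from a climate model
# (a shared forced signal plus internal variability in each hemisphere).
forcing = np.cumsum(rng.normal(0, 0.02, years.size))
t_nh = forcing + rng.normal(0, 0.15, years.size)
t_sh = forcing + rng.normal(0, 0.15, years.size)

def pseudoproxy_reconstruction(truth, n_proxies=15, noise_sd=0.5):
    """Average a few noisy 'proxy' copies of the truth, mimicking a sparse, noisy proxy network."""
    proxies = truth + rng.normal(0, noise_sd, size=(n_proxies, truth.size))
    return proxies.mean(axis=0)

r_model = np.corrcoef(t_nh, t_sh)[0, 1]
r_pseudo = np.corrcoef(pseudoproxy_reconstruction(t_nh), pseudoproxy_reconstruction(t_sh))[0, 1]
print(f"inter-hemispheric correlation: model {r_model:.2f}, pseudoproxy reconstructions {r_pseudo:.2f}")
```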
PREDICTION METRICS FOR CHEMICAL DETECTION IN LONG-WAVE INFRARED HYPERSPECTRAL IMAGERY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chilton, M.; Walsh, S.J.; Daly, D.S.
2009-01-01
Natural and man-made chemical processes generate gaseous plumes that may be detected by hyperspectral imaging, which produces a matrix of spectra affected by the chemical constituents of the plume, the atmosphere, the bounding background surface and instrument noise. A physics-based model of observed radiance shows that high chemical absorbance and low background emissivity result in a larger chemical signature. Using simulated hyperspectral imagery, this study investigated two metrics which exploited this relationship. The objective was to explore how well the chosen metrics predicted when a chemical would be more easily detected when comparing one background type to another. The two predictor metrics correctly rank-ordered the backgrounds for about 94% of the chemicals tested as compared to the background rank orders from Whitened Matched Filtering (a detection algorithm) of the simulated spectra. These results suggest that the metrics provide a reasonable summary of how the background emissivity and chemical absorbance interact to produce the at-sensor chemical signal. This study suggests that similarly effective predictors that account for more general physical conditions may be derived.
Quantitative metrics for assessment of chemical image quality and spatial resolution
Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.
2016-02-28
Rationale: Currently objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC) based on signal-to-noise related statistical measures on chemical image pixels and corrected resolving power factor (cRPF) constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.
Quality assessment of color images based on the measure of just noticeable color difference
NASA Astrophysics Data System (ADS)
Chou, Chun-Hsien; Hsu, Yun-Hsiang
2014-01-01
Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information about the reproduced images. An accurate objective image quality assessment (IQA) method is expected to give assessment results that agree closely with subjective assessment. To assess the quality of color images, many approaches simply apply a metric for assessing the quality of gray-scale images to each of the three color channels of the color image, neglecting the correlation among the three color channels. In this paper, a metric for assessing the quality of color images is proposed, in which the model of variable just-noticeable color difference (VJNCD) is employed to estimate the visibility thresholds of the distortion inherent in each color pixel. With the estimated visibility thresholds of distortion, the proposed metric measures the average perceptible distortion in terms of the quantized distortion according to a perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference enumerated by CIEDE2000 to an objective score of perceptual quality assessment. The perceptual error map in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database, and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the image quality of color images in terms of the correlation between objective scores and subjective evaluation.
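The general idea of counting only distortion that exceeds a per-pixel visibility threshold can be sketched as below; the VJNCD model and the NBS-style perceptual error map used in the paper are not reproduced, and the ΔE and threshold maps are hypothetical.

```python
import numpy as np

def perceptible_distortion(delta_e, jnd_threshold):
    """Average distortion that exceeds each pixel's just-noticeable color difference.
    delta_e and jnd_threshold are per-pixel maps (e.g., CIEDE2000 differences and
    visibility thresholds from a JND model)."""
    visible = np.maximum(delta_e - jnd_threshold, 0.0)
    return visible.mean()

rng = np.random.default_rng(9)
delta_e = np.abs(rng.normal(2.0, 1.0, size=(64, 64)))   # hypothetical per-pixel color differences
jnd = rng.uniform(1.0, 3.0, size=(64, 64))              # hypothetical per-pixel visibility thresholds
print(f"mean perceptible distortion: {perceptible_distortion(delta_e, jnd):.3f}")
```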
Refringence, field theory and normal modes
NASA Astrophysics Data System (ADS)
Barceló, Carlos; Liberati, Stefano; Visser, Matt
2002-06-01
In a previous paper [Barceló C et al 2001 Class. Quantum Grav. 18 3595-610 (Preprint gr-qc/0104001)] we have shown that the occurrence of curved spacetime 'effective Lorentzian geometries' is a generic result of linearizing an arbitrary classical field theory around some nontrivial background configuration. This observation explains the ubiquitous nature of the 'analogue models' for general relativity that have recently been developed based on condensed matter physics. In the simple (single scalar field) situation analysed in our previous paper, there is a single unique effective metric; more complicated situations can lead to bi-metric and multi-metric theories. In the present paper we will investigate the conditions required to keep the situation under control and compatible with experiment - either by enforcing a unique effective metric (as would be required to be strictly compatible with the Einstein equivalence principle), or at the worst by arranging things so that there are multiple metrics that are all 'close' to each other (in order to be compatible with the Eötvös experiment). The algebraically most general situation leads to a physical model whose mathematical description requires an extension of the usual notion of Finsler geometry to a Lorentzian-signature pseudo-Finsler geometry; while this is possibly of some interest in its own right, this particular case does not seem to be immediately relevant for either particle physics or gravitation. The key result is that wide classes of theories lend themselves to an effective metric description. This observation provides further evidence that the notion of 'analogue gravity' is rather generic.
An Integrated Coral Reef Ecosystem Model to Support Resource Management under a Changing Climate
Weijerman, Mariska; Fulton, Elizabeth A.; Kaplan, Isaac C.; Gorton, Rebecca; Leemans, Rik; Mooij, Wolf M.; Brainard, Russell E.
2015-01-01
Millions of people rely on the ecosystem services provided by coral reefs, but sustaining these benefits requires an understanding of how reefs and their biotic communities are affected by local human-induced disturbances and global climate change. Ecosystem-based management that explicitly considers the indirect and cumulative effects of multiple disturbances has been recommended and adopted in policies in many places around the globe. Ecosystem models give insight into complex reef dynamics and their responses to multiple disturbances and are useful tools to support planning and implementation of ecosystem-based management. We adapted the Atlantis Ecosystem Model to incorporate key dynamics for a coral reef ecosystem around Guam in the tropical western Pacific. We used this model to quantify the effects of predicted climate and ocean changes and current levels of land-based sources of pollution (LBSP) and fishing. We used the following six ecosystem metrics as indicators of ecosystem state, resilience and harvest potential: 1) ratio of calcifying to non-calcifying benthic groups, 2) trophic level of the community, 3) biomass of apex predators, 4) biomass of herbivorous fishes, 5) total biomass of living groups and 6) the end-to-start ratio of exploited fish groups. Simulation tests of the effects of each of the three drivers separately suggest that by mid-century climate change will have the largest overall effect on this suite of ecosystem metrics due to substantial negative effects on coral cover. The effects of fishing were also important, negatively influencing five out of the six metrics. Moreover, LBSP exacerbates this effect for all metrics but not quite as badly as would be expected under additive assumptions, although the magnitude of the effects of LBSP is sensitive to uncertainty associated with primary productivity. Over longer time spans (i.e., 65 year simulations), climate change impacts have a slight positive interaction with other drivers, generally meaning that declines in ecosystem metrics are not as steep as the sum of individual effects of the drivers. These analyses offer one way to quantify impacts and interactions of particular stressors in an ecosystem context and so provide guidance to managers. For example, the model showed that improving water quality, rather than prohibiting fishing, extended the timescales over which corals can maintain high abundance by at least 5–8 years. This result, in turn, provides more scope for corals to adapt or for resilient species to become established and for local and global management efforts to reduce or reverse stressors. PMID:26672983
Contrast-based sensorless adaptive optics for retinal imaging
Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T.O.; He, Zheng; Metha, Andrew
2015-01-01
Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes. PMID:26417525
Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...
Test of the FLRW Metric and Curvature with Strong Lens Time Delays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Kai; Li, Zhengxiang; Wang, Guo-Jian
We present a new model-independent strategy for testing the Friedmann–Lemaître–Robertson–Walker (FLRW) metric and constraining cosmic curvature, based on future time-delay measurements of strongly lensed quasar-elliptical galaxy systems from the Large Synoptic Survey Telescope and supernova observations from the Dark Energy Survey. The test only relies on geometric optics. It is independent of the energy contents of the universe and the validity of the Einstein equation on cosmological scales. The study comprises two levels: testing the FLRW metric through the distance sum rule (DSR) and determining/constraining cosmic curvature. We propose an effective and efficient (redshift) evolution model for performing the former test, which allows us to concretely specify the violation criterion for the FLRW DSR. If the FLRW metric is consistent with the observations, then on the second level the cosmic curvature parameter will be constrained to ∼0.057 or ∼0.041 (1σ), depending on the availability of high-redshift supernovae, which is much more stringent than current model-independent techniques. We also show that the bias in the time-delay method might be well controlled, leading to robust results. The proposed method is a new independent tool for both testing the fundamental assumptions of homogeneity and isotropy in cosmology and for determining cosmic curvature. It is complementary to cosmic microwave background plus baryon acoustic oscillation analyses, which normally assume a cosmological model with dark energy domination in the late-time universe.
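For readers unfamiliar with the distance sum rule: in an FLRW geometry the dimensionless comoving distances d = H0*D_C/c satisfy d_ls = d_s*sqrt(1 + Ok*d_l^2) - d_l*sqrt(1 + Ok*d_s^2). The snippet below is only an illustrative consistency check of that identity; the variable names and toy numbers are mine, not values from the paper.

```python
import numpy as np

def dsr_residual(d_l, d_s, d_ls, omega_k):
    """Distance sum rule residual for dimensionless comoving distances
    d = H0 * D_C / c.  In an exact FLRW geometry the residual vanishes:
        d_ls = d_s*sqrt(1 + Ok*d_l^2) - d_l*sqrt(1 + Ok*d_s^2)
    A systematic non-zero residual would signal a violation of the FLRW metric."""
    predicted = (d_s * np.sqrt(1.0 + omega_k * d_l**2)
                 - d_l * np.sqrt(1.0 + omega_k * d_s**2))
    return d_ls - predicted

# Toy check: distances generated in a flat universe satisfy the rule with omega_k = 0.
d_l, d_s = 0.35, 0.80          # hypothetical lens and source distances
d_ls = d_s - d_l               # flat-space comoving distance between them
print(dsr_residual(d_l, d_s, d_ls, omega_k=0.0))   # ~0
print(dsr_residual(d_l, d_s, d_ls, omega_k=0.2))   # non-zero if curvature is wrong
```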
Self-supervised online metric learning with low rank constraint for scene categorization.
Cong, Yang; Liu, Ji; Yuan, Junsong; Luo, Jiebo
2013-08-01
Conventional visual recognition systems usually train an image classifier in a batch mode, with all training data provided in advance. However, in many practical applications, only a small number of training samples is available at the beginning, and many more arrive sequentially during online recognition. Because the image data characteristics could change over time, it is important for the classifier to adapt to the new data incrementally. In this paper, we present an online metric learning method to address the online scene recognition problem via adaptive similarity measurement. Given a number of labeled data followed by a sequential input of unseen testing samples, the similarity metric is learned to maximize the margin of the distance among different classes of samples. By considering the low-rank constraint, our online metric learning model not only provides competitive performance compared with the state-of-the-art methods, but also guarantees convergence. A bi-linear graph is also defined to model the pair-wise similarity, and an unseen sample is labeled depending on the graph-based label propagation, while the model can also self-update using the more confident new samples. With its online learning ability, our methodology can handle large-scale streaming video data with incremental self-updating. We apply our model to online scene categorization, and experiments on various benchmark datasets and comparisons with state-of-the-art methods demonstrate the effectiveness and efficiency of our algorithm.
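As a rough illustration of the general mechanics (not the authors' algorithm), here is a toy online Mahalanobis-metric update with a hinge-style pairwise objective, followed by projection onto the positive semidefinite cone and an optional low-rank truncation; the function names, margin and learning rate are all assumptions.

```python
import numpy as np

def online_metric_update(M, x1, x2, same_class, lr=0.05, margin=1.0, rank=None):
    """One toy online update of a Mahalanobis metric M (PSD matrix).
    Same-class pairs whose squared distance exceeds the margin are pulled
    together; different-class pairs inside the margin are pushed apart.
    Afterwards M is projected back onto the PSD cone and, optionally,
    truncated to a low rank."""
    d = x1 - x2
    dist2 = float(d @ M @ d)
    if same_class and dist2 > margin:
        M = M - lr * np.outer(d, d)
    elif not same_class and dist2 < margin:
        M = M + lr * np.outer(d, d)
    w, V = np.linalg.eigh(M)          # eigenvalues in ascending order
    w = np.clip(w, 0.0, None)         # PSD projection
    if rank is not None:
        w[:-rank] = 0.0               # keep only the top-`rank` eigenvalues
    return (V * w) @ V.T

# Toy usage: 3-D features, metric constrained to rank 2
rng = np.random.default_rng(7)
M = np.eye(3)
for _ in range(200):
    a = rng.normal(0, 1, 3); b = a + rng.normal(0, 0.1, 3)   # same "scene"
    c = rng.normal(3, 1, 3)                                   # different "scene"
    M = online_metric_update(M, a, b, True, rank=2)
    M = online_metric_update(M, a, c, False, rank=2)
print(np.round(np.linalg.eigvalsh(M), 3))    # at most two non-zero eigenvalues
```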
Thomas A. Spies; Eric White; Alan Ager; Jeffrey D. Kline; John P. Bolte; Emily K. Platt; Keith A. Olsen; Robert J. Pabst; Ana M. G. Barros; John D. Bailey; Susan Charnley; Anita T. Morzillo; Jennifer Koch; Michelle M. Steen-Adams; Peter H. Singleton; James Sulzman; Cynthia Schwartz; Blair Csuti
2017-01-01
Fire-prone landscapes present many challenges for both managers and policy makers in developing adaptive behaviors and institutions. We used a coupled human and natural systems framework and an agent-based landscape model to examine how alternative management scenarios affect fire and ecosystem services metrics in a fire-prone multiownership landscape in the eastern...
SUMMARY: The major accomplishment of NTD’s air toxics program is the development of an exposure-dose- response model for acute exposure to volatile organic compounds (VOCs), based on momentary brain concentration as the dose metric associated with acute neurological impairments...
Going Metric: Is It for You? A Planning Model for Small Manufacturing Companies.
ERIC Educational Resources Information Center
Beek, C.; And Others
This booklet is designed to aid small manufacturing companies in ascertaining the meaning of going metric for their unique circumstances and to guide them in making a smooth conversion to the metric system. First is a brief discussion of what the law says about metrics and what the metric system is. Then what is involved in going metric is…
Texture metric that predicts target detection performance
NASA Astrophysics Data System (ADS)
Culpepper, Joanne B.
2015-12-01
Two texture metrics based on gray level co-occurrence error (GLCE) are used to predict probability of detection and mean search time. The two texture metrics are local clutter metrics and are based on the statistics of GLCE probability distributions. The degree of correlation between various clutter metrics and the target detection performance of the nine military vehicles in complex natural scenes found in the Search_2 dataset is presented. Comparison is also made with four other common clutter metrics found in the literature: root sum of squares, Doyle, statistical variance, and target structure similarity. The experimental results show that the GLCE energy metric is a better predictor of target detection performance when searching for targets in natural scenes than the other clutter metrics studied.
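The GLCE statistics themselves are not given in the abstract; as a plausible stand-in, the sketch below computes a standard gray-level co-occurrence matrix and its energy (angular second moment) in plain NumPy, which is the kind of quantity an "energy metric" of this family is built from.

```python
import numpy as np

def glcm(image, levels=8, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    q = np.floor(image.astype(float) / image.max() * (levels - 1e-9)).astype(int)
    dy, dx = offset
    a = q[max(0, -dy):q.shape[0] - max(0, dy), max(0, -dx):q.shape[1] - max(0, dx)]
    b = q[max(0, dy):q.shape[0] - max(0, -dy), max(0, dx):q.shape[1] - max(0, -dx)]
    p = np.zeros((levels, levels))
    np.add.at(p, (a.ravel(), b.ravel()), 1.0)
    return p / p.sum()

def glcm_energy(image, levels=8, offset=(0, 1)):
    """Energy (angular second moment): high for uniform texture, low for clutter."""
    p = glcm(image, levels, offset)
    return float((p**2).sum())

rng = np.random.default_rng(1)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))   # low-clutter scene
cluttered = rng.random((32, 32))                   # high-clutter scene
print(glcm_energy(smooth), glcm_energy(cluttered))
```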
NASA Astrophysics Data System (ADS)
Lorenzi, Juan M.; Stecher, Thomas; Reuter, Karsten; Matera, Sebastian
2017-10-01
Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high-dimensions. Here, we present a novel version of the modified Shepard interpolation method which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weigh different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression.
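The paper's modified Shepard scheme adds local (anisotropic) metrics and error-weighted local models; as a point of reference only, here is the classic inverse-distance (Shepard-type) interpolant it builds on, in plain NumPy, with the local-metric machinery omitted.

```python
import numpy as np

def shepard_interpolate(x_query, x_data, f_data, p=4, eps=1e-12):
    """Classic Shepard (inverse-distance-weighted) interpolation.

    x_query : (d,) point at which to evaluate
    x_data  : (n, d) sample locations, f_data : (n,) sampled values
    The modified Shepard method additionally blends local Taylor/linear
    models under local metrics; here the 'local model' is just the
    sampled value itself.
    """
    d2 = np.sum((x_data - x_query)**2, axis=1)
    w = 1.0 / (d2**(p / 2) + eps)       # inverse-distance weights
    return float(np.dot(w, f_data) / w.sum())

# Toy usage on f(x, y) = sin(x) * y
rng = np.random.default_rng(2)
X = rng.uniform(0, np.pi, size=(200, 2))
F = np.sin(X[:, 0]) * X[:, 1]
q = np.array([1.0, 2.0])
print(shepard_interpolate(q, X, F), np.sin(1.0) * 2.0)
```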
A Deep Similarity Metric Learning Model for Matching Text Chunks to Spatial Entities
NASA Astrophysics Data System (ADS)
Ma, K.; Wu, L.; Tao, L.; Li, W.; Xie, Z.
2017-12-01
The matching of spatial entities with related text is a long-standing research topic that has received considerable attention over the years. This task aims at enriching the content of spatial entities and attaching spatial location information to text chunks. In the data fusion field, matching spatial entities with their corresponding descriptive text chunks is of broad significance. However, most traditional matching methods rely heavily on manually designed, task-specific linguistic features. This work proposes a Deep Similarity Metric Learning Model (DSMLM) based on a Siamese Neural Network to learn the similarity metric directly from the textual attributes of the spatial entity and the text chunk. The low-dimensional feature representations of the spatial entity and the text chunk can be learned separately. By employing the cosine distance to measure the matching degree between vectors, the model draws matching pairs as close together as possible and, through supervised learning, pushes mismatched pairs as far apart as possible. In addition, extensive experiments and analysis on geological survey data sets show that our DSMLM model can effectively capture the matching characteristics between the text chunk and the spatial entity, and achieve state-of-the-art performance.
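Embedding learning aside, the matching decision itself reduces to a cosine-distance comparison. The toy sketch below assumes the low-dimensional embeddings are already available and illustrates a contrastive-style loss on the cosine similarity; the threshold, margin and vectors are made up, not taken from the paper.

```python
import numpy as np

def cosine_similarity(u, v, eps=1e-12):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def contrastive_loss(sim, is_match, margin=0.4):
    """Toy contrastive objective on cosine similarity: pull matching
    (text chunk, spatial entity) embeddings together, push mismatches
    beyond the margin.  The real DSMLM learns the embeddings with a
    Siamese network; here they are simply given."""
    d = 1.0 - sim                              # cosine distance
    return d**2 if is_match else max(0.0, margin - d)**2

# Hypothetical low-dimensional embeddings of one text chunk and two entities
text = np.array([0.9, 0.1, 0.3])
entity_a = np.array([0.8, 0.2, 0.35])   # correct match
entity_b = np.array([-0.3, 0.9, 0.1])   # mismatch
for name, e in [("A", entity_a), ("B", entity_b)]:
    s = cosine_similarity(text, e)
    print(name, round(s, 3), "match" if s > 0.8 else "no match",
          "loss:", round(contrastive_loss(s, name == "A"), 3))
```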
Manifold Preserving: An Intrinsic Approach for Semisupervised Distance Metric Learning.
Ying, Shihui; Wen, Zhijie; Shi, Jun; Peng, Yaxin; Peng, Jigen; Qiao, Hong
2017-05-18
In this paper, we address the semisupervised distance metric learning problem and its applications in classification and image retrieval. First, we formulate a semisupervised distance metric learning model by considering the metric information of inner classes and interclasses. In this model, an adaptive parameter is designed to balance the inner metrics and intermetrics by using the data structure. Second, we convert the model to a minimization problem whose variable is a symmetric positive-definite matrix. Third, in implementation, we deduce an intrinsic steepest descent method, which assures that the metric matrix is strictly symmetric positive-definite at each iteration, by exploiting the manifold structure of symmetric positive-definite matrices. Finally, we test the proposed algorithm on conventional data sets and compare it with four other representative methods. The numerical results validate that the proposed method significantly improves the classification with the same computational efficiency.
Foresters' Metric Conversions program (version 1.0). [Computer program]
Jefferson A. Palmer
1999-01-01
The conversion of scientific measurements has become commonplace in the fields of engineering, research, and forestry. Foresters' Metric Conversions is a Windows-based computer program that quickly converts user-defined measurements from English to metric and from metric to English. Foresters' Metric Conversions was derived from the publication "Metric...
NASA Astrophysics Data System (ADS)
Fahey, R. T.; Tallant, J.; Gough, C. M.; Hardiman, B. S.; Atkins, J.; Scheuermann, C. M.
2016-12-01
Canopy structure can be an important driver of forest ecosystem functioning - affecting factors such as radiative transfer and light use efficiency, and consequently net primary production (NPP). Both above- (aerial) and below-canopy (terrestrial) remote sensing techniques are used to assess canopy structure and each has advantages and disadvantages. Aerial techniques can cover large geographical areas and provide detailed information on canopy surface and canopy height, but are generally unable to quantitatively assess interior canopy structure. Terrestrial methods provide high resolution information on interior canopy structure and can be cost-effectively repeated, but are limited to very small footprints. Although these methods are often utilized to derive similar metrics (e.g., rugosity, LAI) and to address equivalent ecological questions and relationships (e.g., link between LAI and productivity), rarely are inter-comparisons made between techniques. Our objective is to compare methods for deriving canopy structural complexity (CSC) metrics and to assess the capacity of commonly available aerial remote sensing products (and combinations) to match terrestrially-sensed data. We also assess the potential to combine CSC metrics with image-based analysis to predict plot-based NPP measurements in forests of different ages and different levels of complexity. We use combinations of data from drone-based imagery (RGB, NIR, Red Edge), aerial LiDAR (commonly available medium-density leaf-off), terrestrial scanning LiDAR, portable canopy LiDAR, and a permanent plot network - all collected at the University of Michigan Biological Station. Our results will highlight the potential for deriving functionally meaningful CSC metrics from aerial imagery, LiDAR, and combinations of data sources. We will also present results of modeling focused on predicting plot-level NPP from combinations of image-based vegetation indices (e.g., NDVI, EVI) with LiDAR- or image-derived metrics of CSC (e.g., rugosity, porosity), canopy density, (e.g., LAI), and forest structure (e.g., canopy height). This work builds toward future efforts that will use other data combinations, such as those available at NEON sites, and could be used to inform and test popular ecosystem models (e.g., ED2) incorporating structure.
An Examination of Advisor Concerns in the Era of Academic Analytics
ERIC Educational Resources Information Center
Daughtry, Jeremy J.
2017-01-01
Performance-based funding models are increasingly becoming the norm for many institutions of higher learning. Such models place greater emphasis on student retention and success metrics, for example, as requirements for receiving state appropriations. To stay competitive, universities have adopted academic analytics technologies capable of…
Model assessment using a multi-metric ranking technique
NASA Astrophysics Data System (ADS)
Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.
2017-12-01
Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and identifying adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as root mean square difference and rank correlation, were also explored but removed when their information proved largely duplicative of other metrics. While equal weights are applied here, the weights could be altered to favor preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context and will be briefly reported.
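A minimal sketch of a weighted tally of per-metric ranks, assuming a small score table; the metric names, weights and orientation flags are placeholders, and the real technique consolidates many more metrics than shown here.

```python
import numpy as np

# Hypothetical validation scores: rows = models, columns = metrics.
# 'lower_is_better' flags metrics like absolute error or bias magnitude.
models  = ["modelA", "modelB", "modelC"]
metrics = ["abs_error", "bias", "pearson_r", "kendall_tau"]
scores  = np.array([[1.2,  0.3, 0.82, 0.61],
                    [0.9, -0.1, 0.88, 0.70],
                    [1.5,  0.6, 0.75, 0.55]])
lower_is_better = np.array([True, True, False, False])
weights = np.ones(len(metrics))          # equal weights; could be altered

# Rank models per metric (1 = best), then consolidate with a weighted tally.
oriented = np.where(lower_is_better, np.abs(scores), -scores)
ranks = oriented.argsort(axis=0).argsort(axis=0) + 1
tally = (ranks * weights).sum(axis=1)
for m, t in sorted(zip(models, tally), key=lambda x: x[1]):
    print(m, t)        # lowest tally = best consolidated ranking
```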
Quantification of three-dimensional cell-mediated collagen remodeling using graph theory.
Bilgin, Cemal Cagatay; Lund, Amanda W; Can, Ali; Plopper, George E; Yener, Bülent
2010-09-30
Cell cooperation is a critical event during tissue development. We present the first precise metrics to quantify the interaction between mesenchymal stem cells (MSCs) and the extracellular matrix (ECM). In particular, we describe the cooperative collagen alignment process with respect to the spatio-temporal organization and function of mesenchymal stem cells in three dimensions. We defined two precise metrics, Collagen Alignment Index and Cell Dissatisfaction Level, for quantitatively tracking type I collagen and fibrillogenesis remodeling by mesenchymal stem cells over time. Computation of these metrics was based on graph theory and vector calculus. The cells and their three-dimensional type I collagen microenvironment were modeled by three-dimensional cell-graphs, and collagen fiber organization was calculated from gradient vectors. With the enhancement of mesenchymal stem cell differentiation, acceleration through different phases was quantitatively demonstrated. The phases were clustered in a statistically significant manner based on collagen organization, with late phases of remodeling by untreated cells clustering strongly with early phases of remodeling by differentiating cells. The experiments were repeated three times to conclude that the metrics could successfully identify critical phases of collagen remodeling that were dependent upon cooperativity within the cell population. Definition of early metrics that are able to predict long-term functionality by linking engineered tissue structure to function is an important step toward optimizing biomaterials for the purposes of regenerative medicine.
Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial
The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude only, sequence only, and combined magnitude and sequence errors.
Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow
NASA Astrophysics Data System (ADS)
Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke
2017-04-01
Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10⁰ to 10⁶ km² in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
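A bare-bones sketch of the empirical approach, fitting a log-log multiple regression of mean AF on catchment predictors with ordinary least squares; the predictor set, synthetic data and coefficients are purely illustrative and not the study's calibrated model.

```python
import numpy as np

# Hypothetical catchment predictors: area (km^2), mean slope (deg),
# elevation (m), mean annual precipitation (mm), and observed mean AF (m^3/s).
rng = np.random.default_rng(3)
n = 500
area  = 10**rng.uniform(0, 6, n)
slope = rng.uniform(0.5, 30, n)
elev  = rng.uniform(10, 3000, n)
prcp  = rng.uniform(200, 3000, n)
af    = 1e-5 * area**0.9 * prcp**1.2 * np.exp(rng.normal(0, 0.3, n))  # synthetic "truth"

# Log-log multiple regression: log(AF) ~ log(area) + slope + elev + log(precip)
X = np.column_stack([np.ones(n), np.log(area), slope, elev, np.log(prcp)])
y = np.log(af)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((y - pred)**2) / np.sum((y - y.mean())**2)
print("coefficients:", np.round(beta, 3), "R^2:", round(r2, 3))
```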
Batterman, Stuart; Burke, Janet; Isakov, Vlad; Lewis, Toby; Mukherjee, Bhramar; Robins, Thomas
2014-01-01
Vehicles are major sources of air pollutant emissions, and individuals living near large roads endure high exposures and health risks associated with traffic-related air pollutants. Air pollution epidemiology, health risk, environmental justice, and transportation planning studies would all benefit from an improved understanding of the key information and metrics needed to assess exposures, as well as the strengths and limitations of alternate exposure metrics. This study develops and evaluates several metrics for characterizing exposure to traffic-related air pollutants for the 218 residential locations of participants in the NEXUS epidemiology study conducted in Detroit (MI, USA). Exposure metrics included proximity to major roads, traffic volume, vehicle mix, traffic density, vehicle exhaust emissions density, and pollutant concentrations predicted by dispersion models. Results presented for each metric include comparisons of exposure distributions, spatial variability, intraclass correlation, concordance and discordance rates, and overall strengths and limitations. While showing some agreement, the simple categorical and proximity classifications (e.g., high diesel/low diesel traffic roads and distance from major roads) do not reflect the range and overlap of exposures seen in the other metrics. Information provided by the traffic density metric, defined as the number of kilometers traveled (VKT) per day within a 300 m buffer around each home, was reasonably consistent with the more sophisticated metrics. Dispersion modeling provided spatially- and temporally-resolved concentrations, along with apportionments that separated concentrations due to traffic emissions and other sources. While several of the exposure metrics showed broad agreement, including traffic density, emissions density and modeled concentrations, these alternatives still produced exposure classifications that differed for a substantial fraction of study participants, e.g., from 20% to 50% of homes, depending on the metric, would be incorrectly classified into “low”, “medium” or “high” traffic exposure classes. These and other results suggest the potential for exposure misclassification and the need for refined and validated exposure metrics. While data and computational demands for dispersion modeling of traffic emissions are non-trivial concerns, once established, dispersion modeling systems can provide exposure information for both on- and near-road environments that would benefit future traffic-related assessments. PMID:25226412
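As an illustration of the traffic density metric described above (VKT per day within a 300 m buffer around a home), here is a simplified planar-coordinate sketch; a real implementation would clip road-link geometries to the buffer in a GIS rather than test link midpoints, and all values below are invented.

```python
import numpy as np

def traffic_density(home_xy, link_midpoints, link_lengths_km, link_aadt, buffer_m=300.0):
    """Toy traffic-density metric: vehicle-kilometers traveled (VKT) per day
    summed over road links whose midpoints fall within a buffer of the home.
    Coordinates are planar metres; AADT is average annual daily traffic."""
    d = np.linalg.norm(link_midpoints - np.asarray(home_xy), axis=1)
    inside = d <= buffer_m
    return float(np.sum(link_lengths_km[inside] * link_aadt[inside]))

# Hypothetical road network around one residence at the origin
mid = np.array([[50.0, 0.0], [250.0, 100.0], [900.0, 400.0]])
length_km = np.array([0.4, 0.6, 1.0])
aadt = np.array([25000.0, 8000.0, 40000.0])       # vehicles/day
print(traffic_density((0.0, 0.0), mid, length_km, aadt))   # VKT/day within 300 m
```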
Measure of robustness for complex networks
NASA Astrophysics Data System (ADS)
Youssef, Mina Nabil
Critical infrastructures are repeatedly attacked by external triggers causing tremendous amounts of damage. Any infrastructure can be studied using the powerful theory of complex networks. A complex network is composed of an extremely large number of different elements that exchange commodities providing significant services. The main functions of complex networks can be damaged by different types of attacks and failures that degrade the network performance. These attacks and failures are considered as disturbing dynamics, such as the spread of viruses in computer networks, the spread of epidemics in social networks, and the cascading failures in power grids. Depending on the network structure and the attack strength, every network suffers damages and performance degradation differently. Hence, quantifying the robustness of complex networks becomes an essential task. In this dissertation, new metrics are introduced to measure the robustness of technological and social networks with respect to the spread of epidemics, and the robustness of power grids with respect to cascading failures. First, we introduce a new metric called the Viral Conductance (VC_SIS) to assess the robustness of networks with respect to the spread of epidemics that are modeled through the susceptible/infected/susceptible (SIS) epidemic approach. In contrast to assessing the robustness of networks based on a classical metric, the epidemic threshold, the new metric integrates the fraction of infected nodes at steady state for all possible effective infection strengths. Through examples, VC_SIS provides more insights about the robustness of networks than the epidemic threshold. In addition, both the paradoxical robustness of Barabási-Albert preferential attachment networks and the effect of the topology on the steady-state infection are studied, to show the importance of quantifying the robustness of networks. Second, a new metric, VC_SIR, is introduced to assess the robustness of networks with respect to the spread of susceptible/infected/recovered (SIR) epidemics. To compute VC_SIR, we propose a novel individual-based approach to model the spread of SIR epidemics in networks, which captures the infection size for a given effective infection rate. Thus, VC_SIR quantitatively integrates the infection strength with the corresponding infection size. To optimize the VC_SIR metric, a new mitigation strategy is proposed, based on a temporary reduction of contacts in social networks. The social contact network is modeled as a weighted graph that describes the frequency of contacts among the individuals. Thus, we consider the spread of an epidemic as a dynamical system, and the total number of infection cases as the state of the system, while the weight reduction in the social network is the control variable used to slow or reduce the spread of epidemics. Using optimal control theory, the obtained solution represents an optimal adaptive weighted network defined over a finite time interval. Moreover, given the high complexity of the optimization problem, we propose two heuristics to find near-optimal solutions by reducing the contacts among the individuals in a decentralized way. Finally, the cascading failures that can take place in power grids and have recently caused several blackouts are studied. We propose a new metric to assess the robustness of the power grid with respect to cascading failures.
The power grid topology is modeled as a network, which consists of nodes and links representing power substations and transmission lines, respectively. We also propose an optimal islanding strategy to protect the power grid when a cascading failure event takes place in the grid. The robustness metrics are numerically evaluated using real and synthetic networks to quantify their robustness with respect to disturbing dynamics. We show that the proposed metrics outperform the classical metrics in quantifying the robustness of networks and the efficiency of the mitigation strategies. In summary, our work advances the network science field in assessing the robustness of complex networks with respect to various disturbing dynamics.
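The abstract defines viral conductance as an integral of the steady-state infected fraction over all effective infection strengths. The sketch below approximates that idea using the standard N-intertwined mean-field (NIMFA) SIS fixed point and a simple trapezoid rule; the integration variable, range and mean-field closure are my assumptions, since the dissertation's exact formulation is not reproduced here.

```python
import numpy as np

def sis_steady_state(adj, tau, iters=500):
    """N-intertwined mean-field (NIMFA) SIS fixed point:
    v_i = tau*s_i / (1 + tau*s_i), with s_i = sum_j a_ij v_j and
    tau = beta/delta the effective infection rate."""
    v = np.full(adj.shape[0], 0.5)
    for _ in range(iters):
        s = adj @ v
        v = tau * s / (1.0 + tau * s)
    return v

def viral_conductance(adj, tau_max=2.0, n=200):
    """Toy VC_SIS: area under the mean steady-state infected fraction
    plotted against the effective infection rate, up to tau_max."""
    taus = np.linspace(0.0, tau_max, n)
    y = np.array([sis_steady_state(adj, t).mean() for t in taus])
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(taus)))   # trapezoid rule

# Compare a 10-node ring with a 10-node star
m = 10
idx = np.arange(m)
ring = np.zeros((m, m)); ring[idx, (idx + 1) % m] = ring[(idx + 1) % m, idx] = 1
star = np.zeros((m, m)); star[0, 1:] = star[1:, 0] = 1
print("ring VC:", viral_conductance(ring), "star VC:", viral_conductance(star))
```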
NASA Astrophysics Data System (ADS)
Trimborn, Barbara; Wolf, Ivo; Abu-Sammour, Denis; Henzler, Thomas; Schad, Lothar R.; Zöllner, Frank G.
2017-03-01
Image registration of preprocedural contrast-enhanced CTs to intraprocedural cone-beam computed tomography (CBCT) can provide additional information for interventional liver oncology procedures such as transcatheter arterial chemoembolisation (TACE). In this paper, a novel similarity metric for gradient-based image registration is proposed. The metric relies on the patch-based computation of histograms of oriented gradients (HOG), which form the basis for a feature descriptor. The metric was implemented in a framework for rigid 3D-3D registration of pre-interventional CT with intra-interventional CBCT data obtained during the workflow of a TACE. To evaluate the performance of the new metric, the capture range was estimated based on the calculation of the mean target registration error and compared to the results obtained with a normalized cross correlation metric. The results show that 3D HOG feature descriptors are suitable as an image-similarity metric and that the novel metric can compete with established methods in terms of registration accuracy.
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.
2017-12-01
Complexity of hydrogeological systems arises from the multi-scale heterogeneity and insufficient measurements of their underlying parameters such as hydraulic conductivity and porosity. An inadequate characterization of hydrogeological properties can significantly decrease the trustworthiness of numerical models that predict groundwater flow and solute transport. Therefore, a variety of data assimilation methods have been proposed in order to estimate hydrogeological parameters from spatially scarce data by incorporating the governing physical models. In this work, we propose a novel framework for evaluating the performance of these estimation methods. We focus on the Ensemble Kalman Filter (EnKF) approach, which is a widely used data assimilation technique. It reconciles multiple sources of measurements to sequentially estimate model parameters such as the hydraulic conductivity. Several methods have been used in the literature to quantify the accuracy of the estimations obtained by EnKF, including Rank Histograms, RMSE and Ensemble Spread. However, these commonly used methods do not regard the spatial information and variability of geological formations. This can cause hydraulic conductivity fields with very different spatial structures to have similar histograms or RMSE. We propose a vision-based approach that can quantify the accuracy of estimations by considering the spatial structure embedded in the estimated fields. Our new approach consists of adapting a new metric, Color Coherent Vectors (CCV), to evaluate the accuracy of estimated fields achieved by EnKF. CCV is a histogram-based technique for comparing images that incorporates spatial information. We represent estimated fields as digital three-channel images and use CCV to compare and quantify the accuracy of estimations. The sensitivity of CCV to spatial information makes it a suitable metric for assessing the performance of spatial data assimilation techniques. Under various configurations of data assimilation, such as the number, layout, and type of measurements, we compare the performance of CCV with other metrics such as RMSE. By simulating hydrogeological processes using estimated and true fields, we observe that CCV outperforms other existing evaluation metrics.
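A rough sketch of a coherence-vector comparison for 2D fields, assuming a single channel and a fixed coherence threshold; the binning, threshold and distance below are illustrative choices rather than the authors' exact CCV adaptation.

```python
import numpy as np
from scipy import ndimage

def color_coherence_vector(field, n_bins=8, tau=20):
    """Toy coherence-vector descriptor for a 2D field (e.g., an estimated
    log-conductivity map rendered as one image channel).  Pixels are binned
    by value; within each bin, pixels belonging to connected regions larger
    than `tau` count as 'coherent', the rest as 'incoherent'."""
    lo, hi = field.min(), field.max()
    bins = np.minimum(((field - lo) / (hi - lo + 1e-12) * n_bins).astype(int), n_bins - 1)
    ccv = np.zeros((n_bins, 2))
    for b in range(n_bins):
        labels, _ = ndimage.label(bins == b)
        sizes = np.bincount(labels.ravel())[1:]          # skip background label 0
        ccv[b, 0] = sizes[sizes >= tau].sum()            # coherent pixels
        ccv[b, 1] = sizes[sizes < tau].sum()             # incoherent pixels
    return ccv

def ccv_distance(a, b):
    return float(np.abs(color_coherence_vector(a) - color_coherence_vector(b)).sum())

# Two fields with identical histograms but different spatial structure
rng = np.random.default_rng(4)
blocky = np.kron(rng.random((8, 8)), np.ones((8, 8)))    # large coherent patches
shuffled = rng.permutation(blocky.ravel()).reshape(blocky.shape)
print(ccv_distance(blocky, shuffled))                    # > 0: CCV sees the difference
```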
Singh, Ramesh K.; Senay, Gabriel B.
2016-01-01
The development of different energy balance models has allowed users to choose a model based on its suitability in a region. We compared four commonly used models—Mapping EvapoTranspiration at high Resolution with Internalized Calibration (METRIC) model, Surface Energy Balance Algorithm for Land (SEBAL) model, Surface Energy Balance System (SEBS) model, and the Operational Simplified Surface Energy Balance (SSEBop) model—using Landsat images to estimate evapotranspiration (ET) in the Midwestern United States. Our model validation using three AmeriFlux cropland sites at Mead, Nebraska, showed that all four models captured the spatial and temporal variation of ET reasonably well with an R2 of more than 0.81. Both the METRIC and SSEBop models showed a low root mean square error (<0.93 mm·day⁻¹) and a high Nash–Sutcliffe coefficient of efficiency (>0.80), whereas the SEBAL and SEBS models resulted in relatively higher bias for estimating daily ET. The empirical equation of daily average net radiation used in the SEBAL and SEBS models for upscaling instantaneous ET to daily ET resulted in underestimation of daily ET, particularly when the daily average net radiation was more than 100 W·m⁻². Estimated daily ET for both cropland and grassland had some degree of linearity with METRIC, SEBAL, and SEBS, but the linearity was stronger for evaporative fraction. Thus, these ET models have strengths and limitations for applications in water resource management.
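The two headline skill scores are straightforward to compute; here is a minimal sketch with made-up daily ET values, not data from the study.

```python
import numpy as np

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs)**2)))

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better than the
    mean of the observations, negative is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2))

# Hypothetical daily ET (mm/day) at a flux tower versus one model
obs = np.array([2.1, 3.4, 4.0, 4.8, 5.2, 4.5, 3.1])
sim = np.array([2.4, 3.1, 4.3, 4.6, 5.6, 4.1, 3.0])
print(rmse(obs, sim), nash_sutcliffe(obs, sim))
```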
Classification of forest land attributes using multi-source remotely sensed data
NASA Astrophysics Data System (ADS)
Pippuri, Inka; Suvanto, Aki; Maltamo, Matti; Korhonen, Kari T.; Pitkänen, Juho; Packalen, Petteri
2016-02-01
The aim of the study was to (1) examine the classification of forest land using airborne laser scanning (ALS) data, satellite images and sample plots of the Finnish National Forest Inventory (NFI) as training data and to (2) identify the best-performing metrics for classifying forest land attributes. Six different schemes of forest land classification were studied: land use/land cover (LU/LC) classification using both national classes and FAO (Food and Agricultural Organization of the United Nations) classes, main type, site type, peat land type and drainage status. Of special interest was testing different ALS-based surface metrics in the classification of forest land attributes. Field data consisted of 828 NFI plots collected in 2008-2012 in southern Finland, and remotely sensed data were from summer 2010. Multinomial logistic regression was used as the classification method. Classification of LU/LC classes was highly accurate (kappa values 0.90 and 0.91), but the classification of site type, peat land type and drainage status also succeeded moderately well (kappa values 0.51, 0.69 and 0.52). ALS-based surface metrics were found to be the most important predictor variables in the classification of LU/LC class, main type and drainage status. In the best classification models of forest site types, both spectral metrics from satellite data and point cloud metrics from ALS were used. In turn, in the classification of peat land types, ALS point cloud metrics played the most important role. Results indicated that the prediction of site type and forest land category could be incorporated into the stand-level forest management inventory system in Finland.
Shwartz, Michael; Peköz, Erol A; Burgess, James F; Christiansen, Cindy L; Rosen, Amy K; Berlowitz, Dan
2014-12-01
Two approaches are commonly used for identifying high-performing facilities on a performance measure: one, that the facility is in a top quantile (eg, quintile or quartile); and two, that a confidence interval is below (or above) the average of the measure for all facilities. This type of yes/no designation often does not do well in distinguishing high-performing from average-performing facilities. To illustrate an alternative continuous-valued metric for profiling facilities--the probability a facility is in a top quantile--and show the implications of using this metric for profiling and pay-for-performance. We created a composite measure of quality from fiscal year 2007 data based on 28 quality indicators from 112 Veterans Health Administration nursing homes. A Bayesian hierarchical multivariate normal-binomial model was used to estimate shrunken rates of the 28 quality indicators, which were combined into a composite measure using opportunity-based weights. Rates were estimated using Markov Chain Monte Carlo methods as implemented in WinBUGS. The probability metric was calculated from the simulation replications. Our probability metric allowed better discrimination of high performers than the point or interval estimate of the composite score. In a pay-for-performance program, a smaller top quantile (eg, a quintile) resulted in more resources being allocated to the highest performers, whereas a larger top quantile (eg, being above the median) distinguished less among high performers and allocated more resources to average performers. The probability metric has potential but needs to be evaluated by stakeholders in different types of delivery systems.
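A small sketch of the continuous-valued metric itself, computing the probability that each facility falls in the top quintile from posterior simulation replications; the composite-score draws here are synthetic stand-ins for the WinBUGS output, and the facility means are invented.

```python
import numpy as np

def prob_in_top_quantile(draws, quantile=0.2):
    """Probability that each facility is in the top `quantile` of facilities,
    estimated from posterior simulation replications.

    draws : (n_reps, n_facilities) array of composite-score draws
            (e.g., MCMC output), higher score = better quality.
    """
    n_reps, n_fac = draws.shape
    k = max(1, int(np.ceil(quantile * n_fac)))          # size of the top group
    # For each replication, mark the k highest-scoring facilities.
    top = np.zeros_like(draws, dtype=bool)
    order = np.argsort(-draws, axis=1)[:, :k]
    np.put_along_axis(top, order, True, axis=1)
    return top.mean(axis=0)                             # per-facility probability

# Toy example: 5 facilities, facility 0 clearly best, facilities 1 and 2 similar
rng = np.random.default_rng(5)
means = np.array([0.9, 0.7, 0.68, 0.5, 0.3])
draws = rng.normal(means, 0.1, size=(4000, 5))
print(np.round(prob_in_top_quantile(draws, quantile=0.2), 3))
```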
Process-Driven Ecological Modeling for Landscape Change Analysis
NASA Astrophysics Data System (ADS)
Altman, S.; Reif, M. K.; Swannack, T. M.
2013-12-01
Landscape pattern is an important driver in ecosystem dynamics and can control system-level functions such as nutrient cycling, connectivity, biodiversity and carbon sequestration. However, the links between process, pattern and function remain ambiguous. Understanding the quantitative relationship between ecological processes and landscape pattern across temporal and spatial scales is vital for successful management and implementation of ecosystem-level projects. We used remote sensing imagery to develop critical landscape metrics to understand the factors influencing landscape change. Our study area, a coastal area in southwest Florida, is highly dynamic with critically eroding beaches and a range of natural and developed land cover types. Hurricanes in 2004 and 2005 caused a breach along the coast of North Captiva Island that filled in by 2010. We used a time series of light detection and ranging (lidar) elevation data and hyperspectral imagery from 2006 and 2010 to determine land cover changes. Landscape level metrics used included: Largest Patch Index, Class Area, Area-weighted mean area, Clumpiness, Area-weighted Contiguity Index, Number of Patches, Percent of landcover, Area-weighted Shape. Our results showed 1) 27% increase in sand/soil class as the channel repaired itself and shoreline was reestablished, 2) 40% decrease in the mudflat class area due to conversion to sand/soil and water, 3) 30% increase in non-wetland vegetation class as a result of new vegetation around the repaired channel, and 4) the water class only slightly increased though there was a marked increase in the patch size area. Thus, the smaller channels disappeared with the infilling of the channel, leaving much larger, less complex patches behind the breach. Our analysis demonstrated that quantification of landscape pattern is critical to linking patterns to ecological processes and understanding how both affect landscape change. Our proof of concept indicated that ecological processes can correlate to landscape pattern and that ecosystem function changes significantly as pattern changes. However, the number of links between landscape metrics and ecological processes are highly variable. Extensively studied processes such as biodiversity can be linked to numerous landscape metrics. In contrast, correlations between sediment cycling and landscape pattern have only been evaluated for a limited number of metrics. We are incorporating these data into a relational database linking landscape and ecological patterns, processes and metrics. The database will be used to parameterize site-specific landscape evolution models projecting how landscape pattern will change as a result of future ecosystem restoration projects. The model is a spatially-explicit, grid-based model that projects changes in community composition based on changes in soil elevations. To capture scalar differences in landscape change, local and regional landscape metrics are analyzed at each time step and correlated with ecological processes to determine how ecosystem function changes with scale over time.
Patient-specific computational modeling of blood flow in the pulmonary arterial circulation.
Kheyfets, Vitaly O; Rios, Lourdes; Smith, Triston; Schroeder, Theodore; Mueller, Jeffrey; Murali, Srinivas; Lasorda, David; Zikos, Anthony; Spotti, Jennifer; Reilly, John J; Finol, Ender A
2015-07-01
Computational fluid dynamics (CFD) modeling of the pulmonary vasculature has the potential to reveal continuum metrics associated with the hemodynamic stress acting on the vascular endothelium. It is widely accepted that the endothelium responds to flow-induced stress by releasing vasoactive substances that can dilate and constrict blood vessels locally. The objectives of this study are to examine the extent of patient specificity required to obtain a significant association of CFD output metrics and clinical measures in models of the pulmonary arterial circulation, and to evaluate the potential correlation of wall shear stress (WSS) with established metrics indicative of right ventricular (RV) afterload in pulmonary hypertension (PH). Right Heart Catheterization (RHC) hemodynamic data and contrast-enhanced computed tomography (CT) imaging were retrospectively acquired for 10 PH patients and processed to simulate blood flow in the pulmonary arteries. While conducting CFD modeling of the reconstructed patient-specific vasculatures, we experimented with three different outflow boundary conditions to investigate the potential for using computationally derived spatially averaged wall shear stress (SAWSS) as a metric of RV afterload. SAWSS was correlated with both pulmonary vascular resistance (PVR) (R(2)=0.77, P<0.05) and arterial compliance (C) (R(2)=0.63, P<0.05), but the extent of the correlation was affected by the degree of patient specificity incorporated in the fluid flow boundary conditions. We found that decreasing the distal PVR alters the flow distribution and changes the local velocity profile in the distal vessels, thereby increasing the local WSS. Nevertheless, implementing generic outflow boundary conditions still resulted in statistically significant SAWSS correlations with respect to both metrics of RV afterload, suggesting that the CFD model could be executed without the need for complex outflow boundary conditions that require invasively obtained patient-specific data. A preliminary study investigating the relationship between outlet diameter and flow distribution in the pulmonary tree offers a potential computationally inexpensive alternative to pressure based outflow boundary conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Deriving injury risk curves using survival analysis from biomechanical experiments.
Yoganandan, Narayan; Banerjee, Anjishnu; Hsu, Fang-Chi; Bass, Cameron R; Voo, Liming; Pintar, Frank A; Gayzik, F Scott
2016-10-03
Injury risk curves from biomechanical experimental data analysis are used in automotive studies to improve crashworthiness and advance occupant safety. Metrics such as acceleration and deflection coupled with outcomes such as fractures and anatomical disruptions from impact tests are used in simple binary regression models. As an improvement, the International Standards Organization suggested a different approach. It was based on survival analysis. While probability curves for side-impact-induced thorax and abdominal injuries and frontal impact-induced foot-ankle-leg injuries are developed using this approach, deficiencies are apparent. The objective of this study is to present an improved, robust and generalizable methodology in an attempt to resolve these issues. It includes: (a) statistical identification of the most appropriate independent variable (metric) from a pool of candidate metrics, measured and or derived during experimentation and analysis processes, based on the highest area under the receiver operator curve, (b) quantitative determination of the most optimal probability distribution based on the lowest Akaike information criterion, (c) supplementing the qualitative/visual inspection method for comparing the selected distribution with a non-parametric distribution with objective measures, (d) identification of overly influential observations using different methods, and (e) estimation of confidence intervals using techniques more appropriate to the underlying survival statistical model. These clear and quantified details can be easily implemented with commercial/open source packages. They can be used in retrospective analysis and prospective design of experiments, and in applications to different loading scenarios such as underbody blast events. The feasibility of the methodology is demonstrated using post mortem human subject experiments and 24 metrics associated with thoracic/abdominal injuries in side-impacts. Published by Elsevier Ltd.
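Steps (a) and (b) can be illustrated compactly: pick the candidate metric with the highest ROC area, then pick the parametric distribution with the lowest AIC. The sketch below uses a rank-based AUC and uncensored maximum-likelihood fits for brevity; a faithful survival-analysis implementation would handle censored observations, which this toy omits, and all data are invented.

```python
import numpy as np
from scipy import stats

def auc(metric_values, injured):
    """Rank-based (Mann-Whitney) area under the ROC curve for one candidate metric."""
    m = np.asarray(metric_values, float)
    y = np.asarray(injured, bool)
    r = stats.rankdata(m)
    n1, n0 = y.sum(), (~y).sum()
    return float((r[y].sum() - n1 * (n1 + 1) / 2) / (n1 * n0))

def aic_for_distribution(samples, dist):
    """AIC of a parametric fit to (uncensored) samples; a proper survival
    treatment would use censored likelihood contributions instead."""
    params = dist.fit(samples, floc=0)           # location pinned at zero
    loglik = np.sum(dist.logpdf(samples, *params))
    return 2 * len(params) - 2 * loglik

# Hypothetical side-impact tests: two candidate metrics, binary injury outcomes
rng = np.random.default_rng(6)
injured = rng.random(60) < 0.4
deflection = np.where(injured, rng.normal(40, 6, 60), rng.normal(30, 6, 60))
acceleration = rng.normal(50, 10, 60)            # uninformative candidate
print("AUC deflection:  ", round(auc(deflection, injured), 2))
print("AUC acceleration:", round(auc(acceleration, injured), 2))

# Choose the probability distribution with the lowest AIC for the better metric
for dist in (stats.weibull_min, stats.lognorm):
    print(dist.name, "AIC:", round(aic_for_distribution(deflection[injured], dist), 1))
```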
Gaewsky, James P; Weaver, Ashley A; Koya, Bharath; Stitzel, Joel D
2015-01-01
A 3-phase real-world motor vehicle crash (MVC) reconstruction method was developed to analyze injury variability as a function of precrash occupant position for 2 full-frontal Crash Injury Research and Engineering Network (CIREN) cases. Phase I: A finite element (FE) simplified vehicle model (SVM) was developed and tuned to mimic the frontal crash characteristics of the CIREN case vehicle (Camry or Cobalt) using frontal New Car Assessment Program (NCAP) crash test data. Phase II: The Toyota HUman Model for Safety (THUMS) v4.01 was positioned in 120 precrash configurations per case within the SVM. Five occupant positioning variables were varied using a Latin hypercube design of experiments: seat track position, seat back angle, D-ring height, steering column angle, and steering column telescoping position. An additional baseline simulation was performed that aimed to match the precrash occupant position documented in CIREN for each case. Phase III: FE simulations were then performed using kinematic boundary conditions from each vehicle's event data recorder (EDR). HIC15, combined thoracic index (CTI), femur forces, and strain-based injury metrics in the lung and lumbar vertebrae were evaluated to predict injury. Tuning the SVM to specific vehicle models resulted in close matches between simulated and test injury metric data, allowing the tuned SVM to be used in each case reconstruction with EDR-derived boundary conditions. Simulations with the most rearward seats and reclined seat backs had the greatest HIC15, head injury risk, CTI, and chest injury risk. Calculated injury risks for the head, chest, and femur closely correlated to the CIREN occupant injury patterns. CTI in the Camry case yielded a 54% probability of Abbreviated Injury Scale (AIS) 2+ chest injury in the baseline case simulation and ranged from 34 to 88% (mean = 61%) risk in the least and most dangerous occupant positions. The greater than 50% probability was consistent with the case occupant's AIS 2 hemomediastinum. Stress-based metrics were used to predict injury to the lower leg of the Camry case occupant. The regional-level injury metrics evaluated for the Cobalt case occupant indicated a low risk of injury; however, strain-based injury metrics better predicted pulmonary contusion. Approximately 49% of the Cobalt occupant's left lung was contused, though the baseline simulation predicted 40.5% of the lung to be injured. A method to compute injury metrics and risks as functions of precrash occupant position was developed and applied to 2 CIREN MVC FE reconstructions. The reconstruction process allows for quantification of the sensitivity and uncertainty of the injury risk predictions based on occupant position to further understand important factors that lead to more severe MVC injuries.
Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics
Laney, Daniel; Langer, Steven; Weber, Christopher; ...
2014-01-01
This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
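The evaluation pattern (compress, decompress, then check a physics-based quantity rather than a generic signal-processing error) can be mocked up as follows; uniform quantization stands in for a real compressor, and the "conserved mass" check is an illustrative physics metric, not one of the paper's.

```python
import numpy as np

def quantize(field, bits):
    """Crude lossy-compression stand-in: uniform quantization to `bits` bits.
    Real floating-point compressors behave differently, but the evaluation
    pattern (compress, decompress, check physics) is the same."""
    lo, hi = field.min(), field.max()
    levels = 2**bits - 1
    q = np.round((field - lo) / (hi - lo) * levels)
    return q / levels * (hi - lo) + lo

# Physics-based check: does compression change a conserved quantity (total mass)
# or the worst pointwise value by more than a small tolerance?
rng = np.random.default_rng(8)
density = np.abs(rng.normal(1.0, 0.2, (128, 128)))
for bits in (16, 12, 8, 6):
    d_c = quantize(density, bits)
    mass_err = abs(d_c.sum() - density.sum()) / density.sum()
    max_err = np.max(np.abs(d_c - density)) / density.max()
    print(f"{bits} bits: relative mass error {mass_err:.2e}, max point error {max_err:.2e}")
```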
Chen, Qing; Zhang, Jinxiu; Hu, Ze
2017-01-01
This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The periodicity and predictability of the satellites’ relative positions are built into the link cost metric, which gives a selection criterion for choosing the most reliable data routing paths. Also, a cooperative work model with reliability is proposed for the situation of emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCN scenario is tested through numerical simulations of topology stability in terms of average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime. PMID:28241474
Can rodents conceive hyperbolic spaces?
Urdapilleta, Eugenio; Troiani, Francesca; Stella, Federico; Treves, Alessandro
2015-01-01
The grid cells discovered in the rodent medial entorhinal cortex have been proposed to provide a metric for Euclidean space, possibly even hardwired in the embryo. Yet, one class of models describing the formation of grid unit selectivity is entirely based on developmental self-organization, and as such it predicts that the metric it expresses should reflect the environment to which the animal has adapted. We show that, according to self-organizing models, if raised in a non-Euclidean hyperbolic cage rats should be able to form hyperbolic grids. For a given range of grid spacing relative to the radius of negative curvature of the hyperbolic surface, such grids are predicted to appear as multi-peaked firing maps, in which each peak has seven neighbours instead of the Euclidean six, a prediction that can be tested in experiments. We thus demonstrate that a useful universal neuronal metric, in the sense of a multi-scale ruler and compass that remain unaltered when changing environments, can be extended to other than the standard Euclidean plane. PMID:25948611
Teixeira, Andreia Sofia; Monteiro, Pedro T; Carriço, João A; Ramirez, Mário; Francisco, Alexandre P
2015-01-01
Trees, including minimum spanning trees (MSTs), are commonly used in phylogenetic studies. But, for the research community, it may be unclear that the presented tree is just a hypothesis, chosen from among many possible alternatives. In this scenario, it is important to quantify our confidence in both the trees and the branches/edges included in such trees. In this paper, we address this problem for MSTs by introducing a new edge betweenness metric for undirected and weighted graphs. This spanning edge betweenness metric is defined as the fraction of equivalent MSTs where a given edge is present. The metric provides a per-edge statistic that is similar to that of the bootstrap approach frequently used in phylogenetics to support the grouping of taxa. We provide methods for the exact computation of this metric based on the well-known Kirchhoff's matrix tree theorem. Moreover, we implement and make available a module for the PHYLOViZ software and evaluate the proposed metric concerning both effectiveness and computational performance. Analysis of trees generated using multilocus sequence typing (MLST) data and the goeBURST algorithm revealed that the space of possible MSTs in real data sets is extremely large. Selection of the edge to be represented using bootstrap could lead to unreliable results, since alternative edges are present in the same fraction of equivalent MSTs. The choice of the MST to be presented results from criteria implemented in the algorithm that must be based on biologically plausible models.
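The paper computes the metric exactly via Kirchhoff's matrix tree theorem; as a rough cross-check one can also estimate it by Monte Carlo, breaking ties among equal-weight edges with a tiny random jitter and counting how often each edge appears in the resulting MSTs. The networkx-based sketch below is that approximation, not the exact method.

```python
import random
import networkx as nx

def spanning_edge_betweenness_mc(G, n_samples=2000, jitter=1e-9, seed=0):
    """Monte Carlo estimate of the fraction of equivalent MSTs containing each
    edge: ties among equal-weight edges are broken with a tiny random jitter
    before every MST computation.  The paper instead computes this fraction
    exactly via Kirchhoff's matrix tree theorem."""
    rng = random.Random(seed)
    counts = {frozenset(e): 0 for e in G.edges()}
    for _ in range(n_samples):
        H = G.copy()
        for _, _, data in H.edges(data=True):
            data["w"] = data.get("weight", 1.0) + jitter * rng.random()
        T = nx.minimum_spanning_tree(H, weight="w")
        for e in T.edges():
            counts[frozenset(e)] += 1
    return {tuple(sorted(e)): c / n_samples for e, c in counts.items()}

# A triangle of unit-weight edges has three equivalent MSTs, so each edge
# should be present in roughly 2/3 of them.
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)])
for edge, frac in spanning_edge_betweenness_mc(G).items():
    print(edge, round(frac, 2))
```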
NASA Astrophysics Data System (ADS)
Russell, J. L.; Sarmiento, J. L.
2017-12-01
The Southern Ocean is central to the climate's response to increasing levels of atmospheric greenhouse gases as it ventilates a large fraction of the global ocean volume. Global coupled climate models and earth system models, however, vary widely in their simulations of the Southern Ocean and its role in, and response to, the ongoing anthropogenic forcing. Due to its complex water-mass structure and dynamics, Southern Ocean carbon and heat uptake depend on a combination of winds, eddies, mixing, buoyancy fluxes and topography. Understanding how the ocean carries heat and carbon into its interior and how the observed wind changes are affecting this uptake is essential to accurately projecting transient climate sensitivity. Observationally-based metrics are critical for discerning processes and mechanisms, and for validating and comparing climate models. As the community shifts toward Earth system models with explicit carbon simulations, more direct observations of important biogeochemical parameters, like those obtained from the biogeochemically-sensored floats that are part of the Southern Ocean Carbon and Climate Observations and Modeling project, are essential. One goal of future observing systems should be to create observationally-based benchmarks that will lead to reducing uncertainties in climate projections, and especially uncertainties related to oceanic heat and carbon uptake.
Commentary on New Metrics, Measures, and Uses for Fluency Data
ERIC Educational Resources Information Center
Christ, Theodore J.; Ardoin, Scott P.
2015-01-01
Fluency and rate-based assessments, such as curriculum-based measurement, are frequently used to screen and evaluate student progress. The application of such measures are especially prevalent within special education and response to intervention models of prevention and early intervention. Although there is an extensive research and professional…
An ice sheet model validation framework for the Greenland ice sheet.
Price, Stephen F; Hoffman, Matthew J; Bonin, Jennifer A; Howat, Ian M; Neumann, Thomas; Saba, Jack; Tezaur, Irina; Guerber, Jeffrey; Chambers, Don P; Evans, Katherine J; Kennedy, Joseph H; Lenaerts, Jan; Lipscomb, William H; Perego, Mauro; Salinger, Andrew G; Tuminaro, Raymond S; van den Broeke, Michiel R; Nowicki, Sophie M J
2017-01-01
We propose a new ice sheet model validation framework - the Cryospheric Model Comparison Tool (CmCt) - that takes advantage of ice sheet altimetry and gravimetry observations collected over the past several decades and is applied here to modeling of the Greenland ice sheet. We use realistic simulations performed with the Community Ice Sheet Model (CISM) along with two idealized, non-dynamic models to demonstrate the framework and its use. Dynamic simulations with CISM are forced from 1991 to 2013 using combinations of reanalysis-based surface mass balance and observations of outlet glacier flux change. We propose and demonstrate qualitative and quantitative metrics for use in evaluating the different model simulations against the observations. We find that the altimetry observations used here are largely ambiguous in terms of their ability to distinguish one simulation from another. Based on basin- and whole-ice-sheet scale metrics, we find that simulations using both idealized conceptual models and dynamic, numerical models provide an equally reasonable representation of the ice sheet surface (mean elevation differences of <1 m). This is likely due to their short period of record, biases inherent to digital elevation models used for model initial conditions, and biases resulting from firn dynamics, which are not explicitly accounted for in the models or observations. On the other hand, we find that the gravimetry observations used here are able to unambiguously distinguish between simulations of varying complexity, and along with the CmCt, can provide a quantitative score for assessing a particular model and/or simulation. The new framework demonstrates that our proposed metrics can distinguish relatively better from relatively worse simulations and that dynamic ice sheet models, when appropriately initialized and forced with the right boundary conditions, demonstrate predictive skill with respect to observed dynamic changes occurring on Greenland over the past few decades. An extensible design will allow for continued use of the CmCt as future altimetry, gravimetry, and other remotely sensed data become available for use in ice sheet model validation.
Landsat phenological metrics and their relation to aboveground carbon in the Brazilian Savanna.
Schwieder, M; Leitão, P J; Pinto, J R R; Teixeira, A M C; Pedroni, F; Sanchez, M; Bustamante, M M; Hostert, P
2018-05-15
The quantification and spatially explicit mapping of carbon stocks in terrestrial ecosystems is important to better understand the global carbon cycle and to monitor and report change processes, especially in the context of international policy mechanisms such as REDD+ or the implementation of Nationally Determined Contributions (NDCs) and the UN Sustainable Development Goals (SDGs). Accurate carbon quantification is still lacking, especially in heterogeneous ecosystems such as savannas, where highly variable vegetation densities occur and strong seasonality hinders consistent data acquisition. To address these challenges, we analyzed the potential of land surface phenological metrics derived from gap-filled 8-day Landsat time series for carbon mapping. We selected three areas located in different subregions of central Brazil, a prominent example of a savanna with significant carbon stocks that has been undergoing extensive land cover conversion. Phenological metrics from the 2014/2015 season were combined with aboveground carbon field samples of cerrado sensu stricto vegetation using Random Forest regression models to map the regional carbon distribution and to analyze the relation between phenological metrics and aboveground carbon. The gap-filling approach made it possible to accurately approximate the original Landsat ETM+ and OLI EVI values and subsequently derive annual phenological metrics. Random Forest model performances varied between the three study areas, with RMSE values of 1.64 t/ha (mean relative RMSE 30%), 2.35 t/ha (46%) and 2.18 t/ha (45%). Comparable relationships between remote-sensing-based land surface phenological metrics and aboveground carbon were observed in all study areas. Aboveground carbon distributions could be mapped and revealed comprehensible spatial patterns. Phenological metrics were derived from 8-day Landsat time series with a spatial resolution that is sufficient to capture gradual changes in carbon stocks of heterogeneous savanna ecosystems. These metrics revealed the relationship between aboveground carbon and the phenology of the observed vegetation. Our results suggest that metrics relating to the seasonal minimum and maximum values were the most influential variables and bear potential to improve spatially explicit mapping approaches in heterogeneous ecosystems, where both spatial and temporal resolutions are critical.
Robotics-based synthesis of human motion.
Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S
2009-01-01
The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization was introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force-generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.
FDTD based model of ISOCT imaging for validation of nanoscale sensitivity (Conference Presentation)
NASA Astrophysics Data System (ADS)
Eid, Aya; Zhang, Di; Yi, Ji; Backman, Vadim
2017-02-01
Many of the earliest structural changes associated with neoplasia occur on the micro and nanometer scale, and thus appear histologically normal. Our group has established Inverse Spectroscopic OCT (ISOCT), a spectral based technique to extract nanoscale sensitive metrics derived from the OCT signal. Thus, there is a need to model light transport through relatively large volumes (< 50 um^3) of media with nanoscale level resolution. Finite Difference Time Domain (FDTD) is an iterative approach which directly solves Maxwell's equations to robustly estimate the electric and magnetic fields propagating through a sample. The sample's refractive index for every spatial voxel and wavelength are specified upon a grid with voxel sizes on the order of λ/20, making it an ideal modelling technique for nanoscale structure analysis. Here, we utilize the FDTD technique to validate the nanoscale sensing ability of ISOCT. The use of FDTD for OCT modelling requires three components: calculating the source beam as it propagates through the optical system, computing the sample's scattered field using FDTD, and finally propagating the scattered field back through the optical system. The principles of Fourier optics are employed to focus this interference field through a 4f optical system and onto the detector. Three-dimensional numerical samples are generated from a given refractive index correlation function with known parameters, and subsequent OCT images and mass density correlation function metrics are computed. We show that while the resolvability of the OCT image remains diffraction limited, spectral analysis allows nanoscale sensitive metrics to be extracted.
NASA Astrophysics Data System (ADS)
de Sanctis, Luca; Galla, Tobias
2009-04-01
We study the effects of bounded confidence thresholds and of interaction and external noise on Axelrod’s model of social influence. Our study is based on a combination of numerical simulations and an integration of the mean-field master equation describing the system in the thermodynamic limit. We find that interaction thresholds affect the system only quantitatively, but that they do not alter the basic phase structure. The known crossover between an ordered and a disordered state in finite systems subject to external noise persists in models with general confidence threshold. Interaction noise here facilitates the dynamics and reduces relaxation times. We also study Axelrod systems with metric features and point out similarities and differences compared to models with nominal features.
Measures and metrics of sustainable diets with a focus on milk, yogurt, and dairy products
Drewnowski, Adam
2018-01-01
The 4 domains of sustainable diets are nutrition, economics, society, and the environment. To be sustainable, foods and food patterns need to be nutrient-rich, affordable, culturally acceptable, and sparing of natural resources and the environment. Each sustainability domain has its own measures and metrics. Nutrient density of foods has been assessed through nutrient profiling models, such as the Nutrient-Rich Foods family of scores. The Food Affordability Index, applied to different food groups, has measured both calories and nutrients per penny (kcal/$). Cultural acceptance measures have been based on relative food consumption frequencies across population groups. Environmental impact of individual foods and composite food patterns has been measured in terms of land, water, and energy use. Greenhouse gas emissions assess the carbon footprint of agricultural food production, processing, and retail. Based on multiple sustainability metrics, milk, yogurt, and other dairy products can be described as nutrient-rich, affordable, acceptable, and appealing. The environmental impact of dairy farming needs to be weighed against the high nutrient density of milk, yogurt, and cheese as compared with some plant-based alternatives. PMID:29206982
Physician-Pharmacist collaboration in a pay for performance healthcare environment.
Farley, T M; Izakovic, M
2015-01-01
Healthcare is becoming more complex and costly in both European (Slovak) and American models. Healthcare in the United States (U.S.) is undergoing a particularly dramatic change. Physician and hospital reimbursement are becoming less procedure focused and increasingly outcome focused. Efforts at Mercy Hospital have shown promise in terms of collaborative team-based care improving performance on glucose control outcome metrics linked to reimbursement. Our performance on the Centers for Medicare and Medicaid Services (CMS) post-operative glucose control metric for cardiac surgery patients increased from a 63.6% pass rate to a 95.1% pass rate after implementing interventions involving physician-pharmacist team-based care. Having a multidisciplinary team that is able to adapt quickly to changing expectations in the healthcare environment has aided our institution. As healthcare becomes increasingly saturated with technology, data and quality metrics, collaborative efforts resulting in increased quality and physician efficiency are desirable. Multidisciplinary collaboration (including physician-pharmacist collaboration) appears to be a viable route to improved performance in an outcome based healthcare system (Fig. 2, Ref. 12).
Photogrammetry using Apollo 16 orbital photography, part B
NASA Technical Reports Server (NTRS)
Wu, S. S. C.; Schafer, F. J.; Jordan, R.; Nakata, G. M.
1972-01-01
Discussion is made of the Apollo 15 and 16 metric and panoramic cameras which provided photographs for accurate topographic portrayal of the lunar surface using photogrammetric methods. Nine stereoscopic models of Apollo 16 metric photographs and three models of panoramic photographs were evaluated photogrammetrically in support of the Apollo 16 geologic investigations. Four of the models were used to collect profile data for crater morphology studies; three models were used to collect evaluation data for the frequency distributions of lunar slopes; one model was used to prepare a map of the Apollo 16 traverse area; and one model was used to determine elevations of the Cayley Formation. The remaining three models were used to test photogrammetric techniques using oblique metric and panoramic camera photographs. Two preliminary contour maps were compiled and a high-oblique metric photograph was rectified.
El Haimar, Amine; Santos, Joost R
2014-03-01
An influenza pandemic is a serious disaster that can pose significant disruptions to the workforce and associated economic sectors. This article examines the impact of an influenza pandemic on workforce availability within an interdependent set of economic sectors. We introduce a simulation model based on the dynamic input-output model to capture the propagation of pandemic consequences through the National Capital Region (NCR). The analysis conducted in this article is based on the 2009 H1N1 pandemic data. Two metrics were used to assess the impacts of the influenza pandemic on the economic sectors: (i) inoperability, which measures the percentage gap between the as-planned output and the actual output of a sector, and (ii) economic loss, which quantifies the associated monetary value of the degraded output. The inoperability and economic loss metrics generate two different rankings of the critical economic sectors. Results show that most of the critical sectors in terms of inoperability are sectors related to hospitals and health-care providers. On the other hand, most of the sectors that are critically ranked in terms of economic loss are sectors with significant total production outputs in the NCR, such as federal government agencies. Therefore, policy recommendations relating to potential mitigation and recovery strategies should take into account the balance between the inoperability and economic loss metrics. © 2013 Society for Risk Analysis.
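As a rough illustration of the two metrics (not the authors' NCR input-output model), inoperability and economic loss can be computed directly from as-planned and actual sector outputs; the sector names and numbers below are hypothetical.

```python
# Hypothetical sector outputs in billions of dollars (not NCR data).
as_planned = {"hospitals": 12.0, "federal_government": 95.0, "retail": 40.0}
actual     = {"hospitals": 10.2, "federal_government": 92.0, "retail": 38.5}

def inoperability(planned, realized):
    """Relative gap between as-planned and actual output of each sector."""
    return {s: (planned[s] - realized[s]) / planned[s] for s in planned}

def economic_loss(planned, realized):
    """Monetary value of the degraded output of each sector."""
    return {s: planned[s] - realized[s] for s in planned}

q = inoperability(as_planned, actual)
loss = economic_loss(as_planned, actual)

# The two metrics rank sectors differently: small sectors can have high
# inoperability but low absolute loss, and vice versa.
print(sorted(q, key=q.get, reverse=True))
print(sorted(loss, key=loss.get, reverse=True))
```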
Real-time video quality monitoring
NASA Astrophysics Data System (ADS)
Liu, Tao; Narvekar, Niranjan; Wang, Beibei; Ding, Ran; Zou, Dekun; Cash, Glenn; Bhagavathy, Sitaram; Bloom, Jeffrey
2011-12-01
The ITU-T Recommendation G.1070 is a standardized opinion model for video telephony applications that uses video bitrate, frame rate, and packet-loss rate to measure video quality. However, this model was originally designed as an offline quality planning tool. It cannot be directly used for quality monitoring, since the above three input parameters are not readily available within a network or at the decoder, and there is considerable room for improving the performance of this quality metric. In this article, we present a real-time video quality monitoring solution based on this Recommendation. We first propose a scheme to efficiently estimate the three parameters from video bitstreams, so that the model can be used as a real-time video quality monitoring tool. Furthermore, an enhanced algorithm based on the G.1070 model that provides more accurate quality prediction is proposed. Finally, to use this metric in real-world applications, we present an emerging example application of real-time quality measurement to the management of transmitted videos, especially those delivered to mobile devices.
A Computational Model Quantifies the Effect of Anatomical Variability on Velopharyngeal Function
Inouye, Joshua M.; Perry, Jamie L.; Lin, Kant Y.
2015-01-01
Purpose This study predicted the effects of velopharyngeal (VP) anatomical parameters on VP function to provide a greater understanding of speech mechanics and aid in the treatment of speech disorders. Method We created a computational model of the VP mechanism using dimensions obtained from magnetic resonance imaging measurements of 10 healthy adults. The model components included the levator veli palatini (LVP), the velum, and the posterior pharyngeal wall, and the simulations were based on material parameters from the literature. The outcome metrics were the VP closure force and LVP muscle activation required to achieve VP closure. Results Our average model compared favorably with experimental data from the literature. Simulations of 1,000 random anatomies reflected the large variability in closure forces observed experimentally. VP distance had the greatest effect on both outcome metrics when considering the observed anatomic variability. Other anatomical parameters were ranked by their predicted influences on the outcome metrics. Conclusions Our results support the implication that interventions for VP dysfunction that decrease anterior to posterior VP portal distance, increase velar length, and/or increase LVP cross-sectional area may be very effective. Future modeling studies will help to further our understanding of speech mechanics and optimize treatment of speech disorders. PMID:26049120
Galaxy rotation curves via conformal factors
NASA Astrophysics Data System (ADS)
Sporea, Ciprian A.; Borowiec, Andrzej; Wojnar, Aneta
2018-04-01
We propose a new formula to explain circular velocity profiles of spiral galaxies obtained from the Starobinsky model in the Palatini formalism. It is based on the assumption that the gravity can be described by two conformally related metrics: one of them is responsible for the measurement of distances, while the other, the so-called dark metric, is responsible for a geodesic equation and therefore can be used for the description of the velocity profile. The formula is tested against a subset of galaxies taken from the HI Nearby Galaxy Survey (THINGS).
Gravitational waves during inflation from a 5D large-scale repulsive gravity model
NASA Astrophysics Data System (ADS)
Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio
2012-10-01
We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves produced during the early inflationary stage, in the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales but also at astrophysical scales, can be derived from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates where gravity becomes repulsive in nature.
Key metrics for HFIR HEU and LEU models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilas, Germina; Betzler, Benjamin R.; Chandler, David
This report compares key metrics for two fuel design models of the High Flux Isotope Reactor (HFIR). The first model represents the highly enriched uranium (HEU) fuel currently in use at HFIR, and the second model considers a low-enriched uranium (LEU) interim design fuel. Except for the fuel region, the two models are consistent, and both include an experiment loading that is representative of HFIR's current operation. The considered key metrics are the neutron flux at the cold source moderator vessel, the mass of 252Cf produced in the flux trap target region as function of cycle time, the fast neutron flux at locations of interest for material irradiation experiments, and the reactor cycle length. These key metrics are a small subset of the overall HFIR performance and safety metrics. They were defined as a means of capturing data essential for HFIR's primary missions, for use in optimization studies assessing the impact of HFIR's conversion from HEU fuel to different types of LEU fuel designs.
Skill Assessment for Coupled Biological/Physical Models of Marine Systems.
Stow, Craig A; Jolliff, Jason; McGillicuddy, Dennis J; Doney, Scott C; Allen, J Icarus; Friedrichs, Marjorie A M; Rose, Kenneth A; Wallhead, Philip
2009-02-20
Coupled biological/physical models of marine systems serve many purposes including the synthesis of information, hypothesis generation, and as a tool for numerical experimentation. However, marine system models are increasingly used for prediction to support high-stakes decision-making. In such applications it is imperative that a rigorous model skill assessment is conducted so that the model's capabilities are tested and understood. Herein, we review several metrics and approaches useful to evaluate model skill. The definition of skill and the determination of the skill level necessary for a given application is context specific and no single metric is likely to reveal all aspects of model skill. Thus, we recommend the use of several metrics, in concert, to provide a more thorough appraisal. The routine application and presentation of rigorous skill assessment metrics will also serve the broader interests of the modeling community, ultimately resulting in improved forecasting abilities as well as helping us recognize our limitations.
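Typical skill metrics of this kind can be computed from paired observation and model series; the sketch below shows a few common ones (root mean squared error, bias, correlation, and a Nash-Sutcliffe-type modeling efficiency) under the assumption of aligned, gap-free series. It is a generic illustration, not the specific metric set recommended in the paper, and the data are invented.

```python
import numpy as np

def skill_metrics(obs, mod):
    """A few common model-data skill metrics for paired series."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    resid = mod - obs
    return {
        "rmse": float(np.sqrt(np.mean(resid**2))),
        "bias": float(np.mean(resid)),
        "corr": float(np.corrcoef(obs, mod)[0, 1]),
        # Modeling efficiency: 1 is perfect, 0 is no better than the obs mean.
        "efficiency": float(1.0 - np.sum(resid**2) /
                            np.sum((obs - obs.mean())**2)),
    }

# Hypothetical chlorophyll observations vs. model output (mg m^-3)
obs = [0.8, 1.2, 2.5, 3.1, 2.0, 1.1]
mod = [0.9, 1.0, 2.2, 3.5, 2.4, 1.0]
print(skill_metrics(obs, mod))
```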
NASA Astrophysics Data System (ADS)
Kastor, David; Ray, Sourya; Traschen, Jennie
2017-10-01
We study the problem of finding brane-like solutions to Lovelock gravity, adopting a general approach to establish conditions that a lower dimensional base metric must satisfy in order that a solution to a given Lovelock theory can be constructed in one higher dimension. We find that for Lovelock theories with generic values of the coupling constants, the Lovelock tensors (higher curvature generalizations of the Einstein tensor) of the base metric must all be proportional to the metric. Hence, allowed base metrics form a subclass of Einstein metrics. This subclass includes so-called ‘universal metrics’, which have been previously investigated as solutions to quantum-corrected field equations. For specially tuned values of the Lovelock couplings, we find that the Lovelock tensors of the base metric need to satisfy fewer constraints. For example, for Lovelock theories with a unique vacuum there is only a single such constraint, a case previously identified in the literature, and brane solutions can be straightforwardly constructed.
McCabe, Collin M; Nunn, Charles L
2018-01-01
The transmission of infectious disease through a population is often modeled assuming that interactions occur randomly in groups, with all individuals potentially interacting with all other individuals at an equal rate. However, it is well known that pairs of individuals vary in their degree of contact. Here, we propose a measure to account for such heterogeneity: effective network size (ENS), which refers to the size of a maximally complete network (i.e., unstructured, where all individuals interact with all others equally) that corresponds to the outbreak characteristics of a given heterogeneous, structured network. We simulated susceptible-infected (SI) and susceptible-infected-recovered (SIR) models on maximally complete networks to produce idealized outbreak duration distributions for a disease on a network of a given size. We also simulated the transmission of these same diseases on random structured networks and then used the resulting outbreak duration distributions to predict the ENS for the group or population. We provide the methods to reproduce these analyses in a public R package, "enss." Outbreak durations of simulations on randomly structured networks were more variable than those on complete networks, but tended to have similar mean durations of disease spread. We then applied our novel metric to empirical primate networks taken from the literature and compared the information represented by our ENSs to that by other established social network metrics. In AICc model comparison frameworks, group size and mean distance proved to be the metrics most consistently associated with ENS for SI simulations, while group size, centralization, and modularity were most consistently associated with ENS for SIR simulations. In all cases, ENS was shown to be associated with at least two other independent metrics, supporting its use as a novel metric. Overall, our study provides a proof of concept for simulation-based approaches toward constructing metrics of ENS, while also revealing the conditions under which this approach is most promising.
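A minimal sketch of the simulation idea (not the "enss" package itself): run a discrete-time SI process on complete networks of different sizes to build reference outbreak-duration curves, then report the complete-network size whose mean duration best matches that of a structured network. The transmission probability, network sizes, and the use of mean duration rather than the full duration distribution are simplifying assumptions.

```python
import random
import networkx as nx

def si_outbreak_duration(G, beta=0.2, rng=None):
    """Discrete-time SI model: each infected-susceptible contact transmits
    with probability `beta` per step; returns the number of steps until no
    susceptible neighbours of infected nodes remain."""
    rng = rng or random.Random()
    infected = {rng.choice(list(G.nodes()))}
    steps = 0
    while True:
        frontier = {v for u in infected for v in G.neighbors(u)} - infected
        if not frontier:
            return steps
        new = set()
        for u in infected:
            for v in G.neighbors(u):
                if v not in infected and v not in new and rng.random() < beta:
                    new.add(v)
        infected |= new
        steps += 1

def mean_duration(G, runs=200, seed=1, **kw):
    rng = random.Random(seed)
    return sum(si_outbreak_duration(G, rng=rng, **kw) for _ in range(runs)) / runs

def effective_network_size(G_structured, sizes=range(5, 41, 5)):
    """Complete-network size whose mean SI outbreak duration is closest
    to that of the structured network."""
    target = mean_duration(G_structured)
    ref = {n: mean_duration(nx.complete_graph(n)) for n in sizes}
    return min(ref, key=lambda n: abs(ref[n] - target))

# Hypothetical structured contact network: 30 nodes, sparse random contacts
G = nx.erdos_renyi_graph(30, 0.15, seed=42)
print(effective_network_size(G))
```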
Uncertainty in temperature response of current consumption-based emissions estimates
NASA Astrophysics Data System (ADS)
Karstensen, J.; Peters, G. P.; Andrew, R. M.
2015-05-01
Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties along the entire causal chain. We estimate uncertainties in economic data, multi-pollutant emission statistics, and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. Based on our assumptions, which exclude correlations in the economic data, the uncertainty in the economic data appears to have a relatively small impact on uncertainty at the national level in comparison to emissions and metric uncertainty. Much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production-based emissions since the largest uncertainties are due to metric and emissions which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties, due to changes of pollutant compositions. We find global sectoral consumption uncertainties in the range of ±10 to ±27 % using the Global Temperature Potential with a 50-year time horizon, with metric uncertainties dominating. National-level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants. The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9 to ±25 %, with metric and emission uncertainties contributing similarly. The absolute global temperature potential (AGTP) with a 50-year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.
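A stripped-down illustration of the Monte Carlo propagation idea (not the study's model): sample uncertain emissions and uncertain metric factors, combine them, and report percentile ranges of the resulting temperature-change proxy. All distributions and numbers below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical national CO2 and CH4 emissions (Mt) with relative 1-sigma errors
co2 = rng.normal(500.0, 0.05 * 500.0, n)      # 5% uncertainty
ch4 = rng.normal(20.0, 0.20 * 20.0, n)        # 20% uncertainty

# Hypothetical metric factors converting emissions to a temperature proxy;
# their spread often dominates the final uncertainty.
m_co2 = rng.normal(1.0, 0.10, n)
m_ch4 = rng.normal(4.0, 1.50, n)

temp_proxy = co2 * m_co2 + ch4 * m_ch4

lo, med, hi = np.percentile(temp_proxy, [5, 50, 95])
print(f"median {med:.0f}, 90% range ({lo:.0f}, {hi:.0f}), "
      f"roughly +/-{100 * (hi - lo) / (2 * med):.0f}% of the median")
```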
Metric Evaluation Pipeline for 3d Modeling of Urban Scenes
NASA Astrophysics Data System (ADS)
Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.
2017-05-01
Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state-of-the-art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high-resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline, developed as publicly available open source software, to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.
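Completeness and correctness of a reconstructed point cloud against lidar ground truth can be sketched with nearest-neighbor distance thresholds, as below. This is a generic illustration rather than the released pipeline, and the tolerance value and synthetic clouds are arbitrary assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def completeness_correctness(model_pts, truth_pts, tol=1.0):
    """Completeness: fraction of ground-truth points with a model point
    within `tol`. Correctness: fraction of model points with a
    ground-truth point within `tol`."""
    d_truth_to_model, _ = cKDTree(model_pts).query(truth_pts)
    d_model_to_truth, _ = cKDTree(truth_pts).query(model_pts)
    return (float(np.mean(d_truth_to_model <= tol)),
            float(np.mean(d_model_to_truth <= tol)))

# Hypothetical clouds: ground truth and a noisy, partially missing model
rng = np.random.default_rng(1)
truth = rng.uniform(0, 100, size=(5000, 3))
model = truth[:4000] + rng.normal(0, 0.3, size=(4000, 3))

print(completeness_correctness(model, truth, tol=1.0))
```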
An alternative mechanism for international health aid: evaluating a Global Social Protection Fund.
Basu, Sanjay; Stuckler, David; McKee, Martin
2014-01-01
Several public health groups have called for the creation of a global fund for 'social protection'-a fund that produces the international equivalent of domestic tax collection and safety net systems to finance care for the ill and disabled and related health costs. All participating countries would pay into a global fund based on a metric of their ability to pay and withdraw from the common pool based on a metric of their need for funds. We assessed how alternative strategies and metrics by which to operate such a fund would affect its size and impact on health system financing. Using a mathematical model, we found that common targets for health funding in low-income countries require higher levels of aid expenditures than presently distributed. Some mechanisms exist that may incentivize reduction of domestic health inequalities, and direct most funds towards the poorest populations. Payments from high-income countries are also likely to decrease over time as middle-income countries' economies grow.
Current-flow efficiency of networks
NASA Astrophysics Data System (ADS)
Liu, Kai; Yan, Xiaoyong
2018-02-01
Many real-world networks, from infrastructure networks to social and communication networks, can be formulated as flow networks. How to realistically measure the transport efficiency of these networks is of fundamental importance. The shortest-path-based efficiency measurement has limitations, as it assumes that flow travels only along those shortest paths. Here, we propose a new metric named current-flow efficiency, in which we calculate the average reciprocal effective resistance between all pairs of nodes in the network. This metric takes the multipath effect into consideration and is more suitable for measuring the efficiency of many real-world flow equilibrium networks. Moreover, this metric can handle a disconnected graph and can thus be used to identify critical nodes and edges from the efficiency-loss perspective. We further analyze how the topological structure affects the current-flow efficiency of networks based on some model and real-world networks. Our results enable a better understanding of flow networks and shed light on the design and improvement of such networks with higher transport efficiency.
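The proposed metric can be sketched directly from its definition: effective resistances follow from the Moore-Penrose pseudoinverse of each component's graph Laplacian, and the efficiency is the average reciprocal effective resistance over all node pairs, with pairs in different components contributing zero. The code below is a straightforward reading of that definition, not the authors' implementation.

```python
import numpy as np
import networkx as nx

def current_flow_efficiency(G):
    """Average reciprocal effective resistance over all node pairs.
    Pairs in different components have infinite effective resistance
    and contribute zero, so disconnected graphs are handled naturally."""
    n = G.number_of_nodes()
    total = 0.0
    for comp in nx.connected_components(G):
        nodes = list(comp)
        if len(nodes) < 2:
            continue
        L = nx.laplacian_matrix(G.subgraph(nodes),
                                nodelist=nodes).toarray().astype(float)
        Lp = np.linalg.pinv(L)                 # Moore-Penrose pseudoinverse
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                r = Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j]
                total += 1.0 / r
    return 2.0 * total / (n * (n - 1))

# Efficiency drops when an edge is removed from a small grid network
G = nx.grid_2d_graph(4, 4)
print(current_flow_efficiency(G))
G.remove_edge((1, 1), (1, 2))
print(current_flow_efficiency(G))
```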
An Approach for the Assessment of System Upset Resilience
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2013-01-01
This report describes an approach for the assessment of upset resilience that is applicable to systems in general, including safety-critical, real-time systems. For this work, resilience is defined as the ability to preserve and restore service availability and integrity under stated conditions of configuration, functional inputs and environmental conditions. To enable a quantitative approach, we define novel system service degradation metrics and propose a new mathematical definition of resilience. These behavioral-level metrics are based on the fundamental service classification criteria of correctness, detectability, symmetry and persistence. This approach consists of a Monte-Carlo-based stimulus injection experiment, on a physical implementation or an error-propagation model of a system, to generate a system response set that can be characterized in terms of dimensional error metrics and integrated to form an overall measure of resilience. We expect this approach to be helpful in gaining insight into the error containment and repair capabilities of systems for a wide range of conditions.
Node-based measures of connectivity in genetic networks.
Koen, Erin L; Bowman, Jeff; Wilson, Paul J
2016-01-01
At-site environmental conditions can have strong influences on genetic connectivity, and in particular on the immigration and settlement phases of dispersal. However, at-site processes are rarely explored in landscape genetic analyses. Networks can facilitate the study of at-site processes, where network nodes are used to model site-level effects. We used simulated genetic networks to compare and contrast the performance of 7 node-based (as opposed to edge-based) genetic connectivity metrics. We simulated increasing node connectivity by varying migration in two ways: we increased the number of migrants moving between a focal node and a set number of recipient nodes, and we increased the number of recipient nodes receiving a set number of migrants. We found that two metrics in particular, the average edge weight and the average inverse edge weight, varied linearly with simulated connectivity. Conversely, node degree was not a good measure of connectivity. We demonstrated the use of average inverse edge weight to describe the influence of at-site habitat characteristics on genetic connectivity of 653 American martens (Martes americana) in Ontario, Canada. We found that highly connected nodes had high habitat quality for marten (deep snow and high proportions of coniferous and mature forest) and were farther from the range edge. We recommend the use of node-based genetic connectivity metrics, in particular, average edge weight or average inverse edge weight, to model the influences of at-site habitat conditions on the immigration and settlement phases of dispersal. © 2015 John Wiley & Sons Ltd.
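For a weighted genetic network, the node-based metrics compared in the study reduce to simple per-node summaries of incident edges. The sketch below shows degree, average edge weight, and average inverse edge weight on a hypothetical toy network (not the marten data), with edge weights standing in for pairwise genetic measures.

```python
import networkx as nx

def node_connectivity_metrics(G, weight="weight"):
    """Per-node summaries of incident edge weights: degree, average
    edge weight, and average inverse edge weight."""
    out = {}
    for v in G.nodes():
        w = [d[weight] for _, _, d in G.edges(v, data=True)]
        out[v] = {
            "degree": len(w),
            "avg_edge_weight": sum(w) / len(w) if w else 0.0,
            "avg_inverse_edge_weight": (sum(1.0 / x for x in w) / len(w)
                                        if w else 0.0),
        }
    return out

# Toy genetic network with hypothetical pairwise weights
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 0.10), ("A", "C", 0.40),
                           ("B", "C", 0.25), ("C", "D", 0.80)])
print(node_connectivity_metrics(G))
```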
Image Quality Assessment Based on Local Linear Information and Distortion-Specific Compensation.
Wang, Hanli; Fu, Jie; Lin, Weisi; Hu, Sudeng; Kuo, C-C Jay; Zuo, Lingxuan
2016-12-14
Image Quality Assessment (IQA) is a fundamental yet constantly developing task for computer vision and image processing. Most IQA evaluation mechanisms are based on the pertinence of subjective and objective estimation. Each image distortion type has its own property correlated with human perception. However, this intrinsic property may not be fully exploited by existing IQA methods. In this paper, we make two main contributions to the IQA field. First, a novel IQA method is developed based on a local linear model that examines the distortion between the reference and the distorted images for better alignment with human visual experience. Second, a distortion-specific compensation strategy is proposed to offset the negative effect on IQA modeling caused by different image distortion types. These score offsets are learned from several known distortion types. Furthermore, for an image with an unknown distortion type, a Convolutional Neural Network (CNN) based method is proposed to compute the score offset automatically. Finally, an integrated IQA metric is proposed by combining the aforementioned two ideas. Extensive experiments are performed to verify the proposed IQA metric, which demonstrate that the local linear model is useful in human perception modeling, especially for individual image distortion, and the overall IQA method outperforms several state-of-the-art IQA approaches.
A Neighborhood Wealth Metric for Use in Health Studies
Moudon, Anne Vernez; Cook, Andrea J.; Ulmer, Jared; Hurvitz, Philip M.; Drewnowski, Adam
2011-01-01
Background Measures of neighborhood deprivation used in health research are typically based on conventional area-based SES. Purpose The aim of this study is to examine new data and measures of SES for use in health research. Specifically, assessed property values are introduced as a new individual-level metric of wealth and tested for their ability to substitute for conventional area-based SES as measures of neighborhood deprivation. Methods The analysis was conducted in 2010 using data from 1922 participants in the 2008– 2009 survey of the Seattle Obesity Study (SOS). It compared the relative strength of the association between the individual-level neighborhood wealth metric (assessed property values) and area-level SES measures (including education, income, and percentage above poverty as single variables, and as the composite Singh index) on the binary outcome fair/poor general health status. Analyses were adjusted for gender, categoric age, race, employment status, home ownership, and household income. Results The neighborhood wealth measure was more predictive of fair/poor health status than area-level SES measures, calculated either as single variables or as indices (lower DIC measures for all models). The odds of having a fair/poor health status decreased by 0.85 [0.77, 0.93] per $50,000 increase in neighborhood property values after adjusting for individual-level SES measures. Conclusions The proposed individual-level metric of neighborhood wealth, if replicated in other areas, could replace area-based SES measures, thus simplifying analyses of contextual effects on health. PMID:21665069
Multi-metric calibration of hydrological model to capture overall flow regimes
NASA Astrophysics Data System (ADS)
Zhang, Yongyong; Shao, Quanxi; Zhang, Shifeng; Zhai, Xiaoyan; She, Dunxian
2016-08-01
Flow regimes (e.g., magnitude, frequency, variation, duration, timing and rate of change) play a critical role in water supply and flood control, environmental processes, and biodiversity and life history patterns in the aquatic ecosystem. The traditional flow-magnitude-oriented calibration of hydrological models is usually inadequate to capture all the characteristics of observed flow regimes. In this study, we simulated multiple flow regime metrics simultaneously by coupling a distributed hydrological model with an equally weighted multi-objective optimization algorithm. Two headwater watersheds in the arid Hexi Corridor were selected for the case study. Sixteen metrics were selected as optimization objectives, representing the major characteristics of flow regimes. Model performance was compared with that of the single-objective calibration. Results showed that most metrics were better simulated by the multi-objective approach than by the single-objective calibration, especially the low and high flow magnitudes, frequency and variation, duration, maximum flow timing and rate of change. However, the model performance for middle flow magnitude was not significantly improved because this metric is usually well captured by single-objective calibration. The timing of minimum flow was poorly predicted by both the multi-metric and single-objective calibrations due to uncertainties in model structure and input data. The sensitive parameter values of the hydrological model changed remarkably, and the hydrological processes simulated by the multi-metric calibration became more reliable because more flow characteristics were considered. The study is expected to provide more detailed flow information through hydrological simulation for integrated water resources management, and to improve the simulation of overall flow regimes.
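The equally weighted aggregation of metric errors can be sketched as a single scalar objective passed to any optimizer. The normalization, the error measure, and the dummy model below are assumptions for illustration, not the exact formulation used with the distributed hydrological model.

```python
import numpy as np

def multi_metric_objective(param_vector, simulate, observed_metrics):
    """Equally weighted mean of relative errors over flow-regime metrics.
    `simulate` maps a parameter vector to a dict of simulated metrics;
    `observed_metrics` holds the corresponding observed values."""
    simulated = simulate(param_vector)
    errors = [abs(simulated[k] - observed_metrics[k]) /
              (abs(observed_metrics[k]) + 1e-9)
              for k in observed_metrics]
    return float(np.mean(errors))       # scalar to minimize

# Hypothetical usage with a dummy "model" of three flow-regime metrics
obs = {"q5_low_flow": 1.2, "q95_high_flow": 85.0, "baseflow_index": 0.55}

def dummy_simulate(p):
    a, b = p
    return {"q5_low_flow": 1.2 * a, "q95_high_flow": 80.0 * b,
            "baseflow_index": 0.5 * a}

print(multi_metric_objective([1.05, 1.02], dummy_simulate, obs))
```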
Lopes, Julio Cesar Dias; Dos Santos, Fábio Mendes; Martins-José, Andrelly; Augustyns, Koen; De Winter, Hans
2017-01-01
A new metric for the evaluation of model performance in the field of virtual screening and quantitative structure-activity relationship applications is described. This metric has been termed the power metric and is defined as the true positive rate divided by the sum of the true positive and false positive rates, for a given cutoff threshold. The performance of this metric is compared with alternative metrics such as the enrichment factor, the relative enrichment factor, the receiver operating curve enrichment factor, the correct classification rate, Matthews correlation coefficient and Cohen's kappa coefficient. The performance of this new metric is found to be quite robust with respect to variations in the applied cutoff threshold and in the ratio of the number of active compounds to the total number of compounds, while at the same time being sensitive to variations in model quality. It possesses the correct characteristics for its application in early-recognition virtual screening problems.
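Following the definition in the abstract, the power metric at a given score cutoff is simply TPR / (TPR + FPR). A minimal sketch, with invented scores and activity labels, is:

```python
def power_metric(scores, labels, cutoff):
    """Power metric = TPR / (TPR + FPR) at a given cutoff.
    `labels` are 1 for active compounds, 0 for inactives."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    p = sum(labels)
    n = len(labels) - p
    tpr = tp / p if p else 0.0
    fpr = fp / n if n else 0.0
    return tpr / (tpr + fpr) if (tpr + fpr) else 0.0

# Invented virtual-screening scores: higher score = predicted more active
scores = [0.95, 0.90, 0.80, 0.75, 0.60, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    0,    0,    1,    0]
print(power_metric(scores, labels, cutoff=0.7))   # TPR=0.75, FPR=0.25 -> 0.75
```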
Narayan, Anand; Cinelli, Christina; Carrino, John A; Nagy, Paul; Coresh, Josef; Riese, Victoria G; Durand, Daniel J
2015-11-01
As the US health care system transitions toward value-based reimbursement, there is an increasing need for metrics to quantify health care quality. Within radiology, many quality metrics are in use, and still more have been proposed, but there have been limited attempts to systematically inventory these measures and classify them using a standard framework. The purpose of this study was to develop an exhaustive inventory of public and private sector imaging quality metrics classified according to the classic Donabedian framework (structure, process, and outcome). A systematic review was performed in which eligibility criteria included published articles (from 2000 onward) from multiple databases. Studies were double-read, with discrepancies resolved by consensus. For the radiology benefit management group (RBM) survey, the six known companies nationally were surveyed. Outcome measures were organized on the basis of standard categories (structure, process, and outcome) and reported using Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The search strategy yielded 1,816 citations; review yielded 110 reports (29 included for final analysis). Three of six RBMs (50%) responded to the survey; the websites of the other RBMs were searched for additional metrics. Seventy-five unique metrics were reported: 35 structure (46%), 20 outcome (27%), and 20 process (27%) metrics. For RBMs, 35 metrics were reported: 27 structure (77%), 4 process (11%), and 4 outcome (11%) metrics. The most commonly cited structure, process, and outcome metrics included ACR accreditation (37%), ACR Appropriateness Criteria (85%), and peer review (95%), respectively. Imaging quality metrics are more likely to be structural (46%) than process (27%) or outcome (27%) based (P < .05). As national value-based reimbursement programs increasingly emphasize outcome-based metrics, radiologists must keep pace by developing the data infrastructure required to collect outcome-based quality metrics. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Butler, Richard J; Brusatte, Stephen L; Andres, Brian; Benson, Roger B J
2012-01-01
A fundamental contribution of paleobiology to macroevolutionary theory has been the illumination of deep time patterns of diversification. However, recent work has suggested that taxonomic diversity counts taken from the fossil record may be strongly biased by uneven spatiotemporal sampling. Although morphological diversity (disparity) is also frequently used to examine evolutionary radiations, no empirical work has yet addressed how disparity might be affected by uneven fossil record sampling. Here, we use pterosaurs (Mesozoic flying reptiles) as an exemplar group to address this problem. We calculate multiple disparity metrics based upon a comprehensive anatomical dataset including a novel phylogenetic correction for missing data, statistically compare these metrics to four geological sampling proxies, and use multiple regression modeling to assess the importance of uneven sampling and exceptional fossil deposits (Lagerstätten). We find that range-based disparity metrics are strongly affected by uneven fossil record sampling, and should therefore be interpreted cautiously. The robustness of variance-based metrics to sample size and geological sampling suggests that they can be more confidently interpreted as reflecting true biological signals. In addition, our results highlight the problem of high levels of missing data for disparity analyses, indicating a pressing need for more theoretical and empirical work. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
Impacts of Land Use/Cover Uncertainty on Predictions of Ecologically Relevant Flow Metrics
NASA Astrophysics Data System (ADS)
Kalin, L.; Dosdogru, F.
2016-12-01
Streamflow regimes are crucial parts of the ecological integrity of river systems. Although species are adapted to natural flow variability, permanent changes in flow regimes resulting from alterations in land use/cover of a watershed can adversely impact ecosystem health. This study assessed the impacts of land use/cover (LULC) changes on ecologically relevant flow (ERF) metrics in the rapidly urbanizing upper Cahaba River basin in north-central Alabama. The Cahaba River is the longest free-flowing river in the state of Alabama and is identified by the Nature Conservancy as one of only eight "Hotspots of Biodiversity" in the contiguous United States. The Cahaba River and its major tributaries support 69 rare and imperiled species, making it one of the most diverse aquatic ecosystems in the United States. The SWAT model was used to generate daily streamflows, which were then fed into the Indicators of Hydrological Alterations (IHA) software to generate 38 key ERF metrics that capture high, low, and median flow, as well as flashiness, which are known to have significant impacts on flora and fauna. SWAT was calibrated and validated twice with two different sources of LULC. Model performances during calibration and validation were very good and very similar for both LULC sources, and the flow duration curves generated with each LULC source also look very similar. However, when the ERF metrics were compared, significant differences were observed, signifying the importance of the LULC source. The biggest differences were in October-December low flows, rise and fall rates of daily flows, annual maximum flow, and average flow during October. This study shows that although model calibration can compensate for differences in LULC sources, when it comes to key ERF metrics the use of the most reliable LULC source is essential.
Models and metrics for software management and engineering
NASA Technical Reports Server (NTRS)
Basili, V. R.
1988-01-01
This paper attempts to characterize and present a state-of-the-art view of several quantitative models and metrics of the software life cycle. These models and metrics can be used to aid in managing and engineering software projects. They deal with various aspects of the software process and product, including resource allocation and estimation, changes and errors, size, complexity and reliability. Some indication is given of the extent to which the various models have been used and the success they have achieved.
CCl4 is a common environmental contaminant in water and superfund sites, and a model liver toxicant. One application of PBPK models used in risk assessment is simulation of internal dose for the metric involved with toxicity, particularly for different routes of exposure. Time-co...
Citizen science: A new perspective to advance spatial pattern evaluation in hydrology
Stisen, Simon
2017-01-01
Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, simple visual comparison of spatial patterns is a task where humans are often considered to be more reliable than computer algorithms. In practice, however, science still largely depends on computer-based solutions, which offer benefits such as speed and the possibility of automating processes. Nevertheless, human vision can be harnessed to evaluate the reliability of algorithms that are tailored to quantify similarity in spatial patterns. We established a citizen science project that employs human perception to rate similarity and dissimilarity between simulated spatial patterns of several scenarios of a hydrological catchment model. In total, more than 2,500 volunteers provided over 43,000 classifications of 1,095 individual subjects. We investigate the capability of a set of advanced statistical performance metrics to mimic the human perception of similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating human perception, but they clearly provide auxiliary information that is valuable for model diagnostics. The metrics differ markedly in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric. The obtained dataset provides an insightful benchmark for the community to test novel spatial metrics. PMID:28558050
Performance metrics for the evaluation of hyperspectral chemical identification systems
NASA Astrophysics Data System (ADS)
Truslow, Eric; Golowich, Steven; Manolakis, Dimitris; Ingle, Vinay
2016-02-01
Remote sensing of chemical vapor plumes is a difficult but important task for many military and civilian applications. Hyperspectral sensors operating in the long-wave infrared regime have well-demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis testing problem which standard detection metrics do not fully describe. We propose using an additional performance metric for identification based on the so-called Dice index. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and identification metric. Using the proposed metrics, we demonstrate that the intuitive system design of a detector bank followed by an identifier is indeed justified when incorporating performance information beyond the standard detection metrics.
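The Dice index used for the identification metric can be sketched on sets of declared versus true plume constituents; the chemical names below are placeholders, not entries from a real spectral library.

```python
def dice_index(identified, truth):
    """Dice index = 2|A ∩ B| / (|A| + |B|) between the set of chemicals
    reported by the identifier and the true plume constituents."""
    identified, truth = set(identified), set(truth)
    if not identified and not truth:
        return 1.0
    return 2 * len(identified & truth) / (len(identified) + len(truth))

# Placeholder chemical names
truth = {"SF6", "NH3", "TEP"}
reported = {"SF6", "NH3", "DMMP"}          # one miss, one false alarm
print(dice_index(reported, truth))          # 2*2 / (3+3) ≈ 0.667
```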
NASA Astrophysics Data System (ADS)
Lischeid, G.; Hohenbrink, T.; Schindler, U.
2012-04-01
Hydrology is based on the observation that catchments process input signals, e.g., precipitation, in a highly deterministic way. Thus, the Darcy or the Richards equation can be applied to model water fluxes in the saturated or vadose zone, respectively. Soils and aquifers usually exhibit substantial spatial heterogeneities at different scales that can, in principle, be represented by corresponding parameterisations of the models. In practice, however, data are hardly available at the required spatial resolution, and accounting for observed heterogeneities of soil and aquifer structure renders models very time- and CPU-consuming. We hypothesize that the intrinsic dimensionality of soil hydrological processes, which is induced by spatial heterogeneities, actually is very low and that soil hydrological processes in heterogeneous soils follow approximately the same trajectory. That is, the way the soil transforms hydrological input signals is the same for different soil textures and structures; different soils differ only in the extent of transformation of input signals. In a first step, we analysed the output of a soil hydrological model, based on the Richards equation, for homogeneous soils down to 5 m depth for different soil textures. A matrix of time series of soil matrix potential and soil water content at 10 cm depth intervals was set up. The intrinsic dimensionality of that matrix was assessed using the Correlation Dimension and a non-linear principal component approach. The latter provided a metric for the extent of transformation ("damping") of the input signal. In a second step, model outputs for heterogeneous soils were analysed. In a last step, the same approaches were applied to 55 time series of observed soil water content from 15 sites and different depths. In all cases, the intrinsic dimensionality was in fact very close to unity, confirming our hypothesis. The metric provided a very efficient tool to quantify the observed behaviour, depending on depth and soil heterogeneity: different soils differed primarily with respect to the extent of damping per depth interval rather than to the kind of damping. We will show how that metric can be used in a very efficient way to represent soil heterogeneities in simulation models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Götstedt, Julia; Karlsson Hauer, Anna; Bäck, Anna, E-mail: anna.back@vgregion.se
Purpose: Complexity metrics have been suggested as a complement to measurement-based quality assurance for intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). However, these metrics have not yet been sufficiently validated. This study develops and evaluates new aperture-based complexity metrics in the context of static multileaf collimator (MLC) openings and compares them to previously published metrics. Methods: This study develops the converted aperture metric and the edge area metric. The converted aperture metric is based on small and irregular parts within the MLC opening that are quantified as measured distances between MLC leaves. The edge area metric is based on the relative size of the region around the edges defined by the MLC. Another metric suggested in this study is the circumference/area ratio. Earlier defined aperture-based complexity metrics—the modulation complexity score, the edge metric, the ratio monitor units (MU)/Gy, the aperture area, and the aperture irregularity—are compared to the newly proposed metrics. A set of small and irregular static MLC openings are created which simulate individual IMRT/VMAT control points of various complexities. These are measured with both an amorphous silicon electronic portal imaging device and EBT3 film. The differences between calculated and measured dose distributions are evaluated using a pixel-by-pixel comparison with two global dose difference criteria of 3% and 5%. The extent of the dose differences, expressed in terms of pass rate, is used as a measure of the complexity of the MLC openings and used for the evaluation of the metrics compared in this study. The different complexity scores are calculated for each created static MLC opening. The correlation between the calculated complexity scores and the extent of the dose differences (pass rate) are analyzed in scatter plots and using Pearson’s r-values. Results: The complexity scores calculated by the edge area metric, converted aperture metric, circumference/area ratio, edge metric, and MU/Gy ratio show good linear correlation to the complexity of the MLC openings, expressed as the 5% dose difference pass rate, with Pearson’s r-values of −0.94, −0.88, −0.84, −0.89, and −0.82, respectively. The overall trends for the 3% and 5% dose difference evaluations are similar. Conclusions: New complexity metrics are developed. The calculated scores correlate to the complexity of the created static MLC openings. The complexity of the MLC opening is dependent on the penumbra region relative to the area of the opening. The aperture-based complexity metrics that combined either the distances between the MLC leaves or the MLC opening circumference with the aperture area show the best correlation with the complexity of the static MLC openings.
Moment-based metrics for global sensitivity analysis of hydrological systems
NASA Astrophysics Data System (ADS)
Dell'Oca, Aronne; Riva, Monica; Guadagnini, Alberto
2017-12-01
We propose new metrics to assist global sensitivity analysis, GSA, of hydrological and Earth systems. Our approach allows assessing the impact of uncertain parameters on the main features of the probability density function, pdf, of a target model output, y. These include the expected value of y, the spread around the mean, and the degree of symmetry and tailedness of the pdf of y. Since reliable assessment of higher-order statistical moments can be computationally demanding, we couple our GSA approach with a surrogate model that approximates the full model response at a reduced computational cost. Here, we consider the generalized polynomial chaos expansion (gPCE), other model reduction techniques being fully compatible with our theoretical framework. We demonstrate our approach through three test cases, including an analytical benchmark, a simplified scenario mimicking pumping in a coastal aquifer, and a laboratory-scale conservative transport experiment. Our results allow ascertaining which parameters can impact some moments of the model output pdf while having no influence on others. We also investigate the error associated with the evaluation of our sensitivity metrics when the original system model is replaced by a gPCE. Our results indicate that a surrogate model with increasing level of accuracy might be required depending on the statistical moment considered in the GSA. The approach is fully compatible with (and can assist the development of) analysis techniques employed in the context of model complexity reduction, model calibration, design of experiments, uncertainty quantification, and risk assessment.
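To make the idea of moment-based sensitivity metrics concrete, the following is a minimal Monte Carlo sketch, not the paper's gPCE-based estimator: each parameter is conditioned in turn and the resulting shift of the mean, variance, skewness and kurtosis of the output is averaged. The test function, sample sizes and index definition are illustrative assumptions only.

```python
# Minimal Monte Carlo sketch of moment-based sensitivity indices on a simple
# analytic test function. The index used here (average absolute shift of each
# statistical moment when conditioning on one parameter) is an illustrative
# stand-in for the metrics proposed in the abstract; the gPCE surrogate of the
# paper is not reproduced.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def model(x):                      # hypothetical target output y = f(x1, x2, x3)
    return x[:, 0] ** 2 + 2.0 * x[:, 1] + 0.5 * np.sin(np.pi * x[:, 2])

def moments(v):                    # mean, variance, skewness, excess kurtosis
    return np.array([v.mean(), v.var(), stats.skew(v), stats.kurtosis(v)])

n_cond, n_inner, n_par = 50, 2000, 3
x = rng.uniform(-1.0, 1.0, size=(200000, n_par))
m_full = moments(model(x))         # unconditional moments of the output pdf

labels = ["mean", "variance", "skewness", "kurtosis"]
for i in range(n_par):
    shifts = []
    for xc in np.linspace(-1.0, 1.0, n_cond):      # condition on parameter i
        xs = rng.uniform(-1.0, 1.0, size=(n_inner, n_par))
        xs[:, i] = xc
        shifts.append(np.abs(moments(model(xs)) - m_full))
    idx = np.mean(shifts, axis=0)                  # average absolute moment shift
    print(f"x{i + 1}: " + ", ".join(f"{l}={v:.3f}" for l, v in zip(labels, idx)))
```

With this toy function, the quadratic parameter mostly moves the mean and skewness while the linear parameter dominates the variance, which is exactly the kind of moment-by-moment attribution the abstract describes.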
Fusion of 3D models derived from TLS and image-based techniques for CH enhanced documentation
NASA Astrophysics Data System (ADS)
Bastonero, P.; Donadio, E.; Chiabrando, F.; Spanò, A.
2014-05-01
Recognizing the various advantages offered by new 3D metric survey technologies in the Cultural Heritage documentation phase, this paper presents tests of 3D model generation using different methods and explores their possible fusion. To define the potentialities and problems arising from the integration or fusion of metric data acquired with different survey techniques, the chosen test case is an outstanding Cultural Heritage item presenting both widespread and specific complexities connected to the conservation of historical buildings: the Staffarda Abbey, the most relevant evidence of medieval architecture in Piedmont. The application addresses one of the most topical architectural issues, namely the opportunity to study and analyse an object as a whole from two sensor locations, terrestrial and aerial. In particular, the work evaluates the possibilities offered by a simple union, or by a full fusion, of different 3D cloud models of the abbey obtained with multi-sensor techniques. The aerial survey is based on a photogrammetric RPAS (Remotely Piloted Aircraft System) flight, while the terrestrial acquisition was carried out by laser scanning. Both techniques allowed different point clouds to be extracted and processed and continuous 3D models to be generated, characterised by different scales, that is to say different resolutions and different levels of detail and precision. Starting from these models, the proposed process, applied to a sample area of the building, tested the generation of a single 3D model through the fusion of point clouds from the different sensors. The descriptive potential and the metric and thematic gains achievable with the final fused model exceeded those offered by the two separate models.
Evaluation of an Integrated Framework for Biodiversity with a New Metric for Functional Dispersion
Presley, Steven J.; Scheiner, Samuel M.; Willig, Michael R.
2014-01-01
Growing interest in understanding ecological patterns from phylogenetic and functional perspectives has driven the development of metrics that capture variation in evolutionary histories or ecological functions of species. Recently, an integrated framework based on Hill numbers was developed that measures three dimensions of biodiversity based on abundance, phylogeny and function of species. This framework is highly flexible, allowing comparison of those diversity dimensions, including different aspects of a single dimension and their integration into a single measure. The behavior of those metrics with regard to variation in data structure has not been explored in detail, yet is critical for ensuring an appropriate match between the concept and its measurement. We evaluated how each metric responds to particular data structures and developed a new metric for functional biodiversity. The phylogenetic metric is sensitive to variation in the topology of phylogenetic trees, including variation in the relative lengths of basal, internal and terminal branches. In contrast, the functional metric exhibited multiple shortcomings: (1) species that are functionally redundant contribute nothing to functional diversity and (2) a single highly distinct species causes functional diversity to approach the minimum possible value. We introduced an alternative, improved metric based on functional dispersion that solves both of these problems. In addition, the new metric exhibited more desirable behavior when based on multiple traits. PMID:25148103
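As a rough illustration of what a dispersion-based functional metric measures, the sketch below computes an abundance-weighted mean distance of species' traits from the community trait centroid, in the spirit of FDis; it is not the specific metric derived in the paper, and the trait and abundance values are invented.

```python
# Minimal sketch of an abundance-weighted functional dispersion measure
# (distance of each species' traits from the community's abundance-weighted
# trait centroid). This illustrates the general idea of dispersion-based
# functional diversity only; it is not the metric introduced in the paper.
import numpy as np

def functional_dispersion(traits, abundances):
    """traits: (n_species, n_traits); abundances: (n_species,)."""
    w = np.asarray(abundances, dtype=float)
    w = w / w.sum()                                  # relative abundances
    centroid = w @ traits                            # weighted trait centroid
    dist = np.linalg.norm(traits - centroid, axis=1)
    return float(w @ dist)                           # weighted mean distance

traits = np.array([[0.2, 1.0], [0.3, 1.1], [0.9, 0.2], [0.5, 0.6]])
abund = np.array([10, 8, 3, 5])
print(functional_dispersion(traits, abund))
```

Note that adding a functionally redundant species changes this value only slightly, and a single highly distinct but rare species cannot collapse it to zero, which is the behaviour the abstract argues a functional metric should have.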
Changing to the Metric System.
ERIC Educational Resources Information Center
Chambers, Donald L.; Dowling, Kenneth W.
This report examines educational aspects of the conversion to the metric system of measurement in the United States. Statements of positions on metrication and basic mathematical skills are given from various groups. Base units, symbols, prefixes, and style of the metric system are outlined. Guidelines for teaching metric concepts are given,…
Designing a Robust Micromixer Based on Fluid Stretching
NASA Astrophysics Data System (ADS)
Mott, David; Gautam, Dipesh; Voth, Greg; Oran, Elaine
2010-11-01
A metric for measuring fluid stretching based on finite-time Lyapunov exponents is described, and the use of this metric for optimizing mixing in microfluidic components is explored. The metric is implemented within an automated design approach called the Computational Toolbox (CTB). The CTB designs components by adding geometric features, such as grooves of various shapes, to a microchannel. The transport produced by each of these features in isolation was pre-computed and stored as an "advection map" for that feature, and the flow through a composite geometry that combines these features is calculated rapidly by applying the corresponding maps in sequence. A genetic algorithm search then chooses the feature combination that optimizes a user-specified metric. Metrics based on the variance of concentration generally require the user to specify the fluid distributions at inflow, which leads to different mixer designs for different inflow arrangements. The stretching metric is independent of the fluid arrangement at inflow. Mixers designed using the stretching metric are compared to those designed using a variance-of-concentration metric and show excellent performance across a variety of inflow distributions and diffusivities.
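The stretching metric itself rests on finite-time Lyapunov exponents (FTLE). The sketch below computes an FTLE at a point for an analytic 2-D saddle flow, where the exact value is 1; the flow, integration settings and finite-difference step are assumptions chosen for illustration, and the Computational Toolbox's advection maps are not reproduced.

```python
# Minimal finite-time Lyapunov exponent (FTLE) sketch on an analytic 2-D saddle
# flow u = (x, -y), for which the forward FTLE equals 1 everywhere. This only
# illustrates the stretching metric; it is not the CTB implementation.
import numpy as np

def velocity(p):                       # steady saddle flow, assumed for the demo
    return np.array([p[0], -p[1]])

def advect(p, T=1.0, dt=0.01):         # RK4 particle integration of the flow map
    for _ in range(int(T / dt)):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        p = p + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return p

def ftle(p, T=1.0, h=1e-4):
    # flow-map gradient by central differences of neighbouring particles
    dx = (advect(p + [h, 0], T) - advect(p - [h, 0], T)) / (2 * h)
    dy = (advect(p + [0, h], T) - advect(p - [0, h], T)) / (2 * h)
    F = np.column_stack([dx, dy])                   # flow-map Jacobian
    eigmax = np.linalg.eigvalsh(F.T @ F).max()      # Cauchy-Green tensor
    return np.log(np.sqrt(eigmax)) / T

print(ftle(np.array([0.3, 0.4])))      # ~1.0 for this flow
```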
Comparison of two laboratory-based systems for evaluation of halos in intraocular lenses
Alexander, Elsinore; Wei, Xin; Lee, Shinwook
2018-01-01
Purpose Multifocal intraocular lenses (IOLs) can be associated with unwanted visual phenomena, including halos. Predicting potential for halos is desirable when designing new multifocal IOLs. Halo images from 6 IOL models were compared using the Optikos modulation transfer function bench system and a new high dynamic range (HDR) system. Materials and methods One monofocal, 1 extended depth of focus, and 4 multifocal IOLs were evaluated. An off-the-shelf optical bench was used to simulate a distant (>50 m) car headlight and record images. A custom HDR system was constructed using an imaging photometer to simulate headlight images and to measure quantitative halo luminance data. A metric was developed to characterize halo luminance properties. Clinical relevance was investigated by correlating halo measurements to visual outcomes questionnaire data. Results The Optikos system produced halo images useful for visual comparisons; however, measurements were relative and not quantitative. The HDR halo system provided objective and quantitative measurements used to create a metric from the area under the curve (AUC) of the logarithmic normalized halo profile. This proposed metric differentiated between IOL models, and linear regression analysis found strong correlations between AUC and subjective clinical ratings of halos. Conclusion The HDR system produced quantitative, preclinical metrics that correlated to patients’ subjective perception of halos. PMID:29503526
Quantifying Astronaut Tasks: Robotic Technology and Future Space Suit Design
NASA Technical Reports Server (NTRS)
Newman, Dava
2003-01-01
The primary aim of this research effort was to advance the current understanding of astronauts' capabilities and limitations in space-suited EVA by developing models of the constitutive and compatibility relations of a space suit, based on experimental data gained from human test subjects as well as a 12 degree-of-freedom human-sized robot, and utilizing these fundamental relations to estimate a human factors performance metric for space suited EVA work. The three specific objectives are to: 1) Compile a detailed database of torques required to bend the joints of a space suit, using realistic, multi- joint human motions. 2) Develop a mathematical model of the constitutive relations between space suit joint torques and joint angular positions, based on experimental data and compare other investigators' physics-based models to experimental data. 3) Estimate the work envelope of a space suited astronaut, using the constitutive and compatibility relations of the space suit. The body of work that makes up this report includes experimentation, empirical and physics-based modeling, and model applications. A detailed space suit joint torque-angle database was compiled with a novel experimental approach that used space-suited human test subjects to generate realistic, multi-joint motions and an instrumented robot to measure the torques required to accomplish these motions in a space suit. Based on the experimental data, a mathematical model is developed to predict joint torque from the joint angle history. Two physics-based models of pressurized fabric cylinder bending are compared to experimental data, yielding design insights. The mathematical model is applied to EVA operations in an inverse kinematic analysis coupled to the space suit model to calculate the volume in which space-suited astronauts can work with their hands, demonstrating that operational human factors metrics can be predicted from fundamental space suit information.
Spiral model pilot project information model
NASA Technical Reports Server (NTRS)
1991-01-01
The objective was an evaluation of the Spiral Model (SM) development approach to allow NASA Marshall to develop an experience base of that software management methodology. A discussion is presented of the Information Model (IM) that was used as part of the SM methodology. A key concept of the SM is the establishment of an IM to be used by management to track the progress of a project. The IM is the set of metrics that is to be measured and reported throughout the life of the project. These metrics measure both the product and the process to ensure the quality of the final delivery item and to ensure the project met programmatic guidelines. The beauty of the SM, along with the IM, is the ability to measure not only the correctness of the specification and implementation of the requirements but to also obtain a measure of customer satisfaction.
Han, Kelong; Claret, Laurent; Sandler, Alan; Das, Asha; Jin, Jin; Bruno, Rene
2016-07-13
Maintenance treatment (MTx) in responders following first-line treatment has been investigated and practiced for many cancers. Modeling and simulation may support interpretation of interim data and development decisions. We aimed to develop a modeling framework to simulate overall survival (OS) for MTx in NSCLC using tumor growth inhibition (TGI) data. TGI metrics were estimated using longitudinal tumor size data from two Phase III first-line NSCLC studies evaluating bevacizumab and erlotinib as MTx in 1632 patients. Baseline prognostic factors and TGI metric estimates were assessed in multivariate parametric models to predict OS. The OS model was externally validated by simulating a third independent NSCLC study (n = 253) based on interim TGI data (up to progression-free survival database lock). The third study evaluated pemetrexed + bevacizumab vs. bevacizumab alone as MTx. Time-to-tumor-growth (TTG) was the best TGI metric to predict OS. TTG, baseline tumor size, ECOG score, Asian ethnicity, age, and gender were significant covariates in the final OS model. The OS model was qualified by simulating OS distributions and hazard ratios (HR) in the two studies used for model-building. Simulations of the third independent study based on interim TGI data showed that pemetrexed + bevacizumab MTx was unlikely to significantly prolong OS vs. bevacizumab alone given the current sample size (predicted HR: 0.81; 95 % prediction interval: 0.59-1.09). Predicted median OS was 17.3 months and 14.7 months in both arms, respectively. These simulations are consistent with the results of the final OS analysis published 2 years later (observed HR: 0.87; 95 % confidence interval: 0.63-1.21). Final observed median OS was 17.1 months and 13.2 months in both arms, respectively, consistent with our predictions. A robust TGI-OS model was developed for MTx in NSCLC. TTG captures treatment effect. The model successfully predicted the OS outcomes of an independent study based on interim TGI data and thus may facilitate trial simulation and interpretation of interim data. The model was built based on erlotinib data and externally validated using pemetrexed data, suggesting that TGI-OS models may be treatment-independent. The results supported the use of longitudinal tumor size and TTG as endpoints in early clinical oncology studies.
Air pollution exposure prediction approaches used in air pollution epidemiology studies.
Özkaynak, Halûk; Baxter, Lisa K; Dionisio, Kathie L; Burke, Janet
2013-01-01
Epidemiological studies of the health effects of outdoor air pollution have traditionally relied upon surrogates of personal exposures, most commonly ambient concentration measurements from central-site monitors. However, this approach may introduce exposure prediction errors and misclassification of exposures for pollutants that are spatially heterogeneous, such as those associated with traffic emissions (e.g., carbon monoxide, elemental carbon, nitrogen oxides, and particulate matter). We review alternative air quality and human exposure metrics applied in recent air pollution health effect studies discussed during the International Society of Exposure Science 2011 conference in Baltimore, MD. Symposium presenters considered various alternative exposure metrics, including: central site or interpolated monitoring data, regional pollution levels predicted using the national scale Community Multiscale Air Quality model or from measurements combined with local-scale (AERMOD) air quality models, hybrid models that include satellite data, statistically blended modeling and measurement data, concentrations adjusted by home infiltration rates, and population-based human exposure model (Stochastic Human Exposure and Dose Simulation, and Air Pollutants Exposure models) predictions. These alternative exposure metrics were applied in epidemiological applications to health outcomes, including daily mortality and respiratory hospital admissions, daily hospital emergency department visits, daily myocardial infarctions, and daily adverse birth outcomes. This paper summarizes the research projects presented during the symposium, with full details of the work presented in individual papers in this journal issue.
SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Mattmann, C. A.; Waliser, D. E.; Kim, J.; Loikith, P.; Lee, H.; McGibbney, L. J.; Whitehall, K. D.
2014-12-01
Remote sensing data and climate model output are multi-dimensional arrays of massive size locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache Spark. Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed; it therefore outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk, and makes iterative algorithms feasible. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 100 to 1000 compute nodes. This 2nd-generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning (ML) based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. The goals of SciSpark are to: (1) decrease the time to compute comparison statistics and plots from minutes to seconds; (2) allow for interactive exploration of time-series properties over seasons and years; (3) decrease the time for satellite data ingestion into RCMES to hours; (4) allow for Level-2 comparisons with higher-order statistics or PDFs in minutes to hours; and (5) move RCMES into a near real-time decision-making platform. We will report on: the architecture and design of SciSpark, our efforts to integrate climate science algorithms in Python and Scala, parallel ingest and partitioning (sharding) of A-Train satellite observations from HDF files and model grids from netCDF files, first parallel runs to compute comparison statistics and PDFs, and first metrics quantifying parallel speedups and memory and disk usage.
Holographic Dark Energy Density
NASA Astrophysics Data System (ADS)
Saadat, Hassan
2011-06-01
In this article we consider a cosmological model based on holographic dark energy. We study the dark energy density in a Universe with arbitrary spatial curvature described by the Friedmann-Robertson-Walker metric. We use the Chevallier-Polarski-Linder parametrization to specify the dark energy density.
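For orientation, the standard textbook expressions that such a model typically builds on (assumed here for illustration, not quoted from the paper) are the FRW line element with arbitrary spatial curvature, the holographic dark-energy density with infrared cutoff L, and the CPL parametrization of the equation of state:

```latex
% Standard forms assumed for illustration: FRW metric with curvature k,
% holographic dark-energy density with IR cutoff L, and CPL parametrization.
\begin{align}
  ds^2 &= -dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2} + r^2\,d\Omega^2\right],\\
  \rho_{\Lambda} &= 3c^2 M_p^2\,L^{-2},\\
  w(a) &= w_0 + w_a\,(1-a).
\end{align}
```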
Evaluation of Vehicle-Based Crash Severity Metrics.
Tsoi, Ada H; Gabler, Hampton C
2015-01-01
Vehicle change in velocity (delta-v) is a widely used crash severity metric for estimating occupant injury risk. Despite its widespread use, delta-v has several limitations. Of most concern, delta-v is a vehicle-based metric that does not consider the crash pulse or the performance of occupant restraints, e.g., seatbelts and airbags. Such criticisms have prompted the search for alternative impact severity metrics based upon vehicle kinematics. The purpose of this study was to assess the ability of the occupant impact velocity (OIV), acceleration severity index (ASI), vehicle pulse index (VPI), and maximum delta-v to predict serious injury in real-world crashes. The study was based on the analysis of event data recorders (EDRs) downloaded from National Automotive Sampling System / Crashworthiness Data System (NASS-CDS) cases from 2000-2013. All vehicles in the sample were GM passenger cars and light trucks involved in a frontal collision. Rollover crashes were excluded. Vehicles were restricted to single-event crashes that caused an airbag deployment. All EDR data were checked for a successful, completed recording of the event and a complete crash pulse. The maximum abbreviated injury scale (MAIS) was used to describe occupant injury outcome. Drivers were categorized into either a non-seriously injured group (MAIS2-) or a seriously injured group (MAIS3+), based on the severity of any injuries to the thorax, abdomen, and spine. ASI and OIV were calculated according to the Manual for Assessing Safety Hardware. VPI was calculated according to ISO/TR 12353-3, with vehicle-specific parameters determined from U.S. New Car Assessment Program crash tests. Using binary logistic regression, the cumulative probability of injury risk was determined for each metric and assessed for statistical significance, goodness-of-fit, and prediction accuracy. The dataset included 102,744 vehicles. A Wald chi-square test showed each vehicle-based crash severity metric estimate to be a significant predictor in the model (p < 0.05). For the belted drivers, both OIV and VPI were significantly better predictors of serious injury than delta-v (p < 0.05). For the unbelted drivers, there was no statistically significant difference between delta-v, OIV, VPI, and ASI. The broad findings of this study suggest it is feasible to improve injury prediction if restraint performance is added to classic measures such as delta-v. Applications such as advanced automatic crash notification should consider the use of different metrics for belted versus unbelted occupants.
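As a hedged illustration of the statistical step described above, the sketch below fits a binary logistic regression of serious injury (MAIS3+) on a single severity metric. The data are synthetic, the coefficients are invented, and statsmodels is used simply as one convenient implementation, not necessarily the authors' software.

```python
# Sketch of binary logistic regression of serious injury (MAIS3+) on a
# vehicle-based severity metric. Data and the assumed risk curve are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
delta_v = rng.gamma(shape=2.0, scale=12.0, size=5000)          # km/h, synthetic
p_true = 1.0 / (1.0 + np.exp(-(-5.0 + 0.12 * delta_v)))        # assumed risk curve
mais3plus = rng.binomial(1, p_true)                            # 1 = MAIS3+ injury

X = sm.add_constant(delta_v)
fit = sm.Logit(mais3plus, X).fit(disp=False)
print(fit.params)                        # intercept and slope estimates
print(fit.predict([[1.0, 50.0]]))        # predicted MAIS3+ risk at delta-v = 50 km/h
```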
Digital Elevation Model from Non-Metric Camera in UAS Compared with LIDAR Technology
NASA Astrophysics Data System (ADS)
Dayamit, O. M.; Pedro, M. F.; Ernesto, R. R.; Fernando, B. L.
2015-08-01
Digital Elevation Model (DEM) data, as a representation of surface topography, are in high demand for spatial analysis and modelling. Many methods of data acquisition and processing have been developed for this purpose, from traditional surveying to modern technologies such as LIDAR. On the other hand, in the past four years the development of Unmanned Aerial Systems (UAS) for geomatics has made it possible to acquire surface data with a non-metric digital camera on board in a short time and with sufficient quality for many analyses. UAS have attracted tremendous attention because they enable the determination of volume changes over time, monitoring of breakwaters, and hydrological modelling including flood simulation and drainage networks, among other applications that rely on DEMs for proper analysis. DEM quality is considered a combination of DEM accuracy and DEM suitability, so this paper analyses the quality of a DEM derived from a non-metric digital camera on a UAS compared with a DEM from LIDAR covering the same 4 km² geographic area in Artemisa province, Cuba. The area is part of an urban planning effort that requires knowledge of the topographic characteristics in order to analyse hydrological behaviour and decide the best locations for roads, buildings, and other infrastructure. Because LIDAR is still the more accurate method, it offers a benchmark against which to test DEMs from non-metric digital cameras on UAS, which are much more flexible and provide a solution for many applications that need detailed DEMs.
Citizen science: A new perspective to evaluate spatial patterns in hydrology.
NASA Astrophysics Data System (ADS)
Koch, J.; Stisen, S.
2016-12-01
Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, simple visual comparison of spatial patterns is a task where humans are often considered more reliable than computer algorithms. In practice, however, science still largely depends on computer-based solutions, which is inevitable given benefits such as speed and the possibility of automating processes. This study highlights the integration of this generally underused human resource into hydrology. We established a citizen science project on the Zooniverse platform entitled Pattern Perception. The aim is to employ human perception to rate similarity and dissimilarity between simulated spatial patterns of a hydrological catchment model. In total, more than 2,800 users provided over 46,000 classifications of 1,095 individual subjects within 64 days of the launch. Each subject displays simulated spatial patterns of land-surface variables from a baseline model and six modelling scenarios. The citizen science data yield a numeric pattern similarity score for each of the scenarios with respect to the reference. We investigate the capability of a set of innovative statistical performance metrics to mimic the human ability to distinguish between similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating human perception, but they clearly provide flexibility and auxiliary information that is valuable for model diagnostics. The metrics clearly differ in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric.
Human exposure to air pollution in many studies is represented by ambient concentrations from space-time kriging of observed values. Space-time kriging techniques based on a limited number of ambient monitors may fail to capture the concentration from local sources. Further, beca...
Landscape controls on total and methyl Hg in the Upper Hudson River basin, New York, USA
Burns, Douglas A.; Riva-Murray, K.; Bradley, P.M.; Aiken, G.R.; Brigham, M.E.
2012-01-01
Approaches are needed to better predict spatial variation in riverine Hg concentrations across heterogeneous landscapes that include mountains, wetlands, and open waters. We applied multivariate linear regression to determine the landscape factors and chemical variables that best account for the spatial variation of total Hg (THg) and methyl Hg (MeHg) concentrations in 27 sub-basins across the 493 km² upper Hudson River basin in the Adirondack Mountains of New York. THg concentrations varied by sixfold, and those of MeHg by 40-fold in synoptic samples collected at low-to-moderate flow, during spring and summer of 2006 and 2008. Bivariate linear regression relations of THg and MeHg concentrations with either percent wetland area or DOC concentrations were significant but could account for only about 1/3 of the variation in these Hg forms in summer. In contrast, multivariate linear regression relations that included metrics of (1) hydrogeomorphology, (2) riparian/wetland area, and (3) open water, explained about 66% to >90% of spatial variation in each Hg form in spring and summer samples. These metrics reflect the influence of basin morphometry and riparian soils on Hg source and transport, and the role of open water as a Hg sink. Multivariate models based solely on these landscape metrics generally accounted for as much or more of the variation in Hg concentrations than models based on chemical and physical metrics, and show great promise for identifying waters with expected high Hg concentrations in the Adirondack region and similar glaciated riverine ecosystems.
App Usage Factor: A Simple Metric to Compare the Population Impact of Mobile Medical Apps.
Lewis, Thomas Lorchan; Wyatt, Jeremy C
2015-08-19
One factor when assessing the quality of mobile apps is quantifying the impact of a given app on a population. There is currently no metric which can be used to compare the population impact of a mobile app across different health care disciplines. The objective of this study is to create a novel metric to characterize the impact of a mobile app on a population. We developed a simple novel metric, the app usage factor (AUF), defined as the logarithm of the product of the number of active users of a mobile app and the median number of daily uses of the app. The behavior of this metric was examined using simulations written in Python, a general-purpose programming language. Three simulations were conducted to explore the temporal and numerical stability of our metric, together with a simulated app ecosystem model using a simulated dataset of 20,000 apps. Simulations confirmed the metric was stable between predicted usage limits and remained stable at extremes of these limits. Analysis of a simulated dataset of 20,000 apps calculated an average value for the app usage factor of 4.90 (SD 0.78). A temporal simulation showed that the metric remained stable over time, and suitable limits for its use were identified. A key component when assessing app risk and potential harm is understanding the potential population impact of each mobile app. Our metric has many potential uses for a wide range of stakeholders in the app ecosystem, including users, regulators, developers, and health care professionals. Furthermore, this metric forms part of the overall estimate of risk and potential for harm or benefit posed by a mobile medical app. We identify the merits and limitations of this metric, as well as potential avenues for future validation and research.
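Because the abstract states the definition explicitly, the metric can be written down directly; the base-10 logarithm and the example numbers are the only assumptions in the snippet below.

```python
# App usage factor (AUF) as described in the abstract: the logarithm of
# (number of active users x median number of daily uses). Base-10 log assumed;
# the example figures are invented.
import math

def app_usage_factor(active_users, median_daily_uses):
    return math.log10(active_users * median_daily_uses)

print(app_usage_factor(50_000, 3))   # ~5.18, comparable in scale to the reported mean of 4.90
```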
Developing a Security Metrics Scorecard for Healthcare Organizations.
Elrefaey, Heba; Borycki, Elizabeth; Kushniruk, Andrea
2015-01-01
In healthcare, information security is a key aspect of protecting a patient's privacy and ensuring systems availability to support patient care. Security managers need to measure the performance of security systems and this can be achieved by using evidence-based metrics. In this paper, we describe the development of an evidence-based security metrics scorecard specific to healthcare organizations. Study participants were asked to comment on the usability and usefulness of a prototype of a security metrics scorecard that was developed based on current research in the area of general security metrics. Study findings revealed that scorecards need to be customized for the healthcare setting in order for the security information to be useful and usable in healthcare organizations. The study findings resulted in the development of a security metrics scorecard that matches the healthcare security experts' information requirements.
Mayo, Charles S; Yao, John; Eisbruch, Avraham; Balter, James M; Litzenberg, Dale W; Matuszak, Martha M; Kessler, Marc L; Weyburn, Grant; Anderson, Carlos J; Owen, Dawn; Jackson, William C; Haken, Randall Ten
2017-01-01
To develop statistical dose-volume histogram (DVH)-based metrics and a visualization method to quantify the comparison of treatment plans with historical experience and among different institutions. The descriptive statistical summary (ie, median, first and third quartiles, and 95% confidence intervals) of volume-normalized DVH curve sets of past experiences was visualized through the creation of statistical DVH plots. Detailed distribution parameters were calculated and stored in JavaScript Object Notation files to facilitate management, including transfer and potential multi-institutional comparisons. In the treatment plan evaluation, structure DVH curves were scored against computed statistical DVHs and weighted experience scores (WESs). Individual, clinically used, DVH-based metrics were integrated into a generalized evaluation metric (GEM) as a priority-weighted sum of normalized incomplete gamma functions. Historical treatment plans for 351 patients with head and neck cancer, 104 with prostate cancer who were treated with conventional fractionation, and 94 with liver cancer who were treated with stereotactic body radiation therapy were analyzed to demonstrate the usage of statistical DVH, WES, and GEM in a plan evaluation. A shareable dashboard plugin was created to display statistical DVHs and integrate GEM and WES scores into a clinical plan evaluation within the treatment planning system. Benchmarking with normal tissue complication probability scores was carried out to compare the behavior of GEM and WES scores. DVH curves from historical treatment plans were characterized and presented, with difficult-to-spare structures (ie, frequently compromised organs at risk) identified. Quantitative evaluations by GEM and/or WES compared favorably with the normal tissue complication probability Lyman-Kutcher-Burman model, transforming a set of discrete threshold-priority limits into a continuous model reflecting physician objectives and historical experience. Statistical DVH offers an easy-to-read, detailed, and comprehensive way to visualize the quantitative comparison with historical experiences and among institutions. WES and GEM metrics offer a flexible means of incorporating discrete threshold-prioritizations and historic context into a set of standardized scoring metrics. Together, they provide a practical approach for incorporating big data into clinical practice for treatment plan evaluations.
Cloud-based Computing and Applications of New Snow Metrics for Societal Benefit
NASA Astrophysics Data System (ADS)
Nolin, A. W.; Sproles, E. A.; Crumley, R. L.; Wilson, A.; Mar, E.; van de Kerk, M.; Prugh, L.
2017-12-01
Seasonal and interannual variability in snow cover affects socio-environmental systems including water resources, forest ecology, freshwater and terrestrial habitat, and winter recreation. We have developed two new seasonal snow metrics: snow cover frequency (SCF) and snow disappearance date (SDD). These metrics are calculated at 500-m resolution using NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover data (MOD10A1). SCF is the number of times snow is observed in a pixel over the user-defined observation period. SDD is the last date of observed snow in a water year. These pixel-level metrics are calculated rapidly and globally in the Google Earth Engine cloud-based environment. SCF and SDD can be interactively visualized in a map-based interface, allowing users to explore spatial and temporal snowcover patterns from 2000-present. These metrics are especially valuable in regions where snow data are sparse or non-existent. We have used these metrics in several ongoing projects. When SCF was linked with a simple hydrologic model in the La Laguna watershed in northern Chile, it successfully predicted summer low flows with a Nash-Sutcliffe value of 0.86. SCF has also been used to help explain changes in Dall sheep populations in Alaska where sheep populations are negatively impacted by late snow cover and low snowline elevation during the spring lambing season. In forest management, SCF and SDD appear to be valuable predictors of post-wildfire vegetation growth. We see a positive relationship between winter SCF and subsequent summer greening for several years post-fire. For western US winter recreation, we are exploring trends in SDD and SCF for regions where snow sports are economically important. In a world with declining snowpacks and increasing uncertainty, these metrics extend across elevations and fill data gaps to provide valuable information for decision-making. SCF and SDD are being produced so that anyone with Internet access and a Google account can access, visualize, and download the data with a minimum of technical expertise and no need for proprietary software.
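A minimal sketch of the two metrics as defined above, computed from a stack of daily binary snow maps; the synthetic array stands in for MOD10A1 observations, and cloud masking, reprojection and the Google Earth Engine implementation are omitted.

```python
# Snow cover frequency (SCF) and snow disappearance date (SDD) from a stack of
# daily binary snow maps with shape (time, y, x). The data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(1, 366)                                  # day of water year
snow = rng.random((365, 4, 4)) < 0.4                      # fake snow/no-snow maps

scf = snow.mean(axis=0)                                   # fraction of days with snow
has_snow = snow.any(axis=0)
last_idx = snow.shape[0] - 1 - np.argmax(snow[::-1], axis=0)
sdd = np.where(has_snow, days[last_idx], np.nan)          # last observed snow date

print(scf.round(2))
print(sdd)
```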
Liang, Xia; Wang, Jinhui; Yan, Chaogan; Shu, Ni; Xu, Ke; Gong, Gaolang; He, Yong
2012-01-01
Graph theoretical analysis of brain networks based on resting-state functional MRI (R-fMRI) has attracted a great deal of attention in recent years. These analyses often involve the selection of correlation metrics and specific preprocessing steps. However, the influence of these factors on the topological properties of functional brain networks has not been systematically examined. Here, we investigated the influences of correlation metric choice (Pearson's correlation versus partial correlation), global signal presence (regressed or not) and frequency band selection [slow-5 (0.01–0.027 Hz) versus slow-4 (0.027–0.073 Hz)] on the topological properties of both binary and weighted brain networks derived from them, and we employed test-retest (TRT) analyses for further guidance on how to choose the “best” network modeling strategy from the reliability perspective. Our results show significant differences in global network metrics associated with both correlation metrics and global signals. Analysis of nodal degree revealed differing hub distributions for brain networks derived from Pearson's correlation versus partial correlation. TRT analysis revealed that the reliability of both global and local topological properties are modulated by correlation metrics and the global signal, with the highest reliability observed for Pearson's-correlation-based brain networks without global signal removal (WOGR-PEAR). The nodal reliability exhibited a spatially heterogeneous distribution wherein regions in association and limbic/paralimbic cortices showed moderate TRT reliability in Pearson's-correlation-based brain networks. Moreover, we found that there were significant frequency-related differences in topological properties of WOGR-PEAR networks, and brain networks derived in the 0.027–0.073 Hz band exhibited greater reliability than those in the 0.01–0.027 Hz band. Taken together, our results provide direct evidence regarding the influences of correlation metrics and specific preprocessing choices on both the global and nodal topological properties of functional brain networks. This study also has important implications for how to choose reliable analytical schemes in brain network studies. PMID:22412922
Detection and quantification of flow consistency in business process models.
Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel; Soffer, Pnina; Weber, Barbara
2018-01-01
Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations describing the performance and the computational capabilities of our metrics are reported as well.
Jahanishakib, Fatemeh; Mirkarimi, Seyed Hamed; Salmanmahiny, Abdolrassoul; Poodat, Fatemeh
2018-05-08
Efficient land use management requires awareness of past changes, present actions, and plans for future developments. Part of these requirements is achieved using scenarios that describe a future situation and the course of changes. This research aims to link scenario results with spatially explicit and quantitative forecasting of land use development. To develop land use scenarios, SMIC PROB-EXPERT and MORPHOL methods were used. It revealed eight scenarios as the most probable. To apply the scenarios, we considered population growth rate and used a cellular automata-Markov chain (CA-MC) model to implement the quantified changes described by each scenario. For each scenario, a set of landscape metrics was used to assess the ecological integrity of land use classes in terms of fragmentation and structural connectivity. The approach enabled us to develop spatial scenarios of land use change and detect their differences for choosing the most integrated landscape pattern in terms of landscape metrics. Finally, the comparison between paired forecasted scenarios based on landscape metrics indicates that scenarios 1-1, 2-2, 3-2, and 4-1 have a more suitable integrity. The proposed methodology for developing spatial scenarios helps executive managers to create scenarios with many repetitions and customize spatial patterns in real world applications and policies.
Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...
2015-12-04
Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
Wunderli, Jean Marc; Pieren, Reto; Habermacher, Manuel; Vienneau, Danielle; Cajochen, Christian; Probst-Hensch, Nicole; Röösli, Martin; Brink, Mark
2016-01-01
Most environmental epidemiology studies model health effects of noise by regressing on acoustic exposure metrics that are based on the concept of average energetic dose over longer time periods (i.e. the Leq and related measures). Regarding noise effects on health and wellbeing, average measures often cannot satisfactorily predict annoyance and somatic health effects of noise, particularly sleep disturbances. It has been hypothesized that effects of noise can be better explained when also considering the variation of the level over time and the frequency distribution of event-related acoustic measures, such as for example, the maximum sound pressure level. However, it is unclear how this is best parametrized in a metric that is not correlated with the Leq, but takes into account the frequency distribution of events and their emergence from background. In this paper, a calculation method is presented that produces a metric which reflects the intermittency of road, rail and aircraft noise exposure situations. The metric termed intermittency ratio (IR) expresses the proportion of the acoustical energy contribution in the total energetic dose that is created by individual noise events above a certain threshold. To calculate the metric, it is shown how to estimate the distribution of maximum pass-by levels from information on geometry (distance and angle), traffic flow (number and speed) and single-event pass-by levels per vehicle category. On the basis of noise maps that simultaneously visualize Leq, as well as IR, the differences of both metrics are discussed. PMID:26350982
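To show the core of the ratio, the sketch below computes the share of total acoustic energy contributed by samples above an event threshold in a level time series; the 3 dB margin above the overall Leq and the synthetic levels are assumptions, and the paper's full procedure (estimating maximum pass-by levels from traffic flow and geometry) is not reproduced.

```python
# Illustrative intermittency-ratio-style calculation on a sound level time
# series: the percentage of total acoustic energy contributed by samples above
# an event threshold. Threshold margin and data are assumptions, not the
# paper's calibrated procedure.
import numpy as np

rng = np.random.default_rng(0)
levels_db = 45 + 5 * rng.standard_normal(3600)        # 1-s sound levels, dB(A)
levels_db[rng.integers(0, 3600, 60)] += 25            # add distinct pass-by events

energy = 10 ** (levels_db / 10)
leq = 10 * np.log10(energy.mean())                    # overall Leq
threshold = leq + 3.0                                 # assumed event threshold
ir = 100 * energy[levels_db > threshold].sum() / energy.sum()
print(f"Leq = {leq:.1f} dB, intermittency ratio = {ir:.0f}%")
```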
Qin, Haiming; Wang, Cheng; Zhao, Kaiguang; Xi, Xiaohuan
2018-01-01
Accurate estimation of the fraction of absorbed photosynthetically active radiation (fPAR) for maize canopies is important for maize growth monitoring and yield estimation. The goal of this study is to explore the potential of using airborne LiDAR and hyperspectral data to better estimate maize fPAR. This study focuses on estimating maize fPAR from (1) height and coverage metrics derived from airborne LiDAR point cloud data; (2) vegetation indices derived from hyperspectral imagery; and (3) a combination of these metrics. Pearson correlation analyses were conducted to evaluate the relationships among LiDAR metrics, hyperspectral metrics, and field-measured fPAR values. Then, multiple linear regression (MLR) models were developed using these metrics. Results showed that (1) LiDAR height and coverage metrics provided good explanatory power (R2 = 0.81); (2) hyperspectral vegetation indices provided moderate explanatory power (R2 = 0.50); and (3) the combination of LiDAR and hyperspectral metrics improved the LiDAR model (R2 = 0.88). These results indicate that the LiDAR model offers a reliable method for estimating maize fPAR at high spatial resolution and can be used for farmland management. Combining LiDAR and hyperspectral metrics led to better maize fPAR estimation than LiDAR or hyperspectral metrics alone, which means that maize fPAR retrieval can benefit from the complementary nature of LiDAR-detected canopy structure characteristics and hyperspectral-captured vegetation spectral information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, S. D.; Tarud, J. K.; Biddy, M. J.
This report documents the National Renewable Energy Laboratory's (NREL's) assessment of the feasibility of making gasoline via the methanol-to-gasoline route using syngas from a 2,000 dry metric tonne/day (2,205 U.S. ton/day) biomass-fed facility. A new technoeconomic model was developed in Aspen Plus for this study, based on the model developed for NREL's thermochemical ethanol design report (Phillips et al. 2007). The necessary process changes were incorporated into a biomass-to-gasoline model using a methanol synthesis operation followed by conversion, upgrading, and finishing to gasoline. Using a methodology similar to that used in previous NREL design reports and a feedstock cost of $50.70/dry ton ($55.89/dry metric tonne), the estimated plant gate price is $16.60/MMBtu ($15.73/GJ) (U.S. $2007) for gasoline and liquefied petroleum gas (LPG) produced from biomass via gasification of wood, methanol synthesis, and the methanol-to-gasoline process. The corresponding unit prices for gasoline and LPG are $1.95/gallon ($0.52/liter) and $1.53/gallon ($0.40/liter) with yields of 55.1 and 9.3 gallons per U.S. ton of dry biomass (229.9 and 38.8 liters per metric tonne of dry biomass), respectively.
The field-space metric in spiral inflation and related models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erlich, Joshua; Olsen, Jackson; Wang, Zhen
2016-09-22
Multi-field inflation models include a variety of scenarios for how inflation proceeds and ends. Models with the same potential but different kinetic terms are common in the literature. We compare spiral inflation and Dante’s inferno-type models, which differ only in their field-space metric. We justify a single-field effective description in these models and relate the single-field description to a mass-matrix formalism. We note the effects of the nontrivial field-space metric on inflationary observables, and consequently on the viability of these models. We also note a duality between spiral inflation and Dante’s inferno models with different potentials.
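For readers unfamiliar with the terminology, a generic two-field action makes the comparison concrete: the models share the potential V(φ) but differ in the field-space metric G_ab(φ) that multiplies the kinetic term. This generic form is standard and is assumed here for illustration; the specific metrics of spiral inflation and Dante's inferno are not reproduced.

```latex
% Generic multi-field action assumed for illustration: models with the same
% potential V(\phi) but different field-space metrics G_{ab}(\phi).
\begin{equation}
  S = \int d^4x\,\sqrt{-g}\left[\frac{M_p^2}{2}R
      - \frac{1}{2}\,G_{ab}(\phi)\,\partial_\mu\phi^a\,\partial^\mu\phi^b
      - V(\phi)\right]
\end{equation}
```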
2013-09-01
...based confidence metric is used to compare several different model predictions with the experimental data. II. Aerothermal Model Definition and ... whereas 5% measurement uncertainty is assumed for the aerodynamic pressure and heat flux measurements. Bayesian updating according ... definitive conclusions for these particular aerodynamic models. However, given the confidence associated with the predictions for Run 30 (H/D
Weykamp, Cas; John, Garry; Gillery, Philippe; English, Emma; Ji, Linong; Lenters-Westra, Erna; Little, Randie R.; Roglic, Gojka; Sacks, David B.; Takei, Izumi
2016-01-01
Background: A major objective of the IFCC Task Force on implementation of HbA1c standardization is to develop a model to define quality targets for HbA1c. Methods: Two generic models, the Biological Variation and Sigma-metrics models, are investigated. Variables in the models were selected for HbA1c, and data from EQA/PT programs were used to evaluate the suitability of the models for setting and evaluating quality targets within and between laboratories. Results: In the Biological Variation model, 48% of individual laboratories and none of the 26 instrument groups met the minimum performance criterion. In the Sigma-metrics model, with a total allowable error (TAE) set at 5 mmol/mol (0.46% NGSP), 77% of the individual laboratories and 12 of 26 instrument groups met the 2 sigma criterion. Conclusion: The Biological Variation and Sigma-metrics models were demonstrated to be suitable for setting and evaluating quality targets within and between laboratories. The Sigma-metrics model is more flexible, as both the TAE and the risk of failure can be adjusted to requirements related to, e.g., use for diagnosis/monitoring or requirements of (inter)national authorities. With the aim of reaching international consensus on advice regarding quality targets for HbA1c, the Task Force suggests the Sigma-metrics model as the model of choice, with default values of 5 mmol/mol (0.46%) for TAE and risk levels of 2 and 4 sigma for routine laboratories and laboratories performing clinical trials, respectively. These goals should serve as a starting point for discussion with international stakeholders in the field of diabetes. PMID:25737535
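The Sigma-metrics model rests on the standard sigma calculation, sigma = (TAE − |bias|) / SD with all quantities in common units; the snippet below applies it with the abstract's TAE of 5 mmol/mol and invented laboratory bias and imprecision values.

```python
# Standard sigma-metric calculation: sigma = (TAE - |bias|) / SD. The TAE of
# 5 mmol/mol comes from the abstract; the bias and SD are hypothetical figures.
def sigma_metric(tae, bias, sd):
    return (tae - abs(bias)) / sd

tae = 5.0            # total allowable error, mmol/mol
bias, sd = 1.0, 1.4  # hypothetical laboratory bias and imprecision, mmol/mol
sigma = sigma_metric(tae, bias, sd)
print(f"sigma = {sigma:.1f} -> meets 2-sigma criterion: {sigma >= 2}")
```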
Energy-Based Metrics for Arthroscopic Skills Assessment.
Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa
2017-08-05
Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
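A hedged sketch of the evaluation pattern described above: an SVM scored with leave-one-subject-out cross-validation on normalized features. The feature matrix is random placeholder data, and the pipeline choices (scaling, RBF kernel) are assumptions rather than the authors' exact setup.

```python
# Leave-one-subject-out evaluation of an SVM on energy-based features.
# Features are placeholders; only the cross-validation pattern mirrors the abstract.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 26, 3, 6
X = rng.standard_normal((n_subjects * trials_per_subject, n_features))
y = np.repeat([0, 1], n_subjects * trials_per_subject // 2)      # novice vs. expert
groups = np.repeat(np.arange(n_subjects), trials_per_subject)    # subject IDs

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean leave-one-subject-out accuracy: {scores.mean():.2f}")
```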
Coverage Metrics for Model Checking
NASA Technical Reports Server (NTRS)
Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)
2001-01-01
When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.
NASA Astrophysics Data System (ADS)
Pope, C. N.; Sohnius, M. F.; Stelle, K. S.
We show that, contrary to previous conjectures, there exist acceptable counterterms for Ricci-flat N = 1 and N = 2 supersymmetric σ-models. In the N = 1 case we present infinite sequences of counterterms, starting from the 7-loop order, that do not vanish for general riemannian Ricci-flat metrics but do vanish when the metric is also Kähler. We then investigate the counterterms for theories with Ricci-flat Kähler metrics (i.e. N = 2 models). Acceptable counterterms must vanish for hyper-Kähler metrics (the N = 4 case), and must respect the principle of universality; i.e. that counterterms to the metric can be expressed without the use of complex structures or other special tensors, which do not exist for general riemannian spaces. We show that a recently proposed 4-loop counterterm for the N = 2 models does indeed satisfy these two conditions, despite the apparent stringency of the universality principle. Hence the finiteness of Ricci-flat N = 1 and N = 2 supersymmetric σ-models seems unlikely to persist beyond the 3-loop order.
Optimization of Regression Models of Experimental Data Using Confirmation Points
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that was already tested at regression-model-independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
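Under the usual definition of PRESS residuals, e_i / (1 − h_ii), the search metric can be sketched in a few lines; the data, the candidate term combination, and the split into fitting and confirmation points below are synthetic stand-ins for the balance calibration data.

```python
# Sketch of the search metric described above: the larger of the standard
# deviation of the PRESS residuals (fit points) and the standard deviation of
# the response residuals (confirmation points). Data and model terms are synthetic.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 40)
y = 1.0 + 2.0 * x + 0.5 * x**2 + 0.05 * rng.standard_normal(40)

fit_idx, conf_idx = np.arange(30), np.arange(30, 40)     # fit vs. confirmation split
X = np.column_stack([np.ones_like(x), x, x**2])          # candidate term combination

Xf, yf = X[fit_idx], y[fit_idx]
beta, *_ = np.linalg.lstsq(Xf, yf, rcond=None)
H = Xf @ np.linalg.inv(Xf.T @ Xf) @ Xf.T                 # hat matrix of fit points
press_resid = (yf - Xf @ beta) / (1.0 - np.diag(H))      # PRESS residuals
conf_resid = y[conf_idx] - X[conf_idx] @ beta            # confirmation residuals

search_metric = max(press_resid.std(ddof=1), conf_resid.std(ddof=1))
print(f"search metric = {search_metric:.4f}")
```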
A Model and a Metric for the Analysis of Status Attainment Processes. Discussion Paper No. 492-78.
ERIC Educational Resources Information Center
Sorensen, Aage B.
This paper proposes a theory of the status attainment process, and specifies it in a mathematical model. The theory justifies a transformation of the conventional status scores to a metric that produces a exponential distribution of attainments, and a transformation of educational attainments to a metric that reflects the competitive advantage…
NASA Astrophysics Data System (ADS)
Wagle, Pradeep; Bhattarai, Nishan; Gowda, Prasanna H.; Kakani, Vijaya G.
2017-06-01
Robust evapotranspiration (ET) models are required to predict water usage in a variety of terrestrial ecosystems under different geographical and agrometeorological conditions. As a result, several remote sensing-based surface energy balance (SEB) models have been developed to estimate ET over large regions. However, comparisons of the performance of several SEB models at the same site are limited. In addition, none of the SEB models have been evaluated for their ability to predict ET in rain-fed high-biomass sorghum grown for biofuel production. In this paper, we evaluated the performance of five widely used single-source SEB models, namely the Surface Energy Balance Algorithm for Land (SEBAL), Mapping ET with Internalized Calibration (METRIC), the Surface Energy Balance System (SEBS), the Simplified Surface Energy Balance Index (S-SEBI), and the operational Simplified Surface Energy Balance (SSEBop), for estimating ET over a high-biomass sorghum field during the 2012 and 2013 growing seasons. The predicted ET values were compared against eddy covariance (EC) measured ET (ETEC) for 19 cloud-free Landsat images. In general, S-SEBI, SEBAL, and SEBS performed reasonably well for the study period, while METRIC and SSEBop performed poorly. All SEB models substantially overestimated ET under extremely dry conditions because they underestimated sensible heat (H) and overestimated latent heat (LE) fluxes during the partitioning of available energy. METRIC, SEBAL, and SEBS overestimated LE regardless of wet or dry periods. Consequently, seasonal cumulative ET predicted by METRIC, SEBAL, and SEBS was higher than seasonal cumulative ETEC in both seasons. In contrast, S-SEBI and SSEBop substantially underestimated ET under very wet conditions, and seasonal cumulative ET predicted by S-SEBI and SSEBop was lower than seasonal cumulative ETEC in the relatively wetter 2013 growing season. Our results indicate the need to include a soil moisture or plant water stress component in SEB models to improve their performance, especially in very dry or very wet environments.
Toward a Quantitative Comparison of Magnetic Field Extrapolations and Observed Coronal Loops
NASA Astrophysics Data System (ADS)
Warren, Harry P.; Crump, Nicholas A.; Ugarte-Urra, Ignacio; Sun, Xudong; Aschwanden, Markus J.; Wiegelmann, Thomas
2018-06-01
It is widely believed that loops observed in the solar atmosphere trace out magnetic field lines. However, the degree to which magnetic field extrapolations yield field lines that actually do follow loops has yet to be studied systematically. In this paper, we apply three different extrapolation techniques—a simple potential model, a nonlinear force-free (NLFF) model based on photospheric vector data, and an NLFF model based on forward fitting magnetic sources with vertical currents—to 15 active regions that span a wide range of magnetic conditions. We use a distance metric to assess how well each of these models is able to match field lines to the 12202 loops traced in coronal images. These distances are typically 1″–2″. We also compute the misalignment angle between each traced loop and the local magnetic field vector, and find values of 5°–12°. We find that the NLFF models generally outperform the potential extrapolation on these metrics, although the differences between the different extrapolations are relatively small. The methodology that we employ for this study suggests a number of ways that both the extrapolations and loop identification can be improved.
40 CFR 98.343 - Calculating GHG emissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... CH4 generation potential (metric tons CH4/metric ton waste) = MCF × DOC × DOCF × F × 16/12, where MCF = methane correction factor ...; GCH4 = modeled methane ...; Emissions = methane emissions from the landfill in the reporting year (metric tons CH4); R = quantity of ...
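The generation-potential equation in this excerpt translates directly into code; the factor values in the usage line below are hypothetical placeholders, not values prescribed by the regulation.

```python
def methane_generation_potential(mcf, doc, doc_f, f):
    """CH4 generation potential (metric tons CH4 per metric ton of waste):
    Lo = MCF x DOC x DOC_F x F x 16/12, where 16/12 is the CH4-to-C molar mass ratio."""
    return mcf * doc * doc_f * f * 16.0 / 12.0

# Illustrative (hypothetical) factor values, not values prescribed by the rule
lo = methane_generation_potential(mcf=1.0, doc=0.20, doc_f=0.5, f=0.5)
```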
ERIC Educational Resources Information Center
Mairesse, Olivier; Hofmans, Joeri; Neu, Daniel; Dinis Monica de Oliveira, Armando Luis; Cluydts, Raymond; Theuns, Peter
2010-01-01
The present studies were conducted to contribute to the debate on the interaction between circadian (C) and homeostatic (S) processes in models of sleep regulation. The Two-Process Model of Sleep Regulation assumes a linear relationship between processes S and C. However, recent elaborations of the model, based on data from forced desynchrony…
NASA Astrophysics Data System (ADS)
Rodriguez-Galiano, Victor; Aragones, David; Caparros-Santiago, Jose A.; Navarro-Cerrillo, Rafael M.
2017-10-01
Land surface phenology (LSP) can improve the characterisation of forest areas and their change processes. The aim of this work was: i) to characterise the temporal dynamics in Mediterranean Pinus forests, and ii) to evaluate the potential of LSP for species discrimination. The different experiments were based on 679 mono-specific plots for the 5 native species on the Iberian Peninsula: P. sylvestris, P. pinea, P. halepensis, P. nigra and P. pinaster. The entire MODIS NDVI time series (2000-2016) of the MOD13Q1 product was used to characterise phenology. The following phenological parameters were extracted: the start, end and median days of the season, and the length of the season in days, as well as the base value, maximum value, amplitude and integrated value. Multi-temporal metrics were calculated to synthesise the inter-annual variability of the phenological parameters. The species were discriminated by the application of Random Forest (RF) classifiers from different subsets of variables: model 1) NDVI-smoothed time series, model 2) multi-temporal metrics of the phenological parameters, and model 3) multi-temporal metrics and the auxiliary physical variables (altitude, slope, aspect and distance to the coastline). Model 3 performed best, with an overall accuracy of 82% and a kappa coefficient of 0.77; its most important variables were elevation, distance to the coast, and the end and start days of the growing season. The species with the largest errors was P. nigra (kappa = 0.45), which occurred at locations with behaviour similar to that of P. sylvestris or P. pinaster.
HZETRN radiation transport validation using balloon-based experimental data
NASA Astrophysics Data System (ADS)
Warner, James E.; Norman, Ryan B.; Blattnig, Steve R.
2018-05-01
The deterministic radiation transport code HZETRN (High charge (Z) and Energy TRaNsport) was developed by NASA to study the effects of cosmic radiation on astronauts and instrumentation shielded by various materials. This work presents an analysis of computed differential flux from HZETRN compared with measurement data from three balloon-based experiments over a range of atmospheric depths, particle types, and energies. Model uncertainties were quantified using an interval-based validation metric that takes into account measurement uncertainty both in the flux and the energy at which it was measured. Average uncertainty metrics were computed for the entire dataset as well as subsets of the measurements (by experiment, particle type, energy, etc.) to reveal any specific trends of systematic over- or under-prediction by HZETRN. The distribution of individual model uncertainties was also investigated to study the range and dispersion of errors beyond just single scalar and interval metrics. The differential fluxes from HZETRN were generally well-correlated with balloon-based measurements; the median relative model difference across the entire dataset was determined to be 30%. The distribution of model uncertainties, however, revealed that the range of errors was relatively broad, with approximately 30% of the uncertainties exceeding ± 40%. The distribution also indicated that HZETRN systematically under-predicts the measurement dataset as a whole, with approximately 80% of the relative uncertainties having negative values. Instances of systematic bias for subsets of the data were also observed, including a significant underestimation of alpha particles and protons for energies below 2.5 GeV/u. Muons were found to be systematically over-predicted at atmospheric depths deeper than 50 g/cm2 but under-predicted for shallower depths. Furthermore, a systematic under-prediction of alpha particles and protons was observed below the geomagnetic cutoff, suggesting that improvements to the light ion production cross sections in HZETRN should be investigated.
Creating a Data-Informed Culture in Community Colleges: A New Model for Educators
ERIC Educational Resources Information Center
Phillips, Brad C.; Horowitz, Jordan E.
2017-01-01
Brad C. Phillips and Jordan E. Horowitz offer a research-based model and actionable approach for using data strategically at community colleges to increase completion rates as well as other metrics linked to student success. They draw from the fields of psychology, neuroscience, and behavioral economics to show how leaders and administrators can…
Evaluation of mean climate in a chemistry-climate model simulation
NASA Astrophysics Data System (ADS)
Hong, S.; Park, H.; Wie, J.; Park, R.; Lee, S.; Moon, B. K.
2017-12-01
Incorporating interactive chemistry is essential for understanding chemistry-climate interactions and feedback processes in climate models. Here we assess a newly developed chemistry-climate model (GRIMs-Chem), which is based on the Global/Regional Integrated Model system (GRIMs) and includes the aerosol direct effect as well as stratospheric linearized ozone chemistry (LINOZ). We ran GRIMs-Chem with observed sea surface temperatures for the period 1979-2010, and compared the simulation results with observations and also with CMIP models. To measure the relative performance of our model, we define a quantitative performance metric based on the Taylor diagram. This metric allows us to assess overall performance across multiple simulated variables. Overall, our model reproduces the zonal-mean spatial patterns of temperature, horizontal wind, vertical motion, and relative humidity better than the other models. However, the model performed poorly in the upper troposphere (200 hPa), and it is currently unclear which model processes are responsible. Acknowledgements: This research was supported by the Korea Ministry of Environment (MOE) as the "Climate Change Correspondence Program."
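The abstract does not give the exact form of the metric; one commonly used choice for a Taylor-diagram-based score is the Taylor (2001) skill formula, sketched below as a plausible stand-in rather than the metric actually used for GRIMs-Chem.

```python
import numpy as np

def taylor_skill(model, obs, r0=1.0):
    """Taylor (2001)-style skill score:
    S = 4(1 + R) / ((sigma_hat + 1/sigma_hat)**2 * (1 + R0)),
    where R is the pattern correlation and sigma_hat the model/obs
    standard-deviation ratio."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    r = np.corrcoef(model, obs)[0, 1]
    sigma_hat = model.std() / obs.std()
    return 4.0 * (1.0 + r) / ((sigma_hat + 1.0 / sigma_hat) ** 2 * (1.0 + r0))
```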
Semantic Metrics for Analysis of Software
NASA Technical Reports Server (NTRS)
Etzkorn, Letha H.; Cox, Glenn W.; Farrington, Phil; Utley, Dawn R.; Ghalston, Sampson; Stein, Cara
2005-01-01
A recently conceived suite of object-oriented software metrics focuses on semantic aspects of software, in contradistinction to traditional software metrics, which focus on syntactic aspects. Semantic metrics represent a more human-oriented view of software than do syntactic metrics. The semantic metrics of a given computer program are calculated by use of the output of a knowledge-based analysis of the program, and are substantially more representative of software quality and more readily comprehensible from a human perspective than are the syntactic metrics.
Poulton, Barry C.; Graham, Jennifer L.; Rasmussen, Teresa J.; Stone, Mandy L.
2015-01-01
The Blue River Main wastewater treatment facility (WWTF) discharges into the upper Blue River (725 km2), and was recently upgraded to implement biological nutrient removal. We measured biotic condition upstream and downstream of the discharge using the macroinvertebrate protocol developed for Kansas streams. We examined responses of 34 metrics to determine the best indicators for discriminating site differences and for predicting biological condition. Significant differences between sites upstream and downstream of the discharge were identified for 15 metrics in April and 12 metrics in August. Upstream biotic condition scores were significantly greater than scores at both downstream sites in April (p = 0.02), and in August the most downstream site was classified as non-biologically supporting. Thirteen EPT taxa (Ephemeroptera, Plecoptera, Trichoptera) considered intolerant of degraded stream quality were absent at one or both downstream sites. Increases in tolerance metrics and filtering macroinvertebrates, and a decline in the ratio of scrapers to filterers, all indicated effects of increased nutrient enrichment. Stepwise regressions identified several significant models containing a suite of metrics with low redundancy (R2 = 0.90 - 0.99). Based on the rapid decline in biological condition downstream of the discharge, the level of nutrient removal resulting from the facility upgrade (10% - 20%) was not enough to mitigate negative effects on macroinvertebrate communities.
NASA Astrophysics Data System (ADS)
Harper, Bryan; Thomas, Dennis; Chikkagoudar, Satish; Baker, Nathan; Tang, Kaizhi; Heredia-Langner, Alejandro; Lins, Roberto; Harper, Stacey
2015-06-01
The integration of rapid assays, large datasets, informatics, and modeling can overcome current barriers in understanding nanomaterial structure-toxicity relationships by providing a weight-of-the-evidence mechanism to generate hazard rankings for nanomaterials. Here, we present the use of a rapid, low-cost assay to perform screening-level toxicity evaluations of nanomaterials in vivo. Calculated EZ Metric scores, a combined measure of morbidity and mortality in developing embryonic zebrafish, were established at realistic exposure levels and used to develop a hazard ranking of diverse nanomaterial toxicity. Hazard ranking and clustering analysis of 68 diverse nanomaterials revealed distinct patterns of toxicity related to both the core composition and outermost surface chemistry of nanomaterials. The resulting clusters guided the development of a surface chemistry-based model of gold nanoparticle toxicity. Our findings suggest that risk assessments based on the size and core composition of nanomaterials alone may be wholly inappropriate, especially when considering complex engineered nanomaterials. Research should continue to focus on methodologies for determining nanomaterial hazard based on multiple sub-lethal responses following realistic, low-dose exposures, thus increasing the availability of quantitative measures of nanomaterial hazard to support the development of nanoparticle structure-activity relationships.
Launch Vehicle Production and Operations Cost Metrics
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Neeley, James R.; Blackburn, Ruby F.
2014-01-01
Traditionally, launch vehicle cost has been evaluated based on $/kg to orbit. This metric is calculated based on assumptions not typically met by a specific mission. These assumptions include the specified orbit, whether Low Earth Orbit (LEO), Geostationary Earth Orbit (GEO), or both. The metric also assumes the payload utilizes the full lift mass of the launch vehicle, which is rarely true even with secondary payloads.1,2,3 Other approaches for cost metrics have been evaluated, including unit cost of the launch vehicle and an approach that considers the full program production and operations costs.4 Unit cost considers the variable cost of the vehicle, and the definition of variable cost is discussed. The full program production and operations costs include both the variable costs and the manufacturing base. This metric also distinguishes operations costs from production costs, including pre-flight operational testing. Operations costs also consider the costs of flight operations, including control center operation and maintenance. Each of these three cost metrics shows different sensitivities to various aspects of launch vehicle cost drivers. Comparing these metrics reveals the strengths and weaknesses of each, yielding an assessment useful for cost metric selection for launch vehicle programs.
Impact of climate change on global malaria distribution.
Caminade, Cyril; Kovats, Sari; Rocklov, Joacim; Tompkins, Adrian M; Morse, Andrew P; Colón-González, Felipe J; Stenlund, Hans; Martens, Pim; Lloyd, Simon J
2014-03-04
Malaria is an important disease that has a global distribution and significant health burden. The spatial limits of its distribution and seasonal activity are sensitive to climate factors, as well as the local capacity to control the disease. Malaria is also one of the few health outcomes that has been modeled by more than one research group and can therefore facilitate the first model intercomparison for health impacts under a future with climate change. We used bias-corrected temperature and rainfall simulations from the Coupled Model Intercomparison Project Phase 5 climate models to compare the metrics of five statistical and dynamical malaria impact models for three future time periods (2030s, 2050s, and 2080s). We evaluated three malaria outcome metrics at global and regional levels: climate suitability, additional population at risk and additional person-months at risk across the model outputs. The malaria projections were based on five different global climate models, each run under four emission scenarios (Representative Concentration Pathways, RCPs) and a single population projection. We also investigated the modeling uncertainty associated with future projections of populations at risk for malaria owing to climate change. Our findings show an overall global net increase in climate suitability and a net increase in the population at risk, but with large uncertainties. The model outputs indicate a net increase in the annual person-months at risk when comparing from RCP2.6 to RCP8.5 from the 2050s to the 2080s. The malaria outcome metrics were highly sensitive to the choice of malaria impact model, especially over the epidemic fringes of the malaria distribution.
Impact of climate change on global malaria distribution
Caminade, Cyril; Kovats, Sari; Rocklov, Joacim; Tompkins, Adrian M.; Morse, Andrew P.; Colón-González, Felipe J.; Stenlund, Hans; Martens, Pim; Lloyd, Simon J.
2014-01-01
Malaria is an important disease that has a global distribution and significant health burden. The spatial limits of its distribution and seasonal activity are sensitive to climate factors, as well as the local capacity to control the disease. Malaria is also one of the few health outcomes that has been modeled by more than one research group and can therefore facilitate the first model intercomparison for health impacts under a future with climate change. We used bias-corrected temperature and rainfall simulations from the Coupled Model Intercomparison Project Phase 5 climate models to compare the metrics of five statistical and dynamical malaria impact models for three future time periods (2030s, 2050s, and 2080s). We evaluated three malaria outcome metrics at global and regional levels: climate suitability, additional population at risk and additional person-months at risk across the model outputs. The malaria projections were based on five different global climate models, each run under four emission scenarios (Representative Concentration Pathways, RCPs) and a single population projection. We also investigated the modeling uncertainty associated with future projections of populations at risk for malaria owing to climate change. Our findings show an overall global net increase in climate suitability and a net increase in the population at risk, but with large uncertainties. The model outputs indicate a net increase in the annual person-months at risk when comparing from RCP2.6 to RCP8.5 from the 2050s to the 2080s. The malaria outcome metrics were highly sensitive to the choice of malaria impact model, especially over the epidemic fringes of the malaria distribution. PMID:24596427
Evaluative Usage-Based Metrics for the Selection of E-Journals.
ERIC Educational Resources Information Center
Hahn, Karla L.; Faulkner, Lila A.
2002-01-01
Explores electronic journal usage statistics and develops three metrics and three benchmarks based on those metrics. Topics include earlier work that assessed the value of print journals and was modified for the electronic format; the evaluation of potential purchases; and implications for standards development, including the need for content…
Validity of the two-level model for Viterbi decoder gap-cycle performance
NASA Technical Reports Server (NTRS)
Dolinar, S.; Arnold, S.
1990-01-01
A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.
Kivelä, Mikko; Arnaud-Haond, Sophie; Saramäki, Jari
2015-01-01
The recent application of graph-based network theory analysis to biogeography, community ecology and population genetics has created a need for user-friendly software, which would allow a wider accessibility to and adaptation of these methods. EDENetworks aims to fill this void by providing an easy-to-use interface for the whole analysis pipeline of ecological and evolutionary networks starting from matrices of species distributions, genotypes, bacterial OTUs or populations characterized genetically. The user can choose between several different ecological distance metrics, such as Bray-Curtis or Sorensen distance, or population genetic metrics such as FST or Goldstein distances, to turn the raw data into a distance/dissimilarity matrix. This matrix is then transformed into a network by manual or automatic thresholding based on percolation theory or by building the minimum spanning tree. The networks can be visualized along with auxiliary data and analysed with various metrics such as degree, clustering coefficient, assortativity and betweenness centrality. The statistical significance of the results can be estimated either by resampling the original biological data or by null models based on permutations of the data. © 2014 John Wiley & Sons Ltd.
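A rough sketch of the analysis pipeline described above (distance matrix, thresholding, node-level metrics) using SciPy and NetworkX; it illustrates the general workflow and is not the EDENetworks implementation.

```python
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

def distance_network(abundance, threshold):
    """Bray-Curtis distance matrix -> thresholded network -> node-level metrics."""
    d = squareform(pdist(abundance, metric="braycurtis"))   # pairwise distances
    g = nx.Graph()
    g.add_nodes_from(range(d.shape[0]))
    for i in range(d.shape[0]):
        for j in range(i + 1, d.shape[0]):
            if d[i, j] <= threshold:                        # keep links below the threshold
                g.add_edge(i, j, weight=d[i, j])
    return {
        "degree": dict(g.degree()),
        "clustering": nx.clustering(g),
        "betweenness": nx.betweenness_centrality(g, weight="weight"),
    }

# Usage: rows are sites or populations, columns are species/OTU abundances
metrics = distance_network(np.random.rand(20, 50), threshold=0.4)
```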
Retzlaff, Nancy; Stadler, Peter F
2018-06-21
Evolutionary processes have been described not only in biology but also for a wide range of human cultural activities including languages and law. In contrast to the evolution of DNA or protein sequences, the detailed mechanisms giving rise to the observed evolution-like processes are not or only partially known. The absence of a mechanistic model of evolution implies that it remains unknown how the distances between different taxa have to be quantified. Considering distortions of metric distances, we first show that poor choices of the distance measure can lead to incorrect phylogenetic trees. Based on the well-known fact that phylogenetic inference requires additive metrics, we then show that the correct phylogeny can be computed from a distance matrix D if there is a monotonic, subadditive function f such that f(D) is additive. The required metric-preserving transformation f can be computed as the solution of an optimization problem. This result shows that the problem of phylogeny reconstruction is well defined even if a detailed mechanistic model of the evolutionary process remains elusive.
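A small illustration of the additivity requirement mentioned above: the four-point condition is checked before and after a monotonic, subadditive transform. The power-law family used here is an illustrative choice, not the transformation class analyzed in the paper.

```python
import itertools
import numpy as np

def four_point_violation(d):
    """Largest violation of the four-point condition over all quartets;
    zero means the distance matrix is additive (tree-like)."""
    d = np.asarray(d, float)
    worst = 0.0
    for i, j, k, l in itertools.combinations(range(d.shape[0]), 4):
        sums = sorted([d[i, j] + d[k, l], d[i, k] + d[j, l], d[i, l] + d[j, k]])
        worst = max(worst, sums[2] - sums[1])   # the two largest sums should coincide
    return worst

def transformed_violation(d, alpha):
    """Apply a monotonic, subadditive power transform f(x) = x**alpha
    (0 < alpha <= 1) and score how close f(d) is to being additive."""
    return four_point_violation(np.power(d, alpha))
```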
NASA Technical Reports Server (NTRS)
Poteet, Carl C.; Blosser, Max L.
2001-01-01
A design of experiments approach has been implemented using computational hypervelocity impact simulations to determine the most effective place to add mass to an existing metallic Thermal Protection System (TPS) to improve hypervelocity impact protection. Simulations were performed using axisymmetric models in CTH, a shock-physics code developed by Sandia National Laboratories, and validated by comparison with existing test data. The axisymmetric models were then used in a statistical sensitivity analysis to determine the influence of five design parameters on the degree of hypervelocity particle dispersion. Several damage metrics were identified and evaluated. Damage metrics related to the extent of substructure damage were seen to produce misleading results; however, damage metrics related to the degree of dispersion of the hypervelocity particle produced results that corresponded to physical intuition. Based on analysis of variance results, it was concluded that the most effective way to increase hypervelocity impact resistance is to increase the thickness of the outer foil layer. Increasing the spacing between the outer surface and the substructure is also very effective at increasing dispersion.
Whittaker, Heather T; Zhu, Shenghua; Di Curzio, Domenico L; Buist, Richard; Li, Xin-Min; Noy, Suzanna; Wiseman, Frances K; Thiessen, Jonathan D; Martin, Melanie
2018-07-01
Alzheimer's disease (AD) pathology causes microstructural changes in the brain. These changes, if quantified with magnetic resonance imaging (MRI), could be studied for use as an early biomarker for AD. The aim of our study was to determine if T1 relaxation, diffusion tensor imaging (DTI), and quantitative magnetization transfer imaging (qMTI) metrics could reveal changes within the hippocampus and surrounding white matter structures in ex vivo transgenic mouse brains overexpressing human amyloid precursor protein with the Swedish mutation. Delineation of hippocampal cell layers using DTI color maps allows more detailed analysis of T1-weighted imaging, DTI, and qMTI metrics, compared with segmentation of gross anatomy based on relaxation images, and with analysis of DTI or qMTI metrics alone. These alterations are observed in the absence of robust intracellular Aβ accumulation or plaque deposition as revealed by histology. This work demonstrates that multiparametric quantitative MRI methods are useful for characterizing changes within the hippocampal substructures and surrounding white matter tracts of mouse models of AD. Copyright © 2018. Published by Elsevier Inc.
Bisphenol A (BPA) is a weakly estrogenic monomer used in the production of polycarbonate plastics and epoxy resins, both of which are used in food contact applications. A physiologically based pharmacokinetic (PBPK) model of BPA pharmacokinetics in rats and humans was developed t...
Bisphenol A (BPA) is a weakly estrogenic monomer used in the production of polycarbonate plastics and epoxy resins, both of which are used in food contact applications. A physiologically based pharmacokinetic (PBPK) model of BPA pharmacokinetics in rats and humans was developed ...
NASA Astrophysics Data System (ADS)
Holland, C.
2013-10-01
Developing validated models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. This tutorial will present an overview of the key guiding principles and practices for state-of-the-art validation studies, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. The primary focus of the talk will be the development of quantitative validation metrics, which are essential for moving beyond qualitative and subjective assessments of model performance and fidelity. Particular emphasis and discussion are given to (i) the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment, and (ii) the importance of robust uncertainty quantification and its inclusion within the metrics. To illustrate these concepts, we first review the structure and key insights gained from commonly used "global" transport model metrics (e.g. predictions of incremental stored energy or radially-averaged temperature), as well as their limitations. Building upon these results, a new form of turbulent transport metrics is then proposed, which focuses upon comparisons of predicted local gradients and fluctuation characteristics against observation. We demonstrate the utility of these metrics by applying them to simulations and modeling of a newly developed "validation database" derived from the results of a systematic, multi-year turbulent transport validation campaign on the DIII-D tokamak, in which comprehensive profile and fluctuation measurements have been obtained from a wide variety of heating and confinement scenarios. Finally, we discuss extensions of these metrics and their underlying design concepts to other areas of plasma confinement research, including both magnetohydrodynamic stability and integrated scenario modeling. Supported by the US DOE under DE-FG02-07ER54917 and DE-FC02-08ER54977.
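The specific DIII-D metrics are not reproduced here; the sketch below shows the generic ingredient the abstract emphasizes, namely a model-observation discrepancy normalized by the combined uncertainties of simulation and measurement.

```python
import numpy as np

def normalized_discrepancy(sim, sim_err, obs, obs_err):
    """Model-observation differences normalized by combined uncertainties,
    averaged over measurement channels; values near 1 indicate agreement
    within the stated errors."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    combined = np.sqrt(np.asarray(sim_err, float) ** 2 + np.asarray(obs_err, float) ** 2)
    return np.mean(np.abs(sim - obs) / combined)
```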
Using Publication Metrics to Highlight Academic Productivity and Research Impact
Carpenter, Christopher R.; Cone, David C.; Sarli, Cathy C.
2016-01-01
This article provides a broad overview of widely available measures of academic productivity and impact using publication data and highlights uses of these metrics for various purposes. Metrics based on publication data include measures such as number of publications, number of citations, the journal impact factor score, and the h-index, as well as emerging metrics based on document-level metrics. Publication metrics can be used for a variety of purposes, including tenure and promotion, grant applications and renewal reports, benchmarking, recruiting efforts, and administrative purposes for departmental or university performance reports. The authors also highlight practical applications of measuring and reporting academic productivity and impact to emphasize and promote individual investigators, grant applications, or department output. PMID:25308141
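As a concrete example of one of the metrics named above, a minimal h-index calculation (based on the metric's standard definition, not code from the article):

```python
def h_index(citations):
    """h-index: the largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

# A researcher with papers cited 10, 8, 5, 4, and 3 times has h = 4
assert h_index([10, 8, 5, 4, 3]) == 4
```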
Winslow, Luke A.; Hansen, Gretchen J. A.; Read, Jordan S.; Notaro, Michael
2017-01-01
Climate change has already influenced lake temperatures globally, but understanding future change is challenging. The response of lakes to changing climate drivers is complex due to the nature of lake-atmosphere coupling, ice cover, and stratification. To better understand the diversity of lake responses to climate change and give managers insight on individual lakes, we modelled daily water temperature profiles for 10,774 lakes in Michigan, Minnesota, and Wisconsin for contemporary (1979–2015) and future (2020–2040 and 2080–2100) time periods with climate models based on the Representative Concentration Pathway 8.5, the worst-case emission scenario. In addition to lake-specific daily simulated temperatures, we derived commonly used, ecologically relevant annual metrics of thermal conditions for each lake. We include all supporting lake-specific model parameters, meteorological drivers, and archived code for the model and derived metric calculations. This unique dataset offers landscape-level insight into the impact of climate change on lakes.
Winslow, Luke A.; Hansen, Gretchen J.A.; Read, Jordan S; Notaro, Michael
2017-01-01
Climate change has already influenced lake temperatures globally, but understanding future change is challenging. The response of lakes to changing climate drivers is complex due to the nature of lake-atmosphere coupling, ice cover, and stratification. To better understand the diversity of lake responses to climate change and give managers insight on individual lakes, we modelled daily water temperature profiles for 10,774 lakes in Michigan, Minnesota, and Wisconsin for contemporary (1979–2015) and future (2020–2040 and 2080–2100) time periods with climate models based on the Representative Concentration Pathway 8.5, the worst-case emission scenario. In addition to lake-specific daily simulated temperatures, we derived commonly used, ecologically relevant annual metrics of thermal conditions for each lake. We include all supporting lake-specific model parameters, meteorological drivers, and archived code for the model and derived metric calculations. This unique dataset offers landscape-level insight into the impact of climate change on lakes. PMID:28440790
NASA Astrophysics Data System (ADS)
Winslow, Luke A.; Hansen, Gretchen J. A.; Read, Jordan S.; Notaro, Michael
2017-04-01
Climate change has already influenced lake temperatures globally, but understanding future change is challenging. The response of lakes to changing climate drivers is complex due to the nature of lake-atmosphere coupling, ice cover, and stratification. To better understand the diversity of lake responses to climate change and give managers insight on individual lakes, we modelled daily water temperature profiles for 10,774 lakes in Michigan, Minnesota, and Wisconsin for contemporary (1979-2015) and future (2020-2040 and 2080-2100) time periods with climate models based on the Representative Concentration Pathway 8.5, the worst-case emission scenario. In addition to lake-specific daily simulated temperatures, we derived commonly used, ecologically relevant annual metrics of thermal conditions for each lake. We include all supporting lake-specific model parameters, meteorological drivers, and archived code for the model and derived metric calculations. This unique dataset offers landscape-level insight into the impact of climate change on lakes.
Zhang, Jie; Hodge, Bri -Mathias; Lu, Siyuan; ...
2015-11-10
Accurate solar photovoltaic (PV) power forecasting allows utilities to reliably utilize solar resources on their systems. However, to truly measure the improvements that any new solar forecasting methods provide, it is important to develop a methodology for determining baseline and target values for the accuracy of solar forecasting at different spatial and temporal scales. This paper aims at developing a framework to derive baseline and target values for a suite of generally applicable, value-based, and custom-designed solar forecasting metrics. The work was informed by close collaboration with utility and independent system operator partners. The baseline values are established based on state-of-the-art numerical weather prediction models and persistence models in combination with a radiative transfer model. The target values are determined based on the reduction in the amount of reserves that must be held to accommodate the uncertainty of PV power output. The proposed reserve-based methodology is a reasonable and practical approach that can be used to assess the economic benefits gained from improvements in accuracy of solar forecasting. Lastly, the financial baseline and targets can be translated back to forecasting accuracy metrics and requirements, which will guide research on solar forecasting improvements toward the areas that are most beneficial to power systems operations.
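A minimal sketch of a persistence baseline and a few common accuracy measures of the kind such baselines are scored with; normalizing by plant capacity is an illustrative choice and not necessarily one of the paper's value-based metrics.

```python
import numpy as np

def persistence_forecast(power, horizon):
    """Persistence baseline: the forecast h steps ahead equals the last observation."""
    return power[:-horizon], power[horizon:]   # (forecasts, matching actuals)

def accuracy_metrics(forecast, actual, capacity):
    """Simple accuracy measures; nRMSE is normalized by plant capacity."""
    err = forecast - actual
    return {
        "MAE": np.mean(np.abs(err)),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "nRMSE_%": 100.0 * np.sqrt(np.mean(err ** 2)) / capacity,
    }

# Usage with hypothetical hourly PV output (MW) and a 100 MW plant
power = np.array([0.0, 12.0, 35.0, 60.0, 72.0, 65.0, 40.0, 10.0])
fcst, act = persistence_forecast(power, horizon=1)
print(accuracy_metrics(fcst, act, capacity=100.0))
```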
NASA Astrophysics Data System (ADS)
Manuri, Solichin; Andersen, Hans-Erik; McGaughey, Robert J.; Brack, Cris
2017-04-01
The airborne lidar system (ALS) provides a means to efficiently monitor the status of remote tropical forests and continues to be the subject of intense evaluation. However, the cost of ALS acquisition can vary significantly depending on the acquisition parameters, particularly the return density (i.e., spatial resolution) of the lidar point cloud. This study assessed the effect of lidar return density on the accuracy of lidar metrics and regression models for estimating aboveground biomass (AGB) and basal area (BA) in tropical peat swamp forests (PSF) in Kalimantan, Indonesia. A large ALS dataset covering an area of 123,000 ha was used in this study. This study found that cumulative return proportion (CRP) variables represent the accumulation of AGB over tree height better than height-related variables do. The CRP variables in power models explained 80.9% and 90.9% of the BA and AGB variations, respectively. Further, it was found that low-density (and low-cost) lidar should be considered a feasible option for assessing AGB and BA in vast areas of flat, lowland PSF. The models generated using reduced return densities as low as 1/9 returns per m2 also yielded strong agreement with the original high-density data. The use of model-based statistical inference enabled relatively precise estimates of the mean AGB at the landscape scale to be obtained with a fairly low density of 1/4 returns per m2, with less than 10% standard error (SE). Further, even when very low-density lidar data were used (i.e., 1/49 returns per m2), the bias of the mean AGB estimates was still less than 10%, with a SE of approximately 15%. This study also investigated the influence of different DTM resolutions for normalizing elevation during the generation of forest-related lidar metrics from point clouds of various return densities. We found that a high-resolution digital terrain model (DTM) had little effect on the accuracy of lidar metric calculation in PSF. The accuracy of low-density lidar metrics in PSF was influenced more by the density of aboveground returns than by that of the last returns. This is due to the flat topography of the study area. The results of this study will be valuable for future economical and feasible assessments of forest metrics over large areas of tropical peat swamp ecosystems.
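A sketch of the two ingredients described above: computing a cumulative return proportion above an assumed height cutoff and fitting a power model AGB = a * CRP^b by log-log regression. The cutoff, plot values, and units are hypothetical.

```python
import numpy as np

def cumulative_return_proportion(heights, cutoff):
    """Fraction of a plot's lidar returns at or above a height cutoff (m)."""
    return np.mean(np.asarray(heights, float) >= cutoff)

def fit_power_model(crp, agb):
    """Fit AGB = a * CRP**b by ordinary least squares in log-log space."""
    b, log_a = np.polyfit(np.log(crp), np.log(agb), 1)
    return np.exp(log_a), b

# Usage with hypothetical plot-level values (CRP fractions, AGB in Mg/ha)
a, b = fit_power_model(np.array([0.3, 0.5, 0.7]), np.array([120.0, 240.0, 410.0]))
```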
Dong, Bing; Li, Yan; Han, Xin-Li; Hu, Bin
2016-09-02
For high-speed aircraft, a conformal window is used to optimize the aerodynamic performance. However, the local shape of the conformal window leads to large amounts of dynamic aberrations varying with look angle. In this paper, a deformable mirror (DM) and model-based wavefront sensorless adaptive optics (WSLAO) are used for dynamic aberration correction of an infrared remote sensor equipped with a conformal window and scanning mirror. In model-based WSLAO, aberrations are captured using Lukosz modes, and we use the low spatial frequency content of the image spectral density as the metric function. Simulations show that aberrations induced by the conformal window are dominated by some low-order Lukosz modes. To optimize the dynamic correction, we can correct only the dominant Lukosz modes, and the image size can be minimized to reduce the time required to compute the metric function. In our experiment, a 37-channel DM is used to mimic the dynamic aberration of a conformal window with a scanning rate of 10 degrees per second. A 52-channel DM is used for correction. For a 128 × 128 image, the mean value of image sharpness during dynamic correction is 1.436 × 10⁻⁵ in the optimized correction and 1.427 × 10⁻⁵ in the un-optimized correction. We also demonstrated that model-based WSLAO can achieve convergence two times faster than the traditional stochastic parallel gradient descent (SPGD) method.
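A sketch of the kind of metric function described above, measuring the low spatial frequency content of the image power spectrum; the cutoff radius is an assumed parameter, not a value from the paper.

```python
import numpy as np

def low_frequency_metric(image, radius=8):
    """Fraction of image power-spectrum energy inside a low-frequency disc,
    the kind of spectral-density metric used to drive the correction."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    ny, nx = image.shape
    y, x = np.ogrid[:ny, :nx]
    mask = (y - ny // 2) ** 2 + (x - nx // 2) ** 2 <= radius ** 2
    return spectrum[mask].sum() / spectrum.sum()

# Usage on a hypothetical 128 x 128 infrared frame
metric = low_frequency_metric(np.random.rand(128, 128), radius=8)
```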
Uncertainty in temperature response of current consumption-based emissions estimates
NASA Astrophysics Data System (ADS)
Karstensen, J.; Peters, G. P.; Andrew, R. M.
2014-09-01
Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties in the end results. We estimate uncertainties in economic data, multi-pollutant emission statistics and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, the mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. The economic data have a relatively small impact on uncertainty at the global and national level, while much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production-based emissions, since the largest uncertainties are due to the metric and the emissions, which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties, due to changes in pollutant composition. We find global sectoral consumption uncertainties in the range of ±9-±27% using the global temperature potential with a 50-year time horizon, with metric uncertainties dominating. National-level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants. The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9-±25%, with metric and emissions uncertainties contributing similarly. The absolute global temperature potential with a 50-year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.
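A schematic Monte Carlo propagation of emission and metric-parameter uncertainty to a metric-weighted total, in the spirit of the analysis described above; all input values, distributions, and the GTP-50 spread are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000  # Monte Carlo draws

# Hypothetical inputs: one region's CO2 and CH4 emissions with assumed
# uncertainties, and an assumed spread on the GTP-50 weighting for CH4.
co2 = rng.normal(1.0e9, 0.05e9, n)       # tonnes CO2
ch4 = rng.normal(2.0e6, 0.3e6, n)        # tonnes CH4
gtp50_ch4 = rng.normal(14.0, 5.0, n)     # GTP-50 weighting for CH4 (illustrative)

co2_eq = co2 + ch4 * gtp50_ch4           # GTP-weighted CO2-equivalent total
low, high = np.percentile(co2_eq, [5, 95])
half_range_pct = 100.0 * (high - low) / (2.0 * np.median(co2_eq))
print(f"GTP-50 weighted total: {np.median(co2_eq):.3e} t CO2-eq, ±{half_range_pct:.0f}%")
```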
Poodat, Fatemeh; Arrowsmith, Colin; Fraser, David; Gordon, Ascelin
2015-09-01
Connectivity among fragmented areas of habitat has long been acknowledged as important for the viability of biological conservation, especially within highly modified landscapes. Identifying habitat patches that are important for ecological connectivity is a priority for many conservation strategies, and the application of 'graph theory' has been shown to provide useful information on connectivity. Despite the large number of connectivity metrics derived from graph theory, only a small number have been compared in terms of the importance they assign to nodes in a network. This paper presents a study that aims to define a new set of metrics and compare them with traditional graph-based metrics used in the prioritization of habitat patches for ecological connectivity. The metrics measured consist of "topological" metrics, "ecological" metrics, and "integrated" metrics; integrated metrics are a combination of topological and ecological metrics. Eight metrics were applied to the habitat network for the fat-tailed dunnart within Greater Melbourne, Australia. A non-directional network was developed in which nodes were linked to adjacent nodes. These links were then weighted by the effective distance between patches. By applying each of the eight metrics to the study network, nodes were ranked according to their contribution to the overall network connectivity. The structured comparison revealed the similarities and differences in the way the habitat for the fat-tailed dunnart was ranked based on different classes of metrics. Because the metrics operate differently, a suitable metric should be chosen that best meets the objectives established by the decision maker.
2009-08-01
Category: Neighbors and Stakeholders (NS). Conceptual metric NS1: "Walkable" on-base community design (clustering of facilities, presence of sidewalks, reduced need for a car, and access to public transit), scored by adapting LEED for Neighborhood Development (ND), a 0-100 index based on walkable-community indicators.
Elementary Metric Curriculum - Project T.I.M.E. (Timely Implementation of Metric Education). Part I.
ERIC Educational Resources Information Center
Community School District 18, Brooklyn, NY.
This is a teacher's manual for an ISS-based elementary school course in the metric system. Behavioral objectives and student activities are included. The topics covered include: (1) linear measurement; (2) metric-decimal relationships; (3) metric conversions; (4) geometry; (5) scale drawings; and (6) capacity. This is the first of a two-part…
Rank Order Entropy: why one metric is not enough
McLellan, Margaret R.; Ryan, M. Dominic; Breneman, Curt M.
2011-01-01
The use of Quantitative Structure-Activity Relationship models to address problems in drug discovery has a mixed history, generally resulting from the mis-application of QSAR models that were either poorly constructed or used outside of their domains of applicability. This situation has motivated the development of a variety of model performance metrics (r2, PRESS r2, F-tests, etc) designed to increase user confidence in the validity of QSAR predictions. In a typical workflow scenario, QSAR models are created and validated on training sets of molecules using metrics such as Leave-One-Out or many-fold cross-validation methods that attempt to assess their internal consistency. However, few current validation methods are designed to directly address the stability of QSAR predictions in response to changes in the information content of the training set. Since the main purpose of QSAR is to quickly and accurately estimate a property of interest for an untested set of molecules, it makes sense to have a means at hand to correctly set user expectations of model performance. In fact, the numerical value of a molecular prediction is often less important to the end user than knowing the rank order of that set of molecules according to their predicted endpoint values. Consequently, a means for characterizing the stability of predicted rank order is an important component of predictive QSAR. Unfortunately, none of the many validation metrics currently available directly measure the stability of rank order prediction, making the development of an additional metric that can quantify model stability a high priority. To address this need, this work examines the stabilities of QSAR rank order models created from representative data sets, descriptor sets, and modeling methods that were then assessed using Kendall Tau as a rank order metric, upon which the Shannon Entropy was evaluated as a means of quantifying rank-order stability. Random removal of data from the training set, also known as Data Truncation Analysis (DTA), was used as a means for systematically reducing the information content of each training set while examining both rank order performance and rank order stability in the face of training set data loss. The premise for DTA ROE model evaluation is that the response of a model to incremental loss of training information will be indicative of the quality and sufficiency of its training set, learning method, and descriptor types to cover a particular domain of applicability. This process is termed a “rank order entropy” evaluation, or ROE. By analogy with information theory, an unstable rank order model displays a high level of implicit entropy, while a QSAR rank order model which remains nearly unchanged during training set reductions would show low entropy. In this work, the ROE metric was applied to 71 data sets of different sizes, and was found to reveal more information about the behavior of the models than traditional metrics alone. Stable, or consistently performing models, did not necessarily predict rank order well. Models that performed well in rank order did not necessarily perform well in traditional metrics. In the end, it was shown that ROE metrics suggested that some QSAR models that are typically used should be discarded. ROE evaluation helps to discern which combinations of data set, descriptor set, and modeling methods lead to usable models in prioritization schemes, and provides confidence in the use of a particular model within a specific domain of applicability. PMID:21875058
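A minimal sketch of the Data Truncation Analysis idea described above, assuming a generic ridge-regression QSAR model: refit on randomly truncated training sets, score rank order with Kendall tau, and summarize the spread of tau with a Shannon entropy. The model choice, bin count, and truncation fraction are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.linear_model import Ridge

def rank_order_entropy(X_train, y_train, X_test, y_test,
                       drop_fraction=0.2, n_trials=50, n_bins=10, seed=0):
    """DTA sketch: Shannon entropy of the Kendall-tau distribution obtained by
    refitting on randomly truncated training sets and re-ranking the test set."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    keep = int(round((1.0 - drop_fraction) * n))
    taus = []
    for _ in range(n_trials):
        idx = rng.choice(n, size=keep, replace=False)       # truncated training set
        model = Ridge().fit(X_train[idx], y_train[idx])
        tau, _ = kendalltau(model.predict(X_test), y_test)  # rank-order agreement
        taus.append(tau)
    counts, _ = np.histogram(taus, bins=n_bins, range=(-1.0, 1.0))
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))   # low entropy = stable rank ordering under data loss
```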
Effects of metric hierarchy and rhyme predictability on word duration in The Cat in the Hat.
Breen, Mara
2018-05-01
Word durations convey many types of linguistic information, including intrinsic lexical features like length and frequency and contextual features like syntactic and semantic structure. The current study was designed to investigate whether hierarchical metric structure and rhyme predictability account for durational variation over and above other features in productions of a rhyming, metrically-regular children's book: The Cat in the Hat (Dr. Seuss, 1957). One-syllable word durations and inter-onset intervals were modeled as functions of segment number, lexical frequency, word class, syntactic structure, repetition, and font emphasis. Consistent with prior work, factors predicting longer word durations and inter-onset intervals included more phonemes, lower frequency, first mention, alignment with a syntactic boundary, and capitalization. A model parameter corresponding to metric grid height improved model fit of word durations and inter-onset intervals. Specifically, speakers realized five levels of metric hierarchy with inter-onset intervals such that interval duration increased linearly with increased height in the metric hierarchy. Conversely, speakers realized only three levels of metric hierarchy with word duration, demonstrating that they shortened the highly predictable rhyme resolutions. These results further understanding of the factors that affect spoken word duration, and demonstrate the myriad cues that children receive about linguistic structure from nursery rhymes. Copyright © 2018 Elsevier B.V. All rights reserved.
An ice sheet model validation framework for the Greenland ice sheet
NASA Astrophysics Data System (ADS)
Price, Stephen F.; Hoffman, Matthew J.; Bonin, Jennifer A.; Howat, Ian M.; Neumann, Thomas; Saba, Jack; Tezaur, Irina; Guerber, Jeffrey; Chambers, Don P.; Evans, Katherine J.; Kennedy, Joseph H.; Lenaerts, Jan; Lipscomb, William H.; Perego, Mauro; Salinger, Andrew G.; Tuminaro, Raymond S.; van den Broeke, Michiel R.; Nowicki, Sophie M. J.
2017-01-01
We propose a new ice sheet model validation framework - the Cryospheric Model Comparison Tool (CmCt) - that takes advantage of ice sheet altimetry and gravimetry observations collected over the past several decades and is applied here to modeling of the Greenland ice sheet. We use realistic simulations performed with the Community Ice Sheet Model (CISM) along with two idealized, non-dynamic models to demonstrate the framework and its use. Dynamic simulations with CISM are forced from 1991 to 2013, using combinations of reanalysis-based surface mass balance and observations of outlet glacier flux change. We propose and demonstrate qualitative and quantitative metrics for use in evaluating the different model simulations against the observations. We find that the altimetry observations used here are largely ambiguous in terms of their ability to distinguish one simulation from another. Based on basin-scale and whole-ice-sheet-scale metrics, we find that simulations using both idealized conceptual models and dynamic, numerical models provide an equally reasonable representation of the ice sheet surface (mean elevation differences of < 1 m). This is likely due to their short period of record, biases inherent to digital elevation models used for model initial conditions, and biases resulting from firn dynamics, which are not explicitly accounted for in the models or observations. On the other hand, we find that the gravimetry observations used here are able to unambiguously distinguish between simulations of varying complexity, and along with the CmCt, can provide a quantitative score for assessing a particular model and/or simulation. The new framework demonstrates that our proposed metrics can distinguish relatively better from relatively worse simulations and that dynamic ice sheet models, when appropriately initialized and forced with the right boundary conditions, demonstrate a predictive skill with respect to observed dynamic changes that have occurred on Greenland over the past few decades. An extensible design will allow for continued use of the CmCt as future altimetry, gravimetry, and other remotely sensed data become available for use in ice sheet model validation.
An ice sheet model validation framework for the Greenland ice sheet
Price, Stephen F.; Hoffman, Matthew J.; Bonin, Jennifer A.; Howat, Ian M.; Neumann, Thomas; Saba, Jack; Tezaur, Irina; Guerber, Jeffrey; Chambers, Don P.; Evans, Katherine J.; Kennedy, Joseph H.; Lenaerts, Jan; Lipscomb, William H.; Perego, Mauro; Salinger, Andrew G.; Tuminaro, Raymond S.; van den Broeke, Michiel R.; Nowicki, Sophie M. J.
2018-01-01
We propose a new ice sheet model validation framework – the Cryospheric Model Comparison Tool (CmCt) – that takes advantage of ice sheet altimetry and gravimetry observations collected over the past several decades and is applied here to modeling of the Greenland ice sheet. We use realistic simulations performed with the Community Ice Sheet Model (CISM) along with two idealized, non-dynamic models to demonstrate the framework and its use. Dynamic simulations with CISM are forced from 1991 to 2013 using combinations of reanalysis-based surface mass balance and observations of outlet glacier flux change. We propose and demonstrate qualitative and quantitative metrics for use in evaluating the different model simulations against the observations. We find that the altimetry observations used here are largely ambiguous in terms of their ability to distinguish one simulation from another. Based on basin- and whole-ice-sheet scale metrics, we find that simulations using both idealized conceptual models and dynamic, numerical models provide an equally reasonable representation of the ice sheet surface (mean elevation differences of <1 m). This is likely due to their short period of record, biases inherent to digital elevation models used for model initial conditions, and biases resulting from firn dynamics, which are not explicitly accounted for in the models or observations. On the other hand, we find that the gravimetry observations used here are able to unambiguously distinguish between simulations of varying complexity, and along with the CmCt, can provide a quantitative score for assessing a particular model and/or simulation. The new framework demonstrates that our proposed metrics can distinguish relatively better from relatively worse simulations and that dynamic ice sheet models, when appropriately initialized and forced with the right boundary conditions, demonstrate predictive skill with respect to observed dynamic changes occurring on Greenland over the past few decades. An extensible design will allow for continued use of the CmCt as future altimetry, gravimetry, and other remotely sensed data become available for use in ice sheet model validation. PMID:29697704