Person Re-Identification via Distance Metric Learning With Latent Variables.
Sun, Chong; Wang, Dong; Lu, Huchuan
2017-01-01
In this paper, we propose an effective person re-identification method with latent variables, which represents a pedestrian as a mixture of a holistic model and a number of flexible models. Three types of latent variables are introduced to model uncertain factors in the re-identification problem: vertical misalignments, horizontal misalignments, and leg posture variations. The distance between two pedestrians can be determined by minimizing a given distance function with respect to the latent variables, and then used to conduct the re-identification task. In addition, we develop a latent metric learning method for learning an effective metric matrix, which is solved in an iterative manner: once the latent information is specified, the metric matrix can be obtained by standard metric learning methods; with the computed metric matrix, the latent variables can be determined by searching the state space exhaustively. Finally, extensive experiments are conducted on seven databases to evaluate the proposed method. The experimental results demonstrate that our method achieves better performance than other competing algorithms.
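The alternating structure described above (fix the latent states, learn the metric; fix the metric, search the latent space exhaustively) can be sketched compactly. A minimal illustration, assuming a small discrete latent space of vertical/horizontal shifts and a hypothetical `extract_features` function; this is not the paper's implementation:

```python
import numpy as np
from itertools import product

# Hypothetical latent space: small vertical/horizontal shifts (pixels).
SHIFTS = list(product([-4, 0, 4], [-2, 0, 2]))

def extract_features(image, shift):
    """Placeholder feature extractor applied to a shifted crop."""
    dy, dx = shift
    return np.roll(image, (dy, dx), axis=(0, 1)).ravel()

def latent_distance(img_a, img_b, M):
    """Distance between two pedestrians: minimize the Mahalanobis
    distance over all pairs of latent shift states."""
    best = np.inf
    for sa in SHIFTS:
        fa = extract_features(img_a, sa)
        for sb in SHIFTS:
            d = fa - extract_features(img_b, sb)
            best = min(best, float(d @ M @ d))
    return best
```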
Extrapolation of Functions of Many Variables by Means of Metric Analysis
NASA Astrophysics Data System (ADS)
Kryanev, Alexandr; Ivanov, Victor; Romanova, Anastasiya; Sevastianov, Leonid; Udumyan, David
2018-02-01
The paper considers the problem of extrapolating functions of several variables. It is assumed that the values of a function of m variables are given at a finite number of points in some domain D of the m-dimensional space, and the value of the function must be restored at points outside the domain D. The paper proposes a fundamentally new method for extrapolating functions of several variables, built on the interpolation scheme of metric analysis. The scheme consists of two stages. In the first stage, using metric analysis, the function is interpolated at the points of the domain D belonging to the segment of the straight line connecting the center of the domain D with the point M at which the value of the function is to be restored. In the second stage, based on an autoregression model and metric analysis, the function values are predicted along the above straight-line segment beyond the domain D up to the point M. A numerical example demonstrates the efficiency of the method under consideration.
Efficient dual approach to distance metric learning.
Shen, Chunhua; Kim, Junae; Liu, Fayao; Wang, Lei; van den Hengel, Anton
2014-02-01
Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst-case complexity of solving an SDP problem involving a matrix variable of size D×D with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, interior-point methods can practically solve only problems with fewer than a few thousand variables. Because the number of variables is D(D+1)/2, this limits the problems that can practically be solved to around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to solve more general Frobenius-norm-regularized SDP problems approximately.
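The O(D^3) per-iteration cost of dual-style methods stems from eigendecompositions rather than interior-point SDP solves. A minimal sketch of the key primitive, projection of a symmetric matrix onto the positive semidefinite cone (the paper's dual algorithm is more involved; this only illustrates where the O(D^3) scaling comes from):

```python
import numpy as np

def project_psd(A):
    """Project a symmetric matrix onto the PSD cone by clipping
    negative eigenvalues; one eigendecomposition costs O(D^3)."""
    w, V = np.linalg.eigh((A + A.T) / 2)  # symmetrize for numerical safety
    return (V * np.clip(w, 0, None)) @ V.T
```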
ERIC Educational Resources Information Center
Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.
2015-01-01
We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…
Variable-Metric Algorithm For Constrained Optimization
NASA Technical Reports Server (NTRS)
Frick, James D.
1989-01-01
Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear computer program developed to calculate the least value of a function of n variables subject to general constraints, both equality and inequality. The first set of constraints are equalities and the remaining constraints are inequalities. The program uses an iterative method to seek the optimal solution. Written in ANSI Standard FORTRAN 77.
Comparing Resource Adequacy Metrics and Their Influence on Capacity Value: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibanez, E.; Milligan, M.
2014-04-01
Traditional probabilistic methods have been used to evaluate resource adequacy. The increasing presence of variable renewable generation in power systems presents a challenge to these methods because, unlike thermal units, variable renewable generation levels change over time as they are driven by meteorological events. Thus, capacity value calculations for these resources are often performed according to simple rules of thumb. This paper follows the recommendations of the North American Electric Reliability Corporation's Integration of Variable Generation Task Force to include variable generation in the calculation of resource adequacy and compares different reliability metrics. Examples are provided using the Western Interconnection footprint under different variable generation penetrations.
A New Metric for Land-Atmosphere Coupling Strength: Applications on Observations and Modeling
NASA Astrophysics Data System (ADS)
Tang, Q.; Xie, S.; Zhang, Y.; Phillips, T. J.; Santanello, J. A., Jr.; Cook, D. R.; Riihimaki, L.; Gaustad, K.
2017-12-01
A new metric is proposed to quantify the land-atmosphere (LA) coupling strength; it is constructed by correlating the surface evaporative fraction with the impacting land and atmosphere variables (e.g., soil moisture, vegetation, and radiation). Based upon multiple linear regression, this approach simultaneously considers multiple factors and thus represents complex LA coupling mechanisms better than existing single-variable metrics. The standardized regression coefficients quantify the relative contributions from individual drivers in a consistent manner, avoiding the potential inconsistency in relative influence of conventional metrics. Moreover, the extendable design of the new method allows us to verify and explore potentially important coupling mechanisms. Our observation-based application of the new metric shows moderate coupling with large spatial variations at the U.S. Southern Great Plains. The relative importance of soil moisture vs. vegetation varies by location. We also show that LA coupling strength is generally underestimated by single-variable methods due to their incompleteness. We further apply this new metric to evaluate the representation of LA coupling in the Accelerated Climate Modeling for Energy (ACME) V1 Contiguous United States (CONUS) regionally refined model (RRM). This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-734201
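The core computation, standardized multiple-regression coefficients of evaporative fraction on candidate drivers, can be sketched as follows. A minimal sketch assuming hourly site observations in NumPy arrays; variable names are illustrative, not from the paper:

```python
import numpy as np

def standardized_coefficients(y, X):
    """Multiple linear regression on z-scored variables; the returned
    coefficients quantify each driver's relative contribution."""
    Xz = (X - X.mean(0)) / X.std(0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

# Illustrative drivers: soil moisture, vegetation index, downwelling radiation.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = 0.5 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.3, size=1000)
print(standardized_coefficients(y, X))  # roughly [0.8, 0.32, 0.16]
```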
Application of Bounded Linear Stability Analysis Method for Metrics-Driven Adaptive Control
NASA Technical Reports Server (NTRS)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje
2009-01-01
This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method for metrics-driven adaptive control. The BLSA method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by some stability metrics to achieve robustness. Through the BLSA method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. The metrics-driven adaptive control approach is evaluated for a second-order system that represents the pitch attitude control of a generic transport aircraft. The analysis shows that the system with the metrics-conforming variable adaptive gain becomes more robust to unmodeled dynamics or time delay. The effect of the analysis time window for BLSA on meeting the stability margin criteria is also evaluated.
Paixão, Paulo; Gouveia, Luís F; Silva, Nuno; Morais, José A G
2017-03-01
A simulation study is presented, evaluating the performance of the f2, the model-independent multivariate statistical distance, and the bootstrap f2 methods in their ability to conclude similarity between two dissolution profiles. Different dissolution profiles, based on the Noyes-Whitney equation and ranging over theoretical f2 values between 100 and 40, were simulated. Variability was introduced into the dissolution model parameters in increasing order, ranging from a situation complying with the European guidelines' requirements for the use of the f2 metric to several situations where the f2 metric could no longer be used. Results show that the f2 is an acceptable metric when used according to the regulatory requirements, but loses its applicability as variability increases. The multivariate statistical distance presented contradictory results in several of the simulation scenarios, which makes it an unreliable metric for dissolution profile comparisons. The bootstrap f2, although conservative in its conclusions, is a suitable alternative method. Overall, as variability increases, all of the discussed methods reveal problems that can only be solved by increasing the number of dosage form units used in the comparison, which is usually not practical or feasible. Additionally, experimental corrective measures may be undertaken to reduce the overall variability, particularly when it is shown that the variability is mainly due to the dissolution assessment rather than being intrinsic to the dosage form.
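For reference, the f2 similarity factor has a standard closed form, f2 = 50·log10(100/√(1 + mean squared difference)), and the bootstrap variant resamples dosage-form units before averaging. A minimal sketch, assuming `ref` and `test` are unit-by-timepoint arrays of percent dissolved (the 5th-percentile decision bound is one common convention, not necessarily the paper's):

```python
import numpy as np

def f2(ref_mean, test_mean):
    """Similarity factor between two mean dissolution profiles (%)."""
    msd = np.mean((ref_mean - test_mean) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

def bootstrap_f2(ref, test, n_boot=5000, seed=0):
    """Resample units with replacement; return the 5th percentile of f2
    as a conservative decision bound."""
    rng = np.random.default_rng(seed)
    n_r, n_t = len(ref), len(test)
    stats = [f2(ref[rng.integers(0, n_r, n_r)].mean(0),
                test[rng.integers(0, n_t, n_t)].mean(0))
             for _ in range(n_boot)]
    return np.percentile(stats, 5)
```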
NASA Astrophysics Data System (ADS)
Kaiya, Haruhiko; Osada, Akira; Kaijiri, Kenji
We present a method to identify stakeholders and their preferences about non-functional requirements (NFR) by using use case diagrams of existing systems. We focus on changes in NFR because such changes help stakeholders identify their preferences. Comparing different use case diagrams of the same domain helps us find the changes that are likely to occur. We utilize the Goal-Question-Metric (GQM) method to identify variables that characterize NFR, so that changes in NFR can be represented systematically using these variables. Use cases that represent system interactions help us bridge the gap between goals and metrics (variables), so that measurable NFR can be constructed easily. For validation and evaluation, we applied our method to the application domain of Mail User Agent (MUA) systems.
Manifold Preserving: An Intrinsic Approach for Semisupervised Distance Metric Learning.
Ying, Shihui; Wen, Zhijie; Shi, Jun; Peng, Yaxin; Peng, Jigen; Qiao, Hong
2017-05-18
In this paper, we address the semisupervised distance metric learning problem and its applications in classification and image retrieval. First, we formulate a semisupervised distance metric learning model by considering the metric information of inner classes and interclasses. In this model, an adaptive parameter is designed to balance the inner metrics and intermetrics by using the data structure. Second, we convert the model to a minimization problem whose variable is a symmetric positive-definite matrix. Third, in the implementation, we deduce an intrinsic steepest descent method that assures the metric matrix is strictly symmetric positive-definite at each iteration, by exploiting the manifold structure of the symmetric positive-definite matrix manifold. Finally, we test the proposed algorithm on conventional data sets and compare it with four other representative methods. The numerical results validate that the proposed method significantly improves the classification with the same computational efficiency.
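An intrinsic descent step that stays strictly positive-definite by construction can be illustrated with the standard exponential-map retraction on the SPD manifold. A generic sketch of a geodesic step under the affine-invariant metric, assuming a symmetric Euclidean gradient `grad`; this is not the paper's exact algorithm:

```python
import numpy as np
from scipy.linalg import expm

def spd_sqrt(M):
    """Matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def geodesic_step(M, grad, lr=0.1):
    """One steepest-descent step along the SPD-manifold geodesic:
    M <- M^0.5 expm(-lr * M^-0.5 grad M^-0.5) M^0.5,
    which is SPD for any step size."""
    S = spd_sqrt(M)
    S_inv = np.linalg.inv(S)
    return S @ expm(-lr * (S_inv @ grad @ S_inv)) @ S
```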
Duncan, James R; Kline, Benjamin; Glaiberman, Craig B
2007-04-01
To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.
Accelerated Training for Large Feedforward Neural Networks
NASA Technical Reports Server (NTRS)
Stepniewski, Slawomir W.; Jorgensen, Charles C.
1998-01-01
In this paper we introduce a new training algorithm, the scaled variable metric (SVM) method. Our approach attempts to increase the convergence rate of the modified variable metric method. It is also combined with the RBackprop algorithm, which computes the product of the matrix of second derivatives (the Hessian) with an arbitrary vector. The RBackprop method allows us to avoid computationally expensive direct line searches. In addition, it can be utilized in the new, 'predictive' updating technique of the inverse Hessian approximation. We have used directional slope testing to adjust the step size and found that this strategy works exceptionally well in conjunction with the RBackprop algorithm. Some supplementary but nevertheless important enhancements to the basic training scheme, such as an improved setting of the scaling factor for the variable metric update and a computationally more efficient procedure for updating the inverse Hessian approximation, are presented as well. We conclude by comparing the SVM method with four first- and second-order optimization algorithms, including a very effective implementation of the Levenberg-Marquardt method. Our tests indicate promising computational speed gains from the new training technique, particularly for large feedforward networks, i.e., for problems where the training process may be the most laborious.
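The "variable metric" family this method builds on maintains an inverse-Hessian approximation updated from parameter steps and gradient differences. A minimal sketch of the classical BFGS inverse update (the paper's scaled, RBackprop-assisted variant differs; this shows only the underlying update rule):

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """Update the inverse-Hessian approximation H from the parameter
    step s and the gradient difference y (requires y @ s > 0)."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```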
Evaluating the Performance of the IEEE Standard 1366 Method for Identifying Major Event Days
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eto, Joseph H.; LaCommare, Kristina Hamachi; Sohn, Michael D.
IEEE Standard 1366 offers a method for segmenting reliability performance data to isolate the effects of major events from the underlying year-to-year trends in reliability. Recent analysis by the IEEE Distribution Reliability Working Group (DRWG) has found that reliability performance of some utilities differs from the expectations that helped guide the development of the Standard 1366 method. This paper proposes quantitative metrics to evaluate the performance of the Standard 1366 method in identifying major events and in reducing year-to-year variability in utility reliability. The metrics are applied to a large sample of utility-reported reliability data to assess performance of the method with alternative specifications that have been considered by the DRWG. We find that none of the alternatives perform uniformly 'better' than the current Standard 1366 method. That is, none of the modifications uniformly lowers the year-to-year variability in System Average Interruption Duration Index without major events. Instead, for any given alternative, while it may lower the value of this metric for some utilities, it also increases it for other utilities (sometimes dramatically). Thus, we illustrate some of the trade-offs that must be considered in using the Standard 1366 method and highlight the usefulness of the metrics we have proposed in conducting these evaluations.
Lauricella, Leticia L; Costa, Priscila B; Salati, Michele; Pego-Fernandes, Paulo M; Terra, Ricardo M
2018-06-01
Database quality measurement should be considered a mandatory step to ensure an adequate level of confidence in data used for research and quality improvement. Several metrics have been described in the literature, but no standardized approach has been established. We aimed to describe a methodological approach applied to measure the quality and inter-rater reliability of a regional multicentric thoracic surgical database (Paulista Lung Cancer Registry). Data from the first 3 years of the Paulista Lung Cancer Registry underwent an audit process with 3 metrics: completeness, consistency, and inter-rater reliability. The first 2 methods were applied to the whole data set, and the last method was calculated using 100 cases randomized for direct auditing. Inter-rater reliability was evaluated using percentage of agreement between the data collector and auditor and through calculation of Cohen's κ and intraclass correlation. The overall completeness per section ranged from 0.88 to 1.00, and the overall consistency was 0.96. Inter-rater reliability showed many variables with high disagreement (>10%). For numerical variables, intraclass correlation was a better metric than inter-rater reliability. Cohen's κ showed that most variables had moderate to substantial agreement. The methodological approach applied to the Paulista Lung Cancer Registry showed that completeness and consistency metrics did not sufficiently reflect the real quality status of a database. The inter-rater reliability associated with κ and intraclass correlation was a better quality metric than completeness and consistency metrics because it could determine the reliability of specific variables used in research or benchmark reports. This report can be a paradigm for future studies of data quality measurement.
Parameterizing the Variability and Uncertainty of Wind and Solar in CEMs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany
We present current and improved methods for estimating the capacity value and curtailment impacts from variable generation (VG) in capacity expansion models (CEMs). The ideal calculation of these variability metrics is through an explicit co-optimized investment-dispatch model using multiple years of VG and load data. Because of data and computational limitations, existing CEMs typically approximate these metrics using a subset of all hours from a single year and/or using statistical methods, which often do not capture the tail-event impacts or the broader set of interactions between VG, storage, and conventional generators. In our proposed new methods, we use hourly generation and load values across all hours of the year to characterize (1) the contribution of VG to system capacity during high-load hours, (2) the curtailment level of VG, and (3) the reduction in VG curtailment due to storage and shutdown of select thermal generators. Using CEM outputs from a preceding model solve period, we apply these methods to exogenously calculate capacity value and curtailment metrics for the subsequent model solve period. Preliminary results suggest that these hourly methods offer improved capacity value and curtailment representations of VG in the CEM over existing approximation methods, without additional computational burdens.
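A minimal sketch of the hourly bookkeeping this abstract describes, assuming year-long hourly arrays of load and VG generation (names and the must-run floor are illustrative assumptions, not from the report):

```python
import numpy as np

def capacity_value(load, vg, top_hours=100):
    """Mean VG output during the highest-load hours, as a fraction of
    installed VG capacity (a simple capacity-credit proxy; here peak
    hourly output stands in for installed capacity)."""
    idx = np.argsort(load)[-top_hours:]
    return vg[idx].mean() / vg.max()

def curtailment(load, vg, must_run):
    """Energy curtailed when VG exceeds load net of must-run generation."""
    surplus = vg - np.maximum(load - must_run, 0.0)
    return np.clip(surplus, 0.0, None).sum()
```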
Impact of region contouring variability on image-based focal therapy evaluation
NASA Astrophysics Data System (ADS)
Gibson, Eli; Donaldson, Ian A.; Shah, Taimur T.; Hu, Yipeng; Ahmed, Hashim U.; Barratt, Dean C.
2016-03-01
Motivation: Focal therapy is an emerging low-morbidity treatment option for low-intermediate risk prostate cancer; however, challenges remain in accurately delivering treatment to specified targets and determining treatment success. Registered multi-parametric magnetic resonance imaging (MPMRI) acquired before and after treatment can support focal therapy evaluation and optimization; however, contouring variability, when defining the prostate, the clinical target volume (CTV) and the ablation region in images, reduces the precision of quantitative image-based focal therapy evaluation metrics. To inform the interpretation and clarify the limitations of such metrics, we investigated inter-observer contouring variability and its impact on four metrics. Methods: Pre-therapy and 2-week-post-therapy standard-of-care MPMRI were acquired from 5 focal cryotherapy patients. Two clinicians independently contoured, on each slice, the prostate (pre- and post-treatment) and the dominant index lesion CTV (pre-treatment) in the T2-weighted MRI, and the ablated region (post-treatment) in the dynamic-contrast-enhanced MRI. For each combination of clinician contours, post-treatment images were registered to pre-treatment images using a 3D biomechanical-model-based registration of prostate surfaces, and four metrics were computed: the proportion of the target tissue region that was ablated and the target:ablated region volume ratio for each of two targets (the CTV and an expanded planning target volume). Variance components analysis was used to measure the contribution of each type of contour to the variance in the therapy evaluation metrics. Conclusions: 14-23% of evaluation metric variance was attributable to contouring variability (including 6-12% from ablation region contouring); reducing this variability could improve the precision of focal therapy evaluation metrics.
NASA Technical Reports Server (NTRS)
McFarland, Shane M.; Norcross, Jason
2016-01-01
Existing methods for evaluating EVA suit performance and mobility have historically concentrated on isolated joint range of motion and torque. However, these techniques do little to evaluate how well a suited crewmember can actually perform during an EVA. An alternative method of characterizing suited mobility through measurement of metabolic cost to the wearer has been evaluated at Johnson Space Center over the past several years. The most recent study involved six test subjects completing multiple trials of various functional tasks in each of three different space suits; the results indicated it was often possible to discern between different suit designs on the basis of metabolic cost alone. However, other variables may have an effect on real-world suited performance; namely, completion time of the task, the gravity field in which the task is completed, etc. While previous results have analyzed completion time, metabolic cost, and metabolic cost normalized to system mass individually, it is desirable to develop a single metric comprising these (and potentially other) performance metrics. This paper outlines the background upon which this single-score metric is determined to be feasible, and initial efforts to develop such a metric. Forward work includes variable coefficient determination and verification of the metric through repeated testing.
NASA Astrophysics Data System (ADS)
Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish
2018-06-01
Every model used to characterise a real-world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. Using a recently developed attribution metric, this study develops a method for analysing variability in model inputs together with model structure variability, to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments are used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.
Neural decoding with kernel-based metric learning.
Brockmeier, Austin J; Choi, John S; Kriminger, Evan G; Francis, Joseph T; Principe, Jose C
2014-06-01
In studies of the nervous system, the choice of metric for the neural responses is a pivotal assumption. For instance, a well-suited distance metric enables us to gauge the similarity of neural responses to various stimuli and assess the variability of responses to a repeated stimulus, exploratory steps in understanding how the stimuli are encoded neurally. Here we introduce an approach where the metric is tuned for a particular neural decoding task. Neural spike train metrics have been used to quantify the information content carried by the timing of action potentials. While a number of metrics for individual neurons exist, a method to optimally combine single-neuron metrics into multineuron, or population-based, metrics is lacking. We pose the problem of optimizing multineuron metrics and other metrics using centered alignment, a kernel-based dependence measure. The approach is demonstrated on invasively recorded neural data consisting of both spike trains and local field potentials. The experimental paradigm consists of decoding the location of tactile stimulation on the forepaws of anesthetized rats. We show that the optimized metrics highlight the distinguishing dimensions of the neural response, significantly increase the decoding accuracy, and improve nonlinear dimensionality reduction methods for exploratory neural analysis.
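Centered alignment, the kernel-based dependence measure used here to optimize metrics, has a compact closed form. A minimal sketch for two kernel (Gram) matrices:

```python
import numpy as np

def centered_alignment(K1, K2):
    """Centered kernel alignment (Cortes et al.): cosine similarity
    between doubly centered Gram matrices."""
    n = K1.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    K1c, K2c = H @ K1 @ H, H @ K2 @ H
    return np.sum(K1c * K2c) / (np.linalg.norm(K1c) * np.linalg.norm(K2c))
```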
Gebler, J.B.
2004-01-01
The related topics of spatial variability of aquatic invertebrate community metrics, implications of spatial patterns of metric values to distributions of aquatic invertebrate communities, and ramifications of natural variability to the detection of human perturbations were investigated. Four metrics commonly used for stream assessment were computed for 9 stream reaches within a fairly homogeneous, minimally impaired stream segment of the San Pedro River, Arizona. Metric variability was assessed for differing sampling scenarios using simple permutation procedures. Spatial patterns of metric values suggest that aquatic invertebrate communities are patchily distributed on subsegment and segment scales, which causes metric variability. Wide ranges of metric values resulted in wide ranges of metric coefficients of variation (CVs) and minimum detectable differences (MDDs), and both CVs and MDDs often increased as sample size (number of reaches) increased, suggesting that any particular set of sampling reaches could yield misleading estimates of population parameters and effects that can be detected. Mean metric variabilities were substantial, with the result that only fairly large differences in metrics would be declared significant at α = 0.05 and β = 0.20. The number of reaches required to obtain MDDs of 10% and 20% varied with significance level and power, and differed for different metrics, but were generally large, ranging into tens and hundreds of reaches. Study results suggest that metric values from one or a small number of stream reach(es) may not be adequate to represent a stream segment, depending on effect sizes of interest, and that larger sample sizes are necessary to obtain reasonable estimates of metrics and sample statistics. For bioassessment to progress, spatial variability may need to be investigated in many systems and should be considered when designing studies and interpreting data.
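The sample-size behaviour reported above follows the standard two-sample approximation. A minimal sketch, assuming normality and equal variances, with the MDD expressed as a fraction of the mean (as in the study's 10% and 20% targets):

```python
from scipy.stats import norm

def reaches_needed(cv, mdd, alpha=0.05, beta=0.20):
    """Approximate reaches per group to detect a relative difference
    `mdd` when metric variability is `cv` (both as fractions)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
    return 2 * (z * cv / mdd) ** 2

print(reaches_needed(cv=0.4, mdd=0.2))  # ~63: large CVs quickly demand tens of reaches
```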
Retinal Vascular and Oxygen Temporal Dynamic Responses to Light Flicker in Humans
Felder, Anthony E.; Wanek, Justin; Blair, Norman P.
2017-01-01
Purpose: To mathematically model the temporal dynamic responses of retinal vessel diameter (D), oxygen saturation (SO2), and inner retinal oxygen extraction fraction (OEF) to light flicker and to describe their responses to its cessation in humans. Methods: In 16 healthy subjects (age: 60 ± 12 years), retinal oximetry was performed before, during, and after light flicker stimulation. At each time point, five metrics were measured: retinal arterial and venous D (DA, DV) and SO2 (SO2A, SO2V), and OEF. Intra- and intersubject variability of metrics was assessed by the coefficient of variation of measurements before flicker within and among subjects, respectively. Metrics during flicker were modeled by exponential functions to determine the flicker-induced steady-state metric values and the time constants of changes. Metrics after the cessation of flicker were compared to those before flicker. Results: Intra- and intersubject variability for all metrics were less than 6% and 16%, respectively. At the flicker-induced steady state, DA and DV increased by 5%, SO2V increased by 7%, and OEF decreased by 13%. The time constants of DA and DV (14, 15 seconds) were twofold smaller than those of SO2V and OEF (39, 34 seconds). Within 26 seconds after the cessation of flicker, all metrics were not significantly different from their values before flicker (P ≥ 0.07). Conclusions: Mathematical modeling revealed considerable differences in the time courses of changes among metrics during flicker, indicating that flicker duration should be considered separately for each metric. Future application of this method may be useful to elucidate alterations in temporal dynamic responses to light flicker due to retinal diseases. PMID:29098297
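The exponential modeling step can be sketched as a simple curve fit of each metric's time course during flicker, yielding the steady-state value and time constant. A generic sketch, not the authors' code; the synthetic data below only illustrates usage:

```python
import numpy as np
from scipy.optimize import curve_fit

def flicker_response(t, y_ss, y0, tau):
    """Exponential approach to a flicker-induced steady state."""
    return y_ss + (y0 - y_ss) * np.exp(-t / tau)

# t: seconds since flicker onset; y: a metric such as DV or OEF.
t = np.linspace(0, 120, 60)
y = flicker_response(t, 1.07, 1.0, 39) + np.random.default_rng(1).normal(0, 0.005, 60)
(y_ss, y0, tau), _ = curve_fit(flicker_response, t, y, p0=(1.0, 1.0, 10.0))
print(y_ss, tau)  # recovered steady-state value and time constant
```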
Decomposition-based transfer distance metric learning for image classification.
Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao
2014-09-01
Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different, source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. To this end, far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information, and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method.
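The core idea, representing the target metric as a sparse combination of base metrics derived from source-metric eigenvectors, can be sketched with a simple L1-regularized least-squares fit (ISTA). This is a hypothetical, much-simplified sketch of that idea, not the paper's objective:

```python
import numpy as np

def build_bases(source_metrics):
    """Rank-one base metrics u u^T from eigenvectors of source metrics."""
    bases = []
    for M in source_metrics:
        _, V = np.linalg.eigh(M)
        bases += [np.outer(v, v) for v in V.T]
    return bases

def fit_weights(bases, pair_diffs, targets, lam=0.1, lr=0.01, iters=500):
    """ISTA for sparse weights w so that sum_i w_i d^T B_i d ~ targets
    (lr assumed small enough for convergence)."""
    A = np.array([[d @ B @ d for B in bases] for d in pair_diffs])
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        w -= lr * A.T @ (A @ w - targets)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft threshold
    return w
```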
NASA Astrophysics Data System (ADS)
Jaber, Salahuddin M.
Soil organic carbon (SOC) sequestration is a component of larger strategies to control the accumulation of greenhouse gases that may be causing global warming. To implement this approach, it is necessary to improve the methods of measuring SOC content. Among these methods are indirect remote sensing and geographic information systems (GIS) techniques that are required to provide non-intrusive, low-cost, and spatially continuous information covering large areas on a repetitive basis. The main goal of this study is to evaluate the effects of using Hyperion hyperspectral data on improving the existing remote sensing and GIS-based methodologies for rapidly, efficiently, and accurately measuring SOC content on farmland. The study area is Big Creek Watershed (BCW) in Southern Illinois. The methodology consists of compiling a GIS database (consisting of remote sensing and soil variables) for 303 composite soil samples collected from representative pixels along the Hyperion coverage area of the watershed. Stepwise procedures were used to calibrate and validate linear multiple regression models in which SOC was regarded as the response and the other remote sensing and soil variables as the predictors. Two models were selected: the first was the best all-variables model and the second was the best raster-variables-only model. Map algebra was implemented to extrapolate the best raster-variables-only model and produce a SOC map for the BCW. This study concluded that Hyperion data marginally improved the predictability of the existing SOC statistical models based on multispectral satellite remote sensing sensors, with a correlation coefficient of 0.37 and a root mean square error of 3.19 metric tons/hectare to a 15-cm depth. The total SOC pool of the study area is about 225,232 metric tons to a 15-cm depth. The nonforested wetlands contained the highest SOC density (34.3 metric tons/hectare to a 15-cm depth) with a total SOC content of about 2,003.5 metric tons, whereas croplands had the lowest SOC density (21.6 metric tons/hectare to a 15-cm depth) with a total SOC content of about 44,571.2 metric tons.
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M
2017-02-01
Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulties uncovering such variabilities when dealing with multi-modal, multi-type, multi-variate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions, and defined in the context of data quality assessment. Specifically, a global probabilistic deviation and a source probabilistic outlyingness metric are proposed. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the sources' PDFs. The metrics have been evaluated and demonstrated their correct behaviour on a simulated benchmark and with real multi-source biomedical data using the UCI Heart Disease data set. The biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
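The building block of both metrics is the Jensen-Shannon distance between source PDFs. A minimal sketch computing the pairwise distance matrix and a crude global-variability summary (the paper's metrics additionally embed the sources in a simplex; that step is omitted here):

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance (base-2 logs, bounded in [0, 1])."""
    p, q = p / p.sum(), q / q.sum()
    m = (p + q) / 2
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def pairwise_js(sources):
    """Distance matrix among source PDFs (histograms on a shared grid)."""
    n = len(sources)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = js_distance(sources[i], sources[j])
    return D  # e.g., D.mean() as a rough global variability summary
```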
SU-G-IeP4-13: PET Image Noise Variability and Its Consequences for Quantifying Tumor Hypoxia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kueng, R; Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario; Manser, P
Purpose: The values in a PET image, which represent activity concentrations of a radioactive tracer, are influenced by a large number of parameters including patient conditions as well as image acquisition and reconstruction. This work investigates noise characteristics in PET images for various image acquisition and image reconstruction parameters. Methods: Different phantoms with homogeneous activity distributions were scanned using several acquisition parameters and reconstructed with numerous sets of reconstruction parameters. Images from six PET scanners from different vendors were analyzed and compared with respect to quantitative noise characteristics. Local noise metrics, which give rise to a threshold value defining the metric of hypoxic fraction, as well as global noise measures in terms of noise power spectra (NPS) were computed. In addition to variability due to different reconstruction parameters, spatial variability of activity distribution and its noise metrics were investigated. Patient data from clinical trials were mapped onto phantom scans to explore the impact of the scanner's intrinsic noise variability on quantitative clinical analysis. Results: Local noise metrics showed substantial variability up to an order of magnitude for different reconstruction parameters. Investigations of corresponding NPS revealed reconstruction-dependent structural noise characteristics. For the acquisition parameters, noise metrics were guided by Poisson statistics. Large spatial non-uniformity of the noise was observed in both axial and radial direction of a PET image. In addition, activity concentrations in PET images of homogeneous phantom scans showed intriguing spatial fluctuations for most scanners. The clinical metric of the hypoxic fraction was shown to be considerably influenced by the PET scanner's spatial noise characteristics. Conclusion: We showed that a hypoxic fraction metric based on noise characteristics requires careful consideration of the various dependencies in order to justify its quantitative validity. This work may result in recommendations for harmonizing QA of PET imaging for multi-institutional clinical trials.
Tilsen, Sam; Arvaniti, Amalia
2013-07-01
This study presents a method for analyzing speech rhythm using empirical mode decomposition of the speech amplitude envelope, which allows for extraction and quantification of syllabic- and supra-syllabic time-scale components of the envelope. The method of empirical mode decomposition of a vocalic energy amplitude envelope is illustrated in detail, and several types of rhythm metrics derived from this method are presented. Spontaneous speech extracted from the Buckeye Corpus is used to assess the effect of utterance length on metrics, and it is shown how metrics representing variability in the supra-syllabic time-scale components of the envelope can be used to identify stretches of speech with targeted rhythmic characteristics. Furthermore, the envelope-based metrics are used to characterize cross-linguistic differences in speech rhythm in the UC San Diego Speech Lab corpus of English, German, Greek, Italian, Korean, and Spanish speech elicited in read sentences, read passages, and spontaneous speech. The envelope-based metrics exhibit significant effects of language and elicitation method that argue for a nuanced view of cross-linguistic rhythm patterns.
Multiscale entropy-based methods for heart rate variability complexity analysis
NASA Astrophysics Data System (ADS)
Silva, Luiz Eduardo Virgilio; Cabella, Brenno Caetano Troca; Neves, Ubiraci Pereira da Costa; Murta Junior, Luiz Otavio
2015-03-01
Physiologic complexity is an important concept to characterize time series from biological systems, which, associated with multiscale analysis, can contribute to the comprehension of many complex phenomena. Although multiscale entropy has been applied to physiological time series, it measures irregularity as a function of scale. In this study we propose and evaluate a set of three complexity metrics as functions of time scale. The complexity metrics are derived from nonadditive entropy supported by the generation of surrogate data, i.e. SDiffqmax, qmax and qzero. In order to assess the accuracy of the proposed complexity metrics, receiver operating characteristic (ROC) curves were built and the area under the curves was computed for three physiological situations. Heart rate variability (HRV) time series in normal sinus rhythm, atrial fibrillation, and congestive heart failure data sets were analyzed. Results show that the proposed complexity metric is accurate and robust when compared to classic entropic irregularity metrics. Furthermore, SDiffqmax is the most accurate for lower scales, whereas qmax and qzero are the most accurate when higher time scales are considered. The multiscale complexity analysis described here showed potential to assess complex physiological time series and deserves further investigation in a wide context.
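The multiscale machinery underlying such metrics coarse-grains the series and computes an entropy at each scale. A minimal sketch using classical sample entropy as the per-scale measure (the study's nonadditive-entropy metrics SDiffqmax, qmax and qzero replace this last step):

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping window means at the given time scale."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r): -log of the conditional template-match probability
    (assumes the series is long enough to contain matches)."""
    r = r_frac * np.std(x)
    def count(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)  # Chebyshev
        return (np.sum(d <= r) - len(emb)) / 2  # exclude self-matches
    return -np.log(count(m + 1) / count(m))

def multiscale(x, scales=range(1, 11)):
    return [sample_entropy(coarse_grain(x, s)) for s in scales]
```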
Alternative to the Palatini method: A new variational principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goenner, Hubert
2010-06-15
A variational principle is suggested within Riemannian geometry, in which an auxiliary metric and the Levi-Civita connection are varied independently. The auxiliary metric plays the role of a Lagrange multiplier and introduces nonminimal coupling of matter to the curvature scalar. The field equations are second-order PDEs and easier to handle than those following from the so-called Palatini method. Moreover, in contrast to the latter method, no gradients of the matter variables appear. In cosmological modeling, the physics resulting from the alternative variational principle will differ from the modeling using the standard Palatini method.
Temporal Variability of Daily Personal Magnetic Field Exposure Metrics in Pregnant Women
Lewis, Ryan C.; Evenson, Kelly R.; Savitz, David A.; Meeker, John D.
2015-01-01
Recent epidemiology studies of power-frequency magnetic fields and reproductive health have characterized exposures using data collected from personal exposure monitors over a single day, possibly resulting in exposure misclassification due to temporal variability in daily personal magnetic field exposure metrics; relevant data in adults, however, are limited. We assessed the temporal variability of daily central tendency (time-weighted average, median) and peak (upper percentiles, maximum) personal magnetic field exposure metrics over seven consecutive days in 100 pregnant women. When exposure was modeled as a continuous variable, central tendency metrics had substantial reliability, whereas peak metrics had fair (maximum) to moderate (upper percentiles) reliability. The predictive ability of a single-day metric to accurately classify participants into exposure categories based on a week-long metric depended on the selected exposure threshold, with sensitivity decreasing as the exposure threshold increased. Consistent with the continuous measures analysis, sensitivity was higher for central tendency metrics than for peak metrics. If there is interest in peak metrics, more than one day of measurement is needed over the window of disease susceptibility to minimize measurement error, but one day may be sufficient for central tendency metrics. PMID:24691007
NASA Astrophysics Data System (ADS)
Bookstein, Fred L.
1995-08-01
Recent advances in computational geometry have greatly extended the range of neuroanatomical questions that can be approached by rigorous quantitative methods. One of the major current challenges in this area is to describe the variability of human cortical surface form and its implications for individual differences in neurophysiological functioning. Existing techniques for representation of stochastically invaginated surfaces do not conduce to the necessary parametric statistical summaries. In this paper, following a hint from David Van Essen and Heather Drury, I sketch a statistical method customized for the constraints of this complex data type. Cortical surface form is represented by its Riemannian metric tensor and averaged according to parameters of a smooth averaged surface. Sulci are represented by integral trajectories of the smaller principal strains of this metric, and their statistics follow the statistics of that relative metric. The diagrams visualizing this tensor analysis look like alligator leather but summarize all aspects of cortical surface form in between the principal sulci, the reliable ones; no flattening is required.
Burton, Carmen A.
2008-01-01
Biotic communities and environmental conditions can be highly variable between natural ecosystems. The variability of natural assemblages should be considered in the interpretation of any ecological study when samples are either spatially or temporally distributed. Little is known about biotic variability in the Santa Ana River Basin. In this report, the lotic community and habitat assessment data from ecological studies done as part of the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) program are used for a preliminary assessment of variability in the Santa Ana Basin. Habitat was assessed, and benthic algae, benthic macroinvertebrate, and fish samples were collected at four sites during 1999-2001. Three of these sites were sampled all three years; one is located in the San Bernardino Mountains, and the other two are located in the alluvial basin. Analysis of variance determined that the three sites with multiyear data were significantly different for 41 benthic algae metrics, 65 macroinvertebrate metrics, and fish communities. Coefficients of variation (CVs) were calculated for the habitat measurements and the benthic algae and macroinvertebrate metrics as measures of variability. Annual variability of habitat data was generally greater at the mountain site than at the basin sites. The mountain site had higher CVs for water temperature, depth, velocity, canopy angle, streambed substrate, and most water-quality variables. In general, CVs of most benthic algae metrics calculated from the richest-targeted habitat (RTH) samples were greater at the mountain site. In contrast, CVs of most benthic algae metrics calculated from depositional-targeted habitat (DTH) samples were lower at the mountain site. In general, CVs of macroinvertebrate metrics calculated from qualitative multihabitat (QMH) samples were lower at the mountain site. In contrast, CVs of many metrics calculated from RTH samples were greater at the mountain site than at one of the basin sites. Fish communities were more variable at the basin sites because more species were present at these sites. Annual variability of benthic algae metrics was related to annual variability in habitat variables. The benthic algae metrics whose CVs were related to the largest number of habitat-variable CVs included QMH taxon richness, RTH percentage richness, RTH abundance of tolerant taxa, RTH percentage richness of halophilic diatoms, RTH percentage abundance of sestonic diatoms, DTH percentage richness of nitrogen-heterotrophic diatoms, and the DTH pollution tolerance index. The macroinvertebrate metrics whose CVs were related to the largest number of habitat-variable CVs included RTH Trichoptera richness, RTH EPT richness, RTH scraper richness, RTH nonchironomid dipteran abundance (in percent), and RTH EPA (U.S. Environmental Protection Agency) tolerance, which is based on abundance. Many of the habitat-variable CVs related to CVs of macroinvertebrate metrics involved the same habitat variables that were related to the CVs of benthic algae metrics. On the basis of these results, annual variability may play a role in the relationship of benthic algae and macroinvertebrate assemblages with habitat and water quality in the Santa Ana Basin. This report provides valuable baseline data on the variability of biological communities in the Santa Ana Basin.
NASA Astrophysics Data System (ADS)
Li, Wang; Niu, Zheng; Gao, Shuai; Wang, Cheng
2014-11-01
Light Detection and Ranging (LiDAR) and Synthetic Aperture Radar (SAR) are two competitive active remote sensing techniques for forest above-ground biomass estimation, which is important for forest management and global climate change studies. This study aims to further explore their capabilities in temperate forest above-ground biomass (AGB) estimation by emphasizing the spatial auto-correlation of variables obtained from these two remote sensing tools, a usually overlooked aspect in remote sensing applications to vegetation studies. Remote sensing variables including airborne LiDAR metrics, backscattering coefficients for different SAR polarizations, and their ratio variables for Radarsat-2 imagery were calculated. First, simple linear regression (SLR) models were established between the field-estimated above-ground biomass and the remote sensing variables. The squared Pearson correlation coefficient (R²) was used to find which LiDAR metric showed the most significant correlation with the regression residuals and could be selected as the co-variable in regression co-kriging (RCoKrig). Second, regression co-kriging was conducted by choosing the regression residuals as the dependent variable and the LiDAR metric (Hmean) with the highest R² as the co-variable. Third, above-ground biomass over the study area was estimated using the SLR model and the RCoKrig model, respectively. The results for these two models were validated using the same ground points. Results showed that both methods achieved satisfactory prediction accuracy, with regression co-kriging showing the lower estimation error. This demonstrates that the regression co-kriging model is feasible and effective in mapping the spatial pattern of AGB in the temperate forest using Radarsat-2 data calibrated by airborne LiDAR metrics.
NASA Astrophysics Data System (ADS)
Szereszewski, A.; Sym, A.
2015-09-01
The standard method of separation of variables in PDEs, called the Stäckel-Robertson-Eisenhart (SRE) approach, originated in the papers by Robertson (1928 Math. Ann. 98 749-52) and Eisenhart (1934 Ann. Math. 35 284-305) on separability of variables in the Schrödinger equation defined on a pseudo-Riemannian space equipped with orthogonal coordinates, which in turn were based on the purely classical mechanics results by Paul Stäckel (1891, Habilitation Thesis, Halle). These still fundamental results have been further extended in diverse directions by e.g. Havas (1975 J. Math. Phys. 16 1461-8; J. Math. Phys. 16 2476-89) or Koornwinder (1980 Lecture Notes in Mathematics 810 (Berlin: Springer) pp 240-63). The involved separability is always ordinary (factor R = 1) and regular (maximum number of independent parameters in separation equations). A different approach to separation of variables was initiated by Gaston Darboux (1878 Ann. Sci. E.N.S. 7 275-348), which has been almost completely forgotten in today's research on the subject. Darboux's paper was devoted to the so-called R-separability of variables in the standard Laplace equation. At the outset he did not make any specific assumption about the separation equations (this is in sharp contrast to the SRE approach). After impressive calculations Darboux obtained a complete solution of the problem. He found not only eleven cases of ordinary separability (Eisenhart 1934 Ann. Math. 35 284-305) but also Darboux-Moutard-cyclidic metrics (Bôcher 1894 Ueber die Reihenentwickelungen der Potentialtheorie (Leipzig: Teubner)) and non-regularly separable Dupin-cyclidic metrics as well. In our previous paper Darboux's approach was extended to the case of the stationary Schrödinger equation on Riemannian spaces admitting orthogonal coordinates. In particular the class of isothermic metrics was defined (isothermicity of the metric is a necessary condition for its R-separability). An important sub-class of isothermic metrics are binary metrics. In this paper we solve the following problem: to classify all conformally flat (of arbitrary signature) 4-dimensional binary metrics. Among them there are 1) those that are separable in the SRE sense (Kalnins and Miller 1978 Trans. Am. Math. Soc. 244 241-61; 1982 J. Phys. A: Math. Gen. 15 2699-709; 1984 Adv. Math. 51 91-106; 1983 SIAM J. Math. Anal. 14 126-37) and 2) new examples of non-Stäckel R-separability in 4 dimensions.
Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.
2012-01-01
The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric used to conduct sensor placement so as to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to the expected variability obtained from model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.
Person re-identification over camera networks using multi-task distance metric learning.
Ma, Lianyang; Yang, Xiaokang; Tao, Dacheng
2014-08-01
Person reidentification in a camera network is a valuable yet challenging problem to solve. Existing methods learn a common Mahalanobis distance metric using the data collected from different cameras and then exploit the learned metric for identifying people in the images. However, the cameras in a camera network have different settings, and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric to conduct person reidentification tasks on different camera pairs overlooks the differences in camera settings. However, it is very time-consuming to label people manually in images from surveillance videos; for example, in most existing person reidentification data sets, only one image of a person is collected from each of only two cameras. Directly learning a unique Mahalanobis distance metric for each camera pair is therefore susceptible to over-fitting on insufficiently labeled data. In this paper, we reformulate person reidentification in a camera network as a multitask distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. These Mahalanobis distance metrics are different but related, and are learned with a joint regularization that alleviates over-fitting. Furthermore, by extending maximally collapsing metric learning, we present a novel multitask maximally collapsing metric learning (MtMCML) model for person reidentification in a camera network. Experimental results demonstrate that formulating person reidentification over camera networks as a multitask distance metric learning problem can improve performance, and our proposed MtMCML works substantially better than other current state-of-the-art person reidentification methods.
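As a concrete illustration of the quantity being learned, the sketch below evaluates a squared Mahalanobis distance under a given metric matrix and keeps one matrix per camera pair, as the multitask setting suggests; the dimensions, data, and camera-pair keys are hypothetical.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y),
    where M is a learned positive semidefinite matrix."""
    d = x - y
    return float(d @ M @ d)

# One metric per camera pair, as in the multitask setting (toy values).
rng = np.random.default_rng(0)
metrics = {("cam1", "cam2"): np.eye(5), ("cam1", "cam3"): 2.0 * np.eye(5)}
x, y = rng.normal(size=5), rng.normal(size=5)
for pair, M in metrics.items():
    print(pair, round(mahalanobis_sq(x, y, M), 3))
```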
Rawlings, Renata A; Shi, Hang; Yuan, Lo-Hua; Brehm, William; Pop-Busui, Rodica; Nelson, Patrick W
2011-12-01
Several metrics of glucose variability have been proposed to date, but an integrated approach that provides a complete and consistent assessment of glycemic variation is missing. As a consequence, and because of the tedious coding necessary during quantification, most investigators and clinicians have not yet adopted the use of multiple glucose variability metrics to evaluate glycemic variation. We compiled the most extensively used statistical techniques and glucose variability metrics, with adjustable hyper- and hypoglycemic limits and metric parameters, to create a user-friendly Continuous Glucose Monitoring Graphical User Interface for Diabetes Evaluation (CGM-GUIDE©). In addition, we introduce and demonstrate a novel transition density profile that emphasizes the dynamics of transitions between defined glucose states. Our combined dashboard of numerical statistics and graphical plots supports the task of providing an integrated approach to describing glycemic variability. We integrated existing metrics, such as SD, area under the curve, and mean amplitude of glycemic excursion, with novel metrics such as the slopes across critical transitions and the transition density profile to assess the severity and frequency of glucose transitions per day as they move between critical glycemic zones. By presenting the above-mentioned metrics and graphics in a concise aggregate format, CGM-GUIDE provides an easy-to-use tool for comparing quantitative measures of glucose variability. This tool can be used by researchers and clinicians to develop new algorithms for insulin delivery in patients with diabetes and to better explore the link between glucose variability and chronic diabetes complications.
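The transition density profile is described only qualitatively above. A minimal sketch of the underlying idea, counting transitions between glycemic states in a CGM trace, might look as follows; the 70 and 180 mg/dL cut-offs are common clinical limits used here for illustration, not necessarily CGM-GUIDE's defaults.

```python
import numpy as np

def transition_matrix(glucose, hypo=70.0, hyper=180.0):
    """Count transitions between glycemic states (0=hypo, 1=normal, 2=hyper)
    in a CGM trace; rows are normalized to transition probabilities.
    Thresholds in mg/dL are illustrative assumptions."""
    states = np.digitize(glucose, [hypo, hyper])  # maps values to 0, 1, 2
    counts = np.zeros((3, 3))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

trace = np.array([65, 80, 120, 190, 210, 160, 90, 60])
print(transition_matrix(trace))
```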
Meel-van den Abeelen, Aisha S.S.; Simpson, David M.; Wang, Lotte J.Y.; Slump, Cornelis H.; Zhang, Rong; Tarumi, Takashi; Rickards, Caroline A.; Payne, Stephen; Mitsis, Georgios D.; Kostoglou, Kyriaki; Marmarelis, Vasilis; Shin, Dae; Tzeng, Yu-Chieh; Ainslie, Philip N.; Gommer, Erik; Müller, Martin; Dorado, Alexander C.; Smielewski, Peter; Yelicich, Bernardo; Puppo, Corina; Liu, Xiuyun; Czosnyka, Marek; Wang, Cheng-Yen; Novak, Vera; Panerai, Ronney B.; Claassen, Jurgen A.H.R.
2014-01-01
Transfer function analysis (TFA) is a frequently used method to assess dynamic cerebral autoregulation (CA) using spontaneous oscillations in blood pressure (BP) and cerebral blood flow velocity (CBFV). However, controversies and variations exist in how research groups utilise TFA, causing high variability in interpretation. The objective of this study was to evaluate between-centre variability in TFA outcome metrics. Fifteen centres analysed the same 70 BP and CBFV datasets from healthy subjects (n = 50 at rest; n = 20 during hypercapnia); 10 additional datasets were computer-generated. Each centre used its in-house TFA methods; however, certain parameters were specified to reduce a priori between-centre variability. Hypercapnia was used to assess discriminatory performance, and synthetic data to evaluate effects of parameter settings. Results were analysed using the Mann–Whitney test and logistic regression. A large non-homogeneous variation was found in TFA outcome metrics between the centres. Logistic regression demonstrated that 11 centres were able to distinguish between normal and impaired CA with an AUC > 0.85. Further analysis identified TFA settings that are associated with large variation in outcome measures. These results indicate the need for standardisation of TFA settings in order to reduce between-centre variability and to allow accurate comparison between studies. Suggestions on optimal signal processing methods are proposed. PMID:24725709
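TFA itself is only named above, not specified. A bare-bones Welch-based estimate of the BP-to-CBFV transfer function is sketched below; the sampling rate, window length, and overlap are precisely the kinds of settings the study found to drive between-centre variability, so treat them as illustrative assumptions.

```python
import numpy as np
from scipy.signal import csd, welch

def tfa_gain_phase(bp, cbfv, fs, nperseg=1024):
    """Transfer function H(f) = S_xy(f) / S_xx(f) with BP as input and CBFV
    as output; returns frequency, gain, and phase (degrees). The window
    length is an illustrative choice, not a standardized setting."""
    f, s_xx = welch(bp, fs=fs, nperseg=nperseg)
    _, s_xy = csd(bp, cbfv, fs=fs, nperseg=nperseg)
    h = s_xy / s_xx
    return f, np.abs(h), np.angle(h, deg=True)

# Toy usage on synthetic signals sampled at 10 Hz:
rng = np.random.default_rng(1)
bp = rng.standard_normal(4096)
cbfv = 0.8 * bp + 0.2 * rng.standard_normal(4096)
f, gain, phase = tfa_gain_phase(bp, cbfv, fs=10.0)
print(gain[:3], phase[:3])
```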
Information-Theoretic Metrics for Visualizing Gene-Environment Interactions
Chanda, Pritam ; Zhang, Aidong ; Brazeau, Daniel ; Sucheston, Lara ; Freudenheim, Jo L. ; Ambrosone, Christine ; Ramanathan, Murali
2007-01-01
The purpose of our work was to develop heuristics for visualizing and interpreting gene-environment interactions (GEIs) and to assess the dependence of candidate visualization metrics on biological and study-design factors. Two information-theoretic metrics, the k-way interaction information (KWII) and the total correlation information (TCI), were investigated. The effectiveness of the KWII and TCI in detecting GEIs was assessed in a diverse range of simulated data sets and in a Crohn disease data set. The sensitivity of the KWII and TCI spectra to biological and study-design variables was determined. Head-to-head comparisons with the relevance-chain, multifactor dimensionality reduction, and pedigree disequilibrium test (PDT) methods were obtained. The KWII and TCI spectra, which are graphical summaries of the KWII and TCI for each subset of environmental and genotype variables, were found to detect each known GEI in the simulated data sets. The patterns in the KWII and TCI spectra were informative for factors such as case-control misassignment, locus heterogeneity, allele frequencies, and linkage disequilibrium. The KWII and TCI spectra were found to have excellent sensitivity for identifying the key disease-associated genetic variations in the Crohn disease data set. In head-to-head comparisons with the relevance-chain, multifactor dimensionality reduction, and PDT methods, visual interpretation of the KWII and TCI spectra performed satisfactorily. The KWII and TCI are promising metrics for visualizing GEIs. They are capable of detecting interactions among numerous single-nucleotide polymorphisms and environmental variables for a diverse range of GEI models. PMID:17924337
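Of the two metrics, the total correlation information has the simpler closed form, TCI = H(X1) + ... + H(Xk) - H(X1, ..., Xk). A sketch for discrete variables follows, with hypothetical genotype and exposure columns; the KWII, an alternating sum over variable subsets, is omitted for brevity.

```python
import numpy as np
from collections import Counter

def entropy(columns):
    """Shannon entropy (bits) of the joint distribution of the given columns."""
    joint = Counter(zip(*columns))
    p = np.array(list(joint.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def total_correlation(columns):
    """TCI = sum_i H(X_i) - H(X_1, ..., X_k) for discrete variables,
    e.g. genotype codes and a binary environmental exposure."""
    return sum(entropy([c]) for c in columns) - entropy(columns)

# Hypothetical SNP genotype (0/1/2) and binary exposure columns:
snp = [0, 1, 2, 0, 1, 2, 0, 1]
env = [0, 0, 1, 1, 0, 0, 1, 1]
print(total_correlation([snp, env]))
```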
Iterative methods for mixed finite element equations
NASA Technical Reports Server (NTRS)
Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.
1985-01-01
Iterative strategies for the solution of indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, and is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant metric iterations, which do not update the preconditioner, and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate numerical performance on linear and nonlinear model problems.
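A modern analogue of the constant metric strategy, a fixed preconditioner inside a Krylov iteration for an indefinite saddle-point system, can be sketched as follows; the system here is a generic stand-in rather than the Hu-Washizu form, and the variable metric variant would additionally update the preconditioner's inverse between iterations.

```python
import numpy as np
from scipy.sparse import bmat, eye, random as sprandom
from scipy.sparse.linalg import LinearOperator, minres, spsolve

# A small saddle-point (indefinite) system [[A, B^T], [B, 0]], a generic
# stand-in for mixed finite element equations.
rng = np.random.default_rng(1)
n, m = 40, 10
A = (sprandom(n, n, density=0.2, random_state=1) + 10 * eye(n))
A = ((A + A.T) / 2).tocsc()            # symmetrize; diagonally dominant, SPD
B = sprandom(m, n, density=0.3, random_state=2).tocsc()
K = bmat([[A, B.T], [B, None]]).tocsc()
b = rng.normal(size=n + m)

# "Constant metric" flavour: a fixed block-diagonal preconditioner that is
# never updated during the iteration.
def apply_prec(r):
    return np.concatenate([spsolve(A, r[:n]), r[n:]])

M = LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = minres(K, b, M=M)
print("converged" if info == 0 else f"minres info={info}")
```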
Defining quality metrics and improving safety and outcome in allergy care.
Lee, Stella; Stachler, Robert J; Ferguson, Berrylin J
2014-04-01
The delivery of allergy immunotherapy in the otolaryngology office is variable and lacks standardization. Quality metrics encompass the measurement of factors associated with good patient-centered care. These factors have yet to be defined for the delivery of allergy immunotherapy. We developed and applied quality metrics to 6 allergy practices affiliated with an academic otolaryngic allergy center. This work was conducted at a tertiary academic center providing care to over 1500 patients. We evaluated methods and variability across the 6 sites. Tracking of errors and anaphylaxis was initiated across all sites. A nationwide survey of academic and private allergists was used to collect data on current practice and use of quality metrics. The most common types of errors recorded were patient identification errors (n = 4), followed by vial mixing errors (n = 3) and dosing errors (n = 2). There were 7 episodes of anaphylaxis, of which 2 were secondary to dosing errors, for a rate of 0.01%, or 1 per 10,000 injection visits per year. Site visits showed that 86% of key safety measures were followed. Analysis of nationwide survey responses revealed that quality metrics are still not well defined by either medical or otolaryngic allergy practices. Academic practices were statistically more likely than private practices to use quality metrics (p = 0.021) and to perform systems reviews and audits (p = 0.005). Quality metrics in allergy delivery can help improve safety and quality of care. These metrics need to be further defined by otolaryngic allergists in the changing health care environment. © 2014 ARS-AAOA, LLC.
Climate and soil attributes determine plant species turnover in global drylands
Maestre, Fernando T.; Gotelli, Nicholas J.; Quero, José L.; Delgado-Baquerizo, Manuel; Bowker, Matthew A.; Eldridge, David J.; Ochoa, Victoria; Gozalo, Beatriz; Valencia, Enrique; Berdugo, Miguel; Escolar, Cristina; García-Gómez, Miguel; Escudero, Adrián; Prina, Aníbal; Alfonso, Graciela; Arredondo, Tulio; Bran, Donaldo; Cabrera, Omar; Cea, Alex; Chaieb, Mohamed; Contreras, Jorge; Derak, Mchich; Espinosa, Carlos I.; Florentino, Adriana; Gaitán, Juan; Muro, Victoria García; Ghiloufi, Wahida; Gómez-González, Susana; Gutiérrez, Julio R.; Hernández, Rosa M.; Huber-Sannwald, Elisabeth; Jankju, Mohammad; Mau, Rebecca L.; Hughes, Frederic Mendes; Miriti, Maria; Monerris, Jorge; Muchane, Muchai; Naseri, Kamal; Pucheta, Eduardo; Ramírez-Collantes, David A.; Raveh, Eran; Romão, Roberto L.; Torres-Díaz, Cristian; Val, James; Veiga, José Pablo; Wang, Deli; Yuan, Xia; Zaady, Eli
2015-01-01
Aim Geographic, climatic, and soil factors are major drivers of plant beta diversity, but their importance for dryland plant communities is poorly known. This study aims to: i) characterize patterns of beta diversity in global drylands, ii) detect common environmental drivers of beta diversity, and iii) test for thresholds in environmental conditions driving potential shifts in plant species composition. Location 224 sites in diverse dryland plant communities from 22 geographical regions on six continents. Methods Beta diversity was quantified with four complementary measures: the percentage of singletons (species occurring at only one site), Whittaker's beta diversity (β(W)), a directional beta diversity metric based on the correlation in species occurrences among spatially contiguous sites (β(R2)), and a multivariate abundance-based metric (β(MV)). We used linear modelling to quantify the relationships between these metrics of beta diversity and geographic, climatic, and soil variables. Results Soil fertility and variability in temperature and rainfall, and to a lesser extent latitude, were the most important environmental predictors of beta diversity. Metrics related to species identity (percentage of singletons and β(W)) were most sensitive to soil fertility, whereas metrics related to environmental gradients and abundance (β(R2) and β(MV)) were more associated with climate variability. Interactions among soil variables, climatic factors, and plant cover were not important determinants of beta diversity. Sites receiving less than 178 mm of annual rainfall differed sharply in species composition from more mesic sites (> 200 mm). Main conclusions Soil fertility and variability in temperature and rainfall are the most important environmental predictors of variation in plant beta diversity in global drylands. Our results suggest that sites receiving ~178 mm of annual rainfall will be especially sensitive to future climate change. These findings may help to define appropriate conservation strategies for mitigating effects of climate change on dryland vegetation. PMID:25914437
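Of the four measures, Whittaker's beta has the simplest form. A sketch under one common formulation (beta_W = gamma / mean alpha) is shown below on a toy presence/absence matrix; the paper's exact formulation may differ.

```python
import numpy as np

def whittaker_beta(presence):
    """Whittaker's beta diversity, beta_W = gamma / mean(alpha), for a
    sites x species presence/absence matrix (one common formulation)."""
    presence = np.asarray(presence, dtype=bool)
    gamma = presence.any(axis=0).sum()      # regional species richness
    alpha = presence.sum(axis=1).mean()     # mean per-site richness
    return gamma / alpha

sites = [[1, 1, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 1]]
print(whittaker_beta(sites))  # 4 species / mean alpha of 2 = 2.0
```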
A condition metric for Eucalyptus woodland derived from expert evaluations.
Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D
2018-02-01
The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by the model's quality scores for expert-evaluated test sites that were not used in training. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given that the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
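The model structure, an ensemble of 30 bagged regression trees mapping 13 site variables to a perceived-quality score, is straightforward to reproduce in outline. The sketch below uses scikit-learn with synthetic stand-ins for the expert data; BaggingRegressor's default base learner is a regression tree.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

# Synthetic stand-ins for the paper's data: 200 hypothetical sites scored by
# experts, each described by 13 site variables (e.g., cover of shrubs,
# richness of native forbs).
rng = np.random.default_rng(42)
X = rng.uniform(0, 100, size=(200, 13))
y = X[:, :3].mean(axis=1) + rng.normal(0, 5, size=200)  # fake quality scores

# An ensemble of 30 bagged regression trees, mirroring the model described.
model = BaggingRegressor(n_estimators=30, random_state=0).fit(X, y)
print(model.predict(X[:2]))  # predicted condition scores for two sites
```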
Using Remote Sensing to Estimate Crop Water Use to Improve Irrigation Water Management
NASA Astrophysics Data System (ADS)
Reyes-Gonzalez, Arturo
Irrigation water is scarce. Hence, accurate estimation of crop water use is necessary for proper irrigation management and water conservation. Satellite-based remote sensing is a tool that can estimate crop water use efficiently. Several models have been developed to estimate crop water requirement or actual evapotranspiration (ETa) using remote sensing. One of them is the Mapping EvapoTranspiration at High Resolution using Internalized Calibration (METRIC) model. This model has been compared with other methods for ET estimation, including weighing lysimeters, pan evaporation, Bowen Ratio Energy Balance System (BREBS), Eddy Covariance (EC), and sap flow. However, comparison of METRIC model outputs to an atmometer for ETa estimation had not yet been attempted in eastern South Dakota. The results showed a good relationship between ETa estimated by the METRIC model and ETa estimated with the atmometer (r2 = 0.87 and RMSE = 0.65 mm day-1); however, ETa values from the atmometer were consistently lower than ETa values from METRIC. The verification of remotely sensed estimates of surface variables is essential for any remote-sensing study. The relationships between LAI, Ts, and ETa estimated using the remote sensing-based METRIC model and measured in situ were established. The results showed good agreement between the variables measured in situ and estimated by the METRIC model: LAI showed r2 = 0.76 and RMSE = 0.59 m2 m-2, Ts had r2 = 0.87 and RMSE = 1.24 °C, and ETa presented r2 = 0.89 and RMSE = 0.71 mm day-1. Estimation of ETa using an energy balance method can be challenging and time consuming, so there is a need for a simple and fast method to estimate ETa using minimal input parameters. Two methods were used: 1) an energy balance method (EB method) that used the Landsat image, weather data, a digital elevation map, and a land cover map as inputs, and 2) a Kc-NDVI method that uses two inputs, the Landsat image and weather data. A strong relationship was found between the two methods, with r2 of 0.97 and RMSE of 0.37 mm day-1. Hence, the Kc-NDVI method performed well for ETa estimation, indicating that it can be a robust and reliable method to estimate ETa in a short period of time. Finally, crop evapotranspiration (ETc) was estimated using a satellite remote sensing-based vegetation index, the Normalized Difference Vegetation Index (NDVI), calculated from the near-infrared and red wavebands. The relationship between NDVI and tabulated Kc values was used to generate Kc maps, and ETc maps were developed by multiplying the Kc maps by reference evapotranspiration (ETr). Daily ETc maps helped to explain the variability of crop water use during the growing season. Based on the results we can conclude that ETc maps developed from remotely sensed multispectral vegetation indices are a useful tool for quantifying crop water use at regional and field scales.
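The Kc-NDVI chain described above reduces to a few lines: compute NDVI from the red and near-infrared bands, map it linearly to a crop coefficient, and multiply by reference ET. The coefficients below are placeholders; the dissertation derived its own relation between NDVI and tabulated Kc values.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def etc_from_ndvi(nir, red, etr, a=1.25, b=-0.15):
    """Kc-NDVI style estimate: Kc from a linear NDVI relation, ETc = Kc * ETr.
    Coefficients a and b are illustrative placeholders, not the study's."""
    kc = a * ndvi(nir, red) + b
    return np.clip(kc, 0, None) * etr

# Toy pixel reflectances with a reference ET of 7 mm/day:
print(etc_from_ndvi(nir=0.45, red=0.10, etr=7.0))
```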
Marc C. Coles-Ritchie; Richard C. Henderson; Eric K. Archer; Caroline Kennedy; Jeffrey L. Kershner
2004-01-01
Tests were conducted to evaluate variability among observers for riparian vegetation data collection methods and data reduction techniques. The methods are used as part of a large-scale monitoring program designed to detect changes in riparian resource conditions on Federal lands. Methods were evaluated using agreement matrices, the Bray-Curtis dissimilarity metric, the...
Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.
2012-01-01
We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data, so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold; we used thresholds of 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics; however, combining event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to determine which had the “best” accuracy: the 16°C threshold had slightly fewer misclassifications, but the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
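A sketch of such event metrics for a single threshold is given below; the frequency, duration, magnitude, and area definitions follow the description above, though the paper's exact formulas may differ in detail.

```python
import numpy as np

def event_metrics(temps, threshold=20.0):
    """Frequency, total duration (h), magnitude (peak exceedance, deg C) and
    area (degree-hours) of excursions above `threshold` in an hourly series."""
    exceed = np.asarray(temps, float) - threshold
    above = exceed > 0
    # +1/-1 edges mark the starts/ends of contiguous exceedance runs
    edges = np.flatnonzero(np.diff(np.concatenate(([0], above.astype(int), [0]))))
    starts, ends = edges[::2], edges[1::2]
    return {
        "frequency": len(starts),
        "duration": int(above.sum()),
        "magnitude": float(exceed.max()) if above.any() else 0.0,
        "area": float(exceed[above].sum()),
    }

hourly = [18, 19, 21, 23, 22, 19, 18, 20.5, 21, 19]
print(event_metrics(hourly, threshold=20.0))
# {'frequency': 2, 'duration': 5, 'magnitude': 3.0, 'area': 7.5}
```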
A public hedonic analysis of environmental attributes in an open space preservation program
NASA Astrophysics Data System (ADS)
Nordman, Erik E.
The Town of Brookhaven, on Long Island, NY, has implemented an open space preservation program to protect natural areas, and the ecosystem services they provide, from suburban growth. I used a public hedonic model of Brookhaven's open space purchases to estimate implicit prices for various environmental attributes, locational variables, and spatial metrics. I also measured the correlation between cost per acre and non-monetary environmental benefit scores and tested whether including cost data, as opposed to non-monetary environmental benefit score alone, would change the prioritization ranks of acquired properties. The mean acquisition cost per acre was $82,501. I identified the key on-site environmental and locational variables using stepwise regression for four functional forms. The log-log specification performed best (R2adj = 0.727). I performed a second stepwise regression (log-log form) that included spatial metrics, calculated from a high-resolution land cover classification, in addition to the environmental and locational variables. This markedly improved the model's performance (R2adj = 0.866). Statistically significant variables included the property size, location in the Pine Barrens Compatible Growth Area, location in a FEMA flood zone, adjacency to public land, and several other environmental dummy variables. The single significant spatial metric, the fractal dimension of the tree cover class, had the largest elasticity of any variable. Of the dummy variables, location within the Compatible Growth Area had the largest implicit price ($298,792 per acre). The priority ranks from the two methods, non-monetary environmental benefit score alone and the ratio of non-monetary environmental benefit score to acquisition cost, were significantly positively correlated. This suggests that, despite the lack of cost data in its ranking method, Brookhaven does not suffer from efficiency losses. The economics literature encourages using both environmental benefits and acquisition costs to ensure cost-effective conservation programs. I recommend that Brookhaven consider acquisition costs in addition to environmental benefits to avert potential efficiency losses in future open space purchases. This dissertation shows that the addition of spatial metrics can enhance the performance of hedonic models. It also provides a baseline valuation for the environmental attributes of Brookhaven's open spaces and shows that location is critical when dealing with open space preservation programs.
Many multivariate methods are used to describe and predict relations; each has its own usage of categorical and non-categorical data. In multivariate analysis of variance (MANOVA), many response variables (y's) are related to many independent variables that are categorical...
Taming the nonlinearity of the Einstein equation.
Harte, Abraham I
2014-12-31
Many of the technical complications associated with the general theory of relativity ultimately stem from the nonlinearity of Einstein's equation. It is shown here that an appropriate choice of dynamical variables may be used to eliminate all such nonlinearities beyond a particular order: Both Landau-Lifshitz and tetrad formulations of Einstein's equation are obtained that involve only finite products of the unknowns and their derivatives. Considerable additional simplifications arise in physically interesting cases where metrics become approximately Kerr or, e.g., plane waves, suggesting that the variables described here can be used to efficiently reformulate perturbation theory in a variety of contexts. In all cases, these variables are shown to have simple geometrical interpretations that directly relate the local causal structure associated with the metric of interest to the causal structure associated with a prescribed background. A new method to search for exact solutions is outlined as well.
Launch Vehicle Production and Operations Cost Metrics
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Neeley, James R.; Blackburn, Ruby F.
2014-01-01
Traditionally, launch vehicle cost has been evaluated based on $/kg to orbit. This metric is calculated based on assumptions not typically met by a specific mission. These assumptions include the specified orbit, whether Low Earth Orbit (LEO), Geostationary Earth Orbit (GEO), or both. The metric also assumes the payload utilizes the full lift mass of the launch vehicle, which is rarely true even with secondary payloads [1,2,3]. Other approaches to cost metrics have been evaluated, including the unit cost of the launch vehicle and an approach that considers the full program production and operations costs [4]. Unit cost considers the variable cost of the vehicle, and the definition of variable costs is discussed. The full program production and operations costs include both the variable costs and the manufacturing base. This metric also distinguishes operations costs from production costs, including pre-flight operational testing. Operations costs also consider the costs of flight operations, including control center operation and maintenance. Each of these 3 cost metrics shows different sensitivities to various aspects of launch vehicle cost drivers. The comparison of these metrics provides the strengths and weaknesses of each, yielding an assessment useful for cost metric selection for launch vehicle programs.
NASA Astrophysics Data System (ADS)
Senay, G. B.; Budde, M. E.; Allen, R. G.; Verdin, J. P.
2008-12-01
Evapotranspiration (ET) is an important component of the hydrologic budget because it expresses the exchange of mass and energy between the soil-water-vegetation system and the atmosphere. Since direct measurement of ET is difficult, various modeling methods are used to estimate actual ET (ETa). Generally, the choice of method for ET estimation depends on the objective of the study and is further limited by the availability of data and the desired accuracy of the ET estimate. Operational monitoring of crop performance requires processing large data sets and a quick response time. A Simplified Surface Energy Balance (SSEB) model was developed by the U.S. Geological Survey's Famine Early Warning Systems Network to estimate irrigation water use in remote places of the world. In this study, we evaluated the performance of the SSEB model against the METRIC (Mapping Evapotranspiration at high Resolution and with Internalized Calibration) model, which has been validated by several researchers using lysimeter data and has been proven to provide reliable ET estimates in different regions of the world. Reference ET fractions of both models (ETrF of METRIC vs. ETf of SSEB) were generated and compared using individual Landsat thermal images collected from 2000 through 2005 in Idaho, New Mexico, and California. In addition, the models were compared using monthly and seasonal total ETa estimates. The SSEB model reproduced both the spatial and temporal variability exhibited by METRIC on land surfaces, explaining up to 80 percent of the spatial variability. However, the ETa estimates over water bodies were systematically higher in the SSEB output; this could be improved by using a correction coefficient to take into account the absorption of solar energy by deeper water layers, which contributes little to the ET process. This study demonstrated the usefulness of the SSEB method for large-scale agro-hydrologic applications and for operational monitoring and assessment of crop performance and regional water balance dynamics.
Hens, Koen; Berth, Mario; Armbruster, Dave; Westgard, Sten
2014-07-01
Six Sigma metrics were used to assess the analytical quality of automated clinical chemistry and immunoassay tests in a large Belgian clinical laboratory and to explore the importance of the source used for estimation of the allowable total error. Clinical laboratories are continually challenged to maintain analytical quality, but it is difficult to measure assay quality objectively and quantitatively. The Sigma metric is a single number that estimates quality based on the traditional parameters used in the clinical laboratory: allowable total error (TEa), precision, and bias. In this study, Sigma metrics were calculated for 41 clinical chemistry assays for serum and urine on five ARCHITECT c16000 chemistry analyzers. Controls at two analyte concentrations were tested, and Sigma metrics were calculated using three different TEa targets (Ricos biological variability, CLIA, and RiliBÄK). Sigma metrics varied with analyte concentration, the TEa target, and between analyzers. Sigma values identified those assays that are analytically robust and require minimal quality control rules, as well as those that exhibit more variability and require more complex rules. Analyzer-to-analyzer variability was assessed on the basis of Sigma metrics. Six Sigma is a more efficient way to control quality, but the lack of TEa targets for many analytes, and the sometimes inconsistent TEa targets from different sources, are important variables for the interpretation and application of Sigma metrics in a routine clinical laboratory. Sigma metrics are a valuable means of comparing the analytical quality of two or more analyzers to ensure the comparability of patient test results.
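The Sigma calculation itself is a one-liner, shown below with illustrative numbers; as the study stresses, the same assay can score very differently depending on which TEa source is used.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Six Sigma quality metric as commonly computed in the clinical lab:
    Sigma = (TEa - |bias|) / CV, with all terms in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative only: hypothetical TEa targets from three sources for one
# assay, with an observed bias of 2% and CV of 1.5%.
for source, tea in [("Ricos", 7.0), ("CLIA", 10.0), ("RiliBAEK", 11.5)]:
    print(source, round(sigma_metric(tea, bias_pct=2.0, cv_pct=1.5), 2))
```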
New Objective Refraction Metric Based on Sphere Fitting to the Wavefront
Martínez-Finkelshtein, Andreí
2017-01-01
Purpose To develop an objective refraction formula based on the ocular wavefront error (WFE), expressed in terms of Zernike coefficients and pupil radius, that accurately predicts the subjective spherical equivalent (SE) for different pupil sizes. Methods A sphere is fitted to the ocular wavefront at the center and at a variable distance, t. The optimal fitting distance, topt, is obtained empirically from a dataset of 308 eyes as a function of objective refraction pupil radius, r0, and used to define the formula of a new wavefront refraction metric (MTR). The metric is tested in another, independent dataset of 200 eyes. Results For pupil radii r0 ≤ 2 mm, the new metric predicts the equivalent sphere with accuracy similar to traditional metrics (<0.1D); however, for r0 > 2 mm, the mean error of traditional metrics can increase beyond 0.25D, whereas the MTR remains accurate. The proposed metric allows clinicians to obtain an accurate clinical spherical equivalent value without rescaling/refitting of the wavefront coefficients. It has the potential to be developed into a metric that can predict the full spherocylindrical refraction for the desired illumination conditions and corresponding pupil size. PMID:29104804
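For context, traditional wavefront refraction derives the spherical equivalent from the Zernike defocus coefficient alone; in standard notation (not necessarily the paper's), the paraxial formula is:

```latex
% Paraxial spherical equivalent from the Zernike defocus coefficient c_2^0
% and pupil radius r_0; its accuracy degrades as r_0 grows beyond ~2 mm,
% which is the regime the MTR metric is designed to handle.
M = \frac{-4\sqrt{3}\, c_2^0}{r_0^{2}}
```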
Utility of different glycemic control metrics for optimizing management of diabetes.
Kohnert, Klaus-Dieter; Heinke, Peter; Vogt, Lutz; Salzsieder, Eckhard
2015-02-15
The benchmark for assessing quality of long-term glycemic control and adjustment of therapy is currently glycated hemoglobin (HbA1c). Despite its importance as an indicator for the development of diabetic complications, recent studies have revealed that this metric has some limitations; it conveys a rather complex message, which has to be taken into consideration for diabetes screening and treatment. On the basis of recent clinical trials, the relationship between HbA1c and cardiovascular outcomes in long-standing diabetes has been called into question. It has become obvious that other surrogate markers and biomarkers are needed to better predict cardiovascular diabetes complications and assess the efficiency of therapy. Glycated albumin, fructosamine, and 1,5-anhydroglucitol have received growing interest as alternative markers of glycemic control. In addition to measures of hyperglycemia, advanced glucose monitoring methods have become available. An indispensable adjunct to HbA1c in routine diabetes care is self-monitoring of blood glucose. This monitoring method is now widely used, as it provides immediate feedback to patients on short-term changes, involving fasting, preprandial, and postprandial glucose levels. Beyond the traditional metrics, glycemic variability has been identified as a predictor of hypoglycemia, and it might also be implicated in the pathogenesis of vascular diabetes complications. Assessment of glycemic variability is thus important, but exact quantification requires frequently sampled glucose measurements. In order to optimize diabetes treatment, there is a need both for key metrics of glycemic control on a day-to-day basis and for more advanced, user-friendly monitoring methods. In addition to traditional discontinuous glucose testing, continuous glucose sensing has become a useful tool to reveal insufficient glycemic management. This new technology is particularly effective in patients with complicated diabetes and provides the opportunity to characterize glucose dynamics. Several continuous glucose monitoring (CGM) systems, which have shown usefulness in clinical practice, are presently on the market. They can broadly be divided into systems providing retrospective or real-time information on glucose patterns. The widespread clinical application of CGM is still hampered by the lack of generally accepted measures for assessment of glucose profiles and standardized reporting of glucose data. In this article, we discuss advantages and limitations of various metrics for glycemic control as well as possibilities for evaluation of glucose data, with a special focus on glycemic variability and the application of CGM to improve individual diabetes management.
A method for the use of landscape metrics in freshwater research and management
Kearns, F.R.; Kelly, N.M.; Carter, J.L.; Resh, V.H.
2005-01-01
Freshwater research and management efforts could be greatly enhanced by a better understanding of the relationship between landscape-scale factors and water quality indicators. This is particularly true in urban areas, where land transformation impacts stream systems at a variety of scales. Despite advances in landscape quantification methods, several studies attempting to elucidate the relationship between land use/land cover (LULC) and water quality have resulted in mixed conclusions. However, these studies have largely relied on compositional landscape metrics. For urban and urbanizing watersheds in particular, the use of metrics that capture spatial pattern may further aid in distinguishing the effects of various urban growth patterns, as well as exploring the interplay between environmental and socioeconomic variables. However, to be truly useful for freshwater applications, pattern metrics must be optimized based on characteristic watershed properties and common water quality point sampling methods. Using a freely available LULC data set for the Santa Clara Basin, California, USA, we quantified landscape composition and configuration for subwatershed areas upstream of individual sampling sites, reducing the number of metrics based on: (1) sensitivity to changes in extent and (2) redundancy, as determined by a multivariate factor analysis. The first two factors, interpreted as (1) patch density and distribution and (2) patch shape and landscape subdivision, explained approximately 85% of the variation in the data set, and are highly reflective of the heterogeneous urban development pattern found in the study area. Although offering slightly less explanatory power, compositional metrics can provide important contextual information. © Springer 2005.
NASA Astrophysics Data System (ADS)
Konoplya, R. A.; Stuchlík, Z.; Zhidenko, A.
2018-04-01
We determine the class of axisymmetric and asymptotically flat black-hole spacetimes for which the test Klein-Gordon and Hamilton-Jacobi equations allow for the separation of variables. The known Kerr, Kerr-Newman, Kerr-Sen and some other black-hole metrics in various theories of gravity are within the class of spacetimes described here. It is shown that although the black-hole metric in the Einstein-dilaton-Gauss-Bonnet theory does not allow for the separation of variables (at least in the considered coordinates), for a number of applications it can be effectively approximated by a metric within the above class. This gives us some hope that the class of spacetimes described here may be not only generic for the known solutions allowing for the separation of variables, but also a good approximation for a broader class of metrics, which does not admit such separation. Finally, the generic form of the axisymmetric metric is expanded in the radial direction in terms of the continued fractions and the connection with other black-hole parametrizations is discussed.
Brock, John C.; Krabill, William; Sallenger, Asbury H.
2004-01-01
In order to reap the potential of airborne lidar surveys to provide geological information useful in understanding coastal sedimentary processes acting on various time scales, a new set of analysis methods is needed. This paper presents a multi-temporal lidar analysis of north Assateague Island, Maryland, and demonstrates the calculation of lidar metrics that condense barrier island morphology and morphological change into attributed linear features that may be used to analyze trends in coastal evolution. The new methods proposed in this paper are also of significant practical value, because lidar metric analysis reduces large volumes of point elevations into linear features attributed with essential morphological variables that are ideally suited for inclusion in Geographic Information Systems. A morphodynamic classification of north Assateague Island for a recent 10-month period, based on the recognition of simple patterns described by lidar change metrics, is presented. Such morphodynamic classification reveals the relative magnitude and the fine-scale alongshore variation in the importance of coastal changes over the study area during a defined time period. More generally, through this morphodynamic classification of north Assateague Island, the value of lidar metrics both in examining large lidar data sets for coherent trends and in building hypotheses regarding processes driving barrier evolution is demonstrated.
Nyflot, Matthew J.; Yang, Fei; Byrd, Darrin; Bowen, Stephen R.; Sandison, George A.; Kinahan, Paul E.
2015-01-01
Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes. PMID:26251842
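One family of the co-occurrence-based features evaluated in such studies can be computed in a few lines with scikit-image. The ROI below is synthetic noise rather than a reconstructed PET lesion, so the printed values only illustrate the mechanics; the study computed such features over 50 independent noise realizations.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Synthetic stand-in for a quantized lesion ROI (32 gray levels).
rng = np.random.default_rng(0)
roi = rng.integers(0, 32, size=(64, 64), dtype=np.uint8)

# Gray-level co-occurrence matrix at distance 1, angle 0.
glcm = graycomatrix(roi, distances=[1], angles=[0], levels=32,
                    symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop)[0, 0])
```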
Nonlinear image registration with bidirectional metric and reciprocal regularization
Ying, Shihui; Li, Dan; Xiao, Bin; Peng, Yaxin; Du, Shaoyi; Xu, Meifeng
2017-01-01
Nonlinear registration is an important technique for aligning two different images and is widely applied in medical image analysis. In this paper, we develop a novel nonlinear registration framework based on the diffeomorphic demons, in which a reciprocal regularizer is introduced under the assumption that the deformation between two images is an exact diffeomorphism. In detail, we first adopt a bidirectional metric to improve the symmetry of the energy functional, whose variables are two reciprocal deformations. Second, we relax these two deformations into two independent variables and introduce a reciprocal regularizer to ensure that the deformations are exact diffeomorphisms. We then utilize an alternating iterative strategy to decouple the model into two minimization subproblems, for which a new closed form for the approximate velocity of the deformation is calculated. Finally, we compare our proposed algorithm with two related conventional methods on two data sets of real brain MR images. The results validate that our proposed method improves the accuracy and robustness of registration, and that the obtained bidirectional deformations are indeed reciprocal. PMID:28231342
A Computational Model Quantifies the Effect of Anatomical Variability on Velopharyngeal Function
Inouye, Joshua M.; Perry, Jamie L.; Lin, Kant Y.
2015-01-01
Purpose This study predicted the effects of velopharyngeal (VP) anatomical parameters on VP function to provide a greater understanding of speech mechanics and aid in the treatment of speech disorders. Method We created a computational model of the VP mechanism using dimensions obtained from magnetic resonance imaging measurements of 10 healthy adults. The model components included the levator veli palatini (LVP), the velum, and the posterior pharyngeal wall, and the simulations were based on material parameters from the literature. The outcome metrics were the VP closure force and LVP muscle activation required to achieve VP closure. Results Our average model compared favorably with experimental data from the literature. Simulations of 1,000 random anatomies reflected the large variability in closure forces observed experimentally. VP distance had the greatest effect on both outcome metrics when considering the observed anatomic variability. Other anatomical parameters were ranked by their predicted influences on the outcome metrics. Conclusions Our results support the implication that interventions for VP dysfunction that decrease anterior to posterior VP portal distance, increase velar length, and/or increase LVP cross-sectional area may be very effective. Future modeling studies will help to further our understanding of speech mechanics and optimize treatment of speech disorders. PMID:26049120
2013-01-01
Background Exposure to air pollution is frequently associated with reductions in birth weight but results of available studies vary widely, possibly in part because of differences in air pollution metrics. Further insight is needed to identify the air pollution metrics most strongly and consistently associated with birth weight. Methods We used a hospital-based obstetric database of more than 70,000 births to study the relationships between air pollution and the risk of low birth weight (LBW, <2,500 g), as well as birth weight as a continuous variable, in term-born infants. Complementary metrics capturing different aspects of air pollution were used (measurements from ambient monitoring stations, predictions from land use regression models and from a Gaussian dispersion model, traffic density, and proximity to roads). Associations between air pollution metrics and birth outcomes were investigated using generalized additive models, adjusting for maternal age, parity, race/ethnicity, insurance status, poverty, gestational age and sex of the infants. Results Increased risks of LBW were associated with ambient O3 concentrations as measured by monitoring stations, as well as traffic density and proximity to major roadways. LBW was not significantly associated with other air pollution metrics, except that a decreased risk was associated with ambient NO2 concentrations as measured by monitoring stations. When birth weight was analyzed as a continuous variable, small increases in mean birth weight were associated with most air pollution metrics (<40 g per inter-quartile range in air pollution metrics). No such increase was observed for traffic density or proximity to major roadways, and a significant decrease in mean birth weight was associated with ambient O3 concentrations. Conclusions We found contrasting results according to the different air pollution metrics examined. Unmeasured confounders and/or measurement errors might have produced spurious positive associations between birth weight and some air pollution metrics. Despite this, ambient O3 was associated with a decrement in mean birth weight and significant increases in the risk of LBW were associated with traffic density, proximity to roads and ambient O3. This suggests that in our study population, these air pollution metrics are more likely related to increased risks of LBW than the other metrics we studied. Further studies are necessary to assess the consistency of such patterns across populations. PMID:23413962
NASA Astrophysics Data System (ADS)
Yan, Jin; Song, Xiao; Gong, Guanghong
2016-02-01
We describe a metric named averaged ratio between complementary profiles to represent the distortion of map projections, and the shape regularity of spherical cells derived from map projections or non-map-projection methods. The properties and statistical characteristics of our metric are investigated. Our metric (1) is numerically equivalent to both the scale component and the angular deformation component of the Tissot indicatrix, and avoids the invalidation that occurs when using the Tissot indicatrix and derived differential calculus to evaluate non-map-projection based tessellations for which mathematical formulae do not exist (e.g., direct spherical subdivisions), (2) is simple (requiring neither differential nor integral calculus) and uniform in the form of its calculations, (3) requires low computational cost while maintaining high correlation with the results of differential calculus, (4) is a quasi-invariant under rotations, and (5) reflects the distortions of map projections, the distortion of spherical cells, and the associated distortions of texels. As an indicator for quantitative evaluation, we investigated typical spherical tessellation methods, some variants of tessellation methods, and map projections. The tessellation methods we evaluated are based on map projections or direct spherical subdivisions. The evaluation involves commonly used Platonic polyhedrons, Catalan polyhedrons, etc. Quantitative analyses based on our metric of shape regularity and an essential metric of area uniformity implied that (1) Uniform Spherical Grids and its variant show good qualities in both area uniformity and shape regularity, and (2) Crusta, Unicube map, and a variant of Unicube map exhibit fairly acceptable degrees of area uniformity and shape regularity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry
The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase, and tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort needed to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91 percent reduction with less than 5 percent error when predicting voltage regulator operations.
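The core idea, solving the power flow only at time steps that matter, can be sketched as a step-selection rule. This simplified version keeps a coarse grid plus any step with a large PV or load change; the paper's actual solver is driven by error bounds on the metrics listed above, so the rule and thresholds here are assumptions.

```python
import numpy as np

def variable_step_indices(pv, load, coarse=12, tol=0.02):
    """Pick time-step indices for a QSTS sweep: keep every `coarse`-th point,
    plus any step where PV output or load changes by more than `tol`
    (per-unit) between samples. A simplified sketch of the idea; the paper's
    solver selects steps so that key metrics (extreme voltages, regulator
    tap counts, losses) are preserved."""
    change = np.maximum(np.abs(np.diff(pv, prepend=pv[0])),
                        np.abs(np.diff(load, prepend=load[0])))
    keep = (np.arange(len(pv)) % coarse == 0) | (change > tol)
    return np.flatnonzero(keep)

# One simulated day at 1-minute resolution:
t = np.arange(1440)
pv = np.clip(np.sin((t - 360) / 1440 * 2 * np.pi), 0, None)  # clear-sky shape
load = 0.6 + 0.02 * np.random.default_rng(3).standard_normal(1440)
idx = variable_step_indices(pv, load)
print(f"{len(idx)} of {len(t)} steps solved ({1 - len(idx)/len(t):.0%} saved)")
```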
Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud
2013-09-01
The urinary iodine micromethod (UIMM) is a modification of the conventional method, and its performance needs evaluation. UIMM performance was evaluated using method validation data and data from the 2008 Iodine Deficiency Disorders survey obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM met most method acceptability test criteria, with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed Sigma metrics of 2.75, 1.80, and 3.80 at 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc)
Uncertainty Quantification of the FUN3D-Predicted NASA CRM Flutter Boundary
NASA Technical Reports Server (NTRS)
Stanford, Bret K.; Massey, Steven J.
2017-01-01
A nonintrusive point collocation method is used to propagate parametric uncertainties of the flexible Common Research Model, a generic transport configuration, through the unsteady aeroelastic CFD solver FUN3D. A range of random input variables is considered, including atmospheric flow variables, structural variables, and inertial (lumped mass) variables. UQ results are explored for a range of output metrics (with a focus on dynamic flutter stability), for both subsonic and transonic Mach numbers, and for two different CFD mesh refinements. A particular focus is placed on computing failure probabilities: the probability that the wing will flutter within the flight envelope.
Wendel, Jochen; Buttenfield, Barbara P.; Stanislawski, Larry V.
2016-01-01
Knowledge of landscape type can inform cartographic generalization of hydrographic features, because landscape characteristics provide an important geographic context that affects variation in channel geometry, flow pattern, and network configuration. Landscape types are characterized by expansive spatial gradients, lacking abrupt changes between adjacent classes, and by a limited number of outliers that might confound classification. The US Geological Survey (USGS) is exploring methods to automate generalization of features in the National Hydrography Dataset (NHD), to associate specific sequences of processing operations and parameters with specific landscape characteristics, thus obviating manual selection of a unique processing strategy for every NHD watershed unit. A chronology of methods to delineate physiographic regions for the United States is described, including a recent maximum likelihood classification based on seven input variables. This research compares unsupervised and supervised algorithms applied to these seven input variables to evaluate, and possibly refine, the recent classification. Evaluation metrics for unsupervised methods include the Davies–Bouldin index, the Silhouette index, and the Dunn index, as well as quantization and topographic error metrics. Cross validation and misclassification rate analysis are used to evaluate supervised classification methods. The paper reports the comparative analysis and its impact on the selection of landscape regions. The compared solutions show problems in areas of high landscape diversity. There is some indication that additional input variables, additional classes, or more sophisticated methods could refine the existing classification.
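Two of the internal validity indices named above are available directly in scikit-learn. A minimal sketch on synthetic data (standing in for the seven input variables) follows; the Dunn index is not in scikit-learn and is omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score, silhouette_score

# Synthetic stand-in for the seven physiographic input variables.
X, _ = make_blobs(n_samples=500, n_features=7, centers=5, random_state=0)

for k in (3, 5, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k,
          round(davies_bouldin_score(X, labels), 3),  # lower is better
          round(silhouette_score(X, labels), 3))      # higher is better
```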
NASA Astrophysics Data System (ADS)
Mbabazi, D.; Mohanty, B.; Gaur, N.
2017-12-01
Evapotranspiration (ET) is an important component of the water and energy balance and accounts for 60-70% of precipitation losses. However, accurate estimates of ET are difficult to obtain at varying spatial and temporal scales. Eddy covariance methods estimate ET at high temporal resolution but do not capture the spatial variation of ET within their footprints. On the other hand, remote sensing methods using Landsat imagery provide ET with high spatial resolution but low temporal resolution (16 days). In this study, we used both eddy covariance and remote sensing methods to generate high space-time resolution ET. Daily, monthly, and seasonal ET estimates were obtained using the eddy covariance (EC) method, the Penman-Monteith (PM) model, and the Mapping Evapotranspiration with Internalized Calibration (METRIC) model to determine cotton and native prairie ET dynamics in the Brazos river basin, which is characterized by varying hydro-climatic and geological gradients. Daily estimates of spatially distributed ET (30 m resolution) were generated using spatial autocorrelation and temporal interpolation between the EC flux variable footprints and METRIC ET for the 2016 and 2017 growing seasons. A comparison of the 2016 and 2017 preliminary daily ET estimates showed similar ET dynamics/trends among the EC, PM, and METRIC methods, and 5-20% differences in seasonal ET estimates. This study will improve the spatial estimates of EC ET and the temporal resolution of satellite-derived ET, thus providing better ET data for water use management.
ASSOCIATION OF LANDSCAPE METRICS TO SURFACE WATER BIOLOGY IN THE SAVANNAH RIVER BASIN
Surface water quality for the Savannah River basin was assessed using water biology and landscape metrics. Two multivariate analyses, partial least squares and canonical correlation, were used to describe how the structural variation in landscape variable(s) that contribute the ...
The Ecosystem of Information Retrieval
ERIC Educational Resources Information Center
Rodriguez-Munoz, Jose-Vicente; Martinez-Mendez, Francisco-Javier; Pastor-Sanchez, Juan-Antonio
2012-01-01
Introduction: This paper presents an initial proposal for a formal framework that, by studying the metric variables involved in information retrieval, can establish the sequence of events involved and how to perform it. Method: A systematic approach from the equations of Shannon and Weaver to establish the decidability of information retrieval…
A SPATIAL ANALYSIS OF FINE-ROOT BIOMASS FROM STAND DATA IN OREGON AND WASHINGTON
Because of the high spatial variability of fine roots in natural forest stands, accurate estimates of stand-level fine root biomass are difficult and expensive to obtain by standard coring methods. This study compares two different approaches that employ aboveground tree metrics...
Auble, Gregor T.; Bowen, Zachary H.; Bovee, Ken D.; Farmer, Adrian H.; Sexton, Natalie R.; Waddle, Terry J.
2004-01-01
The largest portion of the document is an Appendix that summarizes each of the individual scientific studies in terms of scope and methods, findings, principal variables, and metrics used in the study or suggested by the study results, and important needs for further study.
The AMRL Anthropometric Data Bank Library: Volumes 1-5
1977-10-01
crinion arc (#127) which were not measured on bald and balding men. Non-metric variables on the tape include somatotype ratings, both by the Sheldon...158). An analysis of the somatotype material was published as A Statistical Comparison of the Body Typing Methods of Hooton and Sheldon by C
A SPATIAL ANALYSIS OF THE FINE ROOT BIOMASS FROM STAND DATA IN THE PACIFIC NORTHWEST
High spatial variability of fine roots in natural forest stands makes accurate estimates of stand-level fine root biomass difficult and expensive to obtain by standard coring methods. This study uses aboveground tree metrics and spatial relationships to improve core-based estima...
Surface water quality is related to conditions in the surrounding geophysical environment, including soils, landcover, and anthropogenic activities. A number of statistical methods may be used to analyze and explore relationships among variables. Single-, multiple- and multivaria...
Helmer, K G; Chou, M-C; Preciado, R I; Gimi, B; Rollins, N K; Song, A; Turner, J; Mori, S
2016-02-27
It is now common for magnetic-resonance-imaging (MRI) based multi-site trials to include diffusion-weighted imaging (DWI) as part of the protocol. It is also common for these sites to possess MR scanners of different manufacturers, different software and hardware, and different software licenses. These differences mean that scanners may not be able to acquire data with the same number of gradient amplitude values and number of available gradient directions. Variability can also occur in achievable b-values and minimum echo times. The challenge of a multi-site study, then, is to create a common protocol by understanding and then minimizing the effects of scanner variability and identifying reliable and accurate diffusion metrics. This study describes the effect of site, scanner vendor, field strength, and TE on two diffusion metrics: the first moment of the diffusion tensor field (mean diffusivity, MD) and the fractional anisotropy (FA), using two common analyses (region-of-interest and mean-bin value of whole-brain histograms). The goal of the study was to identify sources of variability in diffusion-sensitized imaging and their influence on commonly reported metrics. The results demonstrate that site, vendor, field strength, and echo time all contribute to variability in FA and MD, though to different extents. We conclude that characterization of the variability of DTI metrics due to site, vendor, field strength, and echo time is a worthwhile step in the construction of multi-center trials.
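The two metrics studied have standard closed forms; a minimal sketch computing them from diffusion-tensor eigenvalues (the eigenvalues below are illustrative, not study data):

```python
# MD is the mean of the tensor eigenvalues; FA is their normalized spread.
import numpy as np

lam = np.array([1.7, 0.4, 0.3])   # eigenvalues for one voxel, 1e-3 mm^2/s
md = lam.mean()                   # mean diffusivity
fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
print(f"MD = {md:.3f}, FA = {fa:.3f}")
```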
Online kinematic regulation by visual feedback for grasp versus transport during reach-to-pinch
Nataraj, Raviraj; Pasluosta, Cristian; Li, Zong-Ming
2014-01-01
Purpose This study investigated novel kinematic performance parameters to understand regulation by visual feedback (VF) of the reaching hand on the grasp and transport components during the reach-to-pinch maneuver. Conventional metrics often signify discrete movement features to postulate sensory-based control effects (e.g., time for maximum velocity to signify feedback delay). The presented metrics of this study were devised to characterize relative vision-based control of the sub-movements across the entire maneuver. Methods Movement performance was assessed according to reduced variability and increased efficiency of kinematic trajectories. Variability was calculated as the standard deviation about the observed mean trajectory for a given subject and VF condition across kinematic derivatives for sub-movements of inter-pad grasp (distance between thumb and index finger-pads; relative orientation of finger-pads) and transport (distance traversed by wrist). A Markov analysis then examined the probabilistic effect of VF on which movement component exhibited higher variability over phases of the complete maneuver. Jerk-based metrics of smoothness (minimal jerk) and energy (integrated jerk-squared) were applied to indicate total movement efficiency with VF. Results/Discussion The reductions in grasp variability metrics with VF were significantly greater (p<0.05) compared to transport for velocity, acceleration, and jerk, suggesting separate control pathways for each component. The Markov analysis indicated that VF preferentially regulates grasp over transport when continuous control is modeled probabilistically during the movement. Efficiency measures demonstrated VF to be more integral for early motor planning of grasp than transport in producing greater increases in smoothness and trajectory adjustments (i.e., jerk-energy) early compared to late in the movement cycle. Conclusions These findings demonstrate the greater regulation by VF on kinematic performance of grasp compared to transport and how particular features of this relativistic control occur continually over the maneuver. Utilizing the advanced performance metrics presented in this study facilitated characterization of VF effects continuously across the entire movement in corroborating the notion of separate control pathways for each component. PMID:24968371
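A minimal sketch of a jerk-based efficiency metric of the kind described (integrated jerk-squared), assuming a synthetic 1-D transport trajectory and an illustrative sampling rate:

```python
# Jerk energy: integral of squared third derivative of position.
import numpy as np

fs = 200.0                                  # samples per second (assumed)
t = np.arange(0, 1, 1 / fs)
pos = np.sin(np.pi * t / 2)                 # stand-in wrist-transport trajectory

jerk = np.gradient(np.gradient(np.gradient(pos, 1 / fs), 1 / fs), 1 / fs)
jerk_energy = np.sum(jerk ** 2) / fs        # integrated jerk-squared
print(f"jerk energy = {jerk_energy:.2f}")   # lower indicates a smoother movement
```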
Mapping Resource Selection Functions in Wildlife Studies: Concerns and Recommendations
Morris, Lillian R.; Proffitt, Kelly M.; Blackburn, Jason K.
2018-01-01
Predicting the spatial distribution of animals is an important and widely used tool with applications in wildlife management, conservation, and population health. Wildlife telemetry technology coupled with the availability of spatial data and GIS software has facilitated advancements in species distribution modeling. There are also challenges related to these advancements, including the accurate and appropriate implementation of species distribution modeling methodology. Resource Selection Function (RSF) modeling is a commonly used approach for understanding species distributions and habitat usage, and mapping the RSF results can enhance study findings and make them more accessible to researchers and wildlife managers. Currently, there is no consensus in the literature on the most appropriate method for mapping RSF results, methods are frequently not described, and mapping approaches are not always related to accuracy metrics. We conducted a systematic review of the RSF literature to summarize the methods used to map RSF outputs, discussed the relationship between mapping approaches and accuracy metrics, performed a case study on the implications of employing different mapping methods, and provide recommendations as to appropriate mapping techniques for RSF studies. We found extensive variability in methodology for mapping RSF results. Our case study revealed that the most commonly used approaches for mapping RSF results led to notable differences in the visual interpretation of RSF results, and there is a concerning disconnect between accuracy metrics and mapping methods. We make 5 recommendations for researchers mapping the results of RSF studies, which are focused on carefully selecting and describing the method used to map RSF results, and on relating mapping approaches to accuracy metrics. PMID:29887652
A Bibliography of Selected Publications: Project Air Force, 5th Edition
1989-05-01
Dyna - R-3028-AF. A Dynamic Retention Model for Air Force Officers: METRIC's DL and Pipeline Variability. M. J. Carrillo. Theory and Estimates. G...Theorem and Dyna - and Support. METRIC's Demand and Pipeline Variability. R-3255-AF. Aircraft Airframe Cost Estimating Relationships: N-2283/1-AF...(U). 1970-1985. N-2409-AF. Tanker Splitting Across the SIOP Bomber Force R-3389-AF. Dyna-METRIC Version 4: Modeling Worldwide (U). Logistics Support of
Objectively Quantifying Radiation Esophagitis With Novel Computed Tomography–Based Metrics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niedzielski, Joshua S., E-mail: jsniedzielski@mdanderson.org; University of Texas Houston Graduate School of Biomedical Science, Houston, Texas; Yang, Jinzhong
Purpose: To study radiation-induced esophageal expansion as an objective measure of radiation esophagitis in patients with non-small cell lung cancer (NSCLC) treated with intensity modulated radiation therapy. Methods and Materials: Eighty-five patients had weekly intra-treatment CT imaging and esophagitis scoring according to Common Terminology Criteria for Adverse Events 4.0 (24 Grade 0, 45 Grade 2, and 16 Grade 3). Nineteen esophageal expansion metrics based on mean, maximum, spatial length, and volume of expansion were calculated as voxel-based relative volume change, using the Jacobian determinant from deformable image registration between the planning and weekly CTs. An anatomic variability correction method was validated and applied to these metrics to reduce uncertainty. An analysis of expansion metrics and radiation esophagitis grade was conducted using normal tissue complication probability from univariate logistic regression and Spearman rank correlation for grade 2 and grade 3 esophagitis endpoints, as well as the timing of expansion and esophagitis grade. The metrics' performance in classifying esophagitis was tested with receiver operating characteristic analysis. Results: Expansion increased with esophagitis grade. Thirteen of 19 expansion metrics had receiver operating characteristic area under the curve values >0.80 for both grade 2 and grade 3 esophagitis endpoints, with the highest performance from maximum axial expansion (MaxExp1) and esophageal length with axial expansion ≥30% (LenExp30%), with area under the curve values of 0.93 and 0.91 for grade 2 and 0.90 and 0.90 for grade 3 esophagitis, respectively. Conclusions: Esophageal expansion may be a suitable objective measure of esophagitis, particularly maximum axial esophageal expansion and esophageal length with axial expansion ≥30%, with a Jacobian value of 2.1 and a length of 98.6 mm as the metric values for 50% probability of grade 3 esophagitis. The uncertainty in esophageal Jacobian calculations can be reduced with anatomic correction methods.
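A hedged sketch of the core step behind the voxel-wise expansion measure (not the authors' registration pipeline): the Jacobian determinant J = det(I + du/dx) of a displacement field, where values above 1 indicate local expansion. The field here is synthetic:

```python
# Per-voxel Jacobian determinant of a 3-D displacement field.
import numpy as np

nx, ny, nz = 32, 32, 32
u = np.random.default_rng(1).normal(0, 0.05, size=(3, nx, ny, nz))  # displacement (voxels)

grad = np.empty((3, 3, nx, ny, nz))
for i in range(3):                       # grad[i][j] = d u_i / d x_j
    grad[i] = np.stack(np.gradient(u[i]))

jac = np.eye(3)[:, :, None, None, None] + grad            # J = I + du/dx
det = np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))   # per-voxel determinant
print(det.mean())                        # ~1 for a near-identity deformation
```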
NASA Astrophysics Data System (ADS)
Chatenet, Q.; Tahan, A.; Gagnon, M.; Chamberland-Lauzon, J.
2016-11-01
Nowadays, engineers are able to solve complex equations thanks to the increase in computing capacity, and finite element software is widely used, especially in the field of mechanics, to predict part behavior such as strain, stress, and natural frequency. However, it can be difficult to determine whether a model is right or wrong, or whether one model is better than another. Nevertheless, during the design phase, it is very important to estimate how hydroelectric turbine blades will behave under the stresses to which they are subjected. Indeed, the static and dynamic stress levels influence a blade's fatigue resistance and thus its lifetime, which is a significant feature. In industry, engineers generally use graphic representation, hypothesis tests such as Student's t test, or linear regressions to compare experimental data with estimates from the numerical model. Due to the variability in personal interpretation (reproducibility), graphical validation is not considered objective. For an objective assessment, it is essential to use a robust validation metric to measure the conformity of predictions against data. We propose to use the area metric in the case of a turbine blade; the metric meets the key points of the ASME Standards and produces a quantitative measure of agreement between simulations and empirical data. This validation metric excludes subjective beliefs and ad hoc acceptance criteria, which increases robustness. The present work aims to apply a validation method according to ASME V&V 10 recommendations. First, the area metric is applied to the case of a real Francis runner whose geometry and boundary conditions are complex. Second, the area metric is compared to classical regression methods to evaluate the performance of the method. Finally, we discuss the use of the area metric as a tool to correct simulations.
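A minimal sketch of the area metric itself: the area between the empirical CDFs of simulated and measured values. The sample data are illustrative, not Francis-runner measurements:

```python
# Area metric: integral of |F_sim(x) - F_exp(x)| over x, using step ECDFs.
import numpy as np

def area_metric(sim, exp):
    grid = np.sort(np.concatenate([sim, exp]))
    ecdf = lambda s, x: np.searchsorted(np.sort(s), x, side="right") / s.size
    gap = np.abs(ecdf(sim, grid) - ecdf(exp, grid))
    return np.sum(gap[:-1] * np.diff(grid))   # stepwise integral between jumps

rng = np.random.default_rng(2)
sim = rng.normal(100, 5, 200)    # simulated stress samples (illustrative)
exp = rng.normal(103, 6, 50)     # measured stress samples (illustrative)
print(round(area_metric(sim, exp), 2))   # smaller means better agreement
```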
Assessment of scoliosis by direct measurement of the curvature of the spine
NASA Astrophysics Data System (ADS)
Dougherty, Geoff; Johnson, Michael J.
2009-02-01
We present two novel metrics for assessing scoliosis, in which the geometric centers of all the affected vertebrae in an antero-posterior (A-P) radiographic image are used. This is in contradistinction to the existing methods of using selected vertebrae, and determining either their endplates or the intersections of their diagonals, to define a scoliotic angle. Our first metric delivers a scoliotic angle, comparable to the Cobb and Ferguson angles. It measures the sum of the angles between the centers of the affected vertebrae, and avoids the need for an observer to decide on the extent of component curvatures. Our second metric calculates the normalized root-mean-square curvature of the smoothest path comprising piece-wise polynomial splines fitted to the geometric centers of the vertebrae. The smoothest path is useful in modeling the spinal curvature. Our metrics were compared to existing methods using radiographs from a group of twenty subjects with spinal curvatures of varying severity. Their values were strongly correlated with those of the scoliotic angles (r = 0.850-0.886), indicating that they are valid surrogates for measuring the severity of scoliosis. Our direct use of positional data removes the vagaries of determining variably shaped endplates and circumvents the significant inter- and intra-observer errors of the Cobb and Ferguson methods. Although we applied our metrics to two-dimensional (2-D) data in this paper, they are equally applicable to three-dimensional (3-D) data. We anticipate that they will prove to be the basis for a reliable 3-D measurement and classification system.
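A hedged sketch of the second metric's computation, assuming illustrative vertebral-center coordinates: cubic splines are fitted through the centers and the RMS of the path curvature is reported:

```python
# RMS curvature of a spline path through vertebral centers.
import numpy as np
from scipy.interpolate import CubicSpline

centers = np.array([[0., 0.], [5., 40.], [12., 80.], [8., 120.], [2., 160.]])
t = np.linspace(0, 1, len(centers))
sx, sy = CubicSpline(t, centers[:, 0]), CubicSpline(t, centers[:, 1])

tt = np.linspace(0, 1, 500)
x1, y1 = sx(tt, 1), sy(tt, 1)            # first derivatives along the path
x2, y2 = sx(tt, 2), sy(tt, 2)            # second derivatives
kappa = np.abs(x1 * y2 - y1 * x2) / (x1**2 + y1**2) ** 1.5   # plane curvature
print(np.sqrt(np.mean(kappa**2)))        # RMS curvature of the fitted path
```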
Getting the message across: using ecological integrity to communicate with resource managers
Mitchell, Brian R.; Tierney, Geraldine L.; Schweiger, E. William; Miller, Kathryn M.; Faber-Langendoen, Don; Grace, James B.
2014-01-01
This chapter describes and illustrates how concepts of ecological integrity, thresholds, and reference conditions can be integrated into a research and monitoring framework for natural resource management. Ecological integrity has been defined as a measure of the composition, structure, and function of an ecosystem in relation to the system’s natural or historical range of variation, as well as perturbations caused by natural or anthropogenic agents of change. Using ecological integrity to communicate with managers requires five steps, often implemented iteratively: (1) document the scale of the project and the current conceptual understanding and reference conditions of the ecosystem, (2) select appropriate metrics representing integrity, (3) define externally verified assessment points (metric values that signify an ecological change or need for management action) for the metrics, (4) collect data and calculate metric scores, and (5) summarize the status of the ecosystem using a variety of reporting methods. While we present the steps linearly for conceptual clarity, actual implementation of this approach may require addressing the steps in a different order or revisiting steps (such as metric selection) multiple times as data are collected. Knowledge of relevant ecological thresholds is important when metrics are selected, because thresholds identify where small changes in an environmental driver produce large responses in the ecosystem. Metrics with thresholds at or just beyond the limits of a system’s range of natural variability can be excellent, since moving beyond the normal range produces a marked change in their values. Alternatively, metrics with thresholds within but near the edge of the range of natural variability can serve as harbingers of potential change. Identifying thresholds also contributes to decisions about selection of assessment points. In particular, if there is a significant resistance to perturbation in an ecosystem, with threshold behavior not occurring until well beyond the historical range of variation, this may provide a scientific basis for shifting an ecological assessment point beyond the historical range. We present two case studies using ongoing monitoring by the US National Park Service Vital Signs program that illustrate the use of an ecological integrity approach to communicate ecosystem status to resource managers. The Wetland Ecological Integrity in Rocky Mountain National Park case study uses an analytical approach that specifically incorporates threshold detection into the process of establishing assessment points. The Forest Ecological Integrity of Northeastern National Parks case study describes a method for reporting ecological integrity to resource managers and other decision makers. We believe our approach has the potential for wide applicability for natural resource management.
NASA Astrophysics Data System (ADS)
Pham, A. D.
2017-10-01
The benthic macroinvertebrates living on the channel bottom are among the most promising potential indicators of river health for the Saigon River and its tributaries, with hydrochemistry playing a supporting role. An evaluation of the interrelationships within this approach is necessary. This work identified and tested these relationships to improve the method for water quality assessment. Data from a watershed of over 4,500 km2 were used as a representative example for the Saigon River and its tributaries. The data covered March and September of 2007, 2008, 2009, 2010, and 2015. To implement this evaluation, the analyses were based on the accepted methodology of the Mekong River Commission and on prior studies of biological status assessment. For correlation analyses, selected environmental variables were compared with ecological indices based on benthic macroinvertebrates. The results showed that the metrics of Species Richness, H', and 1-DS had significant and strong relationships with the water quality variables DO, BOD5, TN, and TP (R2 = 0.3751-0.8866; P << 0.05), while the Abundance metric of benthic macroinvertebrates did not have a statistically significant relationship with any water quality variable (R2 = 0.0000-0.0744; P > 0.05). Additionally, the metrics of Species Richness, H', and 1-DS were negatively correlated with pH and TSS. Both univariate and multivariate analyses were used to examine the ecological quality of the Saigon River and its tributaries; benthic macroinvertebrates appear to be the most sensitive indicator correlating with physicochemical variables. This demonstrates that the approach can be applied to describe water quality in the Saigon River and its tributaries.
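A minimal sketch of one diversity metric used (Shannon's H') and its regression against a water-quality variable; the taxon counts and DO values are invented for illustration:

```python
# Shannon diversity from taxon counts, regressed on dissolved oxygen.
import numpy as np
from scipy.stats import linregress

def shannon(counts):
    p = np.asarray(counts, float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

sites = [[30, 12, 8, 5, 1], [50, 40, 2], [10, 10, 9, 8, 7, 6]]  # counts per taxon
h = [shannon(s) for s in sites]
do = [6.5, 3.1, 7.8]                     # dissolved oxygen at the same sites, mg/L
fit = linregress(do, h)
print(f"R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.3f}")
```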
USDA-ARS?s Scientific Manuscript database
In order to control algal blooms, stressor-response relationships between water quality metrics, environmental variables, and algal growth should be understood and modeled. Machine-learning methods were suggested to express stressor-response relationships found by application of mechanistic water qu...
Selective Attrition and Intraindividual Variability in Response Time Moderate Cognitive Change
Yao, Christie; Stawski, Robert S.; Hultsch, David F.; MacDonald, Stuart W.S.
2016-01-01
Objectives Selection of a developmental time metric is useful for understanding causal processes that underlie aging-related cognitive change and for identifying potential moderators of cognitive decline. Building on research suggesting that time to attrition is a metric sensitive to non-normative influences on aging (e.g., subclinical health conditions), we examined reason for attrition and intraindividual variability (IIV) in reaction time as predictors of cognitive performance. Method Three hundred four community-dwelling older adults (64-92 years) completed annual assessments in a longitudinal study. IIV was calculated from baseline performance on reaction time tasks. Multilevel models were fit to examine patterns and predictors of cognitive change. Results We show that time to attrition was associated with cognitive decline. Greater IIV was associated with declines on executive functioning and episodic memory measures. Attrition due to personal health reasons was also associated with decreased executive functioning compared to individuals who remained in the study. Discussion These findings suggest that time to attrition is a useful metric for representing cognitive change, and that reason for attrition and IIV are predictive of non-normative influences that may underlie instances of cognitive loss in older adults. PMID:26647008
Wagner, Wolfgang; Hansen, Karolina; Kronberger, Nicole
2014-12-01
Growing globalisation of the world draws attention to cultural differences between people from different countries or from different cultures within countries. Notwithstanding the diversity of people's worldviews, current cross-cultural research still faces the challenge of how to avoid ethnocentrism; comparing Western-driven phenomena with similar variables across countries without checking their conceptual equivalence is clearly problematic. In the present article we argue that simple comparison of measurements (in the quantitative domain) or of semantic interpretations (in the qualitative domain) across cultures easily leads to inadequate results. Questionnaire items or text produced in interviews or via open-ended questions have culturally laden meanings and cannot be mapped onto the same semantic metric. We call the culture-specific space of, and relationship between, variables or meanings a 'cultural metric', that is, a set of notions that are inter-related and that mutually specify each other's meaning. We illustrate the problems and their possible solutions with examples from quantitative and qualitative research. The suggested methods allow researchers to respect the semantic space of notions in cultures and language groups, so that the resulting similarities or differences between cultures can be better understood and interpreted.
Stochastic effects in EUV lithography: random, local CD variability, and printing failures
NASA Astrophysics Data System (ADS)
De Bisschop, Peter
2017-10-01
Stochastic effects in lithography are usually quantified through local CD variability metrics, such as line-width roughness or local CD uniformity (LCDU), and these quantities have been measured and studied intensively, both in EUV and optical lithography. Next to the CD-variability, stochastic effects can also give rise to local, random printing failures, such as missing contacts or microbridges in spaces. When these occur, there often is no (reliable) CD to be measured locally, and then such failures cannot be quantified with the usual CD-measuring techniques. We have developed algorithms to detect such stochastic printing failures in regular line/space (L/S) or contact- or dot-arrays from SEM images, leading to a stochastic failure metric that we call NOK (not OK), which we consider a complementary metric to the CD-variability metrics. This paper will show how both types of metrics can be used to experimentally quantify dependencies of stochastic effects to, e.g., CD, pitch, resist, exposure dose, etc. As it is also important to be able to predict upfront (in the OPC verification stage of a production-mask tape-out) whether certain structures in the layout are likely to have a high sensitivity to stochastic effects, we look into the feasibility of constructing simple predictors, for both stochastic CD-variability and printing failure, that can be calibrated for the process and exposure conditions used and integrated into the standard OPC verification flow. Finally, we briefly discuss the options to reduce stochastic variability and failure, considering the entire patterning ecosystem.
Quality assessment of color images based on the measure of just noticeable color difference
NASA Astrophysics Data System (ADS)
Chou, Chun-Hsien; Hsu, Yun-Hsiang
2014-01-01
Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information of the reproduced images. An accurate objective image quality assessment (IQA) method is expected to give results that agree closely with subjective assessment. To assess the quality of color images, many approaches simply apply a metric designed for grayscale images to each of the three color channels of the color image, neglecting the correlation among the channels. In this paper, a metric for assessing the quality of color images is proposed, in which a model of variable just-noticeable color difference (VJNCD) is employed to estimate the visibility thresholds of distortion inherent in each color pixel. With the estimated visibility thresholds, the proposed metric measures the average perceptible distortion in terms of quantized distortion according to a perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference enumerated by CIEDE2000 into an objective score of perceptual quality. The perceptual error map in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the quality of color images in terms of the correlation between objective scores and subjective evaluation.
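A heavily hedged sketch of the thresholding idea only (not the VJNCD model itself): per-pixel distortion counts toward the quality score only where it exceeds a visibility threshold. All arrays are synthetic:

```python
# Perceptible distortion: sub-threshold error is treated as invisible.
import numpy as np

rng = np.random.default_rng(3)
diff = np.abs(rng.normal(0, 4, size=(64, 64)))   # per-pixel color difference
jnd = np.full((64, 64), 3.0)                     # per-pixel visibility threshold

perceptible = np.maximum(diff - jnd, 0)          # clip away invisible error
score = perceptible.mean()                       # lower means better quality
print(round(score, 3))
```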
Core Hunter 3: flexible core subset selection.
De Beukelaer, Herman; Davenport, Guy F; Fack, Veerle
2018-05-31
Core collections provide genebank curators and plant breeders a way to reduce the size of their collections and populations, while minimizing impact on genetic diversity and allele frequency. Many methods have been proposed to generate core collections, often using distance metrics to quantify the similarity of two accessions, based on genetic marker data or phenotypic traits. Core Hunter is a multi-purpose core subset selection tool that uses local search algorithms to generate subsets relying on one or more metrics, including several distance metrics and allelic richness. In version 3 of Core Hunter (CH3) we have incorporated two new, improved methods for summarizing distances to quantify the diversity or representativeness of the core collection. A comparison of CH3 and Core Hunter 2 (CH2) showed that these new metrics can be effectively optimized with less complex algorithms than those used in CH2. CH3 is more effective at maximizing the improved diversity metric than CH2, still ensures a high average and minimum distance, and is faster for large datasets. Using CH3, a simple stochastic hill-climber is able to find highly diverse core collections, and the more advanced parallel tempering algorithm further increases the quality of the core and further reduces variability across independent samples. We also evaluate the ability of CH3 to simultaneously maximize diversity and either representativeness or allelic richness, and compare the results with those of the GDOpt and SimEli methods. CH3 can sample cores as representative as those of GDOpt, which was specifically designed for this purpose, and is able to construct cores that are simultaneously more diverse and either more representative or higher in allelic richness than those obtained by SimEli. In version 3, Core Hunter has been updated to include two new core subset selection metrics that construct cores for representativeness or diversity, with improved performance. It combines and outperforms the strengths of other methods, as it (simultaneously) optimizes a variety of metrics. In addition, CH3 is an improvement over CH2, with the option to use genetic marker data or phenotypic traits, or both, and improved speed. Core Hunter 3 is freely available at http://www.corehunter.org.
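A minimal sketch of the local-search idea (not Core Hunter's implementation): a stochastic hill-climber that swaps accessions in and out of a core set to maximize mean pairwise distance over synthetic marker data:

```python
# Stochastic hill-climber for a diverse core subset.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(4)
markers = rng.integers(0, 2, size=(100, 50))      # 100 accessions, 50 binary markers
D = squareform(pdist(markers, metric="hamming"))  # pairwise distance matrix

def diversity(core):                              # mean pairwise distance in core
    sub = D[np.ix_(core, core)]
    return sub.sum() / (len(core) * (len(core) - 1))

core = list(rng.choice(100, size=20, replace=False))
for _ in range(5000):                             # random swap, keep if it improves
    cand = int(rng.integers(100))
    if cand in core:
        continue
    trial = core.copy()
    trial[rng.integers(len(core))] = cand
    if diversity(trial) > diversity(core):
        core = trial
print(round(diversity(core), 3))
```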
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.
Here, we implement a variance-based distance metric (Dn) to objectively assess skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observations and model data pairs on common spatial and temporal grids improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The Dn metric is a gamma-distributed statistic that is more general than the χ2 statistic commonly used to assess model fit, which requires the assumption that the model is unbiased and can only incorporate observational error in the analysis. The Dn statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step to establish a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the Dn metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE) encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.
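A heavily hedged sketch of one plausible variance-weighted form of such a model-observation distance (the paper's exact definition of Dn may differ); the grids and uncertainty variances are synthetic:

```python
# Variance-weighted squared model-observation mismatch, averaged over a grid.
import numpy as np

rng = np.random.default_rng(5)
obs = rng.uniform(0, 1, size=1000)            # observed ice concentration
model = obs + rng.normal(0, 0.1, size=1000)   # model values on the same grid
var_obs, var_mod = 0.05**2, 0.08**2           # assumed uncertainty variances

dn = np.mean((model - obs) ** 2 / (var_obs + var_mod))
print(round(dn, 2))   # near 1 when errors are consistent with stated variances
```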
Tracking variable sedimentation rates in orbitally forced paleoclimate proxy series
NASA Astrophysics Data System (ADS)
Li, M.; Kump, L. R.; Hinnov, L.
2017-12-01
This study addresses two fundamental issues in cyclostratigraphy: quantitative testing of orbital forcing in cyclic sedimentary sequences and tracking variable sedimentation rates. The methodology proposed here addresses these issues as an inverse problem, and estimates the product-moment correlation coefficient between the frequency spectra of orbital solutions and paleoclimate proxy series over a range of "test" sedimentation rates. It is inspired by the ASM method (1). The number of orbital parameters involved in the estimation is also considered. The method relies on the hypothesis that orbital forcing had a significant impact on the paleoclimate proxy variations, and this hypothesis is also tested. The null hypothesis of no astronomical forcing is evaluated using the Beta distribution, for which the shape parameters are estimated using a Monte Carlo simulation approach. We introduce a metric to estimate the most likely sedimentation rate using the product-moment correlation coefficient, the H0 significance level, and the number of contributing orbital parameters, i.e., the CHO value. The CHO metric is applied with a sliding window to track variable sedimentation rates along the paleoclimate proxy series. Two forward models with uniform and variable sedimentation rates are evaluated to demonstrate the robustness of the method. The CHO method is applied to the classical Late Triassic Newark depth rank series; the estimated sedimentation rates match closely with previously published sedimentation rates and provide a more highly time-resolved estimate (2,3). References: (1) Meyers, S.R., Sageman, B.B., Amer. J. Sci., 307, 773-792, 2007; (2) Kent, D.V., Olsen, P.E., Muttoni, G., Earth-Sci. Rev., 166, 153-180, 2017; (3) Li, M., Zhang, Y., Huang, C., Ogg, J., Hinnov, L., Wang, Y., Zou, Z., Li, L., 2017, Earth Planet. Sci. Lett., doi:10.1016/j.epsl.2017.07.015.
Long, Andrew J.; Mahler, Barbara J.
2013-01-01
Many karst aquifers are rapidly filled and depleted and therefore are likely to be susceptible to changes in short-term climate variability. Here we explore methods that could be applied to model site-specific hydraulic responses, with the intent of simulating these responses to different climate scenarios from high-resolution climate models. We compare hydraulic responses (spring flow, groundwater level, stream base flow, and cave drip) at several sites in two karst aquifers: the Edwards aquifer (Texas, USA) and the Madison aquifer (South Dakota, USA). A lumped-parameter model simulates nonlinear soil moisture changes for estimation of recharge, and a time-variant convolution model simulates the aquifer response to this recharge. Model fit to data is 2.4% better for calibration periods than for validation periods according to the Nash–Sutcliffe coefficient of efficiency, which ranges from 0.53 to 0.94 for validation periods. We use metrics that describe the shapes of the impulse-response functions (IRFs) obtained from convolution modeling to make comparisons in the distribution of response times among sites and between aquifers. Time-variant IRFs were applied to 62% of the sites. Principal component analysis (PCA) of metrics describing the shapes of the IRFs indicates three principal components that together account for 84% of the variability in IRF shape: the first is related to IRF skewness and temporal spread and accounts for 51% of the variability; the second and third largely are related to time-variant properties and together account for 33% of the variability. Sites with IRFs that dominantly comprise exponential curves are separated geographically from those dominantly comprising lognormal curves in both aquifers as a result of spatial heterogeneity. The use of multiple IRF metrics in PCA is a novel method to characterize, compare, and classify the way in which different sites and aquifers respond to recharge. As convolution models are developed for additional aquifers, they could contribute to an IRF database and a general classification system for karst aquifers.
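A minimal sketch of the convolution step in such models, assuming a synthetic daily recharge series and an illustrative lognormal impulse-response function:

```python
# Spring flow as recharge convolved with a lognormal IRF.
import numpy as np

days = np.arange(1, 366)
sigma, mu = 0.8, 3.5                      # illustrative lognormal IRF parameters
irf = np.exp(-((np.log(days) - mu) ** 2) / (2 * sigma**2)) / (days * sigma)
irf /= irf.sum()                          # normalize so total response is 1

rng = np.random.default_rng(6)
recharge = np.where(rng.uniform(size=730) < 0.1,
                    rng.exponential(5, 730), 0.0)   # sparse recharge pulses
flow = np.convolve(recharge, irf)[:730]             # simulated aquifer response
print(flow[360:365].round(2))
```

Metrics describing the IRF shape (skewness, temporal spread) computed from `irf` are the kind of quantities the authors feed into PCA to classify site responses.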
Field Validity of Heart Rate Variability Metrics Produced by QRSTool and CMetX
ERIC Educational Resources Information Center
Hibbert, Anita S.; Weinberg, Anna; Klonsky, E. David
2012-01-01
Interest in heart rate variability (HRV) metrics as markers of physiological and psychological health continues to grow beyond those with psychophysiological expertise, increasing the importance of developing suitable tools for researchers new to the field. Allen, Chambers, and Towers (2007) developed QRSTool and CMetX software as simple,…
Application of random forests methods to diabetic retinopathy classification analyses.
Casanova, Ramon; Saldana, Santiago; Chew, Emily Y; Danis, Ronald P; Greven, Craig M; Ambrosius, Walter T
2014-01-01
Diabetic retinopathy (DR) is one of the leading causes of blindness in the United States and worldwide. DR is a silent disease that may go unnoticed until it is too late for effective treatment. Therefore, early detection could improve the chances of therapeutic interventions that would alleviate its effects. Graded fundus photography and systemic data from 3443 ACCORD-Eye Study participants were used to estimate Random Forest (RF) and logistic regression classifiers. We studied the impact of sample size on classifier performance and the possibility of using RF-generated class-conditional probabilities as metrics describing DR risk. RF measures of variable importance were used to detect factors that affect classification performance. Both types of data were informative when discriminating participants with or without DR. RF-based models produced much higher classification accuracy than those based on logistic regression. Combining both types of data did not increase accuracy but did increase statistical discrimination of healthy participants who subsequently did or did not have DR events during four years of follow-up. RF variable importance criteria revealed that microaneurysm counts in both eyes played the most important role in discrimination among the graded fundus variables, while the number of medicines and diabetes duration were the most relevant among the systemic variables. We have introduced RF methods to DR classification analyses based on fundus photography data. In addition, we propose an approach to DR risk assessment based on metrics derived from graded fundus photography and systemic data. Our results suggest that RF methods could be a valuable tool for diagnosing DR and evaluating its progression.
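A minimal sketch of the modeling approach described, using scikit-learn on synthetic data (not ACCORD-Eye data): class-conditional probabilities as a risk metric plus variable importances:

```python
# Random forest: predicted class probabilities as risk, plus importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 6))     # stand-ins for fundus and systemic variables
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 500) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
risk = rf.predict_proba(X)[:, 1]  # class-conditional probability as a risk score
print(risk[:5].round(2))
print(rf.feature_importances_.round(2))  # which variables drive discrimination
```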
Helmer, K G; Chou, M-C; Preciado, R I; Gimi, B; Rollins, N K; Song, A; Turner, J; Mori, S
2016-02-27
MRI-based multi-site trials now routinely include some form of diffusion-weighted imaging (DWI) in their protocol. These studies can include data originating from scanners built by different vendors, each with their own set of unique protocol restrictions, including restrictions on the number of available gradient directions, whether an externally generated list of gradient directions can be used, and restrictions on the echo time (TE). One challenge of multi-site studies is to create a common imaging protocol that will result in a reliable and accurate set of diffusion metrics. The present study describes the effect of site, scanner vendor, field strength, and TE on two common metrics: the first moment of the diffusion tensor field (mean diffusivity, MD) and the fractional anisotropy (FA). We have shown in earlier work that ROI metrics and the means of MD and FA histograms are not sufficiently sensitive for use in site characterization. Here we use the distance between whole-brain histograms of FA and MD to investigate within- and between-site effects. We conclude that the variability of DTI metrics due to site, vendor, field strength, and echo time could influence the results of multi-center trials and that histogram distance is a sensitive metric for each of these variables.
Separation of variables in Maxwell equations in Plebański-Demiański spacetime
NASA Astrophysics Data System (ADS)
Frolov, Valeri P.; Krtouš, Pavel; Kubizňák, David
2018-05-01
A new method for separating variables in the Maxwell equations in four- and higher-dimensional Kerr-(A)dS spacetimes proposed recently by Lunin is generalized to any off-shell metric that admits a principal Killing-Yano tensor. The key observation is that Lunin's ansatz for the vector potential can be formulated in a covariant form—in terms of the principal tensor. In particular, focusing on the four-dimensional case we demonstrate separability of Maxwell's equations in the Kerr-NUT-(A)dS and the Plebański-Demiański family of spacetimes. The new method of separation of variables is quite different from the standard approach based on the Newman-Penrose formalism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nyflot, MJ; Yang, F; Byrd, D
Purpose: Despite increased use of heterogeneity metrics for PET imaging, standards for metrics such as textural features have yet to be developed. We evaluated the quantitative variability caused by image acquisition and reconstruction parameters on PET textural features. Methods: PET images of the NEMA IQ phantom were simulated with realistic image acquisition noise. Thirty-five features based on intensity histograms (IH), co-occurrence matrices (COM), neighborhood-difference matrices (NDM), and zone-size matrices (ZSM) were evaluated within lesions (13, 17, 22, 28, 33 mm diameter). Variability in metrics across 50 independent images was evaluated as percent difference from the mean for three phantom girths (850, 1030, 1200 mm) and two OSEM reconstructions (2 iterations, 28 subsets, 5 mm FWHM filtration vs 6 iterations, 28 subsets, 8.6 mm FWHM filtration). Also, the patient sample size needed to detect a clinical effect of 30% with Bonferroni-corrected α=0.001 and 95% power was estimated. Results: As a class, NDM features demonstrated the greatest sensitivity in means (5-50% difference for medium girth and reconstruction comparisons and 10-100% for large girth comparisons). Some IH features (standard deviation, energy, entropy) had variability below 10% for all sensitivity studies, while others (kurtosis, skewness) had variability above 30%. COM and ZSM features had complex sensitivities; correlation, energy, entropy (COM) and zone percentage, short-zone emphasis, zone-size non-uniformity (ZSM) had variability less than 5%, while other metrics had differences up to 30%. Trends were similar for sample size estimation; for example, coarseness, contrast, and strength required 12, 38, and 52 patients to detect a 30% effect for the small girth case but 38, 88, and 128 patients in the large girth case. Conclusion: The sensitivity of PET textural features to image acquisition and reconstruction parameters is large and feature-dependent. Standards are needed to ensure that prospective trials which incorporate textural features are properly designed to detect clinical endpoints. Supported by NIH grants R01 CA169072, U01 CA148131, NCI Contract (SAIC-Frederick) 24XS036-004, and a research contract from GE Healthcare.
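A hedged sketch of one family of the features studied, computing co-occurrence-matrix (COM) texture metrics with scikit-image (assumes scikit-image >= 0.19 for the graycomatrix spelling); the patch is synthetic, not PET data:

```python
# GLCM texture metrics on a quantized image patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(8)
patch = rng.integers(0, 32, size=(28, 28)).astype(np.uint8)  # 32 gray levels

glcm = graycomatrix(patch, distances=[1], angles=[0], levels=32,
                    symmetric=True, normed=True)
for prop in ("contrast", "energy", "correlation"):
    print(prop, round(float(graycoprops(glcm, prop)[0, 0]), 3))
```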
Day, Suzanne; Mason, Robin; Tannenbaum, Cara; Rochon, Paula A
2017-01-01
Integrating sex and gender in health research is essential to produce the best possible evidence to inform health care. Comprehensive integration of sex and gender requires considering these variables from the very beginning of the research process, starting at the proposal stage. To promote excellence in sex and gender integration, we have developed a set of metrics to assess the quality of sex and gender integration in research proposals. These metrics are designed to assist both researchers in developing proposals and reviewers in making funding decisions. We developed this tool through an iterative three-stage method involving 1) review of existing sex and gender integration resources and initial metrics design, 2) expert review and feedback via anonymous online survey (Likert scale and open-ended questions), and 3) analysis of feedback data and collective revision of the metrics. We received feedback on the initial metrics draft from 20 reviewers with expertise in conducting sex- and/or gender-based health research. The majority of reviewers responded positively to questions regarding the utility, clarity and completeness of the metrics, and all reviewers provided responses to open-ended questions about suggestions for improvements. Coding and analysis of responses identified three domains for improvement: clarifying terminology, refining content, and broadening applicability. Based on this analysis we revised the metrics into the Essential Metrics for Assessing Sex and Gender Integration in Health Research Proposals Involving Human Participants, which outlines criteria for excellence within each proposal component and provides illustrative examples to support implementation. By enhancing the quality of sex and gender integration in proposals, the metrics will help to foster comprehensive, meaningful integration of sex and gender throughout each stage of the research process, resulting in better quality evidence to inform health care for all.
NASA Astrophysics Data System (ADS)
Eum, H. I.; Cannon, A. J.
2015-12-01
Climate models are a key tool for investigating the impacts of projected future climate conditions on regional hydrologic systems. However, there is a considerable mismatch in spatial resolution between GCMs and regional applications, in particular for a region characterized by complex terrain such as the Korean peninsula. Therefore, a downscaling procedure is essential to assess regional impacts of climate change. Numerous statistical downscaling methods have been used, mainly due to their computational efficiency and simplicity. In this study, four statistical downscaling methods [Bias-Correction/Spatial Disaggregation (BCSD), Bias-Correction/Constructed Analogue (BCCA), Multivariate Adaptive Constructed Analogs (MACA), and Bias-Correction/Climate Imprint (BCCI)] are applied to downscale the latest Climate Forecast System Reanalysis data to stations for precipitation, maximum temperature, and minimum temperature over South Korea. Using a split-sampling scheme, all methods are calibrated with observational station data for the 19 years from 1973 to 1991 and tested on the recent 19 years from 1992 to 2010. To assess the skill of the downscaling methods, we construct a comprehensive suite of performance metrics that measure the ability to reproduce temporal correlation, distribution, spatial correlation, and extreme events. In addition, we employ the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to identify robust statistical downscaling methods based on the performance metrics for each season. The results show that downscaling skill is considerably affected by the skill of CFSR, and all methods lead to large improvements in representing all performance metrics. According to the seasonal performance metrics evaluated, when TOPSIS is applied, MACA is identified as the most reliable and robust method for all variables and seasons. Note that this result is derived from CFSR output, which is regarded as near-perfect climate data in climate studies; the ranking may therefore change when various GCMs are downscaled and evaluated. Nevertheless, it may be informative for end-users (i.e., modelers or water resources managers) to understand and select more suitable downscaling methods corresponding to priorities in regional applications.
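A minimal sketch of the TOPSIS ranking step, assuming an illustrative decision matrix of four methods scored on three benefit-type performance metrics with invented weights:

```python
# TOPSIS: rank alternatives by closeness to the ideal solution.
import numpy as np

scores = np.array([[0.8, 0.7, 0.9],      # BCSD   (illustrative scores)
                   [0.6, 0.8, 0.7],      # BCCA
                   [0.9, 0.9, 0.8],      # MACA
                   [0.7, 0.6, 0.8]])     # BCCI
w = np.array([0.4, 0.3, 0.3])            # metric weights (assumed)

v = scores / np.linalg.norm(scores, axis=0) * w   # normalize, then weight
ideal, anti = v.max(axis=0), v.min(axis=0)        # ideal and anti-ideal points
d_plus = np.linalg.norm(v - ideal, axis=1)
d_minus = np.linalg.norm(v - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)          # higher ranks better
print(closeness.round(3))                         # MACA ranks first here
```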
A Multimetric Benthic Macroinvertebrate Index for the Assessment of Stream Biotic Integrity in Korea
Jun, Yung-Chul; Won, Doo-Hee; Lee, Soo-Hyung; Kong, Dong-Soo; Hwang, Soon-Jin
2012-01-01
At a time when anthropogenic activities are increasingly disturbing the overall ecological integrity of freshwater ecosystems, monitoring of biological communities is central to assessing the health and function of streams. This study aimed to use a large nation-wide database to develop a multimetric index (the Korean Benthic macroinvertebrate Index of Biological Integrity—KB-IBI) applicable to the biological assessment of Korean streams. Reference and impaired conditions were determined based on watershed, chemical, and physical criteria. Eight of an initial 34 candidate metrics were selected using a stepwise procedure that evaluated metric variability, redundancy, sensitivity, and responsiveness to environmental gradients. The selected metrics were number of taxa, percent Ephemeroptera-Plecoptera-Trichoptera (EPT) individuals, percent of a dominant taxon, percent taxa abundance without Chironomidae, Shannon's diversity index, percent gatherer individuals, ratio of filterers and scrapers, and the Korean saprobic index. Our multimetric index successfully distinguished reference from impaired conditions. A scoring system was established for each core metric using its quartile range and response to anthropogenic disturbances. The multimetric index was calculated by aggregating the individual metric scores, and the value range was quadrisected to provide a narrative criterion (Poor, Fair, Good and Excellent) to describe the biological integrity of the streams in the study. A validation procedure showed that the index is an effective method for evaluating stream conditions, and thus is appropriate for use in future studies measuring the long-term status of streams and the effectiveness of restoration methods. PMID:23202765
Johnson, Robin R.; Stone, Bradly T.; Miranda, Carrie M.; Vila, Bryan; James, Lois; James, Stephen M.; Rubio, Roberto F.; Berka, Chris
2014-01-01
Objective: To demonstrate that psychophysiology may have applications for objective assessment of expertise development in deadly force judgment and decision making (DFJDM). Background: Modern training techniques focus on improving decision-making skills with participative assessment between trainees and subject matter experts, primarily through subjective observation; objective metrics need to be developed. The current proof-of-concept study explored the potential for psychophysiological metrics in deadly force judgment contexts. Method: Twenty-four participants (novice, expert) were recruited. All wore a wireless electroencephalography (EEG) device to collect psychophysiological data during high-fidelity deadly force judgment and decision-making simulations using a modified Glock firearm. Participants were exposed to 27 video scenarios, one-third of which would have justified use of deadly force. Pass/fail was determined by whether the participant used deadly force appropriately. Results: Experts had a significantly higher pass rate compared to novices (p < 0.05). Multiple metrics were shown to distinguish novices from experts. Hierarchical regression analyses indicate that psychophysiological variables are able to explain 72% of the variability in expert performance, but only 37% in novices. Discriminant function analysis (DFA) using psychophysiological metrics was able to discern between experts and novices with 72.6% accuracy. Conclusion: While limited by small sample size, the results suggest that psychophysiology may be developed for use as an objective measure of expertise in DFJDM. Specifically, discriminant function measures may have the potential to objectively identify expert skill acquisition. Application: Psychophysiological metrics may support a performance model with the potential to optimize simulator-based DFJDM training. These performance models could be used for trainee feedback and/or by the instructor to assess performance objectively. PMID:25100966
Tee, James J L; Yang, Yesa; Kalitzeos, Angelos; Webster, Andrew; Bainbridge, James; Weleber, Richard G; Michaelides, Michel
2018-05-01
To characterize bilateral visual function, interocular variability, and progression by using static perimetry-derived volumetric and pointwise metrics in subjects with retinitis pigmentosa associated with mutations in the retinitis pigmentosa GTPase regulator (RPGR) gene. This was a prospective longitudinal observational study of 47 genetically confirmed subjects. Visual function was assessed with ETDRS and Pelli-Robson charts and Octopus 900 static perimetry using a customized, radially oriented 185-point grid. Three-dimensional hill-of-vision topographic models were produced and interrogated with the Visual Field Modeling and Analysis software to obtain three volumetric metrics: VTotal, V30, and V5. These were analyzed together with Octopus mean sensitivity values. Interocular differences were assessed with the Bland-Altman method. Metric-specific exponential decline rates were calculated. Baseline symmetry was demonstrated by relative interocular difference values of 1% for VTotal and 8% for V30. The degree of symmetry varied between subjects and was quantified with the subject percentage interocular difference (SPID), which was 16% for VTotal and 17% for V30. Interocular symmetry in progression was greatest when quantified by VTotal and V30, with 73% and 64% of subjects possessing interocular rate differences smaller in magnitude than the respective annual progression rates. Functional decline was evident with increasing age; an overall annual exponential decline of 6% was evident with both VTotal and V30. In general, good interocular symmetry exists; however, there was variation both between subjects and across metrics. Our findings will guide patient selection and design of RPGR treatment trials, and provide clinicians with specific prognostic information to offer patients affected by this condition.
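A minimal sketch of the Bland-Altman assessment of interocular agreement (bias and 95% limits of agreement); the right- and left-eye values below are simulated, not patient data:

```python
# Bland-Altman: bias and limits of agreement between paired measurements.
import numpy as np

rng = np.random.default_rng(9)
right = rng.normal(60, 15, 47)          # e.g., VTotal for right eyes
left = right + rng.normal(0, 5, 47)     # left eyes, symmetric on average

diff = right - left
bias = diff.mean()
half_loa = 1.96 * diff.std(ddof=1)      # half-width of 95% limits of agreement
print(f"bias = {bias:.2f}, LoA = [{bias - half_loa:.2f}, {bias + half_loa:.2f}]")
```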
NASA Technical Reports Server (NTRS)
Pearsons, Karl S.; Howe, Richard R.; Sneddon, Matthew D.; Fidell, Sanford
1996-01-01
Thirty audiometrically screened test participants judged the relative annoyance of two comparison (variable level) and thirty-four standard (fixed level) signals in an adaptive paired comparison psychoacoustic study. The signal ensemble included both FAR Part 36 Stage 2 and 3 aircraft overflights, as well as synthesized aircraft noise signatures and other non-aircraft signals. All test signals were presented for judgment as heard indoors, in the presence of continuous background noise, under free-field listening conditions in an anechoic chamber. Analyses of the performance of 30 noise metrics as predictors of these annoyance judgments confirmed that the more complex metrics were generally more accurate and precise predictors than the simpler methods. EPNL was somewhat less accurate and precise as a predictor of the annoyance judgments than a duration-adjusted variant of Zwicker's Loudness Level.
A Neighborhood Wealth Metric for Use in Health Studies
Moudon, Anne Vernez; Cook, Andrea J.; Ulmer, Jared; Hurvitz, Philip M.; Drewnowski, Adam
2011-01-01
Background Measures of neighborhood deprivation used in health research are typically based on conventional area-based SES. Purpose The aim of this study is to examine new data and measures of SES for use in health research. Specifically, assessed property values are introduced as a new individual-level metric of wealth and tested for their ability to substitute for conventional area-based SES as measures of neighborhood deprivation. Methods The analysis was conducted in 2010 using data from 1922 participants in the 2008–2009 survey of the Seattle Obesity Study (SOS). It compared the relative strength of the association between the individual-level neighborhood wealth metric (assessed property values) and area-level SES measures (including education, income, and percentage above poverty as single variables, and as the composite Singh index) on the binary outcome fair/poor general health status. Analyses were adjusted for gender, categorical age, race, employment status, home ownership, and household income. Results The neighborhood wealth measure was more predictive of fair/poor health status than area-level SES measures, calculated either as single variables or as indices (lower DIC measures for all models). The odds of fair/poor health status decreased by a factor of 0.85 [0.77, 0.93] per $50,000 increase in neighborhood property values after adjusting for individual-level SES measures. Conclusions The proposed individual-level metric of neighborhood wealth, if replicated in other areas, could replace area-based SES measures, thus simplifying analyses of contextual effects on health. PMID:21665069
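The reported effect can be reproduced in form (not in substance) with a standard logistic model. A sketch with synthetic data, where the wealth coefficient is chosen so that the odds ratio per $50,000 comes out near the reported 0.85; variable names and values are illustrative only:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
wealth_50k = rng.gamma(4.0, 1.5, n)        # property value in $50,000 units (synthetic)
income = rng.normal(0, 1, n)               # a stand-in individual-level covariate
lin = 0.5 - 0.163 * wealth_50k - 0.3 * income
y = rng.binomial(1, 1 / (1 + np.exp(-lin)))  # 1 = fair/poor health

X = sm.add_constant(np.column_stack([wealth_50k, income]))
fit = sm.Logit(y, X).fit(disp=0)
# exponentiated coefficient = odds ratio per $50,000 increase in property value
print("OR per $50k:", np.exp(fit.params[1]))
```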
Waite, Ian R.
2014-01-01
As part of the USGS study of nutrient enrichment of streams in agricultural regions throughout the United States, about 30 sites within each of eight study areas were selected to capture a gradient of nutrient conditions. The objective was to develop watershed disturbance predictive models for macroinvertebrate and algal metrics at national and three regional landscape scales to obtain a better understanding of important explanatory variables. Explanatory variables in models were generated from landscape data, habitat, and chemistry. Instream nutrient concentration and variables assessing the amount of disturbance to the riparian zone (e.g., percent row crops or percent agriculture) were selected as the most important explanatory variables in almost all boosted regression tree models regardless of landscape scale or assemblage. Frequently, TN and TP concentrations and riparian agricultural land use variables showed a threshold-type response at relatively low values for the biotic metrics modeled. Some measure of habitat condition was also commonly selected in the final invertebrate models, though the variable(s) varied across regions. Results suggest national models tended to account for more general landscape/climate differences, while regional models incorporated both broad landscape-scale and more specific local-scale variables.
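Boosted regression trees of the kind used here are available in standard libraries. A minimal sketch with synthetic site data, hypothetical variable names, and a built-in threshold-type TN response; it mirrors the modeling style, not the USGS models themselves:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n = 240  # roughly 30 sites x 8 study areas
names = ["TN", "TP", "pct_row_crop", "pct_riparian_ag", "habitat_score"]
X = rng.uniform(0, 1, size=(n, len(names)))
# synthetic biotic metric: sharp threshold response to TN at a low value,
# plus a habitat effect and noise
y = 1.0 / (1 + np.exp(-20 * (X[:, 0] - 0.2))) + 0.3 * X[:, 4] + rng.normal(0, 0.1, n)

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01, max_depth=3)
brt.fit(X, y)
for name, imp in sorted(zip(names, brt.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:16s}{imp:.2f}")   # relative influence of each explanatory variable
```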
Weber, Benjamin; Lee, Sau L; Delvadia, Renishkumar; Lionberger, Robert; Li, Bing V; Tsong, Yi; Hochhaus, Guenther
2015-03-01
Equivalence testing of aerodynamic particle size distribution (APSD) through multi-stage cascade impactors (CIs) is important for establishing bioequivalence of orally inhaled drug products. Recent work demonstrated that the median of the modified chi-square ratio statistic (MmCSRS) is a promising metric for APSD equivalence testing of test (T) and reference (R) products as it can be applied to a reduced number of CI sites that are more relevant for lung deposition. This metric is also less sensitive to the increased variability often observed for low-deposition sites. A method to establish critical values for the MmCSRS is described here. This method considers the variability of the R product by employing a reference variance scaling approach that allows definition of critical values as a function of the observed variability of the R product. A stepwise CI equivalence test is proposed that integrates the MmCSRS as a method for comparing the relative shapes of CI profiles and incorporates statistical tests for assessing equivalence of single actuation content and impactor sized mass. This stepwise CI equivalence test was applied to 55 published CI profile scenarios, which were classified as equivalent or inequivalent by members of the Product Quality Research Institute working group (PQRI WG). The results of the stepwise CI equivalence test using a 25% difference in MmCSRS as an acceptance criterion matched those of the PQRI WG best; the decisions of the two methods agreed in 75% of the 55 CI profile scenarios.
Unravelling connections between river flow and large-scale climate: experiences from Europe
NASA Astrophysics Data System (ADS)
Hannah, D. M.; Kingston, D. G.; Lavers, D.; Stagge, J. H.; Tallaksen, L. M.
2016-12-01
The United Nations has identified better knowledge of large-scale water cycle processes as essential for socio-economic development and global water-food-energy security. In this context, and given the ever-growing concerns about climate change/variability and human impacts on hydrology, there is an urgent research need: (a) to quantify space-time variability in regional river flow, and (b) to improve hydroclimatological understanding of climate-flow connections as a basis for identifying current and future water-related issues. In this paper, we draw together studies undertaken at the pan-European scale: (1) to evaluate current methods for assessing space-time dynamics for different streamflow metrics (annual regimes, low flows and high flows) and for linking flow variability to atmospheric drivers (circulation indices, air-masses, gridded climate fields and vapour flux); and (2) to propose a plan for future research connecting streamflow and the atmospheric conditions in Europe and elsewhere. We believe this research makes a useful, unique contribution to the literature through a systematic inter-comparison of different streamflow metrics and atmospheric descriptors. In our findings, we highlight the need to consider appropriate atmospheric descriptors (dependent on the target flow metric and region of interest) and to develop analytical techniques that best characterise connections in the ocean-atmosphere-land surface process chain. Finally, we stress the need to consider not only atmospheric interactions, but also the role of river basin-scale terrestrial hydrological processes in modifying the climate signal response of river flows.
Helmer, K. G.; Chou, M-C.; Preciado, R. I.; Gimi, B.; Rollins, N. K.; Song, A.; Turner, J.; Mori, S.
2016-01-01
MRI-based multi-site trials now routinely include some form of diffusion-weighted imaging (DWI) in their protocol. These studies can include data originating from scanners built by different vendors, each with their own set of unique protocol restrictions, including restrictions on the number of available gradient directions, whether an externally-generated list of gradient directions can be used, and restrictions on the echo time (TE). One challenge of multi-site studies is to create a common imaging protocol that will result in a reliable and accurate set of diffusion metrics. The present study describes the effect of site, scanner vendor, field strength, and TE on two common metrics: the first moment of the diffusion tensor field (mean diffusivity, MD) and the fractional anisotropy (FA). We have shown in earlier work that ROI metrics and the mean of MD and FA histograms are not sufficiently sensitive for use in site characterization. Here we use the distance between whole brain histograms of FA and MD to investigate within- and between-site effects. We concluded that the variability of DTI metrics due to site, vendor, field strength, and echo time could influence the results in multi-center trials, and that histogram distance is a sensitive metric for each of these variables. PMID:27350723
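The abstract does not state which histogram distance was used, so the sketch below substitutes the Jensen-Shannon distance as one reasonable choice; the FA values are synthetic stand-ins for whole-brain voxel samples:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def fa_histogram_distance(fa_a, fa_b, bins=128):
    """Distance between two whole-brain FA histograms (FA is bounded in [0, 1])."""
    h_a, _ = np.histogram(fa_a, bins=bins, range=(0, 1), density=True)
    h_b, _ = np.histogram(fa_b, bins=bins, range=(0, 1), density=True)
    return jensenshannon(h_a, h_b)   # normalizes the histograms internally

# illustrative FA samples from two scan sessions (beta-distributed stand-ins)
rng = np.random.default_rng(3)
print(fa_histogram_distance(rng.beta(2, 5, 100_000), rng.beta(2.1, 5, 100_000)))
```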
Multi-version software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1989-01-01
A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing continued to be investigated as an efficient mechanism for removal of uncorrelated faults and common-cause faults of variable span. Work also continued on software reliability estimation methods based on non-random sampling and on the relationship between software reliability and code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were completed, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance Voting scheme.
Two-dimensional habitat modeling in the Yellowstone/Upper Missouri River system
Waddle, T. J.; Bovee, K.D.; Bowen, Z.H.
1997-01-01
This study is being conducted to provide the aquatic biology component of a decision support system being developed by the U.S. Bureau of Reclamation. In an attempt to capture the habitat needs of Great Plains fish communities we are looking beyond previous habitat modeling methods. Traditional habitat modeling approaches have relied on one-dimensional hydraulic models and lumped compositional habitat metrics to describe aquatic habitat. A broader range of habitat descriptors is available when both the composition and the configuration of habitats are considered. Habitat metrics that consider both composition and configuration can be adapted from terrestrial biology. These metrics are most conveniently accessed with spatially explicit descriptors of the physical variables driving habitat composition. Two-dimensional hydrodynamic models have advanced to the point that they may provide the spatially explicit description of physical parameters needed to address this problem. This paper reports progress to date on applying two-dimensional hydraulic and habitat models on the Yellowstone and Missouri Rivers and uses examples from the Yellowstone River to illustrate the configurational metrics as a new tool for assessing riverine habitats.
Rodriguez Gutierrez, Daniel; Manita, Muftah; Jaspan, Tim; Dineen, Robert A.; Grundy, Richard G.; Auer, Dorothee P.
2013-01-01
Background Assessment of treatment response by measuring tumor size is known to be a late and potentially confounded response index. Serial diffusion MRI has shown potential for allowing earlier and possibly more reliable response assessment in adult patients, with limited experience in clinical settings and in pediatric brain cancer. We present a retrospective study of clinical MRI data in children with high-grade brain tumors to assess and compare the values of several diffusion change metrics to predict treatment response. Methods Eighteen patients (age range, 1.9–20.6 years) with high-grade brain tumors and serial diffusion MRI (pre- and posttreatment interval range, 1–16 weeks posttreatment) were identified after obtaining parental consent. The following diffusion change metrics were compared with the clinical response status assessed at 6 months: (1) regional change in absolute and normalized apparent diffusivity coefficient (ADC), (2) voxel-based fractional volume of increased (fiADC) and decreased ADC (fdADC), and (3) a new metric based on the slope of the first principal component of functional diffusion maps (fDM). Results Responders (n = 12) differed significantly from nonresponders (n = 6) in all 3 diffusional change metrics demonstrating higher regional ADC increase, larger fiADC, and steeper slopes (P < .05). The slope method allowed the best response prediction (P < .01, η2 = 0.78) with a classification accuracy of 83% for a slope of 58° using receiver operating characteristic (ROC) analysis. Conclusions We demonstrate that diffusion change metrics are suitable response predictors for high-grade pediatric tumors, even in the presence of variable clinical diffusion imaging protocols. PMID:23585630
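The voxel-based fractional-volume metrics (fiADC, fdADC) amount to counting voxels whose ADC change exceeds a cutoff. A minimal sketch, where the 0.25e-3 mm²/s threshold and the synthetic ADC values are assumptions, not the study's calibrated settings:

```python
import numpy as np

def fractional_diffusion_volumes(adc_pre, adc_post, threshold=0.25e-3):
    """fiADC / fdADC: fraction of tumor-ROI voxels whose ADC rose or fell
    by more than a fixed threshold (mm^2/s); the cutoff here is assumed."""
    delta = np.asarray(adc_post, float) - np.asarray(adc_pre, float)
    fiadc = np.mean(delta > threshold)    # fraction of voxels with increased ADC
    fdadc = np.mean(delta < -threshold)   # fraction of voxels with decreased ADC
    return fiadc, fdadc

rng = np.random.default_rng(4)
pre = rng.normal(1.1e-3, 0.2e-3, 5000)            # illustrative tumor-ROI ADC values
post = pre + rng.normal(0.1e-3, 0.3e-3, 5000)     # post-treatment values
print(fractional_diffusion_volumes(pre, post))
```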
Darrow, Lyndsey A; Klein, Mitchel; Sarnat, Jeremy A; Mulholland, James A; Strickland, Matthew J; Sarnat, Stefanie E; Russell, Armistead G; Tolbert, Paige E
2011-01-01
Various temporal metrics of daily pollution levels have been used to examine the relationships between air pollutants and acute health outcomes. However, daily metrics of the same pollutant have rarely been systematically compared within a study. In this analysis, we describe the variability of effect estimates attributable to the use of different temporal metrics of daily pollution levels. We obtained hourly measurements of ambient particulate matter (PM₂.₅), carbon monoxide (CO), nitrogen dioxide (NO₂), and ozone (O₃) from air monitoring networks in the 20-county Atlanta area for the time period 1993-2004. For each pollutant, we created (1) a daily 1-h maximum; (2) a 24-h average; (3) a commute average; (4) a daytime average; (5) a nighttime average; and (6) a daily 8-h maximum (only for O₃). Using Poisson generalized linear models, we examined associations between daily counts of respiratory emergency department visits and the previous day's pollutant metrics. Variability was greatest across O₃ metrics, with the 8-h maximum, 1-h maximum, and daytime metrics yielding strong positive associations and the nighttime O₃ metric yielding a negative association (likely reflecting confounding by air pollutants oxidized by O₃). With the exception of the daytime metric, all of the CO and NO₂ metrics were positively associated with respiratory emergency department visits. Differences in observed associations with respiratory emergency room visits among temporal metrics of the same pollutant were influenced by the diurnal patterns of the pollutant, the spatial representativeness of the metrics, and the correlation between each metric and copollutant concentrations. Overall, the use of metrics based on the US National Ambient Air Quality Standards (for example, the use of a daily 8-h maximum O₃ as opposed to a 24-h average metric) was supported by this analysis. Comparative analysis of temporal metrics also provided insight into underlying relationships between specific air pollutants and respiratory health.
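The temporal metrics are straightforward to derive from hourly data. A sketch using pandas resampling on synthetic hourly O₃; the daytime and nighttime windows are assumptions, the commute window is omitted, and the overnight window is split across calendar days for simplicity:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
hours = pd.date_range("2004-06-01", periods=24 * 30, freq="h")
o3 = pd.Series(np.clip(rng.normal(40, 15, hours.size), 0, None), index=hours)

daily = pd.DataFrame({
    "max_1h": o3.resample("D").max(),
    "avg_24h": o3.resample("D").mean(),
    # assumed daytime/nighttime hour windows; the night average is split
    # across calendar days here, a simplification of a true overnight window
    "daytime": o3.between_time("08:00", "19:00").resample("D").mean(),
    "nighttime": o3.between_time("20:00", "07:00").resample("D").mean(),
    "max_8h": o3.rolling(8).mean().resample("D").max(),  # daily max of 8-h running mean
})
print(daily.head())
```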
Gaewsky, James P; Weaver, Ashley A; Koya, Bharath; Stitzel, Joel D
2015-01-01
A 3-phase real-world motor vehicle crash (MVC) reconstruction method was developed to analyze injury variability as a function of precrash occupant position for 2 full-frontal Crash Injury Research and Engineering Network (CIREN) cases. Phase I: A finite element (FE) simplified vehicle model (SVM) was developed and tuned to mimic the frontal crash characteristics of the CIREN case vehicle (Camry or Cobalt) using frontal New Car Assessment Program (NCAP) crash test data. Phase II: The Toyota HUman Model for Safety (THUMS) v4.01 was positioned in 120 precrash configurations per case within the SVM. Five occupant positioning variables were varied using a Latin hypercube design of experiments: seat track position, seat back angle, D-ring height, steering column angle, and steering column telescoping position. An additional baseline simulation was performed that aimed to match the precrash occupant position documented in CIREN for each case. Phase III: FE simulations were then performed using kinematic boundary conditions from each vehicle's event data recorder (EDR). HIC15, combined thoracic index (CTI), femur forces, and strain-based injury metrics in the lung and lumbar vertebrae were evaluated to predict injury. Tuning the SVM to specific vehicle models resulted in close matches between simulated and test injury metric data, allowing the tuned SVM to be used in each case reconstruction with EDR-derived boundary conditions. Simulations with the most rearward seats and reclined seat backs had the greatest HIC15, head injury risk, CTI, and chest injury risk. Calculated injury risks for the head, chest, and femur closely correlated to the CIREN occupant injury patterns. CTI in the Camry case yielded a 54% probability of Abbreviated Injury Scale (AIS) 2+ chest injury in the baseline case simulation and ranged from 34 to 88% (mean = 61%) risk in the least and most dangerous occupant positions. The greater than 50% probability was consistent with the case occupant's AIS 2 hemomediastinum. Stress-based metrics were used to predict injury to the lower leg of the Camry case occupant. The regional-level injury metrics evaluated for the Cobalt case occupant indicated a low risk of injury; however, strain-based injury metrics better predicted pulmonary contusion. Approximately 49% of the Cobalt occupant's left lung was contused, though the baseline simulation predicted 40.5% of the lung to be injured. A method to compute injury metrics and risks as functions of precrash occupant position was developed and applied to 2 CIREN MVC FE reconstructions. The reconstruction process allows for quantification of the sensitivity and uncertainty of the injury risk predictions based on occupant position to further understand important factors that lead to more severe MVC injuries.
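The occupant-positioning design of experiments can be generated with an off-the-shelf Latin hypercube sampler. A sketch using scipy; the variable bounds below are placeholders, not the ranges used in the reconstructions:

```python
import numpy as np
from scipy.stats import qmc

# Five occupant-positioning variables: seat track, seat back angle, D-ring
# height, steering column angle, column telescoping. Bounds are placeholders.
l_bounds = [0.0, 20.0, -5.0, 15.0, 0.0]
u_bounds = [25.0, 35.0, 5.0, 35.0, 5.0]

sampler = qmc.LatinHypercube(d=5, seed=0)
unit = sampler.random(n=120)                      # 120 precrash configurations per case
configs = qmc.scale(unit, l_bounds, u_bounds)     # map unit cube to physical ranges
print(configs.shape, configs[0].round(2))
```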
An exploratory survey of methods used to develop measures of performance
NASA Astrophysics Data System (ADS)
Hamner, Kenneth L.; Lafleur, Charles A.
1993-09-01
Nonmanufacturing organizations are being challenged to provide high-quality products and services to their customers, with an emphasis on continuous process improvement. Measures of performance, referred to as metrics, can be used to foster process improvement. The application of performance measurement to nonmanufacturing processes can be very difficult. This research explored methods used to develop metrics in nonmanufacturing organizations. Several methods were formally defined in the literature, and the researchers used a two-step screening process to determine the OMB Generic Method was most likely to produce high-quality metrics. The OMB Generic Method was then used to develop metrics. A few other metric development methods were found in use at nonmanufacturing organizations. The researchers interviewed participants in metric development efforts to determine their satisfaction and to have them identify the strengths and weaknesses of, and recommended improvements to, the metric development methods used. Analysis of participants' responses allowed the researchers to identify the key components of a sound metrics development method. Those components were incorporated into a proposed metric development method that was based on the OMB Generic Method, and should be more likely to produce high-quality metrics that will result in continuous process improvement.
Reliability of TMS metrics in patients with chronic incomplete spinal cord injury.
Potter-Baker, K A; Janini, D P; Frost, F S; Chabra, P; Varnerin, N; Cunningham, D A; Sankarasubramanian, V; Plow, E B
2016-11-01
Test-retest reliability analysis in individuals with chronic incomplete spinal cord injury (iSCI). The purpose of this study was to examine the reliability of neurophysiological metrics acquired with transcranial magnetic stimulation (TMS) in individuals with chronic incomplete tetraplegia. Cleveland Clinic Foundation, Cleveland, Ohio, USA. TMS metrics of corticospinal excitability, output, inhibition and motor map distribution were collected in muscles with a higher MRC grade and muscles with a lower MRC grade on the more affected side of the body. Metrics denoting upper limb function were also collected. All metrics were collected at two sessions separated by a minimum of two weeks. Reliability between sessions was determined using Spearman's correlation coefficients and concordance correlation coefficients (CCCs). We found that TMS metrics that were acquired in higher MRC grade muscles were approximately two times more reliable than those collected in lower MRC grade muscles. TMS metrics of motor map output, however, demonstrated poor reliability regardless of muscle choice (P=0.34; CCC=0.51). Correlation analysis indicated that patients with more baseline impairment and/or those in a more chronic phase of iSCI demonstrated greater variability of metrics. In iSCI, reliability of TMS metrics varies depending on the muscle grade of the tested muscle. Variability is also influenced by factors such as baseline motor function and time post SCI. Future studies that use TMS metrics in longitudinal study designs to understand functional recovery should be cautious as choice of muscle and clinical characteristics can influence reliability.
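Lin's concordance correlation coefficient, one of the two reliability statistics named above, has a closed form. A minimal sketch with illustrative test-retest values:

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between two sessions."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

# illustrative test-retest values of a TMS metric across participants
session1 = np.array([42.0, 55.0, 61.0, 48.0, 70.0])
session2 = np.array([45.0, 53.0, 64.0, 50.0, 66.0])
print(f"CCC = {concordance_cc(session1, session2):.2f}")
```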
Atlas-based automatic measurements of the morphology of the tibiofemoral joint
NASA Astrophysics Data System (ADS)
Brehler, M.; Thawait, G.; Shyr, W.; Ramsay, J.; Siewerdsen, J. H.; Zbijewski, W.
2017-03-01
Purpose: Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce the user-dependence of the metrics arising from manual identification of the anatomical landmarks. Methods: The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to those of an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Results: Intra-reader variability as high as 10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. Conclusions: The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.
Floodplain complexity and surface metrics: influences of scale and geomorphology
Scown, Murray W.; Thoms, Martin C.; DeJager, Nathan R.
2015-01-01
Many studies of fluvial geomorphology and landscape ecology examine a single river or landscape and thus lack generality, making it difficult to develop a general understanding of the linkages between landscape patterns and larger-scale driving variables. We examined the spatial complexity of eight floodplain surfaces in widely different geographic settings and determined how patterns measured at different scales relate to different environmental drivers. Floodplain surface complexity is defined as having highly variable surface conditions that are also highly organised in space. These two components of floodplain surface complexity were measured across multiple sampling scales from LiDAR-derived DEMs. The surface character and variability of each floodplain were measured using four surface metrics, namely standard deviation, skewness, coefficient of variation, and standard deviation of curvature, computed from a series of moving-window analyses ranging from 50 to 1000 m in radius. The spatial organisation of each floodplain surface was measured using spatial correlograms of the four surface metrics. Surface character, variability, and spatial organisation differed among the eight floodplains; and random, fragmented, highly patchy, and simple gradient spatial patterns were exhibited, depending upon the metric and window size. Differences in surface character and variability among the floodplains became statistically stronger with increasing sampling scale (window size), as did their associations with environmental variables. Sediment yield was consistently associated with differences in surface character and variability, as were flow discharge and variability at smaller sampling scales. Floodplain width was associated with differences in the spatial organisation of surface conditions at smaller sampling scales, while valley slope was weakly associated with differences in spatial organisation at larger scales. A comparison of floodplain landscape patterns measured at different scales would improve our understanding of the role that different environmental variables play at different scales and in different geomorphic settings.
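Moving-window surface metrics of this kind can be computed efficiently with separable filters. A sketch of the local standard deviation of elevation, using a square window as a stand-in for the study's circular windows and a smoothed random field as a toy DEM:

```python
import numpy as np
from scipy import ndimage

def local_std(dem, radius):
    """Standard deviation of elevation in a square moving window
    (a square window stands in for the study's circular windows)."""
    size = 2 * radius + 1
    mean = ndimage.uniform_filter(dem, size)
    mean_sq = ndimage.uniform_filter(dem * dem, size)
    return np.sqrt(np.clip(mean_sq - mean * mean, 0, None))  # Var = E[x^2] - E[x]^2

rng = np.random.default_rng(6)
dem = ndimage.gaussian_filter(rng.normal(0, 1, (500, 500)), 12)  # toy LiDAR DEM
print(local_std(dem, radius=50).mean())   # 50-cell radius ~ 50 m at 1 m resolution
```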
Measures of native and non-native rhythm in a quantity language.
Stockmal, Verna; Markus, Dace; Bond, Dzintra
2005-01-01
The traditional phonetic classification of language rhythm as stress-timed or syllable-timed is attributed to Pike. Recently, two different proposals have been offered for describing the rhythmic structure of languages from acoustic-phonetic measurements. Ramus has suggested a metric based on the proportion of vocalic intervals and the variability (SD) of consonantal intervals. Grabe has proposed Pairwise Variability Indices (nPVI, rPVI) calculated from the differences in vocalic and consonantal durations between successive syllables. We have calculated both the Ramus and Grabe metrics for Latvian, traditionally considered a syllable rhythm language, and for Latvian as spoken by Russian learners. Native speakers and proficient learners were very similar whereas low-proficiency learners showed high variability on some properties. The metrics did not provide an unambiguous classification of Latvian.
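The two PVI formulas are short enough to state directly. A sketch following the usual definitions (rPVI for raw successive differences, nPVI normalized by the local mean), with illustrative durations:

```python
import numpy as np

def rpvi(durations):
    """Raw Pairwise Variability Index (typically used for consonantal intervals)."""
    d = np.asarray(durations, float)
    return np.mean(np.abs(np.diff(d)))

def npvi(durations):
    """Normalized PVI (typically used for vocalic intervals), in percent."""
    d = np.asarray(durations, float)
    return 100 * np.mean(np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2))

vowels_ms = [80, 95, 70, 110, 85]   # illustrative successive vocalic intervals (ms)
print(f"nPVI = {npvi(vowels_ms):.1f}, rPVI = {rpvi(vowels_ms):.1f}")
```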
Prediction of Hydrologic Characteristics for Ungauged Catchments to Support Hydroecological Modeling
NASA Astrophysics Data System (ADS)
Bond, Nick R.; Kennard, Mark J.
2017-11-01
Hydrologic variability is a fundamental driver of ecological processes and species distribution patterns within river systems, yet the paucity of gauges in many catchments means that streamflow data are often unavailable for ecological survey sites. Filling this data gap is an important challenge in hydroecological research. To address this gap, we first test the ability to spatially extrapolate hydrologic metrics calculated from gauged streamflow data to ungauged sites as a function of stream distance and catchment area. Second, we examine the ability of statistical models to predict flow regime metrics based on climate and catchment physiographic variables. Our assessment focused on Australia's largest catchment, the Murray-Darling Basin (MDB). We found that hydrologic metrics were predictable only between sites within ~25 km of one another. Beyond this, correlations between sites declined quickly. We found less than 40% of fish survey sites from a recent basin-wide monitoring program (n = 777 sites) to fall within this 25 km range, thereby greatly limiting the ability to utilize gauge data for direct spatial transposition of hydrologic metrics to biological survey sites. In contrast, statistical model-based transposition proved effective in predicting ecologically relevant aspects of the flow regime (including metrics describing central tendency, high- and low-flows, intermittency, seasonality, and variability) across the entire gauge network (median R2 ~ 0.54, range 0.39-0.94). Modeled hydrologic metrics thus offer a useful alternative to empirical data when examining biological survey data from ungauged sites. More widespread use of these statistical tools and modeled metrics could expand our understanding of flow-ecology relationships.
Application of Random Forests Methods to Diabetic Retinopathy Classification Analyses
Casanova, Ramon; Saldana, Santiago; Chew, Emily Y.; Danis, Ronald P.; Greven, Craig M.; Ambrosius, Walter T.
2014-01-01
Background Diabetic retinopathy (DR) is one of the leading causes of blindness in the United States and world-wide. DR is a silent disease that may go unnoticed until it is too late for effective treatment. Therefore, early detection could improve the chances of therapeutic interventions that would alleviate its effects. Methodology Graded fundus photography and systemic data from 3443 ACCORD-Eye Study participants were used to estimate Random Forest (RF) and logistic regression classifiers. We studied the impact of sample size on classifier performance and the possibility of using RF generated class conditional probabilities as metrics describing DR risk. RF measures of variable importance are used to detect factors that affect classification performance. Principal Findings Both types of data were informative when discriminating participants with or without DR. RF based models produced much higher classification accuracy than those based on logistic regression. Combining both types of data did not increase accuracy but did increase statistical discrimination of healthy participants who subsequently did or did not have DR events during four years of follow-up. RF variable importance criteria revealed that microaneurysm counts in both eyes seemed to play the most important role in discrimination among the graded fundus variables, while the number of medicines and diabetes duration were the most relevant among the systemic variables. Conclusions and Significance We have introduced RF methods to DR classification analyses based on fundus photography data. In addition, we propose an approach to DR risk assessment based on metrics derived from graded fundus photography and systemic data. Our results suggest that RF methods could be a valuable tool to diagnose DR and evaluate its progression. PMID:24940623
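Using RF class-conditional probabilities as a risk metric, as described above, is a one-liner once a forest is fitted. A sketch with synthetic stand-ins for the graded fundus and systemic predictors; names and effect sizes are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 3443
# hypothetical predictors: microaneurysm counts (both eyes), medicines, duration
X = np.column_stack([
    rng.poisson(3, n), rng.poisson(3, n),
    rng.poisson(4, n), rng.gamma(5, 2, n),
])
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 * X[:, 0] + 0.1 * X[:, 3] - 2.5))))

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(Xtr, ytr)
risk = rf.predict_proba(Xte)[:, 1]   # class-conditional probability as a DR risk metric
print("accuracy:", rf.score(Xte, yte), "mean predicted risk:", risk.mean().round(3))
print("variable importances:", rf.feature_importances_.round(2))
```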
Variability aware compact model characterization for statistical circuit design optimization
NASA Astrophysics Data System (ADS)
Qiao, Ying; Qian, Kun; Spanos, Costas J.
2012-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
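Linear propagation of variance takes a first-order Taylor expansion of the metric around the nominal parameters. A sketch with a toy square-law drain-current model and an assumed parameter covariance; the EKV model itself is not reproduced here:

```python
import numpy as np

def propagate_variance(f, p0, cov_p, eps=1e-6):
    """First-order (linear) propagation of parameter variance to a scalar
    metric: Var(f) ~= J @ cov_p @ J.T, with a forward-difference Jacobian."""
    p0 = np.asarray(p0, float)
    f0 = f(p0)
    jac = np.empty_like(p0)
    for i in range(p0.size):
        dp = np.zeros_like(p0)
        dp[i] = eps * max(1.0, abs(p0[i]))
        jac[i] = (f(p0 + dp) - f0) / dp[i]
    return jac @ cov_p @ jac

# toy "compact model": drain current vs. threshold voltage and gain factor
drive = lambda p: p[1] * (0.9 - p[0]) ** 2           # Vgs = 0.9 V, square law
cov = np.array([[1e-4, 0.0], [0.0, 4e-8]])           # assumed parameter covariance
print("sigma(Id) =", np.sqrt(propagate_variance(drive, [0.4, 5e-4], cov)))
```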
A direct-gradient multivariate index of biotic condition
Miranda, Leandro E.; Aycock, J.N.; Killgore, K. J.
2012-01-01
Multimetric indexes constructed by summing metric scores have been criticized despite many of their merits. A leading criticism is the potential for investigator bias involved in metric selection and scoring. Often there is a large number of competing metrics equally well correlated with environmental stressors, requiring a judgment call by the investigator to select the most suitable metrics to include in the index and how to score them. Data-driven procedures for multimetric index formulation published during the last decade have reduced this limitation, yet apprehension remains. Multivariate approaches that select metrics with statistical algorithms may reduce the level of investigator bias and alleviate a weakness of multimetric indexes. We investigated the suitability of a direct-gradient multivariate procedure to derive an index of biotic condition for fish assemblages in oxbow lakes in the Lower Mississippi Alluvial Valley. Although this multivariate procedure also requires that the investigator identify a set of suitable metrics potentially associated with a set of environmental stressors, it is different from multimetric procedures because it limits investigator judgment in selecting a subset of biotic metrics to include in the index and because it produces metric weights suitable for computation of index scores. The procedure, applied to a sample of 35 competing biotic metrics measured at 50 oxbow lakes distributed over a wide geographical region in the Lower Mississippi Alluvial Valley, selected 11 metrics that adequately indexed the biotic condition of five test lakes. Because the multivariate index includes only metrics that explain the maximum variability in the stressor variables rather than a balanced set of metrics chosen to reflect various fish assemblage attributes, it is fundamentally different from multimetric indexes of biotic integrity with advantages and disadvantages. As such, it provides an alternative to multimetric procedures.
Establishing Qualitative Software Metrics in Department of the Navy Programs
2015-10-29
dedicated to providing the highest quality software to its users. In doing so, there is a need for a formalized set of Software Quality Metrics. The goal...of this paper is to establish the validity of those necessary Quality Metrics. In our approach we collected the data of over a dozen programs...provide the necessary variable data for our formulas and tested the formulas for validity. Keywords: metrics; software; quality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendell, Mark J.; Lei, Quanhong; Cozen, Myrna O.
2003-10-01
Metrics of culturable airborne microorganisms for either total organisms or suspected harmful subgroups have generally not been associated with symptoms among building occupants. However, the visible presence of moisture damage or mold in residences and other buildings has consistently been associated with respiratory symptoms and other health effects. This relationship is presumably caused by adverse but uncharacterized exposures to moisture-related microbiological growth. In order to assess this hypothesis, we studied relationships in U.S. office buildings between the prevalence of respiratory and irritant symptoms, the concentrations of airborne microorganisms that require moist surfaces on which to grow, and the presence of visible water damage. For these analyses we used data on buildings, indoor environments, and occupants collected from a representative sample of 100 U.S. office buildings in the U.S. Environmental Protection Agency's Building Assessment Survey and Evaluation (EPA BASE) study. We created 19 alternate metrics, using scales ranging from 3-10 units, that summarized the concentrations of airborne moisture-indicating microorganisms (AMIMOs) as indicators of moisture in buildings. Two were constructed to resemble a metric previously reported to be associated with lung function changes in building occupants; the others were based on another metric from the same group of Finnish researchers, concentration cutpoints from other studies, and professional judgment. We assessed three types of associations: between AMIMO metrics and symptoms in office workers, between evidence of water damage and symptoms, and between water damage and AMIMO metrics. We estimated (as odds ratios (ORs) with 95% confidence intervals) the unadjusted and adjusted associations between the 19 metrics and two types of weekly, work-related symptoms (lower respiratory and mucous membrane) using logistic regression models. Analyses used the original AMIMO metrics and were repeated with simplified dichotomized metrics. The multivariate models adjusted for other potential confounding variables associated with respondents, occupied spaces, buildings, or ventilation systems. Models excluded covariates for moisture-related risks hypothesized to increase AMIMO levels. We also estimated the association of water damage (using variables for specific locations in the study space or building, or summary variables) with the two symptom outcomes. Finally, using selected AMIMO metrics as outcomes, we constructed logistic regression models with observations at the building level to estimate unadjusted and adjusted associations of evident water damage with AMIMO metrics. All original AMIMO metrics showed little overall pattern of unadjusted or adjusted association with either symptom outcome. The 3-category metric resembling that previously used by others, which of all constructed metrics had the largest number of buildings in its top category, was not associated with symptoms in these buildings. However, most metrics with few buildings in their highest category showed increased risk for both symptoms in that category, especially metrics using cutpoints of >100 but <500 colony-forming units (CFU)/m³ for concentration of total culturable fungi. With AMIMO metrics dichotomized to compare the highest category with all lower categories combined, four metrics had unadjusted ORs between 1.4 and 1.6 for both symptom outcomes. The same four metrics had adjusted ORs of 1.7-2.1 for both symptom outcomes.
In models of water damage and symptoms, several specific locations of past water damage had significant associations with outcomes, with ORs ranging from 1.4-1.6. In bivariate models of water damage and selected AMIMO metrics, a number of specific types of water damage and several summary variables for water damage were very strongly associated with AMIMO metrics (significant ORs ranging above 15). Multivariate modeling with the dichotomous AMIMO metrics was not possible due to limited numbers of observations.
NASA Technical Reports Server (NTRS)
Ezer, Neta; Zumbado, Jennifer Rochlis; Sandor, Aniko; Boyer, Jennifer
2011-01-01
Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensations for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.
ERIC Educational Resources Information Center
Grané, Aurea; Romera, Rosario
2018-01-01
Survey data are usually of mixed type (quantitative, multistate categorical, and/or binary variables). Multidimensional scaling (MDS) is one of the most widely used methodologies to visualize the profile structure of the data. MDS methods have been introduced in the literature since the 1960s, initially in publications in the psychometrics area.…
Comparing generalized ensemble methods for sampling of systems with many degrees of freedom
Lincoff, James; Sasmal, Sukanya; Head-Gordon, Teresa
2016-11-03
Here, we compare two standard replica exchange methods using temperature and dielectric constant as the scaling variables for independent replicas against two new corresponding enhanced sampling methods based on non-equilibrium statistical cooling (temperature) or descreening (dielectric). We test the four methods on a rough 1D potential as well as for alanine dipeptide in water, for which their relatively small phase space allows for the ability to define quantitative convergence metrics. We show that both dielectric methods are inferior to the temperature enhanced sampling methods, and in turn show that temperature cool walking (TCW) systematically outperforms the standard temperature replica exchange (TREx) method. We extend our comparisons of the TCW and TREx methods to the 5 residue met-enkephalin peptide, in which we evaluate the Kullback-Leibler divergence metric to show that the rate of convergence between two independent trajectories is faster for TCW compared to TREx. Finally we apply the temperature methods to the 42 residue amyloid-β peptide in which we find non-negligible differences in the disordered ensemble using TCW compared to the standard TREx. All four methods have been made available as software through the OpenMM Omnia software consortium.
Comparing generalized ensemble methods for sampling of systems with many degrees of freedom.
Lincoff, James; Sasmal, Sukanya; Head-Gordon, Teresa
2016-11-07
We compare two standard replica exchange methods using temperature and dielectric constant as the scaling variables for independent replicas against two new corresponding enhanced sampling methods based on non-equilibrium statistical cooling (temperature) or descreening (dielectric). We test the four methods on a rough 1D potential as well as for alanine dipeptide in water, for which their relatively small phase space allows for the ability to define quantitative convergence metrics. We show that both dielectric methods are inferior to the temperature enhanced sampling methods, and in turn show that temperature cool walking (TCW) systematically outperforms the standard temperature replica exchange (TREx) method. We extend our comparisons of the TCW and TREx methods to the 5 residue met-enkephalin peptide, in which we evaluate the Kullback-Leibler divergence metric to show that the rate of convergence between two independent trajectories is faster for TCW compared to TREx. Finally we apply the temperature methods to the 42 residue amyloid-β peptide in which we find non-negligible differences in the disordered ensemble using TCW compared to the standard TREx. All four methods have been made available as software through the OpenMM Omnia software consortium (http://www.omnia.md/).
Zhu, Wenquan; Chen, Guangsheng; Jiang, Nan; Liu, Jianhong; Mou, Minjie
2013-01-01
Carbon Flux Phenology (CFP) can affect the interannual variation in Net Ecosystem Exchange (NEE) of carbon between terrestrial ecosystems and the atmosphere. In this study, we proposed a methodology to estimate CFP metrics with satellite-derived Land Surface Phenology (LSP) metrics and climate drivers for 4 biomes (i.e., deciduous broadleaf forest, evergreen needleleaf forest, grasslands and croplands), using 159 site-years of NEE and climate data from 32 AmeriFlux sites and MODIS vegetation index time-series data. LSP metrics combined with optimal climate drivers can explain the variability in Start of Carbon Uptake (SCU) by more than 70% and End of Carbon Uptake (ECU) by more than 60%. The Root Mean Square Error (RMSE) of the estimations was within 8.5 days for both SCU and ECU. The estimation performance for this methodology was primarily dependent on the optimal combination of the LSP retrieval methods, the explanatory climate drivers, the biome types, and the specific CFP metric. This methodology has a potential for allowing extrapolation of CFP metrics for biomes with a distinct and detectable seasonal cycle over large areas, based on synoptic multi-temporal optical satellite data and climate data. PMID:24386441
Consumer Neuroscience-Based Metrics Predict Recall, Liking and Viewing Rates in Online Advertising.
Guixeres, Jaime; Bigné, Enrique; Ausín Azofra, Jose M; Alcañiz Raya, Mariano; Colomer Granero, Adrián; Fuentes Hurtado, Félix; Naranjo Ornedo, Valery
2017-01-01
The purpose of the present study is to investigate whether the effectiveness of a new ad on digital channels (YouTube) can be predicted by using neural networks and neuroscience-based metrics (brain response, heart rate variability and eye tracking). Neurophysiological records were collected from 35 participants while they were exposed to 8 relevant TV Super Bowl commercials. Correlations between neurophysiological-based metrics, ad recall, ad liking, the ACE metrix score and the number of views on YouTube during a year were investigated. Our findings suggest a significant correlation between neuroscience metrics and self-reported measures of ad effectiveness and the direct number of views on the YouTube channel. In addition, using an artificial neural network based on neuroscience metrics, the model classifies ads (82.9% average accuracy) and estimates the number of online views (mean error of 0.199). The results highlight the validity of neuromarketing-based techniques for predicting the success of advertising responses. Practitioners can consider the proposed methodology at the design stages of advertising content, thus enhancing advertising effectiveness. The study pioneers the use of neurophysiological methods in predicting advertising success in a digital context. This is the first article that has examined whether these measures could actually be used for predicting views for advertising on YouTube.
Consumer Neuroscience-Based Metrics Predict Recall, Liking and Viewing Rates in Online Advertising
Guixeres, Jaime; Bigné, Enrique; Ausín Azofra, Jose M.; Alcañiz Raya, Mariano; Colomer Granero, Adrián; Fuentes Hurtado, Félix; Naranjo Ornedo, Valery
2017-01-01
The purpose of the present study is to investigate whether the effectiveness of a new ad on digital channels (YouTube) can be predicted by using neural networks and neuroscience-based metrics (brain response, heart rate variability and eye tracking). Neurophysiological records were collected from 35 participants while they were exposed to 8 relevant TV Super Bowl commercials. Correlations between neurophysiological-based metrics, ad recall, ad liking, the ACE metrix score and the number of views on YouTube during a year were investigated. Our findings suggest a significant correlation between neuroscience metrics and self-reported measures of ad effectiveness and the direct number of views on the YouTube channel. In addition, using an artificial neural network based on neuroscience metrics, the model classifies ads (82.9% average accuracy) and estimates the number of online views (mean error of 0.199). The results highlight the validity of neuromarketing-based techniques for predicting the success of advertising responses. Practitioners can consider the proposed methodology at the design stages of advertising content, thus enhancing advertising effectiveness. The study pioneers the use of neurophysiological methods in predicting advertising success in a digital context. This is the first article that has examined whether these measures could actually be used for predicting views for advertising on YouTube. PMID:29163251
Zhu, Wenquan; Chen, Guangsheng; Jiang, Nan; ...
2013-12-27
Carbon Flux Phenology (CFP) can affect the interannual variation in Net Ecosystem Exchange (NEE) of carbon between terrestrial ecosystems and the atmosphere. In this paper, we proposed a methodology to estimate CFP metrics with satellite-derived Land Surface Phenology (LSP) metrics and climate drivers for 4 biomes (i.e., deciduous broadleaf forest, evergreen needleleaf forest, grasslands and croplands), using 159 site-years of NEE and climate data from 32 AmeriFlux sites and MODIS vegetation index time-series data. LSP metrics combined with optimal climate drivers can explain the variability in Start of Carbon Uptake (SCU) by more than 70% and End of Carbon Uptake (ECU) by more than 60%. The Root Mean Square Error (RMSE) of the estimations was within 8.5 days for both SCU and ECU. The estimation performance for this methodology was primarily dependent on the optimal combination of the LSP retrieval methods, the explanatory climate drivers, the biome types, and the specific CFP metric. In conclusion, this methodology has a potential for allowing extrapolation of CFP metrics for biomes with a distinct and detectable seasonal cycle over large areas, based on synoptic multi-temporal optical satellite data and climate data.
Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S
2011-02-01
A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
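PSNR and SSIM, the two comparison metrics named above, are available in scikit-image. A sketch on a synthetic image tile standing in for a virtual-slide region:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(8)
original = rng.integers(0, 256, (512, 512), dtype=np.uint8)   # stand-in tissue tile
# simulated compression artifact: small additive perturbation
compressed = np.clip(original + rng.normal(0, 2, original.shape), 0, 255).astype(np.uint8)

print("PSNR:", peak_signal_noise_ratio(original, compressed))
print("SSIM:", structural_similarity(original, compressed))
```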
Saleh, Ziad H.; Apte, Aditya P.; Sharp, Gregory C.; Shusharina, Nadezhda P.; Wang, Ya; Veeraraghavan, Harini; Thor, Maria; Muren, Ludvig P.; Rao, Shyam S.; Lee, Nancy Y.; Deasy, Joseph O.
2014-01-01
Previous methods to estimate the inherent accuracy of deformable image registration (DIR) have typically been performed relative to a known ground truth, such as tracking of anatomic landmarks or known deformations in a physical or virtual phantom. In this study, we propose a new approach to estimate the spatial geometric uncertainty of DIR using statistical sampling techniques that can be applied to the resulting deformation vector fields (DVFs) for a given registration. The proposed DIR performance metric, the distance discordance metric (DDM), is based on the variability in the distance between corresponding voxels from different images, which are co-registered to the same voxel at location X in an arbitrarily chosen “reference” image. The DDM value, at location X in the reference image, represents the mean dispersion between voxels, when these images are registered to other images in the image set. The method requires at least four registered images to estimate the uncertainty of the DIRs, both for inter- and intra-patient DIR. To validate the proposed method, we generated an image set by deforming a software phantom with known DVFs. The registration error was computed at each voxel in the “reference” phantom and then compared to DDM, inverse consistency error (ICE), and transitivity error (TE) over the entire phantom. The DDM showed a higher Pearson correlation (Rp) with the actual error (Rp ranged from 0.6 to 0.9) in comparison with ICE and TE (Rp ranged from 0.2 to 0.8). In the resulting spatial DDM map, regions with distinct intensity gradients had a lower discordance and therefore less variability relative to regions with uniform intensity. Subsequently, we applied DDM for intra-patient DIR in an image set of 10 longitudinal computed tomography (CT) scans of one prostate cancer patient and for inter-patient DIR in an image set of 10 planning CT scans of different head and neck cancer patients. For both intra- and inter-patient DIR, the spatial DDM map showed large variation over the volume of interest (the pelvis for the prostate patient and the head for the head and neck patients). The highest discordance was observed in the soft tissues, such as the brain, bladder, and rectum, due to higher variability in the registration. The smallest DDM values were observed in the bony structures in the pelvis and the base of the skull. The proposed metric, DDM, provides a quantitative tool to evaluate the performance of DIR when a set of images is available. Therefore, DDM can be used to estimate and visualize the uncertainty of intra- and/or inter-patient DIR based on the variability of the registration rather than the absolute registration error. PMID:24440838
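The DDM description above admits a compact sketch. The following is one plausible reading of the definition, using toy translation-only registrations with injected inconsistency; it is meant to convey the bookkeeping, not reproduce the study's implementation:

```python
import numpy as np
from itertools import combinations

def ddm_at_point(x_ref, maps_ref_to, maps_between):
    """One plausible reading of the DDM at a reference location x_ref: map the
    corresponding point from every image into every other image, then take the
    mean pairwise spread of where those points land. maps_ref_to[i] sends
    reference coordinates into image i; maps_between[i][k] sends image i into k."""
    n = len(maps_ref_to)
    corresp = [maps_ref_to[i](x_ref) for i in range(n)]   # x_i in each image i
    dists = []
    for k in range(n):
        landed = [maps_between[i][k](corresp[i]) for i in range(n) if i != k]
        dists += [np.linalg.norm(a - b) for a, b in combinations(landed, 2)]
    return float(np.mean(dists))

# toy 2D example: four images related to the reference by noisy translations
rng = np.random.default_rng(9)
shifts = [rng.normal(0, 1.0, 2) for _ in range(4)]
noise = {(i, k): rng.normal(0, 0.2, 2) for i in range(4) for k in range(4)}
maps_ref_to = [lambda x, s=s: x + s for s in shifts]
maps_between = [[(lambda x, si=si, sk=sk, e=noise[i, k]: x - si + sk + e)
                 for k, sk in enumerate(shifts)] for i, si in enumerate(shifts)]
print(ddm_at_point(np.zeros(2), maps_ref_to, maps_between))
```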
Climate and soil attributes determine plant species turnover in global drylands.
Ulrich, Werner; Soliveres, Santiago; Maestre, Fernando T; Gotelli, Nicholas J; Quero, José L; Delgado-Baquerizo, Manuel; Bowker, Matthew A; Eldridge, David J; Ochoa, Victoria; Gozalo, Beatriz; Valencia, Enrique; Berdugo, Miguel; Escolar, Cristina; García-Gómez, Miguel; Escudero, Adrián; Prina, Aníbal; Alfonso, Graciela; Arredondo, Tulio; Bran, Donaldo; Cabrera, Omar; Cea, Alex; Chaieb, Mohamed; Contreras, Jorge; Derak, Mchich; Espinosa, Carlos I; Florentino, Adriana; Gaitán, Juan; Muro, Victoria García; Ghiloufi, Wahida; Gómez-González, Susana; Gutiérrez, Julio R; Hernández, Rosa M; Huber-Sannwald, Elisabeth; Jankju, Mohammad; Mau, Rebecca L; Hughes, Frederic Mendes; Miriti, Maria; Monerris, Jorge; Muchane, Muchai; Naseri, Kamal; Pucheta, Eduardo; Ramírez-Collantes, David A; Raveh, Eran; Romão, Roberto L; Torres-Díaz, Cristian; Val, James; Veiga, José Pablo; Wang, Deli; Yuan, Xia; Zaady, Eli
2014-12-01
Geographic, climatic, and soil factors are major drivers of plant beta diversity, but their importance for dryland plant communities is poorly known. This study aims to: i) characterize patterns of beta diversity in global drylands, ii) detect common environmental drivers of beta diversity, and iii) test for thresholds in environmental conditions driving potential shifts in plant species composition. 224 sites in diverse dryland plant communities from 22 geographical regions in six continents. Beta diversity was quantified with four complementary measures: the percentage of singletons (species occurring at only one site), Whittaker's beta diversity (β(W)), a directional beta diversity metric based on the correlation in species occurrences among spatially contiguous sites (β(R2)), and a multivariate abundance-based metric (β(MV)). We used linear modelling to quantify the relationships between these metrics of beta diversity and geographic, climatic, and soil variables. Soil fertility and variability in temperature and rainfall, and to a lesser extent latitude, were the most important environmental predictors of beta diversity. Metrics related to species identity (percentage of singletons and β(W)) were most sensitive to soil fertility, whereas those metrics related to environmental gradients and abundance (β(R2) and β(MV)) were more associated with climate variability. Interactions among soil variables, climatic factors, and plant cover were not important determinants of beta diversity. Sites receiving less than 178 mm of annual rainfall differed sharply in species composition from more mesic sites (> 200 mm). Soil fertility and variability in temperature and rainfall are the most important environmental predictors of variation in plant beta diversity in global drylands. Our results suggest that those sites annually receiving ~178 mm of rainfall will be especially sensitive to future climate changes. These findings may help to define appropriate conservation strategies for mitigating effects of climate change on dryland vegetation.
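Two of the four beta-diversity measures have simple closed forms. A sketch of Whittaker's beta and the percentage of singletons on a toy presence/absence matrix; the directional and multivariate metrics are omitted:

```python
import numpy as np

def beta_whittaker(pa):
    """Whittaker's beta: gamma (regional) richness over mean alpha (site) richness.
    `pa` is a sites x species presence/absence matrix."""
    gamma = np.count_nonzero(pa.any(axis=0))
    alpha = pa.sum(axis=1).mean()
    return gamma / alpha

def pct_singletons(pa):
    """Percentage of observed species occurring at exactly one site."""
    occ = pa.sum(axis=0)
    present = occ > 0
    return 100 * np.mean(occ[present] == 1)

rng = np.random.default_rng(10)
pa = (rng.random((224, 60)) < 0.15).astype(int)   # toy 224-site occurrence matrix
print(beta_whittaker(pa), pct_singletons(pa))
```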
Environmental cost of using poor decision metrics to prioritize environmental projects.
Pannell, David J; Gibson, Fiona L
2016-04-01
Conservation decision makers commonly use project-scoring metrics that are inconsistent with theory on the optimal ranking of projects. As a result, there may often be a loss of environmental benefits. We estimated the magnitudes of these losses for various metrics that deviate from theory in ways that are common in practice. These metrics included cases where relevant variables were omitted from the benefits metric, project costs were omitted, and benefits were calculated using a faulty functional form. We estimated distributions of parameters from 129 environmental projects from Australia, New Zealand, and Italy for which detailed analyses had been completed previously. The cost of using poor prioritization metrics (in terms of lost environmental values) was often high, up to 80% in the scenarios we examined. The cost in percentage terms was greater when the budget was smaller. The most costly errors were omitting information about environmental values (up to 31% loss of environmental values), omitting project costs (up to 35% loss), omitting the effectiveness of management actions (up to 9% loss), and using a weighted-additive decision metric for variables that should be multiplied (up to 23% loss). The latter three are errors that occur commonly in real-world decision metrics, in combination often reducing potential benefits from conservation investments by 30-50%. Uncertainty about parameter values also reduced the benefits from investments in conservation projects, but often not by as much as faulty prioritization metrics. © 2016 Society for Conservation Biology.
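The multiplicative-versus-additive point is easy to see in code. A hedged sketch with invented numbers and weights: the theory-consistent score multiplies value, effectiveness, and inverse cost, while a faulty weighted-additive metric adds them, and the two can rank the same projects differently:

```python
import numpy as np

# illustrative projects: environmental value V, effectiveness E, cost C
V = np.array([9.0, 6.0, 8.0, 3.0])
E = np.array([0.4, 0.9, 0.3, 0.8])
C = np.array([5.0, 2.0, 6.0, 1.0])

bcr = V * E / C                          # theory-consistent benefit-cost ratio
additive = 0.4*V + 0.4*E + 0.2*(1/C)     # faulty weighted-additive score

print("rank by BCR:     ", np.argsort(-bcr))       # [1 3 0 2]
print("rank by additive:", np.argsort(-additive))  # [0 2 1 3]
```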
Black, R.W.; Moran, P.W.; Frankforter, J.D.
2011-01-01
Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria. © 2010 The Author(s).
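Nutrient thresholds of the kind estimated here are often located by two-segment (piecewise) regression; a minimal grid-search sketch, with all data and names invented for illustration:

```python
import numpy as np

def best_breakpoint(x, y, candidates):
    """Fit separate least-squares lines below and above each candidate
    threshold; return the candidate minimizing total squared error."""
    best_c, best_sse = None, np.inf
    for c in candidates:
        lo, hi = x <= c, x > c
        if lo.sum() < 3 or hi.sum() < 3:   # need points on both sides
            continue
        sse = 0.0
        for mask in (lo, hi):
            coef = np.polyfit(x[mask], y[mask], 1)
            sse += np.sum((y[mask] - np.polyval(coef, x[mask])) ** 2)
        if sse < best_sse:
            best_c, best_sse = c, sse
    return best_c

# x: log total nitrogen; y: an algal metric (synthetic example data)
rng = np.random.default_rng(2)
x = rng.uniform(0, 3, 80)
y = np.where(x < 1.5, 10 + 0.5 * x, 10.75 - 4 * (x - 1.5)) + rng.normal(0, 0.5, 80)
print(best_breakpoint(x, y, np.linspace(0.5, 2.5, 41)))   # ~1.5
```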
Spatial patterns of development drive water use
Sanchez, G.M.; Smith, J.W.; Terando, Adam J.; Sun, G.; Meentemeyer, R.K.
2018-01-01
Water availability is becoming more uncertain as human populations grow, cities expand into rural regions and the climate changes. In this study, we examine the functional relationship between water use and the spatial patterns of developed land across the rapidly growing region of the southeastern United States. We quantified the spatial pattern of developed land within census tract boundaries, including multiple metrics of density and configuration. Through non-spatial and spatial regression approaches we examined relationships and spatial dependencies between the spatial pattern metrics, socio-economic and environmental variables and two water use variables: a) domestic water use, and b) total development-related water use (a combination of public supply, domestic self-supply and industrial self-supply). Metrics describing the spatial patterns of development had the highest measure of relative importance (accounting for 53% of the model's explanatory power), explaining significantly more variance in water use compared to socio-economic or environmental variables commonly used to estimate water use. Integrating metrics characterizing the spatial pattern of development into water use models is likely to increase their utility and could facilitate water-efficient land use planning.
Pattern-based, multi-scale segmentation and regionalization of EOSD land cover
NASA Astrophysics Data System (ADS)
Niesterowicz, Jacek; Stepinski, Tomasz F.
2017-10-01
The Earth Observation for Sustainable Development of Forests (EOSD) map is a 25 m resolution thematic map of Canadian forests. Because of its large spatial extent and relatively high resolution, the EOSD is difficult to analyze using standard GIS methods. In this paper we propose multi-scale segmentation and regionalization of the EOSD as new methods for analyzing it on large spatial scales. Segments, which we refer to as forest land units (FLUs), are delineated as tracts of forest characterized by cohesive patterns of EOSD categories; we delineated from 727 to 91,885 FLUs within the spatial extent of the EOSD, depending on the selected scale of a pattern. The pattern of EOSD categories within each FLU is described by 1037 landscape metrics. A shapefile containing the boundaries of all FLUs, together with an attribute table listing the landscape metrics, makes up an SQL-searchable spatial database providing detailed information on the composition and pattern of land cover types in Canadian forests. The shapefile format and the extensive attribute table pertaining to the entire legend of the EOSD are designed to facilitate a broad range of investigations in which assessment of the composition and pattern of forest over large areas is needed. We calculated four such databases using different spatial scales of pattern. We illustrate the use of the FLU database by producing forest regionalization maps of two Canadian provinces, Quebec and Ontario. Such maps capture the broad-scale variability of forest at the spatial scale of the entire province. We also demonstrate how the FLU database can be used to map the variability of landscape metrics, and thus the character of the landscape, over all of Canada.
Hall, Lenwood W; Killen, William D
2006-01-01
This study was designed to assess trends in physical habitat and benthic communities (macroinvertebrates) annually in two agricultural streams (Del Puerto Creek and Salt Slough) in California's San Joaquin Valley from 2001 to 2005, determine the relationship between benthic communities and both water quality and physical habitat in both streams over the 5-year period, and compare benthic communities and physical habitat in both streams from 2001 to 2005. Physical habitat, measured with 10 metrics and a total score, was fairly stable over the 5 years in Del Puerto Creek but somewhat variable in Salt Slough. Benthic communities, measured with 18 metrics, were marginally variable over time in Del Puerto Creek but fairly stable in Salt Slough. Rank correlation analysis for both water bodies combined showed that channel alteration, embeddedness, riparian buffer, and velocity/depth diversity were the most important physical habitat metrics influencing the various benthic metrics. Correlations of water quality parameters and benthic community metrics for both water bodies combined showed that turbidity, dissolved oxygen, and conductivity were the most important water quality parameters influencing the different benthic metrics. A comparison of physical habitat metrics (including the total score) for both water bodies over the 5-year period showed that habitat metrics were more positive in Del Puerto Creek than in Salt Slough. A comparison of benthic metrics in both water bodies showed that approximately one-third of the metrics were significantly different between the two water bodies. Generally, the more positive benthic metric scores were reported in Del Puerto Creek, which suggests that the communities in this creek are more robust than those in Salt Slough.
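Rank correlations of the kind reported here are one call in scipy; a sketch with synthetic stand-ins for one habitat metric and one benthic metric:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
embeddedness = rng.uniform(0, 20, 40)                          # habitat metric
ept_richness = 15 - 0.5 * embeddedness + rng.normal(0, 2, 40)  # benthic metric

rho, p = spearmanr(embeddedness, ept_richness)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```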
Monroe, Katherine S
2016-03-11
This research explored the assessment of self-directed learning readiness within the comprehensive evaluation of medical students' knowledge and skills, and the extent to which several variables predicted participants' self-directed learning readiness prior to their graduation. Five metrics for evaluating medical students were considered in a multiple regression analysis. Fourth-year medical students at a competitive US medical school received an informed consent form and an online survey. Participants voluntarily completed a self-directed learning readiness scale that assessed four subsets of self-directed learning readiness and consented to the release of their academic records. The assortment of metrics considered in this study only vaguely captured students' self-directedness. The strongest predictors were faculty evaluations of students' performance on clerkship rotations. Specific clerkship grades were mildly predictive of three subscales. The Pediatrics clerkship modestly predicted critical self-evaluation (r=-.30, p=.01) and the Psychiatry clerkship mildly predicted learning self-efficacy (r=-.30, p=.01), while the Junior Surgery clerkship nominally correlated with participants' effective organization for learning (r=.21, p=.05). The other metrics examined did not contribute to predicting participants' readiness for self-directed learning. Given individual differences among participants for the variables considered, no combination of students' grades and/or test scores overwhelmingly predicted their aptitude for self-directed learning. Considering the importance of fostering medical students' self-directed learning skills, schools need a reliable and pragmatic approach to measuring them. This data analysis, however, offered no clear-cut way of documenting students' self-directed learning readiness based on the evaluation metrics included.
[Predictive model based multimetric index of macroinvertebrates for river health assessment].
Chen, Kai; Yu, Hai Yan; Zhang, Ji Wei; Wang, Bei Xin; Chen, Qiu Wen
2017-06-18
Improving the stability of the index of biotic integrity (IBI; i.e., the multi-metric index, MMI) across temporal and spatial scales is one of the most important issues in water ecosystem integrity bioassessment and water environment management. Using datasets of field-based macroinvertebrate and physicochemical variables and GIS-based natural predictors (e.g., geomorphology and climate) and land use variables collected at 227 river sites from 2004 to 2011 across Zhejiang Province, China, we used random forests (RF) to adjust for the effects of natural variation at temporal and spatial scales on macroinvertebrate metrics. We then developed natural-variation-adjusted (predictive) and unadjusted (null) MMIs and compared performance between them. The core metrics selected for the predictive and null MMIs differed from each other, and the natural variation within core metrics of the predictive MMI explained by the RF models ranged between 11.4% and 61.2%. The predictive MMI was more precise and accurate, but less responsive and sensitive, than the null MMI. The multivariate nearest-neighbor test determined that 9 test sites and 1 most degraded site were flagged outside of the environmental space of the reference site network. We found that the combination of the predictive MMI developed using the predictive model and the nearest-neighbor test performed best and decreased the risks of type I errors (designating a water body as being in poor biological condition when it was actually in good condition) and type II errors (designating a water body as being in good biological condition when it was actually in poor condition). Our results provide an effective method to improve the stability and performance of the index of biotic integrity.
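One common way to "adjust" a metric with a random forest, as described above, is to model the metric from natural predictors and keep the residual as the adjusted score. A sketch using scikit-learn; the variable names are placeholders, and out-of-bag predictions are used so the residuals are not overfit:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_adjusted_metric(raw_metric, natural_predictors):
    """Return the raw metric minus its RF prediction from natural
    (climate/geomorphology) predictors; the residual is the
    natural-variation-adjusted metric."""
    rf = RandomForestRegressor(n_estimators=500, oob_score=True,
                               random_state=0)
    rf.fit(natural_predictors, raw_metric)
    return raw_metric - rf.oob_prediction_

# X_nat: site-by-predictor matrix (elevation, slope, temperature, ...)
# ept:   raw EPT richness per site
# adjusted = rf_adjusted_metric(ept, X_nat)
```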
Atlas-based automatic measurements of the morphology of the tibiofemoral joint.
Brehler, M; Thawait, G; Shyr, W; Ramsay, J; Siewerdsen, J H; Zbijewski, W
2017-02-11
Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur, and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to those of an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Intra-reader variability as high as ~10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist's, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.
Multi-objective optimization for generating a weighted multi-model ensemble
NASA Astrophysics Data System (ADS)
Lee, H.
2017-12-01
Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach faces a major challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combine multiple performance metrics for global climate models and their dynamically downscaled regional climate simulations over North America, and to generate a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
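A toy sketch of the multi-objective idea: keep the Pareto-optimal (non-dominated) models under several error metrics and weight them by inverse total error. The data and the weighting rule here are invented for illustration; the study's actual optimization is more involved:

```python
import numpy as np

def pareto_mask(errors):
    """errors: (n_models, n_metrics), lower is better. A model is
    Pareto-optimal if no other model is <= on all metrics and < on one."""
    n = errors.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(errors[j] <= errors[i]) \
                      and np.any(errors[j] < errors[i]):
                keep[i] = False
                break
    return keep

errs = np.array([[1.0, 0.8], [0.6, 1.2], [1.1, 1.1], [0.7, 0.9]])
mask = pareto_mask(errs)                 # model 2 is dominated by model 3
w = np.where(mask, 1.0 / errs.sum(1), 0.0)
w /= w.sum()                             # weights for the ensemble mean
print(mask, w.round(2))
```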
Yang, Xiaojun
2012-02-01
Exploring the quantitative association between landscape characteristics and the ecological conditions of receiving waters has recently become an emerging area for eco-environmental research. While landscape-water relationship research has largely targeted inland aquatic systems, there has been an increasing need to develop methods and techniques that work better with coastal and estuarine ecosystems. In this paper, we present a geospatial approach to examine the quantitative relationship between landscape characteristics and estuarine nitrogen loading in an urban watershed. The case study site is in the Pensacola estuarine drainage area, home to the city of Pensacola, Florida, USA, where vigorous urban sprawl has prompted growing concerns about estuarine ecological health. Central to this research is a remote sensor image that has been used to extract land use/cover information and derive landscape metrics. Several significant landscape metrics are selected and spatially linked with the nitrogen loading data for the Pensacola Bay area. Landscape metrics and nitrogen loading are summarized by equal overland flow-length rings, and their association is examined using multivariate statistical analysis. A stepwise model-building protocol is used for the regression designs to help identify significant variables that can explain much of the variance in the nitrogen loading dataset. It is found that using landscape composition or spatial configuration alone can explain most of the nitrogen loading variability. Of all the regression models using metrics derived from a single land use/cover class as the independent variables, the one for the low-density urban class gives the highest adjusted R-square score, suggesting the impact of watershed-wide urban sprawl upon this sensitive estuarine ecosystem. Measures towards the reduction of non-point source pollution from urban development are necessary in the area to protect the Pensacola Bay ecosystem and its ecosystem services. Copyright © 2011 Elsevier Ltd. All rights reserved.
A Method for Comparing Multivariate Time Series with Different Dimensions
Tapinos, Avraam; Mendes, Pedro
2013-01-01
In many situations it is desirable to compare dynamical systems based on their behavior. Similarity of behavior often implies similarity of internal mechanisms or dependency on common extrinsic factors. While there are widely used methods for comparing univariate time series, most dynamical systems are characterized by multivariate time series. Yet, comparison of multivariate time series has been limited to cases where they share a common dimensionality. A semi-metric is a distance function that has the properties of non-negativity, symmetry and reflexivity, but not sub-additivity. Here we develop a semi-metric – SMETS – that can be used for comparing groups of time series that may have different dimensions. To demonstrate its utility, the method is applied to dynamic models of biochemical networks and to portfolios of shares. The former is an example of a case where the dependencies between system variables are known, while in the latter the system is treated (and behaves) as a black box. PMID:23393554
Kamesh Iyer, Srikant; Tasdizen, Tolga; Likhite, Devavrat; DiBella, Edward
2016-01-01
Purpose: Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction methods is a challenge. The authors sought to develop a new substitution-based variable-splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data. Methods: The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman-based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern, and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low rank constraints. Results: Comparisons based on a blur metric and visual inspection showed that SMART images had lower blur and better texture than the GD implementation. On average, the GD-based images had an ∼18% higher blur metric than SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR. Conclusions: The SMART method is a promising approach to rapidly reconstruct good quality multicoil images from undersampled DCE cardiac perfusion data. PMID:27036592
Climate Classification is an Important Factor in Assessing Hospital Performance Metrics
NASA Astrophysics Data System (ADS)
Boland, M. R.; Parhi, P.; Gentine, P.; Tatonetti, N. P.
2017-12-01
Context/Purpose: Climate is a known modulator of disease, but its impact on hospital performance metrics remains unstudied. Methods: We assess the relationship between Köppen-Geiger climate classification and hospital performance metrics, specifically 30-day mortality, as reported in Hospital Compare, and collected for the period July 2013 through June 2014 (7/1/2013 - 06/30/2014). A hospital-level multivariate linear regression analysis was performed while controlling for known socioeconomic factors to explore the relationship between all-cause mortality and climate. Hospital performance scores were obtained from 4,524 hospitals belonging to 15 distinct Köppen-Geiger climates and 2,373 unique counties. Results: Model results revealed that hospital performance metrics for mortality showed significant climate dependence (p<0.001) after adjusting for socioeconomic factors. Interpretation: Currently, hospitals are reimbursed by governmental agencies using 30-day mortality rates along with 30-day readmission rates. These metrics allow government agencies to rank hospitals according to their 'performance' along these metrics. Various socioeconomic factors are taken into consideration when determining individual hospitals' performance. However, no climate-based adjustment is made within the existing framework. Our results indicate that climate-based variability in 30-day mortality rates does exist even after socioeconomic confounder adjustment. Standardized high-level climate classification systems (such as Köppen-Geiger) would be useful to incorporate into future metrics. Conclusion: Climate is a significant factor in evaluating hospital 30-day mortality rates. These results demonstrate that climate classification is an important factor when comparing hospital performance across the United States.
SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diamant, A; Ybarra, N; Seuntjens, J
2016-06-15
Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven to be challenging, due to the complex interactions between an individual's biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions between an individual patient's characteristics and generate a robust model capable of predicting said patient's treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. The number of patients exhibiting tumor failure was observed to be 7 (event rate of 22%). The serum concentration of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) was measured pre-treatment. A total of 21 variables were analyzed including: dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov Chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes' rule. Results: The optimal Bayesian model generated throughout this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor) and prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance compared to competing methods in the literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is possible to accurately model the prognosis of an individual lung SBRT patient's treatment.
Michael E. Goerndt; Vincente J. Monleon; Hailemariam Temesgen
2010-01-01
Three sets of linear models were developed to predict several forest attributes, using stand-level and single-tree remote sensing (STRS) light detection and ranging (LiDAR) metrics as predictor variables. The first used only area-level metrics (ALM) associated with first-return height distribution, percentage of cover, and canopy transparency. The second alternative...
Assessing deep and shallow learning methods for quantitative prediction of acute chemical toxicity.
Liu, Ruifeng; Madore, Michael; Glover, Kyle P; Feasel, Michael G; Wallqvist, Anders
2018-05-02
Animal-based methods for assessing chemical toxicity are struggling to meet testing demands. In silico approaches, including machine-learning methods, are promising alternatives. Recently, deep neural networks (DNNs) were evaluated and reported to outperform other machine-learning methods for quantitative structure-activity relationship modeling of molecular properties. However, most of the reported performance evaluations relied on global performance metrics, such as the root mean squared error (RMSE) between the predicted and experimental values of all samples, without considering the impact of sample distribution across the activity spectrum. Here, we carried out an in-depth analysis of DNN performance for quantitative prediction of acute chemical toxicity using several datasets. We found that the overall performance of DNN models on datasets of up to 30,000 compounds was similar to that of random forest (RF) models, as measured by the RMSE and correlation coefficients between the predicted and experimental results. However, our detailed analyses demonstrated that global performance metrics are inappropriate for datasets with a highly uneven sample distribution, because they show a strong bias toward the most populous compounds along the toxicity spectrum. For highly toxic compounds, DNN and RF models trained on all samples performed much worse than the global performance metrics indicated. Surprisingly, our variable nearest neighbor method, which utilizes only structurally similar compounds to make predictions, performed reasonably well, suggesting that information of close near neighbors in the training sets is a key determinant of acute toxicity predictions.
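The point about global metrics can be made concrete: computing RMSE within bins of the activity spectrum exposes errors that a single dataset-wide RMSE hides. A small sketch with synthetic data and illustrative bin edges:

```python
import numpy as np

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def stratified_rmse(y_true, y_pred, edges):
    """RMSE within each [lo, hi) bin of the true values."""
    out = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (y_true >= lo) & (y_true < hi)
        if m.any():
            out[(lo, hi)] = rmse(y_true[m], y_pred[m])
    return out

# y: e.g., a log-scale toxicity value; most samples sit mid-range,
# few are highly toxic, so a global RMSE is dominated by the bulk
rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(2.5, 0.3, 950), rng.normal(0.5, 0.3, 50)])
pred = 0.8 * y + 0.5 + rng.normal(0, 0.2, 1000)   # biased toward the bulk
print(rmse(y, pred))
print(stratified_rmse(y, pred, np.array([0.0, 1.0, 2.0, 3.0, 4.0])))
```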
Exact statistical results for binary mixing and reaction in variable density turbulence
NASA Astrophysics Data System (ADS)
Ristorcelli, J. R.
2017-02-01
We report a number of rigorous statistical results on binary active scalar mixing in variable density turbulence. The study is motivated by mixing between pure fluids with very different densities and whose density intensity is of order unity. Our primary focus is the derivation of exact mathematical results for mixing in variable density turbulence, and we point out potential fields of application of the results. A binary one-step reaction is invoked to derive a metric to assess the state of mixing. The mean reaction rate in variable density turbulent mixing can be expressed, in closed form, using the first-order Favre mean variables and the Reynolds-averaged density variance ⟨ρ²⟩. We show that the normalized density variance ⟨ρ²⟩ reflects the reduction of the reaction due to mixing and is a mix metric. The result is mathematically rigorous and is the variable density analog of the normalized mass fraction variance ⟨c²⟩ used in constant density turbulent mixing. As a consequence, we demonstrate that use of the analogous normalized Favre variance of the mass fraction, c̃″², as a mix metric is not theoretically justified in variable density turbulence. We additionally derive expressions relating various second-order moments of the mass fraction, specific volume, and density fields. The central role of the density-specific volume covariance ⟨ρv⟩ is highlighted; it is a key quantity with considerable dynamical significance linking various second-order statistics. For laboratory experiments, we have developed exact relations between the Reynolds scalar variance ⟨c²⟩, its Favre analog c̃″², and various second moments including ⟨ρv⟩. For moment closure models that evolve ⟨ρv⟩ and not ⟨ρ²⟩, we provide a novel expression for ⟨ρ²⟩ in terms of a rational function of ⟨ρv⟩ that avoids recourse to Taylor series methods (which do not converge for large density differences). We have derived analytic results relating several other second- and third-order moments and find coupling between odd and even order moments, demonstrating a natural and inherent skewness of mixing in variable density turbulence. The analytic results have applications in the areas of isothermal material mixing, isobaric thermal mixing, and simple chemical reaction (in a progress variable formulation).
Analysis of PV Advanced Inverter Functions and Setpoints under Time Series Simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seuss, John; Reno, Matthew J.; Broderick, Robert Joseph
Utilities are increasingly concerned about the potential negative impacts distributed PV may have on the operational integrity of their distribution feeders. Some have proposed novel methods for controlling a PV system's grid-tie inverter to mitigate potential PV-induced problems. This report investigates the effectiveness of several of these PV advanced inverter controls on improving distribution feeder operational metrics. The controls are simulated on a large PV system interconnected at several locations within two realistic distribution feeder models. Due to the time-domain nature of the advanced inverter controls, quasi-static time series simulations are performed under one week of representative variable irradiance and load data for each feeder. A parametric study is performed on each control type to determine how well certain measurable network metrics improve as a function of the control parameters. This methodology is used to determine appropriate advanced inverter settings for each location on the feeder and overall for any interconnection location on the feeder.
Sediment transport-based metrics of wetland stability
Ganju, Neil K.; Kirwan, Matthew L.; Dickhudt, Patrick J.; Guntenspergen, Glenn R.; Cahoon, Donald R.; Kroeger, Kevin D.
2015-01-01
Despite the importance of sediment availability for wetland stability, vulnerability assessments seldom consider spatiotemporal variability of sediment transport. Models predict that the maximum rate of sea level rise a marsh can survive is proportional to suspended sediment concentration (SSC) and accretion. In contrast, we find that SSC and accretion are higher in an unstable marsh than in an adjacent stable marsh, suggesting that these metrics cannot describe wetland vulnerability. Therefore, we propose the flood/ebb SSC differential and organic-inorganic suspended sediment ratio as better vulnerability metrics. The unstable marsh favors sediment export (18 mg L−1 higher on ebb tides), while the stable marsh imports sediment (12 mg L−1 higher on flood tides). The organic-inorganic SSC ratio is 84% higher in the unstable marsh, and stable isotopes indicate a source consistent with marsh-derived material. These simple metrics scale with sediment fluxes, integrate spatiotemporal variability, and indicate sediment sources.
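The flood/ebb SSC differential described above reduces to a difference of conditional means; a tiny sketch with invented sample values:

```python
import numpy as np

def ssc_differential(ssc, is_flood):
    """Mean SSC on flood tides minus mean SSC on ebb tides.
    Positive -> net import (stable marsh); negative -> net export."""
    ssc = np.asarray(ssc, dtype=float)
    is_flood = np.asarray(is_flood, dtype=bool)
    return ssc[is_flood].mean() - ssc[~is_flood].mean()

# toy series: an unstable marsh exporting sediment (higher SSC on ebb)
ssc = [55, 40, 61, 38, 58, 42]
is_flood = [False, True, False, True, False, True]
print(ssc_differential(ssc, is_flood))   # -18.0 -> export
```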
FAST TRACK COMMUNICATION: Symmetry breaking, conformal geometry and gauge invariance
NASA Astrophysics Data System (ADS)
Ilderton, Anton; Lavelle, Martin; McMullan, David
2010-08-01
When the electroweak action is rewritten in terms of SU(2) gauge-invariant variables, the Higgs can be interpreted as a conformal metric factor. We show that asymptotic flatness of the metric is required to avoid a Gribov problem: without it, the new variables fail to be nonperturbatively gauge invariant. We also clarify the relations between this approach and unitary gauge fixing, and the existence of similar transformations in other gauge theories.
A parallel variable metric optimization algorithm
NASA Technical Reports Server (NTRS)
Straeter, T. A.
1973-01-01
An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
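A rough numpy sketch of one such cycle, under stated assumptions: the p gradient evaluations (parallelizable in principle) are taken along coordinate probes, the inverse metric receives SR1-style rank-one corrections, and a simple backtracking search stands in for the exact univariate minimization of the paper:

```python
import numpy as np

def parallel_vm_cycle(f, grad, x, H, p, step=1e-2):
    """One cycle: p gradient probes -> p rank-one (SR1-style)
    corrections to the inverse metric H -> one Newton-like step."""
    g = grad(x)
    for i in range(p):
        d = np.zeros_like(x)
        d[i % x.size] = step                 # probe direction s = d
        y = grad(x + d) - g                  # secant information
        r = d - H @ y
        denom = r @ y
        if abs(denom) > 1e-12:               # guard the rank-one update
            H = H + np.outer(r, r) / denom
    direction = -H @ g
    t = 1.0
    while f(x + t * direction) > f(x) and t > 1e-8:
        t *= 0.5                             # backtracking line search
    return x + t * direction, H

# quadratic test: f(x) = 0.5 x^T A x - b^T x, minimizer A^{-1} b
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x, H = np.zeros(2), np.eye(2)
for _ in range(5):
    x, H = parallel_vm_cycle(f, grad, x, H, p=2)
print(x, np.linalg.solve(A, b))              # should be close
```

On a quadratic, the secant pairs are exact, so after p independent probes H approaches the inverse Hessian and the Newton-like step lands near the minimizer, mirroring the one-cycle convergence result quoted above.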
Comparing interpolation techniques for annual temperature mapping across Xinjiang region
NASA Astrophysics Data System (ADS)
Ren-ping, Zhang; Jing, Guo; Tian-gang, Liang; Qi-sheng, Feng; Aimaiti, Yusupujiang
2016-11-01
Interpolating climatic variables such as temperature is challenging due to the highly variable nature of meteorological processes and the difficulty of establishing a representative network of stations. In this paper, based on monthly temperature data obtained from 154 official meteorological stations in the Xinjiang region and surrounding areas, we compared five spatial interpolation techniques: inverse distance weighting (IDW), ordinary kriging, cokriging, thin-plate smoothing splines (ANUSPLIN), and empirical Bayesian kriging (EBK). Error metrics were used to validate interpolations against independent data. Results indicated that ANUSPLIN performed better than the other four interpolation methods.
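Of the five methods compared, IDW is simple enough to show inline; a minimal sketch (the coordinates and temperatures are made up):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0, eps=1e-12):
    """Inverse distance weighting: each prediction is a weighted mean
    of the observations, with weights 1/d**power."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ z_obs) / w.sum(axis=1)

# stations (lon, lat) and their mean annual temperature (deg C)
obs = np.array([[87.6, 43.8], [84.9, 45.6], [81.3, 43.9]])
t = np.array([7.2, 5.1, 8.4])
grid = np.array([[85.0, 44.5], [83.0, 44.0]])
print(idw(obs, t, grid))
```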
Waite, Ian R.; Brown, Larry R.; Kennen, Jonathan G.; May, Jason T.; Cuffney, Thomas F.; Orlando, James L.; Jones, Kimberly A.
2010-01-01
The successful use of macroinvertebrates as indicators of stream condition in bioassessments has led to heightened interest throughout the scientific community in the prediction of stream condition. For example, predictive models are increasingly being developed that use measures of watershed disturbance, including urban and agricultural land use, as explanatory variables to predict various metrics of biological condition such as richness, tolerance, percent predators, index of biotic integrity, functional species traits, or even ordination axes scores. Our primary intent was to determine whether effective models could be developed using watershed characteristics of disturbance to predict macroinvertebrate metrics among disparate and widely separated ecoregions. We aggregated macroinvertebrate data from universities and state and federal agencies to assemble stream data sets dense enough for modeling in three distinct ecoregions in Oregon and California. Extensive review and quality assurance of macroinvertebrate sampling protocols, laboratory subsample counts, and taxonomic resolution were completed to ensure data comparability. We used widely available digital coverages of land-use and land-cover data, summarized at the watershed and riparian scales, as explanatory variables to predict macroinvertebrate metrics commonly used by state resource managers to assess stream condition. The “best” multiple linear regression models from each region required only two or three explanatory variables to model macroinvertebrate metrics and explained 41–74% of the variation. In each region the best model contained some measure of urban and/or agricultural land use, yet the model was often improved by including a natural explanatory variable such as mean annual precipitation or mean watershed slope. Two macroinvertebrate metrics were common among all three regions: the metric that summarizes the richness of tolerant macroinvertebrates (RICHTOL) and some form of EPT (Ephemeroptera, Plecoptera, and Trichoptera) richness. Best models were developed for the same two invertebrate metrics even though the geographic regions reflect distinct differences in precipitation, geology, elevation, slope, population density, and land use. With further development, models like these can be used to establish better causal linkages to stream biological attributes or condition, and can be used by researchers or managers to predict biological indicators of stream condition at unsampled sites.
NASA Astrophysics Data System (ADS)
Konapala, Goutam; Mishra, Ashok
2017-12-01
The quantification of spatio-temporal hydroclimatic extreme events is key to water resources planning, disaster mitigation, and building a climate-resilient society. However, quantification of these extreme events has always been a great challenge, which is further compounded by climate variability and change. Recently, complex network theory has been applied in the earth science community to investigate spatial connections among hydrologic fluxes (e.g., rainfall and streamflow) in the water cycle. However, there are limited applications of complex network theory to investigating hydroclimatic extreme events. This article provides an overview of complex networks and extreme events, the event synchronization method, the construction of networks, their statistical significance, and the associated network evaluation metrics. For illustration, we apply the complex network approach to study the spatio-temporal evolution of droughts in the Continental USA (CONUS). A different drought threshold defines a different drought event, with different socio-economic implications; it is therefore worth exploring the role of thresholds in the spatio-temporal evolution of drought through network analysis. In this study, the long-term (1900-2016) Palmer drought severity index (PDSI) was selected for spatio-temporal drought analysis using three network-based metrics (strength, direction, and distance). The results indicate that drought events propagate differently at different thresholds associated with the initiation of drought events. The direction metric indicated that onsets of mild drought events usually propagate in a more spatially clustered and uniform way than onsets of moderate droughts. The distance metric shows that drought events propagate over longer distances in the western part of CONUS than in the eastern part. We believe that the network-aided metrics utilized in this study can be an important tool in advancing our knowledge of drought propagation as well as other hydroclimatic extreme events. Although the propagation of droughts is investigated here using the network approach, process-based (physical) approaches are essential to further understand the dynamics of hydroclimatic extreme events.
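A simplified sketch of the network construction: synchronize binary drought-onset series pairwise, then read off the strength metric as weighted degree. The synchronization rule below is a crude stand-in for the event synchronization method cited above:

```python
import numpy as np

def event_sync(a, b, lag=1):
    """Fraction of events in series a matched by an event in series b
    within +/- lag time steps (a crude event-synchronization proxy)."""
    ta, tb = np.flatnonzero(a), np.flatnonzero(b)
    if len(ta) == 0 or len(tb) == 0:
        return 0.0
    hits = sum(np.any(np.abs(tb - t) <= lag) for t in ta)
    return hits / len(ta)

def strength(events):
    """events: (n_sites, n_months) binary drought-onset indicator.
    Node strength = summed pairwise synchronization with other sites."""
    n = events.shape[0]
    return np.array([sum(event_sync(events[i], events[j])
                         for j in range(n) if j != i)
                     for i in range(n)])

# toy grid of 3 sites over 12 months
ev = np.array([[0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
               [0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0],
               [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1]])
print(strength(ev))
```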
Assessing Nutritional Diversity of Cropping Systems in African Villages
DeClerck, Fabrice; Diru, Willy; Fanzo, Jessica; Gaynor, Kaitlyn; Lambrecht, Isabel; Mudiope, Joseph; Mutuo, Patrick K.; Nkhoma, Phelire; Siriri, David; Sullivan, Clare; Palm, Cheryl A.
2011-01-01
Background In Sub-Saharan Africa, 40% of children under five years of age are chronically undernourished. As new investments and attention galvanize action on African agriculture to reduce hunger, there is an urgent need for metrics that monitor agricultural progress beyond calories produced per capita and address the nutritional diversity essential for human health. In this study we demonstrate how an ecological tool, functional diversity (FD), has the potential to address this need and provide new insights on the nutritional diversity of cropping systems in rural Africa. Methods and Findings Data on edible plant species diversity, food security and diet diversity were collected for 170 farms in three rural settings in Sub-Saharan Africa. Nutritional FD metrics were calculated based on farm species composition and species nutritional composition. Iron and vitamin A deficiency were determined from blood samples of 90 adult women. Nutritional FD metrics summarized the diversity of nutrients provided by the farm and showed variability between farms and villages. Regression of nutritional FD against species richness and expected FD enabled identification of key species that add nutrient diversity to the system and assessed the degree of redundancy for nutrient traits. Nutritional FD analysis demonstrated that, depending on the original composition of species on a farm or in a village, adding or removing individual species can have radically different outcomes for nutritional diversity. While correlations between nutritional FD and food and nutrition indicators were not significant at the household level, associations between these variables were observed at the village level. Conclusion This study provides novel metrics to address nutritional diversity in farming systems and examples of how these metrics can help guide agricultural interventions towards adequate nutrient diversity. New hypotheses on the link between agro-diversity, food security and human nutrition are generated, and strategies for future research are suggested, calling for the integration of agriculture, ecology, nutrition, and socio-economics. PMID:21698127
Kumar, Keshav; Espaillat, Akbar; Cava, Felipe
2017-01-01
Bacterial cells are protected from osmotic and environmental stresses by an exoskeleton-like polymeric structure called peptidoglycan (PG) or murein sacculus. This structure is fundamental for bacterial viability, and thus the mechanisms underlying cell wall assembly and its modulation serve as targets for many of our most successful antibiotics. Therefore, it is now more important than ever to understand the genetics and structural chemistry of bacterial cell walls in order to find new and effective methods of blocking them for the treatment of disease. In the last decades, liquid chromatography and mass spectrometry have been demonstrated to provide the resolution and sensitivity required to characterize the fine chemical structure of PG. However, the large volume of data that these instruments can produce today is difficult to handle without a proper data analysis workflow. Here, we present PG-metrics, a chemometrics-based pipeline that allows fast and easy classification of bacteria according to their muropeptide chromatographic profiles and identification of the underlying PG chemical variability between, e.g., bacterial species, growth conditions, and mutant libraries. The pipeline is successfully validated here using PG samples from different bacterial species and mutants in cell wall proteins. The obtained results clearly demonstrate that the PG-metrics pipeline is a valuable bioanalytical tool that can lead us to cell wall classification and biomarker discovery. PMID:29040278
Vorburger, Robert S; Habeck, Christian G; Narkhede, Atul; Guzman, Vanessa A; Manly, Jennifer J; Brickman, Adam M
2016-01-01
Diffusion tensor imaging suffers from an intrinsically low signal-to-noise ratio. Bootstrap algorithms have been introduced to provide a non-parametric method to estimate the uncertainty of the measured diffusion parameters. To quantify the variability of the principal diffusion direction, bootstrap-derived metrics such as the cone of uncertainty have been proposed. However, bootstrap-derived metrics are not independent of the underlying diffusion profile. A higher mean diffusivity causes a smaller signal-to-noise ratio and, thus, increases the measurement uncertainty. Moreover, the goodness of the tensor model, which relies strongly on the complexity of the underlying diffusion profile, influences bootstrap-derived metrics as well. The presented simulations clearly depict the cone of uncertainty as a function of the underlying diffusion profile. Since the relationship between the cone of uncertainty and common diffusion parameters, such as the mean diffusivity and the fractional anisotropy, is not linear, the cone of uncertainty has a different sensitivity. In vivo analysis of the fornix reveals the cone of uncertainty to be a predictor of memory function among older adults. No significant correlation occurs with the common diffusion parameters. The present work not only demonstrates the cone of uncertainty as a function of the actual diffusion profile, but also discloses the cone of uncertainty as a sensitive predictor of memory function. Future studies should incorporate bootstrap-derived metrics to provide a more comprehensive analysis.
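Given bootstrap resamples of the principal eigenvector, the cone of uncertainty can be summarized as the angle containing, say, 95% of the resampled directions around their mean orientation. A sketch assuming the per-resample tensor fits are already done; the percentile convention is one common choice, not necessarily the paper's:

```python
import numpy as np

def cone_of_uncertainty(eigvecs, q=95):
    """Angle (degrees) of the cone around the mean principal diffusion
    direction containing q% of bootstrap directions.
    eigvecs: (n_boot, 3) unit vectors (sign-ambiguous)."""
    # resolve the antipodal sign ambiguity via the mean dyadic tensor
    dyad = np.einsum('ni,nj->ij', eigvecs, eigvecs) / len(eigvecs)
    mean_dir = np.linalg.eigh(dyad)[1][:, -1]        # top eigenvector
    cosines = np.abs(eigvecs @ mean_dir)
    angles = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
    return np.percentile(angles, q)

# toy resamples scattered around the z-axis
rng = np.random.default_rng(4)
v = rng.normal([0.0, 0.0, 1.0], 0.05, size=(500, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
print(cone_of_uncertainty(v))   # a small cone, a few degrees
```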
Li, Aihua; Dhakal, Shital; Glenn, Nancy F.; Spaete, Luke P.; Shinneman, Douglas; Pilliod, David S.; Arkle, Robert; McIlroy, Susan
2017-01-01
Our study objectives were to model the aboveground biomass in a xeric shrub-steppe landscape with airborne light detection and ranging (Lidar) and explore the uncertainty associated with the models we created. We incorporated vegetation vertical structure information obtained from Lidar with ground-measured biomass data, allowing us to scale shrub biomass from small field sites (1 m subplots and 1 ha plots) to a larger landscape. A series of airborne Lidar-derived vegetation metrics were trained and linked with the field-measured biomass in Random Forests (RF) regression models. A Stepwise Multiple Regression (SMR) model was also explored as a comparison. Our results demonstrated that the important predictors from Lidar-derived metrics had a strong correlation with field-measured biomass in the RF regression models with a pseudo R2 of 0.76 and RMSE of 125 g/m2 for shrub biomass and a pseudo R2 of 0.74 and RMSE of 141 g/m2 for total biomass, and a weak correlation with field-measured herbaceous biomass. The SMR results were similar but slightly better than RF, explaining 77–79% of the variance, with RMSE ranging from 120 to 129 g/m2 for shrub and total biomass, respectively. We further explored the computational efficiency and relative accuracies of using point cloud and raster Lidar metrics at different resolutions (1 m to 1 ha). Metrics derived from the Lidar point cloud processing led to improved biomass estimates at nearly all resolutions in comparison to raster-derived Lidar metrics. Only at 1 m were the results from the point cloud and raster products nearly equivalent. The best Lidar prediction models of biomass at the plot-level (1 ha) were achieved when Lidar metrics were derived from an average of fine resolution (1 m) metrics to minimize boundary effects and to smooth variability. Overall, both RF and SMR methods explained more than 74% of the variance in biomass, with the most important Lidar variables being associated with vegetation structure and statistical measures of this structure (e.g., standard deviation of height was a strong predictor of biomass). Using our model results, we developed spatially-explicit Lidar estimates of total and shrub biomass across our study site in the Great Basin, U.S.A., for monitoring and planning in this imperiled ecosystem.
Identifying the controls of wildfire activity in Namibia using multivariate statistics
NASA Astrophysics Data System (ADS)
Mayr, Manuel; Le Roux, Johan; Samimi, Cyrus
2015-04-01
Despite large areas of Namibia being unaffected by fire due to aridity, substantial burning in the northern and north-eastern parts of the country is observed every year. Within the fire-affected regions, a strong spatial and inter-annual variability characterizes the dry-season fire situation. In order to understand these patterns, it is critical to identify the causative factors behind fire occurrence and to examine their interactions in detail. Furthermore, most studies dealing with causative factor examination focus either on the local or the regional scale. However, these scales seem inappropriate from a management perspective, as fire-related strategic action plans are most often set up nationwide. Here, we present an examination of the fire regimes of Namibia based on a dataset compiled by Le Roux (2011). A decade-spanning fire record (1994-2003) derived from NOAA's Advanced Very High Resolution Radiometer (AVHRR) imagery was used to generate four fire regime metrics (Burned Area, Fire Season Length, Month of Peak Fire Season, and Fire Return Period) and quantitative information on vegetation and phenology derived from Normalized Difference Vegetation Index (NDVI) time series. Further variables contained in this dataset relate to climate, biodiversity, and human activities. Le Roux (2011) analyzed the correlations between the fire metrics mentioned above and the predictor variables. We hypothesize that linear correlations (as estimated by correlation coefficients) simplify the interactions between response and predictor variables. For instance, moderate population densities could induce the highest number of fires, whereas the complete absence of humans removes one major source of ignition. Around highly populated areas, in contrast, fuels are usually reduced and space is more fragmented; thus, the initiation and spread of a potential fire could likewise be inhibited. From a total of over 40 explanatory variables, we will initially use data mining techniques to select a plausible set of variables by their explanatory value and to remove redundancy. We will then apply two multivariate statistical methods suitable for a large variety of data types and frequently used for (non-linear) causative factor identification: Non-metric Multidimensional Scaling (NMDS) and Regression Trees. The expected value of these analyses is i) to determine the most important predictor variables of fire activity in Namibia, ii) to decipher their complex interactions in driving fire variability in Namibia, and iii) to compare the performance of two state-of-the-art statistical methods. References: Le Roux, J. (2011): The effect of land use practices on the spatial and temporal characteristics of savanna fires in Namibia. Doctoral thesis, University of Erlangen-Nuremberg, Germany, 155 pages.
Nuutinen, Mikko; Leskelä, Riikka-Leena; Suojalehto, Ella; Tirronen, Anniina; Komssi, Vesa
2017-04-13
In previous years, a substantial number of studies have identified statistically important predictors of nursing home admission (NHA). However, as far as we know, these analyses have been done at the population level; no prior research has analysed the prediction accuracy of an NHA model for individuals. This study is an analysis of 3056 longer-term home care customers in the city of Tampere, Finland. Data were collected from the records of social and health service usage and the RAI-HC (Resident Assessment Instrument - Home Care) assessment system between January 2011 and September 2015. The aim was to find the most efficient variable subsets for predicting NHA for individuals and to validate their accuracy. The variable subsets for predicting NHA were searched by the sequential forward selection (SFS) method, a variable ranking metric, and the classifiers of logistic regression (LR), support vector machine (SVM) and Gaussian naive Bayes (GNB). The validation of the results was ensured using randomly balanced data sets and cross-validation. The primary performance metrics for the classifiers were the prediction accuracy and AUC (average area under the curve). The LR and GNB classifiers achieved 78% accuracy for predicting NHA. The most important variables were RAI MAPLe (Method for Assigning Priority Levels), functional impairment (RAI IADL, Instrumental Activities of Daily Living), cognitive impairment (RAI CPS, Cognitive Performance Scale), memory disorders (diagnoses G30-G32 and F00-F03), and the use of community-based health services and prior hospital use (emergency visits and periods of care). The accuracy of the classifier for individuals was high enough to convince the officials of the city of Tampere to integrate the predictive model based on the findings of this study into the home care information system. Further work needs to be done to evaluate variables that are modifiable and responsive to interventions.
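Sequential forward selection wrapped around a classifier is available off the shelf in scikit-learn; a compact sketch on synthetic stand-in data (the real study used RAI-HC and service-usage variables):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# stand-in for the RAI/service-usage feature matrix and NHA labels
X, y = make_classification(n_samples=500, n_features=12, n_informative=4,
                           random_state=0)

clf = LogisticRegression(max_iter=1000)
sfs = SequentialFeatureSelector(clf, n_features_to_select=5,
                                direction="forward", scoring="roc_auc", cv=5)
sfs.fit(X, y)

auc = cross_val_score(clf, sfs.transform(X), y,
                      scoring="roc_auc", cv=5).mean()
print(sfs.get_support(), round(auc, 3))
```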
Reducing Annotation Effort Using Generalized Expectation Criteria
2007-11-30
constraints additionally consider input variables. Active learning is a related problem in which the learner can choose the particular instances to be...labeled. In pool-based active learning [Cohn et al., 1994], the learner has access to a set of unlabeled instances, and can choose the instance that...has the highest expected utility according to some metric. A standard pool-based active learning method is uncertainty sampling [Lewis and Catlett
Models of Marine Fish Biodiversity: Assessing Predictors from Three Habitat Classification Schemes.
Yates, Katherine L; Mellin, Camille; Caley, M Julian; Radford, Ben T; Meeuwig, Jessica J
2016-01-01
Prioritising biodiversity conservation requires knowledge of where biodiversity occurs. Such knowledge, however, is often lacking. New technologies for collecting biological and physical data coupled with advances in modelling techniques could help address these gaps and facilitate improved management outcomes. Here we examined the utility of environmental data, obtained using different methods, for developing models of both uni- and multivariate biodiversity metrics. We tested which biodiversity metrics could be predicted best and evaluated the performance of predictor variables generated from three types of habitat data: acoustic multibeam sonar imagery, predicted habitat classification, and direct observer habitat classification. We used boosted regression trees (BRT) to model metrics of fish species richness, abundance and biomass, and multivariate regression trees (MRT) to model biomass and abundance of fish functional groups. We compared model performance using different sets of predictors and estimated the relative influence of individual predictors. Models of total species richness and total abundance performed best; those developed for endemic species performed worst. Abundance models performed substantially better than corresponding biomass models. In general, BRT and MRTs developed using predicted habitat classifications performed less well than those using multibeam data. The most influential individual predictor was the abiotic categorical variable from direct observer habitat classification and models that incorporated predictors from direct observer habitat classification consistently outperformed those that did not. Our results show that while remotely sensed data can offer considerable utility for predictive modelling, the addition of direct observer habitat classification data can substantially improve model performance. Thus it appears that there are aspects of marine habitats that are important for modelling metrics of fish biodiversity that are not fully captured by remotely sensed data. As such, the use of remotely sensed data to model biodiversity represents a compromise between model performance and data availability.
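The following hedged sketch shows the BRT step of such a workflow using scikit-learn's GradientBoostingRegressor in place of the R gbm/dismo tooling common in this literature; the predictors and the synthetic richness response are hypothetical stand-ins for the multibeam and observer-classification variables.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 400
    depth = rng.uniform(5, 60, n)        # multibeam-derived depth (m), hypothetical
    rugosity = rng.uniform(1.0, 2.5, n)  # seabed rugosity, hypothetical
    habitat = rng.integers(0, 4, n)      # observer habitat class, hypothetical
    # Synthetic richness response; habitat class treated as numeric for simplicity
    richness = 5 + 2.0 * rugosity + (habitat == 2) * 4 + rng.poisson(2, n)

    X = np.column_stack([depth, rugosity, habitat])
    brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01,
                                    max_depth=3, subsample=0.75, random_state=0)
    print("CV R^2:", cross_val_score(brt, X, richness, cv=5).mean())
    brt.fit(X, richness)
    print("relative influence:", brt.feature_importances_.round(3))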
Autocalibrating motion-corrected wave-encoding for highly accelerated free-breathing abdominal MRI.
Chen, Feiyu; Zhang, Tao; Cheng, Joseph Y; Shi, Xinwei; Pauly, John M; Vasanawala, Shreyas S
2017-11-01
To develop a motion-robust wave-encoding technique for highly accelerated free-breathing abdominal MRI. A comprehensive 3D wave-encoding-based method was developed to enable fast free-breathing abdominal imaging: (a) auto-calibration for wave-encoding was designed to avoid an extra scan for coil sensitivity measurement; (b) intrinsic butterfly navigators were used to track respiratory motion; (c) variable-density sampling was included to enable compressed sensing; (d) golden-angle radial-Cartesian hybrid view-ordering was incorporated to improve motion robustness; and (e) localized rigid motion correction was combined with parallel imaging compressed sensing reconstruction to reconstruct the highly accelerated wave-encoded datasets. The proposed method was tested on six subjects, and image quality was compared with standard accelerated Cartesian acquisition both with and without respiratory triggering. Inverse gradient entropy and normalized gradient squared metrics were calculated, and paired t-tests were used to test whether image quality improved. For respiratory-triggered scans, wave-encoding significantly reduced residual aliasing and blurring compared with standard Cartesian acquisition (P < 0.05 for both metrics). For non-respiratory-triggered scans, the proposed method yielded significantly better motion correction compared with standard motion-corrected Cartesian acquisition (P < 0.01 for both metrics). The proposed methods can reduce motion artifacts and improve overall image quality of highly accelerated free-breathing abdominal MRI. Magn Reson Med 78:1757-1766, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Distinguishability notion based on Wootters statistical distance: Application to discrete maps
NASA Astrophysics Data System (ADS)
Gomez, Ignacio S.; Portesi, M.; Lamberti, P. W.
2017-08-01
We study the distinguishability notion given by Wootters for states represented by probability density functions. It has the particularity that it can also be used to define a statistical distance in chaotic one-dimensional maps. Based on that definition, we provide a metric d̄ for an arbitrary discrete map. Moreover, from d̄ we associate a metric space with each invariant density of a given map, which turns out to be the set of all distinguishable points when the number of iterations of the map tends to infinity. We also give a characterization of the wandering set of a map in terms of the metric d̄, which allows us to identify the dissipative regions in phase space. We illustrate the results for the logistic and circle maps, numerically and analytically, and we obtain d̄ and the wandering set for some characteristic values of their parameters. Finally, an extension of the associated metric space to arbitrary probability distributions (not necessarily invariant densities) is given along with some consequences. The statistical properties of distributions given by histograms are characterized in terms of the cardinality of the associated metric space. For two conjugate variables, the uncertainty principle is expressed in terms of the diameters of the metric spaces associated with those variables.
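For discrete distributions p and q, Wootters' statistical distance reduces to the Bhattacharyya angle arccos(Σ_i √(p_i q_i)). The sketch below (plain NumPy, illustrative only) evaluates it between normalized histograms of two logistic-map orbits; the parameter values are arbitrary choices, not those studied in the paper.

    import numpy as np

    def wootters_distance(p, q):
        # Bhattacharyya angle between the square-root embeddings of p and q
        return np.arccos(np.clip(np.sum(np.sqrt(p * q)), -1.0, 1.0))

    def logistic_orbit_hist(r, x0, n_iter=100_000, bins=64):
        x = x0
        xs = np.empty(n_iter)
        for i in range(n_iter):
            x = r * x * (1 - x)
            xs[i] = x
        h, _ = np.histogram(xs, bins=bins, range=(0, 1))
        return h / h.sum()

    p = logistic_orbit_hist(4.0, 0.2)   # fully chaotic regime
    q = logistic_orbit_hist(3.6, 0.2)   # different parameter, hypothetical choice
    print("d(p, q) =", wootters_distance(p, q))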
Alternative metrics for real-ear-to-coupler difference average values in children.
Blumsack, Judith T; Clark-Lewis, Sandra; Watts, Kelli M; Wilson, Martha W; Ross, Margaret E; Soles, Lindsey; Ennis, Cydney
2014-10-01
Ideally, individual real-ear-to-coupler difference (RECD) measurements are obtained for pediatric hearing instrument-fitting purposes. When RECD measurements cannot be obtained, age-related average RECDs based on typically developing North American children are used. Evidence suggests that these values may not be appropriate for populations of children with delayed growth patterns. The purpose of this study was to determine whether another metric, such as head circumference, height, or weight, can be used to predict RECDs in children. The design was a correlational study. The sample consisted of 68 North American children (ages 3-11 yr). Height, weight, head circumference, and RECDs in both ears were measured, and RECDs were analyzed for both ears at 500, 750, 1000, 1500, 2000, 3000, 4000, and 6000 Hz. A backward-elimination multiple-regression analysis was used to determine whether age, height, weight, and/or head circumference are significant predictors of RECDs. For the left ear, head circumference was retained as the only statistically significant variable in the final model. For the right ear, head circumference was retained as the only statistically significant independent variable at all frequencies except 2000 and 4000 Hz; at these latter frequencies, weight was retained as the only statistically significant independent variable after all other variables were eliminated. Head circumference can be considered as a metric for RECD prediction in children when individual measurements cannot be obtained. In developing countries, where equipment is often unavailable and stunted growth can reduce the value of using age as a metric, head circumference can be considered as an alternative metric for the prediction of RECDs. American Academy of Audiology.
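A minimal backward-elimination sketch with statsmodels is shown below; the data are synthetic (the study's clinical measurements are not reproduced here), and the 0.05 retention threshold is an assumption.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 68
    df = pd.DataFrame({
        "age": rng.uniform(3, 11, n),        # years
        "height": rng.uniform(90, 150, n),   # cm
        "weight": rng.uniform(14, 45, n),    # kg
        "head_circ": rng.uniform(48, 56, n), # cm
    })
    # Synthetic RECD at 1 kHz (dB), driven by head circumference by construction
    df["recd_1k"] = 2 + 0.5 * df["head_circ"] + rng.normal(0, 1.5, n)

    X, y = df[["age", "height", "weight", "head_circ"]], df["recd_1k"]
    while True:
        model = sm.OLS(y, sm.add_constant(X)).fit()
        pvals = model.pvalues.drop("const")
        if pvals.max() < 0.05 or len(pvals) == 1:
            break
        X = X.drop(columns=pvals.idxmax())  # eliminate least significant predictor
    print(model.summary().tables[1])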
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.
2017-12-01
Complexity of hydrogeological systems arises from multi-scale heterogeneity and insufficient measurements of their underlying parameters, such as hydraulic conductivity and porosity. An inadequate characterization of hydrogeological properties can significantly decrease the trustworthiness of numerical models that predict groundwater flow and solute transport. Therefore, a variety of data assimilation methods have been proposed to estimate hydrogeological parameters from spatially scarce data by incorporating the governing physical models. In this work, we propose a novel framework for evaluating the performance of these estimation methods. We focus on the Ensemble Kalman Filter (EnKF), a widely used data assimilation technique that reconciles multiple sources of measurements to sequentially estimate model parameters such as hydraulic conductivity. Several methods have been used in the literature to quantify the accuracy of the estimations obtained by EnKF, including rank histograms, RMSE and ensemble spread. However, these commonly used methods disregard the spatial information and variability of geological formations, which can cause hydraulic conductivity fields with very different spatial structures to have similar histograms or RMSE. We propose a vision-based approach that quantifies the accuracy of estimations by considering the spatial structure embedded in the estimated fields. Our approach adapts a metric from computer vision, Color Coherence Vectors (CCV), to evaluate the accuracy of fields estimated by EnKF. CCV is a histogram-based technique for comparing images that incorporates spatial information. We represent estimated fields as digital three-channel images and use CCV to compare and quantify the accuracy of estimations. The sensitivity of CCV to spatial information makes it a suitable metric for assessing the performance of spatial data assimilation techniques. Under various configurations of the data assimilation method, such as the number, layout, and type of measurements, we compare the performance of CCV with other metrics such as RMSE. By simulating hydrogeological processes using estimated and true fields, we observe that CCV outperforms the existing evaluation metrics.
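The sketch below gives a simplified, single-channel reading of the idea: pixels of a gridded field are binned by value, each bin's mass is split into coherent (inside a large connected patch) and incoherent parts, and two fields are compared by an L1 distance between their coherence vectors. This is an illustration of the CCV concept, not the authors' implementation; the bin count and coherence threshold tau are assumptions.

    import numpy as np
    from scipy import ndimage

    def coherence_vector(field, n_bins=8, tau=25):
        # Bins are computed per field for simplicity
        edges = np.linspace(field.min(), field.max(), n_bins + 1)
        labels = np.clip(np.digitize(field, edges) - 1, 0, n_bins - 1)
        coh, inc = np.zeros(n_bins), np.zeros(n_bins)
        for b in range(n_bins):
            lab, nlab = ndimage.label(labels == b)        # connected patches
            sizes = ndimage.sum(labels == b, lab, index=np.arange(1, nlab + 1))
            coh[b] = sizes[sizes >= tau].sum()            # large-patch pixels
            inc[b] = sizes[sizes < tau].sum()             # scattered pixels
        return coh, inc

    def ccv_distance(f, g):
        cf, inf_ = coherence_vector(f)
        cg, ing = coherence_vector(g)
        return np.abs(cf - cg).sum() + np.abs(inf_ - ing).sum()

    rng = np.random.default_rng(3)
    true_field = ndimage.gaussian_filter(rng.normal(size=(64, 64)), 4)
    estimate = ndimage.gaussian_filter(rng.normal(size=(64, 64)), 4)
    print("CCV distance:", ccv_distance(true_field, estimate))

Two fields with identical value histograms but different patch structure get a nonzero CCV distance, which is exactly what plain RMSE or histogram comparisons miss.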
A priori discretization error metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar
2016-12-01
Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
An evaluation of non-metric cranial traits used to estimate ancestry in a South African sample.
L'Abbé, E N; Van Rooyen, C; Nawrocki, S P; Becker, P J
2011-06-15
Establishing ancestry from a skeleton for forensic purposes has been shown to be difficult. The purpose of this paper is to address the application of thirteen non-metric traits to estimate ancestry in three South African groups, namely White, Black and "Coloured". In doing so, the frequency distribution of the thirteen non-metric traits among South Africans is presented; the relationships of these non-metric traits with ancestry, sex and age at death are evaluated; and Kappa statistics are utilized to assess inter- and intra-rater reliability. Crania of 520 known individuals were obtained from four skeletal samples in South Africa: the Pretoria Bone Collection, the Raymond A. Dart Collection, the Kirsten Collection and the Student Bone Collection from the University of the Free State. Average age at death was 51, with an age range between 18 and 90. Thirteen commonly used non-metric traits from the face and jaw were scored; definitions and illustrations were taken from Hefner, Bass, and Hauser and De Stephano. Frequency distributions, ordinal regression and Cohen's Kappa statistics were performed to assess population variation and repeatability. Frequency distributions were highly variable among South Africans. Twelve of the 13 variables had a statistically significant relationship with ancestry. Sex significantly affected only one variable, inter-orbital breadth, and age at death affected two (anterior nasal spine and alveolar prognathism). The interaction of ancestry and sex independently affected three variables (nasal bone contour, nasal breadth, and interorbital breadth). Seven traits had moderate to excellent repeatability, while poor scoring consistency was noted for six variables. Difficulties in repeating several of the trait scores may indicate either a need to refine the definitions, or that these character states do not adequately describe the observable morphology in the population. The application of the traditional experience-based approach for estimating ancestry in forensic casework is problematic. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jung, E.; Yoon, H.
2016-12-01
Natural disasters are a substantial source of social and economic damage around the globe. The damage is larger when such catastrophic events happen in urbanized areas where wealth is concentrated. Disasters cause losses in real estate assets, incurring additional costs for repair and maintenance of the properties. For this reason, natural hazard risk such as flooding and landslides is regarded as one of the important determinants of homebuyers' choice and preference. In this research, we aim to reveal whether past flood records affect real estate market values in Busan, Korea in 2014, under the hypothesis that homebuyers' perception of natural hazard is reflected in housing values, using the Mahalanobis-metric matching method. Unlike the conventionally used hedonic pricing model for estimating the capitalization of flood risk into property sales prices, the analytical method we adopt here enables inferring causal effects by efficiently controlling for observed/unobserved omitted variable bias. This matching approach pairs each inundated property (treatment) with the non-inundated property (control) at the closest Mahalanobis distance and compares their effects on residential property sales price (outcome). As a result, we expect larger price discounts for inundated properties than for comparable non-inundated properties. This research will be valuable in establishing mitigation policies for future climate change, relieving possible negative economic consequences of disasters by estimating how people perceive and respond to natural hazards. This work was supported by the Korea Environmental Industry and Technology Institute (KEITI) under Grant (No. 2014-001-310007).
Spatial analysis of groundwater levels using Fuzzy Logic and geostatistical tools
NASA Astrophysics Data System (ADS)
Theodoridou, P. G.; Varouchakis, E. A.; Karatzas, G. P.
2017-12-01
The spatial variability evaluation of the water table of an aquifer provides useful information for water resources management plans. Geostatistical methods are often employed to map the free surface of an aquifer. In geostatistical analysis using Kriging techniques, the selection of the optimal variogram is very important for method performance. This work compares three different criteria for assessing how well a theoretical variogram fits the experimental one: the Least Squares Sum method, the Akaike Information Criterion and Cressie's Indicator. Moreover, different distance metrics, such as the Euclidean, Minkowski, Manhattan, Canberra and Bray-Curtis metrics, are applied to calculate the distance between observation and prediction points, which affects both the variogram calculation and the Kriging estimator. A Fuzzy Logic System is then applied to define the appropriate neighbors for each estimation point used in the Kriging algorithm. The two criteria used during the Fuzzy Logic process are the distance between observation and estimation points and the groundwater level value at each observation point. The proposed techniques are applied to a data set of 250 hydraulic head measurements distributed over an alluvial aquifer. The analysis showed that the power-law variogram model and the Manhattan distance metric within ordinary kriging provide the best results when the comprehensive geostatistical analysis process is applied. On the other hand, the Fuzzy Logic approach leads to a Gaussian variogram model and significantly improves the estimation performance. The two different variogram models can be explained in terms of a fractional Brownian motion approach and of aquifer behavior at the local scale. Finally, maps of the spatial variability of hydraulic head and of prediction uncertainty are constructed with the two different approaches, comparing their advantages and drawbacks.
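A condensed sketch of the variogram-selection machinery is given below (NumPy/SciPy, not the authors' code): an experimental variogram is built under a chosen distance metric ("cityblock" is SciPy's name for Manhattan), a power-law model is fitted by least squares, and an AIC value is computed for comparison across candidate models. The synthetic head field is a stand-in for the 250 real measurements.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(4)
    pts = rng.uniform(0, 10, size=(250, 2))                 # locations (km)
    head = 50 + 0.8 * pts[:, 0] + rng.normal(0, 0.5, 250)   # hydraulic head (m)

    def experimental_variogram(pts, z, metric="cityblock", n_lags=15):
        d = pdist(pts, metric=metric)                        # pairwise distances
        g = 0.5 * pdist(z[:, None], metric="sqeuclidean")    # semivariances
        edges = np.linspace(0, d.max() / 2, n_lags + 1)
        idx = np.digitize(d, edges) - 1
        lags = 0.5 * (edges[:-1] + edges[1:])
        gamma = np.array([g[idx == i].mean() if np.any(idx == i) else np.nan
                          for i in range(n_lags)])
        return lags, gamma

    def power_law(h, c, a):                                  # valid for 0 < a < 2
        return c * h ** a

    lags, gamma = experimental_variogram(pts, head)
    mask = ~np.isnan(gamma)
    popt, _ = curve_fit(power_law, lags[mask], gamma[mask], p0=[1.0, 1.0])
    resid = gamma[mask] - power_law(lags[mask], *popt)
    aic = mask.sum() * np.log(np.mean(resid ** 2)) + 2 * len(popt)
    print("fitted (c, a):", popt, " AIC:", aic)

Repeating the fit for other models (e.g., Gaussian) and metrics, and keeping the lowest AIC, mirrors the model-comparison step described above.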
Winter wheat mapping combining variations before and after estimated heading dates
NASA Astrophysics Data System (ADS)
Qiu, Bingwen; Luo, Yuhan; Tang, Zhenghong; Chen, Chongcheng; Lu, Difei; Huang, Hongyu; Chen, Yunzhi; Chen, Nan; Xu, Weiming
2017-01-01
Accurate and updated information on winter wheat distribution is vital for food security. The intra-class variability of the temporal profiles of vegetation indices presents substantial challenges to current time series-based approaches. This study developed a new method to identify winter wheat over large regions through a transformation and metric-based approach. First, the trend surfaces were established to identify key phenological parameters of winter wheat based on altitude and latitude with references to crop calendar data from the agro-meteorological stations. Second, two phenology-based indicators were developed based on the EVI2 differences between estimated heading and seedling/harvesting dates and the change amplitudes. These two phenology-based indicators revealed variations during the estimated early and late growth stages. Finally, winter wheat data were extracted based on these two metrics. The winter wheat mapping method was applied to China based on the 250 m 8-day composite Moderate Resolution Imaging Spectroradiometer (MODIS) 2-band Enhanced Vegetation Index (EVI2) time series datasets. Accuracy was validated with field survey data, agricultural census data, and Landsat-interpreted results in test regions. When evaluated with 653 field survey sites and Landsat image interpreted data, the overall accuracy of MODIS-derived images in 2012-2013 was 92.19% and 88.86%, respectively. The MODIS-derived winter wheat areas accounted for over 82% of the variability at the municipal level when compared with agricultural census data. The winter wheat mapping method developed in this study demonstrates great adaptability to intra-class variability of the vegetation temporal profiles and has great potential for further applications to broader regions and other types of agricultural crop mapping.
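A hedged sketch of the two indicators follows; the composite indices for the seedling, heading, and harvesting dates and the classification thresholds are hypothetical, since in the study they come from the fitted trend surfaces and crop calendars.

    import numpy as np

    def winter_wheat_indicators(evi2, seedling_i, heading_i, harvest_i):
        """evi2: (n_pixels, n_composites) 8-day EVI2 series for one season."""
        early_rise = evi2[:, heading_i] - evi2[:, seedling_i]  # seedling -> heading
        late_drop = evi2[:, heading_i] - evi2[:, harvest_i]    # heading -> harvest
        return early_rise, late_drop

    rng = np.random.default_rng(5)
    evi2 = np.clip(rng.normal(0.3, 0.1, size=(1000, 46)), 0, 1)  # synthetic series
    rise, drop = winter_wheat_indicators(evi2, seedling_i=10,
                                         heading_i=25, harvest_i=33)
    # Flag pixels whose seasonal rise and drop both exceed hypothetical amplitudes
    is_wheat = (rise > 0.25) & (drop > 0.2)
    print("candidate winter wheat pixels:", int(is_wheat.sum()))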
The Model for Final Stage of Gravitational Collapse Massless Scalar Field
NASA Astrophysics Data System (ADS)
Gladush, V. D.; Mironin, D. V.
It is known that in general relativity, for some spherically symmetric initial conditions, a massless scalar field (SF) undergoes gravitational collapse (Choptuik, 1989) and a black hole (BH) arises. According to Bekenstein, a BH has no scalar hair, so the SF is completely hidden under the horizon. Thus, the study of the final stage of the gravitational collapse of a SF reduces to constructing a solution of Einstein's equations describing the evolution of a SF inside the BH. In this work, we build the Lagrangian for the scalar and gravitational fields in the spherically symmetric case, when the metric coefficients and the SF depend only on time. In this case, it is convenient to use the methods of classical mechanics. Since the metric allows an arbitrary transformation of time, the corresponding field variable (g_00) enters the Lagrangian without a time derivative. It is a non-dynamical variable and appears in the Lagrangian as a Lagrange multiplier; variation of the action with respect to this variable gives the constraint. It turns out that the Hamiltonian is proportional to the constraint, and so it vanishes. The corresponding Hamilton-Jacobi equation is easily integrated, and from it we find the relation between the SF and the metric. To restore the time dependence we use the equation ∂L/∂q̇ = ∂S/∂q. Together with a gauge condition, this allows us to find the solution. Thus, we find the evolution of the SF inside the BH, which describes the final stage of the gravitational collapse of a SF. It turns out that the BH mass M is related to the scalar charge G of the SF inside the BH by M = G/(2√κ).
Shepherd, T; Teras, M; Beichel, RR; Boellaard, R; Bruynooghe, M; Dicken, V; Gooding, MJ; Julyan, PJ; Lee, JA; Lefèvre, S; Mix, M; Naranjo, V; Wu, X; Zaidi, H; Zeng, Z; Minn, H
2017-01-01
The impact of positron emission tomography (PET) on radiation therapy is held back by poor methods of defining functional volumes of interest. Many new software tools are being proposed for contouring target volumes but the different approaches are not adequately compared and their accuracy is poorly evaluated due to the ill-definition of ground truth. This paper compares the largest cohort to date of established, emerging and proposed PET contouring methods, in terms of accuracy and variability. We emphasize spatial accuracy and present a new metric that addresses the lack of unique ground truth. Thirty methods are used at 13 different institutions to contour functional volumes of interest in clinical PET/CT and a custom-built PET phantom representing typical problems in image guided radiotherapy. Contouring methods are grouped according to algorithmic type, level of interactivity and how they exploit structural information in hybrid images. Experiments reveal benefits of high levels of user interaction, as well as simultaneous visualization of CT images and PET gradients to guide interactive procedures. Method-wise evaluation identifies the danger of over-automation and the value of prior knowledge built into an algorithm. PMID:22692898
NASA Astrophysics Data System (ADS)
Buzan, J. R.; Oleson, K.; Huber, M.
2014-08-01
We implement and analyze 13 different metrics (4 moist thermodynamic quantities and 9 heat stress metrics) in the Community Land Model (CLM4.5), the land surface component of the Community Earth System Model (CESM). We call these routines the HumanIndexMod. These heat stress metrics embody three philosophical approaches: comfort, physiology, and empirically based algorithms. The metrics are directly connected to the CLM4.5 BareGroundFluxesMod, CanopyFluxesMod, SLakeFluxesMod, and UrbanMod modules in order to differentiate between the distinct regimes even within one gridcell. This allows CLM4.5 to calculate the instantaneous heat stress at every model time step, for every land surface type, capturing all aspects of non-linearity in moisture-temperature covariance. Secondary modules for initialization and archiving are modified to generate the metrics as standard output. All of the metrics implemented depend on the covariance of near-surface atmospheric variables: temperature, pressure, and humidity. Accurate wet bulb temperatures are critical for quantifying heat stress (they are used by 5 of the 9 heat stress metrics). Unfortunately, the moist thermodynamic calculations needed for accurate wet bulb temperatures are not in CLM4.5. To remedy this, we incorporated comprehensive water vapor calculations into CLM4.5. The three advantages of adding these metrics to CLM4.5 are (1) improved thermodynamic calculations within climate models, (2) quantification of human heat stress, and (3) applicability of these metrics to other animals as well as industrial applications. Additionally, an offline version of the HumanIndexMod is available for applications with weather and climate datasets. Examples of such applications are the high temporal resolution CMIP5 archived data, Weather Research and Forecasting models, CLM4.5 flux tower simulations (or other land surface model validation studies), and local weather station data analysis. To demonstrate the capabilities of the HumanIndexMod, we analyze the top 1% of heat stress events from 1901-2010 at a 4 × daily resolution from a global CLM4.5 simulation. We cross-compare these events with the input moisture and temperature conditions and with each metric. Our results show that heat stress may be divided into two regimes: arid and non-arid. The highest heat stress values are in areas with strong convection (±30° latitude). Equatorial regions have low variability in heat stress values (±20° latitude). Arid regions have large variability in extreme heat stress as compared to the low latitudes.
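As a self-contained illustration of the wet-bulb step, the sketch below uses Stull's (2011) empirical approximation, which assumes roughly standard sea-level pressure; the HumanIndexMod itself implements more comprehensive moist thermodynamics, so this is a stand-in, not the module's code.

    import numpy as np

    def wet_bulb_stull(t_c, rh_pct):
        """Wet-bulb temperature (deg C) from air temperature (deg C) and RH (%).

        Stull (2011) empirical fit; valid near standard sea-level pressure.
        """
        return (t_c * np.arctan(0.151977 * np.sqrt(rh_pct + 8.313659))
                + np.arctan(t_c + rh_pct)
                - np.arctan(rh_pct - 1.676331)
                + 0.00391838 * rh_pct ** 1.5 * np.arctan(0.023101 * rh_pct)
                - 4.686035)

    # Hot, humid conditions give a high wet-bulb value, i.e. high heat stress
    print(wet_bulb_stull(35.0, 60.0))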
Hawkins, Keith A; Jennings, Danna; Vincent, Andrea S; Gilliland, Kirby; West, Adrienne; Marek, Kenneth
2012-08-01
The Automated Neuropsychological Assessment Metrics battery-4 for PD (ANAM4-PD) offers the promise of a computerized approach to cognitive assessment. To assess its utility, the ANAM4-PD was administered to 72 PD patients and 24 controls along with a traditional battery. Reliability was assessed by retesting 26 patients. The cognitive efficiency score (CES; a global score) exhibited high reliability (r = 0.86); constituent variables exhibited lower reliability. The CES correlated strongly with the traditional battery global score, but displayed weaker relationships to UPDRS scores than the traditional score. Multivariate analysis of variance revealed a significant difference between the patient and control groups in ANAM4-PD performance, with three ANAM4-PD tests, math, tower, and pursuit tracking, displaying sizeable differences. In discriminant analyses these variables were as effective as the total ANAM4-PD in classifying cases designated as impaired based on traditional variables. Principal components analyses uncovered fewer factors in the ANAM4-PD relative to the traditional battery. ANAM4-PD variables correlated at higher levels with traditional motor and processing speed variables than with untimed executive, intellectual, or memory variables. The ANAM4-PD displays high global reliability but variable subtest reliability. The battery assesses a narrower range of cognitive functions than traditional tests, and discriminates between patients and controls less effectively. Three ANAM4-PD tests, pursuit tracking, math, and tower, performed as well as the total ANAM4-PD in classifying patients as cognitively impaired. These findings could guide the refinement of the ANAM4-PD as an efficient method of screening for mild to moderate cognitive deficits in PD patients. Copyright © 2012 Elsevier Ltd. All rights reserved.
Classification of forest land attributes using multi-source remotely sensed data
NASA Astrophysics Data System (ADS)
Pippuri, Inka; Suvanto, Aki; Maltamo, Matti; Korhonen, Kari T.; Pitkänen, Juho; Packalen, Petteri
2016-02-01
The aim of the study was to (1) examine the classification of forest land using airborne laser scanning (ALS) data, satellite images and sample plots of the Finnish National Forest Inventory (NFI) as training data and (2) identify the best-performing metrics for classifying forest land attributes. Six different schemes of forest land classification were studied: land use/land cover (LU/LC) classification using both national classes and FAO (Food and Agriculture Organization of the United Nations) classes, main type, site type, peat land type and drainage status. Of special interest was testing different ALS-based surface metrics in the classification of forest land attributes. Field data consisted of 828 NFI plots collected in 2008-2012 in southern Finland, and remotely sensed data were from summer 2010. Multinomial logistic regression was used as the classification method. Classification of LU/LC classes was highly accurate (kappa values 0.90 and 0.91), and the classification of site type, peat land type and drainage status also succeeded moderately well (kappa values 0.51, 0.69 and 0.52). ALS-based surface metrics were found to be the most important predictor variables in the classification of LU/LC class, main type and drainage status. In the best classification models of forest site types, both spectral metrics from satellite data and point cloud metrics from ALS were used. In turn, in the classification of peat land types, ALS point cloud metrics played the most important role. Results indicated that the prediction of site type and forest land category could be incorporated into the stand-level forest management inventory system in Finland.
Using LDPC Code Constraints to Aid Recovery of Symbol Timing
NASA Technical Reports Server (NTRS)
Jones, Christopher; Villasenor, John; Lee, Dong-U; Valles, Esteban
2008-01-01
A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation of values associated with these nodes. A constraint node represents a parity-check equation using a set of variable nodes as inputs. A valid decoded code word is obtained if all parity-check equations are satisfied. After each iteration, the metrics associated with each constraint node can be evaluated to determine the status of the associated parity check. Heretofore, normally, these metrics would be utilized only within the LDPC decoding process to assess whether or not variable nodes had converged to a codeword. In the present method, it is recognized that these metrics can be used to determine accuracy of the timing estimates used in acquiring the sampled data that constitute the input to the LDPC decoder. In fact, the number of constraints that are satisfied exhibits a peak near the optimal timing estimate. Coarse timing estimation (or first-stage estimation as described below) is found via a parametric search for this peak. The present method calls for a two-stage receiver architecture illustrated in the figure. The first stage would correct large time delays and frequency offsets; the second stage would track random walks and correct residual time and frequency offsets. In the first stage, constraint-node feedback from the LDPC decoder would be employed in a search algorithm in which the searches would be performed in successively narrower windows to find the correct time delay and/or frequency offset.
The second stage would include a conventional first-order PLL with a decision-aided timing-error detector that would utilize, as its decision aid, decoded symbols from the LDPC decoder. The method has been tested by means of computational simulations in cases involving various timing and frequency errors. The results of the simulations showed performance approaching that attained in the ideal case of perfect timing in the receiver.
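The core first-stage idea can be demonstrated in a few lines: hard-decide symbols at each candidate sampling phase and count satisfied parity checks, which tend to peak near the correct phase. The toy code, pulse shape, and noise level below are illustrative assumptions, not the flight receiver design.

    import numpy as np

    H = np.array([[1, 1, 0, 1, 0, 0],        # toy parity-check matrix
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 0, 0, 1, 1]])
    codeword = np.array([1, 1, 1, 0, 0, 1])  # satisfies H @ c mod 2 == 0

    sps, true_offset = 8, 3                  # samples/symbol, unknown delay
    pulse = np.sin(np.pi * (np.arange(sps) + 0.5) / sps)   # half-sine pulse
    wave = np.kron(1.0 - 2.0 * codeword, pulse)            # BPSK waveform
    wave = np.concatenate([np.zeros(true_offset), wave, np.zeros(sps)])
    wave += np.random.default_rng(6).normal(0.0, 0.4, wave.size)

    def satisfied_checks(k):
        """Hard-decide symbols sampled at phase k; count satisfied checks."""
        idx = k + sps * np.arange(len(codeword))
        bits = (wave[idx] < 0).astype(int)
        return int(np.sum((H @ bits) % 2 == 0))

    scores = [satisfied_checks(k) for k in range(sps + true_offset)]
    print("satisfied checks by offset:", scores)
    print("expected peak near mid-symbol sample:", true_offset + sps // 2)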
A New Framework for Characterising Simulated Droughts for Future Climates
NASA Astrophysics Data System (ADS)
Sharma, A.; Rashid, M.; Johnson, F.
2017-12-01
Significant attention has been focussed on metrics for quantifying drought. Lesser attention has been given to the unsuitability of current metrics for quantifying drought in a changing climate, given the clear non-stationarity in potential and actual evapotranspiration well into the future (Asadi-Zarch et al., 2015). This talk presents a new basis for simulating drought designed specifically for use with climate model simulations. Given the known uncertainty of climate model rainfall simulations, along with their inability to represent low-frequency variability attributes, the approach here adopts a predictive model for drought using selected atmospheric indicators. This model is based on a wavelet decomposition of relevant atmospheric predictors to filter out less relevant frequencies and formulate a better characterisation of the drought metric chosen as response. Once ascertained using observed precipitation and associated atmospheric variables, these can be formulated from GCM simulations using a multivariate bias correction tool (Mehrotra and Sharma, 2016) that accounts for low-frequency variability, and a regression tool that accounts for nonlinear dependence (Sharma and Mehrotra, 2014). Use of only the relevant frequencies, as well as the corrected representation of cross-variable dependence, allows greater accuracy in characterising observed drought from GCM simulations. Using simulations from a range of GCMs across Australia, we show here that this new method offers considerable advantages in representing drought compared to traditionally followed alternatives that rely on modelled rainfall instead. References: Asadi Zarch, M. A., B. Sivakumar, and A. Sharma (2015), Droughts in a warming climate: A global assessment of Standardized Precipitation Index (SPI) and Reconnaissance Drought Index (RDI), Journal of Hydrology, 526, 183-195. Mehrotra, R., and A. Sharma (2016), A Multivariate Quantile-Matching Bias Correction Approach with Auto- and Cross-Dependence across Multiple Time Scales: Implications for Downscaling, Journal of Climate, 29(10), 3519-3539. Sharma, A., and R. Mehrotra (2014), An information theoretic alternative to model a natural system using observational information alone, Water Resources Research, 50, 650-660, doi:10.1002/2013WR013845.
SPATIAL VARIABILITY IN POLLUTANTS: IMPLICATIONS FOR EXPOSURE ASSESSMENT
The efforts to evaluate the value of improved exposure metrics on the ability to relate those metrics with outcomes in complex systems have met with varying degrees of success. This work describes the results of recent efforts, mostly involving air pollutants, to improve the sop...
Why "improved" water sources are not always safe.
Shaheed, Ameer; Orgill, Jennifer; Montgomery, Maggie A; Jeuland, Marc A; Brown, Joe
2014-04-01
Existing and proposed metrics for household drinking-water services are intended to measure the availability, safety and accessibility of water sources. However, these attributes can be highly variable over time and space and this variation complicates the task of creating and implementing simple and scalable metrics. In this paper, we highlight those factors - especially those that relate to so-called improved water sources - that contribute to variability in water safety but may not be generally recognized as important by non-experts. Problems in the provision of water in adequate quantities and of adequate quality - interrelated problems that are often influenced by human behaviour - may contribute to an increased risk of poor health. Such risk may be masked by global water metrics that indicate that we are on the way to meeting the world's drinking-water needs. Given the complexity of the topic and current knowledge gaps, international metrics for access to drinking water should be interpreted with great caution. We need further targeted research on the health impacts associated with improvements in drinking-water supplies.
Validation of neural spike sorting algorithms without ground-truth information.
Barnett, Alex H; Magland, Jeremy F; Greengard, Leslie F
2016-05-01
The throughput of electrophysiological recording is growing rapidly, allowing thousands of simultaneous channels, and there is a growing variety of spike sorting algorithms designed to extract neural firing events from such data. This creates an urgent need for standardized, automatic evaluation of the quality of neural units output by such algorithms. We introduce a suite of validation metrics that assess the credibility of a given automatic spike sorting algorithm applied to a given dataset. By rerunning the spike sorter two or more times, the metrics measure stability under various perturbations consistent with variations in the data itself, making no assumptions about the internal workings of the algorithm, and minimal assumptions about the noise. We illustrate the new metrics on standard sorting algorithms applied to both in vivo and ex vivo recordings, including a time series with overlapping spikes. We compare the metrics to existing quality measures, and to ground-truth accuracy in simulated time series. We provide a software implementation. Metrics have until now relied on ground-truth, simulated data, internal algorithm variables (e.g. cluster separation), or refractory violations. By contrast, by standardizing the interface, our metrics assess the reliability of any automatic algorithm without reference to internal variables (e.g. feature space) or physiological criteria. Stability is a prerequisite for reproducibility of results. Such metrics could reduce the significant human labor currently spent on validation, and should form an essential part of large-scale automated spike sorting and systematic benchmarking of algorithms. Copyright © 2016 Elsevier B.V. All rights reserved.
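A minimal sketch of the stability idea follows: cluster the data twice (KMeans stands in for an arbitrary spike sorter), perturb the features slightly on the second run, match units across runs with the Hungarian algorithm, and report per-unit agreement. The perturbation scheme here is an assumption, simpler than those the metrics suite uses.

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)
    features = np.vstack([rng.normal(m, 0.3, size=(200, 2))
                          for m in ([0, 0], [2, 0], [0, 2])])  # 3 synthetic units

    run1 = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
    run2 = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(
        features + rng.normal(0, 0.05, features.shape))        # perturbed rerun

    # Confusion matrix between runs; Hungarian matching maximizes agreement
    conf = np.zeros((3, 3), dtype=int)
    for a, b in zip(run1, run2):
        conf[a, b] += 1
    rows, cols = linear_sum_assignment(-conf)
    per_unit_stability = conf[rows, cols] / np.bincount(run1, minlength=3)
    print("per-unit stability:", np.round(per_unit_stability, 3))

Units whose stability stays high under perturbation are credible; unstable units warrant human review, which is exactly the labor the metrics aim to reduce.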
Using directed information for influence discovery in interconnected dynamical systems
NASA Astrophysics Data System (ADS)
Rao, Arvind; Hero, Alfred O.; States, David J.; Engel, James Douglas
2008-08-01
Structure discovery in non-linear dynamical systems is an important and challenging problem that arises in applications such as computational neuroscience, econometrics, and biological network discovery. Each of these systems has multiple interacting variables, and the key problem is inferring the underlying structure of the system (which variables are connected to which others) from output observations (such as multiple time trajectories of the variables). Since such applications demand the inference of directed relationships among variables in these non-linear systems, current methods that assume a linear structure or that yield undirected variable dependencies are insufficient. Hence, in this work, we present a methodology for structure discovery using an information-theoretic metric called directed time information (DTI). Using both synthetic dynamical systems and real biological datasets (kidney development and T-cell data), we demonstrate the utility of DTI in such problems.
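As a directional toy example (not the paper's DTI estimator, which is more general), the sketch below computes a transfer-entropy-style quantity from discretized series: the reduction in uncertainty about Y's next value gained by conditioning on X's past in addition to Y's own past.

    import numpy as np

    def directed_info(x, y, n_bins=4):
        """Approximate H(Y_t | Y_{t-1}) - H(Y_t | Y_{t-1}, X_{t-1}) in bits."""
        xq = np.digitize(x, np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1]))
        yq = np.digitize(y, np.quantile(y, np.linspace(0, 1, n_bins + 1)[1:-1]))
        yt, yp, xp = yq[1:], yq[:-1], xq[:-1]

        def cond_entropy(target, *conds):
            states = np.stack([target, *conds], axis=1)
            _, joint = np.unique(states, axis=0, return_counts=True)
            _, cond = np.unique(states[:, 1:], axis=0, return_counts=True)
            pj, pc = joint / joint.sum(), cond / cond.sum()
            # H(T | C) = H(T, C) - H(C), plug-in estimate
            return -(pj * np.log2(pj)).sum() + (pc * np.log2(pc)).sum()

        return cond_entropy(yt, yp) - cond_entropy(yt, yp, xp)

    rng = np.random.default_rng(8)
    x = rng.normal(size=2000)
    y = np.zeros(2000)
    for t in range(1, 2000):          # X drives Y with a one-step lag
        y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
    print("X->Y:", directed_info(x, y), " Y->X:", directed_info(y, x))

The asymmetry of the two values (X->Y much larger than Y->X) is what makes such metrics suitable for recovering directed structure.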
Edeani, Francis; Malik, Adeel; Kaul, Ajay
2017-03-01
The Chicago classification was based on metrics derived from studies of asymptomatic adult subjects. Our objectives were to characterize esophageal motility disorders in children and to determine whether the spectrum of manometric findings is similar between the pediatric and adult populations. Studies have suggested that the metrics utilized in manometric diagnosis depend on age, size, and manometric assembly, which would imply that a different set of metrics should be used for the pediatric population. There are no standardized and generally accepted metrics for the pediatric population, though there have been attempts to establish metrics specific to it. Overall, we found that the distribution of esophageal motility disorders in children was similar to that described in adults using the Chicago classification. This analysis will serve as a prequel to follow-up studies exploring the individual metrics for variability among patients, with the objective of establishing novel metrics for the pediatric population.
Distributed Space Mission Design for Earth Observation Using Model-Based Performance Evaluation
NASA Technical Reports Server (NTRS)
Nag, Sreeja; LeMoigne-Stewart, Jacqueline; Cervantes, Ben; DeWeck, Oliver
2015-01-01
Distributed Space Missions (DSMs) are gaining momentum in their application to earth observation missions owing to their unique ability to increase observation sampling in multiple dimensions. DSM design is a complex problem with many design variables, multiple objectives determining performance and cost, and emergent, often unexpected, behaviors. There are very few open-access tools available to explore the tradespace of variables, minimize cost and maximize performance for pre-defined science goals, and thereby select the most optimal design. This paper presents a software tool that can generate multiple DSM architectures based on pre-defined design variable ranges and size those architectures in terms of pre-defined science and cost metrics. The tool will help a user select Pareto-optimal DSM designs based on design-of-experiments techniques. The tool will be applied to some earth observation examples to demonstrate its applicability in making key decisions between different performance and cost metrics early in the design lifecycle.
Gravitational radiation from a cylindrical naked singularity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakao, Ken-ichi; Morisawa, Yoshiyuki
We construct an approximate solution which describes the gravitational emission from a naked singularity formed by the gravitational collapse of a cylindrical thick shell composed of dust. The assumed situation is that the collapsing speed of the dust is very large. In this situation, the metric variables are obtained approximately by a kind of linear perturbation analysis on the background Morgan solution, which describes the motion of cylindrical null dust. The most important problem in this study is what boundary conditions for the metric and matter variables should be imposed at the naked singularity. We find a boundary condition such that all the metric and matter variables are everywhere finite at least up to the first-order approximation. This implies that the spacetime singularity formed by this high-speed dust collapse is very similar to that formed by null dust, and the final singularity will be a conical one. The Weyl curvature is completely released from the collapsed dust.
Quantum properties of affine-metric gravity with the cosmological term
NASA Astrophysics Data System (ADS)
Baurov, A. Yu; Pronin, P. I.; Stepanyantz, K. V.
2018-04-01
The paper contains an analysis of the one-loop effective action for affine-metric gravity of the Hilbert–Einstein type with the cosmological term. We discuss different approaches to the calculation of the effective action, which depends on two independent variables, namely, the metric tensor and the affine connection. In the one-loop approximation we explain how the effective action can be obtained if, at the first step of the calculation, the metric tensor is integrated out. It is demonstrated that the result is the same as in the case when one starts by integrating out the connection.
Unbiased Estimation of Refractive State of Aberrated Eyes
Martin, Jesson; Vasudevan, Balamurali; Himebaugh, Nikole; Bradley, Arthur; Thibos, Larry
2011-01-01
To identify unbiased methods for estimating the target vergence required to maximize visual acuity based on wavefront aberration measurements. Experiments were designed to minimize the impact of confounding factors that have hampered previous research. Objective wavefront refractions and subjective acuity refractions were obtained for the same monochromatic wavelength. Accommodation and pupil fluctuations were eliminated by cycloplegia. Unbiased subjective refractions that maximize visual acuity for high-contrast letters were performed with a computer-controlled forced-choice staircase procedure, using 0.125 diopter steps of defocus. All experiments were performed for two pupil diameters (3 mm and 6 mm). As reported in the literature, subjective refractive error does not change appreciably when the pupil dilates. For 3 mm pupils most metrics yielded objective refractions that were about 0.1 D more hyperopic than subjective acuity refractions. When pupil diameter increased to 6 mm, this bias changed in the myopic direction and the variability between metrics also increased. These inaccuracies were small compared to the precision of the measurements, which implies that most metrics provided unbiased estimates of refractive state for medium and large pupils. A variety of image quality metrics may be used to determine ocular refractive state for monochromatic (635 nm) light, thereby achieving accurate results without the need for empirical correction factors. PMID:21777601
Resilience Metrics for the Electric Power System: A Performance-Based Approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vugrin, Eric D.; Castillo, Andrea R; Silva-Monroy, Cesar Augusto
Grid resilience is a concept related to a power system's ability to continue operating and delivering power even in the event that low-probability, high-consequence disruptions such as hurricanes, earthquakes, and cyber-attacks occur. Grid resilience objectives focus on managing and, ideally, minimizing potential consequences that occur as a result of these disruptions. Currently, no formal grid resilience definitions, metrics, or analysis methods have been universally accepted. This document describes an effort to develop and describe grid resilience metrics and analysis methods. The metrics and methods described herein extend upon the Resilience Analysis Process (RAP) developed by Watson et al. for the 2015 Quadrennial Energy Review. The extension allows both outputs from system models and historical data to serve as the basis for creating grid resilience metrics and informing grid resilience planning and response decision-making. Demonstration of the metrics and methods is shown through a set of illustrative use cases.
Mykrä, Heikki; Heino, Jani; Muotka, Timo
2004-09-01
Streams are naturally hierarchical systems, and their biota are affected by factors effective at regional to local scales. However, there have been only a few attempts to quantify variation in ecological attributes across multiple spatial scales. We examined the variation in several macroinvertebrate metrics and environmental variables at three hierarchical scales (ecoregions, drainage systems, streams) in boreal headwater streams. In nested analyses of variance, significant spatial variability was observed for most of the macroinvertebrate metrics and environmental variables examined. For most metrics, ecoregions explained more variation than did drainage systems. There was, however, much variation attributable to residuals, suggesting high among-stream variation in macroinvertebrate assemblage characteristics. Nonmetric multidimensional scaling (NMDS) and multiresponse permutation procedure (MRPP) showed that assemblage composition differed significantly among both drainage systems and ecoregions. The associated R-statistics were, however, very low, indicating wide variation among sites within the defined landscape classifications. Regional delineations explained most of the variation in stream water chemistry, ecoregions being clearly more influential than drainage systems. For physical habitat characteristics, by contrast, the among-stream component was the major source of variation. Distinct differences attributable to stream size were observed for several metrics, especially total number of taxa and abundance of algae-scraping invertebrates. Although ecoregions clearly account for a considerable amount of variation in macroinvertebrate assemblage characteristics, we suggest that a three-tiered classification system (stratification through ecoregion and habitat type, followed by assemblage prediction within these ecologically meaningful units) will be needed for effective bioassessment of boreal running waters.
Sensitivity of intermittent streams to climate variations in the United States
NASA Astrophysics Data System (ADS)
Eng, K.
2015-12-01
There is growing interest in the effects of climate change on streamflows because of the potential negative effects on aquatic biota and water supplies. Previous studies of climate controls on flows have primarily focused on perennial streams, and few studies have examined the effect of climate variability on intermittent streams. Our objectives in this study were to (1) identify regions showing similar patterns of intermittency, and (2) evaluate the sensitivity of intermittent streams to historical variability in climate in the United States. This study was carried out at 265 intermittent streams by evaluating: (1) correlations among time series of flow metrics (number of zero-flow events, the average of the central 50% and largest 10% of flows) with precipitation (magnitudes, durations and intensity) and temperature, and (2) decadal changes in the seasonality and long-term trends of these flow metrics. Results identified five distinct seasonal patterns of flow intermittency: fall, fall-to-winter, non-seasonal, summer, and summer-to-winter intermittent streams. In addition, strong associations between the low-flow metrics and historical climate variability were found. However, the lack of trends in historical variations in precipitation results in no significant seasonal shifts or decade-to-decade trends in the low-flow metrics over the period of record (1950 to 2013).
Micacchion, Mick; Stapanian, Martin A.; Adams, Jean V.
2015-01-01
We determined the best predictors of an index of amphibian biotic integrity calculated from 54 shrub and forested wetlands in Ohio, USA using a two-step sequential holdout validation procedure. We considered 13 variables as predictors: four metrics of wetland condition from the Ohio Rapid Assessment Method (ORAM), a wetland vegetation index of biotic integrity, and eight metrics from a landscape disturbance index. For all iterations, the best model included the single ORAM metric that assesses habitat alteration, substrate disturbance, and habitat development within a wetland. Our results align with results of similar studies that have associated high scores for wetland vegetation indices of biotic integrity with low habitat alteration and substrate disturbance within wetlands. Thus, implementing similar management practices (e.g., not removing downed woody debris, retaining natural morphological features, decreasing nutrient input from surrounding agricultural lands) could concurrently increase ecological integrity of both plant and amphibian communities in a wetland. Further, our results have the unexpected effect of making progress toward a more unifying theory of ecological indices.
Extension of loop quantum gravity to f(R) theories.
Zhang, Xiangdong; Ma, Yongge
2011-04-29
The four-dimensional metric f(R) theories of gravity are cast into connection-dynamical formalism with real su(2) connections as configuration variables. Through this formalism, the classical metric f(R) theories are quantized by extending the loop quantization scheme of general relativity. Our results imply that the nonperturbative quantization procedure of loop quantum gravity is valid not only for general relativity but also for a rather general class of four-dimensional metric theories of gravity.
Bruce, James F.
2002-01-01
The Fountain Creek Basin in and around Colorado Springs, Colorado, is affected by various land- and water-use activities. Biological, hydrological, water-quality, and land-use data were collected at 10 sites in the Fountain Creek Basin from April 1998 through April 2001 to provide a baseline characterization of macroinvertebrate communities and habitat conditions for comparison in subsequent studies, and to assess variation in macroinvertebrate community structure relative to habitat quality. Analysis of variance results indicated that instream and riparian variables were not affected by season, but significant differences were found among sites. Nine metrics were used to describe and evaluate macroinvertebrate community structure. Statistical analysis indicated that for six of the nine metrics, significant variability occurred between spring and fall seasons for 60 percent of the sites. Cluster analysis (unweighted pair group method average) using macroinvertebrate presence-absence data showed a well-defined separation between spring and fall samples. Six of the nine metrics had significant spatial variation. Cluster analysis using Sorenson's Coefficient of Community values computed from macroinvertebrate density (number of organisms per square meter) data showed that macroinvertebrate community structure was more similar among tributary sites than main-stem sites. Canonical correspondence analysis identified a substrate particle-size gradient from site-specific species-abundance data and environmental correlates that reduced the 10 sites to 5 site clusters and their associated taxa.
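For reference, Sorensen's Coefficient of Community in its presence-absence form is 2C/(A+B), where C is the number of shared taxa and A and B are the taxa counts at the two sites. A minimal sketch follows, with an abundance-based variant of the kind that can be computed from density data; the study's exact computation may differ:

```python
import numpy as np

def sorensen_presence(a, b):
    """Sorensen's coefficient from presence-absence vectors (True/False)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    shared = np.sum(a & b)
    return 2.0 * shared / (a.sum() + b.sum())

def sorensen_abundance(a, b):
    """Abundance-based form (equivalent to 1 - Bray-Curtis dissimilarity),
    here applied to densities (organisms per square meter)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 2.0 * np.minimum(a, b).sum() / (a.sum() + b.sum())

# Two sites' densities for five taxa (hypothetical values)
site1 = np.array([120.0, 0.0, 35.0, 4.0, 60.0])
site2 = np.array([80.0, 10.0, 0.0, 6.0, 55.0])
print(sorensen_presence(site1 > 0, site2 > 0))
print(sorensen_abundance(site1, site2))
```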
NASA Astrophysics Data System (ADS)
Roelofs, Freek; Johnson, Michael D.; Shiokawa, Hotaka; Doeleman, Sheperd S.; Falcke, Heino
2017-09-01
General relativistic magnetohydrodynamic (GRMHD) simulations of accretion disks and jets associated with supermassive black holes show variability on a wide range of timescales. On timescales comparable to or longer than the gravitational timescale t_G = GM/c^3, variation may be dominated by orbital dynamics of the inhomogeneous accretion flow. Turbulent evolution within the accretion disk is expected on timescales comparable to the orbital period, typically an order of magnitude larger than t_G. For Sgr A*, t_G is much shorter than the typical duration of a VLBI experiment, enabling us to study this variability within a single observation. Closure phases, the sum of interferometric visibility phases on a triangle of baselines, are particularly useful for studying this variability. In addition to a changing source structure, variations in observed closure phase can also be due to interstellar scattering, thermal noise, and the changing geometry of projected baselines over time due to Earth rotation. We present a metric that is able to distinguish the latter two from intrinsic or scattering variability. This metric is validated using synthetic observations of GRMHD simulations of Sgr A*. When applied to existing multi-epoch EHT data of Sgr A*, this metric shows that the data are most consistent with source models containing intrinsic variability from source dynamics, interstellar scattering, or a combination of those. The effects of black hole inclination, orientation, spin, and morphology (disk or jet) on the expected closure phase variability are also discussed.
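A closure phase is the sum of visibility phases around a triangle of baselines; station-based phase errors cancel on the closed loop, which is why the quantity is robust. The toy sketch below illustrates the definition and that cancellation; it is not the paper's variability metric:

```python
import numpy as np

def closure_phase(phi_12, phi_23, phi_31):
    """Closure phase (radians) on a baseline triangle, wrapped to (-pi, pi]."""
    return np.angle(np.exp(1j * (phi_12 + phi_23 + phi_31)))

# Station-based phase errors enter baselines as e_i - e_j and cancel
# around the closed triangle:
rng = np.random.default_rng(1)
phi_true = rng.uniform(-np.pi, np.pi, 3)   # intrinsic baseline phases
err = rng.uniform(-np.pi, np.pi, 3)        # per-station phase errors
phi_obs = np.array([phi_true[0] + err[0] - err[1],
                    phi_true[1] + err[1] - err[2],
                    phi_true[2] + err[2] - err[0]])
print(np.isclose(closure_phase(*phi_obs), closure_phase(*phi_true)))  # True
```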
Quadratic time dependent Hamiltonians and separation of variables
NASA Astrophysics Data System (ADS)
Anzaldo-Meneses, A.
2017-06-01
Time dependent quantum problems defined by quadratic Hamiltonians are solved using canonical transformations. The Green's function is obtained and a comparison with the classical Hamilton-Jacobi method leads to important geometrical insights like exterior differential systems, Monge cones and time dependent Gaussian metrics. The Wei-Norman approach is applied using unitary transformations defined in terms of generators of the associated Lie groups, here the semi-direct product of the Heisenberg group and the symplectic group. A new explicit relation for the unitary transformations is given in terms of a finite product of elementary transformations. The sequential application of adequate sets of unitary transformations leads naturally to a new separation of variables method for time dependent Hamiltonians, which is shown to be related to the Inönü-Wigner contraction of Lie groups. The new method allows also a better understanding of interacting particles or coupled modes and opens an alternative way to analyze topological phases in driven systems.
NASA Astrophysics Data System (ADS)
Stisen, S.; Demirel, C.; Koch, J.
2017-12-01
Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. The hydrological modelling community has a comprehensive and well-tested toolbox of metrics to assess temporal model performance. By contrast, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study aims at making a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are tested in a spatial-pattern-oriented model calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three SPAEF components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics, which allow the comparison of variables that are related but may differ in unit, in order to optimally exploit spatial observations made available by remote sensing platforms. We see great potential for SPAEF across environmental disciplines dealing with spatially distributed modelling.
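For readers unfamiliar with the metric described here (and in the companion abstract below), a minimal sketch of the published SPAEF formulation follows: correlation, ratio of coefficients of variation, and histogram overlap of the z-scored fields, combined as a Euclidean distance from the ideal point. The binning choices below are ours:

```python
import numpy as np

def spaef(obs, sim, nbins=100):
    """SPAtial EFficiency metric (a sketch of the published formulation).

    alpha: Pearson correlation of the two patterns
    beta : ratio of coefficients of variation (bias-insensitive spread)
    gamma: histogram intersection of the z-scored fields
    SPAEF = 1 - sqrt((alpha-1)^2 + (beta-1)^2 + (gamma-1)^2); ideal value 1.
    """
    obs, sim = np.ravel(obs), np.ravel(sim)
    alpha = np.corrcoef(obs, sim)[0, 1]
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    z_obs = (obs - obs.mean()) / obs.std()
    z_sim = (sim - sim.mean()) / sim.std()
    lo, hi = min(z_obs.min(), z_sim.min()), max(z_obs.max(), z_sim.max())
    h_obs, _ = np.histogram(z_obs, bins=nbins, range=(lo, hi))
    h_sim, _ = np.histogram(z_sim, bins=nbins, range=(lo, hi))
    gamma = np.minimum(h_obs, h_sim).sum() / h_obs.sum()
    return 1.0 - np.sqrt((alpha - 1)**2 + (beta - 1)**2 + (gamma - 1)**2)

# Toy example: a biased but well-correlated simulated field
rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 1.0, size=(50, 50))
sim = 0.8 * obs + rng.normal(0, 0.3, size=(50, 50))
print(spaef(obs, sim))
```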
NASA Astrophysics Data System (ADS)
Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon
2018-05-01
The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the wide availability of spatial observations and the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous in order to achieve the complex task of comparing spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics which further allow for a comparison of variables which are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
NASA Astrophysics Data System (ADS)
Dhungel, S.; Barber, M. E.
2016-12-01
The objectives of this paper are to use an automated satellite-based remote sensing evapotranspiration (ET) model to assist in the parameterization of a cropping system model (CropSyst) and to examine the variability of consumptive water use of various crops across the watershed. The remote sensing model is a modified version of the Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC™) energy balance model. We present the application of an automated Python-based implementation of METRIC to estimate ET as consumptive water use for agricultural areas in three watersheds in Eastern Washington: Walla Walla, Lower Yakima and Okanogan. We used these ET maps with USDA crop data to identify the variability of crop growth and water use for the major crops in these three watersheds. Some crops, such as grapes and alfalfa, showed high variability in water use in the watershed while others, such as corn, had comparatively less variability. The results helped us to estimate the range and variability of various crop parameters that are used in CropSyst. The paper also presents a systematic approach to estimating parameters of CropSyst for a crop in a watershed using METRIC results. Our initial application of this approach was used to estimate the irrigation application rate for CropSyst for a selected farm in Walla Walla and was validated by comparing crop growth (as Leaf Area Index, LAI) and consumptive water use (ET) from METRIC and CropSyst. This coupling of METRIC with CropSyst will allow for more robust parameters in CropSyst and will enable accurate predictions of changes in irrigation practices and crop rotation, which are a challenge in many cropping system models.
Exposure error in studies of ambient air pollution and health that use city-wide measures of exposure may be substantial for pollutants that exhibit spatiotemporal variability. Alternative spatiotemporal metrics of exposure for traffic-related and regional pollutants were applied...
Previous studies have reported that lower-income and minority populations are more likely to live near major roads. This study quantifies associations between socioeconomic status, racial/ethnic variables, and traffic-related exposure metrics for the United States. Using geograph...
Zhang, Kai; Li, Yun; Schwartz, Joel D.; O'Neill, Marie S.
2014-01-01
Hot weather increases risk of mortality. Previous studies used different sets of weather variables to characterize heat stress, resulting in variation in heat–mortality associations depending on the metric used. We employed a statistical learning method – random forests – to examine which of various weather variables had the greatest impact on heat-related mortality. We compiled a summertime daily weather and mortality counts dataset from four U.S. cities (Chicago, IL; Detroit, MI; Philadelphia, PA; and Phoenix, AZ) from 1998 to 2006. A variety of weather variables were ranked in predicting deviation from typical daily all-cause and cause-specific death counts. Ranks of weather variables varied with city and health outcome. Apparent temperature appeared to be the most important predictor of all-cause heat-related mortality. Absolute humidity was, on average, the variable most frequently selected among the top predictors for all-cause mortality and seven cause-specific mortality categories. Our analysis affirms that apparent temperature is a reasonable variable for activating heat alerts and warnings, which are commonly based on predictions of total mortality in the next few days. Additionally, absolute humidity should be included in future heat-health studies. Finally, random forests can be used to guide the choice of weather variables in heat epidemiology studies. PMID:24834832
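A hedged sketch of the general approach, ranking weather variables by random forest feature importance; the variable names and the data-generating model below are illustrative, not the study's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 800  # hypothetical daily records pooled over cities and summers
X = np.column_stack([
    rng.normal(30, 5, n),    # apparent temperature (deg C)
    rng.normal(15, 4, n),    # absolute humidity (g/m^3)
    rng.normal(28, 5, n),    # dry-bulb air temperature (deg C)
    rng.normal(60, 15, n),   # relative humidity (%)
])
names = ["apparent_temp", "abs_humidity", "air_temp", "rel_humidity"]
# Synthetic response: deviation from typical daily death counts, driven
# most strongly by apparent temperature in this toy example.
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 3, n)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:15s} {imp:.3f}")
```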
Comparison of stream invertebrate response models for bioassessment metric
Waite, Ian R.; Kennen, Jonathan G.; May, Jason T.; Brown, Larry R.; Cuffney, Thomas F.; Jones, Kimberly A.; Orlando, James L.
2012-01-01
We aggregated invertebrate data from various sources to assemble data for modeling in two ecoregions in Oregon and one in California. Our goal was to compare the performance of models developed using multiple linear regression (MLR) techniques with models developed using three relatively new techniques: classification and regression trees (CART), random forest (RF), and boosted regression trees (BRT). We used tolerance of taxa based on richness (RICHTOL) and the ratio of observed to expected taxa (O/E) as response variables and land use/land cover as explanatory variables. Responses were generally linear; therefore, there was little improvement over the MLR models when using CART and RF. In general, the four modeling techniques (MLR, CART, RF, and BRT) consistently selected the same primary explanatory variables for each region. However, the BRT models showed significant improvement over the MLR models for each region, with increases in R2 of 0.09 to 0.20. The O/E metric, derived from models specifically calibrated for Oregon, consistently had lower R2 values than RICHTOL for the two regions tested: modeled O/E R2 values were between 0.06 and 0.10 lower for each of the four modeling methods applied in the Willamette Valley and between 0.19 and 0.36 lower for the Blue Mountains. As a result, BRT models may indeed represent a good alternative to MLR for modeling species distributions relative to environmental variables.
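A schematic comparison of MLR and BRT of the kind described, on synthetic data; the predictors and response here are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 300
X = rng.uniform(0, 100, size=(n, 3))   # e.g., % urban, % agriculture, % forest
# Synthetic tolerance-style response with a mildly nonlinear term:
y = 0.5 * X[:, 0] + 0.1 * X[:, 1] ** 1.2 + rng.normal(0, 8, n)

mlr = LinearRegression()
brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01, max_depth=2)
print("MLR R2:", cross_val_score(mlr, X, y, cv=5, scoring="r2").mean())
print("BRT R2:", cross_val_score(brt, X, y, cv=5, scoring="r2").mean())
```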
Caskey, Brian J.; Frey, Jeffrey W.; Lowe, B. Scott
2007-01-01
Data were gathered from May through September 2002 at 76 randomly selected sites in the Whitewater River and East Fork White River Basins, Indiana, for algal biomass, habitat, nutrients, and biological communities (fish and invertebrates). Basin characteristics (land use and drainage area) and biological-community attributes and metric scores were determined for the basin of each sampling site. Yearly Principal Components Analysis site scores were calculated for algal biomass (periphyton and seston). The yearly Principal Components Analysis site scores for the first axis (PC1) were related using Spearman's rho to the seasonal algal-biomass, basin-characteristics, habitat, seasonal nutrient, and biological-community attribute and metric score data. The periphyton PC1 site score was not significantly related to the nine habitat or 12 nutrient variables examined. One land-use variable, drainage area, was negatively related to the periphyton PC1. Of the 43 fish-community attributes and metrics examined, the periphyton PC1 was negatively related to one attribute (large-river percent) and one metric score (carnivore percent metric score). It was positively related to three fish-community attributes (headwater percent, pioneer percent, and simple lithophil percent). The periphyton PC1 was not statistically related to any of the 21 invertebrate-community attributes or metric scores examined. Of the 12 nutrient variables examined, two were negatively related to the seston PC1 site score in two seasons: total Kjeldahl nitrogen (July and September) and TP (May and September). There were no statistically significant relations between the seston PC1 and the five basin-characteristics or nine habitat variables examined. Of the 43 fish-community attributes and metrics examined, the seston PC1 was positively related to one attribute (headwater percent) and negatively related to one metric score (large-river percent metric score). Of the 21 invertebrate-community attributes and metrics examined, the seston PC1 was negatively related to one metric score (number of individuals metric score). To understand how the choice of sampling sites might have affected the results, an analysis of the drainage area and land use was done. The sites selected in the Whitewater River Basin generally had small drainage basins; compared to Whitewater River Basin sites, the sites selected in the East Fork White River Basin generally had larger drainage basins. Although both basins were dominated by agricultural land use, the Whitewater River Basin sites had more land in agriculture than the East Fork White River Basin sites. The values for nutrients (nitrate, total Kjeldahl nitrogen, total nitrogen, and total phosphorus) and chlorophyll a (periphyton and seston) were compared to published U.S. Environmental Protection Agency (USEPA) values for Aggregate Nutrient Ecoregions VI and IX and USEPA Level III Ecoregions 55 and 71. Several nutrient values were greater than the 25th percentile of published USEPA values. Chlorophyll a (periphyton and seston) values were either greater than the 25th percentile of published USEPA values or they extended data ranges in the Aggregate Nutrient and Level III Ecoregions. If the values for the 25th percentile as proposed by the USEPA were adopted as nutrient water-quality criteria, many samples in the Whitewater River and East Fork White River Basins would have exceeded the criteria.
González-Ferreiro, Eduardo; Arellano-Pérez, Stéfano; Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Álvarez-González, Juan Gabriel; Ruiz-González, Ana Daría
2017-01-01
The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate for the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine, respectively, whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard.
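A minimal sketch of the first modelling step, fitting a Weibull density scaled by total load to a vertical fuel profile; the bin layout, parameterization and data below are our assumptions, not the paper's fitted system of models:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import weibull_min

def fuel_profile(h, c, scale, total):
    """Vertical fuel density (kg/m^3): Weibull pdf scaled by total load."""
    return total * weibull_min.pdf(h, c, loc=0.0, scale=scale)

# Hypothetical profile: canopy fuel density observed in 1 m height bins
h = np.arange(1.0, 16.0)  # bin mid-heights (m)
rng = np.random.default_rng(3)
obs = fuel_profile(h, 3.0, 8.0, 1.2) + rng.normal(0, 0.005, h.size)

(c, scale, total), _ = curve_fit(fuel_profile, h, obs, p0=(2.0, 6.0, 1.0))
print(f"shape={c:.2f}, scale={scale:.2f} m, total load={total:.2f} kg/m^2")
```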
Miller, Matthew P.; Kennen, Jonathan G.; Mabe, Jeffrey A.; Mize, Scott V.
2012-01-01
Site-specific temporal trends in algae, benthic invertebrate, and fish assemblages were investigated in 15 streams and rivers draining basins of varying land use in the south-central United States from 1993–2007. A multivariate approach was used to identify sites with statistically significant trends in aquatic assemblages which were then tested for correlations with assemblage metrics and abiotic environmental variables (climate, water quality, streamflow, and physical habitat). Significant temporal trends in one or more of the aquatic assemblages were identified at more than half (eight of 15) of the streams in the study. Assemblage metrics and abiotic environmental variables found to be significantly correlated with aquatic assemblages differed between land use categories. For example, algal assemblages at undeveloped sites were associated with physical habitat, while algal assemblages at more anthropogenically altered sites (agricultural and urban) were associated with nutrient and streamflow metrics. In urban stream sites results indicate that streamflow metrics may act as important controls on water quality conditions, as represented by aquatic assemblage metrics. The site-specific identification of biotic trends and abiotic–biotic relations presented here will provide valuable information that can inform interpretation of continued monitoring data and the design of future studies. In addition, the subsets of abiotic variables identified as potentially important drivers of change in aquatic assemblages provide policy makers and resource managers with information that will assist in the design and implementation of monitoring programs aimed at the protection of aquatic resources.
Edla, Shwetha; Reisner, Andrew T; Liu, Jianbo; Convertino, Victor A; Carter, Robert; Reifman, Jaques
2015-02-01
During initial assessment of trauma patients, metrics of heart rate variability (HRV) have been associated with high-risk clinical conditions. Yet, despite numerous studies, the potential of HRV to improve clinical outcomes remains unclear. Our objective was to evaluate whether HRV metrics provide additional diagnostic information, beyond routine vital signs, for making a specific clinical assessment: identification of hemorrhaging patients who receive packed red blood cell (PRBC) transfusion. Adult prehospital trauma patients were analyzed retrospectively, excluding those who lacked a complete set of reliable vital signs and a clean electrocardiogram for computation of HRV metrics. We also excluded patients who did not survive to admission. The primary outcome was hemorrhagic injury plus different PRBC transfusion volumes. We performed multivariate regression analysis using HRV metrics and routine vital signs to test the hypothesis that HRV metrics could improve the diagnosis of hemorrhagic injury plus PRBC transfusion vs routine vital signs alone. As univariate predictors, HRV metrics in a data set of 402 subjects had comparable areas under receiver operating characteristic curves compared with routine vital signs. In multivariate regression models containing routine vital signs, HRV parameters were significant (P<.05) but yielded areas under receiver operating characteristic curves with minimal, nonsignificant improvements (+0.00 to +0.05). A novel diagnostic test should improve diagnostic thinking and allow for better decision making in a significant fraction of cases. Our findings do not support that HRV metrics add value over routine vital signs in terms of prehospital identification of hemorrhaging patients who receive PRBC transfusion.
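A schematic of the comparison performed, fitting logistic models with and without HRV predictors and comparing areas under the ROC curve; all data and coefficients below are synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n = 402                            # matches the study's sample size; data are synthetic
vitals = rng.normal(size=(n, 3))   # e.g., heart rate, systolic BP, respiratory rate
hrv = 0.6 * vitals[:, :2] + rng.normal(size=(n, 2))  # HRV partly redundant with vitals
logit = vitals @ [1.2, -0.8, 0.5] + 0.1 * hrv.sum(axis=1)
y = rng.random(n) < 1 / (1 + np.exp(-logit))         # outcome: transfusion yes/no

Xtr, Xte, ytr, yte = train_test_split(np.hstack([vitals, hrv]), y, random_state=0)
auc_vitals = roc_auc_score(
    yte, LogisticRegression().fit(Xtr[:, :3], ytr).predict_proba(Xte[:, :3])[:, 1])
auc_full = roc_auc_score(
    yte, LogisticRegression().fit(Xtr, ytr).predict_proba(Xte)[:, 1])
print(f"AUC vitals only: {auc_vitals:.3f}   AUC vitals + HRV: {auc_full:.3f}")
```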
Electromagnetic Metrics of Mental Workload.
1987-09-01
...anxiety). A decrease in performance accuracy has been used in the context of overload; however, it has also been associated with a "high workload". ... heart rate variability (HRV) to refer to any variation from a constant heart rate. The term HRV shall not refer to any specific method of numerically ... in using HRV as a measure of mental load arose after Kalsbeek & Ettema (1963) reported that HRV was "gradually suppressed when increasing the" ...
NASA Astrophysics Data System (ADS)
Buzan, J. R.; Huber, M.
2014-12-01
We present the new climatic tool HumanIndexMod (HIM) for quantitatively assessing key climatic variables that are critical for decision making. The HIM calculates 9 different heat stress and 4 moist thermodynamic quantities using meteorological inputs of T, P, and Q. These heat stress metrics are commonly used throughout the world. We present new methods for integrating and standardizing practices for applying these metrics with the latest Earth system models. We implemented the HIM into CLM4.5, a component of CESM maintained by NCAR. These heat stress metrics cover philosophical approaches of comfort, physiology, and empirically based algorithms. The metrics are directly connected to the Urban, Canopy, Bare Ground, and Lake modules to differentiate distinct regimes within each grid cell. The module calculates the instantaneous moisture-temperature covariance at every model time step and in every land surface type, capturing all aspects of non-linearity. The HIM uses the most accurate and computationally efficient moist thermodynamic algorithms available. Additionally, we show ways that the HIM may be effectively integrated into climate modeling and observations. The module is flexible: the user may decide which metrics to call, and an offline version of the HIM is available for use with weather and climate datasets. Examples include using high temporal resolution CMIP5 archive data, local weather station data, and weather and forecasting models. To provide comprehensive standards for applying the HIM to climate data, we executed a CLM4.5 simulation using the RCP8.5 boundary conditions. Preliminary results show that moist thermodynamic and heat stress quantities have smaller variability in the extremes than T (both at the 95th percentile). Additionally, the magnitude of the moist thermodynamic changes over land is similar to sea surface temperature changes. Comparing the early and late 21st century, the metrics show that many portions of the world switch from moderate heat stress for the top 2 weeks of a year to severe heat stress for the top 2 weeks of a year. These changes are reflected in livestock (THI); evaporative cooling (SWMP80) and air-conditioning; and industrial, military, and athletic heat stress (sWBGT, DI, HI, etc.).
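As an example of one commonly used metric of the kind HIM computes, here is a sketch of the simplified wet-bulb globe temperature (sWBGT) using a widely cited approximation; this is our illustration, not the module's source code:

```python
import numpy as np

def swbgt(t_air, rel_hum):
    """Simplified wet-bulb globe temperature (deg C).

    Uses the common approximation sWBGT = 0.567*T + 0.393*e + 3.94, with
    vapour pressure e (hPa) estimated from air temperature T (deg C) and
    relative humidity (%). One of several heat stress metrics of the kind
    computed by HumanIndexMod; this sketch is ours, not the module's code.
    """
    e = (rel_hum / 100.0) * 6.105 * np.exp(17.27 * t_air / (237.7 + t_air))
    return 0.567 * t_air + 0.393 * e + 3.94

print(swbgt(35.0, 60.0))  # a hot, humid afternoon
```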
Weykamp, Cas; John, Garry; Gillery, Philippe; English, Emma; Ji, Linong; Lenters-Westra, Erna; Little, Randie R.; Roglic, Gojka; Sacks, David B.; Takei, Izumi
2016-01-01
Background: A major objective of the IFCC Task Force on implementation of HbA1c standardization is to develop a model to define quality targets for HbA1c. Methods: Two generic models, the Biological Variation and Sigma-metrics models, are investigated. Variables in the models were selected for HbA1c, and data from EQA/PT programs were used to evaluate the suitability of the models for setting and evaluating quality targets within and between laboratories. Results: In the Biological Variation model, 48% of individual laboratories and none of the 26 instrument groups met the minimum performance criterion. In the Sigma-metrics model, with the total allowable error (TAE) set at 5 mmol/mol (0.46% NGSP), 77% of the individual laboratories and 12 of 26 instrument groups met the 2 sigma criterion. Conclusion: The Biological Variation and Sigma-metrics models were demonstrated to be suitable for setting and evaluating quality targets within and between laboratories. The Sigma-metrics model is more flexible, as both the TAE and the risk of failure can be adjusted to requirements related to, e.g., use for diagnosis/monitoring or requirements of (inter)national authorities. With the aim of reaching international consensus on advice regarding quality targets for HbA1c, the Task Force suggests the Sigma-metrics model as the model of choice, with default values of 5 mmol/mol (0.46%) for TAE and risk levels of 2 and 4 sigma for routine laboratories and laboratories performing clinical trials, respectively. These goals should serve as a starting point for discussion with international stakeholders in the field of diabetes. PMID:25737535
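For context, the sigma metric underlying this model is conventionally computed as (allowable total error - |bias|) / imprecision at the decision level. A minimal sketch with hypothetical laboratory values:

```python
def sigma_metric(tae, bias, cv):
    """Sigma-metric for an assay: (allowable total error - |bias|) / imprecision.
    All arguments must be in the same units (here mmol/mol HbA1c at the
    decision level)."""
    return (tae - abs(bias)) / cv

# Task Force default TAE = 5 mmol/mol; bias and SD below are hypothetical:
print(sigma_metric(tae=5.0, bias=0.8, cv=1.5))  # 2.8 sigma -> short of 4 sigma
```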
Aircraft Conceptual Design and Risk Analysis Using Physics-Based Noise Prediction
NASA Technical Reports Server (NTRS)
Olson, Erik D.; Mavris, Dimitri N.
2006-01-01
An approach was developed which allows for design studies of commercial aircraft using physics-based noise analysis methods while retaining the ability to perform the rapid trade-off and risk analysis studies needed at the conceptual design stage. A prototype integrated analysis process was created for computing the total aircraft EPNL at the Federal Aviation Regulations Part 36 certification measurement locations using physics-based methods for fan rotor-stator interaction tones and jet mixing noise. The methodology was then used in combination with design of experiments to create response surface equations (RSEs) for the engine and aircraft performance metrics, geometric constraints and take-off and landing noise levels. In addition, Monte Carlo analysis was used to assess the expected variability of the metrics under the influence of uncertainty, and to determine how the variability is affected by the choice of engine cycle. Finally, the RSEs were used to conduct a series of proof-of-concept conceptual-level design studies demonstrating the utility of the approach. The study found that a key advantage to using physics-based analysis during conceptual design lies in the ability to assess the benefits of new technologies as a function of the design to which they are applied. The greatest difficulty in implementing physics-based analysis proved to be the generation of design geometry at a sufficient level of detail for high-fidelity analysis.
Differential invariants and exact solutions of the Einstein equations
NASA Astrophysics Data System (ADS)
Lychagin, Valentin; Yumaguzhin, Valeriy
2017-06-01
In this paper (cf. Lychagin and Yumaguzhin, in Anal Math Phys, 2016) a class of totally geodesic solutions for the vacuum Einstein equations is introduced. It consists of Einstein metrics of signature (1,3) such that the 2-dimensional distributions defined by the Weyl tensor are completely integrable and totally geodesic. A complete and explicit description of the metrics from this class is given. It is shown that these metrics depend on two functions of one variable and one harmonic function.
NASA Astrophysics Data System (ADS)
Iyer, B. R.; Kamran, N.
1991-09-01
The question of the separability of the Dirac equation in metrics with local rotational symmetry is reexamined by adapting the analysis of Kamran and McLenaghan [J. Math. Phys. 25, 1019 (1984)] for the metrics admitting a two-dimensional Abelian local isometry group acting orthogonally transitively. This generalized treatment, which involves the choice of a suitable system of local coordinates and spinor frame, allows one to establish the separability of the Dirac equation within the class of metrics for which the previous analysis of Iyer and Vishveshwara [J. Math. Phys. 26, 1034 (1985)] had left the question of separability open.
Woskie, Susan R; Bello, Dhimiter; Gore, Rebecca J; Stowe, Meredith H; Eisen, Ellen A; Liu, Youcheng; Sparer, Judy A; Redlich, Carrie A; Cullen, Mark R
2008-09-01
Because many occupational epidemiologic studies use exposure surrogates rather than quantitative exposure metrics, the UMass Lowell and Yale study of autobody shop workers provided an opportunity to evaluate the relative utility of surrogates and quantitative exposure metrics in an exposure response analysis of cross-week change in respiratory function. A task-based exposure assessment was used to develop several metrics of inhalation exposure to isocyanates. The metrics included the surrogates, job title, counts of spray painting events during the day, counts of spray and bystander exposure events, and a quantitative exposure metric that incorporated exposure determinant models based on task sampling and a personal workplace protection factor for respirator use, combined with a daily task checklist. The result of the quantitative exposure algorithm was an estimate of the daily time-weighted average respirator-corrected total NCO exposure (µg/m³). In general, these four metrics were found to be variable in agreement using measures such as weighted kappa and Spearman correlation. A logistic model for a 10% drop in FEV1 from Monday morning to Thursday morning was used to evaluate the utility of each exposure metric. The quantitative exposure metric was the most favorable, producing the best model fit, as well as the greatest strength and magnitude of association. This finding supports the reports of others that reducing exposure misclassification can improve risk estimates that otherwise would be biased toward the null. Although detailed and quantitative exposure assessment can be more time consuming and costly, it can improve exposure-disease evaluations and is more useful for risk assessment purposes. The task-based exposure modeling method successfully produced estimates of daily time-weighted average exposures in the complex and changing autobody shop work environment. The ambient TWA exposures of all of the office workers and technicians and 57% of the painters were found to be below the current U.K. Health and Safety Executive occupational exposure limit (OEL) for total NCO of 20 µg/m³. When respirator use was incorporated, all personal daily exposures were below the U.K. OEL.
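A schematic of a respirator-corrected time-weighted average computed from a daily task checklist; the task concentrations, durations, and protection factors below are invented, and the study's determinant models are more detailed:

```python
def respirator_corrected_twa(tasks, workday_min=480.0):
    """Daily time-weighted average exposure (ug/m^3) from a task checklist.

    tasks: list of (task_concentration_ug_m3, duration_min, protection_factor)
    The protection factor is 1.0 when no respirator is worn; time not covered
    by a task is assumed to contribute zero exposure. A schematic of a
    task-based exposure algorithm, not the study's fitted models.
    """
    dose = sum(c * t / pf for c, t, pf in tasks)
    return dose / workday_min

day = [(350.0, 45.0, 10.0),   # spray painting with respirator (PF 10)
       (60.0, 30.0, 1.0),     # bystander exposure, no respirator
       (15.0, 120.0, 1.0)]    # mixing and prep work
print(f"{respirator_corrected_twa(day):.1f} ug/m^3")  # compare to a 20 ug/m^3 OEL
```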
Burton, Carmen; Brown, Larry R.; Belitz, Kenneth
2005-01-01
The Santa Ana River basin is the largest stream system in Southern California and includes a densely populated coastal area. Extensive urbanization has altered the geomorphology and hydrology of the streams, adversely affecting aquatic communities. We studied macroinvertebrate and periphyton assemblages in relation to two categorical features of the highly engineered hydrologic system: water source and channel type. Four water sources were identified: natural, urban-impacted groundwater, urban runoff, and treated wastewater. Three channel types were identified: natural, channelized with natural bottom, and concrete-lined. Nineteen sites, covering the range of these two categorical features, were sampled in summer 2000. To minimize the effects of different substrate types among sites, artificial substrates were used for assessing macroinvertebrate and periphyton assemblages. Physical and chemical variables and metrics calculated from macroinvertebrate and periphyton assemblage data were compared among water sources and channel types using analysis of variance and multiple comparison tests. Macroinvertebrate metrics exhibiting significant (P < 0.05) differences between water sources included taxa and Ephemeroptera-Plecoptera-Trichoptera richness, relative richness and abundance of nonchironomid dipterans, orthoclads, oligochaetes, and some functional-feeding groups such as parasites and shredders. Periphyton metrics showing significant differences between water sources included blue-green algae biovolume and relative abundance of nitrogen heterotrophic, eutrophic, motile, and pollution-sensitive diatoms. The relative abundance of trichopterans, tanytarsini chironomids, noninsects, and filter feeders, as well as the relative richness and abundance of diatoms, were significantly different between channel types. Most physical variables were related to channel type, whereas chemical variables and some physical variables (e.g., discharge, velocity, and channel width) were related to water source. These associations were reflected in correlations between metrics, chemical variables, and physical variables. Significant improvements in the aquatic ecosystem of the Santa Ana River basin are possible with management actions such as conversion of concrete-lined channels to channelized streams with natural bottoms that can still maintain flood control to protect life and property.
The influence of drought on flow‐ecology relationships in Ozark Highland streams
Lynch, Dustin T.; Leasure, D. R.; Magoulick, Daniel D.
2018-01-01
Drought and summer drying can have strong effects on abiotic and biotic components of stream ecosystems. Environmental flow-ecology relationships may be affected by drought and drying, adding further uncertainty to the already complex interaction of flow with other environmental variables, including geomorphology and water quality. Environment-ecology relationships in stream communities in Ozark Highland streams, USA, were examined over two years with contrasting environmental conditions, a drought year (2012) and a flood year (2013). We analysed fish, crayfish and benthic macroinvertebrate assemblages using two different approaches: (1) a multiple regression analysis incorporating predictor variables related to habitat, water quality, geomorphology and hydrology and (2) a canonical ordination procedure using only hydrologic variables, in which forward selection was used to select the predictors most related to our response variables. Reach-scale habitat quality and geomorphology were found to be the most important influences on community structure, but hydrology was also important, particularly during the flood year. We also found substantial between-year variation in environment-ecology relationships. Some ecological responses differed significantly between drought and flood years, while others remained consistent. We found that magnitude was the most important flow component overall, but that there was a shift in relative importance from low-flow metrics during the drought year to average-flow metrics during the flood year, and the specific metrics of importance varied markedly between assemblages and years. Findings suggest that understanding temporal variation in flow-ecology relationships may be crucial for resource planning. While some relationships show temporal variation, others are consistent between years. Additionally, different kinds of hydrologic variables can differ greatly in terms of which assemblages they affect and how they affect them. Managers can address this complexity by focusing on relationships that are temporally stable and on flow metrics that are consistently important across groups, such as flood frequency and flow variability.
Lewis, Gregory F.; Furman, Senta A.; McCool, Martha F.; Porges, Stephen W.
2011-01-01
Three frequently used RSA metrics are investigated to document violations of assumptions for parametric analyses, moderation by respiration, influences of nonstationarity, and sensitivity to vagal blockade. Although all metrics are highly correlated, new findings illustrate that the metrics are noticeably different on the above dimensions. Only one method conforms to the assumptions for parametric analyses, is not moderated by respiration, is not influenced by nonstationarity, and reliably generates stronger effect sizes. Moreover, this method is also the most sensitive to vagal blockade. Specific features of this method may provide insights into improving the statistical characteristics of other commonly used RSA metrics. These data provide the evidence to question, based on statistical grounds, published reports using particular metrics of RSA. PMID:22138367
Rudnick, Paul A.; Clauser, Karl R.; Kilpatrick, Lisa E.; Tchekhovskoi, Dmitrii V.; Neta, Pedatsur; Blonder, Nikša; Billheimer, Dean D.; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Ham, Amy-Joan L.; Jaffe, Jacob D.; Kinsinger, Christopher R.; Mesri, Mehdi; Neubert, Thomas A.; Schilling, Birgit; Tabb, David L.; Tegeler, Tony J.; Vega-Montoto, Lorenzo; Variyath, Asokan Mulayath; Wang, Mu; Wang, Pei; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Paulovich, Amanda G.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Tempst, Paul; Liebler, Daniel C.; Stein, Stephen E.
2010-01-01
A major unmet need in LC-MS/MS-based proteomics analyses is a set of tools for quantitative assessment of system performance and evaluation of technical variability. Here we describe 46 system performance metrics for monitoring chromatographic performance, electrospray source stability, MS1 and MS2 signals, dynamic sampling of ions for MS/MS, and peptide identification. Applied to data sets from replicate LC-MS/MS analyses, these metrics displayed consistent, reasonable responses to controlled perturbations. The metrics typically displayed variations less than 10% and thus can reveal even subtle differences in performance of system components. Analyses of data from interlaboratory studies conducted under a common standard operating procedure identified outlier data and provided clues to specific causes. Moreover, interlaboratory variation reflected by the metrics indicates which system components vary the most between laboratories. Application of these metrics enables rational, quantitative quality assessment for proteomics and other LC-MS/MS analytical applications. PMID:19837981
Variable Bandwidth Filtering for Improved Sensitivity of Cross-Frequency Coupling Metrics
McDaniel, Jonathan; Liu, Song; Cornew, Lauren; Gaetz, William; Roberts, Timothy P.L.; Edgar, J. Christopher
2012-01-01
There is increasing interest in examining cross-frequency coupling (CFC) between groups of oscillating neurons. Most CFC studies examine how the phase of lower-frequency brain activity modulates the amplitude of higher-frequency brain activity. This study focuses on the signal filtering that is required to isolate the higher-frequency neuronal activity which is hypothesized to be amplitude modulated. In particular, previous publications have used a filter bandwidth fixed to a constant for all assessed modulation frequencies. The present article demonstrates that fixed bandwidth filtering can destroy amplitude modulation and create false-negative CFC measures. To overcome this limitation, this study presents a variable bandwidth filter that ensures preservation of the amplitude modulation. Simulated time series data were created with theta-gamma, alpha-gamma, and beta-gamma phase-amplitude coupling. Comparisons between filtering methods indicate that the variable bandwidth approach presented in this article is preferred when examining amplitude modulations above the theta band. The variable bandwidth method of filtering an amplitude modulated signal is proposed to preserve amplitude modulation and enable accurate CFC measurements. PMID:22577870
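A minimal sketch of the core idea: widening the passband in proportion to the modulation frequency so the spectral sidebands at f_carrier ± f_mod survive filtering and the amplitude modulation is preserved. The filter order and the factor of 1.5 are our choices, not the paper's exact design:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def amplitude_envelope(x, fs, f_carrier, f_mod):
    """Envelope of a high-frequency component using a variable bandwidth.

    The passband is centred on f_carrier with a half-width of 1.5 * f_mod,
    so sidebands at f_carrier +/- f_mod survive; a fixed narrow band would
    remove them and flatten the modulation.
    """
    half_bw = 1.5 * f_mod
    b, a = butter(3, [(f_carrier - half_bw) / (fs / 2),
                      (f_carrier + half_bw) / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

# Toy theta-gamma coupled signal: 6 Hz phase modulates 60 Hz amplitude.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
noise = 0.1 * np.random.default_rng(5).normal(size=t.size)
x = (1 + 0.8 * theta) * np.sin(2 * np.pi * 60 * t) + noise
env = amplitude_envelope(x, fs, f_carrier=60.0, f_mod=6.0)
print(np.corrcoef(env, theta)[0, 1])  # strong correlation -> modulation preserved
```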
Applying Sigma Metrics to Reduce Outliers.
Litten, Joseph
2017-03-01
Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods.
Justus, B.G.
2003-01-01
Macroinvertebrate community, fish community, water-quality, and habitat data collected from 36 sites in the Mississippi Alluvial Plain Ecoregion during 1996-98 by the U.S. Geological Survey were considered for a multimetric test of ecological integrity. Test metrics were correlated to site scores of a Detrended Correspondence Analysis of the fish community (the biological community that was the most statistically significant for indicating ecological conditions in the ecoregion) and six metrics--four fish metrics, one chemical metric (total ammonia plus organic nitrogen) and one physical metric (turbidity)--having the highest correlations were selected for the index. Index results indicate that sites in the northern half of the study unit (in Arkansas and Missouri) were less degraded than sites in the southern half of the study unit (in Louisiana and Mississippi). Of 148 landscape variables evaluated, the percentage of Holocene deposits and cotton insecticide rates had the highest correlations to index of ecological integrity results. Sites having the highest (best) index scores had the lowest percentages of Holocene deposits and the lowest cotton insecticide use rates, indicating that factors relating to the amount of Holocene deposits and cotton insecticide use rates partially explain differences in ecological conditions throughout the Mississippi Alluvial Plain Ecoregion.
Madison, Guy
2014-03-01
Timing performance becomes less precise for longer intervals, which makes it difficult to achieve simultaneity in synchronisation with a rhythm. The metrical structure of music, characterised by hierarchical levels of binary or ternary subdivisions of time, may function to increase precision by providing additional timing information when the subdivisions are explicit. This hypothesis was tested by comparing synchronisation performance across different numbers of metrical levels conveyed by loudness of sounds, such that the slowest level was loudest and the fastest was softest. Fifteen participants moved their hand with one of 9 inter-beat intervals (IBIs) ranging from 524 to 3,125 ms in 4 metrical level (ML) conditions ranging from 1 (one movement for each sound) to 4 (one movement for every 8th sound). The lowest relative variability (SD/IBI < 1.5%) was obtained for the 3 longest IBIs (1,600-3,125 ms) and MLs 3-4, significantly less than the smallest value (4-5% at 524-1,024 ms) for any ML 1 condition in which all sounds are identical. Asynchronies were also more negative with higher ML. In conclusion, metrical subdivision provides information that facilitates temporal performance, which suggests an underlying neural multi-level mechanism capable of integrating information across levels.
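For concreteness, the two performance measures reported, relative variability (SD/IBI) and mean asynchrony, can be computed from tap and beat times as in the sketch below (the data are synthetic):

```python
import numpy as np

def tapping_stats(tap_times, beat_times):
    """Relative timing variability (SD/IBI, %) and mean asynchrony (ms).

    tap_times and beat_times are aligned arrays in seconds; negative
    asynchrony means the taps anticipate the beats.
    """
    ibis = np.diff(tap_times)
    rel_var = 100.0 * ibis.std(ddof=1) / ibis.mean()
    asynchrony = 1000.0 * (tap_times - beat_times).mean()
    return rel_var, asynchrony

beats = np.arange(32) * 1.6  # 1,600 ms inter-beat interval
taps = beats - 0.02 + np.random.default_rng(9).normal(0, 0.02, beats.size)
print(tapping_stats(taps, beats))  # small SD/IBI, slightly negative asynchrony
```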
Why “improved” water sources are not always safe
Shaheed, Ameer; Orgill, Jennifer; Montgomery, Maggie A; Jeuland, Marc A; Brown, Joe
2014-01-01
Existing and proposed metrics for household drinking-water services are intended to measure the availability, safety and accessibility of water sources. However, these attributes can be highly variable over time and space and this variation complicates the task of creating and implementing simple and scalable metrics. In this paper, we highlight those factors – especially those that relate to so-called improved water sources – that contribute to variability in water safety but may not be generally recognized as important by non-experts. Problems in the provision of water in adequate quantities and of adequate quality – interrelated problems that are often influenced by human behaviour – may contribute to an increased risk of poor health. Such risk may be masked by global water metrics that indicate that we are on the way to meeting the world's drinking-water needs. Given the complexity of the topic and current knowledge gaps, international metrics for access to drinking water should be interpreted with great caution. We need further targeted research on the health impacts associated with improvements in drinking-water supplies. PMID:24700996
Beyond Metrics? The Role of Hydrologic Baseline Archetypes in Environmental Water Management.
Lane, Belize A; Sandoval-Solis, Samuel; Stein, Eric D; Yarnell, Sarah M; Pasternack, Gregory B; Dahlke, Helen E
2018-06-22
Balancing ecological and human water needs often requires characterizing key aspects of the natural flow regime and then predicting ecological response to flow alterations. Flow metrics are generally relied upon to characterize long-term average statistical properties of the natural flow regime (hydrologic baseline conditions). However, some key aspects of hydrologic baseline conditions may be better understood through more complete consideration of continuous patterns of daily, seasonal, and inter-annual variability than through summary metrics. Here we propose the additional use of high-resolution dimensionless archetypes of regional stream classes to improve understanding of baseline hydrologic conditions and inform regional environmental flows assessments. In an application to California, we describe the development and analysis of hydrologic baseline archetypes to characterize patterns of flow variability within and between stream classes. We then assess the utility of archetypes to provide context for common flow metrics and improve understanding of linkages between aquatic patterns and processes and their hydrologic controls. Results indicate that these archetypes may offer a distinct and complementary tool for researching mechanistic flow-ecology relationships, assessing regional patterns for streamflow management, or understanding impacts of changing climate.
Bowden, Stephen C; Lissner, Dianne; McCarthy, Kerri A L; Weiss, Lawrence G; Holdnack, James A
2007-10-01
Equivalence of the psychological model underlying Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) scores obtained in the United States and Australia was examined in this study. Examination of metric invariance involves testing the hypothesis that all components of the measurement model relating observed scores to latent variables are numerically equal in different samples. The assumption of metric invariance is necessary for interpretation of scores derived from research studies that seek to generalize patterns of convergent and divergent validity and patterns of deficit or disability. An Australian community volunteer sample was compared to the US standardization data. A pattern of strict metric invariance was observed across samples. In addition, when the effects of different demographic characteristics of the US and Australian samples were included, structural parameters reflecting values of the latent cognitive variables were found not to differ. These results provide important evidence for the equivalence of measurement of core cognitive abilities with the WAIS-III and suggest that latent cognitive abilities in the US and Australia do not differ.
Distance Metric Learning via Iterated Support Vector Machines.
Zuo, Wangmeng; Wang, Faqiang; Zhang, David; Lin, Liang; Huang, Yuchi; Meng, Deyu; Zhang, Lei
2017-07-11
Distance metric learning aims to learn from the given training data a valid distance metric, with which the similarity between data samples can be more effectively evaluated for classification. Metric learning is often formulated as a convex or nonconvex optimization problem, while most existing methods are based on customized optimizers and become inefficient for large scale problems. In this paper, we formulate metric learning as a kernel classification problem with the positive semi-definite constraint, and solve it by iterated training of support vector machines (SVMs). The new formulation is easy to implement and efficient in training with the off-the-shelf SVM solvers. Two novel metric learning models, namely Positive-semidefinite Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the global optimality of their solutions. Experiments are conducted on general classification, face verification and person re-identification to evaluate our methods. Compared with the state-of-the-art approaches, our methods can achieve comparable classification accuracy and are efficient in training.
NASA Astrophysics Data System (ADS)
Samardzic, Nikolina
The effectiveness of in-vehicle speech communication can be a good indicator of the perception of the overall vehicle quality and customer satisfaction. Currently available speech intelligibility metrics do not account in their procedures for essential parameters needed for a complete and accurate evaluation of in-vehicle speech intelligibility. These include the directivity and the distance of the talker with respect to the listener, binaural listening, the hearing profile of the listener, vocal effort, and multisensory hearing. In the first part of this research, the effectiveness of in-vehicle application of these metrics is investigated in a series of studies to reveal their shortcomings, including a wide range of scores resulting from each of the metrics for a given measurement configuration and vehicle operating condition. In addition, the nature of a possible correlation between the scores obtained from each metric is unknown: the metrics and the subjective perception of speech intelligibility using, for example, the same speech material have not been compared in the literature. As a result, in the second part of this research, an alternative method for speech intelligibility evaluation is proposed for use in the automotive industry, utilizing a virtual reality driving environment for ultimately setting targets, including the associated statistical variability, for future in-vehicle speech intelligibility evaluation. The Speech Intelligibility Index (SII) was evaluated at the sentence Speech Reception Threshold (sSRT) for various listening situations and hearing profiles using acoustic perception jury testing and a variety of talker and listener configurations and background noise. In addition, the contribution of individual sources and transfer paths of sound in an operating vehicle to the vehicle interior sound, and specifically their effect on speech intelligibility, was quantified in the framework of the newly developed speech intelligibility evaluation method. Lastly, as an example of the significance of speech intelligibility evaluation in the context of an applicable listening environment, it was found that the jury test participants required on average an approximately 3 dB increase in the sound pressure level of the speech material while driving and listening, compared to when just listening, for equivalent speech intelligibility performance and the same listening task.
A framework for quantification of groundwater dynamics - concepts and hydro(geo-)logical metrics
NASA Astrophysics Data System (ADS)
Haaf, Ezra; Heudorfer, Benedikt; Stahl, Kerstin; Barthel, Roland
2017-04-01
Fluctuation patterns in groundwater hydrographs are generally assumed to contain information on aquifer characteristics, climate and environmental controls. However, attempts to disentangle this information and map the dominant controls have been few. This is due to the substantial heterogeneity and complexity of groundwater systems, which is reflected in the abundance of morphologies of groundwater time series. To describe the structure and shape of hydrographs, descriptive terms like "slow"/"fast" or "flashy"/"inert" are frequently used, which are subjective, irreproducible and limited. This lack of objective and refined concepts limits approaches for regionalization of hydrogeological characteristics as well as our understanding of the dominant processes controlling groundwater dynamics. Therefore, we propose a novel framework for groundwater hydrograph characterization in an attempt to categorize morphologies explicitly and quantitatively based on perceptual concepts of aspects of the dynamics. This quantitative framework is inspired by existing and operational eco-hydrological classification frameworks for streamflow. The need for a new framework for groundwater systems is justified by the fundamental differences between the state variable groundwater head and the flow variable streamflow. Conceptually, we extracted exemplars of specific dynamic patterns, attributing descriptive terms as a means of systematisation. Metrics, primarily taken from the streamflow literature, were subsequently adapted to groundwater and assigned to the described patterns as a means of quantification. In this study, we focused on the particularities of groundwater as a state variable. Furthermore, we investigated the descriptive skill of individual metrics as well as their usefulness for groundwater hydrographs. The ensemble of categorized metrics results in a framework which can be used to describe and quantify groundwater dynamics. It is a promising tool for the setup of a successful similarity classification framework for groundwater hydrographs. However, the overabundance of metrics available calls for a systematic redundancy analysis of the metrics, which we describe in a second study (Heudorfer et al., 2017). Heudorfer, B., Haaf, E., Barthel, R., Stahl, K., 2017. A framework for quantification of groundwater dynamics - redundancy and transferability of hydro(geo-)logical metrics. EGU General Assembly 2017, Vienna, Austria.
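As one example of adapting a streamflow metric to groundwater heads, the Richards-Baker flashiness index is sketched below; applying it to a head series (and the choice of datum) is purely illustrative of the kind of adaptation the framework describes, not a prescription from the study:

```python
import numpy as np

def rb_flashiness(series):
    """Richards-Baker flashiness index: sum of absolute day-to-day changes
    divided by the sum of values. Originally a streamflow metric; using it
    on groundwater heads (e.g., head above a local datum) is illustrative
    only, since the result depends on the datum chosen."""
    x = np.asarray(series, float)
    return np.abs(np.diff(x)).sum() / x[1:].sum()

rng = np.random.default_rng(13)
inert_head = 10 + np.cumsum(rng.normal(0, 0.01, 365))  # slow, damped aquifer
flashy_head = 10 + rng.normal(0, 0.5, 365)             # fast-responding aquifer
print(rb_flashiness(inert_head), rb_flashiness(flashy_head))
```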
Edwards, Darrin C.; Metz, Charles E.
2012-01-01
Although a fully general extension of ROC analysis to classification tasks with more than two classes has yet to be developed, the potential benefits to be gained from a practical performance evaluation methodology for classification tasks with three classes have motivated a number of research groups to propose methods based on constrained or simplified observer or data models. Here we consider an ideal observer in a task with underlying data drawn from three univariate normal distributions. We investigate the behavior of the resulting ideal observer’s decision variables and ROC surface. In particular, we show that the pair of ideal observer decision variables is constrained to a parametric curve in two-dimensional likelihood ratio space, and that the decision boundary line segments used by the ideal observer can intersect this curve in at most six places. From this, we further show that the resulting ROC surface has at most four degrees of freedom at any point, and not the five that would be required, in general, for a surface in a six-dimensional space to be non-degenerate. In light of the difficulties we have previously pointed out in generalizing the well-known area under the ROC curve performance metric to tasks with three or more classes, the problem of developing a suitable and fully general performance metric for classification tasks with three or more classes remains unsolved. PMID:23162165
Metrics for glycaemic control - from HbA1c to continuous glucose monitoring.
Kovatchev, Boris P
2017-07-01
As intensive treatment to lower levels of HbA1c characteristically results in an increased risk of hypoglycaemia, patients with diabetes mellitus face a life-long optimization problem to reduce average levels of glycaemia and postprandial hyperglycaemia while simultaneously avoiding hypoglycaemia. This optimization can only be achieved in the context of lowering glucose variability. In this Review, I discuss topics that are related to the assessment, quantification and optimal control of glucose fluctuations in diabetes mellitus. I focus on markers of average glycaemia and the utility and/or shortcomings of HbA1c as a 'gold-standard' metric of glycaemic control; the notion that glucose variability is characterized by two principal dimensions, amplitude and time; measures of glucose variability that are based on either self-monitoring of blood glucose data or continuous glucose monitoring (CGM); and the control of average glycaemia and glucose variability through the use of pharmacological agents or closed-loop control systems commonly referred to as the 'artificial pancreas'. I conclude that HbA1c and the various available metrics of glucose variability reflect the management of diabetes mellitus on different timescales, ranging from months (for HbA1c) to minutes (for CGM). Comprehensive assessment of the dynamics of glycaemic fluctuations is therefore crucial for providing accurate and complete information to the patient, physician, automated decision-support or artificial-pancreas system.
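Several of the variability metrics discussed are simple to compute from a CGM trace. The sketch below is an illustrative assumption, not the Review's prescribed metric set: it derives amplitude-dimension summaries (SD, coefficient of variation), a standard 70-180 mg/dL time-in-range, and an ADAG-style estimated HbA1c from mean glucose, all on synthetic data.

```python
import numpy as np

def glucose_variability_metrics(glucose_mgdl):
    """Amplitude-dimension variability metrics from a CGM trace (mg/dL)."""
    g = np.asarray(glucose_mgdl, dtype=float)
    mean, sd = g.mean(), g.std(ddof=1)
    return {
        "mean_mgdl": mean,
        "sd_mgdl": sd,
        "cv_percent": 100 * sd / mean,                 # coefficient of variation
        "time_in_range_percent": 100 * np.mean((g >= 70) & (g <= 180)),
        "eA1c_percent": (mean + 46.7) / 28.7,          # ADAG-style estimate from mean glucose
    }

# Illustrative 24 h of 5-minute CGM readings
rng = np.random.default_rng(1)
t = np.arange(288)
glucose = 120 + 40 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 10, t.size)
for name, value in glucose_variability_metrics(glucose).items():
    print(f"{name}: {value:.1f}")
```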
Expanding space-time and variable vacuum energy
NASA Astrophysics Data System (ADS)
Parmeggiani, Claudio
2017-08-01
The paper describes a cosmological model which contemplates the presence of a vacuum energy varying, very slightly (now), with time. The constant part of the vacuum energy generated, some 6 Gyr ago, a deceleration/acceleration transition of the metric expansion; so now, in an aged Universe, the expansion is inexorably accelerating. The varying part of the vacuum energy is instead assumed to be responsible for an acceleration/deceleration transition, which occurred about 14 Gyr ago; this transition has a dynamic origin: it is a consequence of the general relativistic Einstein-Friedmann equations. Moreover, the vacuum energy (constant and variable) is here related to the zero-point energy of some quantum fields (scalar, vector, or spinor); these fields are necessarily described in a general relativistic way: their structure depends on the space-time metric, which is typically non-flat. More precisely, the commutators of the (quantum field) creation/annihilation operators are here assumed to depend on the local value of the space-time metric tensor (and eventually on its curvature); furthermore, these commutators rapidly decrease for high momentum values and reduce to the standard ones for a flat metric. In this way, the theory is "gravitationally" regularized; in particular, the zero-point (vacuum) energy density has a well defined value and, for a non-static metric, depends on the (cosmic) time. Note that this varying vacuum energy can be negative (Fermi fields) and that a change of its sign typically leads to a minimum for the metric expansion factor (a "bounce").
Braided river flow and invasive vegetation dynamics in the Southern Alps, New Zealand.
Caruso, Brian S; Edmondson, Laura; Pithie, Callum
2013-07-01
In mountain braided rivers, extreme flow variability, floods and high flow pulses are fundamental elements of natural flow regimes and drivers of floodplain processes, understanding of which is essential for management and restoration. This study evaluated flow dynamics and invasive vegetation characteristics and changes in the Ahuriri River, a free-flowing braided, gravel-bed river in the Southern Alps of New Zealand's South Island. Sixty-seven flow metrics based on indicators of hydrologic alteration and environmental flow components (extreme low flows, low flows, high flow pulses, small floods and large floods) were analyzed using a 48-year flow record. Changes in the areal cover of floodplain and invasive vegetation classes and patch characteristics over 20 years (1991-2011) were quantified using five sets of aerial photographs, and the correlation between flow metrics and cover changes were evaluated. The river exhibits considerable hydrologic variability characteristic of mountain braided rivers, with large variation in floods and other flow regime metrics. The flow regime, including flood and high flow pulses, has variable effects on floodplain invasive vegetation, and creates dynamic patch mosaics that demonstrate the concepts of a shifting mosaic steady state and biogeomorphic succession. As much as 25% of the vegetation cover was removed by the largest flood on record (570 m³/s, ~50-year return period), with preferential removal of lupin and less removal of willow. However, most of the vegetation regenerated and spread relatively quickly after floods. Some flow metrics analyzed were highly correlated with vegetation cover, and key metrics included the peak magnitude of the largest flood, flood frequency, and time since the last flood in the interval between photos. These metrics provided a simple multiple regression model of invasive vegetation cover in the aerial photos evaluated. Our analysis of relationships among flow regimes and invasive vegetation cover has implications for braided rivers impacted by hydroelectric power production, where increases in invasive vegetation cover are typically greater than in unimpacted rivers.
Hinojosa-Laborde, Carmen; Rickards, Caroline A; Ryan, Kathy L; Convertino, Victor A
2011-01-01
Heart rate variability (HRV) decreases during hemorrhage, and has been proposed as a new vital sign to assess cardiovascular stability in trauma patients. The purpose of this study was to determine if any of the HRV metrics could accurately distinguish between individuals with different tolerance to simulated hemorrhage. Specifically, we hypothesized that (1) HRV would be similar in low tolerant (LT) and high tolerant (HT) subjects at presyncope when both groups are on the verge of hemodynamic collapse; and (2) HRV could distinguish LT subjects at presyncope from hemodynamically stable HT subjects (i.e., at a submaximal level of hypovolemia). Lower body negative pressure (LBNP) was used as a model of hemorrhage in healthy human subjects, eliciting central hypovolemia to the point of presyncopal symptoms (onset of hemodynamic collapse). Subjects were classified as LT if presyncopal symptoms occurred during the -15 to -60 mmHg levels of LBNP, and HT if symptoms occurred after LBNP of -60 mmHg. A total of 20 HRV metrics were derived from R-R interval measurements at the time of presyncope, and at one level prior to presyncope (submax) in LT and HT groups. Only four HRV metrics (Long-range Detrended Fluctuation Analysis, Forbidden Words, Poincaré Plot Descriptor Ratio, and Fractal Dimensions by Curve Length) supported both hypotheses. These four HRV metrics were evaluated further for their ability to identify individual LT subjects at presyncope when compared to HT subjects at submax. Variability in individual LT and HT responses was so high that LT responses overlapped with HT responses by 85-97%. The sensitivity of these HRV metrics to distinguish between individual LT from HT subjects was 6-33%, and positive predictive values were 40-73%. These results indicate that while a small number of HRV metrics can accurately distinguish between LT and HT subjects using group mean data, individual HRV values are poor indicators of tolerance to hypovolemia.
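Some of the time-domain and Poincaré metrics referenced above can be sketched compactly. The code below is a generic illustration on synthetic R-R intervals; the study's exact "Poincaré Plot Descriptor Ratio" may be defined differently, and the SD1/SD2 ratio here is an assumption standing in for it.

```python
import numpy as np

def hrv_metrics(rr_ms):
    """Time-domain and Poincaré HRV metrics from R-R intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    sdnn = rr.std(ddof=1)                              # overall variability
    rmssd = np.sqrt(np.mean(diff ** 2))                # beat-to-beat variability
    sd1 = np.sqrt(0.5) * diff.std(ddof=1)              # Poincaré width (short-term)
    sd2 = np.sqrt(max(2 * sdnn ** 2 - sd1 ** 2, 0.0))  # Poincaré length (long-term)
    return {"SDNN": sdnn, "RMSSD": rmssd, "SD1": sd1, "SD2": sd2,
            "SD1/SD2": sd1 / sd2}

# Illustrative R-R series with a slowly drifting baseline plus beat noise
rng = np.random.default_rng(2)
rr = 800 + 0.1 * rng.normal(0, 50, 300).cumsum() + rng.normal(0, 20, 300)
for name, value in hrv_metrics(rr).items():
    print(f"{name}: {value:.2f}")
```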
Roberts, James J.; Bruce, James F.; Zuellig, Robert E.
2018-01-08
The analysis described in this report is part of a long-term project monitoring the biological communities, habitat, and water quality of the Fountain Creek Basin. Biology, habitat, and water-quality data have been collected at 10 sites since 2003. These data include annual samples of aquatic invertebrate communities, fish communities, water quality, and quantitative riverine habitat. This report examines trends in biological communities from 2003 to 2016 and explores relationships between biological communities and abiotic variables (antecedent streamflow, physical habitat, and water quality). Six biological metrics (three invertebrate and three fish) and four individual fish species were used to examine trends in these data and how streamflow, habitat, and (or) water quality may explain these trends. The analysis of 79 trends shows that the majority of significant trends decreased over the trend period. Overall, 19 trends in the fish (12) and invertebrate (7) metrics were identified before adjustment for streamflow, all of them decreasing except for the metric Invertebrate Species Richness at the most upstream site in Monument Creek. Seven of these trends were explained by streamflow, and four trends were revealed that were originally masked by variability in antecedent streamflow. Only two sites (Jimmy Camp Creek at Fountain, CO and Fountain Creek near Pinon, CO) had no trends in the fish or invertebrate metrics. Ten of the streamflow-adjusted trends were explained by habitat, one was explained by water quality, and five were not explained by any of the variables that were tested. Overall, from 2003 to 2016, all the fish metric trends were decreasing, with an average decline of 40 percent, and invertebrate metrics decreased on average by 9.5 percent. A potential peak streamflow threshold was identified above which there is severely limited production of age-0 flathead chub (Platygobio gracilis).
Laurent, Olivier; Wu, Jun; Li, Lianfa; Chung, Judith; Bartell, Scott
2013-02-17
Exposure to air pollution is frequently associated with reductions in birth weight but results of available studies vary widely, possibly in part because of differences in air pollution metrics. Further insight is needed to identify the air pollution metrics most strongly and consistently associated with birth weight. We used a hospital-based obstetric database of more than 70,000 births to study the relationships between air pollution and the risk of low birth weight (LBW, <2,500 g), as well as birth weight as a continuous variable, in term-born infants. Complementary metrics capturing different aspects of air pollution were used (measurements from ambient monitoring stations, predictions from land use regression models and from a Gaussian dispersion model, traffic density, and proximity to roads). Associations between air pollution metrics and birth outcomes were investigated using generalized additive models, adjusting for maternal age, parity, race/ethnicity, insurance status, poverty, gestational age and sex of the infants. Increased risks of LBW were associated with ambient O3 concentrations as measured by monitoring stations, as well as traffic density and proximity to major roadways. LBW was not significantly associated with other air pollution metrics, except that a decreased risk was associated with ambient NO2 concentrations as measured by monitoring stations. When birth weight was analyzed as a continuous variable, small increases in mean birth weight were associated with most air pollution metrics (<40 g per inter-quartile range in air pollution metrics). No such increase was observed for traffic density or proximity to major roadways, and a significant decrease in mean birth weight was associated with ambient O3 concentrations. We found contrasting results according to the different air pollution metrics examined. Unmeasured confounders and/or measurement errors might have produced spurious positive associations between birth weight and some air pollution metrics. Despite this, ambient O3 was associated with a decrement in mean birth weight, and significant increases in the risk of LBW were associated with traffic density, proximity to roads and ambient O3. This suggests that in our study population, these air pollution metrics are more likely related to increased risks of LBW than the other metrics we studied. Further studies are necessary to assess the consistency of such patterns across populations.
Moles of a Substance per Cell Is a Highly Informative Dosing Metric in Cell Culture
Wagner, Brett A.; Buettner, Garry R.
2015-01-01
Background: The biological consequences upon exposure of cells in culture to a dose of xenobiotic are dependent not only on biological variables, but also on the physical aspects of experiments, e.g. cell number and media volume. Dependence on physical aspects is often overlooked due to the unrecognized ambiguity in the dominant metric used to express exposure, i.e. the initial concentration of xenobiotic delivered to the culture medium over the cells. We hypothesize that for many xenobiotics, specifying dose as moles per cell will reduce this ambiguity. Dose as moles per cell can also provide additional information not easily obtainable with traditional dosing metrics. Methods: Here, 1,4-benzoquinone and oligomycin A are used as model compounds to investigate moles per cell as an informative dosing metric. Mechanistic insight into reactions with intracellular molecules, differences between sequential and bolus addition of xenobiotic, and the influence of cell volume and protein content on toxicity are also investigated. Results: When the dose of 1,4-benzoquinone or oligomycin A was specified as moles per cell, toxicity was independent of the physical conditions used (number of cells, volume of medium). When using moles per cell as a dose-metric, direct quantitative comparisons can be made between biochemical or biological endpoints and the dose of xenobiotic applied. For example, the toxicity of 1,4-benzoquinone correlated inversely with intracellular volume for all five cell lines exposed (C6, MDA-MB231, A549, MIA PaCa-2, and HepG2). Conclusions: Moles per cell is a useful and informative dosing metric in cell culture. This dosing metric is a scalable parameter that: can reduce ambiguity between experiments having different physical conditions; provides additional mechanistic information; allows direct comparison between different cells; affords a more uniform platform for experimental design; addresses the important issue of repeatability of experimental results; and could increase the translatability of information gained from in vitro experiments. PMID:26172833
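The dosing conversion at the heart of the paper is a one-line calculation. The sketch below shows, under assumed numbers, how the same nominal medium concentration yields different moles-per-cell doses in two physical setups, which is exactly the ambiguity the metric is meant to remove.

```python
def moles_per_cell(concentration_molar, volume_liters, cell_count):
    """Convert a traditional dose (initial medium concentration) into the
    moles-per-cell metric: total moles delivered divided by cell number."""
    return concentration_molar * volume_liters / cell_count

# The same nominal 10 uM exposure in two physical setups:
d1 = moles_per_cell(10e-6, 2e-3, 1e5)    # 10 uM in 2 mL over 1e5 cells
d2 = moles_per_cell(10e-6, 10e-3, 1e6)   # 10 uM in 10 mL over 1e6 cells
print(f"{d1:.1e} mol/cell vs {d2:.1e} mol/cell")  # 2.0e-13 vs 1.0e-13
```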
Budczies, Jan; Klauschen, Frederick; Sinn, Bruno V.; Győrffy, Balázs; Schmitt, Wolfgang D.; Darb-Esfahani, Silvia; Denkert, Carsten
2012-01-01
Gene or protein expression data are usually represented by metric or at least ordinal variables. In order to translate a continuous variable into a clinical decision, it is necessary to determine a cutoff point and to stratify patients into two groups each requiring a different kind of treatment. Currently, there is no standard method or standard software for biomarker cutoff determination. Therefore, we developed Cutoff Finder, a bundle of optimization and visualization methods for cutoff determination that is accessible online. While one of the methods for cutoff optimization is based solely on the distribution of the marker under investigation, other methods optimize the correlation of the dichotomization with respect to an outcome or survival variable. We illustrate the functionality of Cutoff Finder by the analysis of the gene expression of estrogen receptor (ER) and progesterone receptor (PgR) in breast cancer tissues. The distribution of these important markers is analyzed and correlated with immunohistologically determined ER status and distant metastasis-free survival. Cutoff Finder is expected to fill a relevant gap in the available biometric software repertoire and will enable faster optimization of new diagnostic biomarkers. The tool can be accessed at http://molpath.charite.de/cutoff. PMID:23251644
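One of the outcome-oriented optimization strategies described (dichotomize the marker at each candidate cutoff and test the association with outcome) can be sketched as follows. This is a simplified stand-in for Cutoff Finder's actual implementation, using a chi-square statistic and synthetic ER-expression data.

```python
import numpy as np
from scipy.stats import chi2_contingency

def optimal_cutoff(marker, outcome):
    """Scan candidate cutoffs on a continuous marker; return the cutoff that
    maximizes the chi-square statistic of the dichotomization against a
    binary outcome (one of several criteria such tools offer)."""
    best = (None, -np.inf)
    for c in np.unique(marker)[1:-1]:          # avoid degenerate splits
        table = np.array([
            [np.sum((marker <  c) & (outcome == 0)), np.sum((marker <  c) & (outcome == 1))],
            [np.sum((marker >= c) & (outcome == 0)), np.sum((marker >= c) & (outcome == 1))],
        ])
        if table.min() == 0:
            continue
        stat, _, _, _ = chi2_contingency(table)
        if stat > best[1]:
            best = (c, stat)
    return best

rng = np.random.default_rng(3)
er_expression = rng.normal(5, 2, 200)
er_status = ((er_expression + rng.normal(0, 1.5, 200)) > 5.5).astype(int)  # noisy ground truth
cutoff, stat = optimal_cutoff(er_expression, er_status)
print(f"optimal cutoff: {cutoff:.2f} (chi-square {stat:.1f})")
```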
Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Ruiz-González, Ana Daría
2017-01-01
The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate for the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine, respectively, whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard. PMID:28448524
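The first modelling step, fitting a Weibull probability density function to the vertical distribution of canopy fuel, can be illustrated as below. The data and the reading of the location parameter as an analogue of canopy base height are assumptions for demonstration only, not the paper's parameterization.

```python
import numpy as np
from scipy.stats import weibull_min

# Illustrative observations: heights (m) at which canopy fuel mass occurs,
# sampled here from a known Weibull purely for demonstration
rng = np.random.default_rng(4)
fuel_heights = weibull_min.rvs(c=2.2, loc=4.0, scale=6.0, size=500, random_state=rng)

# Fit a 3-parameter Weibull to the vertical fuel distribution; in this toy
# setup, loc plays the role of a canopy-base-height analogue
shape, loc, scale = weibull_min.fit(fuel_heights)
print(f"shape={shape:.2f}, base height~{loc:.2f} m, scale={scale:.2f}")

# Fraction of canopy fuel load between 6 and 10 m above ground
frac = weibull_min.cdf(10, shape, loc, scale) - weibull_min.cdf(6, shape, loc, scale)
print(f"fraction of canopy fuel between 6 and 10 m: {frac:.2f}")
```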
Datamining approaches for modeling tumor control probability.
Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D
2010-11-01
Tumor control probability (TCP) in radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed, including dose-volume metrics, equivalent uniform dose, the mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing, compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and the cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
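The model comparison described (leave-one-out testing scored by Spearman rank correlation, logistic regression versus an SVM) can be sketched as follows on synthetic stand-ins for the selected predictors. The data-generating process is invented, so the numbers will not reproduce the paper's rs values.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

# Illustrative stand-ins for the two selected predictors (e.g., GTV volume, V75)
rng = np.random.default_rng(5)
X = rng.normal(size=(56, 2))
p = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 1])))
y = rng.binomial(1, p)                                 # 1 = tumor control

def loo_spearman(model):
    """Leave-one-out predicted probabilities scored against outcomes by rs."""
    preds = np.empty(len(y))
    for train, test in LeaveOneOut().split(X):
        preds[test] = model.fit(X[train], y[train]).predict_proba(X[test])[:, 1]
    rs, _ = spearmanr(preds, y)
    return rs

print("logistic rs: ", round(loo_spearman(LogisticRegression()), 2))
print("SVM (RBF) rs:", round(loo_spearman(SVC(probability=True)), 2))
```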
Ren, J; Guo, X L; Lu, Z L; Zhang, J Y; Tang, J L; Chen, X; Gao, C C; Xu, C X; Xu, A Q
2016-09-07
Cardiovascular disease (CVD) is the leading cause of morbidity and mortality in the world. In 2010, a goal released by the American Heart Association (AHA) Committee focused on the primary reduction in cardiovascular risk. Data collected from 7683 men and 7667 women aged 18-69 years were analyzed. The distribution of ideal cardiovascular health metrics, based on 7 cardiovascular disease risk factors or health behaviors according to the AHA definition, was evaluated among the subjects. The association of socioeconomic factors with the prevalence of meeting 5 or more ideal cardiovascular health metrics was estimated by logistic regression analysis, and a chi-square test for categorical variables and the general linear model (GLM) procedure for continuous variables were used to compare differences in prevalence and in means between genders. Seven of 15350 participants (0.05%) met all 7 cardiovascular health metrics. The women had a higher proportion of meeting 5 or more ideal health metrics compared with men (32.67% vs. 14.27%). The subjects with higher education and income levels had a higher proportion of meeting 5 or more ideal health metrics than the subjects with lower education and income levels. A comparison of subjects meeting 5 or more ideal cardiovascular health metrics with subjects meeting 4 or fewer reveals that the adjusted odds ratio [OR, 95% confidence interval (95% CI)] for higher education and income was 1.42 (0.95, 2.21) in men and 2.59 (1.74, 3.87) in women, respectively. The prevalence of meeting all 7 cardiovascular health metrics was low in the adult population. Women, young subjects, and those with higher levels of education or income tend to have a greater number of the ideal cardiovascular health metrics. Higher socioeconomic status was associated with an increasing prevalence of meeting 5 or more cardiovascular health metrics in women but not in men. It is urgent to develop comprehensive population-based interventions to improve the cardiovascular risk factors in Shandong Province in China.
Joint learning of labels and distance metric.
Liu, Bo; Wang, Meng; Hong, Richang; Zha, Zhengjun; Hua, Xian-Sheng
2010-06-01
Machine learning algorithms frequently suffer from the insufficiency of training data and the usage of inappropriate distance metrics. In this paper, we propose a joint learning of labels and distance metric (JLLDM) approach, which is able to simultaneously address the two difficulties. In comparison with the existing semi-supervised learning and distance metric learning methods that focus only on label prediction or distance metric construction, the JLLDM algorithm optimizes the labels of unlabeled samples and a Mahalanobis distance metric in a unified scheme. The advantage of JLLDM is threefold: 1) the problem of training data insufficiency can be tackled; 2) a good distance metric can be constructed with only very few training samples; and 3) no radius parameter is needed since the algorithm automatically determines the scale of the metric. Extensive experiments are conducted to compare the JLLDM approach with different semi-supervised learning and distance metric learning methods, and empirical results demonstrate its effectiveness.
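The Mahalanobis metric at the core of such methods is compact to state. The sketch below shows only the distance computation under a given metric matrix M (here an arbitrary positive semidefinite matrix, not a matrix learned by JLLDM).

```python
import numpy as np

def mahalanobis_distance(x, y, M):
    """Distance under a learned metric matrix M (positive semidefinite):
    d(x, y) = sqrt((x - y)^T M (x - y)); M = I recovers Euclidean distance."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ M @ d))

# An arbitrary PSD matrix standing in for a learned metric
A = np.array([[2.0, 0.5], [0.0, 1.0]])
M = A.T @ A                                # A^T A is always positive semidefinite
x, y = np.array([1.0, 2.0]), np.array([3.0, 1.0])
print("Euclidean:     ", mahalanobis_distance(x, y, np.eye(2)))   # ~2.24
print("learned metric:", mahalanobis_distance(x, y, M))           # ~3.64
```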
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dréan, Gaël; Acosta, Oscar, E-mail: Oscar.Acosta@univ-rennes1.fr; Simon, Antoine
2016-06-15
Purpose: Recent studies revealed a trend toward voxelwise population analysis in order to understand the local dose/toxicity relationships in prostate cancer radiotherapy. Such approaches require, however, an accurate interindividual mapping of the anatomies and 3D dose distributions toward a common coordinate system. This step is challenging due to the high interindividual variability. In this paper, the authors propose a method designed for interindividual nonrigid registration of the rectum and dose mapping for population analysis. Methods: The method is based on the computation of a normalized structural description of the rectum using a Laplacian-based model. This description takes advantage of the tubular structure of the rectum and its centerline to be embedded in a nonrigid registration-based scheme. The performances of the method were evaluated on 30 individuals treated for prostate cancer in a leave-one-out cross validation. Results: Performance was measured using classical metrics (Dice score and Hausdorff distance), along with new metrics devised to better assess dose mapping in relation with structural deformation (dose-organ overlap). Considering these scores, the proposed method outperforms intensity-based and distance maps-based registration methods. Conclusions: The proposed method allows for accurately mapping interindividual 3D dose distributions toward a single anatomical template, opening the way for further voxelwise statistical analysis.
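The classical evaluation metrics mentioned in the Results are easy to reproduce. The sketch below computes a Dice score and a symmetric Hausdorff distance on toy 2D masks; the actual evaluation operates on 3D rectum segmentations.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (N x dim)."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

# Two toy 2D masks, one shifted to mimic residual registration error
a = np.zeros((50, 50), dtype=bool); a[10:40, 20:30] = True
b = np.zeros((50, 50), dtype=bool); b[12:42, 22:32] = True
print(f"Dice: {dice_score(a, b):.3f}")
print(f"Hausdorff: {hausdorff(np.argwhere(a), np.argwhere(b)):.2f} px")
```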
A general-purpose optimization program for engineering design
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.; Sugimoto, H.
1986-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: Strategy, Optimizer, and One-dimensional search. At each level, several options are available so that a total of nearly 100 possible combinations can be created. An example of available combinations is the Augmented Lagrange Multiplier method, using the BFGS variable metric unconstrained minimization together with polynomial interpolation for the one-dimensional search.
Evaluation Metrics for Simulations of Tropical South America
NASA Astrophysics Data System (ADS)
Gallup, S.; Baker, I. T.; Denning, A. S.; Cheeseman, M.; Haynes, K. D.; Phillips, M.
2017-12-01
The evergreen broadleaf forest of the Amazon Basin is the largest rainforest on earth, and has teleconnections to global climate and carbon cycle characteristics. This region defies simple characterization, spanning large gradients in total rainfall and seasonal variability. Broadly, the region can be thought of as trending from light-limited in its wettest areas to water-limited near the ecotone, with individual landscapes possibly exhibiting the characteristics of either (or both) limitations during an annual cycle. A basin-scale classification of mean behavior has been elusive, and ecosystem response to seasonal cycles and anomalous drought events has resulted in some disagreement in the literature, to say the least. However, new observational platforms and instruments make characterization of the heterogeneity and variability more feasible. To evaluate simulations of ecophysiological function, we develop metrics that correlate various observational products with meteorological variables such as precipitation and radiation. Observations include eddy covariance fluxes, Solar Induced Fluorescence (SIF, from GOME2 and OCO2), biomass and vegetation indices. We find that the modest correlation between SIF and precipitation decreases with increasing annual precipitation, although the relationship is not consistent between products. Biomass increases with increasing precipitation. Although vegetation indices are generally correlated with biomass and precipitation, they can saturate or experience retrieval issues during cloudy periods. Using these observational products and relationships, we develop a set of model evaluation metrics. These metrics are designed to call attention to models that get "the right answer only if it's for the right reason," and provide an opportunity for more critical evaluation of model physics. These metrics represent a testbed that can be applied to multiple models as a means to evaluate their performance in tropical South America.
Waite, Ian R.; Kennen, Jonathan G.; May, Jason T.; Brown, Larry R.; Cuffney, Thomas F.; Jones, Kimberly A.; Orlando, James L.
2014-01-01
We developed independent predictive disturbance models for a full regional data set and four individual ecoregions (Full Region vs. Individual Ecoregion models) to evaluate effects of spatial scale on the assessment of human landscape modification, on predicted response of stream biota, and the effect of other possible confounding factors, such as watershed size and elevation, on model performance. We selected macroinvertebrate sampling sites for model development (n = 591) and validation (n = 467) that met strict screening criteria from four proximal ecoregions in the northeastern U.S.: North Central Appalachians, Ridge and Valley, Northeastern Highlands, and Northern Piedmont. Models were developed using boosted regression tree (BRT) techniques for four macroinvertebrate metrics; results were compared among ecoregions and metrics. Comparing within a region but across the four macroinvertebrate metrics, the average richness of tolerant taxa (RichTOL) had the highest R2 for BRT models. Across the four metrics, final BRT models had between four and seven explanatory variables and always included a variable related to urbanization (e.g., population density, percent urban, or percent manmade channels), and either a measure of hydrologic runoff (e.g., minimum April, average December, or maximum monthly runoff) and(or) a natural landscape factor (e.g., riparian slope, precipitation, and elevation), or a measure of riparian disturbance. Contrary to our expectations, Full Region models explained nearly as much variance in the macroinvertebrate data as Individual Ecoregion models, and taking into account watershed size or elevation did not appear to improve model performance. As a result, it may be advantageous for bioassessment programs to develop large regional models as a preliminary assessment of overall disturbance conditions as long as the range in natural landscape variability is not excessive. PMID:24675770
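A boosted regression tree of the kind used in such studies can be sketched with a standard library. The example below uses scikit-learn's gradient boosting on synthetic stand-ins for the paper's predictors; the variable names and data-generating process are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative stand-ins for predictors such as population density,
# percent urban, riparian slope, and minimum April runoff
rng = np.random.default_rng(6)
n = 591
X = rng.uniform(0, 1, size=(n, 4))
rich_tol = 5 + 8 * X[:, 0] + 3 * X[:, 1] ** 2 - 2 * X[:, 2] + rng.normal(0, 1, n)

# Boosted regression tree: many shallow trees fit by stagewise gradient descent
brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01,
                                max_depth=3, subsample=0.75)
brt.fit(X, rich_tol)
print("R2 on training data:", round(brt.score(X, rich_tol), 2))
for name, imp in zip(["pop_density", "pct_urban", "riparian_slope", "apr_runoff"],
                     brt.feature_importances_):
    print(f"{name}: {imp:.2f}")
```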
Which metric of ambient ozone to predict daily mortality?
NASA Astrophysics Data System (ADS)
Moshammer, Hanns; Hutter, Hans-Peter; Kundi, Michael
2013-02-01
It is well known that ozone concentration is associated with daily cause-specific mortality, but which ozone metric best predicts the daily variability in mortality? We performed a time series analysis on daily deaths (all causes, respiratory and cardiovascular causes, and deaths in the elderly aged 65+) in Vienna for the years 1991-2009. We controlled for seasonal and long-term trend, day of the week, temperature and humidity, using the same basic model for all pollutant metrics. We found that model fit was best for same-day variability of ozone concentration (calculated as the difference between the daily hourly maximum and minimum) and the hourly maximum. Of these, the variability metric displayed a more linear dose-response function. The maximum 8-h moving average and the daily mean value did not perform as well. Nitrogen dioxide (daily mean), in comparison, performed better when previous-day values were assessed. Same-day ozone and previous-day nitrogen dioxide effect estimates did not confound each other. Variability in daily ozone levels or peak ozone levels seems to be a better proxy of a complex reactive secondary pollutant mixture than daily average ozone levels in the Middle European setting. If confirmed, this finding would have implications for the setting of legally binding limit values.
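The candidate ozone metrics compared in the analysis can all be derived from an hourly concentration series. The sketch below computes them with pandas on synthetic data; the diurnal shape and noise levels are assumptions.

```python
import numpy as np
import pandas as pd

# Illustrative hourly ozone series (ppb) over one year
rng = np.random.default_rng(7)
idx = pd.date_range("2009-01-01", periods=365 * 24, freq="h")
hour = idx.hour.to_numpy()
o3 = 30 + 20 * np.sin((hour - 6) / 24 * 2 * np.pi) + rng.normal(0, 5, idx.size)
s = pd.Series(o3, index=idx)

daily = pd.DataFrame({
    "mean": s.resample("D").mean(),
    "max_1h": s.resample("D").max(),
    # daily variability: hourly max minus hourly min (the best-fitting metric)
    "variability": s.resample("D").max() - s.resample("D").min(),
    # maximum 8-hour moving average within each day
    "max_8h": s.rolling(8).mean().resample("D").max(),
})
print(daily.head())
```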
Deriving injury risk curves using survival analysis from biomechanical experiments.
Yoganandan, Narayan; Banerjee, Anjishnu; Hsu, Fang-Chi; Bass, Cameron R; Voo, Liming; Pintar, Frank A; Gayzik, F Scott
2016-10-03
Injury risk curves from biomechanical experimental data analysis are used in automotive studies to improve crashworthiness and advance occupant safety. Metrics such as acceleration and deflection, coupled with outcomes such as fractures and anatomical disruptions from impact tests, are used in simple binary regression models. As an improvement, the International Standards Organization suggested a different approach based on survival analysis. While probability curves for side-impact-induced thorax and abdominal injuries and frontal impact-induced foot-ankle-leg injuries have been developed using this approach, deficiencies are apparent. The objective of this study is to present an improved, robust and generalizable methodology that attempts to resolve these issues. It includes: (a) statistical identification of the most appropriate independent variable (metric) from a pool of candidate metrics, measured and/or derived during experimentation and analysis, based on the highest area under the receiver operator curve; (b) quantitative determination of the most optimal probability distribution based on the lowest Akaike information criterion; (c) supplementing the qualitative/visual inspection method for comparing the selected distribution with a non-parametric distribution with objective measures; (d) identification of overly influential observations using different methods; and (e) estimation of confidence intervals using techniques more appropriate to the underlying survival statistical model. These clear and quantified details can be easily implemented with commercial/open source packages. They can be used in retrospective analysis and prospective design of experiments, and in applications to different loading scenarios such as underbody blast events. The feasibility of the methodology is demonstrated using post mortem human subject experiments and 24 metrics associated with thoracic/abdominal injuries in side-impacts.
American Solar Eclipses 2017 & 2024
NASA Astrophysics Data System (ADS)
DiCanzio, Albert
2016-06-01
This research focuses on harnessing the statistical capacity of many available concurrent observers to advance scientific knowledge. By analogy to some Galilean measurement-experiments in which he used minimal instrumentation, this researcher will address the question: How might an individual observer, with a suitably chosen common metric and with widely available, reasonably affordable equipment, contribute to new knowledge from observing the solar eclipse of 2017? Each observer would report data to an institutional sponsor who would analyze these data statistically toward new knowledge about some question currently unsettled in astronomy or in the target field connected with the question which the chosen metric is targeted to address. A subordinate question will be discussed: As a tradeoff between "best question to answer" and "easiest question for observers' data to answer", is there an event property and related target question that, with high potential utility and low cost, would be measurable by an observer positioned in the path of totality with minimal or inexpensive equipment and training? (And that, as a statistical sample point, might contribute to new knowledge?) In dialogue with the audience, the presenter will suggest some measurables, e.g., solar flares, ground shadow bands, atmospheric metrics, coronal structure, etc., correlated or not with certain other dependent variables. The independent variable would be time in the intervention interval from eclipse contacts 1-4. By the aforementioned analogy, the presenter will review as examples some measurement-experiments conducted or suggested by Galileo, e.g., pendulum laws, Jovian satellite eclipse times, geokinesis as later seen in Bessel's parallactic measurement, and Michelson's measurement of light speed. Because the criteria for choosing a metric would naturally include the existence of a data-collection and analysis method, this presentation requires dialogue with a critical mass of audience members who would participate in considering the research objective and candidate institutional sponsors as a function of candidate target questions.
Huben, Neil; Hussein, Ahmed; May, Paul; Whittum, Michelle; Kraswowki, Collin; Ahmed, Youssef; Jing, Zhe; Khan, Hijab; Kim, Hyung; Schwaab, Thomas; Underwood III, Willie; Kauffman, Eric; Mohler, James L; Guru, Khurshid A
2018-04-10
To develop a methodology for predicting operative times for robot-assisted radical prostatectomy (RARP) using preoperative patient, disease, procedural and surgeon variables to facilitate operating room (OR) scheduling. The model included preoperative metrics: BMI, ASA score, clinical stage, National Comprehensive Cancer Network (NCCN) risk, prostate weight, nerve-sparing status, extent and laterality of lymph node dissection, and operating surgeon (6 surgeons were included in the study). A binary decision tree was fit using a conditional inference tree method to predict operative times. The variables most associated with operative time were identified using permutation tests. The data were split at the value of the variable that results in the largest difference in mean surgical time across the split, and this process was repeated recursively on the resultant subsets. 1709 RARPs were included. The variable most strongly associated with operative time was the surgeon (surgeons 2 and 4 were on average 102 minutes shorter than surgeons 1, 3, 5, and 6; p<0.001). Among surgeons 2 and 4, BMI had the strongest association with surgical time (p<0.001). Among patients operated on by surgeons 1, 3, 5 and 6, RARP time was again most strongly associated with the surgeon performing RARP: surgeons 1, 3, and 6 were on average 76 minutes faster than surgeon 5 (p<0.001). The regression tree output, in the form of box plots, showed the median and range of operative time according to patient, disease, procedural and surgeon metrics. We developed a methodology that can predict operative times for RARP based on patient, disease and surgeon variables. This methodology can be utilized for quality control, facilitate OR scheduling and maximize OR efficiency.
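The paper fits a conditional inference tree (typically R's ctree). The sketch below substitutes scikit-learn's CART-style regressor, which likewise splits recursively to separate groups with different mean operative times; the encodings, effect sizes and noise are invented to echo the abstract, not to reproduce it.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Hypothetical encodings: one-hot surgeon indicators plus BMI
rng = np.random.default_rng(8)
n = 1709
surgeon = rng.integers(1, 7, n)
bmi = rng.normal(28, 4, n)
X = np.column_stack([(surgeon[:, None] == np.arange(1, 7)).astype(float), bmi])
names = [f"surgeon_{i}" for i in range(1, 7)] + ["bmi"]

# Toy generating process echoing the abstract: two fast surgeons, a BMI effect
op_time = (np.where(np.isin(surgeon, [2, 4]), 140, 242)
           + 2.5 * (bmi - 28) + rng.normal(0, 20, n))   # minutes

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=50).fit(X, op_time)
print(export_text(tree, feature_names=names))
```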
A PDE approach for quantifying and visualizing tumor progression and regression
NASA Astrophysics Data System (ADS)
Sintay, Benjamin J.; Bourland, J. Daniel
2009-02-01
Quantification of changes in tumor shape and size allows physicians to determine the effectiveness of various treatment options, adapt treatment, predict outcome, and map potential problem sites. Conventional methods are often based on metrics such as volume, diameter, or maximum cross-sectional area. This work seeks to improve the visualization and analysis of tumor changes by simultaneously analyzing changes in the entire tumor volume. The method utilizes an elliptic partial differential equation (PDE) to provide a roadmap of boundary displacement that does not suffer from the discontinuities associated with other measures such as Euclidean distance. Streamline pathways defined by Laplace's equation (a commonly used elliptic PDE) are used to track tumor progression and regression at the tumor boundary. Laplace's equation is particularly useful because it provides a smooth, continuous solution that can be evaluated with sub-pixel precision on variable grid sizes. Several metrics are demonstrated, including maximum, average, and total regression and progression. This method provides many advantages over conventional means of quantifying change in tumor shape because it is observer independent, stable for highly unusual geometries, and provides an analysis of the entire three-dimensional tumor volume.
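The PDE machinery is simple to demonstrate in 2D. The sketch below relaxes Laplace's equation between two toy circular contours by Jacobi iteration; the actual method works on 3D tumor boundaries and integrates streamlines along the gradient of the solution.

```python
import numpy as np

# Jacobi relaxation of Laplace's equation between two tumor contours:
# potential 0 on the earlier (inner) boundary, 1 on the later (outer) one.
n = 80
yy, xx = np.mgrid[0:n, 0:n]
r = np.hypot(xx - n / 2, yy - n / 2)
inner, outer = r <= 12, r >= 30          # toy circular "before/after" boundaries

phi = np.zeros((n, n))
for _ in range(5000):
    phi = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                  + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi[inner], phi[outer] = 0.0, 1.0    # re-impose boundary conditions

# The solution is smooth and continuous, free of the discontinuities of a
# Euclidean distance map; streamlines follow its gradient between boundaries.
gy, gx = np.gradient(phi)
print("potential half-way across the annulus:", round(phi[n // 2, n // 2 + 21], 3))
```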
Estimating Bacterial Diversity for Ecological Studies: Methods, Metrics, and Assumptions
Birtel, Julia; Walser, Jean-Claude; Pichon, Samuel; Bürgmann, Helmut; Matthews, Blake
2015-01-01
Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice and PCR amplification are well documented, we here address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity and rank-abundance distributions. We have used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland derived from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence similarity thresholds used during sequence clustering, and when the same analysis was applied to a reference dataset of sequences from the Greengenes database. In addition, we measured species richness from the same lake samples using ARISA fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and ARISA. We conclude that the selection of 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques. PMID:25915756
Using complexity metrics with R-R intervals and BPM heart rate measures
Wallot, Sebastian; Fusaroli, Riccardo; Tylén, Kristian; Jegindø, Else-Marie
2013-01-01
Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as indicator of impending failures and for prognoses. Likewise, in social and cognitive sciences, heart rate is increasingly employed as a measure of arousal, emotional engagement and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate in mapping the temporal dynamics of heart rate and quite different metrics are reported in the literature. As complexity metrics of heart rate variability depend critically on variability of the data, different choices regarding the kind of measures can have a substantial impact on the results. In this article we compare linear and non-linear statistics on two prominent types of heart beat data, beat-to-beat intervals (R-R interval) and beats-per-min (BPM). As a proof-of-concept, we employ a simple rest-exercise-rest task and show that non-linear statistics—fractal (DFA) and recurrence (RQA) analyses—reveal information about heart beat activity above and beyond the simple level of heart rate. Non-linear statistics unveil sustained post-exercise effects on heart rate dynamics, but their power to do so critically depends on the type data that is employed: While R-R intervals are very susceptible to non-linear analyses, the success of non-linear methods for BPM data critically depends on their construction. Generally, “oversampled” BPM time-series can be recommended as they retain most of the information about non-linear aspects of heart beat dynamics. PMID:23964244
Smith, Laurel B; Radomski, Mary Vining; Davidson, Leslie Freeman; Finkelstein, Marsha; Weightman, Margaret M; McCulloch, Karen L; Scherer, Matthew R
2014-01-01
OBJECTIVES. Executive functioning deficits may result from concussion. The Charge of Quarters (CQ) Duty Task is a multitask assessment designed to assess executive functioning in servicemembers after concussion. In this article, we discuss the rationale and process used in the development of the CQ Duty Task and present pilot data from the preliminary evaluation of interrater reliability (IRR). METHOD. Three evaluators observed as 12 healthy participants performed the CQ Duty Task and measured performance using various metrics. Intraclass correlation coefficient (ICC) quantified IRR. RESULTS. The ICC for task completion was .94. ICCs for other assessment metrics were variable. CONCLUSION. Preliminary IRR data for the CQ Duty Task are encouraging, but further investigation is needed to improve IRR in some domains. Lessons learned in the development of the CQ Duty Task could benefit future test development efforts with populations other than the military. PMID:25005507
Detection of periodicity based on independence tests - III. Phase distance correlation periodogram
NASA Astrophysics Data System (ADS)
Zucker, Shay
2018-02-01
I present the Phase Distance Correlation (PDC) periodogram - a new periodicity metric, based on the distance correlation concept of Gábor Székely. For each trial period, PDC calculates the distance correlation between the data samples and their phases. PDC requires adaptation of Székely's distance correlation to circular variables (phases). The resulting periodicity metric is best suited to sparse data sets, and it performs better than other methods for sawtooth-like periodicities. These include Cepheid and RR Lyrae light curves, as well as radial velocity curves of eccentric spectroscopic binaries. The performance of the PDC periodogram in other contexts is almost as good as that of the Generalized Lomb-Scargle periodogram. The concept of phase distance correlation can also be adapted to astrometric data, and it has the potential to be suitable for large evenly spaced data sets as well, after some algorithmic refinement.
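The core statistic can be sketched directly from the definition of distance correlation, with the phase distance made circular. This is a minimal illustration of the idea on synthetic sawtooth data; the published estimator may differ in details.

```python
import numpy as np

def _centered(x, dist):
    """Doubly centered pairwise-distance matrix (Szekely's construction)."""
    D = dist(x[:, None], x[None, :])
    return D - D.mean(0, keepdims=True) - D.mean(1, keepdims=True) + D.mean()

def pdc_statistic(times, values, trial_period):
    """Squared distance correlation between samples and their phases, using a
    circular distance on the phases (the core idea of the PDC metric)."""
    phase = (times % trial_period) / trial_period
    A = _centered(np.asarray(values, float), lambda a, b: np.abs(a - b))
    B = _centered(phase, lambda a, b: np.minimum(np.abs(a - b), 1 - np.abs(a - b)))
    return (A * B).mean() / np.sqrt((A * A).mean() * (B * B).mean())

rng = np.random.default_rng(9)
t = np.sort(rng.uniform(0, 100, 120))                 # sparse, uneven sampling
y = (t % 7.3) / 7.3 + rng.normal(0, 0.05, t.size)     # sawtooth, true period 7.3
periods = np.linspace(2, 15, 800)
best = periods[int(np.argmax([pdc_statistic(t, y, p) for p in periods]))]
print(f"best trial period: {best:.2f} (true 7.3)")
```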
Dirichlet Component Regression and its Applications to Psychiatric Data.
Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel
2008-08-15
We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each in turn, made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook's distance, and a local jackknife influence metric.
Fontes, Cristiano Hora; Budman, Hector
2017-11-01
A clustering problem involving multivariate time series (MTS) requires the selection of similarity metrics. This paper shows the limitations of the PCA similarity factor (SPCA) as a single metric in nonlinear problems where there are differences in magnitude of the same process variables due to expected changes in operating conditions. A novel method for clustering MTS, based on a combination of SPCA and the average-based Euclidean distance (AED) within a fuzzy clustering approach, is proposed. Case studies involving either simulated or real industrial data collected from a large scale gas turbine are used to illustrate that the hybrid approach enhances the ability to recognize normal and fault operating patterns. This paper also proposes an oversampling procedure to create synthetic multivariate time series that can be useful in commonly occurring situations involving unbalanced data sets.
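The two ingredients combined by the method can be sketched as follows: the PCA similarity factor compares the principal subspaces of two MTS windows, while the average-based Euclidean distance captures the magnitude differences that SPCA ignores. The trace-form SPCA below and the toy data are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def spca_similarity(X, Y, k=3):
    """PCA similarity factor between two MTS windows (rows = time,
    columns = variables): mean squared cosine between the top-k
    principal directions of each window (1 = identical subspaces)."""
    def top_pcs(Z):
        Zc = Z - Z.mean(0)
        _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
        return Vt[:k].T                                  # columns = PCs
    L, M = top_pcs(X), top_pcs(Y)
    return np.sum((L.T @ M) ** 2) / k                    # trace form of S_PCA

def avg_euclidean_distance(X, Y):
    """AED: Euclidean distance between per-variable mean vectors,
    capturing the magnitude differences that S_PCA is blind to."""
    return np.linalg.norm(X.mean(0) - Y.mean(0))

rng = np.random.default_rng(10)
base = rng.normal(size=(200, 5))
shifted = base + 10.0                  # same correlation structure, new magnitude
print("S_PCA:", round(spca_similarity(base, shifted), 3))       # ~1.0
print("AED:  ", round(avg_euclidean_distance(base, shifted), 2))  # large
```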
Robust optimization based upon statistical theory.
Sobotta, B; Söhn, M; Alber, M
2010-08-01
Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose distributions that are robust against interfraction and intrafraction motion alike, effectively removing the need for indiscriminate safety margins.
Brewer, Shannon K.; Worthington, Thomas A.; Zhang, Tianjioa; Logue, Daniel R.; Mittelstet, Aaron R.
2016-01-01
Truncated distributions of pelagophilic fishes have been observed across the Great Plains of North America, with water use and landscape fragmentation implicated as contributing factors. Developing conservation strategies for these species is hindered by the existence of multiple competing flow regime hypotheses related to species persistence. Our primary study objective was to compare the predicted distributions of one pelagophil, the Arkansas River Shiner Notropis girardi, constructed using different flow regime metrics. Further, we investigated different approaches for improving temporal transferability of the species distribution model (SDM). We compared four hypotheses: mean annual flow (a baseline), the 75th percentile of daily flow, the number of zero-flow days, and the number of days above 55th percentile flows, to examine the relative importance of flows during the spawning period. Building on an earlier SDM, we added covariates that quantified wells in each catchment, point source discharges, and non-native species presence to a structured variable framework. We assessed the effects on model transferability and fit by reducing multicollinearity using Spearman’s rank correlations, variance inflation factors, and principal component analysis, as well as altering the regularization coefficient (β) within MaxEnt. The 75th percentile of daily flow was the most important flow metric related to structuring the species distribution. The number of wells and point source discharges were also highly ranked. At the default level of β, model transferability was improved using all methods to reduce collinearity; however, at higher levels of β, the correlation method performed best. Using β = 5 provided the best model transferability, while retaining the majority of variables that contributed 95% to the model. This study provides a workflow for improving model transferability and also presents water-management options that may be considered to improve the conservation status of pelagophils.
NASA Astrophysics Data System (ADS)
Brinkkemper, S.; Rossi, M.
1994-12-01
As customizable computer aided software engineering (CASE) tools, or CASE shells, have been introduced in academia and industry, there has been a growing interest in the systematic construction of methods and their support environments, i.e. method engineering. To aid method developers and method selectors in their tasks, we propose two sets of metrics, which measure the complexity of diagrammatic specification techniques on the one hand, and of complete systems development methods on the other hand. The proposed metrics provide a relatively fast and simple way to analyze the properties of a technique (or method), and when accompanied by other selection criteria, can be used for estimating the cost of learning the technique and the relative complexity of a technique compared to others. To demonstrate the applicability of the proposed metrics, we have applied them to 34 techniques and 15 methods.
Wind Plant Performance Prediction (WP3) Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig, Anna
The methods for analysis of operational wind plant data are highly variable across the wind industry, leading to high uncertainties in the validation and bias-correction of preconstruction energy estimation methods. Lack of credibility in the preconstruction energy estimates leads to significant impacts on project financing and therefore the final levelized cost of energy for the plant. In this work, the variation in the evaluation of a wind plant's operational energy production as a result of variations in the processing methods applied to the operational data is examined. Preliminary results indicate that selection of the filters applied to the data and the filter parameters can have significant impacts on the final computed assessment metrics.
On the convergence of a linesearch based proximal-gradient method for nonconvex optimization
NASA Astrophysics Data System (ADS)
Bonettini, S.; Loris, I.; Porta, F.; Prato, M.; Rebegoldi, S.
2017-05-01
We consider a variable metric linesearch-based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function plus a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems, and our numerical tests show that the algorithm is flexible, robust, and competitive when compared to recently proposed approaches able to address the optimization problems arising in the considered applications.
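A minimal sketch of a linesearch-based proximal gradient step for an l1-regularized objective follows, with the variable-metric scaling omitted for clarity; this is a simplified variant of the class of methods studied in the paper, not the authors' exact algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the convex, nonsmooth term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_linesearch(f, grad_f, x0, lam=0.1, step=1.0,
                         sigma=1e-4, delta=0.5, iters=300):
    """Minimize f(x) + lam*||x||_1 with an Armijo linesearch along the
    proximal-gradient direction d = prox(x - step*grad f(x)) - x."""
    x = x0.copy()
    F = lambda z: f(z) + lam * np.abs(z).sum()
    for _ in range(iters):
        g = grad_f(x)
        y = soft_threshold(x - step * g, step * lam)   # proximal point
        d = y - x                                      # search direction
        # Predicted decrease used in the Armijo-type sufficient-decrease test.
        Delta = g @ d + lam * (np.abs(y).sum() - np.abs(x).sum())
        alpha = 1.0
        while F(x + alpha * d) > F(x) + sigma * alpha * Delta and alpha > 1e-12:
            alpha *= delta                             # backtrack
        x = x + alpha * d
    return x

# Toy usage with a smooth, nonconvex term f(x) = sum(x^2 / (1 + x^2)).
f = lambda x: np.sum(x**2 / (1 + x**2))
grad_f = lambda x: 2 * x / (1 + x**2)**2
print(prox_grad_linesearch(f, grad_f, x0=np.array([3.0, -2.0, 0.5])))
```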
NASA Astrophysics Data System (ADS)
Boé, Julien; Terray, Laurent
2014-05-01
Ensemble approaches for climate change projections have become ubiquitous. Because of large model-to-model variations and, generally, lack of rationale for the choice of a particular climate model against others, it is widely accepted that future climate change and its impacts should not be estimated based on a single climate model. Generally, as a default approach, the multi-model ensemble mean (MMEM) is considered to provide the best estimate of climate change signals. The MMEM approach is based on the implicit hypothesis that all the models provide equally credible projections of future climate change. This hypothesis is unlikely to be true and ideally one would want to give more weight to more realistic models. A major issue with this alternative approach lies in the assessment of the relative credibility of future climate projections from different climate models, as they can only be evaluated against present-day observations: which present-day metric(s) should be used to decide which models are "good" and which models are "bad" in the future climate? Once a supposedly informative metric has been found, other issues arise. What is the best statistical method to combine multiple model results taking into account their relative credibility measured by a given metric? How to be sure in the end that the metric-based estimate of future climate change is not in fact less realistic than the MMEM? It is impossible to provide strict answers to those questions in the climate change context. Yet, in this presentation, we propose a methodological approach based on a perfect model framework that could bring some useful elements of an answer to the questions previously mentioned. The basic idea is to take a random climate model in the ensemble and treat it as if it were the truth (results of this model, in both past and future climate, are called "synthetic observations"). Then, all the other members of the multi-model ensemble are used to derive, via a metric-based approach, a posterior estimate of climate change based on the synthetic observation of the metric. Finally, it is possible to compare the posterior estimate to the synthetic observation of future climate change to evaluate the skill of the method. The main objective of this presentation is to describe and apply this perfect model framework to test different methodological issues associated with non-uniform model weighting and similar metric-based approaches. The methodology presented is general, but will be applied to the specific case of summer temperature change in France, for which previous works have suggested potentially useful metrics associated with soil-atmosphere and cloud-temperature interactions. The relative performances of different simple statistical approaches to combine multiple model results based on metrics will be tested. The impact of ensemble size, observational errors, internal variability, and model similarity will be characterized. The potential improvements of metric-based approaches over the MMEM in terms of errors and uncertainties will be quantified.
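The perfect-model experiment can be sketched in a few lines: each model in turn supplies the synthetic observations, the remaining models are combined with metric-based weights, and the posterior estimate is scored against the withheld model's true change. The Gaussian weighting kernel and synthetic ensemble below are illustrative assumptions, not the presentation's actual setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ensemble: each model has a present-day metric and a future change
# that share a latent "model quality" component.
n_models = 30
latent = rng.normal(size=n_models)
metric_present = latent + 0.3 * rng.normal(size=n_models)
future_change = 2.0 + latent + 0.5 * rng.normal(size=n_models)

errors_weighted, errors_mmem = [], []
for i in range(n_models):                       # each model in turn is "truth"
    others = np.delete(np.arange(n_models), i)
    # Weight remaining models by closeness to the synthetic observed metric.
    w = np.exp(-0.5 * ((metric_present[others] - metric_present[i]) / 0.3) ** 2)
    w /= w.sum()
    posterior = np.sum(w * future_change[others])   # metric-based estimate
    mmem = future_change[others].mean()             # multi-model ensemble mean
    errors_weighted.append(abs(posterior - future_change[i]))
    errors_mmem.append(abs(mmem - future_change[i]))

print("weighted MAE:", np.mean(errors_weighted), " MMEM MAE:", np.mean(errors_mmem))
```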
Bowler, Michael G; Bowler, Matthew W
2014-01-01
The advent of micro-focused X-ray beams has led to the development of a number of advanced methods of sample evaluation and data collection. In particular, multiple-position data-collection and helical oscillation strategies are now becoming commonplace in order to alleviate the problems associated with radiation damage. However, intra-crystal and inter-crystal variation means that it is not always obvious on which crystals or on which region or regions of a crystal these protocols should be performed. For the automation of this process for large-scale screening, and to provide an indication of the best strategy for data collection, a metric of crystal variability could be useful. Here, measures of the intrinsic variability within protein crystals are presented and their implications for optimal data-collection strategies are discussed.
G-index: A new metric to describe dynamic refractive index effects in HPLC absorbance detection.
Kraiczek, Karsten G; Rozing, Gerard P; Zengerle, Roland
2018-09-01
High performance liquid chromatography (HPLC) with a solvent gradient and absorbance detection is one of the most widely used methods in analytical chemistry. The observed absorbance baseline is affected by the changes in the refractive index (RI) of the mobile phase. Near the limit of detection, this complicates peak quantitation. The general aspects of these RI-induced apparent absorbance effects are discussed. Two different detectors with fundamentally different optics and flow cell concepts, a variable-wavelength detector equipped with a conventional flow cell and a diode-array detector equipped with a liquid core waveguide flow cell, are compared with respect to their RI behavior. A simple method to separate static - partly unavoidable - RI effects from dynamic RI effects is presented. It is shown that the dynamic RI behavior of an absorbance detector can be well described using a single, relatively easy-to-determine metric called the G-index. The G-index is typically on the order of a few seconds and its sign depends on the optical flow cell concept.
National evaluation of multidisciplinary quality metrics for head and neck cancer.
Cramer, John D; Speedy, Sedona E; Ferris, Robert L; Rademaker, Alfred W; Patel, Urjeet A; Samant, Sandeep
2017-11-15
The National Quality Forum has endorsed quality-improvement measures for multiple cancer types that are being developed into actionable tools to improve cancer care. No nationally endorsed quality metrics currently exist for head and neck cancer. The authors identified patients with surgically treated, invasive, head and neck squamous cell carcinoma in the National Cancer Data Base from 2004 to 2014 and compared the rate of adherence to 5 different quality metrics and whether compliance with these quality metrics impacted overall survival. The metrics examined included negative surgical margins, neck dissection lymph node (LN) yield ≥ 18, appropriate adjuvant radiation, appropriate adjuvant chemoradiation, adjuvant therapy within 6 weeks, as well as overall quality. In total, 76,853 eligible patients were identified. There was substantial variability in patient-level adherence, which was 80% for negative surgical margins, 73.1% for neck dissection LN yield, 69% for adjuvant radiation, 42.6% for adjuvant chemoradiation, and 44.5% for adjuvant therapy within 6 weeks. Risk-adjusted Cox proportional-hazard models indicated that all metrics were associated with a reduced risk of death: negative margins (hazard ratio [HR] 0.73; 95% confidence interval [CI], 0.71-0.76), LN yield ≥ 18 (HR, 0.93; 95% CI, 0.89-0.96), adjuvant radiation (HR, 0.67; 95% CI, 0.64-0.70), adjuvant chemoradiation (HR, 0.84; 95% CI, 0.79-0.88), and adjuvant therapy ≤6 weeks (HR, 0.92; 95% CI, 0.89-0.96). Patients who received high-quality care had a 19% reduced adjusted hazard of mortality (HR, 0.81; 95% CI, 0.79-0.83). Five head and neck cancer quality metrics were identified that have substantial variability in adherence and meaningfully impact overall survival. These metrics are appropriate candidates for national adoption. Cancer 2017;123:4372-81.
Gieswein, Alexander; Hering, Daniel; Feld, Christian K
2017-09-01
Freshwater ecosystems are impacted by a range of stressors arising from diverse human-caused land and water uses. Identifying the relative importance of single stressors and understanding how multiple stressors interact and jointly affect biology is crucial for River Basin Management. This study addressed multiple human-induced stressors and their effects on the aquatic flora and fauna based on data from standard WFD monitoring schemes. For 1095 sites within a mountainous catchment, we used 12 stressor variables covering three different stressor groups: riparian land use, physical habitat quality and nutrient enrichment. Twenty-one biological metrics calculated from taxa lists of three organism groups (fish, benthic invertebrates and aquatic macrophytes) served as response variables. Stressor and response variables were subjected to Boosted Regression Tree (BRT) analysis to identify stressor hierarchy and stressor interactions and subsequently to Generalised Linear Regression Modelling (GLM) to quantify the stressors' standardised effect sizes. Our results show that riverine habitat degradation was the dominant stressor group for the river fauna, notably the bed physical habitat structure. Overall, the explained variation in benthic invertebrate metrics was higher than it was in fish and macrophyte metrics. In particular, general integrative (aggregate) metrics such as % Ephemeroptera, Plecoptera and Trichoptera (EPT) taxa performed better than ecological traits (e.g. % feeding types). Overall, additive stressor effects dominated, while significant and meaningful stressor interactions were generally rare and weak. We concluded that given the type of stressor and ecological response variables addressed in this study, river basin managers do not need to bother much about complex stressor interactions, but can focus on the prevailing stressors according to the hierarchy identified.
Ramilo, P; Martínez-Falcón, A P; García-López, A; Brustel, H; Galante, E; Micó, E
2017-12-08
Mediterranean oak forests of the Iberian Peninsula host a great diversity of saproxylic beetles. For centuries, humans have carried out traditional management practices in this area, at both habitat and tree level, causing changes in forest structure. The aim of this study was to evaluate the anthropic effect of these traditional practices on saproxylic beetle diversity by measuring a set of environmental variables related to forest structure at both plot and tree level. Fauna was collected using window traps over a period of 12 mo. Multiple regression procedures showed which variables significantly affected the diversity of the studied assemblage. Our results demonstrated that the different metrics used to assess the diversity of assemblages responded variably depending on the management strategies applied and the level at which they were carried out. Certain management practices that disrupted the landscape from its natural state, such as the introduction of livestock or the local removal of particular trees, maximized species richness but, nevertheless, had a negative effect on the rest of diversity metrics analyzed. However, other practices such as pollarding, which involves the suppression of the main branch of the tree, had a positive effect on all diversity metrics evaluated as it promoted the formation of potential microhabitats for saproxylic fauna. We concluded that not all types and degrees of traditional forest management favor saproxylic beetle diversity and that different diversity metrics should be taken into consideration in future strategies for the protection and conservation of this fauna.
Cundy, Thomas P; Thangaraj, Evelyn; Rafii-Tari, Hedyeh; Payne, Christopher J; Azzie, Georges; Sodergren, Mikael H; Yang, Guang-Zhong; Darzi, Ara
2015-04-01
Excessive or inappropriate tissue interaction force during laparoscopic surgery is a recognized contributor to surgical error, especially for robotic surgery. Measurement of force at the tool-tissue interface is, therefore, a clinically relevant skill assessment variable that may improve effectiveness of surgical simulation. Popular box trainer simulators lack the necessary technology to measure force. The aim of this study was to develop a force sensing unit that may be integrated easily with existing box trainer simulators and to (1) validate multiple force variables as objective measurements of laparoscopic skill, and (2) determine concurrent validity of a revised scoring metric. A base plate unit sensitized to a force transducer was retrofitted to a box trainer. Participants of 3 different levels of operative experience performed 5 repetitions of a peg transfer and suture task. Multiple outcome variables of force were assessed as well as a revised scoring metric that incorporated a penalty for force error. Mean, maximum, and overall magnitudes of force were significantly different among the 3 levels of experience, as well as force error. Experts were found to exert the least force and fastest task completion times, and vice versa for novices. Overall magnitude of force was the variable most correlated with experience level and task completion time. The revised scoring metric had similar predictive strength for experience level compared with the standard scoring metric. Current box trainer simulators can be adapted for enhanced objective measurements of skill involving force sensing. These outcomes are significantly influenced by level of expertise and are relevant to operative safety in laparoscopic surgery. Conventional proficiency standards that focus predominantly on task completion time may be integrated with force-based outcomes to be more accurately reflective of skill quality.
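The force variables reported (mean, maximum, overall magnitude, and a force-error penalty) reduce to simple summaries of the transducer time series. Here is a sketch on a simulated signal; the 1.5 N reference force and 100 Hz sampling rate are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 0.01                                   # assumed 100 Hz force transducer
t = np.arange(0, 60.0, dt)                  # one task repetition (s)
force = np.abs(1.5 + 0.5 * np.sin(0.5 * t) + 0.3 * rng.normal(size=t.size))  # N

mean_force = force.mean()                            # mean force
max_force = force.max()                              # maximum force
overall_magnitude = force.sum() * dt                 # time-integrated force (N*s)
reference = 1.5                                      # hypothetical ideal force level
force_error = np.abs(force - reference).sum() * dt   # penalty term for revised score

print(f"mean={mean_force:.2f} N  max={max_force:.2f} N  "
      f"integral={overall_magnitude:.1f} N*s  error={force_error:.1f} N*s")
```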
NASA Astrophysics Data System (ADS)
Michel, N. L.; Wilsey, C.; Burkhalter, C.; Trusty, B.; Langham, G.
2017-12-01
Scalable indicators of biodiversity change are critical to reporting overall progress towards national and global targets for biodiversity conservation (e.g. Aichi Targets) and sustainable development (SDGs). These essential biodiversity variables capitalize on new remote sensing technologies and growth of community science participation. Here we present a novel biodiversity metric quantifying resilience of bird communities and, by extension, of their associated ecological communities. This metric adds breadth to the community composition class of essential biodiversity variables that track trends in condition and vulnerability of ecological communities. We developed this index for use with North American grassland birds, a guild that has experienced stronger population declines than any other avian guild, in order to evaluate gains from the implementation of best management practices on private lands. The Bird Community Resilience Index was designed to incorporate the full suite of species-specific responses to management actions, and be flexible enough to work across broad climatic, land cover, and bird community gradients (i.e., grasslands from northern Mexico through Canada). The Bird Community Resilience Index consists of four components: density estimates of grassland and arid land birds; weighting based on conservation need; a functional diversity metric to incorporate resiliency of bird communities and their ecosystems; and a standardized scoring system to control for interannual variation caused by extrinsic factors (e.g., climate). We present an analysis of bird community resilience across ranches in the Northern Great Plains region of the United States. As predicted, Bird Community Resilience was higher in lands implementing best management practices than elsewhere. While developed for grassland birds, this metric holds great potential for use as an Essential Biodiversity Variable for community composition in a variety of habitats.
Ordin, Mikhail; Polyanskaya, Leona
2015-08-01
The development of speech rhythm in second language (L2) acquisition was investigated. Speech rhythm was defined as durational variability that can be captured by the interval-based rhythm metrics. These metrics were used to examine the differences in durational variability between proficiency levels in L2 English spoken by French and German learners. The results reveal that durational variability increased as L2 acquisition progressed in both groups of learners. This indicates that speech rhythm in L2 English develops from more syllable-timed toward more stress-timed patterns irrespective of whether the native language of the learner is rhythmically similar to or different from the target language. Although both groups showed similar development of speech rhythm in L2 acquisition, there were also differences: German learners achieved a degree of durational variability typical of the target language, while French learners exhibited lower variability than native British speakers, even at an advanced proficiency level.
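Interval-based rhythm metrics of this kind are straightforward to compute from segmented interval durations. The sketch below implements the widely used nPVI and ΔC measures on hypothetical vocalic durations; the specific metric set used in the study may differ.

```python
import numpy as np

def npvi(durations):
    """Normalized Pairwise Variability Index over successive interval durations.
    Higher values indicate more variable (stress-timed) rhythm."""
    d = np.asarray(durations, dtype=float)
    pairs = np.abs(d[1:] - d[:-1]) / ((d[1:] + d[:-1]) / 2.0)
    return 100.0 * pairs.mean()

def delta_c(durations):
    """Standard deviation of interval durations (a global variability metric)."""
    return float(np.std(durations))

# Hypothetical vocalic interval durations (seconds) for two speakers.
learner = [0.11, 0.12, 0.10, 0.11, 0.12, 0.10]   # low variability: syllable-timed
native = [0.06, 0.18, 0.08, 0.22, 0.07, 0.16]    # high variability: stress-timed

print("nPVI:", npvi(learner), npvi(native))
print("delta:", delta_c(learner), delta_c(native))
```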
Energy-Based Metrics for Arthroscopic Skills Assessment.
Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa
2017-08-05
Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
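A sketch of the evaluation pipeline follows: normalized energy features (each divided by an ideal reference performance) classified with an SVM under leave-one-subject-out cross-validation. The feature values and group structure are simulated stand-ins for the study's recorded data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(4)

# Hypothetical per-trial energy features for 26 subjects x 5 repetitions:
# [mechanical energy, work], each divided by the value of an "ideal" reference
# performance so that a score near 1 means expert-like efficiency.
n_subj, n_rep = 26, 5
half = n_subj // 2 * n_rep
labels = np.repeat([0, 1], half)                     # 0 = novice, 1 = expert
raw = np.vstack([rng.normal(3.0, 0.8, (half, 2)),    # novices: wasteful motion
                 rng.normal(1.3, 0.3, (half, 2))])   # experts: efficient motion
ideal = np.array([1.0, 1.0])                         # ideal-performance reference
X = raw / ideal                                      # normalized energy metrics
groups = np.repeat(np.arange(n_subj), n_rep)         # subject IDs

# Leave-one-subject-out cross-validated classification, as in the study design.
acc = cross_val_score(SVC(kernel="rbf"), X, labels, groups=groups,
                      cv=LeaveOneGroupOut()).mean()
print(f"LOSO accuracy: {acc:.2f}")
```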
Metric Measures and the Consumer. Reprint from FDA CONSUMER, Dec. 1975-Jan. 1976.
ERIC Educational Resources Information Center
Food and Drug Administration (DHEW), Washington, DC.
Advantages of the metric system for the consumer are discussed. Basic metric units are described, then methods of comparison shopping when items are marked in metric units are explained. The effect of the change to the metric system on packaging and labelling requirements is discussed. (DT)
Cuffney, T.F.; Zappia, H.; Giddings, E.M.P.; Coles, J.F.
2005-01-01
Responses of invertebrate assemblages along gradients of urban intensity were examined in three metropolitan areas with contrasting climates and topography (Boston, Massachusetts; Birmingham, Alabama; Salt Lake City, Utah). Urban gradients were defined using an urban intensity index (UII) derived from basin-scale population, infrastructure, land-use, land-cover, and socioeconomic characteristics. Responses based on assemblage metrics, indices of biotic integrity (B-IBI), and ordinations were readily detected in all three urban areas and many responses could be accurately predicted simply using regional UIIs. Responses to UII were linear and did not indicate any initial resistance to urbanization. Richness metrics were better indicators of urbanization than were density metrics. Metrics that were good indicators were specific to each study except for a richness-based tolerance metric (TOLr) and one B-IBI. Tolerances to urbanization were derived for 205 taxa. These tolerances differed among studies and with published tolerance values, but provided similar characterizations of site conditions. Basin-scale land-use changes were the most important variables for explaining invertebrate responses to urbanization. Some chemical and instream physical habitat variables were important in individual studies, but not among studies. Optimizing the study design to detect basin-scale effects may have reduced the ability to detect local-scale effects.
Introducing Co-Activation Pattern Metrics to Quantify Spontaneous Brain Network Dynamics
Chen, Jingyuan E.; Chang, Catie; Greicius, Michael D.; Glover, Gary H.
2015-01-01
Recently, fMRI researchers have begun to realize that the brain's intrinsic network patterns may undergo substantial changes during a single resting state (RS) scan. However, despite the growing interest in brain dynamics, metrics that can quantify the variability of network patterns are still quite limited. Here, we first introduce various quantification metrics based on the extension of co-activation pattern (CAP) analysis, a recently proposed point-process analysis that tracks state alternations at each individual time frame and relies on very few assumptions; then apply these proposed metrics to quantify changes of brain dynamics during a sustained 2-back working memory (WM) task compared to rest. We focus on the functional connectivity of two prominent RS networks, the default-mode network (DMN) and executive control network (ECN). We first demonstrate less variability of global Pearson correlations with respect to the two chosen networks using a sliding-window approach during WM task compared to rest; then we show that the macroscopic decrease in variations in correlations during a WM task is also well characterized by the combined effect of a reduced number of dominant CAPs, increased spatial consistency across CAPs, and increased fractional contributions of a few dominant CAPs. These CAP metrics may provide alternative and more straightforward quantitative means of characterizing brain network dynamics than time-windowed correlation analyses. PMID:25662866
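The two kinds of dynamics summaries contrasted in the paper, sliding-window correlation variability and frame-wise CAP statistics, can be sketched as follows on synthetic time courses (the window length, state labels, and coupling are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Two synthetic network time courses (e.g., DMN and ECN seed signals).
n_tr = 400
dmn = rng.normal(size=n_tr)
ecn = 0.5 * dmn + rng.normal(size=n_tr)   # partially coupled

def sliding_corr(a, b, win=30, step=1):
    """Pearson correlation in a sliding window; its std summarizes variability."""
    out = []
    for s in range(0, len(a) - win + 1, step):
        out.append(np.corrcoef(a[s:s + win], b[s:s + win])[0, 1])
    return np.array(out)

r = sliding_corr(dmn, ecn)
print(f"windowed correlation: mean={r.mean():.2f}, std={r.std():.2f}")

# A CAP-style summary instead tracks discrete states frame by frame, e.g. the
# fractional contribution of each dominant co-activation pattern.
states = rng.integers(0, 4, size=n_tr)          # toy frame-wise CAP labels
frac = np.bincount(states, minlength=4) / n_tr  # fractional contribution per CAP
print("CAP fractional contributions:", frac)
```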
Disturbance metrics predict a wetland Vegetation Index of Biotic Integrity
Stapanian, Martin A.; Mack, John; Adams, Jean V.; Gara, Brian; Micacchion, Mick
2013-01-01
Indices of biological integrity of wetlands based on vascular plants (VIBIs) have been developed in many areas in the USA. Knowledge of the best predictors of VIBIs would enable management agencies to make better decisions regarding mitigation site selection and performance monitoring criteria. We use a novel statistical technique to develop predictive models for an established index of wetland vegetation integrity (Ohio VIBI), using as independent variables 20 indices and metrics of habitat quality, wetland disturbance, and buffer area land use from 149 wetlands in Ohio, USA. For emergent and forest wetlands, predictive models explained 61% and 54% of the variability, respectively, in Ohio VIBI scores. In both cases the most important predictor of Ohio VIBI score was a metric that assessed habitat alteration and development in the wetland. Of secondary importance as a predictor was a metric that assessed microtopography, interspersion, and quality of vegetation communities in the wetland. Metrics and indices assessing disturbance and land use of the buffer area were generally poor predictors of Ohio VIBI scores. Our results suggest that vegetation integrity of emergent and forest wetlands could be most directly enhanced by minimizing substrate and habitat disturbance within the wetland. Such efforts could include reducing or eliminating any practices that disturb the soil profile, such as nutrient enrichment from adjacent farm land, mowing, grazing, or cutting or removing woody plants.
Metrics for linear kinematic features in sea ice
NASA Astrophysics Data System (ADS)
Levy, G.; Coon, M.; Sulsky, D.
2006-12-01
The treatment of leads as cracks or discontinuities (see Coon et al. presentation) requires some shift in the procedure of evaluation and comparison of lead-resolving models and their validation against observations. Common metrics used to evaluate ice model skills are by and large an adaptation of a least square "metric" adopted from operational numerical weather prediction data assimilation systems and are most appropriate for continuous fields and Eulerian systems where the observations and predictions are commensurate. However, this class of metrics suffers from some flaws in areas of sharp gradients and discontinuities (e.g., leads) and when Lagrangian treatments are more natural. After a brief review of these metrics and their performance in areas of sharp gradients, we present two new metrics specifically designed to measure model accuracy in representing linear features (e.g., leads). The indices developed circumvent the requirement that both the observations and model variables be commensurate (i.e., measured with the same units) by considering the frequencies of the features of interest/importance. We illustrate the metrics by scoring several hypothetical "simulated" discontinuity fields against leads interpreted from RGPS observations.
CLIVAR Asian-Australian Monsoon Panel Report to Scientific Steering Group-18
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sperber, Ken R.; Hendon, Harry H.
2011-05-04
These are a set of slides on CLIVAR Asian-Australian Monsoon Panel Report to Scientific Steering Group-18. These are the major topics covered within: major activities over the past year, AAMP Monsoon Diagnostics/Metrics Task Team, Boreal Summer Asian Monsoon, Workshop on Modelling Monsoon Intraseasonal Variability, Workshop on Interdecadal Variability and Predictability of the Asian-Australian Monsoon, Evidence of Interdecadal Variability of the Asian-Australian Monsoon, Development of MJO metrics/process-oriented diagnostics/model evaluation/prediction with MJOTF and GCSS, YOTC MJOTF, GEWEX GCSS, AAMP MJO Diabatic Heating Experiment, Hindcast Experiment for Intraseasonal Prediction, Support and Coordination for CINDY2011/DYNAMO, Outreach to CORDEX, Interaction with FOCRAII, WWRP/WCRP Multi-Week Prediction Project, Major Future Plans/Activities, Revised AAMP Terms of Reference, Issues and Challenges.
Hirsch, Irl B; Balo, Andrew K; Sayer, Kevin; Garcia, Arturo; Buckingham, Bruce A; Peyser, Thomas A
2017-06-01
The potential clinical benefits of continuous glucose monitoring (CGM) have been recognized for many years, but CGM is used by a small fraction of patients with diabetes. One obstacle to greater use of the technology is the lack of simplified tools for assessing glycemic control from CGM data without complicated visual displays of data. We developed a simple new metric, the personal glycemic state (PGS), to assess glycemic control solely from continuous glucose monitoring data. PGS is a composite index that assesses four domains of glycemic control: mean glucose, glycemic variability, time in range and frequency and severity of hypoglycemia. The metric was applied to data from six clinical studies for the G4 Platinum continuous glucose monitoring system (Dexcom, San Diego, CA). The PGS was also applied to data from a study of artificial pancreas comparing results from open loop and closed loop in adolescents and in adults. The new metric for glycemic control, PGS, was able to characterize the quality of glycemic control in a wide range of study subjects with various mean glucose, minimal, moderate, and excessive glycemic variability and subjects on open loop versus closed loop control. A new composite metric for the assessment of glycemic control based on CGM data has been defined for use in assessing glycemic control in clinical practice and research settings. The new metric may help rapidly identify problems in glycemic control and may assist with optimizing diabetes therapy during time-constrained physician office visits.
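The abstract does not give the PGS weighting, but its four domains are standard CGM summaries. Here is a sketch computing them on simulated readings; the thresholds used (70-180 mg/dL target range, <54 mg/dL severe hypoglycemia) are common conventions and not necessarily those of the PGS itself.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated 2 weeks of CGM readings at 5-minute intervals (mg/dL).
n = 14 * 24 * 12
glucose = 140 + 40 * np.sin(np.linspace(0, 60, n)) + 15 * rng.normal(size=n)
glucose = np.clip(glucose, 40, 400)

mean_glucose = glucose.mean()                          # domain 1: mean glucose
cv = 100 * glucose.std() / mean_glucose                # domain 2: variability (%CV)
time_in_range = 100 * np.mean((glucose >= 70) & (glucose <= 180))  # domain 3
hypo_frac = 100 * np.mean(glucose < 70)                # domain 4: hypo frequency
severe_hypo = 100 * np.mean(glucose < 54)              # domain 4: hypo severity

print(f"mean={mean_glucose:.0f} mg/dL  CV={cv:.0f}%  TIR={time_in_range:.0f}%  "
      f"hypo={hypo_frac:.1f}%  severe hypo={severe_hypo:.1f}%")
```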
Variable Camber Continuous Aerodynamic Control Surfaces and Methods for Active Wing Shaping Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T. (Inventor)
2016-01-01
An aerodynamic control apparatus for an air vehicle improves various aerodynamic performance metrics by employing multiple spanwise flap segments that jointly form a continuous or a piecewise continuous trailing edge to minimize drag induced by lift or vortices. At least one of the multiple spanwise flap segments includes a variable camber flap subsystem having multiple chordwise flap segments that may be independently actuated. Some embodiments also employ a continuous leading edge slat system that includes multiple spanwise slat segments, each of which has one or more chordwise slat segment. A method and an apparatus for implementing active control of a wing shape are also described and include the determination of desired lift distribution to determine the improved aerodynamic deflection of the wings. Flap deflections are determined and control signals are generated to actively control the wing shape to approximate the desired deflection.
Sensitivity of intermittent streams to climate variations in the western United States
NASA Astrophysics Data System (ADS)
Eng, K.; Wolock, D.; Dettinger, M. D.
2014-12-01
There is a great deal of interest in streamflow changes caused by climate change because of the potential negative effects on aquatic biota and water supplies. Most previous studies have focused on perennial streams, and only a few studies have examined the effect of climate variability on intermittent streams. Our objective in this study was to evaluate the sensitivity of intermittent streams to historical variability in climate in the semi-arid regions of the western United States. This study was carried out at 45 intermittent streams that had a minimum of 45 years of daily streamgage record by evaluating: (1) correlations between time series of flow metrics (the number of zero-flow events and the averages of the central 50% and largest 10% of flows) and climate, and (2) decadal changes in the seasonality and long-term trends of these flow metrics. Results showed strong associations between the low-flow metrics and historical changes in climate. The decadal analysis, in contrast, suggested no significant seasonal shifts or decade-to-decade trends in the low-flow metrics. The lack of trends or changes in seasonality is likely due to unchanged long-term patterns in precipitation over the time period examined.
Barrier Island Shorelines Extracted from Landsat Imagery
Guy, Kristy K.
2015-10-13
The shoreline is a common variable used as a metric for coastal erosion or change (Himmelstoss and others, 2010). Although shorelines are often extracted from topographic data (for example, ground-based surveys and light detection and ranging [lidar]), image-based shorelines, corrected for their inherent uncertainties (Moore and others, 2006), have provided much of our understanding of long-term shoreline change because they pre-date routine lidar elevation survey methods. Image-based shorelines continue to be valuable because of their higher temporal resolution compared to costly airborne lidar surveys. A method for extracting sandy shorelines from 30-meter (m) resolution Landsat imagery is presented here.
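Image-based shoreline extraction is often implemented by thresholding a water index and tracing the land/water boundary. The sketch below uses McFeeters' NDWI on toy green and near-infrared bands; it is a generic illustration of the approach, and the report's actual method may differ.

```python
import numpy as np

rng = np.random.default_rng(13)

# Toy "green" and "NIR" reflectance bands (30 m pixels); water occupies the
# lower half of the scene along a gently curving shore.
ny, nx = 100, 100
rows = np.indices((ny, nx))[0]
shore = 50 + (5 * np.sin(np.linspace(0, 6, nx))).astype(int)
water = rows > shore
green = np.where(water, 0.05, 0.10) + 0.01 * rng.normal(size=(ny, nx))
nir = np.where(water, 0.02, 0.30) + 0.01 * rng.normal(size=(ny, nx))

ndwi = (green - nir) / (green + nir)   # McFeeters NDWI: positive over water
water_mask = ndwi > 0.0

# Shoreline position: first water pixel in each image column.
shoreline_row = water_mask.argmax(axis=0)
print(shoreline_row[:10])
```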
NASA Astrophysics Data System (ADS)
Yushkov, A.; Risse, M.; Werner, M.; Krieg, J.
2016-12-01
We present a method to determine the proton-to-helium ratio in cosmic rays at ultra-high energies. It makes use of the exponential slope, Λ, of the tail of the Xmax distribution measured by an air shower experiment. The method is quite robust with respect to uncertainties from modeling hadronic interactions and to systematic errors on Xmax and energy, and to the possible presence of primary nuclei heavier than helium. Obtaining the proton-to-helium ratio with air shower experiments would be a remarkable achievement. To quantify the applicability of a particular mass-sensitive variable for mass composition analysis despite hadronic uncertainties we introduce as a metric the 'analysis indicator' and find an improved performance of the Λ method compared to other variables currently used in the literature. The fraction of events in the tail of the Xmax distribution can provide additional information on the presence of nuclei heavier than helium in the primary beam.
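For an exponential tail N(Xmax) proportional to exp(-Xmax/Λ), the maximum-likelihood slope over a tail starting at x0 is simply the mean exceedance above x0. A toy sketch follows; a real analysis selects the tail range carefully and fits an unbinned likelihood on reconstructed showers, and all numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy Xmax sample (g/cm^2): a Gaussian bulk plus a deep exponential tail from
# proton-like showers with an assumed true slope of 55 g/cm^2.
bulk = rng.normal(750, 35, size=9000)
tail = 800 + rng.exponential(scale=55, size=1000)
xmax = np.concatenate([bulk, tail])

threshold = np.quantile(xmax, 0.9)          # start of the deep tail (10% deepest)
deep = xmax[xmax > threshold]
lam = np.mean(deep - threshold)             # MLE slope of an exponential tail
frac_tail = deep.size / xmax.size           # extra info on nuclei heavier than He
print(f"threshold = {threshold:.0f} g/cm^2, Lambda = {lam:.1f} g/cm^2, "
      f"tail fraction = {frac_tail:.2f}")
```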
Boosting quantum annealer performance via sample persistence
NASA Astrophysics Data System (ADS)
Karimi, Hamed; Rosenberg, Gili
2017-07-01
We propose a novel method for reducing the number of variables in quadratic unconstrained binary optimization problems, using a quantum annealer (or any sampler) to fix the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are usually much easier for the quantum annealer to solve, due to their being smaller and consisting of disconnected components. This approach significantly increases the success rate and number of observations of the best known energy value in samples obtained from the quantum annealer, when compared with calling the quantum annealer without using it, even when using fewer annealing cycles. Use of the method results in a considerable improvement in success metrics even for problems with high-precision couplers and biases, which are more challenging for the quantum annealer to solve. The results are further enhanced by applying the method iteratively and combining it with classical pre-processing. We present results for both Chimera graph-structured problems and embedded problems from a real-world application.
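A sketch of the persistence idea: variables whose value agrees across a large fraction of the samples are fixed at that value, and only the remaining variables are resubmitted as a smaller problem. The 90% agreement threshold and ±1 spin encoding are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(8)

def fix_persistent_variables(samples, threshold=0.9):
    """Fix spins whose value persists across at least `threshold` of the
    samples; return (fixed assignment with 0 = free, mask of free variables)."""
    samples = np.asarray(samples)          # shape (n_samples, n_vars), entries +/-1
    mean_spin = samples.mean(axis=0)
    fixed = np.abs(mean_spin) >= 2 * threshold - 1   # e.g. >= 90% agreement
    assignment = np.where(mean_spin >= 0, 1, -1)
    return np.where(fixed, assignment, 0), ~fixed

# Toy samples standing in for annealer reads of an Ising/QUBO problem.
samples = rng.choice([-1, 1], size=(100, 12))
samples[:, 0] = 1                          # variable 0 is persistent across reads
values, free = fix_persistent_variables(samples)
print(values, free.sum(), "variables remain for the reduced problem")
```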
Yeung, Dit-Yan; Chang, Hong; Dai, Guang
2008-11-01
In recent years, metric learning in the semisupervised setting has aroused a lot of research interest. One type of semisupervised metric learning utilizes supervisory information in the form of pairwise similarity or dissimilarity constraints. However, most methods proposed so far are either limited to linear metric learning or unable to scale well with the data set size. In this letter, we propose a nonlinear metric learning method based on the kernel approach. By applying low-rank approximation to the kernel matrix, our method can handle significantly larger data sets. Moreover, our low-rank approximation scheme can naturally lead to out-of-sample generalization. Experiments performed on both artificial and real-world data show very promising results.
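The low-rank device can be illustrated with a Nyström-style approximation, which both shrinks the kernel matrix and yields out-of-sample evaluation through the landmark points. This is a generic sketch of the technique, not the letter's exact construction.

```python
import numpy as np

rng = np.random.default_rng(9)

def rbf(A, B, gamma=0.5):
    """RBF kernel between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Nystrom low-rank approximation K ~= C W^+ C^T using m landmark points,
# the kind of approximation that lets kernel metric learning scale.
X = rng.normal(size=(1000, 5))
m = 50
idx = rng.choice(len(X), m, replace=False)
C = rbf(X, X[idx])                  # (n, m) cross-kernel
W_pinv = np.linalg.pinv(rbf(X[idx], X[idx]))   # (m, m) landmark kernel inverse
K_approx = C @ W_pinv @ C.T         # never forms the full exact kernel

# Out-of-sample generalization comes for free: only kernels against the
# landmarks are needed for a new point.
x_new = rng.normal(size=(1, 5))
k_new = rbf(x_new, X[idx]) @ W_pinv @ C.T
print(K_approx.shape, k_new.shape)
```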
Landsat phenological metrics and their relation to aboveground carbon in the Brazilian Savanna.
Schwieder, M; Leitão, P J; Pinto, J R R; Teixeira, A M C; Pedroni, F; Sanchez, M; Bustamante, M M; Hostert, P
2018-05-15
The quantification and spatially explicit mapping of carbon stocks in terrestrial ecosystems is important to better understand the global carbon cycle and to monitor and report change processes, especially in the context of international policy mechanisms such as REDD+ or the implementation of Nationally Determined Contributions (NDCs) and the UN Sustainable Development Goals (SDGs). Accurate carbon quantification is still lacking especially in heterogeneous ecosystems such as Savannas, where highly variable vegetation densities occur and strong seasonality hinders consistent data acquisition. To account for these challenges, we analyzed the potential of land surface phenological metrics derived from gap-filled 8-day Landsat time series for carbon mapping. We selected three areas located in different subregions in the central Brazil region, which is a prominent example of a Savanna with significant carbon stocks that has been undergoing extensive land cover conversions. Here phenological metrics from the season 2014/2015 were combined with aboveground carbon field samples of cerrado sensu stricto vegetation using Random Forest regression models to map the regional carbon distribution and to analyze the relation between phenological metrics and aboveground carbon. The gap-filling approach enabled accurate approximation of the original Landsat ETM+ and OLI EVI values and the subsequent derivation of annual phenological metrics. Random Forest model performances varied between the three study areas with RMSE values of 1.64 t/ha (mean relative RMSE 30%), 2.35 t/ha (46%) and 2.18 t/ha (45%). Comparable relationships between remote sensing based land surface phenological metrics and aboveground carbon were observed in all study areas. Aboveground carbon distributions could be mapped and revealed comprehensible spatial patterns. Phenological metrics were derived from 8-day Landsat time series with a spatial resolution that is sufficient to capture gradual changes in carbon stocks of heterogeneous Savanna ecosystems. These metrics revealed the relationship between aboveground carbon and the phenology of the observed vegetation. Our results suggest that metrics relating to the seasonal minimum and maximum values were the most influential variables and bear potential to improve spatially explicit mapping approaches in heterogeneous ecosystems, where both spatial and temporal resolutions are critical.
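The modeling step pairs plot-level phenological metrics with field-sampled carbon in a Random Forest regression. A sketch with synthetic predictors follows (seasonal minimum, maximum, amplitude, and season length; the paper's metric set is larger, and the values here are invented):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(10)

# Hypothetical plot-level predictors derived from an EVI time series.
n_plots = 120
X = rng.uniform([0.1, 0.4, 0.1, 100], [0.3, 0.8, 0.6, 250], size=(n_plots, 4))
carbon = 3 + 12 * X[:, 0] + 6 * X[:, 2] + rng.normal(0, 1.5, n_plots)  # t/ha

rf = RandomForestRegressor(n_estimators=500, random_state=0)
pred = cross_val_predict(rf, X, carbon, cv=5)
rmse = np.sqrt(np.mean((pred - carbon) ** 2))
print(f"RMSE = {rmse:.2f} t/ha ({100 * rmse / carbon.mean():.0f}% relative)")

rf.fit(X, carbon)
print("variable importances:", rf.feature_importances_)
```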
Hybrid performance measurement of a business process outsourcing - A Malaysian company perspective
NASA Astrophysics Data System (ADS)
Oluyinka, Oludapo Samson; Tamyez, Puteri Fadzline; Kie, Cheng Jack; Freida, Ayodele Ozavize
2017-05-01
It is well established that customers' perceived value of products and services is greatly influenced by their psychological and social advantages. To cope with rising operational costs and growing demands on response time, quality, and innovative capability, many companies have converted fixed operational costs into variable costs through outsourcing. The researchers therefore explored different underlying outsourcing theories and inferred that these theories are essential to performance improvement. In this study, the performance of a business process outsourcing company is evaluated using a combination of lean and agile methods. To test the hypotheses, we analyze the different sources of variability that a business process outsourcing company faces and how lean and agile methods have been used in other industries to address such variability, and we discuss the results using a predictive multiple regression analysis on data collected from companies in Malaysia. The findings reveal that, while each method has its own advantages, a business process outsourcing company could achieve up to an 87% increase in performance by developing a strategy that focuses on an appropriate mixture of lean and agile improvement methods. Secondly, this study shows that performance indicators can be better evaluated with the non-metric variables of the agile method. Thirdly, it shows that business process outsourcing companies perform better when they concentrate on strengthening the internal process integration of employees.
NASA Astrophysics Data System (ADS)
Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, René
2008-03-01
We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results.
Ten Commonly Asked Questions by Teachers About Metric Education
ERIC Educational Resources Information Center
Thompson, Thomas E.
1977-01-01
Lists and answers the ten questions most frequently asked by teachers in inservice programs on metric system education. Questions include ones about reasons for converting to metrics and successful methods, activities, and materials for teaching metrics. (CS)
Porta, Alberto; Bari, Vlasta; Marchi, Andrea; De Maria, Beatrice; Cysarz, Dirk; Van Leeuwen, Peter; Takahashi, Anielle C. M.; Catai, Aparecida M.; Gnecchi-Ruscone, Tomaso
2015-01-01
Two diverse complexity metrics quantifying time irreversibility and local prediction, in connection with a surrogate data approach, were utilized to detect nonlinear dynamics in short heart period (HP) variability series recorded in fetuses, as a function of the gestational period, and in healthy humans, as a function of the magnitude of the orthostatic challenge. The metrics indicated the presence of two distinct types of nonlinear HP dynamics characterized by diverse ranges of time scales. These findings stress the need to render more specific the analysis of nonlinear components of HP dynamics by accounting for different temporal scales. PMID:25806002
StreamThermal: A software package for calculating thermal metrics from stream temperature data
Tsang, Yin-Phan; Infante, Dana M.; Stewart, Jana S.; Wang, Lizhu; Tingly, Ralph; Thornbrugh, Darren; Cooper, Arthur; Wesley, Daniel
2016-01-01
Improved quality and availability of continuous stream temperature data allow natural resource managers, particularly in fisheries, to understand associations between different characteristics of stream thermal regimes and stream fishes. However, there is no convenient tool to efficiently characterize multiple metrics reflecting stream thermal regimes from the increasing amount of data. This article describes a software program packaged as a library in R to facilitate this process. With this freely available package, users can quickly summarize metrics that describe five categories of stream thermal regimes: magnitude, variability, frequency, timing, and rate of change. The installation and usage instructions for this package, the definitions of the calculated thermal metrics, and the output format are described, along with an application showing the utility of multiple metrics. We believe this package can be widely utilized by interested stakeholders and greatly assist further studies in fisheries.
An objective method for a video quality evaluation in a 3DTV service
NASA Astrophysics Data System (ADS)
Wilczewski, Grzegorz
2015-09-01
The following article describes a proposed objective method for 3DTV video quality evaluation, the Compressed Average Image Intensity (CAII) method. Identification of the 3DTV service's content chain nodes enables the design of a versatile, objective video quality metric based on an advanced approach to stereoscopic videostream analysis. The mechanisms of the designed metric, as well as an evaluation of its performance under simulated environmental conditions, are discussed herein. As a result, the proposed CAII metric might be effectively used in a variety of service quality assessment applications.
Chang, M-C Oliver; Shields, J Erin
2017-06-01
To reliably measure at the low particulate matter (PM) levels needed to meet California's Low Emission Vehicle (LEV III) 3- and 1-mg/mile PM standards, various approaches other than gravimetric measurement have been suggested for testing purposes. In this work, a feasibility study of solid particle number (SPN, d50 = 23 nm) and black carbon (BC) as alternatives to gravimetric PM mass was conducted, based on the relationship of these two metrics to gravimetric PM mass, as well as the variability of each of these metrics. More than 150 Federal Test Procedure (FTP-75) or Supplemental Federal Test Procedure (US06) tests were conducted on 46 light-duty vehicles, including port-fuel-injected and direct-injected gasoline vehicles, as well as several light-duty diesel vehicles equipped with diesel particle filters (LDD/DPF). For FTP tests, emission variability of gravimetric PM mass was found to be slightly less than that of either SPN or BC, whereas the opposite was observed for US06 tests. Emission variability of PM mass for LDD/DPF was higher than that of both SPN and BC, primarily because of higher PM mass measurement uncertainties (background and precision) near or below 0.1 mg/mile. While strong correlations were observed between both SPN and BC and PM mass, the slopes are dependent on engine technologies and driving cycles, and the proportionality between the metrics can vary over the course of the test. Replacement of the LEV III PM mass emission standard with another measurement metric may imperil the effectiveness of emission reduction, as a correlation-based relationship may evolve over future technologies for meeting stringent greenhouse standards. Solid particle number and black carbon were suggested in place of PM mass for the California LEV III 1-mg/mile FTP standard. Their equivalence, proportionality, and emission variability in comparison to PM mass, based on the large light-duty vehicle fleet examined, are dependent on engine technologies and driving cycles. Such empirically derived correlations exhibit the limitations of using these metrics for enforcement and certification standards as vehicle combustion and after-treatment technologies advance.
Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.
Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui
2018-03-01
Changing the metric on the data may change the data distribution, hence a good distance metric can promote the performance of learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by the multiple kernel representation. By this approach, we project the data into a high dimensional space, where the data can be well represented by linear ML. Then, we reformulate the linear ML by a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.
Fighter agility metrics, research, and test
NASA Technical Reports Server (NTRS)
Liefer, Randall K.; Valasek, John; Eggold, David P.
1990-01-01
Proposed new metrics to assess fighter aircraft agility are collected and analyzed. A framework for classification of these new agility metrics is developed and applied. A complete set of transient agility metrics is evaluated with a high-fidelity, nonlinear F-18 simulation provided by the NASA Dryden Flight Research Center. Test techniques and data reduction methods are proposed. A method of providing cuing information to the pilot during flight test is discussed. The sensitivity of longitudinal and lateral agility metrics to deviations from the pilot cues is studied in detail. The metrics are shown to be largely insensitive to reasonable deviations from the nominal test pilot commands. Instrumentation required to quantify agility via flight test is also considered. With one exception, each of the proposed new metrics may be measured with instrumentation currently available. Simulation documentation and user instructions are provided in an appendix.
Application of effective discharge analysis to environmental flow decision-making
McKay, S. Kyle; Freeman, Mary C.; Covich, A.P.
2016-01-01
Well-informed river management decisions rely on an explicit statement of objectives, repeatable analyses, and a transparent system for assessing trade-offs. These components may then be applied to compare alternative operational regimes for water resource infrastructure (e.g., diversions, locks, and dams). Intra- and inter-annual hydrologic variability further complicates these already complex environmental flow decisions. Effective discharge analysis (developed in studies of geomorphology) is a powerful tool for integrating temporal variability of flow magnitude and associated ecological consequences. Here, we adapt the effectiveness framework to include multiple elements of the natural flow regime (i.e., timing, duration, and rate-of-change) as well as two flow variables. We demonstrate this analytical approach using a case study of environmental flow management based on long-term (60 years) daily discharge records in the Middle Oconee River near Athens, GA, USA. Specifically, we apply an existing model for estimating young-of-year fish recruitment based on flow-dependent metrics to an effective discharge analysis that incorporates hydrologic variability and multiple focal taxa. We then compare three alternative methods of environmental flow provision. Percentage-based withdrawal schemes outcompete other environmental flow methods across all levels of water withdrawal and ecological outcomes.
Map visualization of groundwater withdrawals at the sub-basin scale
NASA Astrophysics Data System (ADS)
Goode, Daniel J.
2016-06-01
A simple method is proposed to visualize the magnitude of groundwater withdrawals from wells relative to user-defined water-resource metrics. The map is solely an illustration of the withdrawal magnitudes, spatially centered on wells—it is not capture zones or source areas contributing recharge to wells. Common practice is to scale the size (area) of withdrawal well symbols proportional to pumping rate. Symbols are drawn large enough to be visible, but not so large that they overlap excessively. In contrast to such graphics-based symbol sizes, the proposed method uses a depth-rate index (length per time) to visualize the well withdrawal rates by volumetrically consistent areas, called "footprints". The area of each individual well's footprint is the withdrawal rate divided by the depth-rate index. For example, the groundwater recharge rate could be used as a depth-rate index to show how large withdrawals are relative to that recharge. To account for the interference of nearby wells, composite footprints are computed by iterative nearest-neighbor distribution of excess withdrawals on a computational and display grid having uniform square cells. The map shows circular footprints at individual isolated wells and merged footprint areas where wells' individual footprints overlap. Examples are presented for depth-rate indexes corresponding to recharge, to spatially variable stream baseflow (normalized by basin area), and to the average rate of water-table decline (scaled by specific yield). These depth-rate indexes are water-resource metrics, and the footprints visualize the magnitude of withdrawals relative to these metrics.
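A sketch of the footprint construction as described: each cell supports the depth-rate index times the cell area per year, and a well's withdrawal fills the nearest unfilled cells until it is exhausted, so nearby wells merge into composite footprints. Grid size, rates, and well locations below are hypothetical, and the simple nearest-first loop stands in for the paper's iterative nearest-neighbor distribution.

```python
import numpy as np

# Each cell can "supply" depth_rate_index * cell_area per year.
ny, nx = 60, 60
cell_area = 1.0e6                           # m^2 (1 km x 1 km cells)
depth_rate_index = 0.2                      # m/yr, e.g. mean recharge
capacity = depth_rate_index * cell_area     # m^3/yr supported by one cell

wells = [((15, 15), 2.5e6), ((17, 16), 1.5e6), ((45, 40), 4.0e5)]  # ((row, col), m^3/yr)

remaining = np.full((ny, nx), capacity)     # unfilled capacity per cell
footprint = np.zeros((ny, nx), dtype=bool)
rows, cols = np.indices((ny, nx))

for (r, c), q in wells:
    dist = np.hypot(rows - r, cols - c).ravel()
    for idx in np.argsort(dist):            # fill nearest cells first
        if q <= 0:
            break
        take = min(q, remaining.flat[idx])
        if take > 0:
            remaining.flat[idx] -= take     # shared grid: overlapping wells merge
            footprint.flat[idx] = True
            q -= take

print("composite footprint area (km^2):", footprint.sum() * cell_area / 1e6)
```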
Linking multimetric and multivariate approaches to assess the ecological condition of streams.
Collier, Kevin J
2009-10-01
Few attempts have been made to combine multimetric and multivariate analyses despite recognition that an integrated method could yield powerful bioassessment tools. An approach is described that integrates eight macroinvertebrate community metrics into a Principal Components Analysis to develop a Multivariate Condition Score (MCS) from a calibration dataset of 511 samples. The MCS is compared to an Index of Biotic Integrity (IBI) derived from the same metrics based on the ratio to the reference-site mean. The two approaches were highly correlated, although the MCS appeared to offer greater potential for discriminating a wider range of impaired conditions. Both the MCS and IBI displayed low temporal variability within reference sites and were able to distinguish reference conditions from low levels of catchment modification and local habitat degradation, although neither discriminated among three levels of low impact. Pseudosamples developed to test the response of the metric-aggregation approaches to organic enrichment, urban, mining, pastoral, and logging stressor scenarios ranked the pressures in the same order, but the MCS provided a lower score for the urban scenario and a higher score for the pastoral scenario. The MCS was also calculated for an independent test dataset of urban and reference sites and yielded similar results to the IBI. Although both methods performed comparably, the MCS approach may have some advantages because it removes the subjectivity of assigning thresholds for scoring biological condition and appears to discriminate a wider range of degraded conditions.
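A minimal sketch of the PCA-based scoring step, assuming the eight metrics arrive as a samples-by-metrics array; the 0-100 rescaling and the sign convention (higher = better condition) are assumptions, not the published calibration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def multivariate_condition_score(metrics):
    """Collapse correlated community metrics onto the first principal
    component and rescale to a 0-100 condition score."""
    z = StandardScaler().fit_transform(metrics)
    pc1 = PCA(n_components=1).fit_transform(z).ravel()
    # If needed, flip the axis so larger scores mean better condition,
    # e.g. by correlating pc1 with a metric of known orientation.
    return 100 * (pc1 - pc1.min()) / (pc1.max() - pc1.min())
```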
Development and application of a novel metric to assess effectiveness of biomedical data
Bloom, Gregory C; Eschrich, Steven; Hang, Gang; Schabath, Matthew B; Bhansali, Neera; Hoerter, Andrew M; Morgan, Scott; Fenstermacher, David A
2013-01-01
Objective: Design a metric to assess the comparative effectiveness of biomedical data elements within a study that incorporates their statistical relatedness to a given outcome variable as well as a measurement of the quality of their underlying data. Materials and methods: The cohort consisted of 874 patients with adenocarcinoma of the lung, each with 47 clinical data elements. The p value for each element was calculated using the Cox proportional hazards univariable regression model with overall survival as the endpoint. An attribute score (A-score) was calculated by quantifying an element's four quality attributes: Completeness, Comprehensiveness, Consistency, and Overall-cost. The effectiveness score (E-score) was obtained by calculating the conditional probabilities of the p value and A-score within the given data set; their product equals the E-score. Results: The E-score metric provided information about the utility of an element beyond an outcome-related p value ranking. E-scores for the elements age-at-diagnosis, gender, and tobacco-use showed utility above what their respective p values alone would indicate, owing to their relative ease of acquisition, that is, higher A-scores. Conversely, the elements surgery-site, histologic-type, and pathological-TNM-stage were down-ranked relative to their p values because of lower A-scores caused by significantly higher acquisition costs. Conclusions: A novel metric termed the E-score was developed that incorporates standard statistics with data quality metrics and was tested on elements from a large lung cohort. Results show that an element's underlying data quality is an important consideration, in addition to p value correlation with outcome, when determining the element's clinical or research utility in a study. PMID:23975264
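One hedged reading of the E-score combination, using empirical (rank-based) probabilities in place of the paper's fitted conditional distributions; the exact probability model is not specified here, so treat this purely as a sketch:

```python
import numpy as np

def e_scores(p_values, a_scores):
    """Combine, per element, the empirical probability of a p value at
    least this small with that of an A-score (quality) at least this
    high; their product is a stand-in effectiveness score."""
    p = np.asarray(p_values, float)
    a = np.asarray(a_scores, float)
    pr_p = (p[None, :] >= p[:, None]).mean(axis=1)  # smaller p -> higher
    pr_a = (a[None, :] <= a[:, None]).mean(axis=1)  # higher A -> higher
    return pr_p * pr_a
```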
Multivariate decoding of brain images using ordinal regression.
Doyle, O M; Ashburner, J; Zelaya, F O; Williams, S C R; Mehta, M A; Marquand, A F
2013-11-01
Neuroimaging data are increasingly being used to predict potential outcomes or groupings, such as clinical severity, drug dose response, and transitional illness states. In these examples, the variable (target) we want to predict is ordinal in nature. Conventional classification schemes assume that the targets are nominal and hence ignore their ranked nature, whereas parametric and/or non-parametric regression models enforce a metric notion of distance between classes. Here, we propose a novel, alternative multivariate approach that overcomes these limitations - whole brain probabilistic ordinal regression using a Gaussian process framework. We applied this technique to two data sets of pharmacological neuroimaging data from healthy volunteers. The first study was designed to investigate the effect of ketamine on brain activity and its subsequent modulation with two compounds - lamotrigine and risperidone. The second study investigates the effect of scopolamine on cerebral blood flow and its modulation using donepezil. We compared ordinal regression to multi-class classification schemes and metric regression. Considering the modulation of ketamine with lamotrigine, we found that ordinal regression significantly outperformed multi-class classification and metric regression in terms of accuracy and mean absolute error. However, for risperidone ordinal regression significantly outperformed metric regression but performed similarly to multi-class classification both in terms of accuracy and mean absolute error. For the scopolamine data set, ordinal regression was found to outperform both multi-class and metric regression techniques considering the regional cerebral blood flow in the anterior cingulate cortex. Ordinal regression was thus the only method that performed well in all cases. Our results indicate the potential of an ordinal regression approach for neuroimaging data while providing a fully probabilistic framework with elegant approaches for model selection. Copyright © 2013. Published by Elsevier Inc.
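The paper's method is Gaussian-process ordinal regression; as a lightweight stand-in, the sketch below fits an ordered probit model with statsmodels to make the nominal-vs-ordinal point concrete. The data are synthetic placeholders:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 5))                                # stand-in features
y = pd.Categorical(np.repeat([0, 1, 2], 30), ordered=True)  # ordered targets

# An ordered probit respects the ranking of the targets, unlike a
# nominal multi-class classifier, without imposing a metric distance.
res = OrderedModel(y, X, distr="probit").fit(method="bfgs", disp=False)
probs = res.model.predict(res.params, exog=X)   # per-class probabilities
pred = probs.argmax(axis=1)
mae = np.abs(pred - np.asarray(y.codes)).mean() # mean absolute error
```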
Sáez, Carlos; Zurriaga, Oscar; Pérez-Panadés, Jordi; Melchor, Inma; Robles, Montserrat; García-Gómez, Juan M
2016-11-01
To assess the variability in data distributions among data sources and over time through a case study of a large multisite repository, as a systematic approach to data quality (DQ). Novel probabilistic DQ control methods based on information theory and geometry are applied to the Public Health Mortality Registry of the Region of Valencia, Spain, with 512,143 entries from 2000 to 2012, disaggregated into 24 health departments. The methods provide DQ metrics and exploratory visualizations for (1) assessing the variability among multiple sources and (2) monitoring and exploring changes over time. The methods are suited to big data and to multitype, multivariate, and multimodal data. The repository was partitioned into 2 probabilistically separated temporal subgroups following a change in the Spanish National Death Certificate in 2009. Isolated temporal anomalies were detected, caused by short-term increases in missing data, along with outlying and clustered health departments reflecting differences in populations or in practices. Changes in protocols, differences in populations, biased practices, and other systematic DQ problems affected data variability. Even if semantic and integration aspects are addressed in data-sharing infrastructures, probabilistic variability may still be present. Solutions include fixing or excluding data and analyzing different sites or time periods separately. A systematic approach to assessing temporal and multisite variability is proposed. Multisite and temporal variability in data distributions affects DQ, hindering data reuse, and an assessment of such variability should be part of systematic DQ procedures. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
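In the same spirit (though not the authors' exact information-geometry construction), temporal variability between data batches can be screened with pairwise Jensen-Shannon distances; the function and binning choices here are illustrative:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def temporal_variability(batches, bins):
    """Pairwise Jensen-Shannon distances between temporal batches of a
    variable; large distances flag anomalies or abrupt changes."""
    hists = [np.histogram(b, bins=bins)[0] for b in batches]
    n = len(hists)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = jensenshannon(hists[i], hists[j])
    return d
```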
The Application of Time-Frequency Methods to HUMS
NASA Technical Reports Server (NTRS)
Pryor, Anna H.; Mosher, Marianne; Lewicki, David G.; Norvig, Peter (Technical Monitor)
2001-01-01
This paper reports a study of four time-frequency transforms applied to vibration signals and presents a new metric for comparing them for fault detection. The four methods described and compared are the Short Time Frequency Transform (STFT), the Wigner-Ville Distribution with the Choi-Williams kernel (WV-CW), the Continuous Wavelet Transform (CWT), and the Discrete Wavelet Transform (DWT). Vibration data from bevel-gear tooth fatigue cracks, under a variety of operating load levels, are analyzed using these methods. The new metric for automatic fault detection is developed and can be produced from any systematic numerical representation of the vibration signals. This new metric reveals indications of gear damage with all of the methods on this data set. Analysis with the CWT detects mechanical problems with the test rig not found with the other transforms. The WV-CW and CWT use considerably more resources than the STFT and the DWT. More testing of the new metric is needed to determine its value for automatic fault detection and to develop methods of setting the threshold for the metric.
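As a concrete starting point, the STFT half of such a pipeline fits in a few lines of Python; the signal, sample rate, and the deviation-from-baseline fault indicator below are placeholders, not the paper's metric:

```python
import numpy as np
from scipy import signal

fs = 50_000                               # assumed sample rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
vib = np.sin(2 * np.pi * 3_000 * t) + 0.1 * np.random.randn(t.size)

f, tt, Z = signal.stft(vib, fs=fs, nperseg=1024)
tf_map = np.abs(Z) ** 2                   # time-frequency energy map

# Placeholder fault indicator: total deviation of the map from its
# time-averaged baseline (the paper defines its metric differently).
baseline = tf_map.mean(axis=1, keepdims=True)
indicator = np.abs(tf_map - baseline).sum()
```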
A Practical Method for Collecting Social Media Campaign Metrics
ERIC Educational Resources Information Center
Gharis, Laurie W.; Hightower, Mary F.
2017-01-01
Today's Extension professionals are tasked with more work and fewer resources. Integrating social media campaigns into outreach efforts can be an efficient way to meet work demands. If resources go toward social media, a practical method for collecting metrics is needed. Collecting metrics adds one more task to the workloads of Extension…
Metric Education in Mathematics Methods Classes.
ERIC Educational Resources Information Center
Trent, John H.
A pre-test on knowledge of the metric system was administered to elementary mathematics methods classes at the University of Nevada at the beginning of the 1975 Spring Semester. A one-hour lesson was prepared and taught regarding metric length, weight, volume, and temperature. At the end of the semester the original test was given as the…
NASA Technical Reports Server (NTRS)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas
2009-01-01
This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The BLSA method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the BLSA method, the adaptive gain is adjusted during adaptation to meet certain phase-margin requirements. The metrics-driven adaptive control approach is evaluated for a linear model of a damaged twin-engine generic transport aircraft. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics and time delay.
Bigus, Paulina; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek; Tobiszewski, Marek
2016-05-01
This study presents an application of the Hasse diagram technique (HDT) as an assessment tool for selecting the most appropriate analytical procedures according to their greenness or the best analytical performance. The dataset consists of analytical procedures for benzo[a]pyrene determination in sediment samples, described by 11 variables concerning their greenness and analytical performance. Two analyses with the HDT were performed: the first with metrological variables and the second with "green" variables as input data. The two HDT analyses ranked different analytical procedures as the most valuable, suggesting that green analytical chemistry is not in accordance with metrology when determining benzo[a]pyrene in sediment samples. The HDT can serve as a good decision-support tool for choosing the proper analytical procedure with respect to green analytical chemistry principles and analytical performance merits.
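The core of the HDT is a pairwise dominance (cover) relation; a minimal sketch, assuming each criterion has already been oriented so that larger is better:

```python
import numpy as np

def hasse_dominance(X):
    """Pairwise dominance: procedure i dominates j if it is at least as
    good on every (pre-oriented) criterion and strictly better on one."""
    X = np.asarray(X, float)       # rows: procedures, cols: criteria
    n = X.shape[0]
    dom = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and (X[i] >= X[j]).all() and (X[i] > X[j]).any():
                dom[i, j] = True
    return dom                     # incomparable pairs: neither direction
```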
McCabe, Collin M; Nunn, Charles L
2018-01-01
The transmission of infectious disease through a population is often modeled assuming that interactions occur randomly in groups, with all individuals potentially interacting with all other individuals at an equal rate. However, it is well known that pairs of individuals vary in their degree of contact. Here, we propose a measure to account for such heterogeneity: effective network size (ENS), which refers to the size of a maximally complete network (i.e., unstructured, where all individuals interact with all others equally) that corresponds to the outbreak characteristics of a given heterogeneous, structured network. We simulated susceptible-infected (SI) and susceptible-infected-recovered (SIR) models on maximally complete networks to produce idealized outbreak duration distributions for a disease on a network of a given size. We also simulated the transmission of these same diseases on random structured networks and then used the resulting outbreak duration distributions to predict the ENS for the group or population. We provide the methods to reproduce these analyses in a public R package, "enss." Outbreak durations of simulations on randomly structured networks were more variable than those on complete networks, but tended to have similar mean durations of disease spread. We then applied our novel metric to empirical primate networks taken from the literature and compared the information represented by our ENSs to that by other established social network metrics. In AICc model comparison frameworks, group size and mean distance proved to be the metrics most consistently associated with ENS for SI simulations, while group size, centralization, and modularity were most consistently associated with ENS for SIR simulations. In all cases, ENS was shown to be associated with at least two other independent metrics, supporting its use as a novel metric. Overall, our study provides a proof of concept for simulation-based approaches toward constructing metrics of ENS, while also revealing the conditions under which this approach is most promising.
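A toy version of the ENS construction (not the published "enss" package itself) can be written with networkx: simulate SI outbreak durations on the structured network, then search complete-graph sizes for the best match. The transmission probability, graph choices, and matching-by-mean are assumptions:

```python
import numpy as np
import networkx as nx

def si_outbreak_duration(G, beta=0.1, seed=0):
    """Discrete-time SI outbreak: steps until all nodes are infected
    (capped to guard against disconnected graphs)."""
    rng = np.random.default_rng(seed)
    nodes = list(G)
    infected = {nodes[rng.integers(len(nodes))]}
    t = 0
    while len(infected) < G.number_of_nodes() and t < 10_000:
        infected |= {v for u in infected for v in G[u]
                     if v not in infected and rng.random() < beta}
        t += 1
    return t

# ENS: the complete-graph size whose mean outbreak duration best matches
# the durations on the structured network (matching by the mean is a
# simplification of the paper's distribution-based comparison).
obs = np.mean([si_outbreak_duration(nx.watts_strogatz_graph(40, 4, 0.1), seed=s)
               for s in range(30)])
ens = min(range(5, 60), key=lambda n: abs(np.mean(
    [si_outbreak_duration(nx.complete_graph(n), seed=s)
     for s in range(30)]) - obs))
```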
Sewer, Alain; Gubian, Sylvain; Kogel, Ulrike; Veljkovic, Emilija; Han, Wanjiang; Hengstermann, Arnd; Peitsch, Manuel C; Hoeng, Julia
2014-05-17
High-quality expression data are required to investigate the biological effects of microRNAs (miRNAs). The goal of this study was, first, to assess the quality of miRNA expression data based on microarray technologies and, second, to consolidate it by applying a novel normalization method. Indeed, because of significant differences in platform designs, miRNA raw data cannot be normalized blindly with standard methods developed for gene expression. This fundamental observation motivated the development of a novel multi-array normalization method based on controllable assumptions, which uses the spike-in control probes to adjust the measured intensities across arrays. Raw expression data were obtained with the Exiqon dual-channel miRCURY LNA™ platform in the "common reference design" and processed as "pseudo-single-channel". They were used to apply several quality metrics based on the coefficient of variation and to test the novel spike-in controls based normalization method. Most of the considerations presented here could be applied to raw data obtained with other platforms. To assess the normalization method, it was compared with 13 other available approaches from both data quality and biological outcome perspectives. The results showed that the novel multi-array normalization method reduced the data variability in the most consistent way. Further, the reliability of the obtained differential expression values was confirmed based on a quantitative reverse transcription-polymerase chain reaction experiment performed for a subset of miRNAs. The results reported here support the applicability of the novel normalization method, in particular to datasets that display global decreases in miRNA expression similarly to the cigarette smoke-exposed mouse lung dataset considered in this study. Quality metrics to assess between-array variability were used to confirm that the novel spike-in controls based normalization method provided high-quality miRNA expression data suitable for reliable downstream analysis. The multi-array miRNA raw data normalization method was implemented in an R software package called ExiMiR and deposited in the Bioconductor repository.
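The central normalization idea - use the spike-in control probes to put arrays on a common intensity scale - reduces to a few lines; this is a simplified sketch, not the ExiMiR algorithm:

```python
import numpy as np

def spikein_normalize(raw, spike_rows):
    """Rescale each array (column) so its spike-in control probes share
    a common median log2 intensity across arrays."""
    log = np.log2(raw)
    spike_med = np.median(log[spike_rows], axis=0)  # per-array median
    return 2.0 ** (log - spike_med + spike_med.mean())
```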
New Methods for Personal Exposure Monitoring for Airborne Particles
Koehler, Kirsten A.; Peters, Thomas
2016-01-01
Airborne particles have been associated with a range of adverse cardiopulmonary outcomes, which has driven their monitoring at stationary, central sites throughout the world. Individual exposures, however, can differ substantially from concentrations measured at central sites because of spatial variability across a region and sources unique to the individual, such as cooking or cleaning in homes, traffic emissions during commutes, and widely varying sources encountered at work. Personal monitoring with small, battery-powered instruments enables the measurement of an individual's exposure as they go about their daily activities. Personal monitoring can substantially reduce exposure misclassification and improve the power to detect relationships between particulate pollution and adverse health outcomes. By partitioning exposures to known locations and sources, it may be possible to account for the variable toxicity of different sources. This review outlines recent advances in the field of personal exposure assessment for particulate pollution. Advances in battery technology have improved the feasibility of 24-hour monitoring, providing the ability to more completely attribute exposures to microenvironments (e.g., work, home, commute). New metrics to evaluate the relationship between particulate matter and health are also being considered, including particle number concentration, particle composition measures, and particle oxidative load. Such metrics provide opportunities to develop more precise associations between airborne particles and health and may enable more effective regulations. PMID:26385477
Bu, Hongmei; Zhang, Yuan; Meng, Wei; Song, Xianfang
2016-05-15
This study investigated the effects of land-use patterns on nitrogen pollution in the Haicheng River basin in Northeast China during 2010 by conducting statistical and spatial analyses and by analyzing the isotopic composition of nitrate. Correlation and stepwise regressions indicated that land-use types and landscape metrics correlated well with most river nitrogen variables and significantly predicted them during different sampling seasons. Built-up land use and shape metrics dominated in predicting nitrogen variables across seasons. According to the isotopic composition of river nitrate in different zones, the river's nitrogen principally originated from synthetic fertilizer, domestic sewage/manure, soil organic matter, and atmospheric deposition. Isotope mixing models indicated that source contributions to river nitrogen varied significantly from forested headwaters to densely populated towns of the river basin. Domestic sewage/manure was a major contributor to river nitrogen, with proportions of 76.4 ± 6.0% and 62.8 ± 2.1% in the residence and farmland-residence zones, respectively. This research suggests that regulating built-up land uses and reducing discharges of domestic sewage and industrial wastewater would be effective methods for river nitrogen control. Copyright © 2016 Elsevier B.V. All rights reserved.
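For reference, the simplest two-end-member form of such isotope mixing (the study used multi-source models, so this is only the base case) apportions the mixture between sources 1 and 2 as:

```latex
f_1 = \frac{\delta^{15}\mathrm{N}_{\mathrm{mix}} - \delta^{15}\mathrm{N}_{2}}
           {\delta^{15}\mathrm{N}_{1} - \delta^{15}\mathrm{N}_{2}},
\qquad f_2 = 1 - f_1
```

Here f_i is the fractional contribution of source i to the riverine nitrate.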
Al-Shargabi, T; Govindan, R B; Dave, R; Metzler, M; Wang, Y; du Plessis, A; Massaro, A N
2017-06-01
To determine whether systemic inflammation-modulating cytokine expression is related to heart rate variability (HRV) in newborns with hypoxic-ischemic encephalopathy (HIE). Data from 30 newborns with HIE were analyzed. Cytokine levels (IL-2, IL-4, IL-6, IL-8, IL-10, IL-13, IL-1β, TNF-α, IFN-λ) were measured either at 24 h of cooling (n=5), at 72 h of cooling (n=4), or at both timepoints (n=21). The following HRV metrics were quantified in the time domain: alpha_S, alpha_L, root mean square at short time scales (RMS_S), and root mean square at long time scales (RMS_L); low-frequency power (LF) and high-frequency power (HF) were quantified in the frequency domain. The relationships between HRV metrics and cytokines were evaluated using mixed models. IL-6, IL-8, IL-10, and IL-13 levels were inversely related to selected HRV metrics. Inflammation-modulating cytokines may be important mediators of the autonomic dysfunction observed in newborns with HIE.
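A sketch of the frequency-domain half of such an HRV computation (Welch spectrum of the evenly resampled beat-to-beat series); the band edges are the common adult conventions, and the DFA-style alpha_S/alpha_L scaling exponents are not implemented here:

```python
import numpy as np
from scipy import signal

def hrv_metrics(rr_ms, fs=4.0):
    """Time-domain RMS of the beat-to-beat series, plus LF/HF band power
    from a Welch spectrum of the evenly resampled RR series."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                     # beat times (s)
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = np.interp(grid, t, rr)
    rms = np.sqrt(np.mean((rr - rr.mean()) ** 2))
    f, pxx = signal.welch(rr_even - rr_even.mean(), fs=fs)
    df = f[1] - f[0]
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df  # low-frequency power
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df  # high-frequency power
    return rms, lf, hf
```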
Munro, Sarah A; Lund, Steven P; Pine, P Scott; Binder, Hans; Clevert, Djork-Arné; Conesa, Ana; Dopazo, Joaquin; Fasold, Mario; Hochreiter, Sepp; Hong, Huixiao; Jafari, Nadereh; Kreil, David P; Łabaj, Paweł P; Li, Sheng; Liao, Yang; Lin, Simon M; Meehan, Joseph; Mason, Christopher E; Santoyo-Lopez, Javier; Setterquist, Robert A; Shi, Leming; Shi, Wei; Smyth, Gordon K; Stralis-Pavese, Nancy; Su, Zhenqiang; Tong, Weida; Wang, Charles; Wang, Jian; Xu, Joshua; Ye, Zhan; Yang, Yong; Yu, Ying; Salit, Marc
2014-09-25
There is a critical need for standard approaches to assess, report and compare the technical performance of genome-scale differential gene expression experiments. Here we assess technical performance with a proposed standard 'dashboard' of metrics derived from analysis of external spike-in RNA control ratio mixtures. These control ratio mixtures with defined abundance ratios enable assessment of diagnostic performance of differentially expressed transcript lists, limit of detection of ratio (LODR) estimates and expression ratio variability and measurement bias. The performance metrics suite is applicable to analysis of a typical experiment, and here we also apply these metrics to evaluate technical performance among laboratories. An interlaboratory study using identical samples shared among 12 laboratories with three different measurement processes demonstrates generally consistent diagnostic power across 11 laboratories. Ratio measurement variability and bias are also comparable among laboratories for the same measurement process. We observe different biases for measurement processes using different mRNA-enrichment protocols.
Does external walking environment affect gait patterns?
Patterson, Matthew R; Whelan, Darragh; Reginatto, Brenda; Caprani, Niamh; Walsh, Lorcan; Smeaton, Alan F; Inomata, Akihiro; Caulfield, Brian
2014-01-01
The objective of this work is to develop an understanding of the relationship between mobility metrics obtained outside of the clinic or laboratory and the context of the external environment. Ten subjects walked with an inertial sensor on each shank and a wearable camera around their neck. They were taken on a thirty-minute walk during which they mobilized over the following conditions: normal path, busy hallway, rough ground, blindfolded, and on a hill. Stride time, stride time variability, stance time, and peak shank rotation rate during swing were calculated using previously published algorithms. Stride time differed significantly between several of the conditions. Technological advances mean that gait variables can now be captured as patients go about their daily lives. The results of this study show that the external environment has a significant impact on the quality of gait metrics. Thus, the context of the external walking environment is an important consideration when analyzing ambulatory gait metrics from the unsupervised home and community setting.
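Stride timing from a shank-mounted gyroscope is typically extracted from mid-swing angular-rate peaks; a minimal sketch with assumed thresholds (the published algorithms differ in detail):

```python
import numpy as np
from scipy.signal import find_peaks

def stride_metrics(gyro_z, fs):
    """Stride time and its variability from shank angular rate:
    successive mid-swing peaks are one stride apart."""
    peaks, _ = find_peaks(gyro_z, height=2.0, distance=int(0.6 * fs))
    st = np.diff(peaks) / fs                      # stride times (s)
    return st.mean(), st.std(ddof=1) / st.mean()  # mean, CV
```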
Dooley, Christopher J; Tenore, Francesco V; Gayzik, F Scott; Merkle, Andrew C
2018-04-27
Biological tissue testing is inherently subject to wide specimen-to-specimen variability. A primary resource for encapsulating this range of variability is the biofidelity response corridor (BRC). In the field of injury biomechanics, BRCs are often used for the development and validation of both physical models, such as anthropomorphic test devices, and computational models. For the purpose of generating corridors, post-mortem human surrogates were tested across a range of loading conditions relevant to under-body blast events. To sufficiently cover the wide range of input conditions, a relatively small number of tests were performed across a large spread of conditions. The high volume of required testing called for leveraging the capabilities of multiple impact-test facilities, all with slight variations in test devices. A method for assessing the similitude of responses between test devices was created as a criterion for including a response in the resulting BRC. The goal of this method was to supply a statistically sound, objective way to assess the similitude of an individual response against a set of responses, ensuring that the BRC created from the set was affected primarily by biological variability, not by anomalies or differences stemming from test devices. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guba, O.; Taylor, M. A.; Ullrich, P. A.
2014-11-27
We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid-scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications, in which long-term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open-source alternative that produces lower-valence nodes.
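Schematically (a hedged sketch, not the exact CAM-SE discretization), the change is from a scalar coefficient to a tensor one tied to the element metric tensor g, with the tensor reducing to the scalar case on uniform grids:

```latex
-\nu \,\nabla^4 u
\;\longrightarrow\;
-\nabla^2\!\left(\nabla\cdot\big(\mathbf{V}\,\nabla u\big)\right),
\qquad
\mathbf{V} = \mathbf{V}\big(\lambda_1(g),\,\lambda_2(g)\big)
\;\to\; \nu\,\mathbf{I} \ \ \text{where resolution is uniform}
```

Here λ1, λ2 are the eigenvalues of the local element metric tensor, as described in the abstract.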
NASA Astrophysics Data System (ADS)
Santa Vélez, Camilo; Enea Romano, Antonio
2018-05-01
Static coordinates can be convenient for solving the vacuum Einstein equations in the presence of spherical symmetry, but for cosmological applications comoving coordinates are more suitable for describing an expanding Universe, especially in the framework of cosmological perturbation theory (CPT). Using CPT, we develop a method to transform static spherically symmetric (SSS) modifications of the de Sitter solution from static coordinates to the Newton gauge. We test the method with the Schwarzschild-de Sitter (SDS) metric and then derive general expressions for the Bardeen potentials for a class of SSS metrics obtained by adding to the de Sitter metric a term linear in the mass and proportional to a general function of the radius. Using the gauge invariance of the Bardeen potentials, we then obtain a gauge-invariant definition of the turnaround radius. We apply the method to an SSS solution of the Brans-Dicke theory, confirming the results obtained independently by solving the perturbation equations in the Newton gauge. The Bardeen potentials are then derived for new SSS metrics involving logarithmic, power-law, and exponential modifications of the de Sitter metric. We also apply the method to SSS metrics that give flat rotation curves, computing the radial energy density profile in comoving coordinates in the presence of a cosmological constant.
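For concreteness, the SDS test case in static coordinates is the standard metric:

```latex
ds^2 = -f(r)\,dt^2 + \frac{dr^2}{f(r)} + r^2\, d\Omega^2,
\qquad
f(r) = 1 - \frac{2GM}{r} - \frac{\Lambda r^2}{3}
```

The modifications studied in the paper replace the mass term by a general function of the radius, as described above.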
Vogtmann, Emily; Hua, Xing; Zhou, Liang; Wan, Yunhu; Suman, Shalabh; Zhu, Bin; Dagnall, Casey L; Hutchinson, Amy; Jones, Kristine; Hicks, Belynda D; Sinha, Rashmi; Shi, Jianxin; Abnet, Christian C
2018-05-01
Background: Few studies have prospectively evaluated the association between oral microbiota and health outcomes. Precise estimates of intrasubject microbial metric stability will allow better study planning. Therefore, we conducted a study to evaluate the temporal variability of oral microbiota. Methods: Forty individuals provided six oral samples using the OMNIgene ORAL kit and Scope mouthwash oral rinses approximately every two months over 10 months. DNA was extracted using the QIAsymphony, and the V4 region of the 16S rRNA gene was amplified and sequenced using the MiSeq. To estimate temporal variation, we calculated intraclass correlation coefficients (ICCs) for a variety of metrics and examined stability after clustering samples into distinct community types using Dirichlet multinomial models (DMMs). Results: The ICCs for the alpha diversity measures were high, including for the number of observed bacterial species [0.74; 95% confidence interval (CI): 0.65-0.82 and 0.79; 95% CI: 0.75-0.94 from OMNIgene ORAL and Scope mouthwash, respectively]. The ICCs for the relative abundance of the top four phyla and for beta diversity matrices were lower. Three clusters provided the best model fit for the DMM from the OMNIgene ORAL samples, and the probability of remaining in a specific cluster was high (59.5%-80.7%). Conclusions: The oral microbiota appears to be stable over time for multiple metrics, but some measures, particularly relative abundance, were less stable. Impact: We used this information to calculate stability-adjusted power calculations that will inform future field study protocols and experimental analytic designs. Cancer Epidemiol Biomarkers Prev; 27(5); 594-600. ©2018 AACR.
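The workhorse statistic here, the one-way ICC, is short to compute; a sketch assuming a complete subjects-by-visits matrix (the study's exact ICC variant may differ):

```python
import numpy as np

def icc_oneway(X):
    """ICC(1,1) from a subjects x repeats matrix via one-way ANOVA:
    between-subject variance relative to total variance."""
    X = np.asarray(X, float)
    n, k = X.shape
    grand = X.mean()
    msb = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((X - X.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```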
Evaluation techniques and metrics for assessment of pan+MSI fusion (pansharpening)
NASA Astrophysics Data System (ADS)
Mercovich, Ryan A.
2015-05-01
Fusion of broadband panchromatic data with narrow-band multispectral data - pansharpening - is a common and often-studied problem in remote sensing. Many methods exist to produce data fusion results with the best possible spatial and spectral characteristics, and a number have been commercially implemented. This study examines the output products of 4 commercial implementations with regard to their relative strengths and weaknesses for a set of defined image characteristics and analyst use-cases. The image characteristics used are spatial detail, spatial quality, spectral integrity, and composite color quality (hue and saturation); analyst use-cases included a variety of object detection and identification tasks. The imagery comes courtesy of the RIT SHARE 2012 collect. Two approaches are used to evaluate the pansharpening methods: analyst evaluation (a qualitative measure) and image quality metrics (quantitative measures). Visual analyst evaluation results are compared with metric results to determine which metrics best measure the defined image characteristics and product use-cases, and to support future rigorous characterization of the metrics' correlation with the analyst results. Because pansharpening represents a trade between adding spatial information from the panchromatic image and retaining spectral information from the MSI channels, the metrics examined are grouped into spatial improvement metrics and spectral preservation metrics. A single metric to quantify the quality of a pansharpening method would necessarily be a combination of weighted spatial and spectral metrics, based on the importance of various spatial and spectral characteristics for the primary task of interest. Appropriate metrics and weights for such a combined metric are proposed here, based on the conducted analyst evaluation. Additionally, during this work a metric was developed specifically for assessing spatial structure improvement relative to a reference image, independent of scene content. Using analysis of Fourier-transform images, a measure of high-frequency content is computed in small sub-segments of the image; the average increase in high-frequency content across the image is used as the metric, where averaging across sub-segments combats the scene-dependent nature of typical image sharpness techniques. This metric had an improved range of scores, better representing differences in the test set than other common spatial structure metrics.
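A sketch of that spatial-structure metric under stated assumptions (square tiles, a fixed radial cutoff in normalized frequency); the tile size and cutoff are placeholders:

```python
import numpy as np

def high_freq_gain(img, ref, tile=64, cutoff=0.25):
    """Average increase in high-frequency Fourier energy of `img` over
    `ref`, computed tile-by-tile to suppress scene dependence."""
    def hf_fraction(patch):
        F = np.fft.fftshift(np.fft.fft2(patch))
        ny, nx = patch.shape
        y, x = np.ogrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
        r = np.sqrt((y / (ny / 2)) ** 2 + (x / (nx / 2)) ** 2)
        return np.abs(F)[r > cutoff].sum() / np.abs(F).sum()
    gains = []
    for i in range(0, img.shape[0] - tile + 1, tile):
        for j in range(0, img.shape[1] - tile + 1, tile):
            gains.append(hf_fraction(img[i:i + tile, j:j + tile]) -
                         hf_fraction(ref[i:i + tile, j:j + tile]))
    return float(np.mean(gains))
```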
Gulliver, John; Morley, David; Dunster, Chrissi; McCrea, Adrienne; van Nunen, Erik; Tsai, Ming-Yi; Probst-Hensch, Nicoltae; Eeftens, Marloes; Imboden, Medea; Ducret-Stich, Regina; Naccarati, Alessio; Galassi, Claudia; Ranzi, Andrea; Nieuwenhuijsen, Mark; Curto, Ariadna; Donaire-Gonzalez, David; Cirach, Marta; Vermeulen, Roel; Vineis, Paolo; Hoek, Gerard; Kelly, Frank J
2018-01-01
Oxidative potential (OP) of particulate matter (PM) is proposed as a biologically relevant exposure metric for studies of air pollution and health. We aimed to evaluate the spatial variability of the OP of measured PM2.5 using ascorbate (AA) and (reduced) glutathione (GSH), and to develop land use regression (LUR) models to explain this spatial variability. We estimated annual average values (per m^3) of OP_AA and OP_GSH for five areas (Basel, CH; Catalonia, ES; London-Oxford, UK (no OP_GSH); the Netherlands; and Turin, IT) using PM2.5 filters. OP_AA and OP_GSH LUR models were developed using all monitoring sites, separately for each area and for all areas combined. The same variables were then used in repeated sub-sampling of monitoring sites to test the sensitivity of variable selection; new variables were offered where variables were excluded (p > .1). On average, measurements of OP_AA and OP_GSH were moderately correlated (maximum Pearson's R = .7) with PM2.5 and other metrics (PM2.5 absorbance, NO2, Cu, Fe). Hold-out validation (HOV) R^2 for the OP_AA models was .21, .58, .45, .53, and .13 for Basel, Catalonia, London-Oxford, the Netherlands, and Turin, respectively. For OP_GSH, the only model achieving at least moderate performance was for the Netherlands (R^2 = .31). Combined models for OP_AA and OP_GSH were largely explained by study area, with weak local predictors of intra-area contrasts; we therefore do not endorse them for use in epidemiologic studies. Given the moderate correlation of OP_AA with other pollutants, the three reasonably performing LUR models for OP_AA could be used independently of other pollutant metrics in epidemiological studies. Copyright © 2017 Elsevier Inc. All rights reserved.
Riato, Luisa; Leira, Manel; Della Bella, Valentina; Oberholster, Paul J
2018-01-15
Acid mine drainage (AMD) from coal mining in the Mpumalanga Highveld region of South Africa has caused severe chemical and biological degradation of aquatic habitats, specifically depressional wetlands, as mines use these wetlands for storage of AMD. Diatom-based multimetric indices (MMIs) for assessing wetland condition have mostly been developed for agricultural and urban land-use impacts; no diatom MMI of wetland condition has been developed to assess AMD impacts related to mining activities. Previous approaches to diatom-based MMI development in wetlands have not accounted for natural variability, and natural variability among depressional wetlands may reduce the accuracy of MMIs. Epiphytic diatom MMIs sensitive to AMD were therefore developed for a range of depressional wetland types to account for natural variation in biological metrics. For this, we classified wetland types based on diatom typologies. Between 4 and 15 final metrics were selected from a pool of ~140 candidate metrics to develop the MMIs, based on: (1) broad range, (2) high separation power, and (3) low correlation among metrics. Final metrics were selected from three categories - similarity to reference sites, functional groups, and taxonomic composition - which represent different aspects of diatom assemblage structure and function. MMI performance was evaluated according to precision in distinguishing reference sites, responsiveness in discriminating reference from disturbed sites, sensitivity to human disturbances, and relevance to AMD-related stressors. Each MMI showed excellent discriminatory power whether or not it accounted for natural variation; however, accounting for variation by grouping sites based on diatom typologies improved the overall performance of the MMIs. Our study highlights the usefulness of diatom-based metrics and provides a model for the biological assessment of depressional wetland condition in South Africa and elsewhere. Copyright © 2017 Elsevier B.V. All rights reserved.
Validation of Metrics as Error Predictors
NASA Astrophysics Data System (ADS)
Mendling, Jan
In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.
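The regression step itself is standard; a minimal sketch with placeholder data (the real inputs are the EPC metrics from the previous chapter and a binary has-error label):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.random((120, 6))                       # placeholder metric values
w = np.array([1.5, 0.8, 0.0, 0.0, -0.5, 0.3])  # hypothetical effect sizes
y = ((X @ w + 0.2 * rng.standard_normal(120)) > 1.2).astype(int)

clf = LogisticRegression().fit(X, y)           # error ~ logit(metrics)
p_error = clf.predict_proba(X)[:, 1]           # predicted error probability
```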
A Teacher's Guide to Metrics. A Series of In-Service Booklets Designed for Adult Educators.
ERIC Educational Resources Information Center
Wendel, Robert, Ed.; And Others
This series of seven booklets is designed to train teachers of adults in metrication, as a prerequisite to offering metrics in adult basic education and general educational development programs. The seven booklets provide a guide representing an integration of metric teaching methods and metric materials to place the adult in an active learning…
ERIC Educational Resources Information Center
Exum, Kenith Gene
Examined is the effectiveness of a method of teaching the metric system using the booklet, Metric Supplement to Mathematics, in combination with a physical science textbook. The participants in the study were randomly selected undergraduates in a non-science oriented program of study. Instruments used included the Metric Supplement to Mathematics…
Examining shifts in zooplankton community as a response of environmental change in Lakes
NASA Astrophysics Data System (ADS)
Ghadouani, Anas; Mines, Conor; Legendre, Pierre; Yan, Norman
2014-05-01
We examined 20 years of zooplankton samples from Harp Lake for shifts in zooplankton variability following invasion by the zooplankton predator Bythotrephes longimanus, using organism body size—as measured at high resolution by a Laser Optical Plankton Counter (LOPC)—as the primary metric of investigation. A period of transitory high variability in the 2 yr post-invasion was observed for both body-size compositional variability and aggregate variability metrics, with both measures shifting from low or intermediate to high variability immediately following invasion, before shifting again to intermediate variability 2 yr post-invasion. Aggregate and compositional variability dynamics were also considered in combination over the study period, revealing that the period of transitory high variability coincided with a shift from a community-wide stasis variability pattern to one of asynchrony, before a shift back to stasis 2 yr post-invasion. These dynamics were related to changes in the significant zooplankton species within the Harp Lake community over the pre- and post-invasion periods, and are likely indicative of changes in the stability of the zooplankton community following invasion by Bythotrephes. The dual consideration of aggregate and compositional variability as measured by the LOPC was found to provide a valuable means of assessing the ecological effects of biological invasion on zooplankton communities as a whole, extending our knowledge of the effects of invasion beyond that already revealed through more traditional taxonomic investigation.
New Objective Refraction Metric Based on Sphere Fitting to the Wavefront.
Jaskulski, Mateusz; Martínez-Finkelshtein, Andreí; López-Gil, Norberto
2017-01-01
To develop an objective refraction formula based on the ocular wavefront error (WFE), expressed in terms of Zernike coefficients and pupil radius, which would be an accurate predictor of subjective spherical equivalent (SE) for different pupil sizes. A sphere is fitted to the ocular wavefront at the center and at a variable distance, t. The optimal fitting distance, t_opt, is obtained empirically from a dataset of 308 eyes as a function of the objective refraction pupil radius, r_0, and used to define the formula of a new wavefront refraction metric (MTR). The metric is tested in another, independent dataset of 200 eyes. For pupil radii r_0 ≤ 2 mm, the new metric predicts the equivalent sphere with similar accuracy (<0.1D); however, for r_0 > 2 mm, the mean error of traditional metrics can increase beyond 0.25D, while the MTR remains accurate. The proposed metric allows clinicians to obtain an accurate clinical spherical equivalent value without rescaling/refitting of the wavefront coefficients. It has the potential to be developed into a metric able to predict full spherocylindrical refraction for the desired illumination conditions and corresponding pupil size.
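For context, the traditional paraxial metric that the MTR generalizes computes the spherical equivalent from the Zernike defocus coefficient c_2^0 (in µm) and the pupil radius r_0 (in mm) as:

```latex
M = -\frac{4\sqrt{3}\, c_2^{0}}{r_0^{2}} \quad [\mathrm{D}]
```

The MTR instead derives the equivalent sphere from a sphere fitted to the wavefront at the empirically optimized distance t_opt(r_0), which is what keeps it accurate for large pupils.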
Model assessment using a multi-metric ranking technique
NASA Astrophysics Data System (ADS)
Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.
2017-12-01
Validation comparison of multiple models presents challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics that identify adeptness in extreme events while maintaining simplicity for management decisions; flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique that ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as root mean square difference and rank correlation, were also explored but removed when their information was found to be generally duplicative of other metrics. While equal weights are applied here, the weights could be altered to favor preferred metrics. Two examples are shown comparing ocean-model currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process but were found useful in an independent context and will be briefly reported.
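A weighted tally of per-metric ranks is easy to sketch; the orientation flags and weights below are user inputs, and ties are broken arbitrarily in this simplified version:

```python
import numpy as np

def consolidated_ranks(scores, higher_is_better, weights=None):
    """Weighted tally: rank the models on each metric, then sum the
    weighted ranks (lower total = better overall)."""
    s = np.where(higher_is_better, -scores, scores)  # smaller = better
    ranks = s.argsort(axis=0).argsort(axis=0) + 1    # 1 = best per metric
    w = np.ones(s.shape[1]) if weights is None else np.asarray(weights)
    return (ranks * w).sum(axis=1)
```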
Modeling marbled murrelet (Brachyramphus marmoratus) habitat using LiDAR-derived canopy data
Hagar, Joan C.; Eskelson, Bianca N.I.; Haggerty, Patricia K.; Nelson, S. Kim; Vesely, David G.
2014-01-01
LiDAR (Light Detection And Ranging) is an emerging remote-sensing tool that can provide fine-scale data describing vertical complexity of vegetation relevant to species that are responsive to forest structure. We used LiDAR data to estimate occupancy probability for the federally threatened marbled murrelet (Brachyramphus marmoratus) in the Oregon Coast Range of the United States. Our goal was to address the need identified in the Recovery Plan for a more accurate estimate of the availability of nesting habitat by developing occupancy maps based on refined measures of nest-strand structure. We used murrelet occupancy data collected by the Bureau of Land Management Coos Bay District, and canopy metrics calculated from discrete return airborne LiDAR data, to fit a logistic regression model predicting the probability of occupancy. Our final model for stand-level occupancy included distance to coast, and 5 LiDAR-derived variables describing canopy structure. With an area under the curve value (AUC) of 0.74, this model had acceptable discrimination and fair agreement (Cohen's κ = 0.24), especially considering that all sites in our sample were regarded by managers as potential habitat. The LiDAR model provided better discrimination between occupied and unoccupied sites than did a model using variables derived from Gradient Nearest Neighbor maps that were previously reported as important predictors of murrelet occupancy (AUC = 0.64, κ = 0.12). We also evaluated LiDAR metrics at 11 known murrelet nest sites. Two LiDAR-derived variables accurately discriminated nest sites from random sites (average AUC = 0.91). LiDAR provided a means of quantifying 3-dimensional canopy structure with variables that are ecologically relevant to murrelet nesting habitat, and have not been as accurately quantified by other mensuration methods.
Asymptomatic Alzheimer disease: Defining resilience.
Hohman, Timothy J; McLaren, Donald G; Mormino, Elizabeth C; Gifford, Katherine A; Libon, David J; Jefferson, Angela L
2016-12-06
To define robust resilience metrics by leveraging CSF biomarkers of Alzheimer disease (AD) pathology within a latent variable framework and to demonstrate the ability of such metrics to predict slower rates of cognitive decline and protection against diagnostic conversion. Participants with normal cognition (n = 297) and mild cognitive impairment (n = 432) were drawn from the Alzheimer's Disease Neuroimaging Initiative. Resilience metrics were defined at baseline by examining the residuals when regressing brain aging outcomes (hippocampal volume and cognition) on CSF biomarkers. A positive residual reflected better outcomes than expected for a given level of pathology (high resilience). Residuals were integrated into a latent variable model of resilience and validated by testing their ability to independently predict diagnostic conversion, cognitive decline, and the rate of ventricular dilation. Latent variables of resilience predicted a decreased risk of conversion (hazard ratio < 0.54, p < 0.0001), slower cognitive decline (β > 0.02, p < 0.001), and slower rates of ventricular dilation (β < -4.7, p < 2 × 10^-15). These results were significant even when analyses were restricted to clinically normal individuals. Furthermore, resilience metrics interacted with biomarker status such that biomarker-positive individuals with low resilience showed the greatest risk of subsequent decline. Robust phenotypes of resilience calculated by leveraging AD biomarkers and baseline brain aging outcomes provide insight into which individuals are at greatest risk of short-term decline. Such comprehensive definitions of resilience are needed to further our understanding of the mechanisms that protect individuals from the clinical manifestation of AD dementia, especially among biomarker-positive individuals. © 2016 American Academy of Neurology.
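The residual construction at the heart of these resilience metrics is compact; a minimal sketch (the paper embeds the residuals in a latent variable model, which is not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def resilience_residuals(csf, outcome):
    """Residual-based resilience: a positive residual means a better
    outcome (e.g., hippocampal volume) than the CSF biomarkers predict."""
    model = LinearRegression().fit(csf, outcome)
    return outcome - model.predict(csf)
```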
Measuring phenological variability from satellite imagery
Reed, Bradley C.; Brown, Jesslyn F.; Vanderzee, D.; Loveland, Thomas R.; Merchant, James W.; Ohlen, Donald O.
1994-01-01
Vegetation phenological phenomena are closely related to seasonal dynamics of the lower atmosphere and are therefore important elements in global models and vegetation monitoring. Normalized difference vegetation index (NDVI) data derived from the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer (AVHRR) satellite sensor offer a means of efficiently and objectively evaluating phenological characteristics over large areas. Twelve metrics linked to key phenological events were computed based on time-series NDVI data collected from 1989 to 1992 over the conterminous United States. These measures include the onset of greenness, time of peak NDVI, maximum NDVI, rate of greenup, rate of senescence, and integrated NDVI. Measures of central tendency and variability of the measures were computed and analyzed for various land cover types. Results from the analysis showed strong coincidence between the satellite-derived metrics and predicted phenological characteristics. In particular, the metrics identified interannual variability of spring wheat in North Dakota, characterized the phenology of four types of grasslands, and established the phenological consistency of deciduous and coniferous forests. These results have implications for large-area land cover mapping and monitoring. The utility of remotely sensed data as input to vegetation mapping is demonstrated by showing the distinct phenology of several land cover types. More stable information contained in ancillary data should be incorporated into the mapping process, particularly in areas with high phenological variability. In a regional or global monitoring system, an increase in variability in a region may serve as a signal to perform more detailed land cover analysis with higher resolution imagery.
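A toy extraction of a few of these metrics from a one-year NDVI series, using a fixed onset threshold (the published metrics use more robust definitions):

```python
import numpy as np

def phenology_metrics(ndvi, threshold=0.3):
    """Toy metrics from a one-year NDVI series (one value per composite
    period): onset of greenness, time of peak, max, integrated NDVI."""
    ndvi = np.asarray(ndvi, float)
    return {
        "onset": int(np.argmax(ndvi > threshold)),  # first period above
        "peak_time": int(np.argmax(ndvi)),
        "max_ndvi": float(ndvi.max()),
        "integrated": float(np.clip(ndvi - threshold, 0, None).sum()),
    }
```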
Single tree biomass modelling using airborne laser scanning
NASA Astrophysics Data System (ADS)
Kankare, Ville; Räty, Minna; Yu, Xiaowei; Holopainen, Markus; Vastaranta, Mikko; Kantola, Tuula; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri; Viitala, Risto
2013-11-01
Accurate forest biomass mapping methods would provide the means for e.g. detecting bioenergy potential, biofuel and forest-bound carbon. The demand for practical biomass mapping methods at all forest levels is growing worldwide, and viable options are being developed. Airborne laser scanning (ALS) is a promising forest biomass mapping technique, due to its capability of measuring the three-dimensional forest vegetation structure. The objective of the study was to develop new methods for tree-level biomass estimation using metrics derived from ALS point clouds and to compare the results with field references collected using destructive sampling and with existing biomass models. The study area was located in Evo, southern Finland. ALS data was collected in 2009 with pulse density equalling approximately 10 pulses/m2. Linear models were developed for the following tree biomass components: total, stem wood, living branch and total canopy biomass. ALS-derived geometric and statistical point metrics were used as explanatory variables when creating the models. The total and stem biomass root mean square error per cents equalled 26.3% and 28.4% for Scots pine (Pinus sylvestris L.), and 36.8% and 27.6% for Norway spruce (Picea abies (L.) H. Karst.), respectively. The results showed that higher estimation accuracy for all biomass components can be achieved with models created in this study compared to existing allometric biomass models when ALS-derived height and diameter were used as input parameters. Best results were achieved when adding field-measured diameter and height as inputs in the existing biomass models. The only exceptions to this were the canopy and living branch biomass estimations for spruce. The achieved results are encouraging for the use of ALS-derived metrics in biomass mapping and for further development of the models.
NASA Astrophysics Data System (ADS)
Zhang, Lulu; Liu, Jingling; Li, Yi
2015-03-01
The influence on macroinvertebrate and periphyton communities in Baiyangdian Lake of spatial differences, caused by different anthropogenic disturbances, and of temporal changes, caused by natural conditions, was compared. Periphyton and macrobenthos assemblage samples were collected simultaneously on four occasions during 2009 and 2010. Based on the physical and chemical attributes of the water and sediment, the 8 sampling sites were divided into 5 habitat types using cluster analysis. From the coefficient of variation (CV) analysis, three primary conclusions can be drawn: (1) the metrics of Hilsenhoff Biotic Index (HBI), Percent Tolerant Taxa (PTT), Percent Dominant Taxon (PDT), and community loss index (CLI), based on macroinvertebrates, and the metrics of algal density (AD), the proportion of chlorophyta (CHL), and the proportion of cyanophyta (CYA), based on periphyton, were mostly constant throughout our study; (2) in terms of spatial variation, the CV values of the macroinvertebrate-based metrics were lower than those of the periphyton-based metrics, possibly reflecting the effects of changes in environmental factors, whereas in terms of temporal variation the CV values of the macroinvertebrate-based metrics were higher than those of the periphyton-based metrics, which may be linked to the phenology and life-history patterns of the macroinvertebrate individuals; and (3) the CV values for the functional-based metrics were higher than those for the structural-based metrics. Therefore, spatial and temporal variation in metrics should be considered when applying these biometrics for assessment.
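The CV used throughout is simply the sample standard deviation over the mean; in code:

```python
import numpy as np

def cv(x):
    """Coefficient of variation: sample standard deviation over mean."""
    x = np.asarray(x, dtype=float)
    return np.std(x, ddof=1) / np.mean(x)
```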
Fransson, Boel A; Chen, Chi-Ya; Noyes, Julie A; Ragle, Claude A
2016-11-01
To determine the construct and concurrent validity of instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality simulators. Evaluation study. Veterinary students (novice, n = 14) and veterinarians (experienced, n = 11) with no or variable laparoscopic experience. Participants' minimally invasive surgery (MIS) experience was determined from hospital records of MIS procedures performed in the Teaching Hospital. Basic laparoscopic skills were assessed by 5 tasks using a physical box trainer. Each participant completed 2 tasks for assessment in each type of simulator (virtual reality: bowel handling and cutting; augmented reality: object positioning and a pericardial window model). Motion metrics such as instrument path length, angle or drift, and economy of motion were recorded for each simulator. None of the motion metrics in the virtual reality simulator correlated with experience or with the basic laparoscopic skills score. All metrics in augmented reality (time, instrument path, and economy of movement) were significantly correlated with experience, except for the hand-dominance metric. The basic laparoscopic skills score was correlated with all performance metrics in augmented reality. The augmented reality motion metrics differed between American College of Veterinary Surgeons diplomates and residents, whereas the basic laparoscopic skills score and the virtual reality metrics did not. Our results provide construct and concurrent validity for motion analysis metrics in an augmented reality system, whereas the virtual reality system was validated only for the time score. © Copyright 2016 by The American College of Veterinary Surgeons.
Witt, Emitt C.
2016-01-01
Historic lead and zinc (Pb-Zn) mining in southeast Missouri's "Old Lead Belt" has left large chat piles dominating a landscape where, prior to 1972, mining was the major industry of the region. As a result of the variable beneficiation methods used over the history of mining activity, these piles retain large quantities of unrecovered Pb and Zn and, to a lesser extent, cadmium (Cd). Quantifying the residual trace metal content of chat piles is problematic because of the extensive field effort required to collect elevation points for volumetric analysis. This investigation demonstrates that publicly available lidar point data from the U.S. Geological Survey 3D Elevation Program (3DEP) can be used to calculate chat pile volumes effectively, providing a more accurate estimate of the total residual trace metal content of these mining wastes. Five chat piles located in St. Francois County, Missouri, were quantified for residual trace metal content. Using lidar point cloud data collected in 2011 and existing trace metal concentration data obtained during remedial investigations, the residual content of these chat piles ranged from 9247 to 88,579 metric tons Pb, 1925 to 52,306 metric tons Zn, and 51 to 1107 metric tons Cd. New beneficiation methods developed to recover these constituents from chat piles would need to achieve current Federal soil screening standards. For the five chat piles investigated, 42 to 72% of residual Pb would require mitigation to meet the 1200 mg/kg Federal non-playground standard, 88 to 98% of residual Zn would require mitigation to meet the Ecological Soil Screening Level (ESSL) for plant life, and 70 to 98% of Cd would require mitigation to meet the ESSL. Achieving these goals through an existing or future beneficiation method would remediate chat to a trace metal concentration that would support its use as a safe agricultural soil amendment.
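The volumetric step can be sketched as a DEM cut-and-fill sum: once lidar ground returns are gridded, pile volume is the height above an assumed base surface integrated over the cell area. A minimal illustration (the grid, base elevation, and bulk density below are hypothetical, not values from the study):

```python
import numpy as np

def pile_volume(dem, base_elev, cell_size):
    """Volume (m^3) of material above a flat base surface in a gridded DEM."""
    height = np.clip(dem - base_elev, 0.0, None)   # ignore cells below the base
    return height.sum() * cell_size ** 2

# Toy 1 m grid for a small mound; a real pile would be gridded from 3DEP points.
x, y = np.meshgrid(np.arange(100), np.arange(100))
dem = 200.0 + 15.0 * np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 400.0)
v = pile_volume(dem, base_elev=200.0, cell_size=1.0)
tonnes = v * 1.8  # assumed bulk density of ~1.8 t/m^3 for chat
print(f"{v:.0f} m^3 ≈ {tonnes:.0f} metric tons of chat")
```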
Anatomical contouring variability in thoracic organs at risk
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCall, Ross, E-mail: rmccall86@gmail.com; MacLennan, Grayden; Taylor, Matthew
2016-01-01
The purpose of this study was to determine whether contouring thoracic organs at risk was consistent among medical dosimetrists and to identify how trends in dosimetrists' education and experience affected contouring accuracy. Qualitative and quantitative methods were used to contextualize the raw data that were obtained. A total of 3 different computed tomography (CT) data sets were provided to medical dosimetrists (N = 13) across 5 different institutions. The medical dosimetrists were directed to contour the lungs, heart, spinal cord, and esophagus. The medical dosimetrists were instructed to contour in line with their institutional standards and were allowed to use any contouring tool or technique that they would traditionally use. The contours from each medical dosimetrist were evaluated against “gold standard” contours drawn and validated by 2 radiation oncology physicians. The dosimetrist-derived contours were evaluated against the gold standard using both a Dice coefficient method and a penalty-based metric scoring system. A short survey was also completed by each medical dosimetrist to evaluate their individual contouring experience. There was no significant variation in the contouring consistency of the lungs and spinal cord. Intradosimetrist contouring was consistent for those who contoured the esophagus and heart correctly; however, medical dosimetrists with a poor metric score showed erratic and inconsistent methods of contouring.
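The Dice coefficient used here to score contour agreement is straightforward to compute from binary masks; a small sketch with toy contours (the penalty-based scoring system is not modeled):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary contour masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: a dosimetrist's contour vs. the physician "gold standard".
gold = np.zeros((64, 64), bool); gold[20:40, 20:40] = True
test = np.zeros((64, 64), bool); test[22:42, 21:41] = True
print(f"Dice = {dice(gold, test):.3f}")  # 1.0 = perfect overlap
```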
Understanding software faults and their role in software reliability modeling
NASA Technical Reports Server (NTRS)
Munson, John C.
1994-01-01
This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied in modeling the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced in these models to control for differences between programs and also sequential executions of the same program. As the basic nature of the software attributes that affect software reliability becomes better understood in the modeling process, this information begins to have important implications for the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality: many of the metrics are highly correlated. Consider the two attributes lines of code, LOC, and number of program statements, Stmts. It is quite obvious that a program with a high value of LOC will probably also have a relatively high value of Stmts. In the case of low-level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for some statistical analyses such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation; the estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the regression equation. Since most of the existing metrics have common elements and are linear combinations of these common elements, it seems reasonable to investigate the structure of the underlying common factors or components that make up the raw metrics. The technique we have chosen to explore this structure is principal components analysis, a decomposition technique that may be used to detect and analyze collinearity in software metrics. When confronted with a large number of metrics measuring a single construct, it may be desirable to represent the set by some smaller number of variables that convey all, or most, of the information in the original set. Principal components are linear transformations of a set of random variables that summarize the information contained in the variables. The transformations are chosen so that the first component accounts for the maximal amount of variation of the measures of any possible linear transform; the second component accounts for the maximal amount of residual variation; and so on. The principal components are constructed so that they represent transformed scores on dimensions that are orthogonal. Through the use of principal components analysis, it is possible to map a set of highly related software attributes into a small number of uncorrelated attribute domains, which definitively solves the problem of multicollinearity in subsequent regression analysis.
There are many software metrics in the literature, but principal component analysis reveals that there are few distinct sources of variation, i.e. dimensions, in this set of metrics. It would appear perfectly reasonable to characterize the measurable attributes of a program with a simple function of a small number of orthogonal metrics each of which represents a distinct software attribute domain.
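A compact sketch of the decomposition described above: standardize two collinear raw metrics (LOC and statement count), extract orthogonal principal components, and verify that the component scores are uncorrelated. The data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
loc = rng.normal(500, 150, 200)                     # lines of code
stmts = 0.8 * loc + rng.normal(0, 20, 200)          # statement count, collinear with LOC
X = np.column_stack([loc, stmts])

# Principal components via eigen-decomposition of the correlation matrix.
Z = (X - X.mean(0)) / X.std(0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]                   # descending variance
scores = Z @ eigvecs[:, order]                      # orthogonal component scores

print(np.corrcoef(scores, rowvar=False).round(3))   # ~identity: collinearity removed
print((eigvals[order] / eigvals.sum()).round(3))    # variance explained per component
```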
NASA Astrophysics Data System (ADS)
Forsythe, N.; Blenkinsop, S.; Fowler, H. J.
2015-05-01
A three-step climate classification was applied to a spatial domain covering the Himalayan arc and adjacent plains regions using input data from four global meteorological reanalyses. Input variables were selected based on an understanding of the climatic drivers of regional water resource variability and crop yields. Principal component analysis (PCA) of those variables and k-means clustering on the PCA outputs revealed a reanalysis ensemble consensus for eight macro-climate zones. Spatial statistics of the input variables for each zone revealed consistent, distinct climatologies. This climate classification approach has potential for enhancing assessment of climatic influences on water resources and food security, as well as for characterising the skill and bias of gridded data sets, both meteorological reanalyses and climate models, in reproducing subregional climatologies. Through their spatial descriptors (area, geographic centroid, elevation mean and range), climate classifications also provide metrics, beyond simple changes in individual variables, with which to assess the magnitude of projected climate change. Such sophisticated metrics are of particular interest for regions, including mountainous areas, where natural and anthropogenic systems are expected to be sensitive to incremental climate shifts.
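A minimal sketch of the two-stage classification described: PCA on standardized climate variables followed by k-means with k = 8 on the leading component scores. Inputs here are random placeholders for the reanalysis fields.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Rows = grid cells, columns = climate variables (e.g., precip, temperature, ...).
climate = rng.normal(size=(5000, 6))

scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(climate))
zones = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(zones))  # grid-cell count per macro-climate zone
```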
Determining a regional framework for assessing biotic integrity of virginia streams
Smogor, Roy A.; Angermeier, P.L.
2001-01-01
The utility of an index of biotic integrity (IBI) depends on its ability to distinguish anthropogenic effects on biota amid natural biological variability. To enhance this ability, we examined fish assemblage data from least-disturbed stream sites in Virginia to determine the best way to regionally stratify natural variation in candidate IBI metrics and their scoring criteria. Specifically, we examined metric variation among physiographic regions, U.S. Environmental Protection Agency ecoregions, and drainage basins to judge their utility as regions in which to develop and use distinct versions of the IBI for Virginia warmwater streams. Statewide, metrics differed most among physiographic regions; thus, we recommend their use as IBI regions. The largest differences were found for taxonomic metrics between coastal plain and mountain sites, particularly in numbers of native minnow (Cyprinidae), sunfish (Centrarchidae), and darter (Percidae) species. Trophic and reproductive metrics also differed between coastal plain and more-upland streams, presumably reflecting differences in functional adaptations of fishes to upland versus lowland stream habitats. We suggest three preliminary regional IBIs for Virginia, each having a distinctive set of taxonomic, trophic, and reproductive metrics and corresponding scoring criteria.
Metrics, Lumber, and the Shop Teacher
ERIC Educational Resources Information Center
Craemer, Peter J.
1978-01-01
As producers of lumber are preparing to convert their output to the metric system, wood shop and building construction teachers must become familiar with the metric measurement language and methods. Manufacturers prefer the "soft conversion" process of changing English to metric units rather than hard conversion, or redimensioning of lumber. Some…
2015-01-01
… different PRBC transfusion volumes. We performed multivariate regression analysis using HRV metrics and routine vital signs to test the hypothesis that … The study sponsors did not have any role in the study design, data collection, analysis and interpretation of data, report writing, or the decision to …
Sensitivity of intermittent streams to climate variations in the USA
Eng, Kenny; Wolock, David M.; Dettinger, Mike
2015-01-01
There is a great deal of interest in the literature on streamflow changes caused by climate change because of the potential negative effects on aquatic biota and water supplies. Most previous studies have primarily focused on perennial streams, and there have been only a few studies examining the effect of climate variability on intermittent streams. Our objectives in this study were to (1) identify regions of similar zero-flow behavior, and (2) evaluate the sensitivity of intermittent streams to historical variability in climate in the United States. This study was carried out at 265 intermittent streams by evaluating: (1) correlations among time series of flow metrics (number of zero-flow events, the average of the central 50% and largest 10% of flows) with climate (magnitudes, durations and intensity), and (2) decadal changes in the seasonality and long-term trends of these flow metrics. Results identified five distinct seasonality patterns in the zero-flow events. In addition, strong associations between the low-flow metrics and historical changes in climate were found. The decadal analysis suggested no significant seasonal shifts or decade-to-decade trends in the low-flow metrics. The lack of trends or changes in seasonality is likely due to unchanged long-term patterns in precipitation over the time period examined.
Qian, Hong; Chen, Shengbin; Zhang, Jin-Long
2017-07-17
Niche-based and neutrality-based theories are two major classes of theories explaining the assembly mechanisms of local communities. Both theories have been frequently used to explain species diversity and composition in local communities, but their relative importance remains unclear. Here, we analyzed 57 assemblages of angiosperm trees in 0.1-ha forest plots across China to examine the effects of environmental heterogeneity (relevant to niche-based processes) and spatial contingency (relevant to neutrality-based processes) on the phylogenetic structure of angiosperm tree assemblages distributed across a wide range of environment and space. Phylogenetic structure was quantified with six phylogenetic metrics (i.e., phylogenetic diversity, mean pairwise distance, mean nearest taxon distance, and the standardized effect sizes of these three metrics), which emphasize different depths of evolutionary history and account for different degrees of species richness effects. Our results showed that the variation in phylogenetic metrics explained independently by environmental variables was on average much greater than that explained independently by spatial structure, and the vast majority of the variation in phylogenetic metrics was explained by spatially structured environmental variables. We conclude that niche-based processes have played a more important role than neutrality-based processes in driving the phylogenetic structure of angiosperm tree species in forest communities in China.
Miller, Anna N; Kozar, Rosemary; Wolinsky, Philip
2017-06-01
Reproducible metrics are needed to evaluate the delivery of orthopaedic trauma care, national care norms, and outliers. The American College of Surgeons (ACS) is uniquely positioned to collect and evaluate the data needed to evaluate orthopaedic trauma care via the Committee on Trauma and the Trauma Quality Improvement Project. We evaluated the first quality metrics the ACS has collected for orthopaedic trauma surgery to determine whether these metrics can be appropriately collected with accuracy and completeness. The metrics include the time to administration of the first dose of antibiotics for open fractures, the time to surgical irrigation and débridement of open tibial fractures, and the percentage of patients who undergo stabilization of femoral fractures at trauma centers nationwide. These metrics were analyzed to evaluate variances in the delivery of orthopaedic care across the country. The data showed wide variances for all metrics, and many centers had an incomplete ability to collect the orthopaedic trauma care metrics. There was large variability in the results of the metrics collected among different trauma center levels, as well as among centers of a particular level. The ACS has successfully begun tracking orthopaedic trauma care performance measures, which will help inform reevaluation of the goals and continued work on data collection and improvement of patient care. Future areas of research may link these performance measures with patient outcomes, such as long-term tracking, to assess nonunion and function. This information can provide insight into center performance and its effect on patient outcomes. The ACS was able to successfully collect and evaluate the data for three metrics used to assess the quality of orthopaedic trauma care. However, additional research is needed to determine whether these metrics are suitable for evaluating orthopaedic trauma care and to set cutoff values for each metric.
On Applying the Prognostic Performance Metrics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2009-01-01
Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper continues previous efforts in which several new evaluation metrics tailored for prognostics were introduced and shown to evaluate various algorithms more effectively than conventional metrics. Specifically, this paper presents a detailed discussion of how these metrics should be interpreted and used. Several shortcomings identified while applying these metrics to a variety of real applications are also summarized, along with discussions that attempt to alleviate these problems. Further, these metrics have been enhanced to incorporate probability distribution information from prognostic algorithms, as opposed to evaluation based on point estimates only. Several methods have been suggested, and guidelines have been provided to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to some of these new metrics, like prognostic horizon and alpha-lambda performance, and quantify the corresponding performance while incorporating the uncertainty information.
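For orientation, the alpha-lambda test mentioned above checks whether the remaining-useful-life (RUL) prediction made at a fraction λ of the way from first prediction to end of life falls within ±α of the true RUL; a point-estimate sketch with illustrative numbers (the distribution-aware variants discussed in the paper are not modeled):

```python
def alpha_lambda_pass(rul_pred_at_lambda, t_first, t_eol, alpha=0.2, lam=0.5):
    """True if the RUL predicted at the lambda time point lies within the cone."""
    t_check = t_first + lam * (t_eol - t_first)   # evaluation time
    true_rul = t_eol - t_check                    # ground-truth remaining life
    return (1 - alpha) * true_rul <= rul_pred_at_lambda <= (1 + alpha) * true_rul

print(alpha_lambda_pass(45.0, t_first=0.0, t_eol=100.0))  # True: 45 in [40, 60]
```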
Empirical methods for assessing meaningful neuropsychological change following epilepsy surgery.
Sawrie, S M; Chelune, G J; Naugle, R I; Lüders, H O
1996-11-01
Traditional methods for assessing the neurocognitive effects of epilepsy surgery are confounded by practice effects, test-retest reliability issues, and regression to the mean. This study employs 2 methods for assessing individual change that allow direct comparison of changes across both individuals and test measures. Fifty-one medically intractable epilepsy patients completed a comprehensive neuropsychological battery twice, approximately 8 months apart, prior to any invasive monitoring or surgical intervention. First, a Reliable Change (RC) index score was computed for each test score to take into account the reliability of that measure, and a cutoff score was empirically derived to establish the limits of statistically reliable change. These indices were subsequently adjusted for expected practice effects. The second approach used a regression technique to establish "change norms" along a common metric that models both expected practice effects and regression to the mean. The RC index scores provide the clinician with a statistical means of determining whether a patient's retest performance is "significantly" changed from baseline. The regression norms for change allow the clinician to evaluate the magnitude of a given patient's change on 1 or more variables along a common metric that takes into account the reliability and stability of each test measure. Case data illustrate how these methods provide an empirically grounded means for evaluating neurocognitive outcomes following medical interventions such as epilepsy surgery.
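Both indices have standard closed forms; a hedged sketch of a practice-adjusted Reliable Change index (Jacobson-Truax style) and a regression-based change score, with illustrative parameter values rather than the study's norms:

```python
import math

def rc_index(x1, x2, sd_baseline, r_xx, practice=0.0):
    """Practice-adjusted Reliable Change index; |RC| > 1.645 ~ 90% CI change."""
    sem = sd_baseline * math.sqrt(1.0 - r_xx)       # standard error of measurement
    se_diff = math.sqrt(2.0 * sem ** 2)             # SE of the difference score
    return ((x2 - x1) - practice) / se_diff

def regression_change_z(x1, x2, intercept, slope, se_est):
    """Standardized change vs. the retest score predicted from baseline."""
    return (x2 - (intercept + slope * x1)) / se_est

print(f"RC = {rc_index(95, 104, sd_baseline=12, r_xx=0.85, practice=3.0):+.2f}")
print(f"z  = {regression_change_z(95, 104, intercept=10.0, slope=0.92, se_est=6.0):+.2f}")
```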
A neural net-based approach to software metrics
NASA Technical Reports Server (NTRS)
Boetticher, G.; Srinivas, Kankanahalli; Eichmann, David A.
1992-01-01
Software metrics provide an effective method for characterizing software. Metrics have traditionally been composed through the definition of an equation, an approach limited by the requirement that all the interrelationships among all the parameters be fully understood. This paper explores an alternative, neural network approach to modeling metrics. Experiments performed on two widely accepted metrics, McCabe and Halstead, indicate that the approach is sound, thus serving as the groundwork for further exploration into the analysis and design of software metrics.
Space Transportation Operations: Assessment of Methodologies and Models
NASA Technical Reports Server (NTRS)
Joglekar, Prafulla
2001-01-01
The systems design process for future space transportation involves understanding multiple variables and their effect on lifecycle metrics. Variables such as technology readiness or potential environmental impact are qualitative, while variables such as reliability, operations costs or flight rates are quantitative. In deciding what new design concepts to fund, NASA needs a methodology that would assess the sum total of all relevant qualitative and quantitative lifecycle metrics resulting from each proposed concept. The objective of this research was to review the state of operations assessment methodologies and models used to evaluate proposed space transportation systems and to develop recommendations for improving them. It was found that, compared to the models available from other sources, the operations assessment methodology recently developed at Kennedy Space Center has the potential to produce a decision support tool that will serve as the industry standard. Towards that goal, a number of areas of improvement in the Kennedy Space Center's methodology are identified.
Frey, Jeffrey W.; Caskey, Brian J.; Lowe, B. Scott
2007-01-01
Data were gathered from July through September 2001 at 34 randomly selected sites in the West Fork White River Basin, Indiana, for algal biomass, habitat, nutrients, and biological communities (fish and invertebrates). Basin characteristics (drainage area and land use) and biological-community attributes and metric scores were determined for the basin of each sampling site. Yearly Principal Components Analysis site scores were calculated for algal biomass (periphyton and seston). The yearly Principal Components Analysis site scores for the first axis (PC1) were related, using Spearman's rho, to the seasonal algal-biomass, basin-characteristic, habitat, seasonal nutrient, and biological-community attribute and metric score data. The periphyton PC1 site score, which was most influenced by ash-free dry mass, was negatively related to one (percent closed canopy) of nine habitat variables examined. Of the 43 fish-community attributes and metric scores examined, the periphyton PC1 was positively related to one fish-community attribute (percent tolerant). Of the 21 invertebrate-community attributes and metric scores examined, the periphyton PC1 was positively related to one attribute (Ephemeroptera, Plecoptera, and Trichoptera (EPT) index) and one metric score (EPT index metric score). The periphyton PC1 was not related to the five basin-characteristic or 12 nutrient variables examined. The seston PC1 site score, which was most influenced by particulate organic carbon, was negatively related to two of the 12 nutrient variables examined: total Kjeldahl nitrogen (July) and total phosphorus (July). Of the 43 fish-community attributes and metric scores examined, the seston PC1 was negatively related to one attribute (large-river percent). Of the 21 invertebrate-community attributes and metric scores examined, the seston PC1 was negatively related to one attribute (EPT-to-total ratio). The seston PC1 was not related to the five basin-characteristic or nine habitat variables examined. To understand how the choice of sampling sites might have affected the results, an analysis of the drainage area and land use was done. The 34 randomly selected sites in the West Fork White River Basin in 2001 were skewed toward small streams. The dominant mean land use of the sites sampled was agriculture, followed by forest and urban. The values for nutrients (nitrate, total Kjeldahl nitrogen, total nitrogen, and total phosphorus) and chlorophyll a (periphyton and seston) were compared to published U.S. Environmental Protection Agency (USEPA) values for Aggregate Nutrient Ecoregions VI and IX and Level III Ecoregions 55 and 72. Several nutrient values were greater than the 25th percentile of the published USEPA values. Chlorophyll a (periphyton and seston) values were either greater than the 25th percentile of published USEPA values or extended the data ranges in the Aggregate Nutrient Ecoregions and Level III Ecoregions. If the proposed values for the 25th percentile were adopted as nutrient water-quality criteria, many samples in the West Fork White River Basin would have exceeded the criteria.
Stochastic Control Synthesis of Systems with Structured Uncertainty
NASA Technical Reports Server (NTRS)
Padula, Sharon L. (Technical Monitor); Crespo, Luis G.
2003-01-01
This paper presents a study on the design of robust controllers that uses random variables to model structured uncertainty for both SISO and MIMO feedback systems. Once the parameter uncertainty is prescribed with probability density functions, its effects are propagated through the analysis, leading to stochastic metrics for the system's output. Control designs that aim for satisfactory performance while guaranteeing robust closed-loop stability are attained by solving constrained nonlinear optimization problems in the frequency domain. This approach permits not only quantifying the probability of having unstable and unfavorable responses for a particular control design but also searching for controls while favoring the parameter values with a higher chance of occurrence. In this manner, robust optimality is achieved while the characteristic conservatism of conventional robust control methods is eliminated. Examples that admit closed-form expressions for the probabilistic metrics of the output are used to elucidate the nature of the problem at hand and validate the proposed formulations.
Dirichlet Component Regression and its Applications to Psychiatric Data
Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel
2011-01-01
We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms, which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each in turn made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook's distance, and a local jackknife influence metric. PMID:22058582
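A generic maximum-likelihood sketch of Dirichlet regression in this spirit, with component concentrations modeled as alpha_j = exp(X·beta_j); this is one common parameterization, not necessarily the authors' exact model, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import dirichlet

rng = np.random.default_rng(3)
n, p, k = 300, 2, 3                        # subjects, covariates (incl. intercept), components
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_B = np.array([[1.0, 1.2, 0.8],        # intercepts per component
                   [0.3, -0.3, 0.2]])      # covariate effects per component
Y = np.vstack([rng.dirichlet(np.exp(x @ true_B)) for x in X])

def nll(b_flat):
    """Negative Dirichlet log-likelihood with alpha = exp(X @ B)."""
    alpha = np.exp(X @ b_flat.reshape(p, k))
    return -sum(dirichlet.logpdf(y, a) for y, a in zip(Y, alpha))

fit = minimize(nll, np.zeros(p * k), method="BFGS")
print(fit.x.reshape(p, k).round(2))        # should approach true_B
```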
Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin
2007-04-01
This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On these data, we achieve a 100 percent Correct Recognition Rate (CRR) and perfect Receiver Operating Characteristic (ROC) curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst-case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 × 10^−4 on the available data sets.
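The verification step, sweeping a threshold on a Hamming-distance metric and recording FAR/FRR, can be sketched with random bit codes standing in for real iris codes (the paper's product-of-sum matching is not modeled):

```python
import numpy as np

def hamming(a, b):
    """Fractional Hamming distance between two bit codes."""
    return np.mean(a != b)

rng = np.random.default_rng(4)
enrolled = rng.integers(0, 2, 2048, dtype=np.uint8)
# Genuine samples: the enrolled code with ~8% of bits flipped by noise.
genuine = [np.where(rng.random(2048) < 0.08, 1 - enrolled, enrolled) for _ in range(200)]
impostor = [rng.integers(0, 2, 2048, dtype=np.uint8) for _ in range(200)]

gen_d = np.array([hamming(enrolled, g) for g in genuine])      # ~0.08
imp_d = np.array([hamming(enrolled, i) for i in impostor])     # ~0.50

for t in (0.25, 0.30, 0.35):
    far = np.mean(imp_d <= t)   # impostors wrongly accepted
    frr = np.mean(gen_d > t)    # genuine users wrongly rejected
    print(f"threshold {t:.2f}: FAR={far:.3f} FRR={frr:.3f}")
```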
Multistressor predictive models of invertebrate condition in the Corn Belt, USA
Waite, Ian R.; Van Metre, Peter C.
2017-01-01
Understanding the complex relations between multiple environmental stressors and ecological conditions in streams can help guide resource-management decisions. During 14 weeks in spring/summer 2013, personnel from the US Geological Survey and the US Environmental Protection Agency sampled 98 wadeable streams across the Midwest Corn Belt region of the USA for water and sediment quality, physical and habitat characteristics, and ecological communities. We used these data to develop independent predictive disturbance models for 3 macroinvertebrate metrics and a multimetric index. We developed the models based on boosted regression trees (BRT) for 3 stressor categories: land use/land cover (geographic information system [GIS]) variables; all in-stream stressors combined (nutrients, habitat, and contaminants); and GIS plus in-stream stressors. The GIS plus in-stream stressor models had the best overall performance, with an average cross-validation R2 across all models of 0.41. The models were generally consistent in the explanatory variables selected within each stressor group across the 4 invertebrate metrics modeled. Variables related to riparian condition, substrate size or embeddedness, velocity and channel shape, nutrients (primarily NH3), and contaminants (pyrethroid degradates) were important descriptors of the invertebrate metrics. Models based on all measured in-stream stressors performed comparably to models based on GIS landscape variables, suggesting that the in-stream stressor characterization reasonably represents the dominant factors affecting invertebrate communities and that GIS variables are acting as surrogates for in-stream stressors that directly affect in-stream biota.
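A minimal boosted-regression-tree sketch in the spirit of the modeling above, using scikit-learn's gradient boosting and cross-validated R²; the stressor matrix and response are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 98                                         # matches the 98 sampled streams
stressors = rng.normal(size=(n, 6))            # e.g., riparian, substrate, NH3, ...
metric = stressors[:, 0] - 0.5 * stressors[:, 2] + rng.normal(0, 0.5, n)

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01, max_depth=3)
r2 = cross_val_score(brt, stressors, metric, cv=5, scoring="r2")
print(f"cross-validated R^2 = {r2.mean():.2f}")
```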
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals, we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
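A toy DCT-domain error measure in the spirit of this metric family appears below: blockwise DCTs of reference and test frames, coefficient errors scaled by a crude visibility weighting, then pooled. The weights are arbitrary placeholders, not the calibrated human-sensitivity model described in the paper.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_error(ref, test, block=8):
    """RMS of visibility-weighted DCT coefficient errors over 8x8 blocks."""
    h, w = (s - s % block for s in ref.shape)
    # Crude placeholder weights: low spatial frequencies count more.
    weights = 1.0 / (1.0 + np.add.outer(np.arange(block), np.arange(block)))
    err = 0.0
    for i in range(0, h, block):
        for j in range(0, w, block):
            diff = test[i:i + block, j:j + block] - ref[i:i + block, j:j + block]
            err += np.sum((weights * dctn(diff, norm="ortho")) ** 2)
    return np.sqrt(err / (h * w))

rng = np.random.default_rng(6)
frame = rng.random((64, 64))
noisy = frame + rng.normal(0, 0.02, frame.shape)   # simulated quantization noise
print(f"quality index: {block_dct_error(frame, noisy):.4f}")
```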
Term Based Comparison Metrics for Controlled and Uncontrolled Indexing Languages
ERIC Educational Resources Information Center
Good, B. M.; Tennis, J. T.
2009-01-01
Introduction: We define a collection of metrics for describing and comparing sets of terms in controlled and uncontrolled indexing languages and then show how these metrics can be used to characterize a set of languages spanning folksonomies, ontologies and thesauri. Method: Metrics for term set characterization and comparison were identified and…
Spatial modelling of landscape aesthetic potential in urban-rural fringes.
Sahraoui, Yohan; Clauzel, Céline; Foltête, Jean-Christophe
2016-10-01
The aesthetic potential of landscape has to be modelled to provide tools for land-use planning. This involves identifying landscape attributes and revealing individuals' landscape preferences. Landscape aesthetic judgments of individuals (n = 1420) were studied by means of a photo-based survey. A set of landscape visibility metrics was created to measure landscape composition and configuration in each photograph using spatial data. These metrics were used as explanatory variables in multiple linear regressions to explain aesthetic judgments. We demonstrate that landscape aesthetic judgments may be synthesized into three consensus groups. The statistical results obtained show that landscape visibility metrics have good explanatory power. Finally, we propose a spatial model of landscape aesthetic potential based on these results combined with systematic computation of the visibility metrics. Copyright © 2016 Elsevier Ltd. All rights reserved.
Colonoscopy Quality: Metrics and Implementation
Calderwood, Audrey H.; Jacobson, Brian C.
2013-01-01
Colonoscopy is an excellent area for quality improvement because it is high volume, has significant associated risk and expense, and there is evidence that variability in its performance affects outcomes. The best endpoint for validation of quality metrics in colonoscopy is colorectal cancer incidence and mortality, but because of feasibility issues, a more readily accessible metric is the adenoma detection rate (ADR). Fourteen quality metrics were proposed by the joint American Society for Gastrointestinal Endoscopy/American College of Gastroenterology Task Force on “Quality Indicators for Colonoscopy” in 2006, which are described in further detail below. Use of electronic health records and quality-oriented registries will facilitate quality measurement and reporting. Unlike traditional clinical research, implementation of quality improvement initiatives involves rapid assessments and changes on an iterative basis, and can be done at the individual, group, or facility level. PMID:23931862
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, F; Byrd, D; Bowen, S
2015-06-15
Purpose: Texture metrics extracted from oncologic PET have been investigated with respect to their usefulness as indicants of prognosis in a variety of cancers. Metric calculation is often based on cubic voxels; most commonly used PET scanners, however, produce rectangular voxels, which may change texture metrics. The objective of this study was to examine the variability of PET texture feature metrics resulting from voxel anisotropy. Methods: Sinograms of the NEMA NU-2 phantom for 18F-FDG were simulated using the ASIM simulation tool. The obtained projection data were reconstructed (3D-OSEM) on grids of rectangular and cubic voxels, producing PET images with resolutions of 2.73x2.73x3.27 mm3 and 3.27x3.27x3.27 mm3, respectively. An interpolated dataset, obtained by resampling the rectangular-voxel data to an isotropic voxel dimension (3.27 mm), was also considered. For each image dataset, 28 texture parameters based on grey-level co-occurrence matrices (GLCOM), intensity histograms (GLIH), neighborhood difference matrices (GLNDM), and zone size matrices (GLZSM) were evaluated within lesions of diameter 33, 28, 22, and 17 mm. Results: Relative to the isotropic image data, texture features computed on the rectangular-voxel data varied by −34% to 10% (GLCOM-based), −31% to 39% (GLIH-based), −80% to 161% (GLNDM-based), and −6% to 45% (GLZSM-based), while features computed on the interpolated image data varied by −35% to 23% (GLCOM-based), −27% to 35% (GLIH-based), −65% to 86% (GLNDM-based), and −22% to 18% (GLZSM-based). For the anisotropic data, GLNDM-cplx exhibited the largest variation (161%) while GLZSM-zp showed the least (<1%). For the interpolated data, GLNDM-busy varied the most (86%) while GLIH-engy varied the least (<1%). Conclusion: Variability of texture appearance on oncologic PET with respect to voxel representation is substantial and feature-dependent, necessitating standardized voxel representation for inter-institution studies attempting to validate the prognostic value of PET texture features in cancer treatment.
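The core comparison can be sketched with grey-level co-occurrence features: compute a texture property on an image, resample the image to a different grid, and report the percent change. This 2-D stand-in (recent scikit-image API assumed) only illustrates the mechanism, not the simulation study itself.

```python
import numpy as np
from scipy.ndimage import zoom
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(7)
img = rng.integers(0, 32, (60, 60))        # quantized stand-in for lesion uptake

def glcm_feature(a, prop="contrast"):
    """One grey-level co-occurrence feature (distance 1, angle 0, 32 levels)."""
    g = graycomatrix(a.astype(np.uint8), distances=[1], angles=[0],
                     levels=32, symmetric=True, normed=True)
    return graycoprops(g, prop)[0, 0]

# Interpolate to a coarser grid, mimicking a voxel-size change (2.73 -> 3.27 mm).
resampled = np.clip(zoom(img.astype(float), 2.73 / 3.27, order=1), 0, 31).round()
f0, f1 = glcm_feature(img), glcm_feature(resampled)
print(f"contrast change after resampling: {100 * (f1 - f0) / f0:+.1f}%")
```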
Anderson, Donald D; Kilburg, Anthony T; Thomas, Thaddeus P; Marsh, J Lawrence
2016-01-01
Post-traumatic osteoarthritis (PTOA) is common after intra-articular fractures of the tibial plafond. An objective CT-based measure of fracture severity was previously found to reliably predict whether PTOA developed following surgical treatment of such fractures. However, the extended time required to obtain the fracture energy metric and its reliance upon an intact contralateral limb CT limited its clinical applicability. The objective of this study was to establish an expedited fracture severity metric that provided comparable PTOA predictive ability without the prior limitations. An expedited fracture severity metric was computed from the CT scans of 30 tibial plafond fractures using textural analysis to quantify disorder in CT images. The expedited method utilized an intact surrogate model to enable severity assessment without requiring a contralateral limb CT. Agreement between the expedited fracture severity metric and the Kellgren-Lawrence (KL) radiographic OA score at two-year follow-up was assessed using concordance. The ability of the metric to differentiate between patients who did or did not develop PTOA was assessed using the Wilcoxon rank sum test. The expedited severity metric agreed well (75.2% concordance) with the KL scores. The initial fracture severity of cases that developed PTOA differed significantly (p = 0.004) from those that did not. Receiver operating characteristic analysis showed that the expedited severity metric could accurately predict the PTOA outcome in 80% of the cases. The time required to obtain the expedited severity metric averaged 14.9 minutes/case, and the metric was obtained without using an intact contralateral CT. The expedited CT-based methods for fracture severity assessment present a solution to the issues limiting the utility of prior methods. In a relatively short amount of time, the expedited methodology provided a severity score capable of predicting PTOA risk without needing the intact contralateral limb included in the CT scan. The described methods provide surgeons an objective, quantitative representation of the severity of a fracture. Obtained prior to surgery, it provides a reasonable alternative to current subjective classification systems. The expedited severity metric offers surgeons an objective means for factoring the severity of joint insult into treatment decision-making.
Towards a Visual Quality Metric for Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1998-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Jonas, Jayne L; Buhl, Deborah A; Symstad, Amy J
2015-09-01
Better understanding the influence of precipitation and temperature on plant assemblages is needed to predict the effects of climate change. Many studies have examined the relationship between plant productivity and weather (primarily precipitation), but few have directly assessed the relationship between plant richness or diversity and weather despite their increased use as metrics of ecosystem condition. We focus on the grasslands of central North America, which are characterized by high temporal climatic variability. Over the next 100 years, these grasslands are predicted to experience further increased variability in growing season precipitation, as well as increased temperatures, due to global climate change. We assess the portion of interannual variability of richness and diversity explained by weather, how relationships between these metrics and weather vary among plant assemblages, and which aspects of weather best explain temporal variability. We used an information-theoretic approach to assess relationships between long-term plant richness and diversity patterns and a priori weather covariates using six data sets from four grasslands. Weather explained up to 49% and 63% of interannual variability in total plant species richness and diversity, respectively. However, richness and diversity responses to specific weather variables varied both among sites and among experimental treatments within sites. In general, we found many instances in which temperature was of equal or greater importance as precipitation, as well as evidence of the importance of lagged effects and precipitation or temperature variability. Although precipitation has been shown to be a key driver of productivity in grasslands, our results indicate that increasing temperatures alone, without substantial changes in precipitation patterns, could have measurable effects on Great Plains grassland plant assemblages and biodiversity metrics. Our results also suggest that richness and diversity will respond in unique ways to changing climate and management can affect these responses; additional research and monitoring will be essential for further understanding of these complex relationships.
A new approach for modeling gravitational radiation from the inspiral of two neutron stars
NASA Astrophysics Data System (ADS)
Luke, Stephen A.
In this dissertation, a new method of applying the ADM formalism of general relativity to model the gravitational radiation emitted from the realistic inspiral of a neutron star binary is described. A description of the conformally flat condition (CFC) is summarized, and the ADM equations are solved by use of the CFC approach for a neutron star binary. The advantages and limitations of this approach are discussed, and the need for a more accurate improvement to this approach is described. To address this need, a linearized perturbation of the CFC spatial three-metric is then introduced. The general relativistic hydrodynamic equations are then allowed to evolve against this basis under the assumption that the first-order corrections to the hydrodynamic variables are negligible compared to their CFC values. As a first approximation, the linear corrections to the conformal factor, lapse function, and shift vector are also assumed to be small compared to the extrinsic curvature and the three-metric. A boundary matching method is then introduced as a way of computing the gravitational radiation of this relativistic system without use of the multipole expansion employed by earlier applications of the CFC approach. It is assumed that at a location far from the source, the three-metric is accurately described by a linear correction to Minkowski spacetime. The two polarizations of gravitational radiation can then be computed at that point in terms of the linearized correction to the metric. The evolution equations obtained from the linearized perturbative correction to the CFC approach and the method for recovery of the gravity wave signal are then tested by use of a three-dimensional numerical simulation. This code is used to compute the gravity wave signal emitted by a pair of equal-mass neutron stars in quasi-stable circular orbits at a point early in their inspiral phase. From this simple numerical analysis, the correct general trend of gravitational radiation is recovered. Comparisons with (5/2) post-Newtonian solutions show a similar gravitational waveform, although inaccuracies remain in this computation. Finally, several areas for improvement and potential future applications of this technique are discussed.
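For orientation, the recovery step rests on textbook linearized theory: far from the source the metric is a small perturbation of Minkowski spacetime, and in the transverse-traceless gauge the two polarizations read directly off the spatial components:

```latex
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1,
\qquad
h_{+} = \tfrac{1}{2}\left(h^{\mathrm{TT}}_{xx} - h^{\mathrm{TT}}_{yy}\right),
\qquad
h_{\times} = h^{\mathrm{TT}}_{xy}.
```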
Horn, Felix C; Marshall, Helen; Collier, Guilhem J; Kay, Richard; Siddiqui, Salman; Brightling, Christopher E; Parra-Robles, Juan; Wild, Jim M
2017-09-01
Purpose: To assess the magnitude of regional response to respiratory therapeutic agents in the lungs by using treatment response mapping (TRM) with hyperpolarized gas magnetic resonance (MR) imaging. TRM was used to quantify regional physiologic response in adults with asthma who underwent a bronchodilator challenge. Materials and Methods: This study was approved by the national research ethics committee and was performed with informed consent. Imaging was performed in 20 adult patients with asthma by using hyperpolarized helium-3 (3He) ventilation MR imaging. Two sets of baseline images were acquired before inhalation of a bronchodilating agent (salbutamol 400 μg), and one set was acquired after. All images were registered for voxelwise comparison. Regional treatment response, ΔR(r), was calculated as the difference in regional gas distribution (R[r] = ratio of inhaled gas to total volume of a voxel when normalized for lung inflation volume) before and after intervention. A voxelwise activation threshold derived from the variability of the baseline images was applied to the ΔR(r) maps. The summed global treatment response map (ΔRnet) was then used as a global lung index for comparison with metrics of bronchodilator response measured by using spirometry and with the global imaging metric percentage ventilated volume (%VV). Results: ΔRnet showed significant correlation (P < .01) with changes in forced expiratory volume in 1 second (r = 0.70), forced vital capacity (r = 0.84), and %VV (r = 0.56). A significant (P < .01) positive treatment effect was detected with all metrics; however, ΔRnet showed a lower intersubject coefficient of variation (64%) than all of the other tests (coefficient of variation ≥ 99%). Conclusion: TRM provides regional quantitative information on changes in inhaled gas ventilation in response to therapy. This method could be used as a sensitive regional outcome metric for novel respiratory interventions. © RSNA, 2017. Online supplemental material is available for this article.
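A voxelwise sketch of the treatment-response mapping described: normalize each ventilation image to its total signal, subtract pre- from post-bronchodilator maps, zero out changes below a baseline-variability threshold, and sum. The arrays and the thresholding rule here are simplified stand-ins for the study's procedure.

```python
import numpy as np

rng = np.random.default_rng(8)
shape = (32, 32, 16)
pre1 = rng.random(shape)
pre2 = pre1 + rng.normal(0, 0.02, shape)           # second baseline acquisition
post = pre1 + 0.1 * (rng.random(shape) > 0.7)      # focal post-bronchodilator change

def gas_distribution(v):
    """R(r): each voxel's share of the total inhaled-gas signal."""
    return v / v.sum()

threshold = np.abs(gas_distribution(pre2) - gas_distribution(pre1))  # baseline variability
delta_r = gas_distribution(post) - gas_distribution(pre1)
delta_r[np.abs(delta_r) <= threshold] = 0.0        # keep only "activated" voxels
print(f"simplified global index ΔRnet = {delta_r.sum():.4f}")
```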
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleh, Z; Thor, M; Apte, A
2014-06-01
Purpose: The quantitative evaluation of deformable image registration (DIR) is currently challenging due to the lack of a ground truth. In this study we test a new method proposed for quantifying multiple-image-based DIR-related uncertainties, for DIR of pelvic images. Methods: 19 patients who previously had radiotherapy for prostate cancer were analyzed, each with 6 CT scans. Manually delineated structures for the rectum and bladder, which served as ground-truth structures, were delineated on the planning CT and each subsequent scan. For each patient, voxel-by-voxel DIR-related uncertainties were evaluated, following B-spline-based DIR, by applying a previously developed metric, the distance discordance metric (DDM; Saleh et al., PMB (2014) 59:733). The DDM map was superimposed on the first acquired CT scan and DDM statistics were assessed, also relative to two metrics estimating the agreement between the propagated and the manually delineated structures. Results: The highest DDM values, which correspond to the greatest spatial uncertainties, were observed near the body surface and in the bowel due to the presence of gas. The mean rectal and bladder DDM values ranged from 1.1–11.1 mm and 1.5–12.7 mm, respectively. There was a strong correlation in the DDMs between the rectum and bladder (Pearson R = 0.68 for the max DDM). For both structures, DDM was correlated with the ratio between the DIR-propagated and manually delineated volumes (R = 0.74 for the max rectal DDM). The maximum rectal DDM was negatively correlated with the Dice Similarity Coefficient between the propagated and the manually delineated volumes (R = −0.52). Conclusion: The multiple-image-based DDM map quantified considerable DIR variability across different structures and among patients. Besides using the DDM for quantifying DIR-related uncertainties, it could potentially be used to adjust for uncertainties in DIR-based accumulated dose distributions.
USDA-ARS?s Scientific Manuscript database
Semiarid grasslands contribute significantly to net terrestrial carbon flux as plant productivity and heterotrophic respiration in these moisture-limited systems are correlated with metrics related to water availability (e.g., precipitation, Actual EvapoTranspiration or AET). These variables are als...
Developing a Predictive Metric to Assess School Viability
ERIC Educational Resources Information Center
James, John T.; Tichy, Karen L.; Collins, Alan; Schwob, John
2008-01-01
This article examines a wide range of parish school indicators that can be used to predict long-term viability. The study reported in this article explored the relationship between demographic variables, financial variables, and parish grade school closures in the Archdiocese of Saint Louis. Specifically, this study investigated whether…
Pressure-specific and multiple pressure response of fish assemblages in European running waters
Schinegger, Rafaela; Trautwein, Clemens; Schmutz, Stefan
2013-01-01
We classified homogenous river types across Europe and searched for fish metrics qualified to show responses to specific pressures (hydromorphological pressures or water quality pressures) vs. multiple pressures in these river types. We analysed fish taxa lists from 3105 sites in 16 ecoregions and 14 countries. Sites were pre-classified for 15 selected pressures to separate unimpacted from impacted sites. Hierarchical cluster analysis was used to split unimpacted sites into four homogenous river types based on species composition and geographical location. Classification trees were employed to predict associated river types for impacted sites with four environmental variables. We defined a set of 129 candidate fish metrics to select the best reacting metrics for each river type. The candidate metrics represented tolerances/intolerances of species associated with six metric types: habitat, migration, water quality sensitivity, reproduction, trophic level and biodiversity. The results showed that 17 uncorrelated metrics reacted to pressures in the four river types. Metrics responded specifically to water quality pressures and hydromorphological pressures in three river types and to multiple pressures in all river types. Four metrics associated with water quality sensitivity showed a significant reaction in up to three river types, whereas 13 metrics were specific to individual river types. Our results contribute to the better understanding of fish assemblage response to human pressures at a pan-European scale. The results are especially important for European river management and restoration, as it is necessary to uncover underlying processes and effects of human pressures on aquatic communities. PMID:24003262
DR-TAMAS: Diffeomorphic Registration for Tensor Accurate alignMent of Anatomical Structures
Irfanoglu, M. Okan; Nayak, Amritha; Jenkins, Jeffrey; Hutchinson, Elizabeth B.; Sadeghi, Neda; Thomas, Cibu P.; Pierpaoli, Carlo
2016-01-01
In this work, we propose DR-TAMAS (Diffeomorphic Registration for Tensor Accurate alignMent of Anatomical Structures), a novel framework for intersubject registration of Diffusion Tensor Imaging (DTI) data sets. This framework is optimized for brain data and its main goal is to achieve an accurate alignment of all brain structures, including white matter (WM), gray matter (GM), and spaces containing cerebrospinal fluid (CSF). Currently most DTI-based spatial normalization algorithms emphasize alignment of anisotropic structures. While some diffusion-derived metrics, such as diffusion anisotropy and tensor eigenvector orientation, are highly informative for proper alignment of WM, other tensor metrics such as the trace or mean diffusivity (MD) are fundamental for a proper alignment of GM and CSF boundaries. Moreover, it is desirable to include information from structural MRI data, e.g., T1-weighted or T2-weighted images, which are usually available together with the diffusion data. The fundamental property of DR-TAMAS is to achieve global anatomical accuracy by incorporating in its cost function the most informative metrics locally. Another important feature of DR-TAMAS is a symmetric time-varying velocity-based transformation model, which enables it to account for potentially large anatomical variability in healthy subjects and patients. The performance of DR-TAMAS is evaluated with several data sets and compared with other widely-used diffeomorphic image registration techniques employing both full tensor information and/or DTI-derived scalar maps. Our results show that the proposed method has excellent overall performance in the entire brain, while being equivalent to the best existing methods in WM. PMID:26931817
NASA Astrophysics Data System (ADS)
Whiles, Matt R.; Brock, Brent L.; Franzen, Annette C.; Dinsmore, Steven C., II
2000-11-01
We used invertebrate bioassessment, habitat analysis, geographic information system analysis of land use, and water chemistry monitoring to evaluate tributaries of a degraded northeast Nebraska, USA, reservoir. Bimonthly invertebrate collections and monthly water chemistry samples were collected for two years on six stream reaches to identify sources contributing to reservoir degradation and test suitability of standard rapid bioassessment methods in this region. A composite biotic index composed of seven commonly used metrics was effective for distinguishing between differentially impacted sites and responded to a variety of disturbances. Individual metrics varied greatly in precision and ability to discriminate between relatively impacted and unimpacted stream reaches. A modified Hilsenhoff index showed the highest precision (reference site CV = 0.08) but was least effective at discriminating among sites. Percent dominance and the EPT (number of Ephemeroptera, Plecoptera, and Trichoptera taxa) metrics were most effective at discriminating between sites and exhibited intermediate precision. A trend of higher biotic integrity during summer was evident, indicating seasonal corrections should differ from other regions. Poor correlations were evident between water chemistry variables and bioassessment results. However, land-use factors, particularly within 18-m riparian zones, were correlated with bioassessment scores. For example, there was a strong negative correlation between percentage of rangeland in 18-m riparian zones and percentage of dominance in streams (r² = 0.90, P < 0.01). Results demonstrate that standard rapid bioassessment methods, with some modifications, are effective for use in this agricultural region of the Great Plains and that riparian land use may be the best predictor of stream biotic integrity.
Characterizing Sub-Daily Flow Regimes: Implications of Hydrologic Resolution on Ecohydrology Studies
Bevelhimer, Mark S.; McManamay, Ryan A.; O'Connor, B.
2014-05-26
Natural variability in flow is a primary factor controlling geomorphic and ecological processes in riverine ecosystems. Within the hydropower industry, there is growing pressure from environmental groups and natural resource managers to change reservoir releases from daily peaking to run-of-river operations on the basis of the assumption that downstream biological communities will improve under a more natural flow regime. In this paper, we discuss the importance of assessing sub-daily flows for understanding the physical and ecological dynamics within river systems. We present a variety of metrics for characterizing sub-daily flow variation and use these metrics to evaluate general trends among streams affected by peaking hydroelectric projects, run-of-river projects and streams that are largely unaffected by flow altering activities. Univariate and multivariate techniques were used to assess similarity among different stream types on the basis of these sub-daily metrics. For comparison, similar analyses were performed using analogous metrics calculated with mean daily flow values. Our results confirm that sub-daily flow metrics reveal variation among and within streams that are not captured by daily flow statistics. Using sub-daily flow statistics, we were able to quantify the degree of difference between unaltered and peaking streams and the amount of similarity between unaltered and run-of-river streams. The sub-daily statistics were largely uncorrelated with daily statistics of similar scope. Furthermore, on short temporal scales, sub-daily statistics reveal the relatively constant nature of unaltered stream reaches and the highly variable nature of hydropower-affected streams, whereas daily statistics show just the opposite over longer temporal scales.
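To make the daily-versus-sub-daily contrast concrete, the sketch below computes one common flashiness measure, the Richards-Baker index, on a synthetic hourly hydrograph for a peaking project and on the same series averaged to daily values. It illustrates the general point rather than reproducing any of the paper's specific metrics; all flow values are invented.

```python
import numpy as np

def rb_flashiness(q: np.ndarray) -> float:
    """Richards-Baker flashiness index: sum of absolute step-to-step flow
    changes divided by the total flow over the same steps."""
    return float(np.abs(np.diff(q)).sum() / q[1:].sum())

# Synthetic hourly hydrograph for a peaking project: one 2-h pulse per day.
hours = np.arange(24 * 30)
hourly = 5.0 + 45.0 * ((hours % 24 >= 7) & (hours % 24 < 9))
daily = hourly.reshape(-1, 24).mean(axis=1)   # the same series, daily means

print(f"hourly RB index: {rb_flashiness(hourly):.2f}")
print(f"daily  RB index: {rb_flashiness(daily):.2f}")   # flashiness vanishes
```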
Park, Sung Wook; Brenneman, Michael; Cooke, William H; Cordova, Alberto; Fogt, Donovan
The purpose was to determine if heart rate (HR) and heart rate variability (HRV) responses would reflect anaerobic threshold (AT) using a discontinuous, incremental cycle test. AT was determined by ventilatory threshold (VT). Cyclists (30.6±5.9 y; 7 males, 8 females) completed a discontinuous cycle test consisting of 7 stages (6 min each, with 3 min of rest between). Three stages were performed at power outputs (W) below those corresponding to a previously established AT, one at W corresponding to AT, and 3 at W above those corresponding to AT. The W at the intersection of the trend lines was considered each metric's "threshold". The averaged stage data for Ve, HR, and time- and frequency-domain HRV metrics were plotted versus W. The W at the "threshold" for the metrics of interest were compared using correlation analysis and paired-sample t-tests. In all, several heart rate-related parameters accurately reflected AT: significant correlations (p≤0.05) were observed between AT W and the threshold W of the HR, mean RR interval (MRR), low- and high-frequency spectral energy (LF and HF, respectively), high-frequency peak (fHF), and HFxfHF metrics (i.e., MRRTW, etc.). Differences between HR or HRV metric threshold W and AT W were less than 14 W for all subjects. The steady-state data from discontinuous protocols may allow for a true indication of steady-state physiologic stress responses and corresponding W at AT, compared to continuous protocols using 1-2 min exercise stages.
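The "intersection of the trend lines" step can be sketched directly: fit one line to the sub-threshold stages and one to the supra-threshold stages, then solve for the power output where they cross. The stage values and the split point below are invented for illustration; the authors' exact fitting windows may differ.

```python
import numpy as np

def threshold_watts(w: np.ndarray, y: np.ndarray, split: int) -> float:
    """Fit one trend line to stages before `split` and one to the rest,
    then return the power output where the two lines intersect."""
    m1, b1 = np.polyfit(w[:split], y[:split], 1)
    m2, b2 = np.polyfit(w[split:], y[split:], 1)
    return (b2 - b1) / (m1 - m2)

# Toy stage averages: ventilation rises slowly below AT, steeply above it.
watts = np.array([100., 130., 160., 190., 220., 250., 280.])
ve = np.array([35., 41., 47., 53., 68., 85., 102.])
print(f"estimated threshold ~ {threshold_watts(watts, ve, split=4):.0f} W")
```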
Characterizing Heterogeneity in Infiltration Rates During Managed Aquifer Recharge.
Mawer, Chloe; Parsekian, Andrew; Pidlisecky, Adam; Knight, Rosemary
2016-11-01
Infiltration rate is the key parameter that describes how water moves from the surface into a groundwater aquifer during managed aquifer recharge (MAR). Characterization of infiltration rate heterogeneity in space and time is valuable information for MAR system operation. In this study, we utilized fiber optic distributed temperature sensing (FO-DTS) observations and the phase shift of the diurnal temperature signal between two vertically co-located fiber optic cables to characterize infiltration rate spatially and temporally in a MAR basin. The FO-DTS measurements revealed spatial heterogeneity of infiltration rate: approximately 78% of the recharge water infiltrated through 50% of the pond bottom on average. We also introduced a metric for quantifying how the infiltration rate in a recharge pond changes over time, which enables FO-DTS to be used as a method for monitoring MAR and informing maintenance decisions. By monitoring this metric, we found high-spatial variability in how rapidly infiltration rate changed during the test period. We attributed this variability to biological pore clogging and found a relationship between high initial infiltration rate and the most rapid pore clogging. We found a strong relationship (R² = 0.8) between observed maximum infiltration rates and electrical resistivity measurements from electrical resistivity tomography data taken in the same basin when dry. This result shows that the combined acquisition of DTS and ERT data can improve the design and operation of a MAR pond significantly by providing the critical information needed about spatial variability in parameters controlling infiltration rates. © 2016, National Ground Water Association.
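A minimal sketch of the phase-shift step is shown below: the lag of the lower cable's diurnal temperature signal behind the upper cable's is estimated from the peak of their cross-correlation. Converting that lag into an infiltration rate requires a heat-transport model, which is omitted here; the sampling interval and signals are synthetic assumptions.

```python
import numpy as np

def diurnal_lag_hours(t_upper: np.ndarray, t_lower: np.ndarray,
                      dt_hours: float = 0.25) -> float:
    """Lag (h) of the lower cable's diurnal signal behind the upper cable's,
    from the peak of the cross-correlation of the demeaned series."""
    a = t_upper - t_upper.mean()
    b = t_lower - t_lower.mean()
    xc = np.correlate(b, a, mode="full")
    lag_samples = np.argmax(xc) - (len(a) - 1)
    return lag_samples * dt_hours

# Synthetic 15-min temperature records over four days; the lower sensor lags
# by 3 h and is attenuated, as expected for downward heat transport.
t = np.arange(0.0, 96.0, 0.25)
upper = 20.0 + 2.0 * np.sin(2 * np.pi * t / 24.0)
lower = 20.0 + 1.2 * np.sin(2 * np.pi * (t - 3.0) / 24.0)
print(f"estimated lag: {diurnal_lag_hours(upper, lower):.2f} h")
```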
Griffith, Michael B; Lazorchak, James M; Herlihy, Alan T
2004-07-01
If bioassessments are to help diagnose the specific environmental stressors affecting streams, a better understanding is needed of the relationships between community metrics and ambient criteria or ambient bioassays. However, this relationship is not simple, because metrics assess responses at the community level of biological organization, while ambient criteria and ambient bioassays assess or are based on responses at the individual level. For metals, the relationship is further complicated by the influence of other chemical variables, such as hardness, on their bioavailability and toxicity. In 1993 and 1994, the U.S. Environmental Protection Agency (U.S. EPA) conducted a Regional Environmental Monitoring and Assessment Program (REMAP) survey on wadeable streams in Colorado's (USA) Southern Rockies Ecoregion. In this ecoregion, mining over the past century has resulted in metals contamination of streams. The surveys collected data on fish and macroinvertebrate assemblages, physical habitat, and sediment and water chemistry and toxicity. These data provide a framework for assessing diagnostic community metrics for specific environmental stressors. We characterized streams as metals-affected based on exceedance of hardness-adjusted criteria for cadmium, copper, lead, and zinc in water; on water toxicity tests (48-h Pimephales promelas and Ceriodaphnia dubia survival); on exceedance of sediment threshold effect levels (TELs); or on sediment toxicity tests (7-d Hyalella azteca survival and growth). Macroinvertebrate and fish metrics were compared among affected and unaffected sites to identify metrics sensitive to metals. Several macroinvertebrate metrics, particularly richness metrics, were lower in affected streams, while other metrics were not. This is a function of the sensitivity of the individual metrics to metals effects. Fish metrics were less sensitive to metals because of the low diversity of fish in these streams.
Characterizing local biological hotspots in the Gulf of Maine using remote sensing data
NASA Astrophysics Data System (ADS)
Ribera, Marta M.
Researchers increasingly advocate the use of ecosystem-based management (EBM) for managing complex marine ecosystems. This approach requires managers to focus on processes and cross-scale interactions, rather than individual components. However, they often lack appropriate tools and data sources to pursue this change in management approach. One method that has been proposed to understand the ecological complexity inherent in marine ecosystems is the study of biological hotspots. Biological hotspots are locations where organisms from different trophic levels aggregate to feed on abundant supplies, and they are considered a first step toward understanding the processes driving spatial and temporal heterogeneity in marine systems. Biological hotspots are supported by phytoplankton aggregations, which are characterized by high spatial and temporal variability. As a result, methods developed to locate biological hotspots in relatively stable terrestrial systems are not well suited for more dynamic marine ecosystems. The main objective of this thesis is thus to identify and characterize local-scale biological hotspots in the western side of the Gulf of Maine. The first chapter describes a new methodological framework with the steps needed to locate these types of hotspots in marine ecosystems using remote sensing datasets. Then, in the second chapter these hotspots are characterized using a novel metric that uses time series information and spatial statistics to account for both the temporal variability and spatial structure of these marine aggregations. This metric redefines biological hotspots as areas with a high probability of exhibiting positive anomalies of productivity compared to the expected regional seasonal pattern. Finally, the third chapter compares the resulting biological hotspots to fishery-dependent abundance indices of surface and benthic predators to determine the effect of the location and magnitude of phytoplankton aggregations on the rest of the ecosystem. Analyses indicate that the spatial scale and magnitude of biological hotspots in the Gulf of Maine depend on the location and time of the year. Results also show that these hotspots change over time in response to both short-term oceanographic processes and long-term climatic cycles. Finally, the new metric presented here facilitates the spatial comparison between different trophic levels, thus allowing interdisciplinary ecosystem-wide studies.
Qualifying variability: patterns in water quality and biota from a long-term, multi-stream dataset
Camille Flinders; Douglas McLaughlin
2016-01-01
Effective water resources assessment and management requires quantitative information on the variability of ambient and biological conditions in aquatic communities. Although it is understood that natural systems are variable, robust estimates of variation in water quality and biotic endpoints (e.g. community-based structure and function metrics) are rare in US waters...
Papathanasiou, Vasileios; Orfanidis, Sotiris
2017-12-07
The variation of eleven Cymodocea nodosa metrics was studied along two anthropogenic gradients in the North Aegean Sea, in two separate periods (July 2004 and July 2013). The aim was to tailor existing monitoring programs to different kinds of human-induced or natural stress, for better decision-making support. Key water variables (N-NO₂, N-NO₃, N-NH₄, P-PO₄, Chl-a, attenuation coefficient K, and suspended solids) along with the stress index MALUSI were also estimated in each sampling effort. All metrics (except one) showed significant differences (p<0.05) and the highest variation at the meadow scale in both sampling periods. The body-size metrics, e.g., CymoSkew, total and maximum leaf length, and leaf area (cm²/shoot), rather than the abundance metrics, e.g., shoot density (shoots/m²) and leaf area index (m²/m²), were related to anthropogenic eutrophication variables represented by N-NH₄, N-NO₃, N/P and MALUSI. The temporal analysis was restricted to the two meadows and the water variables that were common between the two periods. PERMANOVA and PCA of the common meadows and metrics across the nine years showed significant but not consistent differences. While the most impacted site studied, Viamyl, remained unchanged, a significant improvement in water quality was observed in the second most impacted meadow, Nea Karvali, which however was reduced to half of its previous area. The improvement was the result of combined management practices in nearby aquacultures and lower industrial activity due to the economic crisis; conversely, dredging and excess siltation from changes in land catchments and the construction of permanent structures may decrease seagrass abundance. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kuhn, T; Gullett, J M; Nguyen, P; Boutzoukas, A E; Ford, A; Colon-Perez, L M; Triplett, W; Carney, P R; Mareci, T H; Price, C C; Bauer, R M
2016-06-01
This study examined the reliability of high angular resolution diffusion tensor imaging (HARDI) data collected on a single individual across several sessions using the same scanner. HARDI data were acquired for one healthy adult male at the same time of day on ten separate days across a one-month period. Environmental factors (e.g. temperature) were controlled across scanning sessions. Tract Based Spatial Statistics (TBSS) was used to assess session-to-session variability in measures of diffusion, fractional anisotropy (FA) and mean diffusivity (MD). To address reliability within specific structures of the medial temporal lobe (MTL; the focus of an ongoing investigation), probabilistic tractography segmented the Entorhinal cortex (ERc) based on connections with Hippocampus (HC), Perirhinal (PRc) and Parahippocampal (PHc) cortices. Streamline tractography generated edge weight (EW) metrics for the aforementioned ERc connections and, as comparison regions, connections between left and right rostral and caudal anterior cingulate cortex (ACC). Coefficients of variation (CoV) were derived for the surface area and volumes of these ERc connectivity-defined regions (CDR) and for EW across all ten scans, expecting that scan-to-scan reliability would yield low CoVs. TBSS revealed no significant variation in FA or MD across scanning sessions. Probabilistic tractography successfully reproduced histologically-verified adjacent medial temporal lobe circuits. Tractography-derived metrics displayed larger ranges of scan-to-scan variability. Connections involving HC displayed greater variability than metrics of connection between other investigated regions. By confirming the test–retest reliability of HARDI data acquisition, support for the validity of significant results derived from diffusion data can be obtained.
NASA Astrophysics Data System (ADS)
Pereira, A. A.; Gironas, J. A.; Passalacqua, P.; Mejia, A.; Niemann, J. D.
2017-12-01
Previous work has shown that lithological, tectonic and climatic processes have a major influence in shaping the geomorphology of river networks. Accordingly, quantitative classification methods have been developed to identify and characterize network types (dendritic, parallel, pinnate, rectangular and trellis) based solely on the self-affinity of their planform properties, computed from available Digital Elevation Model (DEM) data. In contrast, this research aims to include both horizontal and vertical properties in a quantitative classification method for river networks. We include vertical properties to capture the distinctive surficial conditions (e.g., large and steep height drops, volcanic activity, and complexity of stream networks) of the Andes Mountains. A further goal of the research is to explain the implications of, and possible relations between, the hydro-geomorphological properties and climatic conditions. The classification method is applied to 42 basins in the southern Andes in Chile, ranging in size from 208 km² to 8,000 km². The planform metrics include the incremental drainage area, stream course irregularity and junction angles, while the vertical metrics include the hypsometric curve and the slope-area relationship. We introduce new network structures (Brush, Funnel and Low Sinuosity Rectangular), possibly unique to the Andes, that can be quantitatively differentiated from the networks previously identified in other geographic regions. This research then evaluates the effect that excluding streams of different Strahler orders has on the horizontal properties and therefore on the classification. We found that climatic conditions are linked not only to horizontal parameters but also to vertical ones, with significant correlations between climatic variables (average near-surface temperature and rainfall) and vertical measures (parameters associated with the hypsometric curve and the slope-area relation). The proposed classification shows differences among basins previously classified as the same type, which are not noticeable in their horizontal properties, and helps reduce misclassifications within the old clusters. Additional hydro-geomorphological metrics are to be considered in the classification method to improve its effectiveness.
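Of the vertical metrics mentioned, the hypsometric curve is straightforward to compute from DEM elevations; the sketch below also uses the standard shortcut that the hypsometric integral equals the basin's mean relative elevation. The toy DEM is a random placeholder, not Andean data.

```python
import numpy as np

def hypsometric_curve(elev: np.ndarray, n_points: int = 101):
    """Relative area above each relative height, from DEM elevations."""
    z = elev.ravel()
    z = (z - z.min()) / (z.max() - z.min())     # relative height in [0, 1]
    h = np.linspace(0.0, 1.0, n_points)
    rel_area = np.array([(z >= hi).mean() for hi in h])
    return h, rel_area

def hypsometric_integral(elev: np.ndarray) -> float:
    """Standard shortcut: HI = (mean - min) / (max - min) of the elevations."""
    z = elev.ravel()
    return float((z.mean() - z.min()) / (z.max() - z.min()))

rng = np.random.default_rng(1)
dem = rng.gamma(shape=2.0, scale=400.0, size=(200, 200))   # toy basin DEM
h, a = hypsometric_curve(dem)
print(f"relative area above mid-height: {a[50]:.2f}")
print(f"hypsometric integral ~ {hypsometric_integral(dem):.2f}")
```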
Ring-push metric learning for person reidentification
NASA Astrophysics Data System (ADS)
He, Botao; Yu, Shaohua
2017-05-01
Person reidentification (re-id) has been widely studied because of its extensive use in video surveillance and forensics applications. It aims to search for a specific person across a nonoverlapping camera network, which is highly challenging due to large variations in the cluttered background, human pose, and camera viewpoint. We present a metric learning algorithm for learning a Mahalanobis distance for re-id. Generally speaking, there exist two forces in the conventional metric learning process: one pulling force that pulls points of the same class closer, and one pushing force that pushes points of different classes as far apart as possible. We argue that, when only a limited number of training data are given, forcing interclass distances to be as large as possible may drive the metric to overfit the uninformative part of the images, such as noise and backgrounds. To alleviate overfitting, we propose the ring-push metric learning algorithm. Different from other metric learning methods that only punish too-small interclass distances, the proposed method punishes both too-small and too-large interclass distances. By introducing the generalized logistic function as the loss, we formulate ring-push metric learning as a convex optimization problem and utilize the projected gradient descent method to solve it. The experimental results on four public datasets demonstrate the effectiveness of the proposed algorithm.
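A rough reimplementation of the ring-push idea is sketched below: same-class pairs are pulled inside a radius while different-class pairs are pushed into a ring, so both too-small and too-large interclass distances incur loss, and each gradient step is followed by projection onto the positive semidefinite cone. The smooth hinge used here stands in for the paper's generalized logistic loss, and all radii and step sizes are made-up values, so this is a sketch of the idea rather than the authors' algorithm.

```python
import numpy as np

def sigmoid(z, beta: float = 5.0):
    """Derivative of a smooth hinge; stands in for the generalized
    logistic loss used in the paper."""
    return 1.0 / (1.0 + np.exp(-beta * z))

def ring_push_learn(X, y, r_pull=1.0, r_in=2.0, r_out=6.0,
                    lr=0.01, iters=200):
    """Projected gradient sketch: pull same-class pairs inside r_pull and
    push different-class pairs into the ring [r_in, r_out], so both
    too-small and too-large interclass distances are penalized."""
    n, d = X.shape
    M = np.eye(d)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for _ in range(iters):
        G = np.zeros((d, d))
        for i, j in pairs:
            diff = X[i] - X[j]
            outer = np.outer(diff, diff)
            dist = diff @ M @ diff            # squared Mahalanobis distance
            if y[i] == y[j]:
                G += sigmoid(dist - r_pull) * outer
            else:
                G -= sigmoid(r_in - dist) * outer    # too close: push apart
                G += sigmoid(dist - r_out) * outer   # too far: pull back in
        M -= lr * G / len(pairs)
        w, V = np.linalg.eigh(M)              # project onto the PSD cone
        M = (V * np.clip(w, 0.0, None)) @ V.T
    return M

rng = np.random.default_rng(0)
Xa = rng.normal([0.0, 0.0], 0.5, size=(20, 2))   # class 0
Xb = rng.normal([3.0, 0.0], 0.5, size=(20, 2))   # class 1
M = ring_push_learn(np.vstack([Xa, Xb]), np.array([0] * 20 + [1] * 20))
print(np.round(M, 2))
```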
Measuring changes in Plasmodium falciparum transmission: Precision, accuracy and costs of metrics
Tusting, Lucy S.; Bousema, Teun; Smith, David L.; Drakeley, Chris
2016-01-01
As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review eleven metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs, and presenting an overall critique. We also review the non-linear scaling relationships between five metrics of malaria transmission; the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our review highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, sero-conversion rates and molecular metrics including MOI and mFOI may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection. PMID:24480314
Graphical CONOPS Prototype to Demonstrate Emerging Methods, Processes, and Tools at ARDEC
2013-07-17
Concept Engineering Framework (ICEF), an extensive literature review was conducted to discover metrics that exist for evaluating concept engineering… [Table-of-contents fragments: "…language to ICEF to SysML" (p. 34); Table 5, Artifact metrics (p. 50); Table 6, Collaboration metrics]
Assessing precision, bias and sigma-metrics of 53 measurands of the Alinity ci system.
Westgard, Sten; Petrides, Victoria; Schneider, Sharon; Berman, Marvin; Herzogenrath, Jörg; Orzechowski, Anthony
2017-12-01
Assay performance is dependent on the accuracy and precision of a given method. These attributes can be combined into an analytical Sigma-metric, providing a simple value for laboratorians to use in evaluating a test method's capability to meet its analytical quality requirements. Sigma-metrics were determined for 37 clinical chemistry assays, 13 immunoassays, and 3 ICT methods on the Alinity ci system. Analytical Performance Specifications were defined for the assays, following a rationale of using CLIA goals first, then Ricos Desirable goals when CLIA did not regulate the method, and then other sources if the Ricos Desirable goal was unrealistic. A precision study was conducted at Abbott on each assay using the Alinity ci system following the CLSI EP05-A2 protocol. Bias was estimated following the CLSI EP09-A3 protocol using samples with concentrations spanning the assay's measuring interval, tested in duplicate on the Alinity ci system and on ARCHITECT c8000 and i2000 SR systems; this testing was also performed at Abbott. Using the regression model, the %bias was estimated at an important medical decision point. The Sigma-metric was then estimated for each assay and plotted on a method decision chart, using the equation: Sigma-metric = (%TEa − |%bias|) / %CV. The Sigma-metrics and Normalized Method Decision charts demonstrate that the majority of the Alinity assays perform at five Sigma or higher, at or near critical medical decision levels. More than 90% of the assays performed at five or six Sigma; none performed below three Sigma. Sigma-metrics plotted on Normalized Method Decision charts provide useful evaluations of performance. The majority of Alinity ci system assays had sigma values >5, and thus laboratories can expect excellent or world-class performance. Laboratorians can use these tools as aids in choosing high-quality products, further contributing to the delivery of excellent quality healthcare for patients. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
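The quoted equation is simple enough to compute directly; the sketch below evaluates it for a hypothetical assay (the TEa, bias and CV values are invented, not Alinity results).

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma-metric = (%TEa - |%bias|) / %CV, all in percent units."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# A hypothetical assay with TEa 10%, bias 1.5% and CV 1.4% -> ~6 Sigma.
print(f"{sigma_metric(10.0, 1.5, 1.4):.1f} Sigma")
```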
Measuring quality in anatomic pathology.
Raab, Stephen S; Grzybicki, Dana Marie
2008-06-01
This article focuses mainly on diagnostic accuracy in measuring quality in anatomic pathology, noting that measuring any quality metric is complex and demanding. The authors discuss standardization and its variability within and across areas of care delivery and efforts involving defining and measuring error to achieve pathology quality and patient safety. They propose that data linking error to patient outcome are critical for developing quality improvement initiatives targeting errors that cause patient harm in addition to using methods of root cause analysis, beyond those traditionally used in cytologic-histologic correlation, to assist in the development of error reduction and quality improvement plans.
Lin, Meihua; Li, Haoli; Zhao, Xiaolei; Qin, Jiheng
2013-01-01
Genome-wide analysis of gene-gene interactions has been recognized as a powerful avenue to identify the missing genetic components that cannot be detected using current single-point association analysis. Recently, several model-free methods (e.g. the commonly used information-based metrics and several logistic regression-based metrics) were developed for detecting non-linear dependence between genetic loci, but they are potentially at risk of inflated false positive error, in particular when the main effects at one or both loci are salient. In this study, we proposed two conditional entropy-based metrics to overcome this limitation. Extensive simulations demonstrated that the two proposed metrics, provided the disease is rare, could maintain a consistently correct false positive rate. In the scenarios for a common disease, our proposed metrics achieved better or comparable control of false positive error, compared to four previously proposed model-free metrics. In terms of power, our methods outperformed several competing metrics in a range of common disease models. Furthermore, in real data analyses, both metrics succeeded in detecting interactions and were competitive with the originally reported results or the logistic regression approaches. In conclusion, the proposed conditional entropy-based metrics are promising alternatives to current model-based approaches for detecting genuine epistatic effects. PMID:24339984
Examination of a Rotorcraft Noise Prediction Method and Comparison to Flight Test Data
NASA Technical Reports Server (NTRS)
Boyd, D. Douglas, Jr.; Greenwood, Eric; Watts, Michael E.; Lopes, Leonard V.
2017-01-01
With a view that rotorcraft noise should be included in the preliminary design process, a relatively fast noise prediction method is examined in this paper. A comprehensive rotorcraft analysis is combined with a noise prediction method to compute several noise metrics of interest. These predictions are compared to flight test data. Results show that inclusion of only the main rotor noise will produce results that severely underpredict integrated metrics of interest. Inclusion of the tail rotor frequency content is essential for accurately predicting these integrated noise metrics.
A defect-driven diagnostic method for machine tool spindles
Vogl, Gregory W.; Donmez, M. Alkan
2016-01-01
Simple vibration-based metrics are, in many cases, insufficient to diagnose machine tool spindle condition. These metrics couple defect-based motion with spindle dynamics; diagnostics should be defect-driven. A new method and spindle condition estimation device (SCED) were developed to acquire data and to separate system dynamics from defect geometry. Based on this method, a spindle condition metric relying only on defect geometry is proposed. Application of the SCED on various milling and turning spindles shows that the new approach is robust for diagnosing the machine tool spindle condition. PMID:28065985
1980-06-01
measuring program understanding. Shneiderman, Mayer, McKay, and Heller [24] found that flowcharts are redundant and have a potential negative effect on…dictionaries of program variables are superior to macro flowcharts as an aid to understanding program control and data structures. Chrysler [5], using no…procedures as do beginners. Also, guaranteeing that groups of beginning programmers have equal ability is not trivial. The problem with material
Reed, Bradley C.; Budde, Michael E.; Spencer, Page; Miller, Amy E.
2009-01-01
Impacts of global climate change are expected to result in greater variation in the seasonality of snowpack, lake ice, and vegetation dynamics in southwest Alaska. All have wide-reaching physical and biological ecosystem effects in the region. We used Moderate Resolution Imaging Spectroradiometer (MODIS) calibrated radiance, snow cover extent, and vegetation index products for interpreting interannual variation in the duration and extent of snowpack, lake ice, and vegetation dynamics for southwest Alaska. The approach integrates multiple seasonal metrics across large ecological regions. Throughout the observation period (2001-2007), snow cover duration was stable within ecoregions, with variable start and end dates. The start of the lake ice season lagged the snow season by 2 to 3 months. Within a given lake, freeze-up dates varied in timing and duration, while break-up dates were more consistent. Vegetation phenology varied less than snow and ice metrics, with start-of-season dates comparatively consistent across years. The start of growing season and snow melt were related to one another as they are both temperature dependent. Higher than average temperatures during the El Niño winter of 2002-2003 were expressed in anomalous ice and snow season patterns. We are developing a consistent, MODIS-based dataset that will be used to monitor temporal trends of each of these seasonal metrics and to map areas of change for the study area.
Metzler, Marina; Govindan, Rathinaswamy; Al-Shargabi, Tareq; Vezina, Gilbert; Andescavage, Nickie; Wang, Yunfei; du Plessis, Adre; Massaro, An N
2017-09-01
Background: Decreased heart rate variability (HRV) is a measure of autonomic dysfunction and brain injury in newborns with hypoxic ischemic encephalopathy (HIE). This study aimed to characterize the relationship between HRV and brain injury pattern using magnetic resonance imaging (MRI) in newborns with HIE undergoing therapeutic hypothermia. Methods: HRV metrics were quantified in the time domain (αS, αL, and root mean square at short (RMSS) and long (RMSL) timescales) and frequency domain (relative low- (LF) and high-frequency (HF) power) over 24-27 h of life. The brain injury pattern shown by MRI was classified as no injury, pure cortical/white matter injury, mixed watershed/mild basal ganglia injury, predominant basal ganglia or global injury, and death. HRV metrics were compared across brain injury pattern groups using a random-effects mixed model. Results: Data from 74 infants were analyzed. Brain injury pattern was significantly associated with the degree of HRV suppression. Specifically, negative associations were observed between the pattern of brain injury and RMSS (estimate −0.224, SE 0.082, P=0.006), RMSL (estimate −0.189, SE 0.082, P=0.021), and LF power (estimate −0.044, SE 0.016, P=0.006). Conclusion: The degree of HRV depression is related to the pattern of brain injury. HRV monitoring may provide insights into the pattern of brain injury at the bedside.
Bodner, Todd E.
2017-01-01
Wilkinson and Task Force on Statistical Inference (1999) recommended that researchers include information on the practical magnitude of effects (e.g., using standardized effect sizes) to distinguish between the statistical and practical significance of research results. To date, however, researchers have not widely incorporated this recommendation into the interpretation and communication of the conditional effects and differences in conditional effects underlying statistical interactions involving a continuous moderator variable where at least one of the involved variables has an arbitrary metric. This article presents a descriptive approach to investigate two-way statistical interactions involving continuous moderator variables where the conditional effects underlying these interactions are expressed in standardized effect size metrics (i.e., standardized mean differences and semi-partial correlations). This approach permits researchers to evaluate and communicate the practical magnitude of particular conditional effects and differences in conditional effects using conventional and proposed guidelines, respectively, for the standardized effect size and therefore provides the researcher important supplementary information lacking under current approaches. The utility of this approach is demonstrated with two real data examples and important assumptions underlying the standardization process are highlighted. PMID:28484404
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
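The core of a parametric (synthetic) likelihood approximation inside a Metropolis sampler fits in a few lines. The sketch below uses a trivially simple stochastic "model" as a stand-in for something like FORMIND, a Gaussian fit to simulated summary statistics as the likelihood, and a flat prior; everything here is a toy under those stated assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_summaries(theta: np.ndarray, n_rep: int = 100) -> np.ndarray:
    """Stand-in for a stochastic simulator: run it n_rep times and return
    summary statistics (here, mean and sd of a simulated 'inventory')."""
    sims = rng.normal(theta[0], abs(theta[1]) + 1e-6, size=(n_rep, 50))
    return np.column_stack([sims.mean(axis=1), sims.std(axis=1)])

def synthetic_loglik(theta: np.ndarray, s_obs: np.ndarray) -> float:
    """Fit a Gaussian to the simulated summaries, evaluate the observed ones."""
    S = simulate_summaries(theta)
    mu, cov = S.mean(axis=0), np.cov(S.T) + 1e-9 * np.eye(2)
    diff = s_obs - mu
    return -0.5 * (diff @ np.linalg.solve(cov, diff)
                   + np.log(np.linalg.det(cov)))

s_obs = np.array([3.0, 1.5])          # 'observed' summary statistics
theta, ll = np.array([0.0, 1.0]), -np.inf
chain = []
for _ in range(2000):                 # plain Metropolis, flat prior
    prop = theta + rng.normal(0.0, 0.1, size=2)
    ll_prop = synthetic_loglik(prop, s_obs)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    chain.append(theta.copy())
print(np.mean(chain[1000:], axis=0))  # should wander near [3.0, 1.5]
```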
Statistical structure of intrinsic climate variability under global warming
NASA Astrophysics Data System (ADS)
Zhu, Xiuhua; Bye, John; Fraedrich, Klaus
2017-04-01
Climate variability is often studied in terms of fluctuations with respect to the mean state, whereas the dependence between the mean and variability is rarely discussed. We propose a new climate metric to measure the relationship between means and standard deviations of annual surface temperature computed over non-overlapping 100-year segments. This metric is analyzed based on equilibrium simulations of the Max Planck Institute-Earth System Model (MPI-ESM): the last millennium climate (800-1799), the future climate projection following the A1B scenario (2100-2199), and the 3100-year unforced control simulation. A linear relationship is globally observed in the control simulation and thus termed intrinsic climate variability, which is most pronounced in the tropical region with negative regression slopes over the Pacific warm pool and positive slopes in the eastern tropical Pacific. It relates to asymmetric changes in temperature extremes and associates fluctuating climate means with increase or decrease in intensity and occurrence of both El Niño and La Niña events. In the future scenario period, the linear regression slopes largely retain their spatial structure with appreciable changes in intensity and geographical locations. Since intrinsic climate variability describes the internal rhythm of the climate system, it may serve as guidance for interpreting climate variability and climate change signals in the past and the future.
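Per grid point, the proposed metric reduces to a simple computation: chop the annual series into non-overlapping 100-year segments and regress segment standard deviations on segment means. The sketch below runs it on a synthetic series whose variability is constructed to grow with the mean state; the data are invented, not MPI-ESM output.

```python
import numpy as np

def mean_std_slope(series: np.ndarray, seg_len: int = 100) -> float:
    """Slope of segment standard deviation regressed on segment mean,
    for non-overlapping segments of seg_len samples."""
    n = (len(series) // seg_len) * seg_len
    segs = series[:n].reshape(-1, seg_len)
    means, stds = segs.mean(axis=1), segs.std(axis=1, ddof=1)
    slope, _ = np.polyfit(means, stds, 1)
    return float(slope)

# Toy 3100-'year' control run whose variability grows with the mean state.
rng = np.random.default_rng(7)
mean_state = np.cumsum(rng.normal(0.0, 0.02, 3100))        # slow drift
temps = mean_state + rng.normal(0.0, 0.5 + 0.2 * np.abs(mean_state), 3100)
print(f"std-vs-mean regression slope: {mean_std_slope(temps):.3f}")
```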
Brooks, Scott C.; Brandt, Craig C.; Griffiths, Natalie A.
2016-10-07
Nutrient spiraling is an important ecosystem process characterizing nutrient transport and uptake in streams. Various nutrient addition methods are used to estimate uptake metrics; however, uncertainty in the metrics is not often evaluated. A method was developed to quantify uncertainty in ambient and saturation nutrient uptake metrics estimated from saturating pulse nutrient additions (Tracer Additions for Spiraling Curve Characterization; TASCC). Using a Monte Carlo (MC) approach, the 95% confidence interval (CI) was estimated for ambient uptake lengths (Sw-amb) and maximum areal uptake rates (Umax) based on 100,000 datasets generated from each of four nitrogen and five phosphorus TASCC experiments conducted seasonally in a forest stream in eastern Tennessee, U.S.A. Uncertainty estimates from the MC approach were compared to the CIs estimated from the ordinary least squares (OLS) and non-linear least squares (NLS) models used to calculate Sw-amb and Umax, respectively, from the TASCC method. The CIs for Sw-amb and Umax were large, but were not consistently larger using the MC method. Despite the large CIs, significant differences (based on nonoverlapping CIs) in nutrient metrics among seasons were found, with more significant differences using the OLS/NLS vs. the MC method. Lastly, we suggest that the MC approach is a robust way to estimate uncertainty, as the calculation of Sw-amb and Umax violates assumptions of OLS/NLS while the MC approach is free of these assumptions. The MC approach can be applied to other ecosystem metrics that are calculated from multiple parameters, providing a more robust estimate of these metrics and their associated uncertainties.
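The Monte Carlo step can be sketched generically: perturb the observations, refit the uptake parameter, and take percentile bounds of the resulting distribution. The exponential-decline fit and the noise model below are schematic stand-ins of my own choosing; the paper's actual fits follow the TASCC OLS/NLS formulations.

```python
import numpy as np

def mc_confidence_interval(x, y, fit, n_draws=20_000, noise_cv=0.1, seed=0):
    """Perturb the observations with multiplicative noise, refit the uptake
    parameter each time, and return the 2.5th/97.5th percentiles."""
    rng = np.random.default_rng(seed)
    est = np.empty(n_draws)
    for k in range(n_draws):
        est[k] = fit(x, y * rng.normal(1.0, noise_cv, size=y.shape))
    return np.percentile(est, [2.5, 97.5])

# Schematic uptake-length fit: exponential longitudinal decline of a
# nutrient:tracer ratio, so S_w = -1/slope of log(ratio) vs distance.
dist = np.array([10.0, 50.0, 100.0, 200.0, 400.0])   # metres downstream
ratio = 5.0 * np.exp(-dist / 250.0)                   # true S_w = 250 m
sw_fit = lambda x, y: -1.0 / np.polyfit(x, np.log(y), 1)[0]
lo, hi = mc_confidence_interval(dist, ratio, sw_fit)
print(f"S_w 95% CI: {lo:.0f}-{hi:.0f} m")
```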
Adaptive distance metric learning for diffusion tensor image segmentation.
Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W
2014-01-01
High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.
NASA Astrophysics Data System (ADS)
Rodriguez-Galiano, Victor; Aragones, David; Caparros-Santiago, Jose A.; Navarro-Cerrillo, Rafael M.
2017-10-01
Land surface phenology (LSP) can improve the characterisation of forest areas and their change processes. The aim of this work was: i) to characterise the temporal dynamics in Mediterranean Pinus forests, and ii) to evaluate the potential of LSP for species discrimination. The different experiments were based on 679 mono-specific plots for the 5 species native to the Iberian Peninsula: P. sylvestris, P. pinea, P. halepensis, P. nigra and P. pinaster. The entire MODIS NDVI time series (2000-2016) of the MOD13Q1 product was used to characterise phenology. The following phenological parameters were extracted: the start, end and median days of the season, and the length of the season in days, as well as the base value, maximum value, amplitude and integrated value. Multi-temporal metrics were calculated to synthesise the inter-annual variability of the phenological parameters. The species were discriminated by the application of Random Forest (RF) classifiers built from different subsets of variables: model 1) NDVI-smoothed time series, model 2) multi-temporal metrics of the phenological parameters, and model 3) multi-temporal metrics plus the auxiliary physical variables (altitude, slope, aspect and distance to the coastline). Model 3 was the best, with an overall accuracy of 82% and a kappa coefficient of 0.77; its most important variables were elevation, coast distance, and the end and start days of the growing season. The species that presented the largest errors was P. nigra (kappa = 0.45), having locations with behaviour similar to P. sylvestris or P. pinaster.
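Model 3's setup, a Random Forest fed with phenological metrics plus physical variables, is easy to replicate in outline with scikit-learn. In the sketch below the plot table is randomly generated, with species labels tied loosely to elevation so that the importance ranking echoes the paper's finding; none of the numbers are from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 679                                      # one row per mono-specific plot
sos = rng.integers(30, 120, n)               # start of season (day of year)
eos = rng.integers(240, 330, n)              # end of season (day of year)
elev = rng.uniform(0.0, 2000.0, n)           # elevation (m)
coast = rng.uniform(0.0, 200.0, n)           # distance to coastline (km)
X = np.column_stack([sos, eos, elev, coast])

# Fake labels for the 5 Pinus species, loosely driven by elevation so the
# importance ranking echoes the paper's finding.
y = np.digitize(elev + rng.normal(0.0, 200.0, n),
                np.quantile(elev, [0.2, 0.4, 0.6, 0.8]))

clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
clf.fit(X, y)
print("OOB accuracy:", round(clf.oob_score_, 2))
for name, imp in zip(["SOS", "EOS", "elevation", "coast_dist"],
                     clf.feature_importances_):
    print(f"{name:>10}: {imp:.2f}")
```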
Analysing Relationships Between Urban Land Use Fragmentation Metrics and Socio-Economic Variables
NASA Astrophysics Data System (ADS)
Sapena, M.; Ruiz, L. A.; Goerlich, F. J.
2016-06-01
Analysing urban regions is essential for their correct monitoring and planning. This is mainly due to the sharp increase in the number of people living in urban areas, and consequently, the need to manage them. At the same time there has been a rise in the use of spatial and statistical datasets, such as the Urban Atlas, which offers high-resolution urban land use maps obtained from satellite imagery, and the Urban Audit, which provides statistics on European cities and their surroundings. In this study, we analyse the relations between urban fragmentation metrics derived from Land Use and Land Cover (LULC) data from the Urban Atlas dataset, and socio-economic data from the Urban Audit, for the reference years 2006 and 2012. We conducted the analysis on a sample of sixty-eight Functional Urban Areas (FUAs). One-date and two-date based fragmentation indices were computed for each FUA, land use class and date. Correlation tests and principal component analysis were then applied to select the most representative indices. Finally, multiple regression models were tested to explore the prediction of socio-economic variables, using different combinations of land use metrics as explanatory variables, both at a given date and in a dynamic context. The outcomes show that demography, living conditions, labour, and transportation variables have a clear relation with the morphology of the FUAs. This methodology allows us to compare European FUAs in terms of the spatial distribution of their land use classes, their complexity, and their structural changes, as well as to preview and model different growth patterns and socio-economic indicators.
Brown, L.R.
2000-01-01
Twenty sites in the lower San Joaquin River drainage, California, were sampled from 1993 to 1995 to characterize fish communities and their associations with measures of water quality and habitat quality. The feasibility of developing an Index of Biotic Integrity was assessed by evaluating four fish community metrics, including percentages of native fish, omnivorous fish, fish intolerant of environmental degradation, and fish with external anomalies. Of the thirty-one taxa of fish captured during the study, only 10 taxa were native to the drainage. Multivariate analyses of percentage data identified four site groups characterized by different groups of species. The distributions of fish species were related to specific conductance, gradient, and mean depth; however, specific conductance acted as a surrogate variable for a large group of correlated variables. Two of the fish community metrics - percentage of introduced fish and percentage of intolerant fish - appeared to be responsive to environmental quality but the responses of the other two metrics - percentage of omnivorous fish and percentage of fish with anomalies - were less direct. The conclusion of the study is that fish communities are responsive to environmental conditions, including conditions associated with human-caused disturbances, particularly agriculture and water development. The results suggest that changes in water management and water quality could result in changes in species distributions. Balancing the costs and benefits of such changes poses a considerable challenge to resource managers.
Brown, Larry R.
1998-01-01
Twenty sites in the lower San Joaquin River drainage, California, were sampled from 1993 to 1995 to characterize fish assemblages and their associations with measures of water quality and habitat quality. In addition, four fish community metrics were assessed, including percentages of native fish, omnivorous fish, fish intolerant of environmental degradation, and fish with external anomalies. Of the 31 taxa of fish captured during the study, only 10 taxa were native to the drainage. Multivariate analyses of percentage data identified four site groups characterized by different groups of species. The distributions of fish species were related to specific conductance, gradient, and mean depth; however, specific conductance acted as a surrogate variable for a large group of correlated variables. Two of the fish community metrics--percentage of introduced fish and percentage of intolerant fish--appeared to be responsive to environmental quality but the responses of the other two metrics--percentage of omnivorous fish and percentage of fish with anomalies--were less direct. The conclusion of the study is that fish assemblages are responsive to environmental conditions, including conditions associated with human-caused disturbances, particularly agriculture and water development. The results suggest that changes in water management and water quality could result in changes in species distributions. Balancing the costs and benefits of such changes poses a considerable challenge to resource managers.
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.
2009-05-01
Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment Canada, is a coupled land-surface and hydrologic model. Results will demonstrate the conclusions a modeller might make regarding the value of additional watershed spatial discretization under both an aggregated (single-objective) and multi-objective model comparison framework.
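The bi-objective comparison rests on extracting the non-dominated (Pareto) set from each model's calibration results; a minimal filter is sketched below, with random scores standing in for the two MESH variants' NS values.

```python
import numpy as np

def pareto_front(objectives: np.ndarray) -> np.ndarray:
    """Boolean mask of non-dominated rows, assuming every objective is to
    be maximized (e.g., streamflow NS and SWE NS)."""
    n = objectives.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominated = (np.all(objectives >= objectives[i], axis=1)
                     & np.any(objectives > objectives[i], axis=1))
        if dominated.any():
            mask[i] = False
    return mask

# Toy calibration results: each row = (NS for streamflow, NS for SWE).
rng = np.random.default_rng(5)
scores = rng.uniform(0.3, 0.95, size=(200, 2))
front = scores[pareto_front(scores)]
print(f"{len(front)} non-dominated parameter sets out of {len(scores)}")
```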
Upgrades to the REA method for producing probabilistic climate change projections
NASA Astrophysics Data System (ADS)
Xu, Ying; Gao, Xuejie; Giorgi, Filippo
2010-05-01
We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
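As a rough illustration of performance-based weighting in the REA spirit, the sketch below uses a simplified reliability factor with the convergence criterion removed, as the upgraded method does; the exact functional form and parameter m are assumptions, not the authors' implementation:

```python
import numpy as np

def rea_weighted_change(changes, perf_errors, eps, m=1.0):
    """Performance-only REA-style weighted ensemble mean change.
    changes: per-model projected change; perf_errors: |model bias| vs observations;
    eps: natural-variability scale. Weights follow R_i = min(1, (eps/|B_i|)**m),
    a simplified reliability factor with the convergence term dropped."""
    w = np.minimum(1.0, (eps / np.abs(perf_errors)) ** m)
    return np.sum(w * changes) / np.sum(w)
```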
Comparison of Collection Methods for Fecal Samples in Microbiome Studies
Vogtmann, Emily; Chen, Jun; Amir, Amnon; Shi, Jianxin; Abnet, Christian C.; Nelson, Heidi; Knight, Rob; Chia, Nicholas; Sinha, Rashmi
2017-01-01
Prospective cohort studies are needed to assess the relationship between the fecal microbiome and human health and disease. To evaluate fecal collection methods, we determined technical reproducibility, stability at ambient temperature, and accuracy of 5 fecal collection methods (no additive, 95% ethanol, RNAlater Stabilization Solution, fecal occult blood test cards, and fecal immunochemical test tubes). Fifty-two healthy volunteers provided fecal samples at the Mayo Clinic in Rochester, Minnesota, in 2014. One set from each sample collection method was frozen immediately, and a second set was incubated at room temperature for 96 hours and then frozen. Intraclass correlation coefficients (ICCs) were calculated for the relative abundance of 3 phyla, 2 alpha diversity metrics, and 4 beta diversity metrics. Technical reproducibility was high, with ICCs for duplicate fecal samples between 0.64 and 1.00. Stability for most methods was generally high, although the ICCs were below 0.60 for 95% ethanol in metrics that were more sensitive to relative abundance. When compared with fecal samples that were frozen immediately, the ICCs were below 0.60 for the metrics that were sensitive to relative abundance; however, the remaining 2 alpha diversity and 3 beta diversity metrics were all relatively accurate, with ICCs above 0.60. In conclusion, all fecal sample collection methods appear relatively reproducible, stable, and accurate. Future studies could use these collection methods for microbiome analyses. PMID:27986704
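Since the analysis rests on intraclass correlation coefficients, a minimal one-way random-effects ICC(1,1) sketch may help fix ideas (the study may have used a different ICC form; this is a generic textbook version):

```python
import numpy as np

def icc_oneway(y):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_measurements) array."""
    y = np.asarray(y, dtype=float)
    n, k = y.shape
    grand = y.mean()
    subj_means = y.mean(axis=1)
    msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)         # between-subject MS
    msw = np.sum((y - subj_means[:, None]) ** 2) / (n * (k - 1))  # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)
```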
A Framework for Orbital Performance Evaluation in Distributed Space Missions for Earth Observation
NASA Technical Reports Server (NTRS)
Nag, Sreeja; LeMoigne-Stewart, Jacqueline; Miller, David W.; de Weck, Olivier
2015-01-01
Distributed Space Missions (DSMs) are gaining momentum in their application to earth science missions owing to their unique ability to increase observation sampling in spatial, spectral and temporal dimensions simultaneously. DSM architectures have a large number of design variables and, since they are expected to increase mission flexibility, scalability, evolvability and robustness, their design is a complex problem with many variables and objectives affecting performance. There are very few open-access tools available to explore the tradespace of variables, allow performance assessment, and plug easily into science goals so as to select the optimal design. This paper presents a software tool developed on the MATLAB engine, interfacing with STK, for DSM orbit design and selection. It is capable of generating thousands of homogeneous constellation or formation flight architectures based on pre-defined design variable ranges and sizing those architectures in terms of predefined performance metrics. The metrics can be input into observing system simulation experiments, as available from the science teams, allowing dynamic coupling of science and engineering designs. Design variables include but are not restricted to constellation type, formation flight type, FOV of instrument, altitude and inclination of chief orbits, differential orbital elements, leader satellites, latitudes or regions of interest, and planes and satellite numbers. Intermediate performance metrics include angular coverage, number of accesses, revisit coverage, and access deterioration over time at every point of the Earth's grid. The orbit design process can be streamlined, and the variables progressively bounded along the way, owing to the availability of models ranging from low-fidelity, low-complexity corrected HCW equations up to high-precision STK models with J2 and drag. The tool can thus help any scientist or program manager select pre-Phase A, Pareto-optimal DSM designs for a variety of science goals without having to delve into the details of the engineering design process.
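For context, the low-fidelity end of that model range, the classical (uncorrected) Hill-Clohessy-Wiltshire closed-form solution for relative motion about a circular chief orbit, can be sketched as follows (standard equations, not the tool's implementation):

```python
import numpy as np

def hcw_state(r0, v0, n, t):
    """Closed-form Hill-Clohessy-Wiltshire relative position for a circular chief.
    r0, v0: initial relative position/velocity (radial, along-track, cross-track);
    n: chief mean motion [rad/s]; t: elapsed time [s]."""
    x, y, z = r0
    vx, vy, vz = v0
    s, c = np.sin(n * t), np.cos(n * t)
    xt = (4 - 3 * c) * x + (s / n) * vx + (2 / n) * (1 - c) * vy
    yt = 6 * (s - n * t) * x + y + (2 / n) * (c - 1) * vx + (4 * s - 3 * n * t) / n * vy
    zt = z * c + (vz / n) * s
    return np.array([xt, yt, zt])
```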
Hall, S. A.; Burke, I.C.; Box, D. O.; Kaufmann, M. R.; Stoker, Jason M.
2005-01-01
The ponderosa pine forests of the Colorado Front Range, USA, have historically been subjected to wildfires. Recent large burns have increased public interest in fire behavior and effects, and scientific interest in the carbon consequences of wildfires. Remote sensing techniques can provide spatially explicit estimates of stand structural characteristics. Some of these characteristics can be used as inputs to fire behavior models, increasing our understanding of the effect of fuels on fire behavior. Others provide estimates of carbon stocks, allowing us to quantify the carbon consequences of fire. Our objective was to use discrete-return lidar to estimate such variables, including stand height, total aboveground biomass, foliage biomass, basal area, tree density, canopy base height and canopy bulk density. We developed 39 metrics from the lidar data, and used them in limited combinations in regression models, which we fit to field estimates of the stand structural variables. We used an information–theoretic approach to select the best model for each variable, and to select the subset of lidar metrics with most predictive potential. Observed versus predicted values of stand structure variables were highly correlated, with r2 ranging from 57% to 87%. The most parsimonious linear models for the biomass structure variables, based on a restricted dataset, explained between 35% and 58% of the observed variability. Our results provide us with useful estimates of stand height, total aboveground biomass, foliage biomass and basal area. There is promise for using this sensor to estimate tree density, canopy base height and canopy bulk density, though more research is needed to generate robust relationships. We selected 14 lidar metrics that showed the most potential as predictors of stand structure. We suggest that the focus of future lidar studies should broaden to include low density forests, particularly systems where the vertical structure of the canopy is important, such as fire prone forests.
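A toy version of the information-theoretic selection step, an exhaustive search over small predictor subsets scored by AIC, could look like the following (the authors' exact model forms and criteria may differ):

```python
import numpy as np
from itertools import combinations

def aic_linear(X, y):
    """AIC of an OLS fit under a Gaussian likelihood."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    n, k = len(y), Xd.shape[1] + 1          # coefficients + error variance
    return n * np.log(rss / n) + 2 * k

def best_subset(X, y, max_terms=3):
    """Exhaustive search over limited combinations of lidar metrics; lowest AIC wins."""
    best_aic, best_cols = np.inf, None
    for r in range(1, max_terms + 1):
        for cols in combinations(range(X.shape[1]), r):
            a = aic_linear(X[:, cols], y)
            if a < best_aic:
                best_aic, best_cols = a, cols
    return best_aic, best_cols
```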
A simple test for spacetime symmetry
NASA Astrophysics Data System (ADS)
Houri, Tsuyoshi; Yasui, Yukinori
2015-03-01
This paper presents a simple method for investigating spacetime symmetry for a given metric. The method makes use of the curvature conditions that are obtained from the Killing equations. We use the solutions of the curvature conditions to compute an upper bound on the number of Killing vector fields, as well as Killing-Yano (KY) tensors and closed conformal KY tensors. We also use them in the integration of the Killing equations. By means of the method, we thoroughly investigate KY symmetry of type D vacuum solutions such as the Kerr metric in four dimensions. The method is also applied to a large variety of physical metrics in four and five dimensions.
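For reference, the two ingredients the method builds on can be written compactly (a standard form; curvature-sign conventions vary, and this is background rather than a restatement of the paper's equations):

```latex
\nabla_{(\mu}\xi_{\nu)} = 0 \quad \text{(Killing equation)}, \qquad
\nabla_{\mu}\nabla_{\nu}\,\xi_{\rho} = R_{\rho\nu\mu}{}^{\sigma}\,\xi_{\sigma}
\quad \text{(integrability condition)}.
```

Because the second identity determines a Killing field from its value and first derivatives at a single point, at most n(n+1)/2 independent Killing vector fields exist in n dimensions; the solutions of the curvature conditions refine this kind of upper bound.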
Neural processing of musical meter in musicians and non-musicians.
Zhao, T Christina; Lam, H T Gloria; Sohi, Harkirat; Kuhl, Patricia K
2017-11-01
Musical sounds, along with speech, are the most prominent sounds in our daily lives. They are highly dynamic, yet well structured in the temporal domain in a hierarchical manner. The temporal structures enhance the predictability of musical sounds. Western music provides an excellent example: while time intervals between musical notes are highly variable, underlying beats can be realized. The beat-level temporal structure provides a sense of regular pulses. Beats can be further organized into units, giving the percept of alternating strong and weak beats (i.e. metrical structure or meter). Examining neural processing at the meter level offers a unique opportunity to understand how the human brain extracts temporal patterns, predicts future stimuli and optimizes neural resources for processing. The present study addresses two important questions regarding meter processing, using the mismatch negativity (MMN) obtained with electroencephalography (EEG): 1) how tempo (fast vs. slow) and type of metrical structure (duple: two beats per unit vs. triple: three beats per unit) affect the neural processing of metrical structure in non-musically trained individuals, and 2) how early music training modulates the neural processing of metrical structure. Metrical structures were established by patterns of consecutive strong and weak tones (Standard) with occasional violations that disrupted and reset the structure (Deviant). Twenty non-musicians listened passively to these tones while their neural activities were recorded. MMN indexed the neural sensitivity to the meter violations. Results suggested that MMNs were larger for fast tempo and for triple meter conditions. Further, 20 musically trained individuals were tested using the same methods and the results were compared to the non-musicians. While tempo and meter type similarly influenced MMNs in both groups, musicians overall exhibited significantly reduced MMNs, compared to their non-musician counterparts. Further analyses indicated that the reduction was driven by responses to sounds that defined the structure (Standard), not by responses to Deviants. We argue that musicians maintain a more accurate and efficient mental model for metrical structures, which incorporates occasional disruptions using significantly fewer neural resources. Copyright © 2017 Elsevier Ltd. All rights reserved.
Raza, Ali S.; Zhang, Xian; De Moraes, Carlos G. V.; Reisman, Charles A.; Liebmann, Jeffrey M.; Ritch, Robert; Hood, Donald C.
2014-01-01
Purpose. To improve the detection of glaucoma, techniques for assessing local patterns of damage and for combining structure and function were developed. Methods. Standard automated perimetry (SAP) and frequency-domain optical coherence tomography (fdOCT) data, consisting of macular retinal ganglion cell plus inner plexiform layer (mRGCPL) as well as macular and optic disc retinal nerve fiber layer (mRNFL and dRNFL) thicknesses, were collected from 52 eyes of 52 healthy controls and 156 eyes of 96 glaucoma suspects and patients. In addition to generating simple global metrics, SAP and fdOCT data were searched for contiguous clusters of abnormal points and converted to a continuous metric (pcc). The pcc metric, along with simpler methods, was used to combine the information from the SAP and fdOCT. The performance of different methods was assessed using the area under receiver operator characteristic curves (AROC scores). Results. The pcc metric performed better than simple global measures for both the fdOCT and SAP. The best combined structure-function metric (mRGCPL&SAP pcc, AROC = 0.868 ± 0.032) was better (statistically significant) than the best metrics for independent measures of structure and function. When SAP was used as part of the inclusion and exclusion criteria, AROC scores increased for all metrics, including the best combined structure-function metric (AROC = 0.975 ± 0.014). Conclusions. A combined structure-function metric improved the detection of glaucomatous eyes. Overall, the primary sources of value-added for glaucoma detection stem from the continuous cluster search (the pcc), the mRGCPL data, and the combination of structure and function. PMID:24408977
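As a reminder of how AROC scores like those above are computed, here is a tiny sketch with synthetic stand-in values for the pcc metric (group sizes follow the abstract; the scores themselves are fabricated purely for illustration):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0] * 52 + [1] * 156)        # healthy controls vs glaucoma eyes
scores = np.random.default_rng(0).normal(
    loc=np.where(y_true == 1, 1.2, 0.0))       # stand-in for the continuous pcc metric
print(f"AROC = {roc_auc_score(y_true, scores):.3f}")
```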
Robustness surfaces of complex networks
NASA Astrophysics Data System (ADS)
Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis
2014-09-01
Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). First, we normalize the initial robustness of a network to 1. Second, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
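One plausible reading of the PCA step, sketched under the assumption that the normalized metrics are arranged as columns and failure percentages as rows (this is an interpretation, not the authors' exact procedure):

```python
import numpy as np
from sklearn.decomposition import PCA

def r_star_values(metric_matrix):
    """Collapse several normalized robustness metrics into one R*-value per
    failure level. Rows = failure percentages, columns = metrics. Using the
    first principal component as the 'most informative' combination is our
    reading of the abstract."""
    pca = PCA(n_components=1)
    return pca.fit_transform(np.asarray(metric_matrix, float)).ravel()

# One R* curve per realization of the failure process; stacking the curves
# over realizations yields the robustness surface described in the abstract.
```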
Cuesta-Frau, David; Miró-Martínez, Pau; Jordán Núñez, Jorge; Oltra-Crespo, Sandra; Molina Picó, Antonio
2017-08-01
This paper evaluates the performance of first-generation entropy metrics, represented by the well-known and widely used Approximate Entropy (ApEn) and Sample Entropy (SampEn) metrics, and what can be considered an evolution from these, Fuzzy Entropy (FuzzyEn), in the Electroencephalogram (EEG) signal classification context. The study uses the commonest artifacts found in real EEGs, such as white noise, and muscular, cardiac, and ocular artifacts. Using two different sets of publicly available EEG records, and a realistic range of amplitudes for interfering artifacts, this work optimises these metrics and assesses their robustness against artifacts in terms of class segmentation probability. The results show that the qualitative behaviour of the two datasets is similar, with SampEn and FuzzyEn performing the best, and that the noise and muscular artifacts are the most confounding factors. By contrast, there is wide variability with regard to initialization parameters. The poor performance achieved by ApEn suggests that this metric should not be used in these contexts. Copyright © 2017 Elsevier Ltd. All rights reserved.
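For readers unfamiliar with the metrics, a compact Sample Entropy sketch follows (one common variant; the defaults m = 2 and r = 0.2 x SD are conventional choices, and the paper's optimized values may differ):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D signal; r is a fraction of the signal's SD.
    Simplified variant: templates of length m and m+1 are compared with the
    Chebyshev distance, self-matches excluded."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    n = len(x)

    def matches(length):
        t = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(t) - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)  # Chebyshev distance
            count += int(np.sum(d <= tol))
        return count

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")
```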
Peeters, Sanne; Simas, Tiago; Suckling, John; Gronenschild, Ed; Patel, Ameera; Habets, Petra; van Os, Jim; Marcelis, Machteld
2015-01-01
Background Dysconnectivity in schizophrenia can be understood in terms of dysfunctional integration of a distributed network of brain regions. Here we propose a new methodology to analyze complex networks based on semi-metric behavior, whereby higher levels of semi-metricity may represent a higher level of redundancy and dispersed communication. It was hypothesized that individuals with (increased risk for) psychotic disorder would have more semi-metric paths compared to controls and that this would be associated with symptoms. Methods Resting-state functional MRI scans were obtained from 73 patients with psychotic disorder, 83 unaffected siblings and 72 controls. Semi-metric percentages (SMP) at the whole brain, hemispheric and lobar level were the dependent variables in a multilevel random regression analysis to investigate group differences. SMP was further examined in relation to symptomatology (i.e., psychotic/cognitive symptoms). Results At the whole brain and hemispheric level, patients had a significantly higher SMP compared to siblings and controls, with no difference between the latter. In the combined sibling and control group, individuals with high schizotypy had intermediate SMP values in the left hemisphere with respect to patients and individuals with low schizotypy. Exploratory analyses in patients revealed higher SMP in 12 out of 42 lobar divisions compared to controls, of which some were associated with worse PANSS symptomatology (i.e., positive symptoms, excitement and emotional distress) and worse cognitive performance on attention and emotion processing tasks. In the combined group of patients and controls, working memory, attention and social cognition were associated with higher SMP. Discussion The results are suggestive of more dispersed network communication in patients with psychotic disorder, with some evidence for trait-based network alterations in high-schizotypy individuals. Dispersed communication may contribute to the clinical phenotype in psychotic disorder. In addition, higher SMP may contribute to neuro- and social cognition, independent of psychosis risk. PMID:26740914
Indicators and Metrics for Evaluating the Sustainability of Chemical Processes
A metric-based method, called GREENSCOPE, has been developed for evaluating process sustainability. Using lab-scale information and engineering assumptions, the method evaluates full-scale representations of processes in environmental, efficiency, energy and economic areas. The m...
Methods and Metrics of Voice Communications
DOT National Transportation Integrated Search
1996-03-01
This report consists of the proceedings of the Methods and Metrics of Voice Communication Workshop organized by the FAA Civil Aeromedical Institute, NASA Ames Research Center, and Armstrong Laboratory, Brooks Air Force Base, held May 13-14, 1994 in San...
Choi, M H; Oh, S N; Park, G E; Yeo, D-M; Jung, S E
2018-05-10
To evaluate the interobserver and intermethod correlations of histogram metrics of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) parameters acquired by multiple readers using the single-section and whole-tumor volume methods. Four DCE parameters (Ktrans, Kep, Ve, Vp) were evaluated in 45 patients (31 men and 14 women; mean age, 61±11 years [range, 29-83 years]) with locally advanced rectal cancer using pre-chemoradiotherapy (CRT) MRI. Ten histogram metrics were extracted using two methods of lesion selection performed by three radiologists: the whole-tumor volume method for the whole tumor on axial section-by-section images and the single-section method for the entire area of the tumor on one axial image. The interobserver and intermethod correlations were evaluated using intraclass correlation coefficients (ICCs). The ICCs showed excellent interobserver and intermethod correlations in most of the histogram metrics of the DCE parameters. The ICCs among the three readers were > 0.7 (P<0.001) for all histogram metrics, except for the minimum and maximum. The intermethod correlations for most of the histogram metrics were excellent for each radiologist, regardless of the differences in the radiologists' experience. The interobserver and intermethod correlations for most of the histogram metrics of the DCE parameters are excellent in rectal cancer. Therefore, the single-section method may be a potential alternative to the whole-tumor volume method using pre-CRT MRI, despite the fact that the high agreement between the two methods cannot be extrapolated to post-CRT MRI. Copyright © 2018 Société française de radiologie. Published by Elsevier Masson SAS. All rights reserved.
Within-Hospital Variation in 30-Day Adverse Events: Implications for Measuring Quality.
Burke, Robert E; Glorioso, Thomas; Barón, Anna K; Kaboli, Peter J; Ho, P Michael
Novel measures of hospital quality are needed. Because quality improvement efforts seek to reduce variability in processes and outcomes, hospitals with higher variability in adverse events may be delivering poorer quality care. We sought to evaluate whether within-hospital variability in adverse events after a procedure might function as a quality metric that is correlated with facility-level mortality rates. We analyzed all percutaneous coronary interventions (PCIs) performed in the Veterans Health Administration (VHA) system from 2007 to 2013 to evaluate the correlation between within-hospital variability in 30-day postdischarge adverse events (readmission, emergency department visit, and repeat revascularization), and facility-level mortality rates, after adjustment for patient demographics, comorbidities, PCI indication, and PCI urgency. The study cohort included 47,567 patients at 48 VHA hospitals. The overall 30-day adverse event rate was 22.0% and 1-year mortality rate was 4.9%. The most variable sites had relative changes of 20% in 30-day rates of adverse events period-to-period. However, within-hospital variability in 30-day events was not correlated with 1-year mortality rates (correlation coefficient = .06; p = .66). Thus, measuring within-hospital variability in postdischarge adverse events may not improve identification of low-performing hospitals. Evaluation in other conditions, populations, and in relationship with other quality metrics may reveal stronger correlations with care quality.
Three-variable solution in the (2+1)-dimensional null-surface formulation
NASA Astrophysics Data System (ADS)
Harriott, Tina A.; Williams, J. G.
2018-04-01
The null-surface formulation of general relativity (NSF) describes gravity by using families of null surfaces instead of a spacetime metric. Despite the fact that the NSF is (to within a conformal factor) equivalent to general relativity, the equations of the NSF are exceptionally difficult to solve, even in 2+1 dimensions. The present paper gives the first exact (2+1)-dimensional solution that depends nontrivially upon all three of the NSF's intrinsic spacetime variables. The metric derived from this solution is shown to represent a spacetime whose source is a massless scalar field that satisfies the general relativistic wave equation and the Einstein equations with minimal coupling. The spacetime is identified as one of a family of (2+1)-dimensional general relativistic spacetimes discovered by Cavaglià.
Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit
2017-05-01
Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric by simultaneously considering the Riemannian geometry structure and the scaling information, which is smoothly varying and invariant along the equivalence class. The proposed metric can make a tradeoff between the Riemannian geometry structure and the scaling information effectively. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm, which can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to state-of-the-art methods in convergence efficiency and numerical performance.
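The Riemannian CG algorithm itself is involved, but the underlying fixed-rank completion problem can be illustrated with a much simpler projected-gradient stand-in (SVD truncation onto the rank-r set; this is explicitly not the proposed method):

```python
import numpy as np

def low_rank_complete(M, mask, rank, n_iter=500, step=1.0):
    """Fixed-rank completion by projected gradient (iterative SVD truncation).
    M: matrix with observed entries; mask: 1 where observed, 0 elsewhere."""
    X = np.zeros_like(M, dtype=float)
    for _ in range(n_iter):
        G = mask * (X - M)                 # gradient of 0.5*||P_Omega(X - M)||^2
        Y = X - step * G                   # Euclidean gradient step
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # project back to the rank-r set
    return X
```

A Riemannian method replaces the full SVD re-projection with retractions along the fixed-rank manifold, which is where the choice of metric enters.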
Automated Assessment of Visual Quality of Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)
1997-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Wetland habitat disturbance best predicts metrics of an amphibian index of biotic integrity
Stapanian, Martin A.; Micacchion, Mick; Adams, Jean V.
2015-01-01
Regression and classification trees were used to identify the best predictors of the five component metrics of the Ohio Amphibian Index of Biotic Integrity (AmphIBI) in 54 wetlands in Ohio, USA. Of the 17 wetland- and surrounding landscape-scale variables considered, the best predictor for all AmphIBI metrics was habitat alteration and development within the wetland. The results were qualitatively similar to the best predictors for a wetland vegetation index of biotic integrity, suggesting that similar management practices (e.g., reducing or eliminating nutrient enrichment from agriculture, mowing, grazing, logging, and removing down woody debris) within the boundaries of the wetland can be applied to effectively increase the quality of wetland vegetation and amphibian communities.
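A minimal regression-tree sketch of the kind of analysis described, run on synthetic data in which a habitat-alteration variable drives the metric (all names and numbers here are hypothetical, not the study's data):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(54, 17))     # 17 wetland/landscape variables for 54 wetlands
y = 10 - 8 * X[:, 0] + rng.normal(scale=1.0, size=54)  # metric degrades with variable 0

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(dict(enumerate(np.round(tree.feature_importances_, 2))))  # variable 0 dominates
```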
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to over-ride these, if desired.
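As an example of the third level, the Golden Section one-dimensional search named above can be sketched as follows (a generic textbook version in Python, not the ADS FORTRAN internals):

```python
def golden_section(f, a, b, tol=1e-6):
    """Minimize f on [a, b] by the Golden Section method."""
    invphi = (5 ** 0.5 - 1) / 2            # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                    # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                              # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Example: line search along a 1-D slice of an objective function.
print(golden_section(lambda s: (s - 0.3) ** 2, 0.0, 1.0))
```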
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haas, Nicholas A.; O'Connor, Ben L.; Hayse, John W.
2014-07-22
Environmental flows are an important consideration in licensing hydropower projects because operational flow releases can result in adverse conditions for downstream ecological communities. Flow variability assessments have typically focused on pre- and post-dam conditions using metrics based on daily-averaged flow values. This study used subdaily and daily flow data to assess environmental flow response to changes in hydropower operations from daily-peaking to run-of-river. An analysis tool was developed to quantify subdaily to seasonal flow variability metrics and was applied to four hydropower projects that underwent operational changes based on regulatory requirements. Results indicate that the distribution of flows is significantly different between daily-peaking and run-of-river operations and that daily-peaking operations are flashier than run-of-river operations; these differences are seen using hourly-averaged flow datasets and are less pronounced or not noticeable using daily-averaged flow datasets. Of all variability metrics analyzed, hydrograph rise and fall rates were the most sensitive to using daily versus subdaily flow data. This outcome has implications for the development of flow-ecology relationships that quantify effects of rate of change on processes such as fish stranding and displacement, along with habitat stability. The quantification of flow variability statistics should be done using subdaily datasets and metrics to accurately represent the nature of hydropower operations, especially for facilities that utilize daily-peaking operations.
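A minimal sketch of the most sensitive metrics, hydrograph rise and fall rates, computed from a regularly sampled flow series (the tool's exact definitions are not given in the abstract, so the averaging choices here are assumptions):

```python
import numpy as np

def rise_fall_rates(q, dt_hours=1.0):
    """Mean hydrograph rise and fall rates from a regularly sampled flow series.
    Computed on hourly data these capture peaking flashiness that daily means hide."""
    dq = np.diff(np.asarray(q, float)) / dt_hours   # flow change per hour
    rise = dq[dq > 0].mean() if np.any(dq > 0) else 0.0
    fall = dq[dq < 0].mean() if np.any(dq < 0) else 0.0
    return rise, fall

# Daily-averaging the same series first (e.g., q.reshape(-1, 24).mean(axis=1))
# smooths the differences and understates both rates, as the study found.
```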
NASA Astrophysics Data System (ADS)
Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.
2018-05-01
Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed hydrology. However, a thorough validation and a comparison with other methods are recommended before using the JBC method, since it may perform worse than the IBC method for some cases due to bias nonstationarity of climate model outputs.
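For contrast with the joint method, a univariate (IBC-style) empirical quantile-mapping step, applied independently per variable, can be sketched as follows (a generic formulation, not the study's specific IBC or JBC implementation; correcting the P-T correlation requires an additional step not shown):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: place each future model value at its rank in
    the historical model distribution, then read off the observed quantile."""
    ranks = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    ranks = np.clip(ranks, 0.001, 0.999)            # guard the distribution tails
    return np.quantile(obs_hist, ranks)
```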
McAdams, Harley; AlQuraishi, Mohammed
2015-04-21
Techniques for determining values for a metric of microscale interactions include determining a mesoscale metric for a plurality of mesoscale interaction types, wherein a value of the mesoscale metric for each mesoscale interaction type is based on a corresponding function of values of the microscale metric for the plurality of the microscale interaction types. A plurality of observations that indicate the values of the mesoscale metric are determined for the plurality of mesoscale interaction types. Values of the microscale metric are determined for the plurality of microscale interaction types based on the plurality of observations and the corresponding functions and compressed sensing.
An Examination of Diameter Density Prediction with k-NN and Airborne Lidar
Strunk, Jacob L.; Gould, Peter J.; Packalen, Petteri; ...
2017-11-16
While lidar-based forest inventory methods have been widely demonstrated, the performance of methods to predict tree diameters with airborne lidar (lidar) is not well understood. One cause for this is that the performance metrics typically used in studies for prediction of diameters can be difficult to interpret, and may not support comparative inferences between sampling designs and study areas. To help with this problem we propose two indices and use them to evaluate a variety of lidar and k nearest neighbor (k-NN) strategies for prediction of tree diameter distributions. The indices are based on the coefficient of determination (R2) and root mean square deviation (RMSD). Both of the indices are highly interpretable, and the RMSD-based index facilitates comparisons with alternative (non-lidar) inventory strategies, and with projects in other regions. K-NN diameter distribution prediction strategies were examined using auxiliary lidar for 190 training plots distributed across the 800-km2 Savannah River Site in South Carolina, USA. We evaluated the performance of k-NN with respect to distance metrics, number of neighbors, predictor sets, and response sets. K-NN and lidar explained 80% of variability in diameters, and Mahalanobis distance with k = 3 neighbors performed best according to a number of criteria.
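The best-performing configuration reported, k-NN with Mahalanobis distance and k = 3, can be sketched with scikit-learn (assuming the inverse covariance is estimated from the training predictors; the authors' software and tuning details are not specified here):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def knn_mahalanobis(X_train, y_train, k=3):
    """k-NN regression with Mahalanobis distance; y_train may be multi-column
    (a response set, e.g., a diameter distribution per plot)."""
    vi = np.linalg.inv(np.cov(X_train, rowvar=False))  # inverse covariance of predictors
    return KNeighborsRegressor(
        n_neighbors=k, algorithm="brute",
        metric="mahalanobis", metric_params={"VI": vi},
    ).fit(X_train, y_train)
```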
High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps
Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; ...
2017-10-10
This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It then relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
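A bare-bones diffusion-map step of the kind described, Gaussian affinities, a row-normalized Markov matrix, and leading nontrivial eigenvectors as intrinsic coordinates, might look like the sketch below (the bandwidth eps and coordinate count are assumptions; the full method adds a GP regression built on diffusion distances):

```python
import numpy as np

def diffusion_coords(X, eps, n_coords=2):
    """Diffusion-map embedding of the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-d2 / eps)                                # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)                 # Markov transition matrix
    w, v = np.linalg.eig(P)
    order = np.argsort(-w.real)
    sel = order[1:n_coords + 1]                          # skip the trivial eigenvector
    return (v[:, sel] * w.real[sel]).real                # eigenvalue-scaled coordinates
```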
Comparing image quality of print-on-demand books and photobooks from web-based vendors
NASA Astrophysics Data System (ADS)
Phillips, Jonathan; Bajorski, Peter; Burns, Peter; Fredericks, Erin; Rosen, Mitchell
2010-01-01
Because of the emergence of e-commerce and developments in print engines designed for economical output of very short runs, there are increased business opportunities and consumer options for print-on-demand books and photobooks. The current state of these printing modes allows for direct uploading of book files via the web, printing on nonoffset printers, and distributing by standard parcel or mail delivery services. The goal of this research is to assess the image quality of print-on-demand books and photobooks produced by various Web-based vendors and to identify correlations between psychophysical results and objective metrics. Six vendors were identified for one-off (single-copy) print-on-demand books, and seven vendors were identified for photobooks. Participants rank ordered overall quality of a subset of individual pages from each book, where the pages included text, photographs, or a combination of the two. Observers also reported overall quality ratings and price estimates for the bound books. Objective metrics of color gamut, color accuracy, accuracy of International Color Consortium profile usage, eye-weighted root mean square L*, and cascaded modulation transfer acutance were obtained and compared to the observer responses. We introduce some new methods for normalizing data as well as for strengthening the statistical significance of the results. Our approach includes the use of latent mixed-effect models. We found statistically significant correlation with overall image quality and some of the spatial metrics, but correlations between psychophysical results and other objective metrics were weak or nonexistent. Strong correlation was found between psychophysical results of overall quality assessment and estimated price associated with quality. The photobook set of vendors reached higher image-quality ratings than the set of print-on-demand vendors. However, the photobook set had higher image-quality variability.
Quantitative evaluation of muscle synergy models: a single-trial task decoding approach
Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano
2013-01-01
Muscle synergies, i.e., invariant coordinated activations of groups of muscles, have been proposed as building blocks that the central nervous system (CNS) uses to construct the patterns of muscle activity utilized for executing movements. Several efficient dimensionality reduction algorithms that extract putative synergies from electromyographic (EMG) signals have been developed. Typically, the quality of synergy decompositions is assessed by computing the Variance Accounted For (VAF). Yet, little is known about the extent to which the combination of those synergies encodes task-discriminating variations of muscle activity in individual trials. To address this question, here we conceive and develop a novel computational framework to evaluate muscle synergy decompositions in task space. Unlike previous methods considering the total variance of muscle patterns (VAF based metrics), our approach focuses on variance discriminating execution of different tasks. The procedure is based on single-trial task decoding from muscle synergy activation features. The task decoding based metric evaluates quantitatively the mapping between synergy recruitment and task identification and automatically determines the minimal number of synergies that captures all the task-discriminating variability in the synergy activations. In this paper, we first validate the method on plausibly simulated EMG datasets. We then show that it can be applied to different types of muscle synergy decomposition and illustrate its applicability to real data by using it for the analysis of EMG recordings during an arm pointing task. We find that time-varying and synchronous synergies with similar number of parameters are equally efficient in task decoding, suggesting that in this experimental paradigm they are equally valid representations of muscle synergies. Overall, these findings stress the effectiveness of the decoding metric in systematically assessing muscle synergy decompositions in task space. PMID:23471195
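Synergy extraction is commonly done with non-negative matrix factorization; a sketch with one common VAF definition follows (the paper evaluates decompositions generally, so treat this as representative rather than their pipeline):

```python
import numpy as np
from sklearn.decomposition import NMF

def extract_synergies(emg, n_synergies):
    """Extract muscle synergies by NMF and report VAF.
    emg: (samples, muscles) non-negative envelope matrix."""
    model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500)
    H = model.fit_transform(emg)      # synergy activations over time
    W = model.components_             # synergy-to-muscle weight vectors
    vaf = 1 - np.sum((emg - H @ W) ** 2) / np.sum(emg ** 2)  # one common VAF form
    return H, W, vaf
```

The decoding-based metric proposed in the abstract would then ask how well the per-trial activations H discriminate tasks, rather than stopping at VAF.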
Kelly, Brendan S; Rainford, Louise A; Darcy, Sarah P; Kavanagh, Eoin C; Toomey, Rachel J
2016-07-01
Purpose To investigate the development of chest radiograph interpretation skill through medical training by measuring both diagnostic accuracy and eye movements during visual search. Materials and Methods An institutional exemption from full ethical review was granted for the study. Five consultant radiologists were deemed the reference expert group, and four radiology registrars, five senior house officers (SHOs), and six interns formed four clinician groups. Participants were shown 30 chest radiographs, 14 of which had a pneumothorax, and were asked to give their level of confidence as to whether a pneumothorax was present. Receiver operating characteristic (ROC) curve analysis was carried out on diagnostic decisions. Eye movements were recorded with a Tobii TX300 (Tobii Technology, Stockholm, Sweden) eye tracker. Four eye-tracking metrics were analyzed. Variables were compared to identify any differences between groups. All data were compared by using the Friedman nonparametric method. Results The average area under the ROC curve for the groups increased with experience (0.947 for consultants, 0.792 for registrars, 0.693 for SHOs, and 0.659 for interns; P = .009). A significant difference in diagnostic accuracy was found between consultants and registrars (P = .046). All four eye-tracking metrics decreased with experience, and there were significant differences between registrars and SHOs. Total reading time decreased with experience; it was significantly lower for registrars compared with SHOs (P = .046) and for SHOs compared with interns (P = .025). Conclusion Chest radiograph interpretation skill increased with experience, both in terms of diagnostic accuracy and visual search. The observed level of experience at which there was a significant difference was higher for diagnostic accuracy than for eye-tracking metrics. (©) RSNA, 2016 Online supplemental material is available for this article.
NASA Astrophysics Data System (ADS)
Shahedi, Maysam; Fenster, Aaron; Cool, Derek W.; Romagnoli, Cesare; Ward, Aaron D.
2013-03-01
3D segmentation of the prostate in medical images is useful to prostate cancer diagnosis and therapy guidance, but is time-consuming to perform manually. Clinical translation of computer-assisted segmentation algorithms for this purpose requires a comprehensive and complementary set of evaluation metrics that are informative to the clinical end user. We have developed an interactive 3D prostate segmentation method for 1.5T and 3.0T T2-weighted magnetic resonance imaging (T2W MRI) acquired using an endorectal coil. We evaluated our method against manual segmentations of 36 3D images using complementary boundary-based (mean absolute distance; MAD), regional overlap (Dice similarity coefficient; DSC) and volume difference (ΔV) metrics. Our technique is based on inter-subject prostate shape and local boundary appearance similarity. In the training phase, we calculated a point distribution model (PDM) and a set of local mean intensity patches centered on the prostate border to capture shape and appearance variability. To segment an unseen image, we defined a set of rays - one corresponding to each of the mean intensity patches computed in training - emanating from the prostate centre. We used a radial-based search strategy and translated each mean intensity patch along its corresponding ray, selecting as a candidate the boundary point with the highest normalized cross correlation along each ray. These boundary points were then regularized using the PDM. For the whole gland, we measured a mean ± std MAD of 2.5 ± 0.7 mm, DSC of 80 ± 4%, and ΔV of 1.1 ± 8.8 cc. We also provided an anatomic breakdown of these metrics within the prostatic base, mid-gland, and apex.
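The boundary-scoring step can be illustrated with a zero-mean normalized cross correlation between a candidate patch and a mean intensity template (a generic NCC; the method's patch geometry and PDM regularization are not reproduced here):

```python
import numpy as np

def normalized_cross_correlation(patch, template):
    """Zero-mean NCC between an image patch and a mean intensity template,
    as used to score candidate boundary points along a ray."""
    p = np.asarray(patch, float) - np.mean(patch)
    t = np.asarray(template, float) - np.mean(template)
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

# Radial search sketch: slide the template along a ray from the prostate centre
# and keep the offset with the highest NCC as the boundary candidate.
```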
Construction of self-dual codes in the Rosenbloom-Tsfasman metric
NASA Astrophysics Data System (ADS)
Krisnawati, Vira Hari; Nisa, Anzi Lina Ukhtin
2017-12-01
A linear code is a very basic code and very useful in coding theory. Generally, a linear code is a code over a finite field with the Hamming metric. Among the most interesting families of codes, the family of self-dual codes is a very important one, because it is the best known error-correcting code. The concept of the Hamming metric has been developed into the Rosenbloom-Tsfasman metric (RT-metric). The inner product in the RT-metric is different from the Euclidean inner product that is used to define duality in the Hamming metric. Most of the codes that are self-dual in the Hamming metric are not so in the RT-metric. Moreover, the generator matrix is very important for constructing a code because it contains a basis of the code. Therefore, in this paper, we give some theorems and methods for constructing self-dual codes in the RT-metric by considering properties of the inner product and generator matrices. We also illustrate examples for every kind of construction.
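For orientation, one common convention for the Rosenbloom-Tsfasman weight and distance on vectors over a finite field is the following (background only, stated for the vector case; the paper's setting and its duality-defining inner product may generalize this, e.g., row-wise for matrix codes):

```latex
\rho(x) = \max\{\, i : x_i \neq 0 \,\}, \quad \rho(\mathbf{0}) = 0, \qquad
d_{\rho}(x, y) = \rho(x - y), \qquad x, y \in \mathbb{F}_q^{\,n}.
```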
Development of the Expert System Domain Advisor and Analysis Tool
1991-09-01
analysis. Typical of the current methods in use at this time is the "TAROT metric". This method defines a decision rule whose output is whether to go... [Recovered scan fragments: Appendix B - TAROT METRIC. B. INTRODUCTION: The system chart of ESEM, Figure 1, shows the following three risk-based decision points: i. At project initiation... B-1. Evaluation Factors for ES Development - FACTORS / POSSIBLE VALUE RATINGS: TAROT metric (overall suitability): Poor, Fair...]
Siegel, Miriam; Starks, Sarah E.; Sanderson, Wayne T.; Kamel, Freya; Hoppin, Jane A.; Gerr, Fred
2017-01-01
Purpose Although organic solvents are often used in agricultural operations, neurotoxic effects of solvent exposure have not been extensively studied among farmers. The current analysis examined associations between questionnaire-based metrics of organic solvent exposure and depressive symptoms among farmers. Methods Results from 692 male Agricultural Health Study participants were analyzed. Solvent type and exposure duration were assessed by questionnaire. An “ever-use” variable and years of use categories were constructed for exposure to gasoline, paint/lacquer thinner, petroleum distillates, and any solvent. Depressive symptoms were ascertained with the Center for Epidemiologic Studies Depression Scale (CES-D); scores were analyzed separately as continuous (0-60) and dichotomous (<16 versus ≥16) variables. Multivariate linear and logistic regression models were used to estimate crude and adjusted associations between measures of solvent exposure and CES-D score. Results Forty-one percent of the sample reported some solvent exposure. The mean CES-D score was 6.5 (SD=6.4; median=5; range=0 – 44); 92% of the sample had a score below 16. After adjusting for covariates, statistically significant associations were observed between ever-use of any solvent, long duration of any solvent exposure, ever-use of gasoline, ever-use of petroleum distillates, and short duration of petroleum distillate exposure and continuous CES-D score (p<0.05). Although nearly all associations were positive, fewer statistically significant associations were observed between metrics of solvent exposure and the dichotomized CES-D variable. Conclusions Solvent exposures were associated with depressive symptoms among farmers. Efforts to limit exposure to organic solvents may reduce the risk of depressive symptoms among farmers. PMID:28702848
Visible spectrum-based non-contact HRV and dPTT for stress detection
NASA Astrophysics Data System (ADS)
Kaur, Balvinder; Hutchinson, J. Andrew; Ikonomidou, Vasiliki N.
2017-05-01
Stress is a major health concern that not only compromises our quality of life, but also affects our physical health and well-being. Despite its importance, our ability to objectively detect and quantify it in a real-time, non-invasive manner is very limited. This capability would have a wide variety of medical, military, and security applications. We have developed a pipeline of image and signal processing algorithms to make such a system practical, which includes remote cardiac pulse detection based on visible spectrum videos and physiological stress detection based on the variability in the remotely detected cardiac signals. First, to determine a reliable cardiac pulse, principal component analysis (PCA) was applied for noise reduction and independent component analysis (ICA) was applied for source selection. To determine accurate cardiac timing for heart rate variability (HRV) analysis, a least squares (LS) estimate based on blind source separation was used to determine signal peaks that were closely related to R-peaks of the electrocardiogram (ECG) signal. A new metric, differential pulse transit time (dPTT), defined as the difference in arrival time of the remotely acquired cardiac signal at two separate distal locations, was derived. It was demonstrated that the remotely acquired metrics, HRV and dPTT, have potential for remote stress detection. The developed algorithms were tested against human subject data collected under two physiological conditions using the modified Trier Social Stress Test (TSST) and the Affective Stress Response Test (ASRT). This research provides evidence that the variability in remotely acquired blood wave (BW) signals can be used for stress detection (high and mild), and as a guide for further development of a real-time remote stress detection system based on remote HRV and dPTT.
Fischer, H Felix; Rose, Matthias
2016-10-19
Recently, a growing number of Item-Response Theory (IRT) models have been published that allow estimation of a common latent variable from data derived from different Patient Reported Outcomes (PROs). When using data from different PROs, direct estimation of the latent variable has some advantages over the use of sum score conversion tables. It requires substantial proficiency in the field of psychometrics to fit such models using contemporary IRT software. We developed a web application (http://www.common-metrics.org), which allows easier estimation of latent variable scores using IRT models that calibrate different measures on instrument-independent scales. Currently, the application allows estimation using six different IRT models for Depression, Anxiety, and Physical Function. Based on published item parameters, users of the application can directly obtain latent trait estimates using expected a posteriori (EAP) estimation for sum scores as well as for specific response patterns, and Bayes modal (MAP), weighted likelihood (WLE), and maximum likelihood (ML) methods, under three different prior distributions. The obtained estimates can be downloaded and analyzed using standard statistical software. This application enhances the usability of IRT modeling for researchers by allowing comparison of latent trait estimates over different PROs, such as the Patient Health Questionnaire Depression (PHQ-9) and Anxiety (GAD-7) scales, the Center of Epidemiologic Studies Depression Scale (CES-D), the Beck Depression Inventory (BDI), PROMIS Anxiety and Depression Short Forms and others. Advantages of this approach include comparability of data derived with different measures and tolerance against missing values. The validity of the underlying models needs to be investigated in the future.
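A compact sketch of EAP scoring for a response pattern under a 2PL-type model with a standard-normal prior follows (toy item parameters; the site's calibrated models, e.g., graded response models for polytomous items, are more elaborate):

```python
import numpy as np

def eap_2pl(responses, a, b, grid=None):
    """EAP latent trait estimate under a 2PL IRT model, standard-normal prior.
    responses: 0/1 item scores; a, b: item discriminations and difficulties."""
    if grid is None:
        grid = np.linspace(-4, 4, 81)
    r = np.asarray(responses)[:, None]
    p = 1.0 / (1.0 + np.exp(-np.asarray(a)[:, None] * (grid[None, :] - np.asarray(b)[:, None])))
    lik = np.prod(np.where(r == 1, p, 1 - p), axis=0)  # likelihood on the theta grid
    post = lik * np.exp(-grid ** 2 / 2)                # unnormalized posterior
    return float(np.sum(grid * post) / np.sum(post))   # posterior mean = EAP

# Example: theta for a person endorsing 7 of 9 items (hypothetical parameters).
theta = eap_2pl([1, 1, 1, 1, 1, 1, 1, 0, 0], a=np.full(9, 1.5), b=np.linspace(-2, 2, 9))
```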
NASA Astrophysics Data System (ADS)
Jiang, Yicheng; Cheng, Ping; Ou, Yangkui
2001-09-01
A new method for target classification of high-range resolution radar is proposed. It uses neural learning to obtain invariant subclass features of training range profiles. A modified Euclidean metric based on the Box-Cox transformation technique is investigated to improve Nearest Neighbor target classification. Classification experiments using real radar data from three different aircraft have demonstrated that classification error can be reduced by 8% when the method proposed in this paper is chosen instead of the conventional one. The results of this paper have shown that by choosing an optimized metric, it is indeed possible to reduce the classification error without increasing the number of samples.
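A minimal sketch of the idea, nearest-neighbour classification in a Box-Cox-transformed feature space (the paper's exact modified metric and optimized lambda are not specified here, so lambda is a placeholder to be tuned on training data):

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox transform; lam = 0 falls back to the log."""
    x = np.asarray(x, float)                 # range profiles assumed positive
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def nn_classify(train, labels, query, lam=0.5):
    """1-nearest-neighbour classification with a Euclidean metric computed on
    Box-Cox-transformed profiles."""
    t = box_cox(train, lam)
    q = box_cox(query, lam)
    d = np.linalg.norm(t - q, axis=1)
    return labels[int(np.argmin(d))]
```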
Ribeiro, Sónia Carvalho; Lovett, Andrew
2009-07-01
The integration of socio-economic and environmental objectives is a major challenge in developing strategies for sustainable landscapes. We investigated associations between socio-economic variables, landscape metrics and measures of forest condition in the context of Portugal. The main goals of the study were to 1) investigate relationships between forest conditions and measures of socio-economic development at national and regional scales, 2) test the hypothesis that a systematic variation in forest landscape metrics occurs according to the stage of socio-economic development and, 3) assess the extent to which landscape metrics can inform strategies to enhance forest sustainability. A ranking approach and statistical techniques such as Principal Component Analysis were used to achieve these objectives. Relationships between socio-economic characteristics, landscape metrics and measures of forest condition were only significant in the regional analysis of municipalities in Northern Portugal. Landscape metrics for different tree species displayed significant variations across socio-economic groups of municipalities and these differences were consistent with changes in characteristics suggested by the forest transition model. The use of metrics also helped inform place-specific strategies to improve forest management, though it was also apparent that further work was required to better incorporate differences in forest functions into sustainability planning.
Vargas, Hebert Alberto; Kramer, Gem M; Scott, Andrew M; Weickhardt, Andrew; Meier, Andreas A; Parada, Nicole; Beattie, Bradley J; Humm, John L; Staton, Kevin D; Zanzonico, Pat B; Lyashchenko, Serge K; Lewis, Jason S; Yaqub, Maqsood; Sosa, Ramon E; van den Eertwegh, Alfons J; Davis, Ian D; Ackermann, Uwe; Pathmaraj, Kunthi; Schuit, Robert C; Windhorst, Albert D; Chua, Sue; Weber, Wolfgang A; Larson, Steven M; Scher, Howard I; Lammertsma, Adriaan A; Hoekstra, Otto; Morris, Michael J
2018-04-06
18F-fluorodihydrotestosterone (18F-FDHT) is a radiolabeled analogue of the androgen receptor's primary ligand that is currently being credentialed as a biomarker for prognosis, response, and pharmacodynamic effects of new therapeutics. As part of the biomarker qualification process, we prospectively assessed its reproducibility and repeatability in men with metastatic castration-resistant prostate cancer (mCRPC). Methods: We conducted a prospective multi-institutional study of mCRPC patients undergoing two (test/re-test) 18F-FDHT PET/CT scans on two consecutive days. Two independent readers evaluated all examinations and recorded standardized uptake values (SUVs), androgen receptor-positive tumor volumes (ARTV), and total lesion uptake (TLU) for the most avid lesion detected in each of 32 pre-defined anatomical regions. The relative absolute difference and reproducibility coefficient (RC) of each metric were calculated between the test and re-test scans. Linear regression analyses, intra-class correlation coefficients (ICC), and Bland-Altman plots were used to evaluate repeatability of 18F-FDHT metrics. The coefficient of variation (COV) and ICC were used to assess inter-observer reproducibility. Results: Twenty-seven patients with 140 18F-FDHT-avid regions were included. The best repeatability among 18F-FDHT uptake metrics was found for the SUV metrics (SUVmax, SUVmean, and SUVpeak), with no significant differences in repeatability among them. Correlations between the test and re-test scans were strong for all SUV metrics (R2 ≥ 0.92; ICC ≥ 0.97). The RCs of the SUV metrics ranged from 21.3% for SUVpeak to 24.6% for SUVmax. The test and re-test ARTV and TLU were highly correlated (R2 and ICC ≥ 0.97), although their variability was significantly higher than that of the SUV metrics (RCs > 46.4%). PSA levels, Gleason score, weight, and age did not affect repeatability, nor did total injected activity, uptake measurement time, or differences in uptake time between the two scans. Restricting the analysis to the single most avid lesion per patient, the five most avid lesions per patient, only lesions ≥ 4.2 mL, or only lesions with an SUV ≥ 4 g/mL, or normalizing SUV to the area under the parent plasma activity concentration-time curve, did not significantly affect repeatability. All metrics showed high inter-observer reproducibility (ICC > 0.98; COV = 0.2-10.8%). Conclusion: 18F-FDHT PET/CT is a highly reproducible means of imaging mCRPC. Among 18F-FDHT uptake metrics, SUV had the highest repeatability. These performance characteristics lend themselves to further biomarker development and clinical qualification of the tracer. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
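A minimal sketch of the test/re-test summaries named above, following common PET repeatability conventions (an assumption, since the paper's exact formulas are not restated here): percent differences are taken against the pair mean, and RC is 1.96 times their standard deviation.

    import numpy as np

    def repeatability(test, retest):
        test, retest = np.asarray(test, float), np.asarray(retest, float)
        pair_mean = (test + retest) / 2.0
        rel_diff = 100.0 * (retest - test) / pair_mean   # percent difference per lesion
        return {
            "RC_percent": 1.96 * rel_diff.std(ddof=1),   # reproducibility coefficient
            "mean_abs_diff_percent": np.mean(np.abs(rel_diff)),
            "R2": np.corrcoef(test, retest)[0, 1] ** 2,  # test/re-test correlation
        }

    # Hypothetical SUVmax values for four lesions on day 1 and day 2.
    print(repeatability([5.1, 7.3, 3.2, 9.8], [5.6, 6.9, 3.5, 10.4]))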
The response dynamics of preferential choice.
Koop, Gregory J; Johnson, Joseph G
2013-12-01
The ubiquity of psychological process models requires an increased degree of sophistication in the methods and metrics that we use to evaluate them. We contribute to this venture by capitalizing on recent work in cognitive science analyzing response dynamics, which shows that the dynamics of information processing that bear on an intended action are also revealed in the motor system. This decidedly "embodied" view suggests that researchers are missing out on potential dependent variables with which to evaluate their models: those associated with the motor response that produces a choice. The current work develops a method for collecting and analyzing such data in the domain of decision making. We first validate this method using widely normed stimuli from the International Affective Picture System (Experiment 1), and demonstrate that curvature in response trajectories provides a metric of the competition between choice options. We next extend the method to risky decision making (Experiment 2) and develop predictions for three popular classes of process model. The data provided by response dynamics demonstrate that choices contrary to the maxim of risk seeking in losses and risk aversion in gains may be the product of at least one "online" preference reversal, and can thus begin to discriminate among the candidate models. Finally, we incorporate attentional data collected via eye tracking (Experiment 3) to develop a formal computational model of joint information sampling and preference accumulation. In sum, we validate response dynamics for use in preferential choice tasks and demonstrate the unique conclusions afforded by response dynamics over and above traditional methods. Copyright © 2013 Elsevier Inc. All rights reserved.
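A minimal sketch of a trajectory curvature metric of the kind used here, assuming mouse coordinates sampled from movement start to the chosen option: maximum absolute deviation from, and signed area around, the straight start-to-end line (the paper's exact summaries may differ).

    import numpy as np

    def curvature_metrics(x, y):
        x, y = np.asarray(x, float), np.asarray(y, float)
        p0 = np.array([x[0], y[0]])
        line = np.array([x[-1], y[-1]]) - p0
        # Signed perpendicular distance of each sample from the ideal line.
        dev = ((x - p0[0]) * line[1] - (y - p0[1]) * line[0]) / np.linalg.norm(line)
        return np.max(np.abs(dev)), np.trapz(dev)        # max deviation, signed area

    mad, area = curvature_metrics([0, 0.1, 0.3, 0.5, 1.0], [0, 0.4, 0.7, 0.9, 1.0])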
Comparing exposure metrics for classifying ‘dangerous heat’ in heat wave and health warning systems
Zhang, Kai; Rood, Richard B.; Michailidis, George; Oswald, Evan M.; Schwartz, Joel D.; Zanobetti, Antonella; Ebi, Kristie L.; O’Neill, Marie S.
2012-01-01
Heat waves have been linked to excess mortality and morbidity, and are projected to increase in frequency and intensity with a warming climate. This study compares exposure metrics used to trigger heat wave and health warning systems (HHWS), and introduces a novel multi-level hybrid clustering method to identify potentially dangerous hot days. Two-level and three-level hybrid clustering analyses, as well as common indices used to trigger HHWS, including spatial synoptic classification (SSC) and the 90th, 95th, and 99th percentiles of minimum and relative minimum temperature (using a 10-day reference period), were calculated using a summertime weather dataset for Detroit from 1976 to 2006. The days classified as 'hot' by the hybrid clustering, SSC, and minimum and relative minimum temperature methods differed by method type. SSC tended to include days with, on average, 2.6 °C lower daily minimum temperature and 5.3 °C lower dew point than days identified by other methods. These metrics were evaluated by comparing their performance in predicting excess daily mortality. The 99th percentile of minimum temperature was generally the most predictive, followed by the three-level hybrid clustering method, the 95th percentile of minimum temperature, SSC and others. Our proposed clustering framework has more flexibility and requires less meteorological prior information than the synoptic classification methods. Comparison of these metrics in predicting excess daily mortality suggests that metrics thought to better characterize physiological heat stress by considering several weather conditions simultaneously may not be the same metrics that are better at predicting heat-related mortality, which has significant implications for HHWSs. PMID:22673187
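A minimal sketch of the two simple trigger families compared here: an absolute percentile threshold on daily minimum temperature (the study's best performer at the 99th percentile) and a relative version against a 10-day reference period. The function and its defaults are illustrative, not the study's implementation.

    import numpy as np
    import pandas as pd

    def hot_day_flags(tmin, pct=99, window=10):
        # Absolute trigger: Tmin above its summertime `pct` percentile.
        # Relative trigger: Tmin anomaly vs. the mean of the preceding
        # 10 days, above the same percentile of that anomaly.
        s = pd.Series(tmin, dtype=float)
        absolute = s > np.percentile(s, pct)
        anomaly = s - s.rolling(window).mean().shift(1)
        relative = anomaly > np.nanpercentile(anomaly, pct)
        return absolute, relative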
Evaluating Algorithm Performance Metrics Tailored for Prognostics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2009-01-01
Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of a system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtime. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess the key performance aspects expected of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared with conventional metrics. Specifically, four algorithms are compared: Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR). These algorithms vary in complexity and in their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms differently, and that suitable metrics may be chosen depending on the requirements and constraints. Beyond these results, the metrics offer ideas about how metrics suitable for prognostics may be designed so that the evaluation procedure can be standardized.
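One example of a prognostics-specific check in the spirit of this paper's metrics is an alpha-lambda test: part-way to end of life, does the predicted RUL fall within plus or minus alpha of the true RUL? The sketch below is a generic rendering with illustrative defaults, not the authors' exact definition.

    import numpy as np

    def alpha_lambda(t, rul_pred, t_eol, alpha=0.2, lam=0.5):
        # Evaluate at the instant a fraction `lam` of the way to end of life.
        t = np.asarray(t, float)
        t_lambda = t[0] + lam * (t_eol - t[0])
        i = int(np.argmin(np.abs(t - t_lambda)))   # prediction closest to t_lambda
        true_rul = t_eol - t[i]
        lo, hi = (1 - alpha) * true_rul, (1 + alpha) * true_rul
        return lo <= np.asarray(rul_pred, float)[i] <= hi

    # Hypothetical run: predictions every 10 h, failure at t_eol = 100 h.
    ok = alpha_lambda(np.arange(0, 100, 10),
                      [95, 88, 75, 68, 55, 48, 37, 28, 18, 9], 100)   # -> True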
Wang, Jin-Hui; Zuo, Xi-Nian; Gohel, Suril; Milham, Michael P.; Biswal, Bharat B.; He, Yong
2011-01-01
Graph-based computational network analysis has proven a powerful tool to quantitatively characterize functional architectures of the brain. However, the test-retest (TRT) reliability of graph metrics of functional networks has not been systematically examined. Here, we investigated the TRT reliability of topological metrics of functional brain networks derived from resting-state functional magnetic resonance imaging data. Specifically, we evaluated both short-term (<1 hour apart) and long-term (>5 months apart) TRT reliability for 12 global and 6 local nodal network metrics. We found that the reliability of global network metrics was overall low, threshold-sensitive, and dependent on several factors: scanning time interval (TI; long-term > short-term), network membership (NM; networks excluding negative correlations > networks including negative correlations), and network type (NT; binarized networks > weighted networks). This dependence was modulated by a further factor, the node definition (ND) strategy. Local nodal reliability exhibited large variability across nodal metrics and a spatially heterogeneous distribution. Nodal degree was the most reliable metric and varied the least across the factors above. Hub regions in association and limbic/paralimbic cortices showed moderate TRT reliability. Importantly, nodal reliability was robust to the four factors mentioned above. Simulation analysis revealed that global network metrics were extremely sensitive (though to varying degrees) to noise in functional connectivity, and that weighted networks generated numerically more reliable results than binarized networks. Nodal network metrics, in contrast, showed high resistance to noise in functional connectivity, with no NT-related differences in that resistance. These findings carry important implications for choosing reliable analytical schemes and network metrics of interest. PMID:21818285
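For readers wanting to reproduce a TRT reliability number, here is a minimal sketch of a two-way random-effects, absolute-agreement ICC(2,1) for a subjects x sessions matrix; the paper's exact ICC variant is not restated here, so treat this as one standard choice.

    import numpy as np

    def icc_2_1(scores):
        Y = np.asarray(scores, float)
        n, k = Y.shape
        grand = Y.mean()
        ms_r = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
        ms_c = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)   # sessions
        resid = (Y - Y.mean(axis=1, keepdims=True)
                   - Y.mean(axis=0, keepdims=True) + grand)
        ms_e = np.sum(resid ** 2) / ((n - 1) * (k - 1))
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    # Hypothetical global-efficiency values for four subjects, two scans.
    print(icc_2_1([[0.30, 0.28], [0.45, 0.41], [0.52, 0.55], [0.61, 0.66]]))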
Research on cardiovascular disease prediction based on distance metric learning
NASA Astrophysics Data System (ADS)
Ni, Zhuang; Liu, Kui; Kang, Guixia
2018-04-01
Distance metric learning algorithms have been widely applied to medical diagnosis and have exhibited strengths in classification problems. The k-nearest neighbour (KNN) classifier is an efficient method that treats each feature equally. Large margin nearest neighbour classification (LMNN) improves the accuracy of KNN by learning a global distance metric, but it does not consider the locality of data distributions. In this paper, we propose a new distance metric algorithm named COS-SUBLMNN, which combines a cosine metric with LMNN and pays closer attention to the local features of the data, overcoming this shortcoming of LMNN and improving classification accuracy. The proposed methodology is verified on CVD patient vectors derived from real-world medical data. The experimental results show that our method provides higher accuracy than KNN and LMNN, demonstrating the effectiveness of the CVD risk prediction model based on COS-SUBLMNN.
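A minimal sketch of classification under a learned Mahalanobis-type metric, with the matrix M simply assumed given (standing in for whatever an LMNN-style learner such as COS-SUBLMNN would produce):

    import numpy as np

    def metric_knn(X_train, y_train, x, M, k=3):
        diff = np.asarray(X_train, float) - np.asarray(x, float)
        # d_i = sqrt((x_i - x)^T M (x_i - x)) for every training point.
        d = np.sqrt(np.einsum("ij,jk,ik->i", diff, M, diff))
        nearest = np.asarray(y_train)[np.argsort(d)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        return vals[np.argmax(counts)]                   # majority vote

    X = np.array([[1.0, 0.2], [0.9, 0.1], [3.0, 2.9], [3.1, 3.0]])
    y = np.array([0, 0, 1, 1])
    print(metric_knn(X, y, [1.1, 0.3], M=np.eye(2)))     # -> 0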
Gardner, Bethany T.; Dale, Ann Marie; Buckner-Petty, Skye; Van Dillen, Linda; Amick, Benjamin C.; Evanoff, Bradley
2016-01-01
Objective To assess construct and discriminant validity of four health-related work productivity loss questionnaires in relation to employer productivity metrics, and to describe variation in economic estimates of productivity loss provided by the questionnaires in healthy workers. Methods 58 billing office workers completed surveys including health information and four productivity loss questionnaires. Employer productivity metrics and work hours were also obtained. Results Productivity loss questionnaires were weakly to moderately correlated with employer productivity metrics. Workers with more health complaints reported greater health-related productivity loss than healthier workers, but showed no loss on employer productivity metrics. Economic estimates of productivity loss showed wide variation among questionnaires, yet no loss of actual productivity. Conclusions Additional studies are needed comparing questionnaires with objective measures in larger samples and other industries, to improve measurement methods for health-related productivity loss. PMID:26849261
Alvarez-Berastegui, Diego; Ciannelli, Lorenzo; Aparicio-Gonzalez, Alberto; Reglero, Patricia; Hidalgo, Manuel; López-Jurado, Jose Luis; Tintoré, Joaquín; Alemany, Francisco
2014-01-01
Seascape ecology is an emerging discipline focused on understanding how features of the marine habitat influence the spatial distribution of marine species. However, there is still a gap in the development of concepts and techniques for its application in the marine pelagic realm, where there are no clear boundaries delimiting habitats. Here we demonstrate that pelagic seascape metrics, defined as a combination of hydrographic variables and their spatial gradients calculated at an appropriate spatial scale, improve our ability to model pelagic fish distribution. We apply the analysis to study the spawning locations of two tuna species: Atlantic bluefin and bullet tuna. These two species represent a gradient in life history strategies. Bluefin tuna has a large body size and is a long-distance migrant, while bullet tuna has a small body size and lives year-round in coastal waters within the Mediterranean Sea. The results show that the performance of models incorporating the proposed seascape metrics increases significantly compared with models that do not consider these metrics. This improvement is more important for Atlantic bluefin tuna, whose spawning ecology depends on the local oceanographic scenario, than for bullet tuna, which is less influenced by hydrographic conditions. Our study advances our understanding of how species perceive their habitat and confirms that the spatial scale at which the seascape metrics provide information is related to the spawning ecology and life history strategy of each species.
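A minimal sketch of one building block of such metrics: the spatial gradient magnitude of a gridded hydrographic field (e.g., sea-surface temperature). The grid spacing and toy field are illustrative assumptions.

    import numpy as np

    def gradient_magnitude(field, dx_km=4.0):
        # d/dy (axis 0) and d/dx (axis 1) via central differences.
        gy, gx = np.gradient(np.asarray(field, float), dx_km)
        return np.hypot(gx, gy)                  # field units per km

    sst = np.outer(np.linspace(20, 24, 50), np.ones(60))   # toy SST grid
    front_strength = gradient_magnitude(sst)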
NASA Astrophysics Data System (ADS)
Nhiwatiwa, Tamuka; Dalu, Tatenda; Sithole, Tatenda
2017-12-01
River systems support high human population densities owing to their favourable conditions for agriculture, water supply and transportation. Despite human dependence on river systems, anthropogenic activities severely degrade water quality. The main aim of this study was to assess the health of the Ngamo River using diatom and macroinvertebrate community structure, based on multivariate analyses and community metrics. Ammonia, pH, salinity, total phosphorus and temperature differed significantly among the study seasons. Diatom and macroinvertebrate taxon richness increased downstream, suggesting an improvement in water quality with increasing distance from the pollution point sources. Canonical correspondence analyses identified nutrients (total nitrogen and reactive phosphorus) as important variables structuring the diatom and macroinvertebrate communities. The community metrics and diversity indices for both bioindicators highlighted that the water quality of the river system was very poor. These findings indicate that both methods can be used for water quality assessments, e.g. of sewage and agricultural pollution, and they show high potential for use in water quality monitoring programmes in other regions.
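One of the simplest diversity indices of the kind used in such assessments is Shannon's H'; a minimal sketch from taxon counts:

    import numpy as np

    def shannon_h(counts):
        p = np.asarray(counts, float)
        p = p[p > 0] / p.sum()                   # drop absent taxa, normalize
        return -np.sum(p * np.log(p))

    print(shannon_h([12, 5, 3, 1]))              # higher H' = more diverse site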
Selective attrition and intraindividual variability in response time moderate cognitive change.
Yao, Christie; Stawski, Robert S; Hultsch, David F; MacDonald, Stuart W S
2016-01-01
Selection of a developmental time metric is useful for understanding causal processes that underlie aging-related cognitive change and for the identification of potential moderators of cognitive decline. Building on research suggesting that time to attrition is a metric sensitive to non-normative influences of aging (e.g., subclinical health conditions), we examined reason for attrition and intraindividual variability (IIV) in reaction time as predictors of cognitive performance. Three hundred and four community-dwelling older adults (64-92 years) completed annual assessments in a longitudinal study. IIV was calculated from baseline performance on reaction time tasks. Multilevel models were fit to examine patterns and predictors of cognitive change. We show that time to attrition was associated with cognitive decline. Greater IIV was associated with declines on executive functioning and episodic memory measures. Attrition due to personal health reasons was also associated with decreased executive functioning compared to that of individuals who remained in the study. These findings suggest that time to attrition is a useful metric for representing cognitive change, and that reason for attrition and IIV are predictive of non-normative influences that may underlie instances of cognitive loss in older adults.
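A minimal sketch of the usual IIV computation, the intraindividual standard deviation (ISD) of reaction times across trials; studies typically first residualize RTs for practice and group effects, which is omitted in this sketch.

    import numpy as np

    def intraindividual_sd(rt_matrix):
        # One ISD per participant from a participants x trials array.
        return np.asarray(rt_matrix, float).std(axis=1, ddof=1)

    iiv = intraindividual_sd([[520, 610, 575, 590], [480, 495, 470, 505]])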
Jekova, Irena; Krasteva, Vessela; Leber, Remo; Schmid, Ramun; Twerenbold, Raphael; Müller, Christian; Reichlin, Tobias; Abächerli, Roger
Electrocardiogram (ECG) biometrics is an advanced technology, not yet covered by guidelines on criteria, features and leads for maximal authentication accuracy. This study aims to define the minimal set of morphological metrics in the 12-lead ECG by optimization towards high reliability and security, with validation in a person verification model across a large population. A standard 12-lead resting ECG database from 574 non-cardiac patients with two remote recordings (>1 year apart) was used. A commercial ECG analysis module (Schiller AG) measured 202 morphological features, including lead-specific amplitudes, durations, ST-metrics, and axes. Coefficient of variation (CV, intersubject variability) and percent mean absolute difference (PMAD, intrasubject reproducibility) defined the optimization (PMAD/CV -> min) and restriction (CV < 30%) criteria for selection of the most stable and distinctive features. Linear discriminant analysis (LDA) validated the non-redundant feature set for person verification. Maximal LDA verification sensitivity (85.3%) and specificity (86.4%) were validated for 11 optimal features: R-amplitude (I, II, V1, V2, V3, V5), S-amplitude (V1, V2), Tnegative-amplitude (aVR), and R-duration (aVF, V1). Copyright © 2016 Elsevier Inc. All rights reserved.
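A minimal sketch of the selection criteria described above (rank features by PMAD/CV, keep only those with CV below 30%) followed by an LDA fit; the synthetic data and sizes are illustrative stand-ins for the real recordings.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def select_features(rec1, rec2, cv_max=30.0):
        # CV: intersubject spread; PMAD: intrasubject test/re-test difference.
        mean_all = (rec1 + rec2) / 2.0
        cv = 100.0 * mean_all.std(axis=0, ddof=1) / np.abs(mean_all.mean(axis=0))
        pmad = 100.0 * np.mean(np.abs(rec1 - rec2) / np.abs(mean_all), axis=0)
        score = np.where(cv < cv_max, pmad / cv, np.inf)  # PMAD/CV -> min, CV < 30%
        return np.argsort(score)

    rng = np.random.default_rng(0)                 # synthetic stand-in data
    base = rng.normal(1.0, 0.3, size=(60, 40))     # stable per-subject features
    rec1 = base + rng.normal(0, 0.05, base.shape)  # recording 1 with noise
    rec2 = base + rng.normal(0, 0.05, base.shape)  # recording 2 with noise
    idx = select_features(rec1, rec2)[:11]         # keep the 11 best features
    lda = LinearDiscriminantAnalysis().fit(
        np.vstack([rec1[:, idx], rec2[:, idx]]), np.tile(np.arange(60), 2))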
McCullough, Ian M.; Loftin, Cyndy; Sader, Steven A.
2012-01-01
Water clarity is a reliable indicator of lake productivity and an ideal metric of regional water quality. Clarity is an indicator of other water quality variables including chlorophyll-a, total phosphorus and trophic status; however, unlike these metrics, clarity can be accurately and efficiently estimated remotely on a regional scale. Remote sensing is useful in regions containing a large number of lakes that are cost prohibitive to monitor regularly using traditional field methods. Field-assessed lakes generally are easily accessible and may represent a spatially irregular, non-random sample of a region. We developed a remote monitoring program for Maine lakes >8 ha (1511 lakes) to supplement existing field monitoring programs. We combined Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) brightness values for TM bands 1 (blue) and 3 (red) to estimate water clarity (Secchi disk depth) during 1990-2010. Although similar procedures have been applied to Minnesota and Wisconsin lakes, neither state incorporates physical lake variables or watershed characteristics that potentially affect clarity into their models. Average lake depth consistently improved model fitness, and the proportion of wetland area in lake watersheds also explained variability in clarity in some cases. Nine regression models predicted water clarity (R2 = 0.69-0.90) during 1990-2010, with separate models for eastern (TM path 11; four models) and western Maine (TM path 12; five models) that captured differences in topography and landscape disturbance. The average absolute difference between model-estimated and observed Secchi depth ranged from 0.65 to 1.03 m. Eutrophic and mesotrophic lakes consistently were estimated more accurately than oligotrophic lakes. Our results show that TM bands 1 and 3 can be used to estimate regional lake water clarity outside the Great Lakes Region and that the accuracy of estimates is improved with additional model variables that reflect physical lake characteristics and watershed conditions.
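A minimal sketch of the kind of regression involved, assuming (as is typical of Landsat clarity models, not quoted from this paper) that ln(Secchi depth) is regressed on the TM1/TM3 band ratio, TM1 brightness, and average lake depth; the data below are simulated stand-ins.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    tm1 = rng.uniform(40, 90, 60)                  # synthetic band-1 brightness
    tm3 = rng.uniform(20, 60, 60)                  # synthetic band-3 brightness
    depth = rng.uniform(2, 20, 60)                 # average lake depth (m)
    ln_secchi = (1.2 * tm1 / tm3 - 0.02 * tm1 + 0.03 * depth - 1.0
                 + rng.normal(0, 0.2, 60))

    X = sm.add_constant(np.column_stack([tm1 / tm3, tm1, depth]))
    fit = sm.OLS(ln_secchi, X).fit()
    print(fit.rsquared, fit.params)                # recovers the simulated effects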
Kandel, Benjamin M; Wang, Danny J J; Gee, James C; Avants, Brian B
2014-01-01
Although much attention has recently been focused on single-subject functional networks, using methods such as resting-state functional MRI, methods for constructing single-subject structural networks are in their infancy. Single-subject cortical networks aim to describe the self-similarity across the cortical structure, possibly signifying convergent developmental pathways. Previous methods for constructing single-subject cortical networks have used patch-based correlations and distance metrics based on curvature and thickness. We present here a method for constructing similarity-based cortical structural networks that utilizes a rotation-invariant representation of structure. The resulting graph metrics are closely linked to age and indicate an increasing degree of closeness throughout development in nearly all brain regions, perhaps corresponding to a more regular structure as the brain matures. The derived graph metrics demonstrate a four-fold increase in power for detecting age as compared to cortical thickness. This proof of concept study indicates that the proposed metric may be useful in identifying biologically relevant cortical patterns.
Classification of Animal Movement Behavior through Residence in Space and Time.
Torres, Leigh G; Orben, Rachael A; Tolkova, Irina; Thompson, David R
2017-01-01
Identification and classification of behavior states in animal movement data can be complex, temporally biased, time-intensive, scale-dependent, and unstandardized across studies and taxa. Large movement datasets are increasingly common and there is a need for efficient methods of data exploration that adjust to the individual variability of each track. We present the Residence in Space and Time (RST) method to classify behavior patterns in movement data based on the concept that behavior states can be partitioned by the amount of space and time occupied in an area of constant scale. Using normalized values of Residence Time and Residence Distance within a constant search radius, RST is able to differentiate behavior patterns that are time-intensive (e.g., rest), time- and distance-intensive (e.g., area-restricted search), and transit (short time and distance). We use grey-headed albatross (Thalassarche chrysostoma) GPS tracks to demonstrate RST's ability to classify behavior patterns and adjust to the inherent scale and individuality of each track. Next, we evaluate RST's ability to discriminate between behavior states relative to other classical movement metrics. We then temporally sub-sample albatross track data to illustrate RST's response to less resolved data. Finally, we evaluate RST's performance using datasets from four taxa with diverse ecology, functional scales, ecosystems, and data types. We conclude that RST is a robust, rapid, and flexible method for detailed exploratory analysis and meta-analyses of behavioral states in animal movement data based on its ability to integrate distance and time measurements into one descriptive metric of behavior groupings. Given the increasing amount of animal movement data collected, it is timely and useful to implement a consistent metric of behavior classification to enable efficient and comparative analyses. Overall, the application of RST to objectively explore and compare behavior patterns in movement data can enhance our fine- and broad-scale understanding of animal movement ecology.
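A minimal sketch of the RST idea as I read the abstract: accumulate time and path length while the track stays within a circle of constant radius around each point, normalize, and partition. The normalization and cutoffs are illustrative, not the published defaults.

    import numpy as np

    def rst_classify(x, y, t, radius):
        x, y, t = (np.asarray(v, float) for v in (x, y, t))
        seg_d, seg_t = np.hypot(np.diff(x), np.diff(y)), np.diff(t)
        n = len(x)
        res_t, res_d = np.zeros(n), np.zeros(n)
        for i in range(n):
            inside = np.hypot(x - x[i], y - y[i]) <= radius
            seg_in = inside[:-1] & inside[1:]      # segments fully inside circle
            res_t[i], res_d[i] = seg_t[seg_in].sum(), seg_d[seg_in].sum()
        nt = res_t / (res_t.max() or 1.0)          # normalized residence time
        nd = res_d / (res_d.max() or 1.0)          # normalized residence distance
        # High time, low distance -> rest; both high -> search; both low -> transit.
        return np.where(nt + nd < 0.2, "transit",
                        np.where(nt - nd > 0.1, "rest", "search"))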
Kritikos, Nikolaos; Tsantili-Kakoulidou, Anna; Loukas, Yannis L; Dotsikas, Yannis
2015-07-17
In the current study, quantitative structure-retention relationships (QSRR) were constructed based on data obtained by an LC-(ESI)-QTOF-MS/MS method for the determination of amino acid analogues, following their derivatization via chloroformate esters. Molecules were derivatized via an n-propyl chloroformate/n-propanol mediated reaction, and the derivatives were recovered through a liquid-liquid extraction procedure. Chromatographic separation was based on gradient elution using methanol/water mixtures from a 70/30% composition to a final 85/15% one, maintaining a constant rate of change. The group of examined molecules was diverse, including mainly α-amino acids, but also β- and γ-amino acids, γ-amino acid analogues, decarboxylated and phosphorylated analogues, and dipeptides. Molecular structures were first described through the use of molecular descriptors, and the projection to latent structures (PLS) method was selected for building the QSRRs, resulting in a total of three PLS models with high cross-validated coefficients of determination Q²Y. Through stratified random sampling, 57 compounds were split into a training set and a test set. Model creation was based on multiple criteria, including principal component significance and eigenvalue, variable importance, the form of residuals, etc. Validation was based on the statistical metrics R²pred, Q²ext(F2), and Q²ext(F3) for the test set, together with Roy's metrics r²m(av) and Δr²m, assessing both predictive stability and internal validity. Based on the aforementioned models, simplified equivalents were then created using a multiple linear regression (MLR) method; the MLR models were validated with the same metrics. The suggested models are considered useful for estimating the retention times of amino acid analogues in a range of applications. Copyright © 2015 Elsevier B.V. All rights reserved.
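A minimal sketch of how a cross-validated Q²Y is computed for a PLS retention model; the component count, fold number, and simulated descriptor matrix are illustrative assumptions, not the paper's settings.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    def q2y(X, y, n_components=3, cv=7):
        # Cross-validated Q2Y = 1 - PRESS/TSS over held-out predictions.
        y = np.asarray(y, float)
        y_hat = cross_val_predict(PLSRegression(n_components), X, y, cv=cv).ravel()
        return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

    rng = np.random.default_rng(2)                 # 57 compounds, 30 descriptors
    X = rng.normal(size=(57, 30))
    y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.3, size=57)
    print(q2y(X, y))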
Characterizing CDOM Spectral Variability Across Diverse Regions and Spectral Ranges
NASA Astrophysics Data System (ADS)
Grunert, Brice K.; Mouw, Colleen B.; Ciochetto, Audrey B.
2018-01-01
Satellite remote sensing of colored dissolved organic matter (CDOM) has focused on CDOM absorption (aCDOM) at a reference wavelength, as its magnitude provides insight into the underwater light field and large-scale biogeochemical processes. The CDOM spectral slope, SCDOM, has been treated as a constant or semiconstant parameter in satellite retrievals of aCDOM despite significant regional and temporal variability. SCDOM and other optical metrics provide insight into CDOM composition, processing, food web dynamics, and carbon cycling. To date, much of this work relies on fluorescence techniques or on aCDOM in spectral ranges unavailable to current and planned satellite sensors (e.g., <300 nm). In preparation for anticipated future hyperspectral satellite missions, we take the first step here of exploring global variability in SCDOM and fitting deviations in the aCDOM spectra using the recently proposed Gaussian decomposition method. From this, we investigate whether global variability in retrieved SCDOM and Gaussian components is significant and regionally distinct. We iteratively decreased the spectral range considered and analyzed the number, location, and magnitude of fitted Gaussian components to understand whether a reduced spectral range impacts the information obtained within a common spectral window. We compared the fitted slope from the Gaussian decomposition method to absorption-based indices that indicate CDOM composition to determine the ability of satellite-derived slope to inform the analysis and modeling of large-scale biogeochemical processes. Finally, we present implications of the observed variability for remote sensing of CDOM characteristics via SCDOM.
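Before any Gaussian decomposition, SCDOM is conventionally obtained by fitting a single exponential to the absorption spectrum; a minimal sketch of that baseline fit (the 440 nm reference is a common convention assumed here, and the spectrum below is synthetic):

    import numpy as np
    from scipy.optimize import curve_fit

    def fit_scdom(wl, a_cdom, ref=440.0):
        # Standard exponential CDOM model: a(wl) = a(ref) * exp(-S (wl - ref)).
        model = lambda w, a_ref, s: a_ref * np.exp(-s * (w - ref))
        (a_ref, s), _ = curve_fit(model, wl, a_cdom, p0=[a_cdom[0], 0.018])
        return a_ref, s

    wl = np.arange(350.0, 601.0, 5.0)              # nm
    a = 0.8 * np.exp(-0.017 * (wl - 440.0))        # synthetic spectrum
    print(fit_scdom(wl, a))                        # ~ (0.8, 0.017)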
Patrick, Christopher J; Yuan, Lester L
2017-07-01
Flow alteration is widespread in streams, but current understanding of the effects of differences in flow characteristics on stream biological communities is incomplete. We tested hypotheses about the effect of variation in hydrology on stream communities by using generalized additive models to relate watershed information to the values of different flow metrics at gauged sites. Flow models accounted for 54-80% of the spatial variation in flow metric values among gauged sites. We then used these models to predict flow metrics in 842 ungauged stream sites in the mid-Atlantic United States that were sampled for fish, macroinvertebrates, and environmental covariates. Fish and macroinvertebrate assemblages were characterized in terms of a suite of metrics that quantified aspects of community composition, diversity, and functional traits that were expected to be associated with differences in flow characteristics. We related modeled flow metrics to biological metrics in a series of stressor-response models. Our analyses identified both drying and base flow instability as explaining 30-50% of the observed variability in fish and invertebrate community composition. Variations in community composition were related to variations in the prevalence of dispersal traits in invertebrates and trophic guilds in fish. The results demonstrate that we can use statistical models to predict hydrologic conditions at bioassessment sites, which, in turn, we can use to estimate relationships between flow conditions and biological characteristics. This analysis provides an approach to quantify the effects of spatial variation in flow metrics using readily available biomonitoring data. © 2017 by the Ecological Society of America.
Quantitative exposure metrics for sleep disturbance and their association with breast cancer risk.
Girschik, Jennifer; Fritschi, Lin; Erren, Thomas C; Heyworth, Jane
2013-05-01
It has been acknowledged by those in the field of sleep epidemiology that the current measures of sleep used in many epidemiological studies do not adequately capture the complexity and variability of sleep. A number of ways to improve the measurement of sleep have been proposed. This study aimed to assess the relationship between novel 'sleep disturbance' metrics, as expanded measures of sleep, and breast cancer risk. Data for this study were derived from a population-based case-control study conducted in Western Australia between 2009 and 2011. Participants completed a self-administered questionnaire that included questions about demographic, reproductive, and lifestyle factors in addition to questions on sleep. Four metrics of exposure to sleep disturbance (cumulative, average, duration, and peak) were developed. Unconditional logistic regression was used to examine the association between metrics of sleep disturbance and breast cancer risk. There was no evidence to support an association between any of the sleep disturbance metrics and breast cancer risk. Compared with the reference group of unexposed women, the fully adjusted ORs for the cumulative sleep disturbance (harm) metric were 0.90 (95% CI: 0.72-1.13) for the 1st tertile, 1.04 (95% CI: 0.84-1.29) for the 2nd, and 1.02 (95% CI: 0.82-1.27) for the 3rd. This study found no association between several metrics of sleep disturbance and risk of breast cancer. Our experience with developing metrics of sleep disturbance may be of use to others in sleep epidemiology wishing to expand their scope of sleep measurement.
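A minimal sketch of the unconditional logistic regression underlying such ORs, with exposure tertiles entered against an unexposed reference; the variable names and simulated records are illustrative, not the study data.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)                 # simulated stand-in records
    df = pd.DataFrame({
        "case": rng.integers(0, 2, 400),           # case/control indicator
        "tertile": pd.Categorical(
            rng.choice(["none", "t1", "t2", "t3"], 400),
            categories=["none", "t1", "t2", "t3"]),
        "age": rng.normal(55, 8, 400),
    })
    fit = smf.logit("case ~ C(tertile) + age", data=df).fit(disp=False)
    print(np.exp(fit.params))                      # odds ratios vs. unexposed
    print(np.exp(fit.conf_int()))                  # 95% CIs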
Role of Updraft Velocity in Temporal Variability of Global Cloud Hydrometeor Number
NASA Technical Reports Server (NTRS)
Sullivan, Sylvia C.; Lee, Dong Min; Oreopoulos, Lazaros; Nenes, Athanasios
2016-01-01
Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.
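A heavily simplified reading of the attribution metric, assuming the adjoint sensitivity is available: the share of output variance explained by one input is approximated by propagating that input's fluctuations through the sensitivity. This is my sketch of the idea, not the models' actual adjoint machinery.

    import numpy as np

    def variance_share(sensitivity, input_fluct, output_fluct):
        # Approximate fraction of temporal variance in an output (e.g.,
        # droplet number) attributable to one input (e.g., updraft
        # velocity): var(sensitivity * input fluctuations) / var(output).
        return (np.var(sensitivity * np.asarray(input_fluct, float))
                / np.var(np.asarray(output_fluct, float)))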
NASA Astrophysics Data System (ADS)
Liu, Jinliang; Qian, Hong; Jin, Yi; Wu, Chuping; Chen, Jianhua; Yu, Shuquan; Wei, Xinliang; Jin, Xiaofeng; Liu, Jiajia; Yu, Mingjian
2016-10-01
Understanding of the relative importance of dispersal limitation and environmental filtering processes in structuring the beta diversities of subtropical forests in human-disturbed landscapes is still limited. Here we used taxonomic (TBD) and phylogenetic (PBD) beta diversity indices, including terminal PBD (PBDt) and basal PBD (PBDb), to quantify taxonomic and phylogenetic turnover at different depths of evolutionary history in disturbed and undisturbed subtropical forests. Multiple linear regression models and distance-based redundancy analyses were used to disentangle the relative importance of environmental and spatial variables. Environmental variables were significantly correlated with the TBD and PBDt metrics. Temperature and precipitation were major environmental drivers of beta diversity patterns, explaining 7-27% of the variance in TBD and PBDt, whereas the spatial variables independently explained less than 1% of the variation for all forests. The relative importance of environmental and spatial variables differed between disturbed and undisturbed forests (e.g., when Bray-Curtis was used as the beta diversity metric, environmental variables had a significant effect on beta diversity for disturbed forests but no effect for undisturbed forests). We conclude that environmental filtering plays a more important role than geographical limitation and disturbance history in driving taxonomic and terminal phylogenetic beta diversity.
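A minimal sketch of the Bray-Curtis taxonomic beta diversity metric named above, computed between plots from an abundance matrix (toy data):

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def bray_curtis_matrix(abundance):
        # Pairwise Bray-Curtis dissimilarity from a plots x species matrix.
        return squareform(pdist(np.asarray(abundance, float), metric="braycurtis"))

    bc = bray_curtis_matrix([[10, 0, 3], [8, 2, 1], [0, 5, 5]])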
Real topological entropy versus metric entropy for birational measure-preserving transformations
NASA Astrophysics Data System (ADS)
Abarenkova, N.; Anglès d'Auriac, J.-Ch.; Boukraa, S.; Maillard, J.-M.
2000-10-01
We consider a family of birational measure-preserving transformations of two complex variables, depending on one parameter, for which simple rational expressions for the dynamical zeta function have been conjectured, together with an equality between the topological entropy and the logarithm of the Arnold complexity (divided by the number of iterations). Similar results have been obtained for the adaptation of these two concepts to dynamical systems of real variables, leading to the introduction of a "real topological entropy" and a "real Arnold complexity". Here we compare the Kolmogorov-Sinai metric entropy with this real Arnold complexity, or real topological entropy, for this particular example of a one-parameter family of birational transformations of two variables. More precisely, using an infinite-precision calculation, we analyze the Lyapunov characteristic exponents for various values of the parameter of the birational transformation, in order to compare these results with those for the real Arnold complexity. We find a quite surprising result: for this very birational example and, in fact, for a large set of birational measure-preserving mappings generated by involutions, the Lyapunov characteristic exponents seem to be equal to zero or, at least, extremely small, for all the orbits we have considered and for all values of the parameter. Birational measure-preserving transformations generated by involutions could thus help to better understand the difference between the topological description and the probabilistic description of discrete dynamical systems. Many birational measure-preserving transformations generated by involutions seem to provide examples of discrete dynamical systems which can be topologically chaotic while being metrically almost quasi-periodic. Heuristically, this can be understood as a consequence of the fact that their orbits seem to form some kind of "transcendental foliation" of the two-dimensional space of variables.
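A minimal sketch of the standard tangent-vector estimator for the largest Lyapunov characteristic exponent of a 2D map, illustrated here on the Henon map as a stand-in for the paper's parameter-dependent birational family:

    import numpy as np

    def largest_lyapunov(f, jac, x0, n=20000, burn=1000):
        # Iterate a tangent vector alongside the orbit, renormalizing each
        # step; the average log stretch estimates the largest exponent.
        x, v, s = np.asarray(x0, float), np.array([1.0, 0.0]), 0.0
        for i in range(n + burn):
            v = jac(x) @ v
            x = f(x)
            norm = np.linalg.norm(v)
            v /= norm
            if i >= burn:
                s += np.log(norm)
        return s / n

    f = lambda p: np.array([1 - 1.4 * p[0] ** 2 + p[1], 0.3 * p[0]])
    jac = lambda p: np.array([[-2.8 * p[0], 1.0], [0.3, 0.0]])
    print(largest_lyapunov(f, jac, [0.1, 0.1]))    # ~0.42 for the Henon map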
Devriendt, Floris; Moldovan, Darie; Verbeke, Wouter
2018-03-01
Prescriptive analytics extends predictive analytics by estimating an outcome as a function of control variables, and thus makes it possible to establish the level of the control variables required to realize a desired outcome. Uplift modeling is at the heart of prescriptive analytics and aims at estimating the net difference in an outcome resulting from a specific action or treatment that is applied. In this article, a structured and detailed literature survey on uplift modeling is provided by identifying and contrasting various groups of approaches. In addition, evaluation metrics for assessing the performance of uplift models are reviewed. An experimental evaluation on four real-world data sets provides further insight into their use. Uplift random forests are found to be consistently among the best performing techniques in terms of the Qini and Gini measures, although considerable variability in performance across the various data sets of the experiments is observed. In addition, uplift models are frequently observed to be unstable and to display strong variability in performance across different folds in the cross-validation experimental setup, which potentially threatens their actual use in business applications. Moreover, it is found that the available evaluation metrics do not provide an intuitively understandable indication of the actual use and performance of a model. Specifically, existing evaluation metrics do not facilitate a comparison of uplift models with predictive models, and they evaluate performance either at an arbitrary cutoff or over the full spectrum of potential cutoffs. In conclusion, we highlight the instability of uplift models and the need for an application-oriented approach to assess uplift models as prime topics for further research.
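A minimal sketch of one common construction of the Qini measure discussed above; variants differ in the rescaling and normalization, so treat the details as assumptions rather than the survey's canonical definition.

    import numpy as np

    def qini_coefficient(uplift_scores, treated, outcome):
        # Rank by predicted uplift, then accumulate incremental responses:
        # treated responders minus control responders rescaled to the
        # treated count; compare against a random-targeting diagonal.
        order = np.argsort(-np.asarray(uplift_scores, float))
        t = np.asarray(treated, float)[order]
        y = np.asarray(outcome, float)[order]
        n_t, n_c = np.cumsum(t), np.cumsum(1 - t)
        r_t, r_c = np.cumsum(y * t), np.cumsum(y * (1 - t))
        gain = r_t - r_c * np.where(n_c > 0, n_t / np.maximum(n_c, 1), 0.0)
        random_line = gain[-1] * np.arange(1, len(gain) + 1) / len(gain)
        return np.trapz(gain - random_line) / len(gain)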
NASA Astrophysics Data System (ADS)
Nordtvedt, Kenneth
2018-01-01
In the author's previous publications, a recursive linear algebraic method was introduced for obtaining (without gravitational radiation) the full potential expansions for the gravitational metric field components and the Lagrangian for a general N-body system. Two apparent properties of gravity, Exterior Effacement and Interior Effacement, were defined and fully enforced to obtain the recursive algebra, especially for the motion-independent potential expansions of the general N-body situation. The linear algebraic equations of this method determine the potential coefficients at any order n of the expansions in terms of the lower-order coefficients. Then, enforcing Exterior and Interior Effacement on a selected few potential series of the full motion-independent potential expansions, the complete exterior metric field for a single, spherically-symmetric mass source was obtained, producing the Schwarzschild metric field of general relativity. In this fourth paper of the series, the complete spatial metric's motion-independent potentials for N bodies are obtained by enforcing Interior Effacement and using knowledge of the Schwarzschild potentials. From the full spatial metric, the complete set of temporal metric potentials and Lagrangian potentials in the motion-independent case can then be found by transfer equations among the coefficients, κ(n,α) → λ(n,ε) → ξ(n,α), with κ(n,α), λ(n,ε), and ξ(n,α) being the numerical coefficients in the spatial metric, the Lagrangian, and the temporal metric potential expansions, respectively.
Evolution of the auditory ossicles in extant hominids: metric variation in African apes and humans.
Quam, Rolf M; Coleman, Mark N; Martínez, Ignacio
2014-08-01
The auditory ossicles in primates have proven to be a reliable source of phylogenetic information. Nevertheless, to date, very little data have been published on the metric dimensions of the ear ossicles in African apes and humans. The present study relies on the largest samples of African ape ear ossicles studied to date to address questions of taxonomic differences and the evolutionary transformation of the ossicles in gorillas, chimpanzees and humans. Both African ape taxa show a malleus that is characterized by a long and slender manubrium and relatively short corpus, whereas humans show the opposite constellation of a short and thick manubrium and relatively long corpus. These changes in the manubrium are plausibly linked with changes in the size of the tympanic membrane. The main difference between the incus in African apes and humans seems to be related to changes in the functional length. Compared with chimpanzees, human incudes are larger in nearly all dimensions, except articular facet height, and show a more open angle between the axes. The gorilla incus resembles humans more closely in its metric dimensions, including functional length, perhaps as a result of the dramatically larger body size compared with chimpanzees. The differences between the stapedes of humans and African apes are primarily size-related, with humans being larger in nearly all dimensions. Nevertheless, some distinctions between the African apes were found in the obturator foramen and head height. Although correlations between metric variables in different ossicles were generally lower than those between variables in the same bone, variables of the malleus/incus complex appear to be more strongly correlated than those of the incus/stapes complex, perhaps reflecting the different embryological and evolutionary origins of the ossicles. The middle ear lever ratio for the African apes is similar to other haplorhines, but humans show the lowest lever ratio within primates. Very low levels of sexual dimorphism were found in the ossicles within each taxon, but some relationship between body size and several dimensions of the ear bones was found. Several of the metric distinctions in the incus and stapes imply a slightly different articulation of the ossicular chain within the tympanic cavity in African apes compared with humans. The limited auditory implications of these metric differences in the ossicles are also discussed. Finally, the results of this study suggest that several plesiomorphic features for apes may be retained in the ear bones of the early hominin taxa Australopithecus and Paranthropus as well as in the Neandertals. © 2014 Anatomical Society.
Conceptual Soundness, Metric Development, Benchmarking, and Targeting for PATH Subprogram Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosey, G.; Doris, E.; Coggeshall, C.
The objective of this study is to evaluate the conceptual soundness of the U.S. Department of Housing and Urban Development (HUD) Partnership for Advancing Technology in Housing (PATH) program's revised goals and establish and apply a framework to identify and recommend metrics that are the most useful for measuring PATH's progress. This report provides an evaluative review of PATH's revised goals, outlines a structured method for identifying and selecting metrics, proposes metrics and benchmarks for a sampling of individual PATH programs, and discusses other metrics that potentially could be developed that may add value to the evaluation process. The framework and individual program metrics can be used for ongoing management improvement efforts and to inform broader program-level metrics for government reporting requirements.
Handbook of aircraft noise metrics
NASA Technical Reports Server (NTRS)
Bennett, R. L.; Pearsons, K. S.
1981-01-01
Information is presented on 22 noise metrics that are associated with the measurement and prediction of the effects of aircraft noise. Some of the instantaneous frequency weighted sound level measures, such as A-weighted sound level, are used to provide multiple assessment of the aircraft noise level. Other multiple event metrics, such as day-night average sound level, were designed to relate sound levels measured over a period of time to subjective responses in an effort to determine compatible land uses and aid in community planning. The various measures are divided into: (1) instantaneous sound level metrics; (2) duration corrected single event metrics; (3) multiple event metrics; and (4) speech communication metrics. The scope of each measure is examined in terms of its: definition, purpose, background, relationship to other measures, calculation method, example, equipment, references, and standards.
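As a worked example of one multiple-event metric from the handbook's third group, here is a minimal sketch of day-night average sound level (Ldn), which energy-averages 24 hourly levels with a 10 dB night penalty; the 22:00-07:00 night window follows the usual convention.

    import numpy as np

    def day_night_level(hourly_leq):
        # Energy-average 24 hourly Leq values (dB) with a 10 dB penalty
        # added to night hours (22:00-07:00).
        l = np.asarray(hourly_leq, float)
        penalty = np.zeros(24)
        penalty[22:] = 10.0
        penalty[:7] = 10.0
        return 10.0 * np.log10(np.mean(10.0 ** ((l + penalty) / 10.0)))

    print(day_night_level([55.0] * 24))    # constant 55 dB -> about 61.4 dB Ldn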
Alignment-free genome tree inference by learning group-specific distance metrics.
Patil, Kaustubh R; McHardy, Alice C
2013-01-01
Understanding the evolutionary relationships between organisms is vital for their in-depth study. Gene-based methods are often used to infer such relationships, but they are not without drawbacks. The ever-increasing number of available genomes now makes it possible to use genome-scale information instead, although this opportunity also presents a challenge in terms of computational efficiency. Two fundamentally different methods are often employed for sequence comparisons, namely alignment-based and alignment-free methods. Alignment-free methods rely on the genome signature concept and provide a computationally efficient approach that is also applicable to nonhomologous sequences. The genome signature contains an evolutionary signal, as it is more similar for closely related organisms than for distantly related ones. We used genome-scale sequence information to infer taxonomic distances between organisms without additional information such as gene annotations. We propose a method to improve genome tree inference by learning specific distance metrics over the genome signature for groups of organisms with similar phylogenetic, genomic, or ecological properties. Specifically, our method learns a Mahalanobis metric for a set of genomes, with a reference taxonomy to guide the learning process. By applying this method to more than a thousand prokaryotic genomes, we showed that, indeed, better distance metrics could be learned for most of the 18 groups of organisms tested here. Once a group-specific metric is available, it can be used to estimate the taxonomic distances for other sequenced organisms from the group. This study also presents a large-scale comparison of 10 methods: 9 alignment-free and 1 alignment-based.
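A minimal sketch of the two ingredients: a k-mer genome signature and a Mahalanobis distance under a group-specific matrix M. M is simply assumed given here, standing in for the metric the method learns against a reference taxonomy.

    import numpy as np
    from itertools import product

    def tetra_signature(seq):
        # Tetranucleotide frequency vector: a simple genome signature.
        kmers = ["".join(p) for p in product("ACGT", repeat=4)]
        index = {k: i for i, k in enumerate(kmers)}
        v = np.zeros(len(kmers))
        for i in range(len(seq) - 3):
            k = seq[i:i + 4]
            if k in index:
                v[index[k]] += 1.0
        return v / max(v.sum(), 1.0)

    def mahalanobis(u, v, M):
        # Distance under a learned group-specific matrix M.
        d = u - v
        return float(np.sqrt(d @ M @ d))

    s1 = tetra_signature("ACGTACGTGGCC" * 50)
    s2 = tetra_signature("AATTCCGGAATT" * 50)
    print(mahalanobis(s1, s2, np.eye(256)))        # identity M = Euclidean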
NASA Astrophysics Data System (ADS)
Betancourt, J. L.; Weltzin, J. F.
2013-12-01
As part of an effort to develop an Indicator System for the National Climate Assessment (NCA), the Seasonality and Phenology Indicators Technical Team (SPITT) proposed an integrated, continental-scale framework for understanding and tracking seasonal timing in physical and biological systems. The framework shares several metrics with the EPA's National Climate Change Indicators. The SPITT framework includes a comprehensive suite of national indicators to track conditions, anticipate vulnerabilities, and facilitate intervention or adaptation to the extent possible. Observed, modeled, and forecasted seasonal timing metrics can inform a wide spectrum of decisions on federal, state, and private lands in the U.S., and will be pivotal for international mitigation and adaptation efforts. Humans use calendars both to understand the natural world and to plan their lives. Although the seasons are familiar concepts, we lack a comprehensive understanding of how variability arises in the timing of seasonal transitions in the atmosphere, and how variability and change translate and propagate through hydrological, ecological and human systems. For example, the contributions of greenhouse warming and natural variability to secular trends in seasonal timing are difficult to disentangle, including earlier spring transitions from winter (strong westerlies) to summer (weak easterlies) patterns of atmospheric circulation; shifts in the annual phasing of daily temperature means and extremes; advanced timing of snow and ice melt and soil thaw at higher latitudes and elevations; and earlier start and longer duration of the growing and fire seasons. The SPITT framework aims to relate spatiotemporal variability in surface climate to (1) large-scale modes of natural climate variability and greenhouse gas-driven climatic change, and (2) spatiotemporal variability in hydrological, ecological and human responses and impacts. The hierarchical framework relies on ground and satellite observations, and includes metrics of surface climate seasonality, seasonality of snow and ice, land surface phenology, ecosystem disturbance seasonality, and organismal phenology. Recommended metrics met the following requirements: (a) they are easily measured by day of year, number of days, or, in the case of species migrations, the latitude of the observation on a given date; (b) they are observed or can be calculated across a high density of locations in many different regions of the U.S.; and (c) they unambiguously describe both spatial and temporal variability and trends in seasonal timing that are climatically driven. The SPITT framework is meant to provide climatic and strategic guidance for the growth and application of seasonal timing and phenological monitoring efforts. The hope is that additional national indicators based on observed phenology, or evidence-based algorithms calibrated with observational data, will evolve with sustained and broad-scale monitoring of climatically sensitive species and ecological processes.
Short, Steven M; Cogdill, Robert P; D'Amico, Frank; Drennen, James K; Anderson, Carl A
2010-12-01
The absence of a unanimous, industry-specific definition of quality is, to a certain degree, impeding ongoing efforts to "modernize" the pharmaceutical industry. This work was predicated on requests by Dr. Woodcock (FDA) to redefine pharmaceutical quality in terms of risk by linking production characteristics to clinical attributes. We developed a risk simulation platform that integrates population statistics, drug delivery system characteristics, dosing guidelines, patient compliance estimates, production metrics, and pharmacokinetic, pharmacodynamic, and in vitro-in vivo correlation models to investigate the impact of manufacturing variability on the clinical performance of a model extended-release theophylline solid oral dosage system. Manufacturing was characterized by inter- and intra-batch content uniformity and dissolution variability metrics, while clinical performance was described by a probabilistic pharmacodynamic model expressing the probability of inefficacy and toxicity as a function of plasma concentrations. Least-squares regression revealed that both patient compliance variables, percent of doses taken and dosing time variability, significantly impacted efficacy and toxicity. Additionally, intra-batch content uniformity variability elicited a significant change in risk scores for the two adverse events and was therefore identified as a critical quality attribute. The proposed methodology demonstrates that pharmaceutical quality can be recast to explicitly reflect clinical performance. © 2010 Wiley-Liss, Inc. and the American Pharmacists Association
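A heavily simplified Monte Carlo sketch of the idea, not the authors' platform: propagate content-uniformity and compliance variability through a one-compartment steady-state pharmacokinetic model to event probabilities. Every parameter value below (clearance, compliance rate, the theophylline-like 10-20 mg/L window) is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients = 10_000

label_dose_mg = 300.0
content_cv = 0.06                       # hypothetical intra-batch content CV
doses = rng.normal(label_dose_mg, content_cv * label_dose_mg, n_patients)
taken = rng.random(n_patients) < 0.85   # compliance: fraction of doses taken
doses *= taken

# Steady-state average concentration: C = F * dose / (CL * tau).
F, cl_l_per_h, tau_h = 0.96, 2.0, 12.0
conc = F * doses / (cl_l_per_h * tau_h)

# Illustrative therapeutic window: below 10 mg/L ineffective, above 20 toxic.
p_ineffective = np.mean(conc < 10.0)
p_toxic = np.mean(conc > 20.0)
print(f"P(inefficacy) ~ {p_ineffective:.3f}, P(toxicity) ~ {p_toxic:.3f}")
```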
Biostatistics Series Module 10: Brief Overview of Multivariate Methods.
Hazra, Avijit; Gogtay, Nithya
2017-01-01
Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation, with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, which make no such distinction but treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count-type data and can be used to analyze cross-tabulations in which more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA) in which an additional independent variable of interest, the covariate, is brought into the analysis. It examines whether a difference persists after "controlling" for the effect of the covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied in psychometrics, social sciences, and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract, from a larger number of metric variables, a smaller number of composite factors or components that are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups. The calculation-intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with the wider availability and increasing sophistication of statistical software, and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.
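A brief illustration of one interdependence technique from this overview: principal component analysis reducing six correlated metric variables to two components. This sketch assumes scikit-learn is available; the data are synthetic, generated from two underlying factors so that two components capture most of the variance.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 2))      # two underlying factors
loadings = rng.normal(size=(2, 6))      # six observed metric variables
X = latent @ loadings + 0.3 * rng.normal(size=(200, 6))

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print(pca.explained_variance_ratio_)    # most variance lands in two components
```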
Watkins, Greg D; Swanson, Brett A; Suaning, Gregg J
2018-02-22
Cochlear implant (CI) sound processing strategies are usually evaluated in clinical studies involving experienced implant recipients. Metrics that estimate the capacity to perceive speech for a given set of audio and processing conditions provide an alternative means to assess the effectiveness of processing strategies. The aim of this research was to assess the ability of the output signal to noise ratio (OSNR) to accurately predict speech perception. It was hypothesized that, compared with the other metrics evaluated in this study, (1) OSNR would have equivalent or better accuracy and (2) OSNR would be the most accurate in the presence of variable speech presentation levels. For the first time, the accuracy of OSNR as a predictor of speech intelligibility was compared, in a retrospective study, with that of the input signal to noise ratio (ISNR) and the short-term objective intelligibility (STOI) metric. Because STOI measures audio quality at the input to a CI sound processor, a vocoder was applied to the sound processor output and STOI was also calculated for the reconstructed audio signal (vocoder short-term objective intelligibility [VSTOI] metric). The figures of merit calculated for each metric were the Pearson correlation between the metric and a psychometric function fitted to the sentence scores at each predictor value (Pearson sigmoidal correlation [PSIG]), the epsilon-insensitive root mean square error (RMSE*) between the psychometric function and the sentence scores, and the statistical deviance (D) of the fitted curve relative to the sentence scores. Sentence scores were taken from three existing data sets of Australian Sentence Tests in Noise (AuSTIN) results. The tests were conducted with experienced users of the Nucleus CI system. The score for each sentence was the proportion of morphemes the participant correctly repeated. In data set 1, all sentences were presented at 65 dB sound pressure level (SPL) in the presence of four-talker babble noise. Each block of sentences used an adaptive procedure, with the speech presented at a fixed level and the ISNR varied. In data set 2, sentences were presented at 65 dB SPL in the presence of stationary speech-weighted noise, street-side city noise, and cocktail party noise; an adaptive ISNR procedure was used. In data set 3, sentences were presented at levels ranging from 55 to 89 dB SPL with two automatic gain control configurations and two fixed ISNRs. For data set 1, ISNR and OSNR were equally the most accurate; STOI was significantly different for deviance (p = 0.045) and RMSE* (p < 0.001), and VSTOI was significantly different for RMSE* (p < 0.001). For data set 2, ISNR and OSNR had equivalent accuracy, which was significantly better than that of STOI for PSIG (p = 0.029) and of VSTOI for deviance (p = 0.001), RMSE*, and PSIG (both p < 0.001). For data set 3, OSNR was the most accurate metric and was significantly more accurate than VSTOI for deviance, RMSE*, and PSIG (all p < 0.001); ISNR and STOI were unable to predict the sentence scores for this data set. The study results supported the hypotheses. OSNR was found to have accuracy equivalent to or better than ISNR, STOI, and VSTOI for tests conducted at a fixed presentation level with variable ISNR, and was a more accurate metric than VSTOI for tests with fixed ISNRs and variable presentation levels. Overall, OSNR was the most accurate metric across the three data sets.
OSNR holds promise as a prediction metric that could improve the effectiveness of sound processor research and CI fitting.
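A sketch of the evaluation idea described above: fit a logistic psychometric function to sentence scores against a predictor (e.g., OSNR), then compute the Pearson correlation between the fit and the scores alongside a plain RMSE. This is not the study's exact procedure (the epsilon-insensitive band of RMSE* is omitted), and all data below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def psychometric(x, midpoint, slope):
    """Logistic psychometric function mapping predictor to proportion correct."""
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

rng = np.random.default_rng(2)
osnr_db = np.linspace(-10, 15, 60)
scores = psychometric(osnr_db, 2.0, 0.5) + rng.normal(0, 0.05, osnr_db.size)
scores = np.clip(scores, 0.0, 1.0)

params, _ = curve_fit(psychometric, osnr_db, scores, p0=[0.0, 1.0])
fitted = psychometric(osnr_db, *params)
psig, _ = pearsonr(fitted, scores)
rmse = float(np.sqrt(np.mean((fitted - scores) ** 2)))
print(f"PSIG ~ {psig:.3f}, RMSE ~ {rmse:.3f}")
```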
Feng, Zhaozhong; Calatayud, Vicent; Zhu, Jianguo; Kobayashi, Kazuhiko
2018-04-01
Five winter wheat cultivars were exposed to ambient (A-O3) and elevated (E-O3, 1.5 × ambient) ozone (O3) in a fully open-air fumigation system in China. Ozone exposure- and flux-based response relationships were established for seven physiological variables related to photosynthesis. The fit of the regressions in terms of R2 improved when second-order rather than first-order regressions were used, suggesting that the effects of O3 were more pronounced towards the last developmental stages of the wheat. The most robust indicators were those related to CO2 assimilation, Rubisco activity, and RuBP regeneration capacity (Asat, Jmax, and Vcmax), and chlorophyll content (Chl). Flux-based metrics (PODy, Phytotoxic O3 Dose over a threshold of y nmol O3 m-2 s-1) predicted the responses to O3 slightly better than exposure-based metrics (AOTX, Accumulated O3 exposure over an hourly Threshold of X ppb) for most of the variables. The best performance was observed for POD1 (Asat, Jmax, and Vcmax) and POD3 (Chl). For this crop, the proposed response functions could be used for O3 risk assessment based on physiological effects, and also to include the influence of O3 on yield or other variables in models with a photosynthetic component. Copyright © 2017 Elsevier B.V. All rights reserved.
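A sketch of the two metric families compared in this study, computed from hourly data: AOTX accumulates hourly exposure above a threshold of X ppb during daylight hours, while PODy integrates stomatal flux above a threshold of y nmol O3 m-2 s-1. The inputs (and the crude flux proxy) are synthetic placeholders, not the study's measurements.

```python
import numpy as np

def aotx(o3_ppb: np.ndarray, daylight: np.ndarray, x_ppb: float = 40.0) -> float:
    """AOTX in ppb·h: sum of hourly exceedances over X ppb during daylight."""
    excess = np.clip(o3_ppb - x_ppb, 0.0, None)
    return float(np.sum(excess * daylight))

def pod_y(flux_nmol_m2_s: np.ndarray, y: float = 1.0) -> float:
    """PODy in mmol m-2: hourly stomatal flux above y, integrated over time."""
    excess = np.clip(flux_nmol_m2_s - y, 0.0, None)
    return float(np.sum(excess) * 3600.0 * 1e-6)   # nmol s-1 over 1 h -> mmol

hours = 24 * 30
o3 = 50.0 + 20.0 * np.sin(np.arange(hours) * 2 * np.pi / 24)
day = (np.arange(hours) % 24 >= 6) & (np.arange(hours) % 24 < 20)
flux = np.clip(o3 * 0.15 * day, 0.0, None)          # crude daytime flux proxy
print(aotx(o3, day), pod_y(flux, y=1.0))
```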
Benchmarking heart rate variability toolboxes.
Vest, Adriana N; Li, Qiao; Liu, Chengyu; Nemati, Shamim; Shah, Amit; Clifford, Gari D
Heart rate variability (HRV) metrics hold promise as potential indicators of autonomic function, prediction of adverse cardiovascular outcomes, psychophysiological status, and general wellness. Although HRV has been investigated for several decades, the methods used for preprocessing, windowing, and choosing appropriate parameters lack consensus among academic and clinical investigators. A comprehensive, open-source, modular program for calculating HRV, implemented in Matlab with evidence-based algorithms and output formats, is presented. We compare our software with another widely used HRV toolbox written in C and available through PhysioNet.org. Our findings show substantially similar results when using high-quality electrocardiograms (ECGs) free from arrhythmias. Our software performs on par with an established predecessor and additionally includes validated tools for preprocessing, signal quality assessment, and arrhythmia detection, helping to provide standardization and repeatability in the field and leading to fewer errors in the presence of noise or arrhythmias. Copyright © 2017 Elsevier Inc. All rights reserved.
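A minimal sketch of two standard time-domain HRV metrics over a normal-to-normal (NN) interval series; this is generic textbook material rather than the toolbox's code, and real toolboxes apply preprocessing, signal-quality checks, and arrhythmia handling first. The RR series is synthetic.

```python
import numpy as np

def sdnn(rr_ms: np.ndarray) -> float:
    """Standard deviation of NN intervals (ms)."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms: np.ndarray) -> float:
    """Root mean square of successive NN-interval differences (ms)."""
    return float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))

rng = np.random.default_rng(3)
rr = 800.0 + rng.normal(0.0, 40.0, 300)   # synthetic RR series in ms
print(f"SDNN = {sdnn(rr):.1f} ms, RMSSD = {rmssd(rr):.1f} ms")
```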
Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and the limit of detection. We introduce a new framework to compare the performance ...
Echo characteristics of two salmon species
NASA Astrophysics Data System (ADS)
Nealson, Patrick A.; Horne, John K.; Burwen, Debby L.
2005-04-01
The Alaska Department of Fish and Game relies on split-beam hydroacoustic techniques to estimate Chinook salmon (Oncorhynchus tshawytscha) returns to the Kenai River. Chinook counts are periodically confounded by large numbers of smaller sockeye salmon (O. nerka). Echo target strength has been used to distinguish fish length classes, but was too variable to separate Kenai River Chinook and sockeye distributions. To evaluate the efficacy of alternative echo metrics, controlled acoustic measurements of tethered Chinook and sockeye salmon were collected at 200 kHz. Echo returns were digitally sampled at 48 kHz. A suite of descriptive metrics was collected from a series of 1,000 echoes per fish. Measurements of echo width were least variable at the -3 dB power point. Initial results show that echo elongation and ping-to-ping variability in echo envelope width were significantly greater for Chinook than for sockeye salmon. Chinook were also observed to return multiple discrete peaks from a single broadcast pulse. These characteristics were attributed to the physical width of Chinook exceeding half of the broadcast pulse width at certain orientations. Echo phase variability, correlation coefficient, and fractal dimension distributions did not demonstrate significant discriminatory power between the two species. [Work supported by ADF&G and ONR.]
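An illustrative sketch of one echo metric described above: the echo-envelope width measured at the -3 dB power point. The Gaussian-shaped envelope below is synthetic; field processing would derive the envelope from the digitized echo return (sampled at 48 kHz, as in the study).

```python
import numpy as np

def echo_width_db(envelope: np.ndarray, fs_hz: float, db_down: float = 3.0) -> float:
    """Width (ms) of the echo envelope at db_down dB below its peak power."""
    power_db = 20.0 * np.log10(np.maximum(envelope, 1e-12))
    above = np.nonzero(power_db >= power_db.max() - db_down)[0]
    return (above[-1] - above[0]) / fs_hz * 1e3

fs = 48_000.0                                      # echo sample rate, Hz
t = np.arange(0, 0.004, 1.0 / fs)
envelope = np.exp(-((t - 0.002) / 0.0004) ** 2)    # synthetic Gaussian echo
print(f"-3 dB width: {echo_width_db(envelope, fs):.3f} ms")
```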
Author Impact Metrics in Communication Sciences and Disorder Research
ERIC Educational Resources Information Center
Stuart, Andrew; Faucette, Sarah P.; Thomas, William Joseph
2017-01-01
Purpose: The purpose was to examine author-level impact metrics for faculty in the communication sciences and disorder research field across a variety of databases. Method: Author-level impact metrics were collected for faculty from 257 accredited universities in the United States and Canada. Three databases (i.e., Google Scholar, ResearchGate,…
Deal or No Deal? Evaluating Big Deals and Their Journals
ERIC Educational Resources Information Center
Blecic, Deborah D.; Wiberley, Stephen E., Jr.; Fiscella, Joan B.; Bahnmaier-Blaszczak, Sara; Lowery, Rebecca
2013-01-01
This paper presents methods to develop metrics that compare Big Deal journal packages and the journals within those packages. Deal-level metrics guide selection of a Big Deal for termination. Journal-level metrics guide selection of individual subscriptions from journals previously provided by a terminated deal. The paper argues that, while the…