Science.gov

Sample records for adaptive metric knn

  1. Classification in medical images using adaptive metric k-NN

    NASA Astrophysics Data System (ADS)

    Chen, C.; Chernoff, K.; Karemore, G.; Lo, P.; Nielsen, M.; Lauze, F.

    2010-03-01

The performance of the k-nearest neighbors (k-NN) classifier is highly dependent on the distance metric used to identify the k nearest neighbors of the query points. The standard Euclidean distance is commonly used in practice. This paper investigates the performance of the k-NN classifier with respect to different adaptive metrics in the context of medical imaging. We propose using adaptive metrics so that the structure of the data is better described, introducing some unsupervised learning knowledge into k-NN. Four different metrics are estimated: a theoretical metric based on the assumption that images are drawn from the Brownian Image Model (BIM); a normalized metric based on the variance of the data; an empirical metric based on the empirical covariance matrix of the unlabeled data; and an optimized metric obtained by minimizing the classification error. Principal Component Analysis (PCA) performed on the spectral structure of the empirical covariance additionally yields subspace metrics. The metrics are evaluated on two data sets: lateral X-rays of the lumbar aortic/spine region, where we use k-NN to detect abdominal aortic calcification; and mammograms, where we use k-NN for breast cancer risk assessment. The results show that an appropriate choice of metric can improve classification.
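The empirical-covariance variant described above amounts to k-NN classification under a Mahalanobis metric built from the unlabeled data. The following is a minimal sketch of that idea, not the authors' implementation; all function and variable names are invented for the example.

```python
import numpy as np

def mahalanobis_knn_predict(X_train, y_train, X_test, k=3, cov=None):
    """k-NN classification under the Mahalanobis metric
    d(x, y)^2 = (x - y)^T C^{-1} (x - y), with C the empirical covariance."""
    if cov is None:
        cov = np.cov(X_train, rowvar=False)      # empirical covariance of the data
    C_inv = np.linalg.pinv(cov)                  # pseudo-inverse handles rank deficiency
    preds = []
    for x in X_test:
        diff = X_train - x
        d2 = np.einsum('ij,jk,ik->i', diff, C_inv, diff)  # squared Mahalanobis distances
        nn = np.argsort(d2)[:k]                  # indices of the k nearest neighbors
        labels, counts = np.unique(y_train[nn], return_counts=True)
        preds.append(labels[np.argmax(counts)])  # majority vote
    return np.array(preds)
```

Passing `cov=np.eye(d)` recovers plain Euclidean k-NN, which makes the role the metric plays explicit.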

  2. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm for modern Graphics Processing Units (GPUs). The presented algorithm improves our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. AIDW needs to find several nearest neighboring data points for each interpolated point in order to adaptively determine the power parameter; the prediction value at the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even (uniform) grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of two stages: kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least twice as fast as our previous GPU-accelerated AIDW algorithm; and (3) the use of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm. PMID:27610308
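The two stages (kNN search, then power selection and weighted interpolation) can be sketched serially in a few lines. The mapping from mean neighbor distance to the power parameter below is a simplified stand-in for the paper's adaptive rule, and all names are illustrative.

```python
import numpy as np

def adaptive_idw(data_xy, data_z, query_xy, k=4, alpha_range=(1.0, 3.0)):
    """Adaptive IDW sketch: the mean kNN distance at each query point sets the
    power parameter alpha; denser neighborhoods get a smaller power."""
    z_out = np.empty(len(query_xy))
    # pairwise distances from every query point to every data point
    all_d = np.linalg.norm(query_xy[:, None, :] - data_xy[None, :, :], axis=2)
    knn_d = np.sort(all_d, axis=1)[:, :k]        # k nearest distances per query
    mean_knn = knn_d.mean(axis=1)
    # normalize to [0, 1]: 0 = densest neighborhood, 1 = sparsest (illustrative rule)
    t = (mean_knn - mean_knn.min()) / (np.ptp(mean_knn) + 1e-12)
    alpha = alpha_range[0] + t * (alpha_range[1] - alpha_range[0])
    for i, (d_row, a) in enumerate(zip(all_d, alpha)):
        if d_row.min() < 1e-12:                  # query coincides with a data point
            z_out[i] = data_z[np.argmin(d_row)]
            continue
        w = d_row ** (-a)                        # inverse-distance weights, power alpha
        z_out[i] = np.dot(w, data_z) / w.sum()
    return z_out
```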

  3. Adaptive Metric Learning for Saliency Detection.

    PubMed

    Li, Shuang; Lu, Huchuan; Lin, Zhe; Shen, Xiaohui; Price, Brian

    2015-11-01

In this paper, we propose a novel adaptive metric learning (AML) algorithm for visual saliency detection. A key observation is that the saliency of a superpixel can be estimated by its distance from the most certain foreground and background seeds. Instead of measuring distance in Euclidean space, we present a learning method based on two complementary Mahalanobis distance metrics: 1) generic metric learning (GML) and 2) specific metric learning (SML). GML targets the global distribution of the whole training set, while SML considers the specific structure of a single image. Since multiple similarity measures from different views may enhance the relevant information and suppress the irrelevant, we fuse GML and SML together and find experimentally that the combination works well. Unlike most existing methods, which are based directly on low-level features, we devise a superpixelwise Fisher vector coding approach to better distinguish salient objects from the background. We also propose an accurate seed selection mechanism and exploit contextual and multiscale information when constructing the final saliency map. Experimental results on various image sets show that the proposed AML performs favorably against state-of-the-art methods. PMID:26054067
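Fusing two Mahalanobis metrics can be as simple as a convex combination of the two distances. The hedged sketch below only illustrates that fusion step; `M_generic` and `M_specific` stand in for the learned GML and SML matrices, and the combination rule is one plausible choice rather than the paper's exact scheme.

```python
import numpy as np

def fused_mahalanobis(x, y, M_generic, M_specific, w=0.5):
    """Convex combination of two Mahalanobis distances d_M(x, y) = sqrt((x-y)^T M (x-y)).
    w = 1 uses only the generic metric, w = 0 only the image-specific one."""
    diff = x - y
    d_g = np.sqrt(diff @ M_generic @ diff)   # generic (training-set-wide) distance
    d_s = np.sqrt(diff @ M_specific @ diff)  # image-specific distance
    return w * d_g + (1 - w) * d_s
```

With both matrices set to the identity this reduces to the plain Euclidean distance, which is the baseline the paper moves away from.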

  4. An Adaptable Metric Shapes Perceptual Space.

    PubMed

    Hisakata, Rumi; Nishida, Shin'ya; Johnston, Alan

    2016-07-25

How do we derive a sense of the separation of points in the world within a space-variant visual system? Visual directions are thought to be coded directly by a process referred to as local sign, in which a neuron acts as a labeled line for the perceived direction associated with its activation [1, 2]. The separations of visual directions, however, are not given, nor are they directly related to the separations of signals on the receptive surface or in the brain, which are modified by retinal and cortical magnification, respectively [3]. To represent the separation of directions veridically, the corresponding neural signals need to be scaled in some way. We considered that this scaling process may be influenced by adaptation. Here, we describe a novel adaptation paradigm, which can alter both apparent spatial separation and size. We measured the perceived separation of two dots and the size of geometric figures after adaptation to random dot patterns. We show that adapting to high-density texture not only increases the apparent sparseness (average element separation) of a lower-density pattern, as expected [4], but paradoxically, it reduces the apparent separation of dot pairs and induces apparent shrinkage of geometric form. This demonstrates for the first time a contrary linkage between perceived density and perceived extent. Separation and size appear to be expressed relative to a variable spatial metric whose properties, while not directly observable, are revealed by reductions in both apparent size and texture density. PMID:27426520

  5. Stability and Performance Metrics for Adaptive Flight Control

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmanje; Nguyen, Nhan; VanEykeren, Luarens

    2009-01-01

This paper addresses the problem of verifying adaptive control techniques for enabling safe flight in the presence of adverse conditions. Since adaptive systems are non-linear by design, the existing control verification metrics are not applicable to adaptive controllers. Moreover, these systems are in general highly uncertain. Hence, the system's characteristics cannot be evaluated by relying on the available dynamical models. This necessitates the development of control verification metrics based on the system's input-output information. From this point of view, a set of metrics is introduced that compares the uncertain aircraft's input-output behavior under the action of an adaptive controller to that of a closed-loop linear reference model to be followed by the aircraft. This reference model is constructed for each specific maneuver using the exact aerodynamic and mass properties of the aircraft to meet the stability and performance requirements commonly accepted in flight control. The proposed metrics are unified in the sense that they are model independent and not restricted to any specific adaptive control methods. As an example, we present simulation results for a wing-damaged generic transport aircraft with several existing adaptive controllers.
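At their simplest, input-output verification metrics of this kind reduce to norms of the deviation between the recorded closed-loop response and the reference-model response. The sketch below is purely illustrative (the paper's actual metrics are more elaborate), with invented names and a plain discrete L2 approximation.

```python
import numpy as np

def tracking_error_metrics(y_actual, y_ref, dt):
    """Two simple input-output metrics over a maneuver: the integrated squared
    tracking error (discrete L2 norm) and the worst-case deviation from the
    reference-model response. Both need only recorded signals, not a model."""
    e = np.asarray(y_actual) - np.asarray(y_ref)
    l2 = np.sqrt(np.sum(e ** 2) * dt)   # discrete approximation of the L2 norm
    peak = np.abs(e).max()              # peak deviation over the maneuver
    return l2, peak
```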

  6. Improved Segmentation of White Matter Tracts with Adaptive Riemannian Metrics

    PubMed Central

    Hao, Xiang; Zygmunt, Kristen; Whitaker, Ross T.; Fletcher, P. Thomas

    2014-01-01

    We present a novel geodesic approach to segmentation of white matter tracts from diffusion tensor imaging (DTI). Compared to deterministic and stochastic tractography, geodesic approaches treat the geometry of the brain white matter as a manifold, often using the inverse tensor field as a Riemannian metric. The white matter pathways are then inferred from the resulting geodesics, which have the desirable property that they tend to follow the main eigenvectors of the tensors, yet still have the flexibility to deviate from these directions when it results in lower costs. While this makes such methods more robust to noise, the choice of Riemannian metric in these methods is ad hoc. A serious drawback of current geodesic methods is that geodesics tend to deviate from the major eigenvectors in high-curvature areas in order to achieve the shortest path. In this paper we propose a method for learning an adaptive Riemannian metric from the DTI data, where the resulting geodesics more closely follow the principal eigenvector of the diffusion tensors even in high-curvature regions. We also develop a way to automatically segment the white matter tracts based on the computed geodesics. We show the robustness of our method on simulated data with different noise levels. We also compare our method with tractography methods and geodesic approaches using other Riemannian metrics and demonstrate that the proposed method results in improved geodesics and segmentations using both synthetic and real DTI data. PMID:24211814

  7. Distributed Computation of the knn Graph for Large High-Dimensional Point Sets.

    PubMed

    Plaku, Erion; Kavraki, Lydia E

    2007-03-01

    High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors. PMID:19847318
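On a single machine, the kNN graph is simply each point connected to its k closest points under an arbitrary metric; the distributed framework partitions this pairwise computation across processors. A brute-force single-machine sketch (illustrative names, not the authors' code):

```python
import numpy as np

def knn_graph(points, k, metric=None):
    """Build a kNN graph as an adjacency list: index -> its k closest point indices.
    Any distance function can be supplied; Euclidean is the default."""
    if metric is None:
        metric = lambda a, b: np.linalg.norm(a - b)
    n = len(points)
    graph = {}
    for i in range(n):
        d = np.array([metric(points[i], points[j]) for j in range(n)])
        d[i] = np.inf                        # a point is not its own neighbor
        graph[i] = list(np.argsort(d)[:k])
    return graph
```

The O(n^2) distance loop is exactly the work that exceeds a single machine for large n, which is what motivates distributing it.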

  8. Distributed Computation of the knn Graph for Large High-Dimensional Point Sets

    PubMed Central

    Plaku, Erion; Kavraki, Lydia E.

    2009-01-01

    High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors. PMID:19847318

  9. Flight Validation of a Metrics Driven L(sub 1) Adaptive Control

    NASA Technical Reports Server (NTRS)

    Dobrokhodov, Vladimir; Kitsios, Ioannis; Kaminer, Isaac; Jones, Kevin D.; Xargay, Enric; Hovakimyan, Naira; Cao, Chengyu; Lizarraga, Mariano I.; Gregory, Irene M.

    2008-01-01

The paper addresses the initial steps involved in the development and flight implementation of a new metrics-driven L1 adaptive flight control system. The work concentrates on (i) the definition of appropriate control-driven metrics that account for control surface failures; (ii) tailoring the recently developed L1 adaptive controller to the design of adaptive flight control systems that explicitly address these metrics in the presence of control surface failures and dynamic changes under adverse flight conditions; (iii) the development of a flight control system for implementation of the resulting algorithms onboard a small UAV; and (iv) a comprehensive flight test program that demonstrates the performance of the developed adaptive control algorithms in the presence of failures. As the initial milestone, the paper concentrates on the adaptive flight system setup and initial efforts addressing the ability of a commercial off-the-shelf autopilot, with and without adaptive augmentation, to recover from control surface failures.

  10. A low-cost compact metric adaptive optics system

    NASA Astrophysics Data System (ADS)

    Mansell, Justin D.; Henderson, Brian; Wiesner, Brennen; Praus, Robert; Coy, Steve

    2007-09-01

    The application of adaptive optics has been hindered by the cost, size, and complexity of the systems. We describe here progress we have made toward creating low-cost compact turn-key adaptive optics systems. We describe our new low-cost deformable mirror technology developed using polymer membranes, the associated USB interface drive electronics, and different ways that this technology can be configured into a low-cost compact adaptive optics system. We also present results of a parametric study of the stochastic parallel gradient descent (SPGD) control algorithm.
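SPGD, the control algorithm studied parametrically here, perturbs all actuator channels simultaneously with random plus/minus steps and uses the measured change in the performance metric as a gradient estimate. A minimal sketch follows; the gain and perturbation amplitude are exactly the kind of parameters such a parametric study sweeps, and the metric function is a placeholder for a real wavefront-quality measurement.

```python
import numpy as np

def spgd_maximize(metric, u0, gain=0.5, perturb=0.1, iters=200, rng=None):
    """Stochastic parallel gradient descent (ascent on the metric): apply a random
    Bernoulli perturbation to all actuators at once, measure the metric change
    with a two-sided probe, and step along du scaled by that change."""
    rng = np.random.default_rng(rng)
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(iters):
        du = perturb * rng.choice([-1.0, 1.0], size=u.shape)  # parallel perturbation
        dJ = metric(u + du) - metric(u - du)                  # two-sided measurement
        u += gain * dJ * du                                   # gradient estimate: dJ * du
    return u
```

Because every channel is perturbed in parallel, only two metric measurements are needed per iteration regardless of the number of actuators, which is why SPGD suits deformable mirrors with many channels.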

  11. Stability Metrics for Simulation and Flight-Software Assessment and Monitoring of Adaptive Control Assist Compensators

    NASA Technical Reports Server (NTRS)

    Hodel, A. S.; Whorton, Mark; Zhu, J. Jim

    2008-01-01

    Due to a need for improved reliability and performance in aerospace systems, there is increased interest in the use of adaptive control or other nonlinear, time-varying control designs in aerospace vehicles. While such techniques are built on Lyapunov stability theory, they lack an accompanying set of metrics for the assessment of stability margins such as the classical gain and phase margins used in linear time-invariant systems. Such metrics must both be physically meaningful and permit the user to draw conclusions in a straightforward fashion. We present in this paper a roadmap to the development of metrics appropriate to nonlinear, time-varying systems. We also present two case studies in which frozen-time gain and phase margins incorrectly predict stability or instability. We then present a multi-resolution analysis approach that permits on-line real-time stability assessment of nonlinear systems.

  12. Closed-loop adaptive optics using a CMOS image quality metric sensor

    NASA Astrophysics Data System (ADS)

    Ting, Chueh; Rayankula, Aditya; Giles, Michael K.; Furth, Paul M.

    2006-08-01

When compared to a Shack-Hartmann sensor, a CMOS image sharpness sensor has the advantage of reduced complexity in a closed-loop adaptive optics system. It also has the potential to be implemented as a smart sensor using VLSI technology. In this paper, we present a novel adaptive optics testbed that uses a CMOS sharpness imager built in the New Mexico State University (NMSU) Electro-Optics Research Laboratory (EORL). The adaptive optics testbed, which includes a CMOS image quality metric sensor and a 37-channel deformable mirror, has the capability to rapidly compensate for higher-order phase aberrations. An experimental performance comparison of the pinhole image-sharpness feedback method and the CMOS imager is presented. The experimental data show that the CMOS sharpness imager works well in a closed-loop adaptive optics system. Its overall performance is better than that of the pinhole method, and it has a fast response time.
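An image-sharpness metric of the kind such a sensor optimizes can be as simple as the sum of squared pixel intensities, which, for a fixed total flux, is maximized by the unaberrated (most concentrated) image. This is a generic sketch of that classic criterion, not the NMSU sensor's actual circuit-level computation.

```python
import numpy as np

def sharpness_metric(image):
    """Sum-of-squared-intensities sharpness: for fixed total flux, spreading
    light out (blurring) lowers the sum, so larger values mean a sharper image."""
    img = np.asarray(image, dtype=float)
    return np.sum(img ** 2)
```

A closed-loop controller (e.g. SPGD) then drives the deformable mirror to maximize this scalar, with no wavefront reconstruction step.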

  13. Quality metric in matched Laplacian of Gaussian response domain for blind adaptive optics image deconvolution

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Yang, Yikang; Xu, Rong; Liu, Changhai; Li, Jisheng

    2016-04-01

Adaptive optics (AO) in conjunction with subsequent postprocessing techniques has markedly improved the resolution of turbulence-degraded images in ground-based astronomical observation and in the detection and identification of artificial space objects. However, important tasks in AO image postprocessing, such as frame selection, stopping iterative deconvolution, and algorithm comparison, commonly need manual intervention and cannot be performed automatically due to a lack of widely agreed-upon image quality metrics. In this work, based on the Laplacian of Gaussian (LoG) local contrast feature detection operator, we propose a LoG-domain matching operation to obtain effective and universal image quality statistics. Further, we extract two no-reference quality assessment indices in the matched LoG domain that can be used for a variety of postprocessing tasks. Three typical space object images with distinct structural features are tested to verify the consistency of the proposed metric with perceptual image quality through subjective evaluation.

  14. Complexity and Pilot Workload Metrics for the Evaluation of Adaptive Flight Controls on a Full Scale Piloted Aircraft

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Schaefer, Jacob; Burken, John J.; Larson, David; Johnson, Marcus

    2014-01-01

Flight research has shown the effectiveness of adaptive flight controls for improving aircraft safety and performance in the presence of uncertainties. The National Aeronautics and Space Administration's (NASA) Integrated Resilient Aircraft Control (IRAC) project designed and conducted a series of flight experiments to study the impact of variations in adaptive controller design complexity on performance and handling qualities. A novel complexity metric was devised to compare the degrees of simplicity achieved in three variations of a model reference adaptive controller (MRAC) for NASA's F-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Full-Scale Advanced Systems Testbed (Gen-2A) aircraft. The complexity measures of these controllers are also compared to that of an earlier MRAC design for NASA's Intelligent Flight Control System (IFCS) project, flown on a highly modified F-15 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois). Pilot comments during the IRAC research flights pointed to the importance of workload on handling qualities ratings for failure and damage scenarios. Modifications to existing pilot aggressiveness and duty cycle metrics are presented and applied to the IRAC controllers. Finally, while adaptive controllers may alleviate the effects of failures or damage on an aircraft's handling qualities, they also have the potential to introduce annoying changes to the flight dynamics or to the operation of aircraft systems. A nuisance rating scale is presented for the categorization of nuisance side-effects of adaptive controllers.

  15. Preserved Network Metrics across Translated Texts

    NASA Astrophysics Data System (ADS)

    Cabatbat, Josephine Jill T.; Monsanto, Jica P.; Tapang, Giovanni A.

    2014-09-01

Co-occurrence language networks based on Bible translations and the Universal Declaration of Human Rights (UDHR) translations in different languages were constructed and compared with random text networks. Among the considered network metrics, the network size, N, the normalized betweenness centrality (BC), and the average degree of the k-nearest neighbors, knn, were found to be the most preserved across translations. Moreover, similar frequency distributions of co-occurring network motifs were observed for the translated text networks.
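For reference, the average nearest-neighbor degree knn of a node is the mean degree of the nodes it connects to. A small sketch on an adjacency-set representation (illustrative, not the authors' pipeline):

```python
def average_knn_degree(adjacency):
    """Average nearest-neighbor degree k_nn per node, for an undirected graph
    given as a dict mapping each node to the set of its neighbors."""
    deg = {v: len(nbrs) for v, nbrs in adjacency.items()}
    knn = {}
    for v, nbrs in adjacency.items():
        knn[v] = sum(deg[u] for u in nbrs) / len(nbrs) if nbrs else 0.0
    return knn
```

Comparing the distribution of these per-node values across translations is one way such a metric can be checked for preservation.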

  16. Metric Education Evaluation Package.

    ERIC Educational Resources Information Center

    Kansky, Bob; And Others

    This document was developed out of a need for a complete, carefully designed set of evaluation instruments and procedures that might be applied in metric inservice programs across the nation. Components of this package were prepared in such a way as to permit local adaptation to the evaluation of a broad spectrum of metric education activities.…

  17. Metric Madness

    ERIC Educational Resources Information Center

    Kroon, Cindy D.

    2007-01-01

    Created for a Metric Day activity, Metric Madness is a board game for two to four players. Students review and practice metric vocabulary, measurement, and calculations by playing the game. Playing time is approximately twenty to thirty minutes.

  18. Comparing distance metrics for rotation using the k-nearest neighbors algorithm for entropy estimation.

    PubMed

    Huggins, David J

    2014-02-15

Distance metrics facilitate a number of methods for statistical analysis. For statistical mechanical applications, it is useful to be able to compute the distance between two different orientations of a molecule. However, a number of distance metrics for rotation have been employed, and in this study, we consider different distance metrics and their utility in entropy estimation using the k-nearest neighbors (KNN) algorithm. This approach shows a number of advantages over entropy estimation using a histogram method, and the different approaches are assessed using uniform randomly generated data, biased randomly generated data, and data from a molecular dynamics (MD) simulation of bulk water. The results identify quaternion metrics as superior to a metric based on the Euler angles. However, it is demonstrated that samples from an MD simulation must be independent for effective use of the KNN algorithm, and this finding impacts any application to time series data. PMID:24311273
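A common quaternion metric of the kind compared here is the geodesic angle between unit quaternions, with an absolute value on the dot product to handle the fact that q and -q encode the same rotation. A sketch of one plausible choice among the several metrics the paper assesses:

```python
import numpy as np

def quaternion_distance(q1, q2):
    """Geodesic-style rotation distance between unit quaternions:
    d = 2 * arccos(|q1 . q2|). The absolute value collapses the q / -q
    double cover, so equivalent rotations get distance zero."""
    dot = abs(np.dot(q1, q2))
    return 2.0 * np.arccos(np.clip(dot, -1.0, 1.0))  # clip guards rounding
```

Such a metric plugs directly into a KNN entropy estimator, which needs only nearest-neighbor distances between sampled orientations.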

  19. Induction motor fault diagnosis based on the k-NN and optimal feature selection

    NASA Astrophysics Data System (ADS)

    Nguyen, Ngoc-Tu; Lee, Hong-Hee

    2010-09-01

The k-nearest neighbour (k-NN) rule is applied to diagnose the conditions of induction motors. The features are extracted from the time vibration signals, while the optimal features are selected by a genetic algorithm based on a distance criterion. A weight value is assigned to each feature to help select the best quality features. To improve the classification performance of the k-NN rule, each of the k neighbours is weighted by a factor based on its distance to the test pattern. The proposed k-NN is compared to the conventional k-NN and to support vector machine classification to verify its performance in induction motor fault diagnosis.
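The distance-weighting step can be sketched as follows: each of the k neighbours votes for its class with a weight inversely proportional to its distance from the test pattern. The 1/(d + eps) weighting below is one common choice, not necessarily the exact factor used in the paper.

```python
import numpy as np

def weighted_knn_classify(X_train, y_train, x, k=3, eps=1e-9):
    """k-NN with distance-weighted voting: closer neighbours get weight
    1/(d + eps), so near matches dominate the class vote."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]                   # indices of the k nearest neighbours
    votes = {}
    for i in nn:
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + 1.0 / (d[i] + eps)
    return max(votes, key=votes.get)         # class with the largest total weight
```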

  20. Fabrication of transparent lead-free KNN glass ceramics by incorporation method

    PubMed Central

    2012-01-01

The incorporation method was employed to produce potassium sodium niobate [KNN] (K0.5Na0.5NbO3) glass ceramics from the KNN-SiO2 system. This incorporation method combines a simple mixed-oxide technique for producing KNN powder with a conventional melt-quenching technique to form the resulting glass. KNN was calcined at 800°C and subsequently mixed with SiO2 in a KNN:SiO2 ratio of 75:25 (mol%). The successfully produced optically transparent glass was then subjected to a heat treatment schedule at temperatures ranging from 525°C to 575°C for crystallization. All glass ceramics with more than 40% transmittance crystallized into KNN nanocrystals that were rectangular in shape and well dispersed throughout the glass matrix. The crystal size and crystallinity were found to increase with increasing heat treatment temperature, which in turn plays an important role in controlling the properties of the glass ceramics, including physical, optical, and dielectric properties. The transparency of the glass samples decreased with increasing crystal size. The maximum room temperature dielectric constant (εr) was as high as 474 at 10 kHz with an acceptably low loss (tanδ) of around 0.02 at 10 kHz. PMID:22340426

  1. Color Metric.

    ERIC Educational Resources Information Center

    Illinois State Office of Education, Springfield.

This booklet was designed to convey metric information in pictorial form. The use of pictures in the coloring book enables the more mature person to grasp the metric message instantly, whereas the younger person, while coloring the picture, will be exposed to the metric information long enough to make the proper associations. Sheets of the booklet…

  2. GPU-FS-kNN: A Software Tool for Fast and Scalable kNN Computation Using GPUs

    PubMed Central

    Arefin, Ahmed Shamsul; Riveros, Carlos; Berretta, Regina; Moscato, Pablo

    2012-01-01

Background The analysis of biological networks has become a major challenge due to the recent development of high-throughput techniques that are rapidly producing very large data sets. The exploding volume of biological data demands extreme computational power and special computing facilities (i.e., supercomputers). An inexpensive solution, such as General Purpose computation on Graphics Processing Units (GPGPU), can be adapted to tackle this challenge, but the limitation of the device's internal memory can pose a new problem of scalability. Efficient data and computational parallelism with partitioning is required to provide a fast and scalable solution to this problem. Results We propose an efficient parallel formulation of the k-Nearest Neighbour (kNN) search problem, which is a popular method for classifying objects in several fields of research, such as pattern recognition, machine learning and bioinformatics. Although very simple and straightforward, kNN search degrades dramatically in performance for large data sets, since the task is computationally intensive. The proposed approach is not only fast but also scalable to large-scale instances. Based on our approach, we implemented a software tool GPU-FS-kNN (GPU-based Fast and Scalable k-Nearest Neighbour) for CUDA-enabled GPUs. The basic approach is simple and adaptable to other available GPU architectures. We observed speed-ups of 50–60 times compared with a CPU implementation on a well-known breast microarray study and its associated data sets. Conclusion Our GPU-based Fast and Scalable k-Nearest Neighbour search technique (GPU-FS-kNN) provides a significant performance improvement for nearest neighbour computation in large-scale networks. Source code and the software tool are available under the GNU Public License (GPL) at https://sourceforge.net/p/gpufsknn/. PMID:22937144
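The partitioning idea (never materializing the full pairwise distance matrix at once) can be illustrated in plain Python by processing query rows in fixed-size chunks; the real tool does the analogous thing per GPU-sized partition in CUDA. Names and the chunking scheme below are illustrative only.

```python
import numpy as np

def chunked_knn(data, k, chunk=256):
    """Partitioned brute-force kNN: process query rows in chunks so the full
    pairwise distance matrix never has to reside in (device) memory at once,
    the same scalability idea GPU-FS-kNN applies on the GPU."""
    n = len(data)
    idx_out = np.empty((n, k), dtype=int)
    dist_out = np.empty((n, k))
    for start in range(0, n, chunk):
        q = data[start:start + chunk]
        # distances from this chunk of queries to all points
        d = np.linalg.norm(q[:, None, :] - data[None, :, :], axis=2)
        for r in range(len(q)):
            d[r, start + r] = np.inf      # a point is not its own neighbor
        nn = np.argsort(d, axis=1)[:, :k]
        idx_out[start:start + chunk] = nn
        dist_out[start:start + chunk] = np.take_along_axis(d, nn, axis=1)
    return idx_out, dist_out
```

Peak memory is O(chunk * n) instead of O(n^2), traded against one pass over the data per chunk.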

  3. Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM

    SciTech Connect

    Lin, Jian; Hamidouche, Khaled; Zheng, Jie; Lu, Xiaoyi; Vishnu, Abhinav; Panda, Dhabaleswar

    2015-08-05

Machine Learning algorithms are benefiting from the continuous improvement of programming models, including MPI, MapReduce and PGAS. The k-Nearest Neighbors (k-NN) algorithm is a widely used machine learning algorithm, applied to supervised learning tasks such as classification. Several parallel implementations of k-NN have been proposed in the literature and in practice. However, on high-performance computing systems with high-speed interconnects, it is important to further accelerate existing designs of the k-NN algorithm by taking advantage of scalable programming models. To improve the performance of k-NN in a large-scale environment with an InfiniBand network, this paper proposes several alternative hybrid MPI+OpenSHMEM designs and performs a systematic evaluation and analysis on typical workloads. The hybrid designs leverage one-sided memory access to better overlap communication with computation than the existing pure MPI design, and propose better schemes for efficient buffer management. The implementation, based on the k-NN program from MaTEx with MVAPICH2-X (Unified MPI+PGAS Communication Runtime over InfiniBand), shows up to a 9.0% time reduction for training the KDD Cup 2010 workload over 512 cores, and a 27.6% time reduction for a small workload with balanced communication and computation. Experiments with varying numbers of cores show that our design maintains good scalability.

  4. Key landscape ecology metrics for assessing climate change adaptation options: rate of change and patchiness of impacts

    USGS Publications Warehouse

    López-Hoffman, Laura; Breshears, David D.; Allen, Craig D.; Miller, Marc L.

    2013-01-01

Under a changing climate, devising strategies to help stakeholders adapt to alterations to ecosystems and their services is of utmost importance. In western North America, diminished snowpack and river flows are causing relatively gradual, homogeneous (system-wide) changes in ecosystems and services. In addition, increased climate variability is also accelerating the incidence of abrupt and patchy disturbances such as fires, floods and droughts. This paper posits that two key variables often considered in landscape ecology—the rate of change and the degree of patchiness of change—can aid in developing climate change adaptation strategies. We use two examples from the “borderland” region of the southwestern United States and northwestern Mexico. In piñon-juniper woodland die-offs that occurred in the southwestern United States during the 2000s, ecosystem services suddenly crashed in some parts of the system while remaining unaffected in other locations. The precise timing and location of die-offs were uncertain. On the other hand, slower, homogeneous change, such as the expected declines in water supply to the Colorado River delta, will likely impact the entire ecosystem, with ecosystem services everywhere in the delta subject to alteration, and all users likely exposed. The rapidity and spatial heterogeneity of faster, patchy climate change exemplified by tree die-off suggests that decision-makers and local stakeholders would be wise to operate under a Rawlsian “veil of ignorance,” and implement adaptation strategies that allow ecosystem service users to equitably share the risk of sudden loss of ecosystem services before actual ecosystem changes occur. On the other hand, in the case of slower, homogeneous, system-wide impacts to ecosystem services as exemplified by the Colorado River delta, adaptation strategies can be implemented after the changes begin, but will require a fundamental rethinking of how ecosystems and services are used and valued.

  5. Sustainability Metrics: The San Luis Basin Project

    EPA Science Inventory

    Sustainability is about promoting humanly desirable dynamic regimes of the environment. Metrics: ecological footprint, net regional product, exergy, emergy, and Fisher Information. Adaptive management: (1) metrics assess problem, (2) specific problem identified, and (3) managemen...

  6. Primary Metrics.

    ERIC Educational Resources Information Center

    Otto, Karen; And Others

    These 55 activity cards were created to help teachers implement a unit on metric measurement. They were designed for students aged 5 to 10, but could be used with older students. Cards are color-coded in terms of activities on basic metric terms, prefixes, length, and other measures. Both individual and small-group games and ideas are included.…

  7. Mastering Metrics

    ERIC Educational Resources Information Center

    Parrot, Annette M.

    2005-01-01

    By the time students reach a middle school science course, they are expected to make measurements using the metric system. However, most are not practiced in its use, as their experience in metrics is often limited to one unit they were taught in elementary school. This lack of knowledge is not wholly the fault of formal education. Although the…

  8. Metric transition

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This report describes NASA's metric transition in terms of seven major program elements. Six are technical areas involving research, technology development, and operations; they are managed by specific Program Offices at NASA Headquarters. The final program element, Institutional Management, covers both NASA-wide functional management under control of NASA Headquarters and metric capability development at the individual NASA Field Installations. This area addresses issues common to all NASA program elements, including: Federal, state, and local coordination; standards; private industry initiatives; public-awareness initiatives; and employee training. The concluding section identifies current barriers and impediments to metric transition; NASA has no specific recommendations for consideration by the Congress.

  9. Edible Metrics.

    ERIC Educational Resources Information Center

    Mecca, Christyna E.

    1998-01-01

    Presents an exercise that introduces students to scientific measurements using only metric units. At the conclusion of the exercise, students eat the experiment. Requires dried refried beans, crackers or chips, and dried instant powder for lemonade. (DDR)

  10. Photometric redshift estimation for quasars by integration of KNN and SVM

    NASA Astrophysics Data System (ADS)

    Han, Bo; Ding, Hong-Peng; Zhang, Yan-Xia; Zhao, Yong-Heng

    2016-05-01

    The massive photometric data collected from multiple large-scale sky surveys offer significant opportunities for measuring distances of celestial objects by photometric redshifts. However, catastrophic failure is an unsolved problem with a long history and it still exists in current photometric redshift estimation approaches (such as the k-nearest neighbor (KNN) algorithm). In this paper, we propose a novel two-stage approach that integrates the KNN and support vector machine (SVM) methods. In the first stage, we apply the KNN algorithm to photometric data and estimate their corresponding z_phot. Our analysis has found two dense regions with catastrophic failure, one in the range z_phot ∈ [0.3, 1.2] and the other in the range z_phot ∈ [1.2, 2.1]. In the second stage, we map the photometric input pattern of points falling into the two ranges from their original attribute space into a high-dimensional feature space by using a Gaussian kernel function from an SVM. In the high-dimensional feature space, many outliers resulting from catastrophic failure by simple Euclidean distance computation in KNN can be identified by a classification hyperplane of the SVM and can be further corrected. Experimental results based on the Sloan Digital Sky Survey (SDSS) quasar data show that the two-stage fusion approach can significantly mitigate catastrophic failure and improve the estimation accuracy of photometric redshifts of quasars. The percentages in different |δz| ranges and the root mean square (rms) error by the integrated method are 83.47%, 89.83%, 90.90% and 0.192, respectively, compared to the results by KNN (71.96%, 83.78%, 89.73% and 0.204).
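The first (KNN) stage of such a pipeline can be sketched in a few lines. This is an illustrative toy, not the paper's SDSS setup: the colour vectors, redshifts, and choice of k below are invented, and the SVM correction stage is omitted.

```python
import math

def knn_redshift(train, query, k=3):
    """Estimate z_phot for `query` as the mean spectroscopic redshift
    of its k nearest training points in colour space (Euclidean)."""
    nearest = sorted((math.dist(colors, query), z) for colors, z in train)[:k]
    return sum(z for _, z in nearest) / k

# toy training set: (colour vector, spectroscopic redshift)
train = [((0.10, 0.20), 0.50), ((0.20, 0.10), 0.60),
         ((0.15, 0.15), 0.55), ((0.90, 0.80), 2.00),
         ((1.00, 0.90), 2.10)]
z_phot = knn_redshift(train, (0.12, 0.18))  # averages the three nearby redshifts
```

In the paper's scheme, estimates landing in the failure-prone z_phot ranges would then be re-examined by an SVM with a Gaussian kernel.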

  11. Think Metric

    USGS Publications Warehouse

    U.S. Geological Survey

    1978-01-01

    The International System of Units, as the metric system is officially called, provides for a single "language" to describe weights and measures over the world. We in the United States together with the people of Brunei, Burma, and Yemen are the only ones who have not put this convenient system into effect. In the passage of the Metric Conversion Act of 1975, Congress determined that we also will adopt it, but the transition will be voluntary.

  12. STN area detection using K-NN classifiers for MER recordings in Parkinson patients during neurostimulator implant surgery

    NASA Astrophysics Data System (ADS)

    Schiaffino, L.; Rosado Muñoz, A.; Guerrero Martínez, J.; Francés Villora, J.; Gutiérrez, A.; Martínez Torres, I.; Kohan, y. D. R.

    2016-04-01

    Deep Brain Stimulation (DBS) applies electric pulses into the subthalamic nucleus (STN), improving tremor and other symptoms associated with Parkinson’s disease. Accurate STN detection for proper location and implant of the stimulating electrodes is a complex task, and surgeons are not always certain about the final location. Signals from the STN acquired during DBS surgery are obtained with microelectrodes and have specific characteristics that differ from those of other brain areas. Using supervised learning, a trained model based on previous microelectrode recordings (MER) can be obtained that successfully classifies the STN area for new MER signals. The K Nearest Neighbours (K-NN) algorithm has been successfully applied to STN detection. However, the use of the fuzzy form of the K-NN algorithm (KNN-F) has not been reported. This work compares K-NN and KNN-F for STN detection. Real MER recordings from eight patients were previously classified by neurophysiologists, defining 15 features. Sensitivity and specificity for the classifiers are obtained, and the Wilcoxon signed-rank non-parametric test is used for statistical hypothesis validation. We conclude that the performance of the KNN-F classifier is higher than that of K-NN, with p<0.01 in STN specificity.
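The difference between the two classifiers compared here can be sketched with toy one-dimensional features (the real study uses 15 MER-derived features; the numbers and labels below are invented). The fuzzy variant follows the common Keller-style inverse-distance weighting, which may differ in detail from the authors' KNN-F.

```python
def knn_crisp(train, x, k=3):
    """Plain K-NN: majority vote among the k nearest labelled samples."""
    neigh = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    labels = [lab for _, lab in neigh]
    return max(set(labels), key=labels.count)

def knn_fuzzy(train, x, k=3, m=2.0, eps=1e-9):
    """Fuzzy K-NN (Keller-style): per-class membership weighted by
    inverse distance 1/d**(2/(m-1)); returns winning label and membership."""
    neigh = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    w, total = {}, 0.0
    for xi, lab in neigh:
        wi = 1.0 / (abs(xi - x) ** (2.0 / (m - 1.0)) + eps)
        w[lab] = w.get(lab, 0.0) + wi
        total += wi
    label = max(w, key=w.get)
    return label, w[label] / total

# toy 1-D feature, e.g. a normalized activity level per recording site
train = [(0.00, 'other'), (0.10, 'other'),
         (1.00, 'STN'), (1.10, 'STN'), (1.20, 'STN')]
```

The fuzzy classifier's membership value gives a graded confidence for the STN label instead of a hard vote, which is what the specificity comparison exploits.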

  13. Metric System.

    ERIC Educational Resources Information Center

    Del Mod System, Dover, DE.

    This autoinstructional unit deals with the identification of units of measure in the metric system and the construction of relevant conversion tables. Students in middle school or in grade ten, taking a General Science course, can handle this learning activity. It is recommended that high, middle or low level achievers can use the program.…

  14. KNN/BNT Composite Lead-Free Films for High-Frequency Ultrasonic Transducer Applications

    PubMed Central

    Lau, Sien Ting; Ji, Hong Fen; Li, Xiang; Ren, Wei; Zhou, Qifa; Shung, K. Kirk

    2011-01-01

    Lead-free K0.5Na0.5NbO3/Bi0.5Na0.5TiO3 (KNN/BNT) films have been fabricated by a composite sol-gel technique. Crystalline KNN fine powder was dispersed in the BNT precursor solution to form a composite slurry which was then spin-coated onto a platinum-buffered Si substrate. Repeated layering and vacuum infiltration were applied to produce 5-μm-thick dense composite film. By optimizing the sintering temperature, the films exhibited good dielectric and ferroelectric properties comparable to PZT films. A 193-MHz high-frequency ultrasonic transducer fabricated from this composite film showed a −6-dB bandwidth of approximately 34%. A tungsten wire phantom was imaged to demonstrate the capability of the transducer. PMID:21244994

  15. KNN/BNT composite lead-free films for high-frequency ultrasonic transducer applications.

    PubMed

    Lau, Sien Ting; Ji, Hong Fen; Li, Xiang; Ren, Wei; Zhou, Qifa; Shung, K Kirk

    2011-01-01

    Lead-free K(0.5)Na(0.5)NbO(3)/Bi(0.5)Na(0.5)TiO(3) (KNN/ BNT) films have been fabricated by a composite sol-gel technique. Crystalline KNN fine powder was dispersed in the BNT precursor solution to form a composite slurry which was then spin-coated onto a platinum-buffered Si substrate. Repeated layering and vacuum infiltration were applied to produce 5-μm-thick dense composite film. By optimizing the sintering temperature, the films exhibited good dielectric and ferroelectric properties comparable to PZT films. A 193-MHz high-frequency ultrasonic transducer fabricated from this composite film showed a -6-dB bandwidth of approximately 34%. A tungsten wire phantom was imaged to demonstrate the capability of the transducer. PMID:21244994

  16. SU-E-J-124: FDG PET Metrics Analysis in the Context of An Adaptive PET Protocol for Node Positive Gynecologic Cancer Patients

    SciTech Connect

    Nawrocki, J; Chino, J; Light, K; Vergalasova, I; Craciunescu, O

    2014-06-01

    Purpose: To compare PET extracted metrics and investigate the role of a gradient-based PET segmentation tool, PET Edge (MIM Software Inc., Cleveland, OH), in the context of an adaptive PET protocol for node positive gynecologic cancer patients. Methods: An IRB approved protocol enrolled women with gynecological, PET visible malignancies. A PET-CT was obtained for treatment planning prescribed to 45–50.4Gy with a 55–70Gy boost to the PET positive nodes. An intra-treatment PET-CT was obtained between 30–36Gy, and all volumes re-contoured. Standard uptake values (SUVmax, SUVmean, SUVmedian) and GTV volumes were extracted from the clinician contoured GTVs on the pre- and intra-treatment PET-CT for primaries and nodes and compared with a two-tailed Wilcoxon signed-rank test. The differences between primary and node GTV volumes contoured in the treatment planning system and those volumes generated using PET Edge were also investigated. Bland-Altman plots were used to describe significant differences between the two contouring methods. Results: Thirteen women were enrolled in this study. The median baseline/intra-treatment primary (SUVmax, mean, median) were (30.5, 9.09, 7.83)/(16.6, 4.35, 3.74), and nodes were (20.1, 4.64, 3.93)/(6.78, 3.13, 3.26). The p values were all < 0.001. The clinical contours were all larger than the PET Edge generated ones, with mean differences of +20.6 ml for primaries and +23.5 ml for nodes. The Bland-Altman plots revealed changes between clinician/PET Edge contours to be mostly within the margins of the coefficient of variability. However, there was a proportional trend, i.e. the larger the GTV, the larger the clinical contours as compared to PET Edge contours. Conclusion: Primary and node SUV values taken from the intra-treatment PET-CT can be used to assess disease response and to design an adaptive plan. The PET Edge tool can streamline the contouring process and lead to smaller, less user-dependent contours.

  17. Quality metrics for product defectiveness at KCD

    SciTech Connect

    Grice, J.V.

    1993-07-01

    Metrics are discussed for measuring and tracking product defectiveness at AlliedSignal Inc., Kansas City Division (KCD). Three new metrics, the metric (percent defective) that preceded the new metrics, and several alternatives are described. The new metrics, Percent Parts Accepted, Percent Parts Accepted Trouble Free, and Defects Per Million Observations (denoted by PPA, PATF, and DPMO, respectively), were implemented for KCD-manufactured product and purchased material in November 1992. These metrics replace the percent defective metric that had been used for several years. The PPA and PATF metrics primarily measure quality performance while DPMO measures the effects of continuous improvement activities. The new metrics measure product quality in terms of product defectiveness observed only during the inspection process. The metrics were originally developed for purchased product and were adapted to manufactured product to provide a consistent set of metrics plant-wide. The new metrics provide a meaningful tool to measure the quantity of product defectiveness in terms of the customer's requirements and expectations for quality. Many valid metrics are available and all will have deficiencies. These three metrics are among the least sensitive to problems and are easily understood. They will serve as good management tools for KCD in the foreseeable future until new flexible data systems and reporting procedures can be implemented that can provide more detailed and accurate metric computations.

  18. Metricize Yourself

    NASA Astrophysics Data System (ADS)

    Falbo, Maria K.

    2006-12-01

    In lab and homework, students should check whether or not their quantitative answers to physics questions make sense in the context of the problem. Unfortunately, it is still the case in the US that many students don’t have a “feel” for °C, kg, cm, liters, or Newtons. This problem contributes to the inability of students to check answers. It is also the case that just “going over” the tables in the text can be boring and dry. In this talk I’ll demonstrate some classroom activities that can be used throughout the year to give students a metric context in which quantitative answers can be interpreted.

  19. New KNN-based lead-free piezoelectric ceramic for high-frequency ultrasound transducer applications

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Jun; Zhu, Benpeng; Zhang, Yue; Chen, Shi; Yang, Xiaofei; Wei, Wei

    2015-03-01

    Based on the new KNN-based piezoelectric material 0.96(K0.48Na0.52)(Nb0.95Sb0.05)O3-0.04Bi0.5(Na0.82K0.18)0.5ZrO3 with a giant d33 of 490 pC/N, a 37-MHz high-frequency ultrasound needle transducer with an aperture size of 1 mm was successfully fabricated. The obtained transducer had a high electromechanical coupling factor kt of 0.55, a good bandwidth of 56.8% at -6 dB, and a low insertion loss of -16 dB at the central frequency. Its excellent performance is comparable to that of lead-containing transducers and superior to that of any other lead-free transducer. This promising result demonstrates that this new KNN-based lead-free piezoelectric ceramic is a good candidate to replace lead-based materials for high-frequency ultrasound imaging.

  20. Make It Metric.

    ERIC Educational Resources Information Center

    Camilli, Thomas

    Measurement is perhaps the most frequently used form of mathematics. This book presents activities for learning about the metric system designed for upper intermediate and junior high levels. Discussions include: why metrics, history of metrics, changing to a metric world, teaching tips, and formulas. Activities presented are: metrics all around…

  1. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and…

  2. Evaluation of normalization methods for cDNA microarray data by k-NN classification

    SciTech Connect

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, Saira; Bissell, Mina J

    2004-12-17

    Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a k-nearest neighbor (k-NN) classifier was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either the spatial-dependent dye bias (referred to later as the spatial effect) or the intensity-dependent dye bias (referred to later as the intensity effect) moderately reduce LOOCV classification errors, whereas double-bias-removal techniques which remove both the spatial and intensity effects reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed the intensity effect globally and the spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Using LOOCV error of k-NNs as the evaluation criterion, three double…
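The evaluation end-point used here, the LOOCV error of a k-NN classifier, can be sketched as follows. The feature vectors and labels are toy stand-ins, not normalized microarray profiles, and k = 1 is chosen only for brevity.

```python
def loocv_error(data, k=1):
    """Leave-one-out cross-validation error rate of a k-NN classifier.
    `data` is a list of (feature_vector, label) pairs; each sample is
    held out in turn and predicted from all of the others."""
    errors = 0
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        # k nearest neighbours by squared Euclidean distance
        neigh = sorted(rest,
                       key=lambda p: sum((a - b) ** 2
                                         for a, b in zip(p[0], x)))[:k]
        labels = [lab for _, lab in neigh]
        pred = max(set(labels), key=labels.count)  # majority vote
        errors += int(pred != y)
    return errors / len(data)

# toy expression profiles: two well-separated classes
data = [((0.0, 0.0), 'a'), ((0.0, 1.0), 'a'),
        ((5.0, 5.0), 'b'), ((5.0, 6.0), 'b')]
err = loocv_error(data, k=1)
```

In the study's setting, a normalization method that better removes dye bias should yield a lower `err` on the normalized data.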

  3. An improved k-NN method based on multiple-point statistics for classification of high-spatial resolution imagery

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Jing, L.; Li, H.; Liu, Q.; Ding, H.

    2016-04-01

    In this paper, the potential of multiple-point statistics (MPS) for object-based classification is explored using a modified k-nearest neighbour (k-NN) classification method (MPk-NN). The method first utilises a training image derived from a classified map to characterise the spatial correlation between multiple points of land cover classes, overcoming the limitations of two-point geostatistical methods, and then the spatial information in the form of multiple-point probability is incorporated into the k-NN classifier. The remotely sensed image of an IKONOS subscene of the Beijing urban area was selected to evaluate the method. The image was object-based classified using the MPk-NN method and several alternatives, including the traditional k-NN, the geostatistically weighted k-NN, the Bayesian method, the decision tree classifier (DTC), and the support vector machine classifier (SVM). It was demonstrated that the MPk-NN approach can achieve greater classification accuracy relative to the alternatives, which are 82.05% and 89.12% based on pixel and object testing data, respectively. Thus, the proposed method is appropriate for object-based classification.

  4. NASA metrication activities

    NASA Technical Reports Server (NTRS)

    Vlannes, P. N.

    1978-01-01

    NASA's organization and policy for metrication, its history from 1964, NASA participation in Federal agency activities, interaction with nongovernmental metrication organizations, and the proposed metrication assessment study are reviewed.

  5. (100)-Textured KNN-based thick film with enhanced piezoelectric property for intravascular ultrasound imaging

    PubMed Central

    Zhu, Benpeng; Zhang, Zhiqiang; Ma, Teng; Yang, Xiaofei; Li, Yongxiang; Shung, K. Kirk; Zhou, Qifa

    2015-01-01

    Using tape-casting technology, 35 μm free-standing (100)-textured Li doped KNN (KNLN) thick film was prepared by employing NaNbO3 (NN) as template. It exhibited similar piezoelectric behavior to lead containing materials: a longitudinal piezoelectric coefficient (d33) of ∼150 pm/V and an electromechanical coupling coefficient (kt) of 0.44. Based on this thick film, a 52 MHz side-looking miniature transducer with a bandwidth of 61.5% at −6 dB was built for intravascular ultrasound (IVUS) imaging. In comparison with a 40 MHz PMN-PT single crystal transducer, the rabbit aorta image had better resolution and a higher signal-to-noise ratio, indicating that lead-free (100)-textured KNLN thick film may be suitable for IVUS (>50 MHz) imaging. PMID:25991874

  6. (100)-Textured KNN-based thick film with enhanced piezoelectric property for intravascular ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Benpeng; Zhang, Zhiqiang; Ma, Teng; Yang, Xiaofei; Li, Yongxiang; Shung, K. Kirk; Zhou, Qifa

    2015-04-01

    Using tape-casting technology, 35 μm free-standing (100)-textured Li doped KNN (KNLN) thick film was prepared by employing NaNbO3 (NN) as template. It exhibited similar piezoelectric behavior to lead containing materials: a longitudinal piezoelectric coefficient (d33) of ∼150 pm/V and an electromechanical coupling coefficient (kt) of 0.44. Based on this thick film, a 52 MHz side-looking miniature transducer with a bandwidth of 61.5% at -6 dB was built for intravascular ultrasound (IVUS) imaging. In comparison with a 40 MHz PMN-PT single crystal transducer, the rabbit aorta image had better resolution and a higher signal-to-noise ratio, indicating that lead-free (100)-textured KNLN thick film may be suitable for IVUS (>50 MHz) imaging.

  7. The high density phase of the k-NN hard core lattice gas model

    NASA Astrophysics Data System (ADS)

    Nath, Trisha; Rajesh, R.

    2016-07-01

    The k-NN hard core lattice gas model on a square lattice, in which the first k next nearest neighbor sites of a particle are excluded from being occupied by another particle, is the lattice version of the hard disc model in two dimensional continuum. It has been conjectured that the lattice model, like its continuum counterpart, will show multiple entropy-driven transitions with increasing density if the high density phase has columnar or striped order. Here, we determine the nature of the phase at full packing for k up to 820 302. We show that there are only eighteen values of k, all less than k = 4134, that show columnar order, while the others show solid-like sublattice order.
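For intuition, the simplest member of this family is k = 1, where only the four nearest-neighbour sites of each particle are excluded. A minimal validity check for a configuration under 1-NN exclusion with periodic boundaries can be sketched as below; this is an illustration of the exclusion rule, not the paper's Monte Carlo machinery.

```python
def valid_1nn(occupied, L):
    """Check a configuration of the 1-NN hard-core lattice gas on an
    L x L square lattice with periodic boundaries: no two occupied
    sites may be nearest neighbours."""
    occ = set(occupied)
    for (x, y) in occ:
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if ((x + dx) % L, (y + dy) % L) in occ:
                return False
    return True

# full packing for k = 1: one of the two checkerboard sublattices
sublattice = [(x, y) for x in range(4) for y in range(4) if (x + y) % 2 == 0]
```

Fully occupying one checkerboard sublattice is the density-1/2 sublattice-ordered phase referred to in the abstract for small k.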

  8. New Self-Dual Einstein Metrics

    NASA Astrophysics Data System (ADS)

    Casteill, P. Y.; Valent, G.

    A new family of Euclidean Einstein metrics with self-dual Weyl tensor has been obtained using ideas from extended supersymmetries [1,2]. The basic supersymmetric formalism [3], known as harmonic superspace, was adapted to the computation of self-dual Einstein metrics in [4]. The resulting metric depends on 4 parameters besides the Einstein constant and has for isometry group U(1) × U(1), with hypersurface generating Killing vectors. In the limit of vanishing Einstein constant we recover a family of hyperkähler metrics within the Multicentre family [5] (in fact the most general one with two centres). Our results include the metrics of Plebanski and Demianski [6] when these are restricted to be self-dual Weyl. From Flaherty's equivalence [7], these metrics can also be interpreted as a solution of the coupled Einstein-Maxwell field equations, for which we have given the Maxwell field strength forms [2].

  9. Measuring in Metric.

    ERIC Educational Resources Information Center

    Sorenson, Juanita S.

    Eight modules for an in-service course on metric education for elementary teachers are included in this document. The modules are on an introduction to the metric system, length and basic prefixes, volume, mass, temperature, relationships within the metric system, and metric and English system relationships. The eighth one is on developing a…

  10. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted…

  11. Defining Sustainability Metric Targets in an Institutional Setting

    ERIC Educational Resources Information Center

    Rauch, Jason N.; Newman, Julie

    2009-01-01

    Purpose: The purpose of this paper is to expand on the development of university and college sustainability metrics by implementing an adaptable metric target strategy. Design/methodology/approach: A combined qualitative and quantitative methodology is derived that both defines what a sustainable metric target might be and describes the path a…

  12. NASA metric transition plan

    NASA Astrophysics Data System (ADS)

    NASA science publications have used the metric system of measurement since 1970. Although NASA has maintained a metric use policy since 1979, practical constraints have restricted actual use of metric units. In 1988, an amendment to the Metric Conversion Act of 1975 required the Federal Government to adopt the metric system except where impractical. In response to Public Law 100-418 and Executive Order 12770, NASA revised its metric use policy and developed this Metric Transition Plan. NASA's goal is to use the metric system for program development and functional support activities to the greatest practical extent by the end of 1995. The introduction of the metric system into new flight programs will determine the pace of the metric transition. Transition of institutional capabilities and support functions will be phased to enable use of the metric system in flight program development and operations. Externally oriented elements of this plan will introduce and actively support use of the metric system in education, public information, and small business programs. The plan also establishes a procedure for evaluating and approving waivers and exceptions to the required use of the metric system for new programs. Coordination with other Federal agencies and departments (through the Interagency Council on Metric Policy) and industry (directly and through professional societies and interest groups) will identify sources of external support and minimize duplication of effort.

  13. NASA metric transition plan

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA science publications have used the metric system of measurement since 1970. Although NASA has maintained a metric use policy since 1979, practical constraints have restricted actual use of metric units. In 1988, an amendment to the Metric Conversion Act of 1975 required the Federal Government to adopt the metric system except where impractical. In response to Public Law 100-418 and Executive Order 12770, NASA revised its metric use policy and developed this Metric Transition Plan. NASA's goal is to use the metric system for program development and functional support activities to the greatest practical extent by the end of 1995. The introduction of the metric system into new flight programs will determine the pace of the metric transition. Transition of institutional capabilities and support functions will be phased to enable use of the metric system in flight program development and operations. Externally oriented elements of this plan will introduce and actively support use of the metric system in education, public information, and small business programs. The plan also establishes a procedure for evaluating and approving waivers and exceptions to the required use of the metric system for new programs. Coordination with other Federal agencies and departments (through the Interagency Council on Metric Policy) and industry (directly and through professional societies and interest groups) will identify sources of external support and minimize duplication of effort.

  14. Feature Selection and Predictors of Falls with Foot Force Sensors Using KNN-Based Algorithms

    PubMed Central

    Liang, Shengyun; Ning, Yunkun; Li, Huiqi; Wang, Lei; Mei, Zhanyong; Ma, Yingnan; Zhao, Guoru

    2015-01-01

    The aging process may lead to the degradation of lower extremity function in the elderly population, which can restrict their daily quality of life and gradually increase the fall risk. We aimed to determine whether objective measures of physical function could predict subsequent falls. Ground reaction force (GRF) data, which were quantified by sample entropy, were collected by foot force sensors. Thirty-eight subjects (23 fallers and 15 non-fallers) participated in functional movement tests, including walking and sit-to-stand (STS). A feature selection algorithm was used to select relevant features to classify the elderly into two groups, at risk and not at risk of falling down, for three KNN-based classifiers: local mean-based k-nearest neighbor (LMKNN), pseudo nearest neighbor (PNN), and local mean pseudo nearest neighbor (LMPNN). We compared classification performances and achieved the best results with LMPNN, with sensitivity, specificity, and accuracy all 100%. Moreover, a subset of GRFs was significantly different between the two groups via the Wilcoxon rank sum test, which is compatible with the classification results. This method could potentially be used by non-experts to monitor balance and the risk of falling down in the elderly population. PMID:26610503
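Of the three classifiers, LMKNN is the easiest to sketch: each class is represented by the mean of its k nearest training vectors to the query, and the query takes the class whose local mean is closest. The two-dimensional features and labels below are invented stand-ins for the entropy-based gait features.

```python
def lmknn(train, x, k=2):
    """Local mean-based k-NN (LMKNN): for each class, average that
    class's k nearest training vectors and assign the class whose
    local mean is closest to the query x."""
    best, best_d = None, float('inf')
    for c in {lab for _, lab in train}:
        pts = sorted((p for p, lab in train if lab == c),
                     key=lambda p: sum((a - b) ** 2 for a, b in zip(p, x)))[:k]
        mean = [sum(col) / len(pts) for col in zip(*pts)]
        d = sum((a - b) ** 2 for a, b in zip(mean, x))
        if d < best_d:
            best, best_d = c, d
    return best

# toy gait features: (feature1, feature2) -> faller / non-faller
train = [((0.0, 0.0), 'faller'), ((0.0, 1.0), 'faller'),
         ((4.0, 4.0), 'non-faller'), ((4.0, 5.0), 'non-faller')]
```

Averaging per class before voting makes the decision less sensitive to a single noisy neighbour, which is the usual motivation for the local-mean variants.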

  15. Using quality metrics with laser range scanners

    NASA Astrophysics Data System (ADS)

    MacKinnon, David K.; Aitken, Victor; Blais, Francois

    2008-02-01

    We have developed a series of new quality metrics that are generalizable to a variety of laser range scanning systems, including those acquiring measurements in the mid-field. Moreover, these metrics can be integrated into either an automated scanning system, or a system that guides a minimally trained operator through the scanning process. In particular, we represent the quality of measurements with regard to aliasing and sampling density for mid-field measurements, two issues that have not been well addressed in contemporary literature. We also present a quality metric that addresses the issue of laser spot motion during sample acquisition. Finally, we take into account the interaction between measurement resolution and measurement uncertainty where necessary. These metrics are presented within the context of an adaptive scanning system in which quality metrics are used to minimize the number of measurements obtained during the acquisition of a single range image.

  16. Metrication for the Manager.

    ERIC Educational Resources Information Center

    Benedict, John T.

    The scope of this book covers metrication management. It was created to fill the middle management need for condensed, authoritative information about the metrication process and was conceived as a working tool and a prime reference source. Written from a management point of view, it touches on virtually all aspects of metrication and highlights…

  17. Metrics in Career Education.

    ERIC Educational Resources Information Center

    Lindbeck, John R.

    The United States is rapidly becoming a metric nation. Industry, education, business, and government are all studying the issue of metrication to learn how they can prepare for it. The book is designed to help teachers and students in career education programs learn something about metrics. Presented in an easily understood manner, the textbook's…

  18. Toward a perceptual video-quality metric

    NASA Astrophysics Data System (ADS)

    Watson, Andrew B.

    1998-07-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, so that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
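    The core of a DCT-threshold quality metric of the kind described here can be sketched as follows. This is an illustrative toy, not the calibrated metric: the visibility thresholds and the pooling exponent are placeholder values standing in for a real contrast-sensitivity model.

```python
import math

N = 8  # block size

def dct2(block):
    """2-D DCT-II of an NxN block (direct O(N^4) form, fine for a sketch)."""
    def c(u):
        return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

def blockwise_quality(ref_block, test_block, thresholds, beta=4.0):
    """Perceptual error for one block: DCT coefficient differences are
    divided by per-frequency visibility thresholds (just-noticeable
    amounts), then pooled with a Minkowski beta-norm."""
    R, T = dct2(ref_block), dct2(test_block)
    pooled = sum(abs((R[u][v] - T[u][v]) / thresholds[u][v]) ** beta
                 for u in range(N) for v in range(N))
    return pooled ** (1.0 / beta)
```

    In a real metric the thresholds come from measured sensitivity data; a threshold matrix that simply grows with frequency, e.g. `thresholds[u][v] = 1 + u + v`, serves for illustration.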

  19. Knowledge discovery in medical systems using differential diagnosis, LAMSTAR & k-NN.

    PubMed

    Isola, Rahul; Carvalho, Rebeck; Tripathy, Amiya Kumar

    2012-11-01

    Medical data is an ever-growing source of information generated from the hospitals in the form of patient records. When mined properly, the information hidden in these records is a huge resource bank for medical research. As of now, this data is mostly used only for clinical work. This data often contains hidden patterns and relationships that can lead to better diagnosis, better medicines, better treatment, and overall a platform to better understand the mechanisms governing almost all aspects of the medical domain. Unfortunately, discovery of these hidden patterns and relationships often goes unexploited. However, there is ongoing research in medical diagnosis which can predict diseases of the heart, lungs, and various tumours based on past data collected from patients. These efforts are mostly limited to domain-specific systems that predict diseases restricted to their area of operation, such as the heart or brain, and are not applicable to the whole medical dataset. The system proposed in this paper uses this vast storage of information so that diagnosis based on this historical data can be made. It focuses on computing the probability of occurrence of a particular ailment from the medical data by mining it using a unique algorithm which increases the accuracy of such diagnosis by combining the key points of Neural Networks, Large Memory Storage and Retrieval (LAMSTAR), k-NN, and Differential Diagnosis, all integrated into one single algorithm. The system uses a Service-Oriented Architecture wherein the system components of diagnosis, information portal, and other miscellaneous services are provided. This algorithm can be used in solving a few common problems that are encountered in automated diagnosis these days, which include: diagnosis of multiple diseases showing similar symptoms, diagnosis of a person suffering from multiple diseases, receiving faster and more accurate second opinion and faster identification of trends present in the medical

  20. Metric Education and the Metrics Debate: A Perspective.

    ERIC Educational Resources Information Center

    Chappelet, Jean Loup

    A short history of the use of the metric system is given. The role of education in metrication is discussed. The metric activities of three groups of metrics advocates, the business community, private groups, and government agencies, are described. Arguments advanced by metric opponents are also included. The author compares the metric debate with…

  1. Metrics That Matter.

    PubMed

    Prentice, Julia C; Frakt, Austin B; Pizer, Steven D

    2016-04-01

    Increasingly, performance metrics are seen as key components for accurately measuring and improving health care value. Disappointment in the ability of chosen metrics to meet these goals is exemplified in a recent Institute of Medicine report that argues for a consensus-building process to determine a simplified set of reliable metrics. Overall health care goals should be defined and then metrics to measure these goals should be considered. If appropriate data for the identified goals are not available, they should be developed. We use examples from our work in the Veterans Health Administration (VHA) on validating waiting time and mental health metrics to highlight other key issues for metric selection and implementation. First, we focus on the need for specification and predictive validation of metrics. Second, we discuss strategies to maintain the fidelity of the data used in performance metrics over time. These strategies include using appropriate incentives and data sources, using composite metrics, and ongoing monitoring. Finally, we discuss the VA's leadership in developing performance metrics through a planned upgrade in its electronic medical record system to collect more comprehensive VHA and non-VHA data, increasing the ability to comprehensively measure outcomes. PMID:26951272

  2. About Using the Metric System.

    ERIC Educational Resources Information Center

    Illinois State Office of Education, Springfield.

    This booklet contains a brief introduction to the use of the metric system. Topics covered include: (1) what is the metric system; (2) how to think metric; (3) some advantages of the metric system; (4) basics of the metric system; (5) how to measure length, area, volume, mass and temperature the metric way; (6) some simple calculations using…

  3. What About Metric?

    ERIC Educational Resources Information Center

    Barbrow, Louis E.

    Implications of the change to the metric system in our daily lives are discussed. Advantages of the metric system are presented, especially its decimal base and ease of calculation which are demonstrated by several worked examples. Some further sources of information are listed. A world map indicates the few remaining countries that have not yet…

  4. Metrics for Cosmetology.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of cosmetology students, this instructional package on cosmetology is part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational terminology, measurement terms, and tools currently in use. Each of the…

  5. Surveillance Metrics Sensitivity Study

    SciTech Connect

    Bierbaum, R; Hamada, M; Robertson, A

    2011-11-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose was to develop more quantitative and/or qualitative metric(s) describing the impact of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intended to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metric types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.
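    The power-calculation flavor of such metrics can be illustrated with a deliberately simple binomial detection model (this is a generic sketch, not the report's actual formulation): the chance that a surveillance sample of size n catches at least one instance of a defect present at population fraction p is 1 - (1 - p)^n.

```python
import math

def detection_power(n, p):
    """Probability that inspecting n randomly sampled units finds at
    least one defect when a fraction p of the population is defective."""
    return 1.0 - (1.0 - p) ** n

def required_sample_size(p, confidence=0.90):
    """Smallest n such that detection_power(n, p) >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))
```

    Setting the risk parameters then amounts to choosing which defect fraction p the program must be able to detect, and at what confidence.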

  6. Surveillance metrics sensitivity study.

    SciTech Connect

    Hamada, Michael S.; Bierbaum, Rene Lynn; Robertson, Alix A.

    2011-09-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose was to develop more quantitative and/or qualitative metric(s) describing the impact of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intended to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metric types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.

  7. Arbitrary Metrics in Psychology

    ERIC Educational Resources Information Center

    Blanton, Hart; Jaccard, James

    2006-01-01

    Many psychological tests have arbitrary metrics but are appropriate for testing psychological theories. Metric arbitrariness is a concern, however, when researchers wish to draw inferences about the true, absolute standing of a group or individual on the latent psychological dimension being measured. The authors illustrate this in the context of 2…

  8. Metrics for Food Distribution.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in food distribution, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  9. Metrics for Transportation.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in transportation, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational terminology,…

  10. Introduction to Metrics.

    ERIC Educational Resources Information Center

    Edgecomb, Philip L.; Shapiro, Marion

    Addressed to vocational, or academic middle or high school students, this book reviews mathematics fundamentals using metric units of measurement. It utilizes a common-sense approach to the degree of accuracy needed in solving actual trade and every-day problems. Stress is placed on reading off metric measurements from a ruler or tape, and on…

  11. Metrics for Agricultural Mechanics.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of agricultural mechanics students, this instructional package is one of four for the agribusiness and natural resources occupations cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  12. Dynamic partial reconfiguration implementation of the SVM/KNN multi-classifier on FPGA for bioinformatics application.

    PubMed

    Hussain, Hanaa M; Benkrid, Khaled; Seker, Huseyin

    2015-08-01

    Bioinformatics data tend to be highly dimensional in nature thus impose significant computational demands. To resolve limitations of conventional computing methods, several alternative high performance computing solutions have been proposed by scientists such as Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The latter have shown to be efficient and high in performance. In recent years, FPGAs have been benefiting from dynamic partial reconfiguration (DPR) feature for adding flexibility to alter specific regions within the chip. This work proposes combing the use of FPGAs and DPR to build a dynamic multi-classifier architecture that can be used in processing bioinformatics data. In bioinformatics, applying different classification algorithms to the same dataset is desirable in order to obtain comparable, more reliable and consensus decision, but it can consume long time when performed on conventional PC. The DPR implementation of two common classifiers, namely support vector machines (SVMs) and K-nearest neighbor (KNN) are combined together to form a multi-classifier FPGA architecture which can utilize specific region of the FPGA to work as either SVM or KNN classifier. This multi-classifier DPR implementation achieved at least ~8x reduction in reconfiguration time over the single non-DPR classifier implementation, and occupied less space and hardware resources than having both classifiers. The proposed architecture can be extended to work as an ensemble classifier. PMID:26738068

  13. Multi-color space threshold segmentation and self-learning k-NN algorithm for surge test EUT status identification

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Liu, Gui-xiong

    2016-04-01

    The identification target varies across surge tests. A multi-color space threshold segmentation and self-learning k-nearest neighbor (k-NN) algorithm for equipment-under-test status identification was proposed, because the previous feature-matching approach to status identification had to be trained on new patterns before every test. First, the color space for segmentation (L*a*b*, hue saturation lightness (HSL), or hue saturation value (HSV)) was selected according to the image's ratios of high-luminance and white-luminance points. Second, an unknown-class sample S_r was classified by the k-NN algorithm with training set T_z, using a feature vector formed from the number of pixels, eccentricity ratio, compactness ratio, and Euler number. Last, when the classification confidence coefficient equaled k, S_r was added to the pre-training set T_z'; once T_z' was saturated, the training set grew from T_z to T_z+1. On nine series of illuminant, indicator light, screen, and disturbance samples (21,600 frames in total), the algorithm achieved 98.65% identification accuracy and, over five groups of samples, enlarged the training set from T_0 to T_5 by itself.
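    The self-learning step described in this record (grow the training set only from samples whose classification is unanimous) can be sketched schematically. This is a toy, not the authors' implementation; the fixed pool capacity below stands in for their saturation condition on T_z'.

```python
import math
from collections import Counter

class SelfLearningKNN:
    """k-NN classifier that grows its own training set: a sample whose
    k nearest neighbors all agree (confidence coefficient == k) is
    queued in a pre-training pool, which is merged into the training
    set once the pool is full (T_z -> T_z+1)."""

    def __init__(self, k=3, pool_capacity=5):
        self.k, self.pool_capacity = k, pool_capacity
        self.train = []   # list of (feature_vector, label)
        self.pool = []    # pre-training set T_z'

    def fit(self, samples, labels):
        self.train = list(zip(samples, labels))

    def predict(self, x):
        neighbors = sorted(self.train,
                           key=lambda s: math.dist(x, s[0]))[:self.k]
        votes = Counter(label for _, label in neighbors)
        label, count = votes.most_common(1)[0]
        if count == self.k:                   # unanimous vote: self-learn
            self.pool.append((x, label))
            if len(self.pool) >= self.pool_capacity:
                self.train.extend(self.pool)  # merge the saturated pool
                self.pool = []
        return label
```

    Gating on a unanimous vote is what keeps the self-labeled samples from polluting the training set with borderline cases.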

  14. Metrication - Our Responsibility?

    ERIC Educational Resources Information Center

    Kroner, Klaus E.

    1972-01-01

    The metric system will soon be adopted in the United States. Engineering college educators can play a major role in this revision. Various suggestions are listed for teachers, authors and others. (PS)

  15. A metric for success

    NASA Astrophysics Data System (ADS)

    Carver, Gary P.

    1994-05-01

    The federal agencies are working with industry to ease adoption of the metric system. The goal is to help U.S. industry compete more successfully in the global marketplace, increase exports, and create new jobs. The strategy is to use federal procurement, financial assistance, and other business-related activities to encourage voluntary conversion. Based upon the positive experiences of firms and industries that have converted, federal agencies have concluded that metric use will yield long-term benefits that are beyond any one-time costs or inconveniences. It may be time for additional steps to move the Nation out of its dual-system comfort zone and continue to progress toward metrication. This report includes 'Metric Highlights in U.S. History'.

  16. An Arithmetic Metric

    ERIC Educational Resources Information Center

    Dominici, Diego

    2011-01-01

    This work introduces a distance between natural numbers not based on their position on the real line but on their arithmetic properties. We prove some metric properties of this distance and consider a possible extension.

  17. Sustainability Indicators and Metrics

    EPA Science Inventory

    Sustainability is about preserving human existence. Indicators and metrics are absolutely necessary to provide at least a semi-quantitative assessment of progress towards or away from sustainability. Otherwise, it becomes impossible to objectively assess whether progress is bei...

  18. Quality metrics for sensor images

    NASA Technical Reports Server (NTRS)

    Ahumada, AL

    1993-01-01

    Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftone optimization methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery

  19. Optimal Detection Range of RFID Tag for RFID-based Positioning System Using the k-NN Algorithm

    PubMed Central

    Han, Soohee; Kim, Junghwan; Park, Choung-Hwan; Yoon, Hee-Cheon; Heo, Joon

    2009-01-01

    Positioning technology to track a moving object is an important and essential component of ubiquitous computing environments and applications. An RFID-based positioning system using the k-nearest neighbor (k-NN) algorithm can determine the position of a moving reader from observed reference data. In this study, the optimal detection range of an RFID-based positioning system was determined on the principle that tag spacing can be derived from the detection range. It was assumed that reference tags without signal strength information are regularly distributed in 1-, 2- and 3-dimensional spaces. The optimal detection range was determined, through analytical and numerical approaches, to be 125% of the tag-spacing distance in 1-dimensional space; through numerical approaches, it was 134% in 2-dimensional space and 143% in 3-dimensional space. PMID:22408540
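    A toy 1-D simulation in the spirit of this record (not the paper's analysis): estimate the reader position as the centroid of the reference tags it currently detects, and sweep the corridor to find the worst-case error. Even in this crude setup, with unit tag spacing a detection range of 125% of the spacing gives a smaller worst-case error than either a tighter or a looser range.

```python
def detected_tags(reader, tags, detection_range):
    """Reference tags within the reader's detection range (no RSSI)."""
    return [t for t in tags if abs(t - reader) <= detection_range]

def estimate_position(reader, tags, detection_range):
    """k-NN style estimate: centroid of the detected reference tags."""
    hits = detected_tags(reader, tags, detection_range)
    return sum(hits) / len(hits)

def max_error(tags, detection_range, step=0.01):
    """Worst-case positioning error as the reader sweeps the corridor,
    staying one tag spacing away from the ends to avoid edge effects."""
    lo, hi = tags[0] + 1.0, tags[-1] - 1.0
    n = int(round((hi - lo) / step))
    worst = 0.0
    for i in range(n + 1):
        x = lo + i * step
        worst = max(worst, abs(estimate_position(x, tags, detection_range) - x))
    return worst
```

    Intuitively, too small a range leaves the estimate stuck between two tags, while too large a range lets a distant tag drag the centroid away; 125% of the spacing balances the two effects in 1-D.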

  20. Predicting Subcellular Localization of Apoptosis Proteins Combining GO Features of Homologous Proteins and Distance Weighted KNN Classifier

    PubMed Central

    Wang, Xiao; Li, Hui; Zhang, Qiuwen; Wang, Rong

    2016-01-01

    Apoptosis proteins play a key role in maintaining the stability of an organism; the functions of apoptosis proteins are related to their subcellular locations, which are used to understand the mechanism of programmed cell death. In this paper, we utilize GO annotation information of apoptosis proteins and their homologous proteins, retrieved from the GOA database, to formulate feature vectors, and then combine the distance weighted KNN classification algorithm with them to solve the data imbalance problem existing in the CL317 data set and predict subcellular locations of apoptosis proteins. It is found that the number of homologous proteins can affect the overall prediction accuracy. Under the optimal number of homologous proteins, the overall prediction accuracy of our method on the CL317 data set reaches 96.8% by jackknife test. Compared with other existing methods, our proposed method is very effective and better for predicting subcellular localization of apoptosis proteins. PMID:27213149
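    Distance weighted KNN voting, the classifier named in this record, can be sketched generically (illustrative code, not the authors' implementation; 1/distance weights are one common choice). Because near neighbors outweigh sheer neighbor count, a rare class close to the query can beat a more numerous class farther away, which is why the technique helps with imbalanced data.

```python
import math
from collections import defaultdict

def weighted_knn(query, samples, labels, k=5, eps=1e-9):
    """Distance weighted k-NN: each of the k nearest neighbors votes
    for its class with weight 1/(distance + eps), so close neighbors
    dominate regardless of how many neighbors each class contributes."""
    ranked = sorted(zip(samples, labels),
                    key=lambda s: math.dist(query, s[0]))[:k]
    votes = defaultdict(float)
    for x, y in ranked:
        votes[y] += 1.0 / (math.dist(query, x) + eps)
    return max(votes, key=votes.get)
```
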

  1. MetricMap: an embedding technique for processing distance-based queries in metric spaces.

    PubMed

    Wang, Jason T L; Wang, Xiong; Shasha, Dennis; Zhang, Kaizhong

    2005-10-01

    In this paper, we present an embedding technique, called MetricMap, which is capable of estimating distances in a pseudometric space. Given a database of objects and a distance function for the objects, which is a pseudometric, we map the objects to vectors in a pseudo-Euclidean space with a reasonably low dimension while preserving the distance between two objects approximately. Such an embedding technique can be used as an approximate oracle to process a broad class of distance-based queries. It is also adaptable to data mining applications such as data clustering and classification. We present the theory underlying MetricMap and conduct experiments to compare MetricMap with other methods including MVP-tree and M-tree in processing the distance-based queries. Experimental results on both protein and RNA data show the good performance and the superiority of MetricMap over the other methods. PMID:16240772
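    MetricMap's actual construction is a pseudo-Euclidean eigendecomposition, which is beyond a short sketch. The following illustrates the same filter-and-refine use of an approximate distance oracle with a plainly simpler, different technique: a pivot (reference-object) embedding whose Chebyshev distance lower-bounds the true metric by the triangle inequality, so it can prune candidates without false dismissals.

```python
def embed(obj, pivots, dist):
    """Map an object to the vector of its distances to m pivot objects."""
    return [dist(obj, p) for p in pivots]

def lower_bound(v1, v2):
    """Chebyshev distance between embeddings; by the triangle
    inequality |d(a,p) - d(b,p)| <= d(a,b), so this never exceeds
    the true distance."""
    return max(abs(a - b) for a, b in zip(v1, v2))

def range_query(query, objects, pivots, dist, radius):
    """Filter-and-refine: discard objects whose lower bound already
    exceeds the radius, then verify survivors with the real metric."""
    qv = embed(query, pivots, dist)
    out = []
    for o in objects:
        if lower_bound(qv, embed(o, pivots, dist)) <= radius:
            if dist(query, o) <= radius:  # exact check on survivors
                out.append(o)
    return out
```

    In practice the embeddings of the database objects are precomputed, so only the survivors of the cheap filter pay the cost of the (possibly expensive) exact metric.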

  2. Successful Experiences in Teaching Metric.

    ERIC Educational Resources Information Center

    Odom, Jeffrey V., Ed.

    In this publication are presentations on specific experiences in teaching metrics, made at a National Bureau of Standards conference. Ideas of value to teachers and administrators are described in reports on: SI units of measure; principles and practices of teaching metric; metric and the school librarian; teaching metric through television and…

  3. Estimation of Missing Precipitation Records using Classifier, Cluster and Proximity Metric-Based Interpolation Schemes

    NASA Astrophysics Data System (ADS)

    Teegavarapu, R. S.

    2012-12-01

    New optimal proximity-based imputation, k-nn (k-nearest neighbor) classification and k-means clustering methods are proposed and developed for estimation of missing precipitation records in this study. Variants of these methods are embedded in optimization formulations to optimize the weighing schemes involving proximity measures. Ten different binary and real-valued distance metrics are used as proximity measures. Two climatic regions in the United States, Kentucky and Florida (temperate and tropical), with different gauge density and gauge network structure are used as case studies to evaluate the efficacy of these methods for estimation of missing precipitation data. A comprehensive exercise is undertaken in this study to compare the performances of the developed new methods and their variants to those of already available methods in the literature. Several deterministic and stochastic spatial interpolation methods and their improvised variants using optimization formulations are used for comparisons. Results from these comparisons indicate that the optimal proximity-based imputation, k-means cluster-based and k-nn classification methods are competitive when combined with mathematical programming formulations and provided better estimates of missing precipitation data than available deterministic and stochastic interpolation methods.
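    The simplest member of the family evaluated here, estimating a missing gauge record from its k nearest neighboring gauges with inverse-distance weights, can be sketched as follows (an illustrative baseline, not the study's optimized formulations):

```python
import math

def impute_precip(target_xy, gauges, k=3, power=2.0):
    """Estimate a missing record at target_xy from the k nearest gauges,
    weighting each observation by inverse distance**power.
    gauges: list of ((x, y), observed_value)."""
    ranked = sorted(gauges, key=lambda g: math.dist(target_xy, g[0]))[:k]
    num = den = 0.0
    for xy, value in ranked:
        d = math.dist(target_xy, xy)
        if d == 0.0:
            return value  # co-located gauge: use its record directly
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den
```

    The optimization formulations in the study replace these fixed 1/d**power weights with weights fitted to minimize estimation error over the historical record.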

  4. Cyber threat metrics.

    SciTech Connect

    Frye, Jason Neal; Veitch, Cynthia K.; Mateski, Mark Elliot; Michalski, John T.; Harris, James Mark; Trevino, Cassandra M.; Maruoka, Scott

    2012-03-01

    Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US FCEB agencies and systems.

  5. Testing, Requirements, and Metrics

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda; Hyatt, Larry; Hammer, Theodore F.; Huffman, Lenore; Wilson, William

    1998-01-01

    The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. Also critical is complete requirements based testing of the final product. Modern tools for managing requirements allow new metrics to be used in support of both of these critical processes. Using these tools, potential problems with the quality of the requirements and the test plan can be identified early in the life cycle. Some of these quality factors include: ambiguous or incomplete requirements, poorly designed requirements databases, excessive or insufficient test cases, and incomplete linkage of tests to requirements. This paper discusses how metrics can be used to evaluate the quality of the requirements and test plan to avoid problems later. Requirements management and requirements based testing have always been critical in the implementation of high quality software systems. Recently, automated tools have become available to support requirements management. At NASA's Goddard Space Flight Center (GSFC), automated requirements management tools are being used on several large projects. The use of these tools opens the door to innovative uses of metrics in characterizing test plan quality and assessing overall testing risks. In support of these projects, the Software Assurance Technology Center (SATC) is working to develop and apply a metrics program that utilizes the information now available through the application of requirements management tools. Metrics based on this information provide real-time insight into the testing of requirements, and these metrics assist the Project Quality Office in its testing oversight role. This paper discusses three facets of the SATC's efforts to evaluate the quality of the requirements and test plan early in the life cycle, thus preventing costly errors and time delays later.
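    The linkage metrics discussed above (untested requirements, tests not linked to any requirement) reduce to simple set operations once a traceability matrix is available. A minimal sketch, with hypothetical requirement and test identifiers:

```python
def linkage_metrics(requirements, tests):
    """Traceability metrics: requirement coverage, requirements with no
    linked test, and tests linked to no requirement.
    requirements: dict req_id -> description
    tests: dict test_id -> set of req_ids the test exercises."""
    covered = set().union(*tests.values()) if tests else set()
    untested = set(requirements) - covered
    unlinked = [t for t, linked in tests.items() if not linked]
    return {
        "coverage": 1.0 - len(untested) / len(requirements),
        "untested_requirements": sorted(untested),
        "unlinked_tests": sorted(unlinked),
    }
```

    Real requirements management tools export exactly this kind of linkage data, so metrics like these can be recomputed continuously as the test plan evolves.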

  6. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
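    The direct interpolation-error adaptation idea is easiest to see in 1-D: estimate each cell's error by comparing the function at the midpoint with its linear interpolant, and bisect any cell over tolerance until the error is roughly equidistributed. A minimal sketch (illustrative only; the paper works with anisotropic tetrahedra in 3-D):

```python
def adapt_grid(f, a, b, tol, max_iter=20):
    """Adapt a 1-D grid so the linear interpolation error is below tol
    in every cell: each cell's error is estimated as the gap between f
    at the midpoint and the linear interpolant there; offending cells
    are bisected, and the loop stops once no cell needs refinement."""
    nodes = [a, b]
    for _ in range(max_iter):
        new_nodes, refined = [nodes[0]], False
        for left, right in zip(nodes, nodes[1:]):
            mid = 0.5 * (left + right)
            err = abs(f(mid) - 0.5 * (f(left) + f(right)))
            if err > tol:
                new_nodes.append(mid)
                refined = True
            new_nodes.append(right)
        nodes = new_nodes
        if not refined:
            break
    return nodes
```

    For a function of constant curvature every cell refines equally; for a function with a localized feature, refinement concentrates where the interpolation error actually lives, which is the point of error-driven (rather than metric-driven) adaptation.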

  7. Evaluation metrics for biostatistical and epidemiological collaborations.

    PubMed

    Rubio, Doris McGartland; Del Junco, Deborah J; Bhore, Rafia; Lindsell, Christopher J; Oster, Robert A; Wittkowski, Knut M; Welty, Leah J; Li, Yi-Ju; Demets, Dave

    2011-10-15

    Increasing demands for evidence-based medicine and for the translation of biomedical research into individual and public health benefit have been accompanied by the proliferation of special units that offer expertise in biostatistics, epidemiology, and research design (BERD) within academic health centers. Objective metrics that can be used to evaluate, track, and improve the performance of these BERD units are critical to their successful establishment and sustainable future. To develop a set of reliable but versatile metrics that can be adapted easily to different environments and evolving needs, we consulted with members of BERD units from the consortium of academic health centers funded by the Clinical and Translational Science Award Program of the National Institutes of Health. Through a systematic process of consensus building and document drafting, we formulated metrics that covered the three identified domains of BERD practices: the development and maintenance of collaborations with clinical and translational science investigators, the application of BERD-related methods to clinical and translational research, and the discovery of novel BERD-related methodologies. In this article, we describe the set of metrics and advocate their use for evaluating BERD practices. The routine application, comparison of findings across diverse BERD units, and ongoing refinement of the metrics will identify trends, facilitate meaningful changes, and ultimately enhance the contribution of BERD activities to biomedical research. PMID:21284015

  8. Metric Style Guide.

    ERIC Educational Resources Information Center

    Canadian Council of Ministers of Education, Toronto (Ontario).

    This guide was designed to provide a measure of uniformity across Canada with respect to metric terminology and symbolism, and is designed to enable users to understand and apply Systeme International d'Unites (SI) to everyday life with ease and confidence. This document was written with the intent of being helpful to the greatest number of…

  9. Toll Gate Metrication Project

    ERIC Educational Resources Information Center

    Izzi, John

    1974-01-01

    The project director of the Toll Gate Metrication Project describes the project as the first structured United States public school educational experiment in implementing change toward the adoption of the International System of Units. He believes the change will simplify, rather than complicate, the educational task. (AG)

  10. Metrics and Sports.

    ERIC Educational Resources Information Center

    National Collegiate Athletic Association, Shawnee Mission, KS.

    Designed as a guide to aid the National Collegiate Athletic Association membership and others who must relate measurement of distances, weights, and volumes to athletic activity, this document presents diagrams of performance areas with measurements delineated in both imperial and metric terms. Illustrations are given for baseball, basketball,…

  11. Engineering performance metrics

    SciTech Connect

    DeLozier, R. ); Snyder, N. )

    1993-03-31

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved the normal systems design phases: conceptual design, detailed design, implementation, and integration. The lessons learned from this effort will be explored in this paper. These lessons learned may provide a starting point for other large engineering organizations seeking to institute a performance measurement system for accomplishing project objectives and achieving improved customer satisfaction. To facilitate this effort, a team was chartered to assist in the development of the metrics system. This team, consisting of customers and Engineering staff members, was utilized to ensure that the needs and views of the customers were considered in the development of performance measurements. The development of a system of metrics is no different than the development of any type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  12. Engineering performance metrics

    NASA Astrophysics Data System (ADS)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved the normal systems design phases: conceptual design, detailed design, implementation, and integration. The lessons learned from this effort will be explored in this paper. These lessons learned may provide a starting point for other large engineering organizations seeking to institute a performance measurement system for accomplishing project objectives and achieving improved customer satisfaction. To facilitate this effort, a team was chartered to assist in the development of the metrics system. This team, consisting of customers and Engineering staff members, was utilized to ensure that the needs and views of the customers were considered in the development of performance measurements. The development of a system of metrics is no different than the development of any type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  13. Metrics of Scholarly Impact

    ERIC Educational Resources Information Center

    Cacioppo, John T.; Cacioppo, Stephanie

    2012-01-01

    Ruscio and colleagues (Ruscio, Seaman, D'Oriano, Stremlo, & Mahalchik, this issue) provide a thoughtful empirical analysis of 22 different measures of individual scholarly impact. The simplest metric is number of publications, which Simonton (1997) found to be a reasonable predictor of career trajectories. Although the assessment of the scholarly…

  14. An automatic method for arterial pulse waveform recognition using KNN and SVM classifiers.

    PubMed

    Pereira, Tânia; Paiva, Joana S; Correia, Carlos; Cardoso, João

    2016-07-01

    The measurement and analysis of the arterial pulse waveform (APW) are a means of cardiovascular risk assessment. Optical sensors represent an attractive instrumental solution for APW assessment due to their truly non-contact nature, which makes measurement of the skin surface displacement possible, especially at the carotid artery site. In this work, an automatic method to extract and classify the acquired data of APW signals and noise segments was proposed. Two classifiers were implemented, k-nearest neighbours (KNN) and support vector machine (SVM), and a comparative study was made considering widely used performance metrics. This work represents a broad study of feature creation for APW. A pool of 37 features was extracted and split into different subsets: amplitude features, time-domain statistics, wavelet features, cross-correlation features, and frequency-domain statistics. Support vector machine recursive feature elimination was implemented for feature selection in order to identify the most relevant features. The best result (0.952 accuracy) in discriminating between signals and noise was obtained for the SVM classifier with an optimal feature subset. PMID:26403299
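
    The SVM-RFE step described above can be sketched with scikit-learn; this is a minimal illustration on synthetic data (the 37-feature layout and the 1.5-sigma class shift are stand-ins, not the authors' APW data):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-in for the 37-feature APW pool: only the first five
# features carry class information (signal vs. noise); the rest are noise.
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 37))
X[:, :5] += 1.5 * y[:, None]

# Recursive feature elimination driven by linear-SVM weights.
selector = RFE(SVC(kernel="linear"), n_features_to_select=5).fit(X, y)
selected = np.flatnonzero(selector.support_)

# Cross-validated accuracy of the SVM on the selected subset.
acc = cross_val_score(SVC(kernel="linear"), X[:, selected], y, cv=5).mean()
```

    RFE repeatedly fits the linear SVM and drops the features with the smallest weight magnitudes, which is the mechanism the paper uses to rank its amplitude, wavelet, and statistical features.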

  15. Software Quality Assurance Metrics

    NASA Technical Reports Server (NTRS)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and a degree of excellence and refinement of a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product. The process is measured to improve it, and the product is measured to increase quality throughout the life cycle of software. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects but are not currently being used by the SA team, and to report them to the Software Assurance team to see if any can be implemented in their software assurance life cycle process.

  16. Note on a new class of metrics: touching metrics

    NASA Astrophysics Data System (ADS)

    Starovoitov, Valery V.

    1996-09-01

    A new class of functions is studied. They are generalizations of the little-known `flower-shop distance'. We call them touching functions. Some of them are metrics, i.e. touching metrics (TM). Disks, circles and digital paths based on these metrics are also studied. The distance transform based on TMs is introduced and a scheme for the algorithm is given.

  17. The Kerr metric

    NASA Astrophysics Data System (ADS)

    Teukolsky, Saul A.

    2015-06-01

    This review describes the events leading up to the discovery of the Kerr metric in 1963 and the enormous impact the discovery has had in the subsequent 50 years. The review discusses the Penrose process, the four laws of black hole mechanics, uniqueness of the solution, and the no-hair theorems. It also includes Kerr perturbation theory and its application to black hole stability and quasi-normal modes. The Kerr metric's importance in the astrophysics of quasars and accreting stellar-mass black hole systems is detailed. A theme of the review is the ‘miraculous’ nature of the solution, both in describing in a simple analytic formula the most general rotating black hole, and in having unexpected mathematical properties that make many calculations tractable. Also included is a pedagogical derivation of the solution suitable for a first course in general relativity.
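
    For reference, the solution the review discusses takes its standard form in Boyer-Lindquist coordinates (geometric units G = c = 1, mass M, spin parameter a = J/M):

```latex
ds^2 = -\left(1 - \frac{2Mr}{\Sigma}\right) dt^2
       - \frac{4 M a r \sin^2\theta}{\Sigma}\, dt\, d\phi
       + \frac{\Sigma}{\Delta}\, dr^2 + \Sigma\, d\theta^2
       + \left(r^2 + a^2 + \frac{2 M a^2 r \sin^2\theta}{\Sigma}\right) \sin^2\theta\, d\phi^2,
\qquad
\Sigma = r^2 + a^2 \cos^2\theta, \quad \Delta = r^2 - 2Mr + a^2 .
```

    Setting a = 0 recovers the Schwarzschild metric; the roots of Delta locate the horizons.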

  18. Quality Metrics in Endoscopy

    PubMed Central

    Gurudu, Suryakanth R.

    2013-01-01

    Endoscopy has evolved in the past 4 decades to become an important tool in the diagnosis and management of many digestive diseases. Greater focus on endoscopic quality has highlighted the need to ensure competency among endoscopists. A joint task force of the American College of Gastroenterology and the American Society for Gastrointestinal Endoscopy has proposed several quality metrics to establish competence and help define areas of continuous quality improvement. These metrics represent quality in endoscopy pertinent to pre-, intra-, and postprocedural periods. Quality in endoscopy is a dynamic and multidimensional process that requires continuous monitoring of several indicators and benchmarking with local and national standards. Institutions and practices should have a process in place for credentialing endoscopists and for the assessment of competence regarding individual endoscopic procedures. PMID:24711767

  19. Bibliography on metrication

    NASA Astrophysics Data System (ADS)

    Smith, C. R.; Powel, M. B.

    1990-08-01

    This is a bibliography on metrication, the conversion to the International System of Units (SI), compiled from citations dated from January 1977 through July 1989. Citations include books, conference proceedings, newspapers, periodicals, government and civilian documents and reports. Subject indices for each type of citation and an author index for the entire work are included. A variety of subject categories such as legislation, construction, avionics, consumers, engineering, education, management, standards, agriculture, marketing and many others are available.

  20. Aquatic Acoustic Metrics Interface

    Energy Science and Technology Software Center (ESTSC)

    2012-12-18

    Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software becomes indispensable for processing and analyzing the audio files recorded by these USRs. The new Aquatic Acoustic Metrics Interface Utility Software (AAMI) is specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to the basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the noise sound sample metrics with biological measures such as audiograms of the sensitivity of aquatic animals to the sound, integrating various components into a single analytical frame.

  1. Rotational clutter metric

    NASA Astrophysics Data System (ADS)

    Salem, Salem; Halford, Carl; Moyer, Steve; Gundy, Matthew

    2009-08-01

    A new approach to linear discriminant analysis (LDA), called orthogonal rotational LDA (ORLDA) is presented. Using ORLDA and properly accounting for target size allowed development of a new clutter metric that is based on the Laplacian pyramid (LP) decomposition of clutter images. The new metric achieves correlation exceeding 98% with expert human labeling of clutter levels in a set of 244 infrared images. Our clutter metric is based on the set of weights for the LP levels that best classify images into clutter levels as manually classified by an expert human observer. LDA is applied as a preprocessing step to classification. LDA suffers from a few limitations in this application. Therefore, we propose a new approach to LDA, called ORLDA, using orthonormal geometric rotations. Each rotation brings the LP feature space closer to the LDA solution while retaining orthogonality in the feature space. To understand the effects of target size on clutter, we applied ORLDA at different target sizes. The outputs are easily related because they are functions of orthogonal rotation angles. Finally, we used Bayesian decision theory to learn class boundaries for clutter levels at different target sizes.
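
    The Laplacian-pyramid features underlying the metric can be sketched as per-level band energies. A minimal self-contained version (a 2x2 box filter stands in for the usual Gaussian kernel, and the ORLDA weighting/classification step is omitted; the synthetic images are illustrative, not the infrared set):

```python
import numpy as np

def laplacian_pyramid_energies(img, levels=4):
    """RMS energy per Laplacian-pyramid level; a sketch of the feature
    vector whose weighted combination drives the clutter metric."""
    feats, cur = [], img.astype(float)
    for _ in range(levels):
        # Crude 2x2 box blur + downsample stands in for the Gaussian kernel.
        blurred = 0.25 * (cur[0::2, 0::2] + cur[1::2, 0::2]
                          + cur[0::2, 1::2] + cur[1::2, 1::2])
        up = np.kron(blurred, np.ones((2, 2)))           # nearest-neighbor upsample
        feats.append(np.sqrt(np.mean((cur - up) ** 2)))  # band-pass residual RMS
        cur = blurred
    return np.array(feats)

rng = np.random.default_rng(1)
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))   # benign scene
cluttered = smooth + 0.5 * rng.normal(size=(64, 64))              # high clutter
f_smooth = laplacian_pyramid_energies(smooth)
f_clutter = laplacian_pyramid_energies(cluttered)
```

    High-frequency clutter concentrates energy in the fine pyramid levels; learning the per-level weights (via ORLDA in the paper) turns this feature vector into a scalar clutter score.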

  2. Aquatic Acoustic Metrics Interface

    SciTech Connect

    2012-12-18

    Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software becomes indispensable for processing and analyzing the audio files recorded by these USRs. The new Aquatic Acoustic Metrics Interface Utility Software (AAMI) is specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to the basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the noise sound sample metrics with biological measures such as audiograms of the sensitivity of aquatic animals to the sound, integrating various components into a single analytical frame.

  3. Metrics for Energy Resilience

    SciTech Connect

    Paul E. Roege; Zachary A. Collier; James Mancillas; John A. McDonagh; Igor Linkov

    2014-09-01

    Energy lies at the backbone of any advanced society and constitutes an essential prerequisite for economic growth, social order and national defense. However there is an Achilles heel to today's energy and technology relationship; namely a precarious intimacy between energy and the fiscal, social, and technical systems it supports. Recently, widespread and persistent disruptions in energy systems have highlighted the extent of this dependence and the vulnerability of increasingly optimized systems to changing conditions. Resilience is an emerging concept that offers to reconcile considerations of performance under dynamic environments and across multiple time frames by supplementing traditionally static system performance measures to consider behaviors under changing conditions and complex interactions among physical, information and human domains. This paper identifies metrics useful to implement guidance for energy-related planning, design, investment, and operation. Recommendations are presented using a matrix format to provide a structured and comprehensive framework of metrics relevant to a system's energy resilience. The study synthesizes previously proposed metrics and emergent resilience literature to provide a multi-dimensional model intended for use by leaders and practitioners as they transform our energy posture from one of stasis and reaction to one that is proactive and which fosters sustainable growth.

  4. Performance Metrics for Commercial Buildings

    SciTech Connect

    Fowler, Kimberly M.; Wang, Na; Romero, Rachel L.; Deru, Michael P.

    2010-09-30

    Commercial building owners and operators have requested a standard set of key performance metrics to provide a systematic way to evaluate the performance of their buildings. The performance metrics included in this document provide standard metrics for the energy, water, operations and maintenance, indoor environmental quality, purchasing, waste and recycling, and transportation impact of their building. The metrics can be used for comparative performance analysis between existing buildings and industry standards to clarify the impact of sustainably designed and operated buildings.

  5. A comparative study of the svm and k-nn machine learning algorithms for the diagnosis of respiratory pathologies using pulmonary acoustic signals

    PubMed Central

    2014-01-01

    Background Pulmonary acoustic parameters extracted from recorded respiratory sounds provide valuable information for the detection of respiratory pathologies. The automated analysis of pulmonary acoustic signals can serve as a differential diagnosis tool for medical professionals, a learning tool for medical students, and a self-management tool for patients. In this context, we intend to evaluate and compare the performance of the support vector machine (SVM) and K-nearest neighbour (K-nn) classifiers in diagnosing respiratory pathologies using respiratory sounds from the R.A.L.E database. Results The pulmonary acoustic signals used in this study were obtained from the R.A.L.E lung sound database. The pulmonary acoustic signals were manually categorised into three different groups, namely normal, airway obstruction pathology, and parenchymal pathology. The mel-frequency cepstral coefficient (MFCC) features were extracted from the pre-processed pulmonary acoustic signals. The MFCC features were analysed by one-way ANOVA and then fed separately into the SVM and K-nn classifiers. The performances of the classifiers were analysed using the confusion matrix technique. The statistical analysis of the MFCC features using one-way ANOVA showed that the extracted MFCC features are significantly different (p < 0.001). The classification accuracies of the SVM and K-nn classifiers were found to be 92.19% and 98.26%, respectively. Conclusion Although the data used to train and test the classifiers are limited, the classification accuracies found are satisfactory. The K-nn classifier was better than the SVM classifier for the discrimination of pulmonary acoustic signals from pathological and normal subjects obtained from the RALE database. PMID:24970564
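
    The K-nn side of such a comparison is simple enough to write from scratch. A minimal majority-vote classifier (synthetic 2-D clusters stand in for the three MFCC classes; this is not the R.A.L.E data or the authors' code):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Minimal k-NN: Euclidean distances, majority vote over the k nearest."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]          # indices of k nearest neighbours
    votes = y_train[idx]
    return np.array([np.bincount(v).argmax() for v in votes])

# Three well-separated synthetic clusters standing in for the three
# classes (normal, airway obstruction, parenchymal).
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + rng.normal(scale=0.5, size=(30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)

pred = knn_predict(X, y, centers)   # classify the cluster centers themselves
```

    In the study, the 2-D points are replaced by ANOVA-filtered MFCC vectors; the voting mechanism is unchanged.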

  6. Say "Yes" to Metric Measure.

    ERIC Educational Resources Information Center

    Monroe, Eula Ewing; Nelson, Marvin N.

    2000-01-01

    Provides a brief history of the metric system. Discusses the infrequent use of the metric measurement system in the United States, why conversion from the customary system to the metric system is difficult, and the need for change. (Contains 14 resources.) (ASK)

  7. Metrication, American Style. Fastback 41.

    ERIC Educational Resources Information Center

    Izzi, John

    The purpose of this pamphlet is to provide a starting point of information on the metric system for any concerned or interested reader. The material is organized into five brief chapters: Man and Measurement; Learning the Metric System; Progress Report: Education; Recommended Sources; and Metrication, American Style. Appendixes include an…

  8. Metrication in a global environment

    NASA Technical Reports Server (NTRS)

    Aberg, J.

    1994-01-01

    A brief history of the development of the metric system of measurement is given. The need for the U.S. to implement the 'SI' metric system in international markets, especially in aerospace and general trade, is discussed. Metric implementation efforts and experiences at the local, national, and international levels are also included.

  9. Some References on Metric Information.

    ERIC Educational Resources Information Center

    National Bureau of Standards (DOC), Washington, DC.

    This resource work lists metric information published by the U.S. Government and the American National Standards Institute. Also organizations marketing metric materials for education are given. A short table of conversions is included as is a listing of basic metric facts for everyday living. (LS)

  10. Image Labeling for LIDAR Intensity Image Using K-Nn of Feature Obtained by Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Umemura, Masaki; Hotta, Kazuhiro; Nonaka, Hideki; Oda, Kazuo

    2016-06-01

    We propose an image labeling method for LIDAR intensity images obtained by a Mobile Mapping System (MMS) using K-Nearest Neighbor (KNN) matching of features obtained by a Convolutional Neural Network (CNN). Image labeling assigns labels (e.g., road, cross-walk, and road shoulder) to semantic regions in an image. Since CNNs are effective for various image recognition tasks, we use the features of a Caffenet pre-trained on ImageNet. We use the 4,096-dimensional feature at the fc7 layer of the Caffenet as the descriptor of a region, because the fc7 feature carries information useful for object classification. We extract this feature from regions cropped from images. Since the similarity between features reflects the similarity of the contents of regions, we can select the top K regions cropped from training samples that are most similar to a test region. Since regions in training images have manually annotated ground-truth labels, we vote the labels attached to the top K similar regions onto the test region. The class label with the maximum vote is assigned to each pixel in the test image. In experiments, we use 36 LIDAR intensity images with ground-truth labels, divided into a training set (28 images) and a test set (8 images). We use class-average accuracy and pixel-wise accuracy as evaluation measures. Our method assigned the same label as a human annotator for 97.8% of the pixels in the test LIDAR intensity images.
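
    The retrieval-and-vote step can be sketched compactly. Random vectors below stand in for fc7 descriptors (the class prototypes and noise level are illustrative assumptions, not extracted Caffenet features):

```python
import numpy as np

def knn_label_vote(train_feats, train_labels, query, k=5):
    """Vote the labels of the K most similar training regions
    (cosine similarity), mimicking the fc7-feature retrieval step."""
    a = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = a @ q
    top = np.argsort(sims)[::-1][:k]            # K most similar regions
    return np.bincount(train_labels[top]).argmax()

rng = np.random.default_rng(0)
# One 4096-d prototype per class (0: road, 1: cross-walk, 2: road shoulder).
protos = rng.normal(size=(3, 4096))
train_labels = np.repeat([0, 1, 2], 20)
train_feats = protos[train_labels] + 0.1 * rng.normal(size=(60, 4096))
query = protos[1] + 0.1 * rng.normal(size=4096)   # a "cross-walk" region

label = knn_label_vote(train_feats, train_labels, query)
```

    In the paper the vote is accumulated per pixel over overlapping cropped regions; the per-region retrieval shown here is the core operation.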

  11. Optical metrics and projective equivalence

    SciTech Connect

    Casey, Stephen; Dunajski, Maciej; Gibbons, Gary; Warnick, Claude

    2011-04-15

    Trajectories of light rays in a static spacetime are described by unparametrized geodesics of the Riemannian optical metric associated with the Lorentzian spacetime metric. We investigate the uniqueness of this structure and demonstrate that two different observers, moving relative to one another, who both see the Universe as static may determine the geometry of the light rays differently. More specifically, we classify Lorentzian metrics admitting more than one hyper-surface orthogonal timelike Killing vector and analyze the projective equivalence of the resulting optical metrics. These metrics are shown to be projectively equivalent up to diffeomorphism if the static Killing vectors generate a group SL(2,R), but not projectively equivalent in general. We also consider the cosmological C metrics in Einstein-Maxwell theory and demonstrate that optical metrics corresponding to different values of the cosmological constant are projectively equivalent.
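
    The construction in question can be stated explicitly: for a static spacetime split along the hypersurface-orthogonal timelike Killing vector,

```latex
g = -V^2\, dt^2 + g_{ij}\, dx^i dx^j
\quad\Longrightarrow\quad
h_{ij} = \frac{g_{ij}}{V^2},
```

    and light rays are unparametrized geodesics of the Riemannian (Fermat) metric h_ij. Different static Killing vectors for the same spacetime generally give different h_ij, which is the ambiguity the paper classifies.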

  12. Building a Metric

    NASA Technical Reports Server (NTRS)

    Spencer, Shakira

    2007-01-01

    The Launch Services Program is a Kennedy Space Center-based program responsible for all the roles required to successfully launch Expendable Launch Vehicles. This project was designed to help the Launch Services Program accurately report how successful it has been at launching missions on time, within +/- 2 days of the scheduled launch date, and, when unsuccessful, why. This information is displayed in the form of a metric, which answers these questions in a clear and accurate way.

  13. SI (Metric) handbook

    NASA Technical Reports Server (NTRS)

    Artusa, Elisa A.

    1994-01-01

    This guide provides information for an understanding of SI units, symbols, and prefixes; style and usage in documentation in both the US and in the international business community; conversion techniques; limits, fits, and tolerance data; and drawing and technical writing guidelines. Also provided is information on SI usage for specialized applications like data processing and computer programming, science, engineering, and construction. Related information in the appendixes includes legislative documents, historical and biographical data, a list of metric documentation, rules for determining significant digits and rounding, conversion factors, shorthand notation, and a unit index.

  14. Exploring Metric Symmetry

    SciTech Connect

    Zwart, P.H.; Grosse-Kunstleve, R.W.; Adams, P.D.

    2006-07-31

    Relatively minor perturbations to a crystal structure can in some cases result in apparently large changes in symmetry. Changes in space group or even lattice can be induced by heavy metal or halide soaking (Dauter et al, 2001), flash freezing (Skrzypczak-Jankun et al, 1996), and Se-Met substitution (Poulsen et al, 2001). Relations between various space groups and lattices can provide insight in the underlying structural causes for the symmetry or lattice transformations. Furthermore, these relations can be useful in understanding twinning and how to efficiently solve two different but related crystal structures. Although (pseudo) symmetric properties of a certain combination of unit cell parameters and a space group are immediately obvious (such as a pseudo four-fold axis if a is approximately equal to b in an orthorhombic space group), other relations (e.g. Lehtio, et al, 2005) that are less obvious might be crucial to the understanding and detection of certain idiosyncrasies of experimental data. We have developed a set of tools that allows straightforward exploration of possible metric symmetry relations given unit cell parameters and a space group. The new iotbx.explore_metric_symmetry command produces an overview of the various relations between several possible point groups for a given lattice. Methods for finding relations between a pair of unit cells are also available. The tools described in this newsletter are part of the CCTBX libraries, which are included in the latest (versions July 2006 and up) PHENIX and CCI Apps distributions.

  15. Pure Lovelock Kasner metrics

    NASA Astrophysics Data System (ADS)

    Camanho, Xián O.; Dadhich, Naresh; Molina, Alfred

    2015-09-01

    We study pure Lovelock vacuum and perfect fluid equations for Kasner-type metrics. These equations correspond to a single Nth order Lovelock term in the action in d=2N+1,2N+2 dimensions, and they capture the relevant gravitational dynamics when approaching the big-bang singularity within the Lovelock family of theories. Pure Lovelock gravity also bears out the general feature that vacuum in the critical odd dimension, d=2N+1, is kinematic, i.e. we may define an analogue Lovelock-Riemann tensor that vanishes in vacuum for d=2N+1, yet the Riemann curvature is non-zero. We completely classify isotropic and vacuum Kasner metrics for this class of theories in several isotropy types. The different families can be characterized by means of certain higher-order fourth-rank tensors. We also analyze in detail the space of vacuum solutions for five- and six-dimensional pure Gauss-Bonnet theory. It possesses an interesting and illuminating geometric structure and symmetries that carry over to the general case. We also comment on a closely related family of exponential solutions and on the possibility of solutions with complex Kasner exponents. We show that the latter imply the existence of closed timelike curves in the geometry.
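
    For orientation, the Kasner-type metrics in question generalize the familiar anisotropic form

```latex
ds^2 = -dt^2 + \sum_{i=1}^{d-1} t^{2 p_i}\, dx_i^2 ,
```

    where in Einstein gravity the exponents satisfy the vacuum conditions $\sum_i p_i = \sum_i p_i^2 = 1$; the pure Lovelock field equations replace these two constraints with their Nth-order analogues, which is what drives the classification above.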

  16. Handbook of aircraft noise metrics

    NASA Technical Reports Server (NTRS)

    Bennett, R. L.; Pearsons, K. S.

    1981-01-01

    Information is presented on 22 noise metrics that are associated with the measurement and prediction of the effects of aircraft noise. Some of the instantaneous frequency weighted sound level measures, such as A-weighted sound level, are used to provide multiple assessment of the aircraft noise level. Other multiple event metrics, such as day-night average sound level, were designed to relate sound levels measured over a period of time to subjective responses in an effort to determine compatible land uses and aid in community planning. The various measures are divided into: (1) instantaneous sound level metrics; (2) duration corrected single event metrics; (3) multiple event metrics; and (4) speech communication metrics. The scope of each measure is examined in terms of its: definition, purpose, background, relationship to other measures, calculation method, example, equipment, references, and standards.
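
    One of the multiple-event metrics covered, day-night average sound level (DNL), is simple to compute from 24 hourly equivalent levels. A minimal sketch using the standard definition (energy average with a 10 dB penalty on night hours, 22:00-07:00):

```python
import math

def day_night_level(hourly_levels):
    """Day-night average sound level (DNL) in dB from 24 hourly
    equivalent sound levels; night hours get a 10 dB penalty."""
    assert len(hourly_levels) == 24
    total = 0.0
    for hour, level in enumerate(hourly_levels):
        penalty = 10.0 if (hour >= 22 or hour < 7) else 0.0
        total += 10 ** ((level + penalty) / 10)   # energy-domain sum
    return 10 * math.log10(total / 24)

# A constant 60 dB over the whole day yields a DNL above 60
# because of the night-time penalty (about 66.4 dB).
ldn = day_night_level([60.0] * 24)
```

    The 10 dB night penalty is what ties this multiple-event metric to subjective response: the same physical level is counted as ten times the acoustic energy at night.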

  17. Handbook of aircraft noise metrics

    NASA Astrophysics Data System (ADS)

    Bennett, R. L.; Pearsons, K. S.

    1981-03-01

    Information is presented on 22 noise metrics that are associated with the measurement and prediction of the effects of aircraft noise. Some of the instantaneous frequency weighted sound level measures, such as A-weighted sound level, are used to provide multiple assessment of the aircraft noise level. Other multiple event metrics, such as day-night average sound level, were designed to relate sound levels measured over a period of time to subjective responses in an effort to determine compatible land uses and aid in community planning. The various measures are divided into: (1) instantaneous sound level metrics; (2) duration corrected single event metrics; (3) multiple event metrics; and (4) speech communication metrics. The scope of each measure is examined in terms of its: definition, purpose, background, relationship to other measures, calculation method, example, equipment, references, and standards.

  18. Do-It-Yourself Metrics

    ERIC Educational Resources Information Center

    Klubeck, Martin; Langthorne, Michael; Padgett, Don

    2006-01-01

    Something new is on the horizon, and depending on one's role on campus, it might be storm clouds or a cleansing shower. Either way, no matter how hard one tries to avoid it, sooner rather than later he/she will have to deal with metrics. Metrics do not have to cause fear and resistance. Metrics can, and should, be a powerful tool for improvement.…

  19. The metric system: An introduction

    SciTech Connect

    Lumley, S.M.

    1995-05-01

    On July 13, 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on July 25, 1991, President George Bush signed Executive Order 12770 which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first we examine the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.

  20. The metric system: An introduction

    NASA Astrophysics Data System (ADS)

    Lumley, Susan M.

    On 13 Jul. 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on 25 Jul. 1991, President George Bush signed Executive Order 12770 which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first we examine the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.

  1. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Post, J. V.

    1981-01-01

    Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  2. Implementing the Metric System in Business Occupations. Metric Implementation Guide.

    ERIC Educational Resources Information Center

    Retzer, Kenneth A.; And Others

    Addressed to the business education teacher, this guide is intended to provide appropriate information, viewpoints, and attitudes regarding the metric system and to make suggestions regarding presentation of the material in the classroom. An introductory section on teaching suggestions emphasizes the need for a "think metric" approach made up of…

  3. Improved Adaptive-Reinforcement Learning Control for morphing unmanned air vehicles.

    PubMed

    Valasek, John; Doebbler, James; Tandale, Monish D; Meade, Andrew J

    2008-08-01

    This paper presents an improved Adaptive-Reinforcement Learning Control methodology for the problem of unmanned air vehicle morphing control. The reinforcement learning morphing control function that learns the optimal shape change policy is integrated with an adaptive dynamic inversion control trajectory tracking function. An episodic unsupervised learning simulation using the Q-learning method is developed to replace an earlier and less accurate Actor-Critic algorithm. Sequential Function Approximation, a Galerkin-based scattered data approximation scheme, replaces a K-Nearest Neighbors (KNN) method and is used to generalize the learning from previously experienced quantized states and actions to the continuous state-action space, all of which may not have been experienced before. The improved method showed smaller errors and improved learning of the optimal shape compared to the KNN. PMID:18632393
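
    The episodic Q-learning component referred to in the abstract rests on the standard tabular update rule. A minimal sketch follows; the state and action names and the morphing scenario are illustrative assumptions, not taken from the paper:

```python
def q_learning_update(q, state, action, reward, next_state, actions,
                      alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a)).
    `q` is a dict mapping (state, action) pairs to value estimates."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]

# Hypothetical morphing step: two discrete shape actions, reward for the better one.
q = {}
q_learning_update(q, "shape0", "extend", reward=1.0,
                  next_state="shape1", actions=["extend", "retract"])
```

    The paper's contribution is to generalize such quantized-state estimates to the continuous state-action space with a Galerkin-based approximation rather than the KNN averaging sketched in earlier work.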

  4. Multimetric indices: How many metrics?

    EPA Science Inventory

    Multimetric indices (MMIs) often include 5 to 15 metrics, each representing a different attribute of assemblage condition, such as species diversity, tolerant taxa, and nonnative taxa. Is there an optimal number of metrics for MMIs? To explore this question, I created 1000 9-met...

  5. What About Metric? Revised Edition.

    ERIC Educational Resources Information Center

    Barbrow, Louis E.

    Described are the advantages of using the metric system over the English system. The most common units of both systems are listed and compared. Pictures are used to exhibit use of the metric system in connection with giving prices or sizes of common items. Several examples provide computations of area, total weight of several objects, and volume;…

  6. Inching toward the Metric System.

    ERIC Educational Resources Information Center

    Moore, Randy

    1989-01-01

    Provides an overview and description of the metric system. Discusses the evolution of measurement systems and their early cultures, the beginnings of metric measurement, the history of measurement systems in the United States, the International System of Units, its general style and usage, and supplementary units. (RT)

  7. Metric Activities, Grades K-6.

    ERIC Educational Resources Information Center

    Draper, Bob, Comp.

    This pamphlet presents worksheets for use in fifteen activities or groups of activities designed for teaching the metric system to children in grades K through 6. The approach taken in several of the activities is one of conversion between metric and English units. The majority of the activities concern length, area, volume, and capacity. A…

  8. Metrication: A Guide for Consumers.

    ERIC Educational Resources Information Center

    Consumer and Corporate Affairs Dept., Ottawa (Ontario).

    The widespread use of the metric system by most of the major industrial powers of the world has prompted the Canadian government to investigate and consider use of the system. This booklet was developed to aid the consuming public in Canada in gaining some knowledge of metrication and how its application would affect their present economy.…

  9. Metric Supplement to Technical Drawing.

    ERIC Educational Resources Information Center

    Henschel, Mark

    This manual is intended for use in training persons whose vocations involve technical drawing to use the metric system of measurement. It could be used in a short course designed for that purpose or for individual study. The manual begins with a brief discussion of the rationale for conversion to the metric system. It then provides a…

  10. Conversion to the Metric System

    ERIC Educational Resources Information Center

    Crunkilton, John C.; Lee, Jasper S.

    1974-01-01

    The authors discuss background information about the metric system and explore the effect of metrication of agriculture in areas such as equipment calibration, chemical measurement, and marketing of agricultural products. Suggestions are given for possible leadership roles and approaches that agricultural education might take in converting to the…

  11. Metrics for Soft Goods Merchandising.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in soft goods merchandising, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  12. Metrics for Hard Goods Merchandising.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in hard goods merchandising, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  13. Modestobacter caceresii sp. nov., novel actinobacteria with an insight into their adaptive mechanisms for survival in extreme hyper-arid Atacama Desert soils.

    PubMed

    Busarakam, Kanungnid; Bull, Alan T; Trujillo, Martha E; Riesco, Raul; Sangal, Vartul; van Wezel, Gilles P; Goodfellow, Michael

    2016-06-01

    A polyphasic study was designed to determine the taxonomic provenance of three Modestobacter strains isolated from an extreme hyper-arid Atacama Desert soil. The strains, isolates KNN 45-1a, KNN 45-2b(T) and KNN 45-3b, were shown to have chemotaxonomic and morphological properties in line with their classification in the genus Modestobacter. The isolates had identical 16S rRNA gene sequences and formed a branch in the Modestobacter gene tree that was most closely related to the type strain of Modestobacter marinus (99.6% similarity). All three isolates were distinguished readily from Modestobacter type strains by a broad range of phenotypic properties, by qualitative and quantitative differences in fatty acid profiles and by BOX fingerprint patterns. The whole genome sequence of isolate KNN 45-2b(T) showed 89.3% average nucleotide identity, 90.1% (SD: 10.97%) average amino acid identity and a digital DNA-DNA hybridization value of 42.4±3.1 against the genome sequence of M. marinus DSM 45201(T), values consistent with its assignment to a separate species. On the basis of all of these data, it is proposed that the isolates be assigned to the genus Modestobacter as Modestobacter caceresii sp. nov. with isolate KNN 45-2b(T) (CECT 9023(T)=DSM 101691(T)) as the type strain. Analysis of the whole-genome sequence of M. caceresii KNN 45-2b(T), with 4683 open reading frames and a genome size of ~4.96 Mb, revealed the presence of genes and gene-clusters that encode for properties relevant to its adaptability to harsh environmental conditions prevalent in extreme hyper-arid Atacama Desert soils. PMID:27108251

  14. The Metric System--An Overview.

    ERIC Educational Resources Information Center

    Hovey, Larry; Hovey, Kathi

    1983-01-01

    Sections look at: (1) Historical Perspective; (2) Naming the New System; (3) The Metric Units; (4) Measuring Larger and Smaller Amounts; (5) Advantage of Using the Metric System; (6) Metric Symbols; (7) Conversion from Metric to Customary System; (8) General Hints for Helping Children Understand; and (9) Current Status of Metric Conversion. (MP)

  15. GPS Metric Tracking Unit

    NASA Technical Reports Server (NTRS)

    2008-01-01

    As Global Positioning Satellite (GPS) applications become more prevalent for land- and air-based vehicles, GPS applications for space vehicles will also increase. The Applied Technology Directorate of Kennedy Space Center (KSC) has developed a lightweight, low-cost GPS Metric Tracking Unit (GMTU), the first of two steps in developing a lightweight, low-cost Space-Based Tracking and Command Subsystem (STACS) designed to meet Range Safety's link margin and latency requirements for vehicle command and telemetry data. The goals of STACS are to improve Range Safety operations and expand tracking capabilities for space vehicles. STACS will track the vehicle, receive commands, and send telemetry data through the space-based asset, which will dramatically reduce dependence on ground-based assets. The other step was the Low-Cost Tracking and Data Relay Satellite System (TDRSS) Transceiver (LCT2), developed by the Wallops Flight Facility (WFF), which allows the vehicle to communicate with a geosynchronous relay satellite. Although the GMTU and LCT2 were independently implemented and tested, the design collaboration of KSC and WFF engineers allowed GMTU and LCT2 to be integrated into one enclosure, leading to the final STACS. In operation, GMTU needs only a radio frequency (RF) input from a GPS antenna and outputs position and velocity data to the vehicle through a serial or pulse code modulation (PCM) interface. GMTU includes one commercial GPS receiver board and a custom board, the Command and Telemetry Processor (CTP) developed by KSC. The CTP design is based on a field-programmable gate array (FPGA) with embedded processors to support GPS functions.

  16. Measure for Measure: A Guide to Metrication for Workshop Crafts and Technical Studies.

    ERIC Educational Resources Information Center

    Schools Council, London (England).

    This booklet is designed to help teachers of the industrial arts in Great Britain during the changeover to metric units which is due to be substantially completed during the period 1970-1975. General suggestions are given for adapting equipment in metalwork and engineering and woodwork and technical drawing by adding some metric equipment…

  17. Variable metric conjugate gradient methods

    SciTech Connect

    Barth, T.; Manteuffel, T.

    1994-07-01

    1.1 Motivation. In this paper we present a framework that includes many well known iterative methods for the solution of nonsymmetric linear systems of equations, Ax = b. Section 2 begins with a brief review of the conjugate gradient method. Next, we describe a broader class of methods, known as projection methods, to which the conjugate gradient (CG) method and most conjugate gradient-like methods belong. The concept of a method having either a fixed or a variable metric is introduced. Methods that have a metric are referred to as either fixed or variable metric methods. Some relationships between projection methods and fixed (variable) metric methods are discussed. The main emphasis of the remainder of this paper is on variable metric methods. In Section 3 we show how the biconjugate gradient (BCG), and the quasi-minimal residual (QMR) methods fit into this framework as variable metric methods. By modifying the underlying Lanczos biorthogonalization process used in the implementation of BCG and QMR, we obtain other variable metric methods. These, we refer to as generalizations of BCG and QMR.

  18. A Dynamic Testing Complexity Metric

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey

    1991-01-01

    This paper introduces a dynamic metric that is based on the estimated ability of a program to withstand the effects of injected "semantic mutants" during execution by computing the same function as if the semantic mutants had not been injected. Semantic mutants include: (1) syntactic mutants injected into an executing program and (2) randomly selected values injected into an executing program's internal states. The metric is a function of a program, the method used for injecting these two types of mutants, and the program's input distribution; this metric is found through dynamic executions of the program. A program's ability to withstand the effects of injected semantic mutants by computing the same function when executed is then used as a tool for predicting the difficulty that will be incurred during random testing to reveal the existence of faults, i.e., the metric suggests the likelihood that a program will expose the existence of faults during random testing assuming faults were to exist. If the metric is applied to a module rather than to a program, the metric can be used to guide the allocation of testing resources among a program's modules. In this manner the metric acts as a white-box testing tool for determining where to concentrate testing resources. Index Terms: Revealing ability, random testing, input distribution, program, fault, failure.
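
    The core idea, estimating by dynamic execution how often an injected semantic mutant changes a program's output, can be sketched as follows. The helper names and the toy programs are hypothetical stand-ins, not the paper's method in detail:

```python
import random

def revealing_rate(program, mutated_program, input_sampler, trials=2000, seed=0):
    """Estimate the probability, over random inputs, that an injected
    semantic mutant changes the program's output. Low rates suggest that
    faults at that location would tend to hide during random testing."""
    rng = random.Random(seed)
    changed = 0
    for _ in range(trials):
        x = input_sampler(rng)
        if program(x) != mutated_program(x):
            changed += 1
    return changed / trials

clamp = lambda x: max(x, 0)   # original module
mutant = lambda x: abs(x)     # injected semantic mutant: differs only for x < 0
rate = revealing_rate(clamp, mutant, lambda r: r.randint(-10, 10))
```

    Here the mutant's effect propagates to the output only on negative inputs, so the estimated rate reflects the input distribution as well as the mutant itself, which is exactly the metric's point.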

  19. Double metric, generalized metric, and α'-deformed double field theory

    NASA Astrophysics Data System (ADS)

    Hohm, Olaf; Zwiebach, Barton

    2016-03-01

    We relate the unconstrained "double metric" of the "α'-geometry" formulation of double field theory to the constrained generalized metric encoding the spacetime metric and b-field. This is achieved by integrating out auxiliary field components of the double metric in an iterative procedure that induces an infinite number of higher-derivative corrections. As an application, we prove that, to first order in α' and to all orders in fields, the deformed gauge transformations are Green-Schwarz-deformed diffeomorphisms. We also prove that to first order in α' the spacetime action encodes precisely the Green-Schwarz deformation with Chern-Simons forms based on the torsionless gravitational connection. This seems to be in tension with suggestions in the literature that T-duality requires a torsionful connection, but we explain that these assertions are ambiguous since actions that use different connections are related by field redefinitions.

  20. Predicting persistence in the sediment compartment with a new automatic software based on the k-Nearest Neighbor (k-NN) algorithm.

    PubMed

    Manganaro, Alberto; Pizzo, Fabiola; Lombardo, Anna; Pogliaghi, Alberto; Benfenati, Emilio

    2016-02-01

    The ability of a substance to resist degradation and persist in the environment needs to be readily identified in order to protect the environment and human health. Many regulations require the assessment of persistence for substances commonly manufactured and marketed. Besides laboratory-based testing methods, in silico tools may be used to obtain a computational prediction of persistence. We present a new program to develop k-Nearest Neighbor (k-NN) models. The k-NN algorithm is a similarity-based approach that predicts the property of a substance in relation to the experimental data for its most similar compounds. We employed this software to identify persistence in the sediment compartment. Data on half-life (HL) in sediment were obtained from different sources and, after careful data pruning, the final dataset, containing 297 organic compounds, was divided into four experimental classes. We developed several models giving satisfactory performances, with both training and test set accuracy ranging between 0.90 and 0.96. We finally selected one model which will be made available in the near future in the freely available software platform VEGA. This model offers a valuable in silico tool that may be very useful for fast and inexpensive screening. PMID:26517391
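
    The k-NN algorithm described is the textbook similarity vote. A minimal sketch, with toy descriptor vectors and class labels standing in for the half-life dataset (all data here is illustrative only):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority class among the k nearest training points
    (Euclidean distance between descriptor vectors).
    `train` is a list of (descriptor_vector, class_label) pairs."""
    neighbors = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy descriptors standing in for the four experimental half-life classes.
train = [((0.1, 0.2), "low-HL"), ((0.2, 0.1), "low-HL"),
         ((0.9, 0.8), "high-HL"), ((0.8, 0.9), "high-HL")]
print(knn_predict(train, (0.15, 0.15)))  # prints low-HL
```

    Real chemoinformatics k-NN models replace Euclidean distance on raw coordinates with a molecular similarity measure over computed descriptors, but the voting structure is the same.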

  1. Daylight metrics and energy savings

    SciTech Connect

    Mardaljevic, John; Heschong, Lisa; Lee, Eleanor

    2009-12-31

    The drive towards sustainable, low-energy buildings has increased the need for simple, yet accurate methods to evaluate whether a daylit building meets minimum standards for energy and human comfort performance. Current metrics do not account for the temporal and spatial aspects of daylight, nor of occupants' comfort or interventions. This paper reviews the historical basis of current compliance methods for achieving daylit buildings, proposes a technical basis for development of better metrics, and provides two case study examples to stimulate dialogue on how metrics can be applied in a practical, real-world context.

  2. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
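
    The sensitivity at issue in the TPC-D debate is easy to demonstrate with a toy example (the numbers below are illustrative, not benchmark data): speeding up a single query pulls the geometric mean down far more than the arithmetic mean.

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp of the mean log; equivalent to the n-th root of the product.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Ten queries at 10 s each; speed one of them up 100x.
before = [10.0] * 10
after = [0.1] + [10.0] * 9

print(round(arithmetic_mean(after), 2))  # 9.01 (barely moved from 10.0)
print(round(geometric_mean(after), 2))   # 6.31 (pulled down sharply by one fast query)
```

    A vendor could thus improve a geometric-mean score dramatically by optimizing a single query, which is one reason the arithmetic mean is argued here to be the better measure for a decision-support benchmark.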

  3. Truss Performance and Packaging Metrics

    NASA Technical Reports Server (NTRS)

    Mikulas, Martin M.; Collins, Timothy J.; Doggett, William; Dorsey, John; Watson, Judith

    2006-01-01

    In the present paper a set of performance metrics are derived from first principles to assess the efficiency of competing space truss structural concepts in terms of mass, stiffness, and strength, for designs that are constrained by packaging. The use of these performance metrics provides unique insight into the primary drivers for lowering structural mass and packaging volume as well as enabling quantitative concept performance evaluation and comparison. To demonstrate the use of these performance metrics, data for existing structural concepts are plotted and discussed. Structural performance data is presented for various mechanical deployable concepts, for erectable structures, and for rigidizable structures.

  4. Converting Residential Drawing Courses to Metric.

    ERIC Educational Resources Information Center

    Goetsch, David L.

    1980-01-01

    Describes the process of metric conversion in residential drafting courses. Areas of concern are metric paper sizes; metric scale; plot, foundation, floor and electric plans; wall sections; elevations; and heat loss/ heat gain calculations. (SK)

  5. Using principal component analysis for selecting network behavioral anomaly metrics

    NASA Astrophysics Data System (ADS)

    Gregorio-de Souza, Ian; Berk, Vincent; Barsamian, Alex

    2010-04-01

    This work addresses new approaches to behavioral analysis of networks and hosts for the purposes of security monitoring and anomaly detection. Most commonly used approaches simply implement anomaly detectors for one, or a few, simple metrics and those metrics can exhibit unacceptable false alarm rates. For instance, the anomaly score of network communication is defined as the reciprocal of the likelihood that a given host uses a particular protocol (or destination); this definition may result in an unrealistically high threshold for alerting to avoid being flooded by false positives. We demonstrate that selecting and adapting the metrics and thresholds, on a host-by-host or protocol-by-protocol basis can be done by established multivariate analyses such as PCA. We will show how to determine one or more metrics, for each network host, that records the highest available amount of information regarding the baseline behavior, and shows relevant deviances reliably. We describe the methodology used to pick from a large selection of available metrics, and illustrate a method for comparing the resulting classifiers. Using our approach we are able to reduce the resources required to properly identify misbehaving hosts, protocols, or networks, by dedicating system resources to only those metrics that actually matter in detecting network deviations.
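
    A minimal sketch of the PCA-based selection idea: rank candidate metrics by the magnitude of their loading on the first principal component of the observation matrix. The metric names and synthetic data below are hypothetical, and the paper's actual methodology is richer than this.

```python
import numpy as np

def rank_metrics_by_pc1(samples, metric_names):
    """Rank candidate metrics by the magnitude of their loading on the
    first principal component of the standardized observation matrix.
    `samples` has shape (n_observations, n_metrics)."""
    X = (samples - samples.mean(axis=0)) / samples.std(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc1 = eigvecs[:, np.argmax(eigvals)]       # dominant direction of variation
    order = np.argsort(-np.abs(pc1))
    return [(metric_names[i], float(abs(pc1[i]))) for i in order]

# Synthetic traffic metrics: two share a common baseline signal, one is pure noise.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
samples = np.hstack([base + 0.1 * rng.normal(size=(200, 1)),
                     base + 0.1 * rng.normal(size=(200, 1)),
                     rng.normal(size=(200, 1))])
ranking = rank_metrics_by_pc1(samples, ["bytes_per_conn", "conn_rate", "noise_metric"])
```

    The noise-only metric loads weakly on the first component and ranks last, which is the behavior one wants when budgeting detector resources to the metrics that carry baseline information.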

  6. Metrics for Linear Kinematic Features in Sea Ice

    NASA Technical Reports Server (NTRS)

    Levy, G.; Coon, M.; Sulsky, D.

    2006-01-01

    The treatment of leads as cracks or discontinuities (see Coon et al. presentation) requires some shift in the procedure of evaluation and comparison of lead-resolving models and their validation against observations. Common metrics used to evaluate ice model skills are by and large an adaptation of a least square "metric" adopted from operational numerical weather prediction data assimilation systems and are most appropriate for continuous fields and Eilerian systems where the observations and predictions are commensurate. However, this class of metrics suffers from some flaws in areas of sharp gradients and discontinuities (e.g., leads) and when Lagrangian treatments are more natural. After a brief review of these metrics and their performance in areas of sharp gradients, we present two new metrics specifically designed to measure model accuracy in representing linear features (e.g., leads). The indices developed circumvent the requirement that both the observations and model variables be commensurate (i.e., measured with the same units) by considering the frequencies of the features of interest/importance. We illustrate the metrics by scoring several hypothetical "simulated" discontinuity fields against the lead interpreted from RGPS observations.

  7. Towards a Visual Quality Metric for Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1998-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.

  8. Let's Make Metric Ice Cream

    ERIC Educational Resources Information Center

    Zimmerman, Marianna

    1975-01-01

    Describes a classroom activity which involved sixth grade students in a learning situation including making ice cream, safety procedures in a science laboratory, calibrating a thermometer, using metric units of volume and mass. (EB)

  9. Mining metrics for buried treasure

    NASA Astrophysics Data System (ADS)

    Konkowski, D. A.; Helliwell, T. M.

    2006-06-01

    The same but different: That might describe two metrics. On the surface CLASSI may show two metrics are locally equivalent, but buried beneath may be a wealth of further structure. This was beautifully described in a paper by Malcolm MacCallum in 1998. Here I will illustrate the effect with two flat metrics — one describing ordinary Minkowski spacetime and the other describing a three-parameter family of Gal'tsov-Letelier-Tod spacetimes. I will dig out the beautiful hidden classical singularity structure of the latter (a structure first noticed by Tod in 1994) and then show how quantum considerations can illuminate the riches. I will then discuss how quantum structure can help us understand classical singularities and metric parameters in a variety of exact solutions mined from the Exact Solutions book.

  10. Spacetime metric from linear electrodynamics

    NASA Astrophysics Data System (ADS)

    Obukhov, Yuri N.; Hehl, Friedrich W.

    1999-07-01

    The Maxwell equations are formulated on an arbitrary (1+3)-dimensional manifold. Then, imposing a (constrained) linear constitutive relation between electromagnetic field (E,B) and excitation (D,ℌ), we derive the metric of spacetime therefrom.

  11. Using TRACI for Sustainability Metrics

    EPA Science Inventory

    TRACI, the Tool for the Reduction and Assessment of Chemical and other environmental Impacts, has been developed for sustainability metrics, life cycle impact assessment, and product and process design impact assessment for developing increasingly sustainable products, processes,...

  12. Coverage Metrics for Model Checking

    NASA Technical Reports Server (NTRS)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.

  13. Metrics with Galilean conformal isometry

    SciTech Connect

    Bagchi, Arjun; Kundu, Arnab

    2011-03-15

    The Galilean conformal algebra (GCA) arises in taking the nonrelativistic limit of the symmetries of a relativistic conformal field theory in any dimensions. It is known to be infinite dimensional in all spacetime dimensions. In particular, the 2d GCA emerges out of a scaling limit of linear combinations of two copies of the Virasoro algebra. In this paper, we find metrics in dimensions greater than 2 which realize the finite 2d GCA (the global part of the infinite algebra) as their isometry by systematically looking at a construction in terms of cosets of this finite algebra. We list all possible subalgebras consistent with some physical considerations motivated by earlier work in this direction and construct all possible higher-dimensional nondegenerate metrics. We briefly study the properties of the metrics obtained. In the standard one higher-dimensional "holographic" setting, we find that the only nondegenerate metric is Minkowskian. In four and five dimensions, we find families of nontrivial metrics with a rather exotic signature. A curious feature of these metrics is that all but one of them are Ricci-scalar flat.

  14. A wavelet contrast metric for the targeting task performance metric

    NASA Astrophysics Data System (ADS)

    Preece, Bradley L.; Flug, Eric A.

    2016-05-01

    Target acquisition performance depends strongly on the contrast of the target. The Targeting Task Performance (TTP) metric, within the Night Vision Integrated Performance Model (NV-IPM), uses a combination of resolution, signal-to-noise ratio (SNR), and contrast to predict and model system performance. While the dependence on resolution and SNR are well defined and understood, defining a robust and versatile contrast metric for a wide variety of acquisition tasks is more difficult. In this correspondence, a wavelet contrast metric (WCM) is developed under the assumption that the human eye processes spatial differences in a manner similar to a wavelet transform. The amount of perceivable information, or useful wavelet coefficients, is used to predict the total viewable contrast to the human eye. The WCM is intended to better match the measured performance of the human vision system for high-contrast, low-contrast, and low-observable targets. After further validation, the new contrast metric can be incorporated using a modified TTP metric into the latest Army target acquisition software suite, the NV-IPM.
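
    As a crude stand-in for the idea of counting "useful wavelet coefficients" (the abstract does not specify the WCM's actual construction, so everything below is an illustrative assumption), one might take a single-level Haar decomposition and report the fraction of detail coefficients above a visibility threshold:

```python
import numpy as np

def wavelet_contrast(image, threshold=2.0):
    """Fraction of single-level Haar detail coefficients whose magnitude
    exceeds a (hypothetical) visibility threshold; a crude proxy for the
    amount of perceivable contrast information in the image."""
    img = np.asarray(image, dtype=float)            # height and width must be even
    avg_rows = (img[0::2, :] + img[1::2, :]) / 2
    detail_v = (img[0::2, :] - img[1::2, :]) / 2    # vertical detail band
    detail_h = (avg_rows[:, 0::2] - avg_rows[:, 1::2]) / 2  # horizontal detail band
    details = np.concatenate([detail_v.ravel(), detail_h.ravel()])
    return float(np.mean(np.abs(details) > threshold))

flat = np.zeros((8, 8))                 # no contrast at all
target = np.zeros((8, 8))
target[2:5, 2:5] = 50.0                 # a small bright target on a dark background
print(wavelet_contrast(flat), wavelet_contrast(target))
```

    A uniform scene yields zero, while a low-observable target contributes a few above-threshold coefficients, which is the qualitative behavior a contrast metric for the TTP model needs.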

  15. Symbolic planning with metric time

    NASA Astrophysics Data System (ADS)

    MacMillan, T. R.

    1992-03-01

    Most AI planning systems have considered time in a qualitative way only. For example, a plan may require one action to come 'before' another. Metric time enables AI planners to represent action durations and reason over quantitative temporal constraints such as windows of opportunity. This paper presents preliminary results observed while developing a theory of multi-agent adversarial planning for battle management research. Quantitative temporal reasoning seems essential in this domain. For example, Orange may plan to block Blue's attack by seizing a river ford which Blue must cross, but only if Orange can get there during the window of opportunity while Blue is approaching the ford but has not yet arrived. In nonadversarial multi-agent planning, metric time enables planners to detect windows of opportunity for agents to help or hinder each other. In single-agent planning, metric time enables planners to reason about deadlines, temporally constrained resource availability, and asynchronous processes which the agent can initiate and monitor. Perhaps surprisingly, metric time increases the computational complexity of planning less than might be expected, because it reduces the computational complexity of modal truth criteria. To make this observation precise, we review Chapman's analysis of modal truth criteria and describe a tractable heuristic criterion, 'worst case necessarily true.' Deciding if a proposition is worst case necessarily true, in a single-agent plan with n steps, requires O(n) computations only if qualitative temporal information is used. We show how it can be decided in O(log n) using metric time.

  16. Metrics for measuring net-centric data strategy implementation

    NASA Astrophysics Data System (ADS)

    Kroculick, Joseph B.

    2010-04-01

    An enterprise data strategy outlines an organization's vision and objectives for improved collection and use of data. We propose generic metrics and quantifiable measures for each of the DoD Net-Centric Data Strategy (NCDS) data goals. Data strategy metrics can be adapted to the business processes of an enterprise and the needs of stakeholders in leveraging the organization's data assets to provide for more effective decision making. Generic metrics are applied to a specific application where logistics supply and transportation data is integrated across multiple functional groups. A dashboard presents a multidimensional view of the current progress toward a state where logistics data is shared in a timely and seamless manner among users, applications, and systems.

  17. Eye Tracking Metrics for Workload Estimation in Flight Deck Operation

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle; Schnell, Thomas

    2010-01-01

    Flight decks of the future are being enhanced through improved avionics that adapt to both aircraft and operator state. Eye tracking allows for non-invasive analysis of pilot eye movements, from which a set of metrics can be derived to effectively and reliably characterize workload. This research identifies eye tracking metrics that correlate to aircraft automation conditions, and identifies the correlation of pilot workload to the same automation conditions. Saccade length was used as an indirect index of pilot workload: pilots in the fully automated condition were observed to have, on average, larger saccadic movements than in the guidance and manual flight conditions. The data set also provides a general model of human eye movement behavior, and thus, ostensibly, of visual attention distribution in the cockpit for approach-to-land tasks with various levels of automation, by means of the same metrics used for workload algorithm development.

  18. How Soon Will We Measure in Metric?

    ERIC Educational Resources Information Center

    Weaver, Kenneth F.

    1977-01-01

    A brief history of measurement systems beginning with the Egyptians and Babylonians is given, ending with a discussion of the metric system and its adoption by the United States. Tables of metric prefixes, metric units, and common metric conversions are included. (MN)

  19. Metrics for Occupations. Information Series No. 118.

    ERIC Educational Resources Information Center

    Peterson, John C.

    The metric system is discussed in this information analysis paper with regard to its history, a rationale for the United States' adoption of the metric system, a brief overview of the basic units of the metric system, examples of how the metric system will be used in different occupations, and recommendations for research and development. The…

  20. Synthesis Array Topology Metrics in Location Characterization

    NASA Astrophysics Data System (ADS)

    Shanmugha Sundaram, GA

    2015-08-01

    Towards addressing some of the fundamental mysteries in physics at the micro- and macro-cosm level that form the Key Science Projects (KSPs) for the Square Kilometre Array (SKA), such as Probing the Dark Ages and the Epoch of Reionization in the course of an Evolving Universe; Galaxy Evolution, Cosmology, and Dark Energy; and the Origin and Evolution of Cosmic Magnetism, a suitable interfacing of these goals has to be achieved with an optimally designed array configuration, by means of a critical evaluation of the radio imaging capabilities and metrics. Of the two forerunner sites, viz. Australia and South Africa, where pioneering advancements to the state of the art in synthesis-array radio astronomy instrumentation are being attempted in the form of pathfinders to the SKA, for its eventual deployment, a diversity of site-dependent topology and design metrics exists. Here, the discussion involves those KSPs that relate to galactic morphology and evolution, and explores their suitability as a scientific research goal from the perspective of the location-driven instrument design specification. Relative merits and adaptability with regard to either site are presented by invoking well-founded and established array-design and optimization principles built into a customized software tool.

  1. Metrics correlation and analysis service (MCAS)

    SciTech Connect

    Baranovski, Andrew; Dykstra, Dave; Garzoglio, Gabriele; Hesselroth, Ted; Mhashilkar, Parag; Levshina, Tanya; /Fermilab

    2009-05-01

    The complexity of Grid workflow activities and their associated software stacks inevitably involves multiple organizations, ownership, and deployment domains. In this setting, important and common tasks such as the correlation and display of metrics and debugging information (fundamental ingredients of troubleshooting) are challenged by the informational entropy inherent to independently maintained and operated software components. Because such an information 'pond' is disorganized, it is a difficult environment for business intelligence analysis, i.e., troubleshooting, incident investigation, and trend spotting. The mission of the MCAS project is to deliver a software solution to help with the adaptation, retrieval, correlation, and display of workflow-driven data and of type-agnostic events generated by disjoint middleware.

  2. Colonoscopy Quality: Metrics and Implementation

    PubMed Central

    Calderwood, Audrey H.; Jacobson, Brian C.

    2013-01-01

    Synopsis Colonoscopy is an excellent area for quality improvement because it is high volume, has significant associated risk and expense, and there is evidence that variability in its performance affects outcomes. The best endpoint for validation of quality metrics in colonoscopy is colorectal cancer incidence and mortality, but because of feasibility issues, a more readily accessible metric is the adenoma detection rate (ADR). Fourteen quality metrics were proposed by the joint American Society of Gastrointestinal Endoscopy/American College of Gastroenterology Task Force on “Quality Indicators for Colonoscopy” in 2006, which are described in further detail below. Use of electronic health records and quality-oriented registries will facilitate quality measurement and reporting. Unlike traditional clinical research, implementation of quality improvement initiatives involves rapid assessments and changes on an iterative basis, and can be done at the individual, group, or facility level. PMID:23931862
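
    The adenoma detection rate cited above reduces to a simple proportion. As a hedged sketch (the per-exam encoding here is hypothetical; real quality registries record far more detail than an adenoma count):

```python
def adenoma_detection_rate(exams):
    """ADR: fraction of screening colonoscopies finding >= 1 adenoma.

    `exams` is a list of adenoma counts, one entry per screening
    colonoscopy (an illustrative encoding, not a registry schema).
    """
    if not exams:
        raise ValueError("no exams recorded")
    return sum(1 for n in exams if n >= 1) / len(exams)
```
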

  3. Status report on aerospace metrication

    NASA Astrophysics Data System (ADS)

    Peterson, A.

    1983-10-01

    Following passage of PL 94-168, the transition to the use of metric units within the United States has been very slow. The lack of a national plan, and the absence of any clear understanding or agreement concerning the source of the funds required to effect the transition, have been major impediments. There are international pressures from the international standards-making organizations, the ICAO and NATO, to proceed with the unification of standards. Within the commercial aviation field, the United States is resisting the pressure, but is participating in international standardization activities because of a general recognition of the need to support future market requirements and to assist in the resolution of a significant NATO logistics problem. The AIA, with the SAE and other standards-making organizations, is attempting to secure a harmonization of United States and AECMA metric standards. The future transition progress is not expected to accelerate significantly until metric standards are available.

  4. Requirement Metrics for Risk Identification

    NASA Technical Reports Server (NTRS)

    Hammer, Theodore; Huffman, Lenore; Wilson, William; Rosenberg, Linda; Hyatt, Lawrence

    1996-01-01

    The Software Assurance Technology Center (SATC) is part of the Office of Mission Assurance of the Goddard Space Flight Center (GSFC). The SATC's mission is to assist National Aeronautics and Space Administration (NASA) projects to improve the quality of software which they acquire or develop. The SATC's efforts are currently focused on the development and use of metric methodologies and tools that identify and assess risks associated with software performance and scheduled delivery. This starts at the requirements phase, where the SATC, in conjunction with software projects at GSFC and other NASA centers, is working to identify tools and metric methodologies to assist project managers in identifying and mitigating risks. This paper discusses requirement metrics currently being used at NASA in a collaborative effort between the SATC and the Quality Assurance Office at GSFC to utilize the information available through the application of requirements management tools.

  5. Distribution Metrics and Image Segmentation

    PubMed Central

    Georgiou, Tryphon; Michailovich, Oleg; Rathi, Yogesh; Malcolm, James; Tannenbaum, Allen

    2007-01-01

    The purpose of this paper is to describe certain alternative metrics for quantifying distances between distributions, and to explain their use and relevance in visual tracking. Besides the theoretical interest, such metrics may be used to design filters for image segmentation, that is, for solving the key visual task of separating an object from the background in an image. The segmenting curve is represented as the zero level set of a signed distance function. Most existing methods in the geometric active contour framework perform segmentation by maximizing the separation of intensity moments between the interior and the exterior of an evolving contour. Here one can use the given distributional metric to determine a flow which minimizes changes in the distribution inside and outside the curve. PMID:18769529
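
    As a hedged illustration of the kind of distributional distance the record above refers to, here is the Bhattacharyya distance between intensity histograms inside and outside a contour (an illustrative choice; the paper develops its own family of metrics for segmentation flows, and the histograms below are synthetic):

```python
import numpy as np

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance between two discrete distributions:
    -log sum_i sqrt(p_i * q_i), after normalizing p and q."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient in [0, 1]
    return -np.log(max(bc, eps))

# Synthetic intensity histograms for the interior and exterior regions.
inside = np.histogram(np.random.default_rng(0).normal(0.3, 0.05, 1000),
                      bins=32, range=(0, 1))[0]
outside = np.histogram(np.random.default_rng(1).normal(0.7, 0.05, 1000),
                       bins=32, range=(0, 1))[0]
```

    A segmentation flow in this spirit would evolve the curve to keep the inside and outside distributions well separated, i.e., to keep this distance large.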

  6. Rainbow metric from quantum gravity

    NASA Astrophysics Data System (ADS)

    Assanioussi, Mehdi; Dapor, Andrea; Lewandowski, Jerzy

    2015-12-01

    In this Letter, we describe a general mechanism for emergence of a rainbow metric from a quantum cosmological model. This idea is based on QFT on a quantum spacetime. Under general assumptions, we discover that the quantum spacetime on which the field propagates can be replaced by a classical spacetime, whose metric depends explicitly on the energy of the field: as shown by an analysis of dispersion relations, quanta of different energy propagate on different metrics, similar to photons in a refractive material (hence the name "rainbow" used in the literature). In deriving this result, we do not consider any specific theory of quantum gravity: the qualitative behaviour of high-energy particles on quantum spacetime relies only on the assumption that the quantum spacetime is described by a wave-function Ψo in a Hilbert space HG.

  7. The flexibility of optical metrics

    NASA Astrophysics Data System (ADS)

    Bittencourt, Eduardo; Pereira, Jonas P.; Smolyaninov, Igor I.; Smolyaninova, Vera N.

    2016-08-01

    We first revisit the importance, naturalness and limitations of the so-called optical metrics for describing the propagation of light rays in the limit of geometric optics. We then exemplify their flexibility and nontriviality in some nonlinear material media and in the context of nonlinear theories of electromagnetism, both in the presence of curved backgrounds, where optical metrics could be flat and inaccessible regions for the propagation of photons could be conceived, respectively. Finally, we underline and discuss the relevance and potential applications of our analyses in a broad sense, ranging from material media to compact astrophysical systems.

  8. Thermodynamic Metrics and Optimal Paths

    SciTech Connect

    Sivak, David; Crooks, Gavin

    2012-05-08

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
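
    The linear-response picture summarized above can be sketched in formulas (notation assumed here, following common usage: λ the vector of control parameters, X their conjugate forces, β the inverse temperature, τ the protocol duration; the record itself states only the qualitative result):

```latex
% Friction tensor from force autocorrelations at fixed control parameters:
\zeta_{ij}(\boldsymbol{\lambda}) \;=\; \beta \int_0^{\infty}
  \bigl\langle \delta X_i(0)\,\delta X_j(t) \bigr\rangle_{\boldsymbol{\lambda}}\, dt

% Near equilibrium, the excess (dissipated) work of a finite-time protocol:
\langle W_{\mathrm{ex}} \rangle \;\approx\; \int_0^{\tau}
  \dot{\boldsymbol{\lambda}}^{\mathsf T}\,\zeta(\boldsymbol{\lambda})\,
  \dot{\boldsymbol{\lambda}}\; dt

% The induced thermodynamic length; optimal protocols follow geodesics of zeta:
\mathcal{L} \;=\; \int_0^{\tau}
  \sqrt{\dot{\boldsymbol{\lambda}}^{\mathsf T}\,\zeta(\boldsymbol{\lambda})\,
        \dot{\boldsymbol{\lambda}}}\; dt
```
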

  9. Metrics for the comparative evaluation of chemical plume identification algorithms

    NASA Astrophysics Data System (ADS)

    Truslow, E.; Golowich, S.; Manolakis, D.; Ingle, V. K.

    2015-05-01

    The detection of chemical agents with hyperspectral longwave infrared sensors is a difficult problem with many civilian and military applications. System performance can be evaluated by comparing the detected gases in each pixel with the ground truth for each pixel using a confusion matrix. In the presence of chemical mixtures, the confusion matrix becomes extremely large and difficult to interpret. We propose summarizing the confusion matrix using simple scalar metrics tailored for specific applications. Ideally, an identifier should determine exactly which chemicals are in each pixel, but in many applications it is acceptable for the output to contain additional chemicals or lack some constituent chemicals. A performance metric for identification problems should give partially correct results a lower weight than completely correct results. The metric we propose using, the Dice metric, weights each output by its similarity with the truth for each pixel, thereby giving less importance to partially correct outputs, while still giving full scores only to exactly correct results. Using the Dice metric we evaluated the performance of two identification algorithms: an adaptive cosine estimator (ACE) detector bank approach, and Bayesian model averaging (BMA). Both algorithms were tested individually on real background data with synthetically embedded plumes; performance was evaluated using standard detection performance metrics, and then using the proposed identification metric. We show that ACE performed well as a detector but poorly as an identifier; however, BMA performed poorly as a detector but well as an identifier. Cascading the two algorithms should lead to a system with a substantially lower false alarm rate than using BMA alone, and much better identification performance than the ACE detector bank alone.
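
    The Dice weighting described above can be sketched as a per-pixel set overlap score (the chemical names in the usage example are placeholders, not from the paper):

```python
def dice_score(predicted, truth):
    """Dice similarity between the predicted and true chemical sets
    for one pixel: 2|P n T| / (|P| + |T|).  Exactly correct outputs
    score 1, partially correct outputs score less, disjoint outputs 0."""
    p, t = set(predicted), set(truth)
    if not p and not t:
        return 1.0  # both empty: trivially correct
    return 2 * len(p & t) / (len(p) + len(t))
```

    For example, reporting one of two true constituents scores 2/3, so a partially correct identification is rewarded but never as highly as an exact match.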

  10. Trajectory Planning by Preserving Flexibility: Metrics and Analysis

    NASA Technical Reports Server (NTRS)

    Idris, Husni R.; El-Wakil, Tarek; Wing, David J.

    2008-01-01

    In order to support traffic management functions, such as mitigating traffic complexity, ground and airborne systems may benefit from preserving or optimizing trajectory flexibility. To help support this hypothesis, trajectory flexibility metrics have been defined in previous work to represent the trajectory's robustness and adaptability to the risk of violating safety and traffic management constraints. In this paper these metrics are instantiated in the case of planning a trajectory with the heading degree of freedom. A metric estimation method is presented based on simplifying assumptions, namely discrete time and heading maneuvers. A case is analyzed to demonstrate the estimation method and its use in trajectory planning in a situation involving meeting a time constraint and avoiding loss of separation with nearby traffic. The case involves comparing path-stretch trajectories, in terms of adaptability and robustness along each, deduced from a map of estimated flexibility metrics over the solution space. The case demonstrated anecdotally that preserving flexibility may help mitigate certain factors that contribute to traffic complexity, namely by reducing proximity and confrontation.

  11. Maximum margin metric learning based target detection for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Dong, Yanni; Zhang, Liangpei; Zhang, Lefei; Du, Bo

    2015-10-01

    Target detection is one of the most important problems in hyperspectral image (HSI) processing. However, the classical algorithms depend on specific statistical hypothesis tests, and may only perform well under certain conditions; e.g., the adaptive matched subspace detector algorithm assumes that the background covariance matrices do not include the target signatures, which seldom happens in the real world. How to develop a proper metric for measuring the separability between targets and backgrounds thus becomes the key in target detection. This paper proposes an efficient maximum margin metric learning (MMML) based target detection algorithm, which aims at exploring the limited samples in metric learning, transforms the metric learning problem for hyperspectral target detection into a maximum margin problem that can be optimized via a cutting plane method, and maximally separates the target samples from the background ones. The extensive experimental results with different HSIs demonstrate that the proposed method outperforms both the state-of-the-art target detection algorithms and the other classical metric learning methods.

  12. Metric Units in Primary Schools.

    ERIC Educational Resources Information Center

    Lighthill, M. J.; And Others

    Although this pamphlet is intended as background material for teachers in English primary schools changing to the System International d'Unites (SI units), the form of the metric system being adopted by the United Kingdom, the educational implications of the change and the lists of apparatus suitable for use with children up to 14 years of age are…

  13. Metrication and the Technical Teacher

    ERIC Educational Resources Information Center

    Irving, Michael

    1975-01-01

    The conclusion of the two-part feature on the SI metric system (International System of Units) reviews the basics and some of the rules technical teachers need to know in order to prepare their students for the changing world. (Author)

  14. Guidelines for Teaching Metric Concepts.

    ERIC Educational Resources Information Center

    Wisconsin State Dept. of Public Instruction, Madison.

    The primary purpose of these guidelines is to provide teachers and other decision-makers with a suggested framework within which sound planning for metric education can be done. Student behavioral objectives are listed by topic. Each objective is coded to indicate grade level, topic, and objective number. A chart is provided to show a kindergarten…

  15. Powerful Metrics: Strategic and Transformative

    ERIC Educational Resources Information Center

    Butterfield, Barbara

    2006-01-01

    To be a valuable partner at the strategic level, human resources can and should contribute to both institutional effectiveness measurement and workforce metrics. In this article, the author examines how to link HR initiatives with key institutional strategies, clarifies essential HR responsibilities for workforce results, explores return on human…

  16. Metrics in Education - Resource Materials.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Div. of Curriculum Development.

    This publication contains materials suitable for reproduction as transparencies or as classroom handouts. These metric materials may be used in a variety of occupational and practical arts courses. The format of the materials is in large print, some with humorous drawing; details of drawings and charts are easy to read. Introductory pages deal…

  17. Improving an Imperfect Metric System

    ERIC Educational Resources Information Center

    Frasier, E. Lewis

    1974-01-01

    Suggests some improvements and additional units necessary for the International Metric System to expand its use to all measurable entities and defined quantities, especially in the measurement of time and angles. Included are tables of proposed unit systems in contrast with the presently available systems. (CC)

  18. Charged C-metric in conformal gravity

    NASA Astrophysics Data System (ADS)

    Lim, Yen-Kheng

    2016-04-01

    Using a C-metric-type ansatz, we obtain an exact solution to conformal gravity coupled to a Maxwell electromagnetic field. The solution resembles a C-metric spacetime carrying an electromagnetic charge. The metric is cast in a factorized form which allows us to study the domain structure of its static coordinate regions. This metric reduces to the well-known Mannheim-Kazanas metric under an appropriate limiting procedure, and also reduces to the (anti-)de Sitter C-metric of Einstein gravity for a particular choice of parameters.

  19. Semantic Metrics for Analysis of Software

    NASA Technical Reports Server (NTRS)

    Etzkorn, Letha H.; Cox, Glenn W.; Farrington, Phil; Utley, Dawn R.; Ghalston, Sampson; Stein, Cara

    2005-01-01

    A recently conceived suite of object-oriented software metrics focuses on semantic aspects of software, in contradistinction to traditional software metrics, which focus on syntactic aspects of software. Semantic metrics represent a more human-oriented view of software than do syntactic metrics. The semantic metrics of a given computer program are calculated by use of the output of a knowledge-based analysis of the program, and are substantially more representative of software quality and more readily comprehensible from a human perspective than are the syntactic metrics.

  20. Metrics for Labeled Markov Systems

    NASA Technical Reports Server (NTRS)

    Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash

    1999-01-01

    Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for reasoning about more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. Our results are as follows. We develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes. We show that processes at distance zero are bisimilar. We describe a decision procedure to compute the distance between two processes. We show that reasoning about approximate equivalence can be done compositionally, by showing that process combinators do not increase distance. Finally, we introduce an asymptotic metric to capture asymptotic properties of Markov chains, and show that parallel composition does not increase asymptotic distance.

  1. Object-oriented productivity metrics

    NASA Technical Reports Server (NTRS)

    Connell, John L.; Eller, Nancy

    1992-01-01

    Software productivity metrics are useful for sizing and costing proposed software and for measuring development productivity. Estimating and measuring source lines of code (SLOC) has proven to be a bad idea because it encourages writing more lines of code and using lower level languages. Function Point Analysis is an improved software metric system, but it is not compatible with newer rapid prototyping and object-oriented approaches to software development. A process is presented here for counting object-oriented effort points, based on a preliminary object-oriented analysis. It is proposed that this approach is compatible with object-oriented analysis, design, programming, and rapid prototyping. Statistics gathered on actual projects are presented to validate the approach.

  2. Hands-On Activities with Metrics

    ERIC Educational Resources Information Center

    McFee, Evan

    1978-01-01

    Suggestions for familiarizing elementary teachers with the use of the metric system are given. These include a "stair-steps" method of converting units within the metric system and estimation and measurement activities using familiar everyday objects. (MN)

  3. Measure Metric: A Multi-State Consortium

    ERIC Educational Resources Information Center

    Dowling, Kenneth W.

    1977-01-01

    Describes the "Measure Metric" series of twelve fifteen-minute programs and related classroom materials for grades 5 and 6 for teaching the metric system and the International System of Units (SI). (SL)

  4. Measuring Sustainability: Deriving Metrics From Objectives (Presentation)

    EPA Science Inventory

    The definition of 'sustain', to keep in existence, provides some insight into the metrics that are required to measure sustainability and adequately respond to assure sustainability. Keeping something in existence implies temporal and spatial contexts and requires metrics that g...

  5. Weighted contrast metric for imaging system performance

    NASA Astrophysics Data System (ADS)

    Teaney, Brian P.

    2012-06-01

    There have been significant improvements in the image quality metrics used in the NVESD model suite in recent years. The introduction of the Targeting Task Performance (TTP) metric to replace the Johnson criteria yielded significantly more accurate predictions for under-sampled imaging systems in particular. However, there are certain cases in which the TTP metric predicts overly optimistic performance. In this paper a new metric for predicting the performance of imaging systems is described. This new weighted contrast metric is characterized as a hybrid of the TTP metric and the Johnson criteria. Results from a number of historical perception studies are presented to compare the performance of the TTP metric and the Johnson criteria to the newly proposed metric.

  6. Do Your Students Measure Up Metrically?

    ERIC Educational Resources Information Center

    Taylor, P. Mark; Simms, Ken; Kim, Ok-Kyeong; Reys, Robert E.

    2001-01-01

    Examines released metric items from the Third International Mathematics and Science Study (TIMSS) and the 3rd and 4th grade results. Recommends refocusing instruction on the metric system to improve student performance in measurement. (KHR)

  7. Joint learning of labels and distance metric.

    PubMed

    Liu, Bo; Wang, Meng; Hong, Richang; Zha, Zhengjun; Hua, Xian-Sheng

    2010-06-01

    Machine learning algorithms frequently suffer from the insufficiency of training data and the usage of inappropriate distance metric. In this paper, we propose a joint learning of labels and distance metric (JLLDM) approach, which is able to simultaneously address the two difficulties. In comparison with the existing semi-supervised learning and distance metric learning methods that focus only on label prediction or distance metric construction, the JLLDM algorithm optimizes the labels of unlabeled samples and a Mahalanobis distance metric in a unified scheme. The advantage of JLLDM is multifold: 1) the problem of training data insufficiency can be tackled; 2) a good distance metric can be constructed with only very few training samples; and 3) no radius parameter is needed since the algorithm automatically determines the scale of the metric. Extensive experiments are conducted to compare the JLLDM approach with different semi-supervised learning and distance metric learning methods, and empirical results demonstrate its effectiveness. PMID:19963702
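
    Like most of the distance metric learning methods mentioned above, JLLDM optimizes a Mahalanobis metric. A minimal sketch of evaluating such a learned metric (the joint learning of labels and of the matrix M, the paper's actual contribution, is not shown; M here is just a placeholder positive semi-definite matrix):

```python
import numpy as np

def mahalanobis(x, y, M):
    """Distance under a learned Mahalanobis metric M (PSD):
    d_M(x, y) = sqrt((x - y)^T M (x - y))."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(d @ M @ d))

# With M = identity this reduces to the Euclidean distance; a learned M
# reweights and decorrelates feature dimensions to suit the task.
```
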

  8. Metric reconstruction from Weyl scalars

    NASA Astrophysics Data System (ADS)

    Whiting, Bernard F.; Price, Larry R.

    2005-08-01

    The Kerr geometry has remained an elusive world in which to explore physics and delve into the more esoteric implications of general relativity. Following the discovery, by Kerr in 1963, of the metric for a rotating black hole, the most major advance has been an understanding of its Weyl curvature perturbations based on Teukolsky's discovery of separable wave equations some ten years later. In the current research climate, where experiments across the globe are preparing for the first detection of gravitational waves, a more complete understanding, going beyond the Weyl curvature alone, is now called for. To understand precisely how comparatively small masses move in response to the gravitational waves they emit, a formalism has been developed based on a description of the whole spacetime metric perturbation in the neighbourhood of the emission region. Presently, such a description is not available for the Kerr geometry. While there does exist a prescription for obtaining metric perturbations once curvature perturbations are known, it has become apparent that there are gaps in that formalism which are still waiting to be filled. The most serious gaps include gauge inflexibility, the inability to include sources, which are essential when the emitting masses are considered, and the failure to describe the ℓ = 0 and 1 perturbation properties. Among these latter properties of the perturbed spacetime, arising from a point mass in orbit, are the perturbed mass and axial component of angular momentum, as well as the very elusive Carter constant for non-axial angular momentum. A status report is given on recent work which begins to repair these deficiencies in our current incomplete description of Kerr metric perturbations.

  9. Marketing metrics for medical practices.

    PubMed

    Zahaluk, David; Baum, Neil

    2012-01-01

    There's a saying by John Wanamaker who pontificated, "Half the money I spend on advertising is wasted; the trouble is, I don't know which half". Today you have opportunities to determine which parts of your marketing efforts are effective and what is wasted. However, you have to measure your marketing results. This article will discuss marketing metrics and how to use them to get the best bang for your marketing buck. PMID:22834190

  10. Multi-Metric Sustainability Analysis

    SciTech Connect

    Cowlin, S.; Heimiller, D.; Macknick, J.; Mann, M.; Pless, J.; Munoz, D.

    2014-12-01

    A readily accessible framework that allows for evaluating impacts and comparing tradeoffs among factors in energy policy, expansion planning, and investment decision making is lacking. Recognizing this, the Joint Institute for Strategic Energy Analysis (JISEA) funded an exploration of multi-metric sustainability analysis (MMSA) to provide energy decision makers with a means to make more comprehensive comparisons of energy technologies. The resulting MMSA tool lets decision makers simultaneously compare technologies and potential deployment locations.

  11. Adaptive unsupervised slow feature analysis for feature extraction

    NASA Astrophysics Data System (ADS)

    Gu, Xingjian; Liu, Chuancai; Wang, Sheng

    2015-03-01

    Slow feature analysis (SFA) extracts slowly varying features out of the input data and has been successfully applied to pattern recognition. However, SFA heavily relies on the constructed time series when it is applied to databases that neither have obvious temporal structure nor have label information. Traditional SFA constructs time series based on a k-nearest neighbor (k-NN) criterion. Specifically, the time series set constructed by the k-NN criterion is likely to include noisy time series or lose suitable time series because the parameter k is difficult to determine. To overcome these problems, a method called adaptive unsupervised slow feature analysis (AUSFA) is proposed. First, AUSFA designs an adaptive criterion to generate time series for characterizing submanifolds. The constructed time series have two properties: (1) the two points of a time series lie on the same submanifold and (2) the submanifold of the time series is smooth. Second, AUSFA seeks projections that simultaneously minimize the slowness scatter and maximize the fastness scatter to extract slow discriminant features. Extensive experimental results on three benchmark face databases demonstrate the effectiveness of our proposed method.
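
    As a hedged illustration of the slowness objective that SFA optimizes, here is a minimal linear SFA under the standard whiten-then-minimize-derivative-variance formulation; it uses consecutive rows as the time series and does not implement AUSFA's adaptive time-series construction:

```python
import numpy as np

def sfa(X, n_components=1):
    """Minimal linear SFA sketch.  X has shape (T, d), rows ordered in
    time.  Returns the projections along which the signal varies most
    slowly (smallest variance of the temporal derivative)."""
    X = X - X.mean(axis=0)
    # Whiten the input so all directions have unit variance.
    cov = X.T @ X / len(X)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs / np.sqrt(evals + 1e-12)    # whitening matrix, shape (d, d)
    Z = X @ W
    # Slowness: eigenvectors of the derivative covariance, slowest first
    # (eigh returns eigenvalues in ascending order).
    dZ = np.diff(Z, axis=0)
    dcov = dZ.T @ dZ / len(dZ)
    _, dvecs = np.linalg.eigh(dcov)
    return Z @ dvecs[:, :n_components]
```

    On a linear mixture of a slow and a fast sinusoid, the first extracted component recovers the slow source up to sign and scale.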

  12. Achieving effective landscape conservation: evolving demands adaptive metrics

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Rapid changes in demographics and on- and off-farm land use limit the impacts of U.S. conservation programs and present particular challenges to future conservation efforts. The fragmentation of landscape through urban, suburban, and peri-urban development, coincident with demographic shifts, has s...

  13. Evaluating macroinvertebrate biological metrics for ecological assessment of streams in northern Portugal.

    PubMed

    Varandas, Simone G; Cortes, Rui Manuel Vitor

    2010-07-01

    A procedure to select the most relevant metrics for assessing the ecological condition of the Douro basin (north Portugal) was developed based upon a set of 184 benthic community metrics. They were grouped into 16 biological categories selected from literature using data collected over 2 years from 54 sites along 31 rivers covering the whole perceived range of human disturbance. Multivariate analyses were carried out to identify the main trends in the macroinvertebrate data, to select reference versus impaired sites, to avoid multicolinearity between metrics, and to identify those that were clearly independent from natural stream typology. Structural metrics, adaptation metrics, and tolerance measures most effectively responded across a range of human influence. We find these attributes to be ecologically sound for monitoring Portugal's lotic ecosystems and providing information relevant to the Water Framework Directive, which asserts that the definition of water quality depends on its "ecological status", independent of the actual or potential uses of those waters. PMID:19488735

  14. Metric Measurement: A Resource for Teachers.

    ERIC Educational Resources Information Center

    Texas Education Agency, Austin. Div. of Curriculum Development.

    This document is designed to help teachers deal with the changeover from the United States customary system to the metric system. This publication contains a brief introduction to the historical development of the metric system, tables of the International System of Units, and descriptions of everyday use of the metric system. Basic information…

  15. Materials for Metric Instruction. Mathematics Education Reports.

    ERIC Educational Resources Information Center

    Bitter, Gary G.; Geer, Charles

    This compilation lists available metric kits (41 listings), task cards (8 listings), films (24 listings), filmstrips (36 listings), slides (4 listings), and other miscellaneous metric materials (13 listings). The bibliography is intended as a quick reference or source of information for supplementary metric materials. For each entry the source,…

  16. Metrics, Lumber, and the Shop Teacher

    ERIC Educational Resources Information Center

    Craemer, Peter J.

    1978-01-01

    As producers of lumber are preparing to convert their output to the metric system, wood shop and building construction teachers must become familiar with the metric measurement language and methods. Manufacturers prefer the "soft conversion" process of changing English to metric units rather than hard conversion, or redimensioning of lumber. Some…

  17. A Look at Metrics in Distributive Education.

    ERIC Educational Resources Information Center

    Canei, Robert A.

    The United States will convert to the metric system of measurement in the near future, and the distributive education programs in high school and at the adult level will have to train the needed personnel for business. The manual gives the basic conversion methods and instruction in teaching metrics. Metric programs conducted for business…

  18. Toward an efficient objective metric based on perceptual criteria

    NASA Astrophysics Data System (ADS)

    Quintard, Ludovic; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2008-01-01

    Quality assessment is a very challenging problem and will remain so, since it is difficult to define universal tools. Subjective assessment is one well-adapted approach, but it is tedious, time consuming, and requires a normalized viewing room. Objective metrics fall into three classes: full-reference, reduced-reference, and no-reference. This paper presents a study carried out for the development of a no-reference objective metric dedicated to the quality evaluation of display devices. Initially, a subjective study was devoted to this problem by asking a representative panel (15 male and 15 female; 10 young adults, 10 adults, and 10 seniors) to answer questions about their perception of several criteria for quality assessment. These quality factors were hue, saturation, contrast, and texture. The aim was to define the importance of perceptual criteria in the human judgment of quality. Following the study, the factors that affect the quality evaluation of display devices were proposed. The no-reference objective metric was developed using statistical tools that identify the important axes. This no-reference metric, based on perceptual criteria and integrating some specificities of the human visual system (HVS), correlates highly with the subjective data.

  19. Texture metric that predicts target detection performance

    NASA Astrophysics Data System (ADS)

    Culpepper, Joanne B.

    2015-12-01

    Two texture metrics based on gray level co-occurrence error (GLCE) are used to predict probability of detection and mean search time. The two texture metrics are local clutter metrics and are based on the statistics of GLCE probability distributions. The degree of correlation between various clutter metrics and the target detection performance of the nine military vehicles in complex natural scenes found in the Search_2 dataset are presented. Comparison is also made between four other common clutter metrics found in the literature: root sum of squares, Doyle, statistical variance, and target structure similarity. The experimental results show that the GLCE energy metric is a better predictor of target detection performance when searching for targets in natural scenes than the other clutter metrics studied.
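    For context, the energy (angular second moment) of a standard gray level co-occurrence matrix (GLCM), the textbook relative of the GLCE statistics above, can be sketched as follows (an illustrative assumption: a single horizontal pixel offset; the paper's GLCE metric differs in detail):

```python
import numpy as np

def glcm_energy(img, levels):
    """GLCM energy for a horizontal (dx=1) offset.

    img: 2-D integer array with gray levels in [0, levels).
    Uniform textures score near 1; busy textures score near 0.
    """
    left, right = img[:, :-1].ravel(), img[:, 1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (left, right), 1)     # count co-occurring level pairs
    glcm /= glcm.sum()                    # normalize to a joint probability
    return float((glcm ** 2).sum())       # energy = sum of squared entries
```

    A constant image gives energy 1.0; a two-level checkerboard gives 0.5, since its pairs split evenly between (0, 1) and (1, 0).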

  20. Comparing Resource Adequacy Metrics: Preprint

    SciTech Connect

    Ibanez, E.; Milligan, M.

    2014-09-01

    As the penetration of variable generation (wind and solar) increases around the world, there is an accompanying growing interest and importance in accurately assessing the contribution that these resources can make toward planning reserve. This contribution, also known as the capacity credit or capacity value of the resource, is best quantified by using a probabilistic measure of overall resource adequacy. In recognizing the variable nature of these renewable resources, there has been interest in exploring the use of reliability metrics other than loss of load expectation. In this paper, we undertake some comparisons using data from the Western Electricity Coordinating Council in the western United States.

  1. A class of integrable metrics

    NASA Astrophysics Data System (ADS)

    Anabalón, Andrés; Batista, Carlos

    2016-03-01

    In four dimensions, the most general metric admitting two commuting Killing vectors and a rank-two Killing tensor can be parametrized by ten arbitrary functions of a single variable. We show that picking a special vierbein, reducing the system to eight functions, implies the existence of two geodesic and shear-free null congruences, generated by two principal null directions of the Weyl tensor. Thus, if the spacetime is an Einstein manifold, the Goldberg-Sachs theorem implies it is Petrov type D and, by explicit construction, is in the Carter class. Hence, our analysis provides a straightforward connection between the most general integrable structure and the Carter family of spacetimes.

  2. A family of heavenly metrics

    NASA Astrophysics Data System (ADS)

    Nutku, Y.; Sheftel, M. B.

    2014-02-01

    This is a corrected and essentially extended version of the unpublished manuscript by Y Nutku and M Sheftel which contains new results. It is proposed to be published in honour of Y Nutku’s memory. All corrections and new results in sections 1, 2 and 4 are due to M Sheftel. We present new anti-self-dual exact solutions of the Einstein field equations with Euclidean and neutral (ultra-hyperbolic) signatures that admit only one rotational Killing vector. Such solutions of the Einstein field equations are determined by non-invariant solutions of Boyer-Finley (BF) equation. For the case of Euclidean signature such a solution of the BF equation was first constructed by Calderbank and Tod. Two years later, Martina, Sheftel and Winternitz applied the method of group foliation to the BF equation and reproduced the Calderbank-Tod solution together with new solutions for the neutral signature. In the case of Euclidean signature we obtain new metrics which asymptotically locally look like a flat space and have a non-removable singular point at the origin. In the case of ultra-hyperbolic signature there exist three inequivalent forms of metric. Only one of these can be obtained by analytic continuation from the Calderbank-Tod solution whereas the other two are new.

  3. Metrics for building performance assurance

    SciTech Connect

    Koles, G.; Hitchcock, R.; Sherman, M.

    1996-07-01

    This report documents part of the work performed in phase I of a Laboratory Directed Research and Development (LDRD) funded project entitled Building Performance Assurances (BPA). The focus of the BPA effort is to transform the way buildings are built and operated in order to improve building performance by facilitating or providing tools, infrastructure, and information. The efforts described herein focus on the development of metrics with which to evaluate building performance and for which information and optimization tools need to be developed. The classes of building performance metrics reviewed are (1) Building Services, (2) First Costs, (3) Operating Costs, (4) Maintenance Costs, and (5) Energy and Environmental Factors. The first category defines the direct benefits associated with buildings; the next three are different kinds of costs associated with providing those benefits; the last category includes concerns that are broader than direct costs and benefits to the building owner and building occupants. The level of detail given to the various issues reflects the current state of knowledge in those scientific areas and the ability of the authors to determine that state of knowledge, rather than directly reflecting the importance of these issues; the report intentionally does not focus specifically on energy issues. The report describes work in progress, is intended as a resource, and can be used to indicate the areas needing more investigation. Other reports on BPA activities are also available.

  4. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego than over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.

  5. Multifractal Resilience Metrics for Complex Systems?

    NASA Astrophysics Data System (ADS)

    Schertzer, D. J.; Tchiguirinskaia, I.; Lovejoy, S.

    2011-12-01

    The term resilience has become extremely fashionable, especially for complex systems, whereas corresponding operational definitions have remained rather elusive (Carpenter et al. 2001). More precisely, assessing the resilience of man-made systems (from nuclear plants to cities) to geophysical extremes requires mathematically defined resilience metrics based on some conceptual definition, e.g. the often-cited definition of "ecological resilience" (Holling 1973): "the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks". Surprisingly, whereas Folke et al. (2010) acknowledged that "multiscale resilience is fundamental for understanding the interplay between persistence and change, adaptability and transformability", the relation between resilience and scaling has not been much questioned; see however Peterson (2000). We argue that it is indispensable to go well beyond the attractor approach (Pimm and Lawton 1977; Collings and Wollkind 1990), as well as its extensions (Martin et al. 2011), into the framework of viability theory (Aubin 1991; Aubin et al. 2011). Indeed, both are limited to systems that are complex only in time. Scale symmetries are indispensable to reduce space-time complexity by defining scale-independent observables, which are the singularities of the original, scale-dependent fields. These singularities make it possible to define across-scale resilience, instead of resilience at a given scale.

  6. Bounded Linear Stability Margin Analysis of Nonlinear Hybrid Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Boskovic, Jovan D.

    2008-01-01

    This paper presents a bounded linear stability analysis for a hybrid adaptive control that blends both direct and indirect adaptive control. Stability and convergence of nonlinear adaptive control are analyzed using an approximate linear equivalent system. A stability margin analysis shows that a large adaptive gain can lead to a reduced phase margin. This method can enable metrics-driven adaptive control whereby the adaptive gain is adjusted to meet stability margin requirements.

  7. A Sensor-Independent Gust Hazard Metric

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.

    2001-01-01

    A procedure for calculating an intuitive hazard metric for gust effects on airplanes is described. The hazard metric is for use by pilots and is intended to replace subjective pilot reports (PIREPs) of the turbulence level. The hazard metric is composed of three numbers: the first describes the average airplane response to the turbulence, the second describes the positive peak airplane response to the gusts, and the third describes the negative peak airplane response to the gusts. The hazard metric is derived from any time history of vertical gust measurements and is thus independent of the sensor making the gust measurements. The metric is demonstrated for one simulated airplane encountering different types of gusts including those derived from flight data recorder measurements of actual accidents. The simulated airplane responses to the gusts compare favorably with the hazard metric.

  8. Fighter agility metrics, research, and test

    NASA Technical Reports Server (NTRS)

    Liefer, Randall K.; Valasek, John; Eggold, David P.

    1990-01-01

    Proposed new metrics to assess fighter aircraft agility are collected and analyzed. A framework for classification of these new agility metrics is developed and applied. A complete set of transient agility metrics is evaluated with a high-fidelity, nonlinear F-18 simulation provided by the NASA Dryden Flight Research Center. Test techniques and data reduction methods are proposed. A method of providing cuing information to the pilot during flight test is discussed. The sensitivity of longitudinal and lateral agility metrics to deviations from the pilot cues is studied in detail. The metrics are shown to be largely insensitive to reasonable deviations from the nominal test pilot commands. Instrumentation required to quantify agility via flight test is also considered. With one exception, each of the proposed new metrics may be measured with instrumentation currently available. Simulation documentation and user instructions are provided in an appendix.

  9. Common Metrics for Human-Robot Interaction

    NASA Technical Reports Server (NTRS)

    Steinfeld, Aaron; Lewis, Michael; Fong, Terrence; Scholtz, Jean; Schultz, Alan; Kaber, David; Goodrich, Michael

    2006-01-01

    This paper describes an effort to identify common metrics for task-oriented human-robot interaction (HRI). We begin by discussing the need for a toolkit of HRI metrics. We then describe the framework of our work and identify important biasing factors that must be taken into consideration. Finally, we present suggested common metrics for standardization and a case study. Preparation of a larger, more detailed toolkit is in progress.

  10. Metrics for Business Process Models

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores in how far certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.

  11. Metrics for antibody therapeutics development.

    PubMed

    Reichert, Janice M

    2010-01-01

    A wide variety of full-size monoclonal antibodies (mAbs) and therapeutics derived from alternative antibody formats can be produced through genetic and biological engineering techniques. These molecules are now filling the preclinical and clinical pipelines of every major pharmaceutical company and many biotechnology firms. Metrics for the development of antibody therapeutics, including averages for the number of candidates entering clinical study and development phase lengths for mAbs approved in the United States, were derived from analysis of a dataset of over 600 therapeutic mAbs that entered clinical study sponsored, at least in part, by commercial firms. The results presented provide an overview of the field and context for the evaluation of on-going and prospective mAb development programs. The expansion of therapeutic antibody use through supplemental marketing approvals and the increase in the study of therapeutics derived from alternative antibody formats are discussed. PMID:20930555

  12. Is it possible to predict long-term success with k-NN? Case study of four market indices (FTSE100, DAX, HANGSENG, NASDAQ)

    NASA Astrophysics Data System (ADS)

    Shi, Y.; Gorban, A. N.; Y Yang, T.

    2014-03-01

    This case study tests the possibility of predicting the 'success' (or 'winner') components of four stock and shares market indices over a three-year period from 02-Jul-2009 to 29-Jun-2012. We compare their performance in two time frames: an initial three-month frame at the beginning (02/06/2009-30/09/2009) and a final three-month frame (02/04/2012-29/06/2012). To label the components, the average price ratio between the two time frames is computed and sorted in descending order. The average price ratio is defined as the ratio between the mean prices of the beginning and final time periods. The 'winner' components are the top one third of the total components in this order: their mean price in the final time period is relatively higher than in the beginning time period. The 'loser' components are the last one third, since they have higher mean prices in the beginning time period. We analyse whether there is any information about the winner-loser separation in the initial fragments of the daily closing-price log-return time series. Leave-one-out cross-validation with the k-NN algorithm is applied to the daily log-returns of the components, using a distance and a proximity measure. The error analysis shows that for the HANGSENG and DAX indices there are clear signs that the probability of long-term success can be evaluated. The correlation distance matrix histograms and the 2-D/3-D elastic maps generated with ViDaExpert show that the 'winner' components are closer to each other and that 'winner'/'loser' components are separable on elastic maps for the HANGSENG and DAX indices, while for the remaining indices there is no sign of separation.
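    The evaluation scheme described above can be sketched as leave-one-out cross-validation with a k-NN vote over a pairwise distance matrix (a hedged sketch: the correlation distance and the toy data in the test are illustrative assumptions, not the study's actual index data):

```python
import numpy as np

def corr_dist(a, b):
    """Correlation distance: near 0 for strongly correlated series."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

def loo_knn_accuracy(series, labels, k=3):
    """Leave-one-out cross-validated k-NN accuracy on a list of series."""
    n = len(series)
    D = np.array([[corr_dist(series[i], series[j]) for j in range(n)]
                  for i in range(n)])
    correct = 0
    for i in range(n):
        order = np.argsort(D[i])
        neighbors = [j for j in order if j != i][:k]   # hold out point i
        votes = [labels[j] for j in neighbors]
        correct += max(set(votes), key=votes.count) == labels[i]
    return correct / n
```

    With well-separated clusters of correlated series, the held-out point's nearest neighbors share its label and the accuracy approaches 1.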

  13. Semantic Metrics for Object Oriented Design

    NASA Technical Reports Server (NTRS)

    Etzkorn, Letha

    2003-01-01

    The purpose of this proposal is to research a new suite of object-oriented (OO) software metrics, called semantic metrics, that have the potential to help software engineers identify fragile, low-quality code sections much earlier in the development cycle than is possible with traditional OO metrics. With earlier and better fault detection, software maintenance will be less time consuming and expensive, and software reusability will be improved. Because it is less costly to correct faults found earlier than to correct faults found later in the software lifecycle, the overall cost of software development will be reduced. Semantic metrics can be derived from the knowledge base of a program understanding system. A program understanding system is designed to understand a software module. Once understanding is complete, the knowledge base contains digested information about the software module. Various semantic metrics can be collected on the knowledge base. This new kind of metric measures domain complexity, or the relationship of the software to its application domain, rather than implementation complexity, which is what traditional software metrics measure. A semantic metric will thus map much more closely to qualities humans are interested in, such as cohesion and maintainability, than is possible using traditional metrics, which are calculated using only syntactic aspects of software.

  14. Fusion metrics for dynamic situation analysis

    NASA Astrophysics Data System (ADS)

    Blasch, Erik P.; Pribilski, Mike; Daughtery, Bryan; Roscoe, Brian; Gunsett, Josh

    2004-08-01

    To design information fusion systems, it is important to develop metrics as part of a test and evaluation strategy. In many cases, fusion systems are designed to (1) meet a specific set of user information needs (IN), (2) continuously validate information pedigree and updates, and (3) maintain this performance under changing conditions. A fusion system's performance is evaluated in many ways. However, developing a consistent set of metrics is important for standardization. For example, many track and identification metrics have been proposed for fusion analysis. To evaluate a complete fusion system performance, level 4 sensor management and level 5 user refinement metrics need to be developed simultaneously to determine whether or not the fusion system is meeting information needs. To describe fusion performance, the fusion community needs to agree on a minimum set of metrics for user assessment and algorithm comparison. We suggest that such a minimum set should include feasible metrics of accuracy, confidence, throughput, timeliness, and cost. These metrics can be computed as confidence (probability), accuracy (error), timeliness (delay), throughput (amount) and cost (dollars). In this paper, we explore an aggregate set of metrics for fusion evaluation and demonstrate with information need metrics for dynamic situation analysis.

  15. The Adaptability Evaluation of Enterprise Information Systems

    NASA Astrophysics Data System (ADS)

    Liu, Junjuan; Xue, Chaogai; Dong, Lili

    In this paper, a set of evaluation criteria for enterprise information systems is proposed using the GQM (Goal-Question-Metric) approach. Then, based on the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), an evaluation model is proposed to assess the adaptability of enterprise information systems. Finally, the application of the evaluation criteria and model is demonstrated via a case study, which provides references for optimizing the adaptability of enterprise information systems.

  16. Elementary Metric Curriculum - Project T.I.M.E. (Timely Implementation of Metric Education). Part I.

    ERIC Educational Resources Information Center

    Community School District 18, Brooklyn, NY.

    This is a teacher's manual for an ISS-based elementary school course in the metric system. Behavioral objectives and student activities are included. The topics covered include: (1) linear measurement; (2) metric-decimal relationships; (3) metric conversions; (4) geometry; (5) scale drawings; and (6) capacity. This is the first of a two-part…

  17. Advanced Life Support System Value Metric

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Rasky, Daniel J. (Technical Monitor)

    1999-01-01

    The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have led to the following approach. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are considered to be exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is defined after many trade-offs. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, SVM/[ESM + function (TRL)], with appropriate weighting and scaling. The total value is given by SVM. Cost is represented by higher ESM and lower TRL. The paper provides a detailed description and example application of a suggested System Value Metric and an overall ALS system metric.
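    The suggested overall metric SVM/[ESM + function(TRL)] can be sketched with a hypothetical TRL penalty (the penalty shape and scale below are illustrative assumptions, not the paper's calibration):

```python
def als_metric(svm, esm, trl, penalty_scale=100.0):
    """Benefit/cost ratio SVM / (ESM + f(TRL)).

    Higher system value, lower equivalent system mass, and higher
    technology readiness all raise the score. f(TRL) is a hypothetical
    immaturity cost that shrinks as readiness level rises.
    """
    return svm / (esm + penalty_scale / trl)
```

    Under this sketch, a design with the same value and mass but a lower TRL scores worse, matching the report's intent that cost is represented by higher ESM and lower TRL.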

  18. A Complexity Metric for Automated Separation

    NASA Technical Reports Server (NTRS)

    Aweiss, Arwa

    2009-01-01

    A metric is proposed to characterize airspace complexity with respect to an automated separation assurance function. The Maneuver Option metric is a function of the number of conflict-free trajectory change options the automated separation assurance function is able to identify for each aircraft in the airspace at a given time. By aggregating the metric for all aircraft in a region of airspace, a measure of the instantaneous complexity of the airspace is produced. A six-hour simulation of Fort Worth Center air traffic was conducted to assess the metric. Results showed aircraft were twice as likely to be constrained in the vertical dimension than the horizontal one. By application of this metric, situations found to be most complex were those where level overflights and descending arrivals passed through or merged into an arrival stream. The metric identified high complexity regions that correlate well with current air traffic control operations. The Maneuver Option metric did not correlate with traffic count alone, a result consistent with complexity metrics for human-controlled airspace.

  19. Metrics for Food Preparation, Baking, Meat Cutting.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of food preparation, baking, meat cutting students, this instructional package is one of five for the home economics occupations cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  20. Moving to Metrics in Our Schools.

    ERIC Educational Resources Information Center

    Gibb, E. Glenadine

    This speech, addressed to school administrators, outlines the reasons for implementing instruction in the metric system and offers advice on several aspects of this implementation. The author observes that although the primary responsibility for teaching metric measurement will fall on the mathematics teacher, other teachers (e.g., science,…

  1. Metrics for Air Conditioning & Refrigeration, Heating, Ventilating.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of the air conditioning and refrigeration, heating and ventilating student, this instructional package is one of three for the construction occupations cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already…

  2. Particle dynamics in the original Schwarzschild metric

    NASA Astrophysics Data System (ADS)

    Fimin, N. N.; Chechetkin, V. M.

    2016-04-01

    The properties of the original Schwarzschild metric for a point gravitating mass are considered. The laws of motion in the corresponding space-time are established, and the transition from the Schwarzschild metric to the metric of a "dusty universe" is studied. The dynamics of a system of particles in the post-Newtonian approximation are analyzed.

  3. Metrics for Architectural, Civil, Mechanical Drafting.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of architectural, civil, mechanical drafting students, this instructional package is one of six for the communication media occupations cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the…

  4. Handbook for Metric Usage (First Edition).

    ERIC Educational Resources Information Center

    American Home Economics Association, Washington, DC.

    Guidelines for changing to the metric system of measurement with regard to all phases of home economics are presented in this handbook. Topics covered include the following: (1) history of the metric system, (2) the International System of Units (SI): derived units of length, mass, time, and electric current; temperature; luminous intensity;…

  5. Metrics for Automotive Merchandising, Petroleum Marketing.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students in automotive merchandising and petroleum marketing classes, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know…

  6. On homogeneous Einstein (α , β) -metrics

    NASA Astrophysics Data System (ADS)

    Yan, Zaili; Deng, Shaoqiang

    2016-05-01

    In this paper, we study homogeneous Einstein (α , β) -metrics. First, we deduce a formula for Ricci curvature of a homogeneous (α , β) -metric. Based on this formula, we obtain a sufficient and necessary condition for a compact homogeneous (α , β) -metric to be Einstein and with vanishing S-curvature. Moreover, we prove that any homogeneous Ricci flat (α , β) space with vanishing S-curvature must be a Minkowski space. Finally, we consider left invariant Einstein (α , β) -metrics on Lie groups with negative Ricci constant. Under some appropriate conditions, we show that the underlying Lie groups must be two step solvable. We also present a more convenient sufficient and necessary condition for the metric to be Einstein in this special case.

  7. Smart Grid Status and Metrics Report Appendices

    SciTech Connect

    Balducci, Patrick J.; Antonopoulos, Chrissi A.; Clements, Samuel L.; Gorrissen, Willy J.; Kirkham, Harold; Ruiz, Kathleen A.; Smith, David L.; Weimar, Mark R.; Gardner, Chris; Varney, Jeff

    2014-07-01

    A smart grid uses digital power control and communication technology to improve the reliability, security, flexibility, and efficiency of the electric system, from large generation through the delivery systems to electricity consumers and a growing number of distributed generation and storage resources. To convey progress made in achieving the vision of a smart grid, this report uses a set of six characteristics derived from the National Energy Technology Laboratory Modern Grid Strategy. The Smart Grid Status and Metrics Report defines and examines 21 metrics that collectively provide insight into the grid’s capacity to embody these characteristics. This appendix presents papers covering each of the 21 metrics identified in Section 2.1 of the Smart Grid Status and Metrics Report. These metric papers were prepared in advance of the main body of the report and collectively form its informational backbone.

  8. Program for implementing software quality metrics

    SciTech Connect

    Yule, H.P.; Riemer, C.A.

    1992-04-01

    This report describes a program by which the Veterans Benefit Administration (VBA) can implement metrics to measure the performance of automated data systems and demonstrate that they are improving over time. It provides a definition of quality, particularly with regard to software. Requirements for management and staff to achieve a successful metrics program are discussed. It lists the attributes of high-quality software, then describes the metrics or calculations that can be used to measure these attributes in a particular system. Case studies of some successful metrics programs used by business are presented. The report ends with suggestions on which metrics the VBA should use and the order in which they should be implemented.

  9. Information metric from a linear sigma model.

    PubMed

    Miyamoto, U; Yahikozawa, S

    2012-05-01

The idea that a space-time metric emerges as a Fisher-Rao "information metric" of instanton moduli space has been examined in several field theories, such as the Yang-Mills theories and nonlinear σ models. In this paper, we report that the flat Euclidean or Minkowskian metric, rather than an anti-de Sitter metric that generically emerges from instanton moduli spaces, can be obtained as the Fisher-Rao metric from a nontrivial solution of the massive Klein-Gordon field (a linear σ model). This realization of flat space from a simple field theory should be useful for investigating ideas that relate space-time geometry to information geometry. PMID:23004729
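
The Fisher-Rao construction referred to here is the standard information metric on a parametric family of distributions p(x|θ); in the present context the parameters θ are the moduli of the field solution (stated as general background, not from this abstract):

```latex
g_{ij}(\theta) = \int p(x\,|\,\theta)\,
  \frac{\partial \log p(x\,|\,\theta)}{\partial \theta^{i}}\,
  \frac{\partial \log p(x\,|\,\theta)}{\partial \theta^{j}}\,dx .
```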

  10. Metrics for border management systems.

    SciTech Connect

    Duggan, Ruth Ann

    2009-07-01

    There are as many unique and disparate manifestations of border systems as there are borders to protect. Border Security is a highly complex system analysis problem with global, regional, national, sector, and border element dimensions for land, water, and air domains. The complexity increases with the multiple, and sometimes conflicting, missions for regulating the flow of people and goods across borders, while securing them for national security. These systems include frontier border surveillance, immigration management and customs functions that must operate in a variety of weather, terrain, operational conditions, cultural constraints, and geopolitical contexts. As part of a Laboratory Directed Research and Development Project 08-684 (Year 1), the team developed a reference framework to decompose this complex system into international/regional, national, and border elements levels covering customs, immigration, and border policing functions. This generalized architecture is relevant to both domestic and international borders. As part of year two of this project (09-1204), the team determined relevant relative measures to better understand border management performance. This paper describes those relative metrics and how they can be used to improve border management systems.

  11. Metrics and Benchmarks for Visualization

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

What is a "good" visualization? How can the quality of a visualization be measured? How can one tell whether one visualization is "better" than another? I claim that the true quality of a visualization can only be measured in the context of a particular purpose. The same image generated from the same data may be excellent for one purpose and abysmal for another. A good measure of visualization quality will correspond to the performance of users in accomplishing the intended purpose, so the "gold standard" is user testing. As a user of visualization software (or at least a consultant to such users) I don't expect visualization software to have been tested in this way for every possible use. In fact, scientific visualization (as distinct from more "production oriented" uses of visualization) will continually encounter new data, new questions and new purposes; user testing can never keep up. Users need software they can trust, and advice on appropriate visualizations for particular purposes. Considering the following four processes, and their impact on visualization trustworthiness, reveals important work needed to create worthwhile metrics and benchmarks for visualization. These four processes are (1) complete system testing (user-in-loop), (2) software testing, (3) software design and (4) information dissemination. Additional information is contained in the original extended abstract.

  12. Metric handbook for Federal officials: Recommendations of the Interagency Committee on Metric Policy

    NASA Astrophysics Data System (ADS)

    1989-08-01

    Recommendations for introduction of metric units in proposed legislation, regulations, data requests and other Government use of measurement units are presented. These recommendations were developed for the Interagency Committee on Metric Policy by its working arm, the Metrication Operating Committee, and its Metric Practice and Preferred Units Subcommittee. Assistance in editing of the documents, coordination and publication in the Federal Register was provided by the U.S. Department of Commerce, Office of Metric Programs, which serves as the secretariat for the ICMP and its subordinate committees. Other Federal documents are provided for convenient reference as appendices.

  13. Partial Rectangular Metric Spaces and Fixed Point Theorems

    PubMed Central

    2014-01-01

    The purpose of this paper is to introduce the concept of partial rectangular metric spaces as a generalization of rectangular metric and partial metric spaces. Some properties of partial rectangular metric spaces and some fixed point results for quasitype contraction in partial rectangular metric spaces are proved. Some examples are given to illustrate the observed results. PMID:24672366
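
For orientation, the two ingredients being combined are the rectangular (Branciari) inequality, which replaces the triangle inequality, and the partial-metric feature of nonvanishing self-distance. The sketch below states both in a common notation (a paraphrase of the standard axioms, not the paper's exact formulation):

```latex
% Rectangular (quadrilateral) inequality, for all distinct u, v \notin \{x, y\}:
d(x, y) \le d(x, u) + d(u, v) + d(v, y).
% Partial-metric feature: self-distance need not vanish,
p(x, x) \le p(x, y), \qquad p(x, x) \ge 0 .
```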

  14. Advanced Life Support System Value Metric

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Arnold, James O. (Technical Monitor)

    1999-01-01

    The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have reached a consensus. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is then set accordingly. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, [SVM + TRL]/ESM, with appropriate weighting and scaling. The total value is the sum of SVM and TRL. Cost is represented by ESM. The paper provides a detailed description and example application of the suggested System Value Metric.
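
The suggested benefit/cost ratio can be made concrete with a short sketch. The function below computes [SVM + TRL]/ESM with simple linear weights; the weights, scores and masses are illustrative assumptions, not values from the ALS program:

```python
def als_system_metric(svm, trl, esm, w_svm=1.0, w_trl=1.0):
    """Benefit/cost ratio [SVM + TRL]/ESM with simple linear weights.

    svm -- System Value Metric score (aggregated benefit factors)
    trl -- Technology Readiness Level (1-9)
    esm -- Equivalent System Mass (kg-equivalent cost)
    The weights and scaling are illustrative, not from the ALS program.
    """
    if esm <= 0:
        raise ValueError("ESM must be positive")
    return (w_svm * svm + w_trl * trl) / esm

# Two hypothetical life-support designs:
design_a = als_system_metric(svm=7.5, trl=6, esm=1200.0)
design_b = als_system_metric(svm=8.0, trl=4, esm=1000.0)
best = "A" if design_a > design_b else "B"
```

In practice the SVM sub-scores (safety, maintainability, reliability, and so on) would themselves be weighted and scaled before entering the ratio.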

  15. A Kernel Classification Framework for Metric Learning.

    PubMed

    Wang, Faqiang; Zuo, Wangmeng; Zhang, Lei; Meng, Deyu; Zhang, David

    2015-09-01

    Learning a distance metric from the given training samples plays a crucial role in many machine learning tasks, and various models and optimization algorithms have been proposed in the past decade. In this paper, we generalize several state-of-the-art metric learning methods, such as large margin nearest neighbor (LMNN) and information theoretic metric learning (ITML), into a kernel classification framework. First, doublets and triplets are constructed from the training samples, and a family of degree-2 polynomial kernel functions is proposed for pairs of doublets or triplets. Then, a kernel classification framework is established to generalize many popular metric learning methods such as LMNN and ITML. The proposed framework can also suggest new metric learning methods, which can be efficiently implemented, interestingly, using the standard support vector machine (SVM) solvers. Two novel metric learning methods, namely, doublet-SVM and triplet-SVM, are then developed under the proposed framework. Experimental results show that doublet-SVM and triplet-SVM achieve competitive classification accuracies with state-of-the-art metric learning methods but with significantly less training time. PMID:25347887
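
The doublet construction can be illustrated with a minimal sketch. If the degree-2 polynomial kernel is taken to act on doublet difference vectors (a simplification of the kernel family described in the paper), it reduces to a squared inner product, which is a linear kernel over outer products u uᵀ, the space in which a Mahalanobis metric matrix lives:

```python
import numpy as np

def doublet_kernel(d1, d2):
    """Degree-2 polynomial kernel between two doublets.

    Each doublet is a pair (x_i, x_j) with difference u = x_i - x_j.
    K = (u1 . u2)^2 -- a sketch of the idea only; the paper's exact
    kernel family may differ.
    """
    u1 = d1[0] - d1[1]
    u2 = d2[0] - d2[1]
    return float(np.dot(u1, u2)) ** 2

# A close pair vs. a far-apart pair (hypothetical 2-D points):
x1, x2 = np.array([1.0, 0.0]), np.array([1.1, 0.1])
x3, x4 = np.array([0.0, 3.0]), np.array([4.0, 0.0])
k_near = doublet_kernel((x1, x2), (x1, x2))  # small: similar pair
k_far = doublet_kernel((x3, x4), (x3, x4))   # large: dissimilar pair
```

A precomputed Gram matrix of such kernel values could then be handed to any standard SVM solver, which is the mechanism the paper exploits.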

  16. Fighter agility metrics. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Liefer, Randall K.

    1990-01-01

    Fighter flying qualities and combat capabilities are currently measured and compared in terms relating to vehicle energy, angular rates and sustained acceleration. Criteria based on these measurable quantities have evolved over the past several decades and are routinely used to design aircraft structures, aerodynamics, propulsion and control systems. While these criteria, or metrics, have the advantage of being well understood, easily verified and repeatable during test, they tend to measure the steady state capability of the aircraft and not its ability to transition quickly from one state to another. Proposed new metrics to assess fighter aircraft agility are collected and analyzed. A framework for classification of these new agility metrics is developed and applied. A complete set of transient agility metrics is evaluated with a high fidelity, nonlinear F-18 simulation. Test techniques and data reduction methods are proposed. A method of providing cuing information to the pilot during flight test is discussed. The sensitivity of longitudinal and lateral agility metrics to deviations from the pilot cues is studied in detail. The metrics are shown to be largely insensitive to reasonable deviations from the nominal test pilot commands. Instrumentation required to quantify agility via flight test is also considered. With one exception, each of the proposed new metrics may be measured with instrumentation currently available.

  17. SAPHIRE 8 Quality Assurance Software Metrics Report

    SciTech Connect

    Kurt G. Vedros

    2011-08-01

The purpose of this review of software metrics is to examine the quality of the metrics gathered in the 2010 IV&V and to set an outline for results of updated metrics runs to be performed. We find from the review that the maintenance of accepted quality standards presented in the SAPHIRE 8 initial Independent Verification and Validation (IV&V) of April, 2010 is most easily achieved by continuing to utilize the tools used in that effort while adding a metric of bug tracking and resolution. Recommendations from the final IV&V were to continue periodic measurable metrics such as McCabe's complexity measure to ensure quality is maintained. The software tools used to measure quality in the IV&V were CodeHealer, Coverage Validator, Memory Validator, Performance Validator, and Thread Validator. These are evaluated based on their capabilities. We attempted to run their latest revisions with the newer Delphi 2010 based SAPHIRE 8 code that has been developed and were successful with all of the Validator series of tools on small tests. Another recommendation from the IV&V was to incorporate a bug tracking and resolution metric. To improve our capability of producing this metric, we integrated our current web reporting system with the SpiraTest test management software purchased earlier this year to track requirements traceability.

  18. Launch Vehicle Production and Operations Cost Metrics

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Neeley, James R.; Blackburn, Ruby F.

    2014-01-01

Traditionally, launch vehicle cost has been evaluated based on $/kg to orbit. This metric is calculated based on assumptions not typically met by a specific mission. These assumptions include the specified orbit, whether Low Earth Orbit (LEO), Geostationary Earth Orbit (GEO), or both. The metric also assumes the payload utilizes the full lift mass of the launch vehicle, which is rarely true even with secondary payloads.1,2,3 Other approaches for cost metrics have been evaluated, including unit cost of the launch vehicle and an approach that considers the full program production and operations costs.4 Unit cost considers the variable cost of the vehicle, and the definition of variable costs is discussed. The full program production and operations costs include both the variable costs and the manufacturing base. This metric also distinguishes operations costs from production costs, including pre-flight operational testing. Operations costs also cover the costs of flight operations, including control center operation and maintenance. Each of these three cost metrics shows different sensitivities to various aspects of launch vehicle cost drivers. The comparison of these metrics reveals the strengths and weaknesses of each, yielding an assessment useful for cost metric selection for launch vehicle programs.
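
A two-line sketch makes the abstract's caveat concrete: the $/kg metric changes substantially once the payload no longer uses the full lift mass. All numbers below are hypothetical:

```python
def cost_per_kg(launch_cost_usd, payload_mass_kg):
    """Traditional $/kg-to-orbit metric. Meaningful only under the
    assumption that the payload uses the stated lift mass."""
    return launch_cost_usd / payload_mass_kg

# Hypothetical vehicle: $90M per launch, 20 t of lift capacity to LEO.
full = cost_per_kg(90e6, 20000)    # metric at full utilization
actual = cost_per_kg(90e6, 12000)  # same vehicle, realistic manifest
```

The same launch produces two very different $/kg figures, which is why the abstract argues for comparing several cost metrics side by side.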

  19. An Underwater Color Image Quality Evaluation Metric.

    PubMed

    Yang, Miao; Sowmya, Arcot

    2015-12-01

Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially to different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of the underwater image pixels in the CIELab color space related to subjective evaluation indicates that the sharpness and colorfulness factors correlate well with subjective image quality perception. Based on these findings, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation with similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show good correlation between the UCIQE and the subjective mean opinion score. PMID:26513783
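
A rough reconstruction of such a linear-combination metric is sketched below. The coefficient values are those commonly quoted for UCIQE; the contrast and saturation estimators here are plausible assumptions, not the reference implementation:

```python
import numpy as np

def uciqe_style_score(lab_image, c=(0.4680, 0.2745, 0.2576)):
    """UCIQE-style score: weighted sum of chroma spread, luminance
    contrast and mean saturation in CIELab space.

    lab_image -- H x W x 3 array with L, a, b channels.
    The percentile contrast and saturation formulas are a plausible
    reconstruction, not the paper's reference implementation.
    """
    L = lab_image[..., 0].astype(float)
    a = lab_image[..., 1].astype(float)
    b = lab_image[..., 2].astype(float)
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()                       # colorfulness spread
    con_l = np.percentile(L, 99) - np.percentile(L, 1)  # luminance contrast
    # per-pixel saturation, guarded against division by zero
    sat = chroma / np.maximum(np.sqrt(chroma ** 2 + L ** 2), 1e-8)
    mu_s = sat.mean()
    return c[0] * sigma_c + c[1] * con_l + c[2] * mu_s
```

A flat gray frame scores near zero, while a high-contrast, colorful frame scores higher, matching the intuition the abstract describes.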

  20. Altmetrics - a complement to conventional metrics.

    PubMed

    Melero, Remedios

    2015-01-01

Emerging metrics based on the article level do not exclude traditional metrics based on citations to the journal, but complement them. Both can be employed in conjunction to offer a richer picture of an article's use, from the immediate to the long term. Article-level metrics (ALM) are the result of the aggregation of different data sources and the collection of content from multiple social network services. Sources used for the aggregation can be broken down into five categories: usage, captures, mentions, social media and citations. Data sources depend on the tool, but they include classic metrics indicators based on citations, academic social networks (Mendeley, CiteULike, Delicious) and social media (Facebook, Twitter, blogs, or YouTube, among others). Altmetrics is not synonymous with alternative metrics. Altmetrics are normally available early and allow the social impact of scholarly outputs to be assessed almost in real time. This paper briefly overviews the meaning of altmetrics and describes some of the existing tools used to apply these new metrics: Public Library of Science--Article-Level Metrics, Altmetric, Impactstory and Plum. PMID:26110028

  2. Kerr metric in Bondi-Sachs form

    SciTech Connect

    Bishop, Nigel T.; Venter, Liebrecht R.

    2006-04-15

    A metric representing the Kerr geometry has been obtained by Pretorius and Israel. We make coordinate transformations on this metric, to bring it into Bondi-Sachs form. We investigate the behavior of the metric near the axis of symmetry and confirm elementary flatness, and we also confirm that it is asymptotic to the Bondi-Sachs form of the Schwarzschild geometry. The results obtained here are needed so that numerical relativity codes based on the characteristic formalism can be applied to a situation that contains a rotating black hole.

  3. Inspecting baby Skyrmions with effective metrics

    NASA Astrophysics Data System (ADS)

    Gibbons, G. W.; Goulart, E.

    2014-05-01

    In the present paper we investigate the causal structure of the baby Skyrme model using appropriate geometrical tools. We discuss several features of excitations propagating on top of background solutions and show that the evolution of high frequency waves is governed by a curved effective geometry. Examples are given for which the effective metric describes the interaction between waves and solitonic solutions such as kinks, antikinks, and hedgehogs. In particular, it is shown how violent processes involving the collisions of solitons and antisolitons may induce metrics which are not globally hyperbolic. We argue that it might be illuminating to calculate the effective metric as a diagnostic test for pathological regimes in numerical simulations.

  4. Metrics for comparison of crystallographic maps

    SciTech Connect

    Urzhumtsev, Alexandre; Afonine, Pavel V.; Lunin, Vladimir Y.; Terwilliger, Thomas C.; Adams, Paul D.

    2014-10-01

    Numerical comparison of crystallographic contour maps is used extensively in structure solution and model refinement, analysis and validation. However, traditional metrics such as the map correlation coefficient (map CC, real-space CC or RSCC) sometimes contradict the results of visual assessment of the corresponding maps. This article explains such apparent contradictions and suggests new metrics and tools to compare crystallographic contour maps. The key to the new methods is rank scaling of the Fourier syntheses. The new metrics are complementary to the usual map CC and can be more helpful in map comparison, in particular when only some of their aspects, such as regions of high density, are of interest.
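
Rank scaling can be sketched in a few lines: each density value is replaced by its rank, so the subsequent correlation compares the ordering of map values (a Spearman-style CC) rather than their magnitudes. This is an illustration of the idea only, not the published procedure:

```python
import numpy as np

def rank_scale(values):
    """Replace each value by its rank, scaled to [0, 1].

    Rank scaling makes the comparison sensitive to the ordering of
    density values rather than their absolute magnitudes.
    """
    ranks = np.argsort(np.argsort(values.ravel()))
    return (ranks / max(values.size - 1, 1)).reshape(values.shape)

def rank_scaled_cc(map1, map2):
    """Correlation coefficient between two rank-scaled maps
    (a sketch of the idea; the published method has further details)."""
    r1, r2 = rank_scale(map1), rank_scale(map2)
    return float(np.corrcoef(r1.ravel(), r2.ravel())[0, 1])
```

A map and any monotone rescaling of it then agree perfectly, while the plain map CC can differ, which is the sort of contradiction with visual assessment the abstract mentions.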

  5. Assessment of proposed fighter agility metrics

    NASA Technical Reports Server (NTRS)

    Liefer, Randall K.; Valasek, John; Eggold, David P.; Downing, David R.

    1990-01-01

This paper presents the results of an analysis of proposed metrics to assess fighter aircraft agility. A novel framework for classifying these metrics is developed and applied. A set of transient metrics intended to quantify the axial and pitch agility of fighter aircraft is evaluated with a high fidelity, nonlinear F-18 simulation. Test techniques and data reduction methods are proposed, and sensitivity to pilot-introduced errors during flight testing is investigated. Results indicate that the power onset and power loss parameters are promising candidates for quantifying axial agility, while maximum pitch-up and pitch-down rates are promising for quantifying pitch agility.

  6. Quantitative Adaptation Analytics for Assessing Dynamic Systems of Systems.

    SciTech Connect

Gauthier, John H.; Miner, Nadine E.; Wilson, Michael L.; Le, Hai D.; Kao, Gio K.; Melander, Darryl J.; Longsine, Dennis Earl; Vander Meer, Robert Charles

    2015-01-01

    Our society is increasingly reliant on systems and interoperating collections of systems, known as systems of systems (SoS). These SoS are often subject to changing missions (e.g., nation- building, arms-control treaties), threats (e.g., asymmetric warfare, terrorism), natural environments (e.g., climate, weather, natural disasters) and budgets. How well can SoS adapt to these types of dynamic conditions? This report details the results of a three year Laboratory Directed Research and Development (LDRD) project aimed at developing metrics and methodologies for quantifying the adaptability of systems and SoS. Work products include: derivation of a set of adaptability metrics, a method for combining the metrics into a system of systems adaptability index (SoSAI) used to compare adaptability of SoS designs, development of a prototype dynamic SoS (proto-dSoS) simulation environment which provides the ability to investigate the validity of the adaptability metric set, and two test cases that evaluate the usefulness of a subset of the adaptability metrics and SoSAI for distinguishing good from poor adaptability in a SoS. Intellectual property results include three patents pending: A Method For Quantifying Relative System Adaptability, Method for Evaluating System Performance, and A Method for Determining Systems Re-Tasking.
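
The combination step can be illustrated with a minimal weighted index. The metric names, weights and normalization below are hypothetical stand-ins for the SoSAI construction defined in the report:

```python
def sos_adaptability_index(metrics, weights=None):
    """Weighted average of normalized adaptability metrics.

    metrics -- dict of name -> score already scaled to [0, 1].
    An illustrative stand-in for SoSAI; the report defines the actual
    metric set, normalization and weighting.
    """
    if weights is None:
        weights = {name: 1.0 for name in metrics}
    total_w = sum(weights[name] for name in metrics)
    return sum(weights[name] * metrics[name] for name in metrics) / total_w

# Comparing two hypothetical SoS designs:
design_a = {"flexibility": 0.8, "robustness": 0.6, "scalability": 0.7}
design_b = {"flexibility": 0.5, "robustness": 0.9, "scalability": 0.4}
better = "A" if sos_adaptability_index(design_a) > sos_adaptability_index(design_b) else "B"
```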

  7. MPLS/VPN traffic engineering: SLA metrics

    NASA Astrophysics Data System (ADS)

    Cherkaoui, Omar; MacGibbon, Brenda; Blais, Michel; Serhrouchni, Ahmed

    2001-07-01

Traffic engineering must be concerned with a broad definition of service that includes network availability, reliability and stability, as well as traditional traffic data on loss, throughput, delay and jitter. MPLS and Virtual Private Networks (VPNs) significantly contribute to security and Quality of Service (QoS) within communication networks, but there remains a need for metric measurement and evaluation. The purpose of this paper is to propose a methodology which gives a measure for LSP (Label Switching Path) metrics in VPN MPLS networks. We propose here a statistical method for the evaluation of those metrics. Statistical methodology is very important in this type of study since there is a large amount of data to consider. We use the notions of sample surveys, self-similar processes, linear regression, additive models and bootstrapping. The results obtained allow us to estimate the different metrics for such SLAs.
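
The bootstrap component mentioned above can be sketched as follows, here estimating a percentile confidence interval for the mean delay on a single LSP from hypothetical per-packet measurements:

```python
import random
import statistics

def bootstrap_ci(samples, stat=statistics.mean, n_boot=2000,
                 alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for an SLA metric
    (e.g. mean one-way delay on an LSP). Illustrative sketch only."""
    rng = random.Random(seed)
    n = len(samples)
    boot_stats = sorted(
        stat([samples[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = boot_stats[int((alpha / 2) * n_boot)]
    hi = boot_stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-packet delay measurements (ms) on one LSP:
delays = [9.8, 10.1, 10.4, 9.9, 10.0, 10.2, 9.7, 10.3, 10.0, 10.1]
low, high = bootstrap_ci(delays)
```

The same resampling scheme applies to any of the SLA metrics discussed (loss, jitter, throughput) by swapping the statistic passed to `stat`.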

  8. Classroom reconstruction of the Schwarzschild metric

    NASA Astrophysics Data System (ADS)

    Kassner, Klaus

    2015-11-01

A promising way to introduce general relativity (GR) in the classroom is to study the physical implications of certain given metrics, such as the Schwarzschild one. This involves lower mathematical expenditure than an approach focusing on differential geometry in its full glory and permits emphasizing physical aspects before attacking the field equations. Even so, in terms of motivation, the lack of justification for the metric employed may pose an obstacle. The paper discusses how to establish the weak-field limit of the Schwarzschild metric with a minimum of relatively simple physical assumptions, avoiding the field equations but admitting the determination of a single parameter from experiment. An attractive experimental candidate is the measurement of the perihelion precession of Mercury, because the result was already known before the completion of GR. It is shown how to determine the temporal and radial coefficients of the Schwarzschild metric to sufficiently high accuracy to obtain quantitative predictions for all the remaining classical tests of GR.
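
The end point of such a reconstruction is the standard exterior Schwarzschild line element, whose temporal and radial coefficients are the quantities the article proposes to pin down (quoted here as standard background, not from the abstract):

```latex
ds^2 = -\left(1 - \frac{2GM}{rc^2}\right) c^2\,dt^2
       + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2
       + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right).
```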

  9. How Metric Conversion Affects Administrative Practices

    ERIC Educational Resources Information Center

    Straka, M. K.

    1977-01-01

    Changes necessary in the administrative activities of educational institutions following conversion to the metric system are outlined for secretarial practices, purchasing, internal reporting and forms, computer operations, travel, publications, buildings and plant, new buildings, sport facilities, and health services. (MF)

  10. Effectively nonlocal metric-affine gravity

    NASA Astrophysics Data System (ADS)

    Golovnev, Alexey; Koivisto, Tomi; Sandstad, Marit

    2016-03-01

    In metric-affine theories of gravity such as the C-theories, the spacetime connection is associated to a metric that is nontrivially related to the physical metric. In this article, such theories are rewritten in terms of a single metric, and it is shown that they can be recast as effectively nonlocal gravity. With some assumptions, known ghost-free theories with nonsingular and cosmologically interesting properties may be recovered. Relations between different formulations are analyzed at both perturbative and nonperturbative levels, taking carefully into account subtleties with boundary conditions in the presence of integral operators in the action, and equivalences between theories related by nonlocal redefinitions of the fields are verified at the level of equations of motion. This suggests a possible geometrical interpretation of nonlocal gravity as an emergent property of non-Riemannian spacetime structure.