Minimum distance classification in remote sensing
NASA Technical Reports Server (NTRS)
Wacker, A. G.; Landgrebe, D. A.
1972-01-01
The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show that the minimum distance classifier's performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
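The parametric minimum distance classifier compared above amounts to assigning each observation to the nearest class mean. A minimal sketch in Python (toy data, not the paper's multispectral samples):

```python
import numpy as np

def fit_class_means(X, y):
    """Estimate one mean (prototype) vector per class from training samples."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def minimum_distance_classify(x, means):
    """Assign x to the class whose mean is nearest in Euclidean distance."""
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

# Two toy "spectral" classes
X = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.0], [5.1, 4.8]])
y = np.array([0, 0, 1, 1])
means = fit_class_means(X, y)
print(minimum_distance_classify(np.array([1.1, 1.0]), means))  # → 0
```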
The evaluation of alternate methodologies for land cover classification in an urbanizing area
NASA Technical Reports Server (NTRS)
Smekofski, R. M.
1981-01-01
The usefulness of LANDSAT in classifying land cover and in identifying and classifying land use change was investigated using an urbanizing area as the study area. The primary focus of the study was determining the best technique for classification. The many computer-assisted techniques available to analyze LANDSAT data were evaluated. Techniques of statistical training (polygons from CRT, unsupervised clustering, polygons from digitizer, and binary masks) were tested with minimum distance to the mean, maximum likelihood, and canonical analysis with minimum distance to the mean classifiers. The twelve output images were compared to photointerpreted samples, ground-verified samples, and a current land use data base. Results indicate that for a reconnaissance inventory, unsupervised training with the canonical analysis-minimum distance classifier is the most efficient. If more detailed ground truth and ground verification are available, digitizer-polygon training with the canonical analysis-minimum distance classifier is more accurate.
Dong, Wei-Feng; Canil, Sarah; Lai, Raymond; Morel, Didier; Swanson, Paul E.; Izevbaye, Iyare
2018-01-01
A new automated MYC IHC classifier based on bivariate logistic regression is presented. The predictor relies on image analysis developed with the open-source ImageJ platform. From a histologic section immunostained for MYC protein, 2 dimensionless quantitative variables are extracted: (a) relative distance between nuclei positive for MYC IHC based on a Euclidean minimum spanning tree graph and (b) coefficient of variation of the MYC IHC stain intensity among MYC IHC-positive nuclei. Distance between positive nuclei is suggested to inversely correlate with MYC gene rearrangement status, whereas coefficient of variation is suggested to inversely correlate with physiological regulation of MYC protein expression. The bivariate classifier was compared with 2 other MYC IHC classifiers (based on percentage of MYC IHC-positive nuclei), all tested on 113 lymphomas including mostly diffuse large B-cell lymphomas with known MYC fluorescent in situ hybridization (FISH) status. The bivariate classifier strongly outperformed the “percentage of MYC IHC-positive nuclei” methods in predicting MYC+ FISH status, with 100% sensitivity (95% confidence interval, 94-100) associated with 80% specificity. The test is rapidly performed and might at a minimum provide primary IHC screening for MYC gene rearrangement status in diffuse large B-cell lymphomas. Furthermore, as this bivariate classifier actually predicts “permanent overexpressed MYC protein status,” it might identify nontranslocation-related chromosomal anomalies missed by FISH. PMID:27093450
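The two image-analysis variables can be sketched as follows (a hedged illustration: the nuclei centroids and intensities are synthetic, and the normalization that makes the published variables dimensionless is not reproduced here):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def emst_mean_edge(points):
    """Mean edge length of the Euclidean minimum spanning tree over
    MYC IHC-positive nuclei centroids (a proxy for inter-nuclei distance)."""
    mst = minimum_spanning_tree(cdist(points, points))
    return mst.data.mean()

def coefficient_of_variation(intensities):
    """CV = std / mean of stain intensity among positive nuclei."""
    a = np.asarray(intensities, dtype=float)
    return a.std() / a.mean()

nuclei = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(emst_mean_edge(nuclei))               # → 1.0 (two unit-length MST edges)
print(coefficient_of_variation([2, 2, 2]))  # → 0.0
```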
Applying six classifiers to airborne hyperspectral imagery for detecting giant reed
USDA-ARS?s Scientific Manuscript database
This study evaluated and compared six different image classifiers, including minimum distance (MD), Mahalanobis distance (MAHD), maximum likelihood (ML), spectral angle mapper (SAM), mixture tuned matched filtering (MTMF) and support vector machine (SVM), for detecting and mapping giant reed (Arundo...
A time-frequency classifier for human gait recognition
NASA Astrophysics Data System (ADS)
Mobasseri, Bijan G.; Amin, Moeness G.
2009-05-01
Radar has established itself as an effective all-weather, day or night sensor. Radar signals can penetrate walls and provide information on moving targets. Recently, radar has been used as an effective biometric sensor for classification of gait. The return from a coherent radar system contains a frequency offset in the carrier frequency, known as the Doppler effect. The movements of arms and legs give rise to micro-Doppler, which can be clearly detailed in the time-frequency domain using traditional or modern time-frequency signal representations. In this paper we propose a gait classifier based on subspace learning using principal component analysis (PCA). The training set consists of feature vectors defined as either time or frequency snapshots taken from the spectrogram of radar backscatter. We show that the gait signature is captured effectively in the feature vectors. The feature vectors are then used to train a minimum distance classifier based on the Mahalanobis distance metric. Results show that gait classification with high accuracy and a short observation window is achievable using the proposed classifier.
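The final stage described above, a minimum distance classifier with a Mahalanobis metric over the feature vectors, can be sketched like this (synthetic features; the pooled-covariance choice is an assumption, not a detail taken from the paper):

```python
import numpy as np

def fit_mdc(X, y):
    """Minimum distance classifier with a Mahalanobis metric:
    per-class means plus a pooled within-class covariance."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    n, d = X.shape
    scatter = np.zeros((d, d))
    for c in classes:
        diff = X[y == c] - means[c]
        scatter += diff.T @ diff
    cov = scatter / (n - len(classes)) + 1e-9 * np.eye(d)  # regularized
    return means, np.linalg.inv(cov)

def mdc_classify(x, means, cov_inv):
    """Assign x to the class mean with the smallest Mahalanobis distance."""
    return min(means, key=lambda c: (x - means[c]) @ cov_inv @ (x - means[c]))

X = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2],
              [3.0, 3.0], [3.2, 3.0], [3.0, 3.2]])
y = np.array([0, 0, 0, 1, 1, 1])
means, cov_inv = fit_mdc(X, y)
print(mdc_classify(np.array([0.1, 0.1]), means, cov_inv))  # → 0
```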
Zourmand, Alireza; Ting, Hua-Nong; Mirhassani, Seyed Mostafa
2013-03-01
Speech is one of the prevalent communication mediums for humans. Identifying the gender of a child speaker based on his/her speech is crucial in telecommunication and speech therapy. This article investigates the use of fundamental and formant frequencies from sustained vowel phonation to distinguish the gender of Malay children aged between 7 and 12 years. The Euclidean minimum distance and multilayer perceptron were used to classify the gender of 360 Malay children based on different combinations of fundamental and formant frequencies (F0, F1, F2, and F3). The Euclidean minimum distance with normalized frequency data achieved a classification accuracy of 79.44%, which was higher than that of the nonnormalized frequency data. Age-dependent modeling was used to improve the accuracy of gender classification. The Euclidean distance method obtained 84.17% based on the optimal classification accuracy for all age groups. The accuracy was further increased to 99.81% using multilayer perceptron based on mel-frequency cepstral coefficients. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Korczowski, L; Congedo, M; Jutten, C
2015-08-01
The classification of electroencephalographic (EEG) data recorded from multiple users simultaneously is an important challenge in the field of Brain-Computer Interfaces (BCI). In this paper we compare different approaches for the classification of single-trial Event-Related Potentials (ERPs) from two subjects playing a collaborative BCI game. The minimum distance to mean (MDM) classifier in a Riemannian framework is extended to exploit the diversity of the inter-subject spatio-temporal statistics (MDM-hyper) or to merge multiple classifiers (MDM-multi). We show that both these classifiers significantly outperform the mean performance of the two users and analogous classifiers based on step-wise linear discriminant analysis. More importantly, the MDM-multi outperforms the performance of the best player within the pair.
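A hedged sketch of the Riemannian MDM idea: classify a trial's spatial covariance matrix by its affine-invariant Riemannian distance to per-class mean covariances. For brevity the class means below are given directly; in practice they would be geometric means of training covariances, and the MDM-hyper / MDM-multi extensions are not shown.

```python
import numpy as np

def spd_log(C):
    """Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(w)) @ V.T

def riemannian_distance(A, B):
    """Affine-invariant Riemannian distance between SPD covariance matrices:
    ||log(A^{-1/2} B A^{-1/2})||_F."""
    w, V = np.linalg.eigh(A)
    A_inv_half = V @ np.diag(w ** -0.5) @ V.T
    M = A_inv_half @ B @ A_inv_half
    M = (M + M.T) / 2  # symmetrize against round-off
    return np.linalg.norm(spd_log(M), "fro")

def mdm_classify(C, class_means):
    """Assign a trial covariance C to the class whose mean covariance is nearest."""
    return min(class_means, key=lambda k: riemannian_distance(class_means[k], C))

I2 = np.eye(2)
print(mdm_classify(1.1 * I2, {"target": I2, "nontarget": 4 * I2}))  # → target
```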
Texture analysis of pulmonary parenchyma in normal and emphysematous lung
NASA Astrophysics Data System (ADS)
Uppaluri, Renuka; Mitsa, Theophano; Hoffman, Eric A.; McLennan, Geoffrey; Sonka, Milan
1996-04-01
Tissue characterization using texture analysis is gaining increasing importance in medical imaging. We present a completely automated method for discriminating between normal and emphysematous regions in CT images. This method involves extracting seventeen features based on statistical, hybrid, and fractal texture models. The best subset of features is derived from the training set using the divergence technique. A minimum distance classifier is used to classify the samples into one of the two classes, normal and emphysema. Sensitivity, specificity, and accuracy values achieved were 80% or greater in most cases, proving that texture analysis holds great promise for identifying emphysema.
Lenormand, Maxime; Huet, Sylvie; Deffuant, Guillaume
2012-01-01
We use a minimum requirement approach to derive the number of jobs in proximity services per inhabitant in French rural municipalities. We first classify the municipalities according to their time distance in minutes by car to the municipality where the inhabitants most frequently go to obtain services (called the MFM). For each set corresponding to a range of time distances to the MFM, we perform a quantile regression estimating the minimum number of service jobs per inhabitant, which we interpret as an estimate of the number of proximity jobs per inhabitant. We observe that the minimum number of service jobs per inhabitant is smaller in small municipalities. Moreover, for municipalities of similar sizes, the number of proximity service jobs per inhabitant increases with the distance to the MFM.
Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis
NASA Astrophysics Data System (ADS)
Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert
2005-12-01
A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
Real-time stop sign detection and distance estimation using a single camera
NASA Astrophysics Data System (ADS)
Wang, Wenpeng; Su, Yuxuan; Cheng, Ming
2018-04-01
In the modern world, the rapid development of driver assistance systems has made driving much easier than before. To increase onboard safety, a method is proposed to detect STOP signs and estimate their distance using a single camera. For STOP sign detection, an LBP-cascade classifier is applied to identify the sign in the image, and distance estimation is based on the principle of pinhole imaging. A road test was conducted using a detection system built with a CMOS camera and software developed in Python with the OpenCV library. Results show that the proposed system reaches a detection accuracy of at most 97.6% at 10 m and at least 95.0% at 20 m, with a maximum distance estimation error of 5%. The results indicate that the system is effective and has the potential to be used in both autonomous driving and advanced driver assistance systems.
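The pinhole-imaging distance estimate mentioned above follows from similar triangles: distance = focal length (in pixels) × real object height / imaged height (in pixels). A minimal sketch with an assumed focal length (the paper's camera calibration values are not given here):

```python
def estimate_distance(focal_px, real_height_m, pixel_height):
    """Pinhole model: distance[m] = focal_length[px] * real_height[m] / height_in_image[px]."""
    return focal_px * real_height_m / pixel_height

# A 0.75 m tall stop sign imaged 52.5 px tall, with an assumed 700 px focal length:
print(estimate_distance(700.0, 0.75, 52.5))  # → 10.0 (metres)
```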
NASA Astrophysics Data System (ADS)
Du, Peijun; Tan, Kun; Xing, Xiaoshi
2010-12-01
Combining Support Vector Machines (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in a Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which is time-consuming and can lower classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are also proposed in this paper. An Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing image with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. In contrast with some traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), and with an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in a Reproducing Kernel Hilbert Space noticeably improves classification accuracy.
Subsurface event detection and classification using Wireless Signal Networks.
Yoon, Suk-Un; Ghazanfari, Ehsan; Cheng, Liang; Pamukcu, Sibel; Suleiman, Muhannad T
2012-11-05
Subsurface environment sensing and monitoring applications, such as detection of water intrusion or a landslide, which could significantly change the physical properties of the host soil, can be accomplished using a novel concept, Wireless Signal Networks (WSiNs). Wireless signal networks take advantage of variations of radio signal strength across the distributed underground sensor nodes of WSiNs to monitor and characterize the sensed area. To characterize subsurface environments for event detection and classification, this paper provides a detailed list of soil properties and experimental data on how radio propagation is affected by them in subsurface communication environments. Experiments demonstrated that calibrated wireless signal strength variations can be used as indicators to sense changes in the subsurface environment. The concept of WSiNs for subsurface event detection is evaluated with applications such as detection of water intrusion, relative density change, and relative motion using actual underground sensor nodes. To classify geo-events using the measured signal strength as the main indicator, we propose a window-based minimum distance classifier based on Bayesian decision theory. The window-based classifier for wireless signal networks has two steps: event detection and event classification. After event detection, the window-based classifier classifies geo-events within the regions where events occur, called classification windows. The proposed window-based classification method is evaluated with a water leakage experiment measured in the laboratory. In these experiments, the proposed detection and classification method based on wireless signal networks can detect and classify subsurface events.
Subsurface Event Detection and Classification Using Wireless Signal Networks
Yoon, Suk-Un; Ghazanfari, Ehsan; Cheng, Liang; Pamukcu, Sibel; Suleiman, Muhannad T.
2012-01-01
PMID:23202191
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of the statistical parameters involved in the classifiers, which cannot be reliably estimated from only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples needed to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively, based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol is proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would improve classifier reliability and provide great convenience in microarray-based cancer outcome prediction. PMID:23861920
Land cover mapping after the tsunami event over Nanggroe Aceh Darussalam (NAD) province, Indonesia
NASA Astrophysics Data System (ADS)
Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Alias, A. N.; Mohd. Saleh, N.; Wong, C. J.; Surbakti, M. S.
2008-03-01
Remote sensing offers an important means of detecting and analyzing temporal changes occurring in our landscape. This research used remote sensing to quantify land use/land cover changes in the Nanggroe Aceh Darussalam (NAD) province, Indonesia on a regional scale. The objective of this paper is to assess the changes produced from the analysis of Landsat TM data. A Landsat TM image was used to develop a land cover classification map for 27 March 2005. Four supervised classification techniques (Maximum Likelihood, Minimum Distance-to-Mean, Parallelepiped, and Parallelepiped with Maximum Likelihood Classifier Tiebreaker) were applied to the satellite image. Training sites and accuracy assessment were needed for the supervised classification techniques. The training sites were established using polygons based on the colour image. High detection accuracy (>80%) and overall Kappa (>0.80) were achieved by the Parallelepiped with Maximum Likelihood Classifier Tiebreaker classifier in this study. This preliminary study produced a promising result, indicating that land cover mapping can be carried out using remote sensing classification of satellite digital imagery.
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
The minimum distance approach to classification
NASA Technical Reports Server (NTRS)
Wacker, A. G.; Landgrebe, D. A.
1971-01-01
Work to advance the state of the art of minimum distance classification is reported. This is accomplished through a combination of theoretical and comprehensive experimental investigations based on multispectral scanner data. A survey of the literature for suitable distance measures was conducted and the results of this survey are presented. It is shown that minimum distance classification, using density estimators and Kullback-Leibler numbers as the distance measure, is equivalent to a form of maximum likelihood sample classification. It is also shown that for the parametric case, minimum distance classification is equivalent to nearest neighbor classification in the parameter space.
Ant colony optimization for solving university facility layout problem
NASA Astrophysics Data System (ADS)
Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin
2013-04-01
Quadratic Assignment Problems (QAP) are classified as NP-hard. QAP has been used to model many problems in areas such as operations research, combinatorial data analysis, and parallel and distributed computing, as well as optimization problems such as graph partitioning and the Traveling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristics, and metaheuristic approaches to solve QAP. QAP is widely applied to facility layout problems (FLP). In this paper we used QAP to model a university facility layout problem, in which 8 facilities need to be assigned to 8 locations. Hence we modeled a QAP with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the minimum product of flows and distances is obtained. Flow is the movement from one facility to another, whereas distance is the distance between the locations of the facilities. The objective of the QAP here is to minimize the total walking (flow) of lecturers from one destination to another (distance).
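The QAP objective, minimizing the total product of flows and distances over all assignments, can be stated compactly in code. For the tiny instance below, exhaustive search suffices; a real n = 8 instance would use ACO instead. The flow and distance matrices are made-up illustrations, not the paper's data.

```python
from itertools import permutations

def qap_cost(perm, flow, dist):
    """Total cost: sum over facility pairs (i, j) of flow(i, j) * dist(perm[i], perm[j]),
    where perm[i] is the location assigned to facility i."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def brute_force_qap(flow, dist):
    """Exhaustive search over all assignments (only feasible for tiny n)."""
    n = len(flow)
    return min(permutations(range(n)), key=lambda p: qap_cost(p, flow, dist))

flow = [[0, 5, 1], [5, 0, 2], [1, 2, 0]]   # hypothetical lecturer movements
dist = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]   # hypothetical walking distances
best = brute_force_qap(flow, dist)
print(best, qap_cost(best, flow, dist))  # → (0, 1, 2) 26
```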
Mapping membrane activity in undiscovered peptide sequence space using machine learning
Fulan, Benjamin M.; Wong, Gerard C. L.
2016-01-01
There are some ∼1,100 known antimicrobial peptides (AMPs), which permeabilize microbial membranes but have diverse sequences. Here, we develop a support vector machine (SVM)-based classifier to investigate ⍺-helical AMPs and the interrelated nature of their functional commonality and sequence homology. The SVM is used to search the undiscovered peptide sequence space and identify Pareto-optimal candidates that simultaneously maximize the distance σ from the SVM hyperplane (thus maximizing “antimicrobialness”) and ⍺-helicity, but minimize mutational distance to known AMPs. By calibrating SVM machine learning results with killing assays and small-angle X-ray scattering (SAXS), we find that the SVM metric σ correlates not with a peptide's minimum inhibitory concentration (MIC), but rather with its ability to generate negative Gaussian membrane curvature. This surprising result provides a topological basis for the membrane activity common to AMPs. Moreover, we highlight an important distinction between the maximal recognizability of a sequence to a trained AMP classifier (its ability to generate membrane curvature) and its maximal antimicrobial efficacy. As mutational distances are increased from known AMPs, we find AMP-like sequences that are increasingly difficult for nature to discover via simple mutation. Using the sequence map as a discovery tool, we find an unexpectedly diverse taxonomy of sequences that are just as membrane-active as known AMPs, but with a broad range of primary functions distinct from AMP functions, including endogenous neuropeptides, viral fusion proteins, topogenic peptides, and amyloids. The SVM classifier is useful as a general detector of membrane activity in peptide sequences. PMID:27849600
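The quantity σ used above is the signed distance of a sequence's feature vector from the trained linear SVM hyperplane w·x + b = 0, i.e. the decision value normalized by ‖w‖. A sketch with hypothetical weights (the paper's trained model and features are not reproduced):

```python
import numpy as np

def hyperplane_distance(x, w, b):
    """Signed distance sigma of feature vector x from the hyperplane w.x + b = 0;
    larger positive values lie deeper on the 'AMP-like' side of the margin."""
    return (np.dot(w, x) + b) / np.linalg.norm(w)

# Hypothetical 2-feature example: w = [3, 4], b = 0
print(hyperplane_distance(np.array([3.0, 4.0]), np.array([3.0, 4.0]), 0.0))  # → 5.0
```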
Classification of hyperspectral imagery with neural networks: comparison to conventional tools
NASA Astrophysics Data System (ADS)
Merényi, Erzsébet; Farrand, William H.; Taranik, James V.; Minor, Timothy B.
2014-12-01
Efficient exploitation of hyperspectral imagery is of great importance in remote sensing. Artificial intelligence approaches have been receiving favorable reviews for classification of hyperspectral data because the complexity of such data challenges the limitations of many conventional methods. Artificial neural networks (ANNs) have been shown to outperform traditional classifiers in many situations. However, studies that use the full spectral dimensionality of hyperspectral images to classify a large number of surface covers are scarce, if not non-existent. We advocate the need for methods that can handle the full dimensionality and a large number of classes to retain the discovery potential and the ability to discriminate classes with subtle spectral differences. We demonstrate that such a method exists in the family of ANNs. We compare the maximum likelihood, Mahalanobis distance, minimum distance, spectral angle mapper, and a hybrid ANN classifier on real hyperspectral AVIRIS data, using the full spectral resolution to map 23 cover types from a small training set. Rigorous evaluation of the classification accuracies shows that the ANN outperforms the other methods and achieves ≈90% accuracy on test data.
The distance function effect on k-nearest neighbor classification for medical datasets.
Hu, Li-Yu; Huang, Min-Wei; Ke, Shih-Wen; Tsai, Chih-Fong
2016-01-01
K-nearest neighbor (k-NN) classification is a conventional non-parametric classifier that has been used as the baseline in many pattern classification problems. It is based on measuring the distances between the test data and each of the training data to decide the final classification output. Although the Euclidean distance function is the most widely used distance metric in k-NN, few studies have examined the classification performance of k-NN with different distance functions, especially for various medical domain problems. Therefore, the aim of this paper is to investigate whether the distance function can affect k-NN performance over different medical datasets. Our experiments are based on three different types of medical datasets containing categorical, numerical, and mixed types of data, and four different distance functions, including Euclidean, cosine, Chi square, and Minkowski, are used during k-NN classification individually. The experimental results show that using the Chi square distance function is the best choice for the three different types of datasets. However, the cosine and Euclidean (and Minkowski) distance functions perform the worst over the mixed type of datasets. In this paper, we demonstrate that the chosen distance function can affect the classification accuracy of the k-NN classifier. For medical domain datasets including categorical, numerical, and mixed types of data, k-NN based on the Chi square distance function performs the best.
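The experimental setup, k-NN with interchangeable distance functions, can be sketched as follows (toy data; the chi-square form assumes non-negative feature values, as is typical for histogram-like features):

```python
import numpy as np
from collections import Counter

def knn_predict(x, X_train, y_train, k=3, dist=None):
    """k-NN classification with a pluggable distance function (Euclidean by default)."""
    if dist is None:
        dist = lambda a, b: np.linalg.norm(a - b)
    order = np.argsort([dist(x, xi) for xi in X_train])[:k]
    return Counter(y_train[i] for i in order).most_common(1)[0][0]

# Two of the alternative metrics studied
cosine_dist = lambda a, b: 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
chi_square_dist = lambda a, b: np.sum((a - b) ** 2 / (a + b + 1e-12))

X_train = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(np.array([0.95, 0.05]), X_train, y_train, k=3))  # → 0
```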
Rai, Shesh N; Trainor, Patrick J; Khosravi, Farhad; Kloecker, Goetz; Panchapakesan, Balaji
2016-01-01
The development of biosensors that produce time series data will facilitate improvements in biomedical diagnostics and in personalized medicine. The time series produced by these devices often contain characteristic features arising from biochemical interactions between the sample and the sensor. To use such characteristic features for determining sample class, similarity-based classifiers can be utilized. However, the construction of such classifiers is complicated by the variability in the time domains of such series, which renders traditional distance metrics such as Euclidean distance ineffective in distinguishing between biological variance and time domain variance. The dynamic time warping (DTW) algorithm is a sequence alignment algorithm that can be used to align two or more series to facilitate quantifying similarity. In this article, we evaluated the performance of DTW distance-based similarity classifiers for classifying time series that mimic electrical signals produced by nanotube biosensors. Simulation studies demonstrated the positive performance of such classifiers in discriminating between time series containing characteristic features that are obscured by noise in the intensity and time domains. We then applied a DTW distance-based k-nearest neighbors classifier to distinguish the presence/absence of a mesenchymal biomarker in cancer cells in buffy coats in a blinded test. Using a train-test approach, we found that the classifier had high sensitivity (90.9%) and specificity (81.8%) in differentiating between EpCAM-positive MCF7 cells spiked in buffy coats and those in plain buffy coats.
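A minimal dynamic-programming implementation of the DTW distance used by such classifiers (univariate series; the k-NN wrapper and any warping-window constraint from the article are omitted):

```python
import numpy as np

def dtw_distance(s, t):
    """Classic O(len(s) * len(t)) dynamic-programming DTW distance.
    D[i, j] = local cost + min over the three allowed predecessor cells."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Warping absorbs differences in the time domain: these series differ in
# length but align perfectly, so their DTW distance is 0.
print(dtw_distance([0, 0, 1, 1], [0, 1]))  # → 0.0
```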
A Linguistic Image of Nature: The Burmese Numerative Classifier System
ERIC Educational Resources Information Center
Becker, Alton L.
1975-01-01
The Burmese classifier system is coherent because it is based upon a single elementary semantic dimension: deixis. On that dimension, four distances are distinguished, distances which metaphorically substitute for other conceptual relations between people and other living beings, people and things, and people and concepts. (Author/RM)
Multivariate Spectral Analysis to Extract Materials from Multispectral Data
1993-09-01
Euclidean minimum distance and conventional Bayesian classifier suggest some fundamental instabilities. Two candidate sources are (1) inadequate...
[remainder of record: garbled confusion-matrix residue (classes include water and concrete); unrecoverable]
2014-01-01
Background Left bundle branch block (LBBB) and right bundle branch block (RBBB) not only mask electrocardiogram (ECG) changes that reflect diseases but also indicate important underlying pathology. The timely detection of LBBB and RBBB is critical in the treatment of cardiac diseases. Inter-patient heartbeat classification is based on independent training and testing sets to construct and evaluate a heartbeat classification system. Therefore, a heartbeat classification system with a high performance evaluation possesses a strong predictive capability for unknown data. The aim of this study was to propose a method for inter-patient classification of heartbeats to accurately detect LBBB and RBBB from the normal beat (NORM). Methods This study proposed a heartbeat classification method through a combination of three different types of classifiers: a minimum distance classifier constructed between NORM and LBBB; a weighted linear discriminant classifier between NORM and RBBB based on Bayesian decision making using posterior probabilities; and a linear support vector machine (SVM) between LBBB and RBBB. Each classifier was used with matching features to obtain better classification performance. The final types of the test heartbeats were determined using a majority voting strategy through the combination of class labels from the three classifiers. The optimal parameters for the classifiers were selected using cross-validation on the training set. The effects of different lead configurations on the classification results were assessed, and the performance of these three classifiers was compared for the detection of each pair of heartbeat types. Results The study results showed that a two-lead configuration exhibited better classification results compared with a single-lead configuration. The construction of a classifier with good performance between each pair of heartbeat types significantly improved the heartbeat classification performance. 
The results showed a sensitivity of 91.4% and a positive predictive value of 37.3% for LBBB and a sensitivity of 92.8% and a positive predictive value of 88.8% for RBBB. Conclusions A multi-classifier ensemble method was proposed based on inter-patient data and demonstrated a satisfactory classification performance. This approach has the potential for application in clinical practice to distinguish LBBB and RBBB from NORM of unknown patients. PMID:24903422
Huang, Huifang; Liu, Jie; Zhu, Qiang; Wang, Ruiping; Hu, Guangshu
2014-06-05
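The majority-voting step that combines the three pairwise classifiers can be sketched as follows; the label strings and the tie-breaking rule are illustrative assumptions, not details taken from the paper:

```python
from collections import Counter

def majority_vote(labels):
    """Combine the class labels predicted by the three pairwise classifiers
    (e.g. NORM-vs-LBBB, NORM-vs-RBBB, LBBB-vs-RBBB) into a final
    heartbeat type by majority vote."""
    counts = Counter(labels)
    # Ties are broken by first occurrence; the paper does not specify its
    # tie-breaking rule, so this choice is an assumption.
    return counts.most_common(1)[0][0]

beat = majority_vote(["NORM", "NORM", "RBBB"])  # two of three classifiers agree
```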
Reliability Based Geometric Design of Horizontal Circular Curves
NASA Astrophysics Data System (ADS)
Rajbongshi, Pabitra; Kalita, Kuldeep
2018-06-01
Geometric design of a horizontal circular curve primarily involves the radius of the curve and the stopping sight distance at the curve section. The minimum radius is decided based on the lateral thrust exerted on vehicles, and the minimum stopping sight distance is provided to maintain safety in the longitudinal direction of travel. The available sight distance at a site can be regulated by changing the radius and the middle ordinate at the curve section. Both radius and sight distance depend on design speed. The speed of vehicles at any road section is a variable parameter, and therefore the 98th percentile speed is normally taken as the design speed. This work presents a probabilistic approach for evaluating stopping sight distance that considers the variability of all input parameters of sight distance. It is observed that the 98th percentile sight distance value is much lower than the sight distance corresponding to the 98th percentile speed. The distribution of the sight distance parameter is also studied and found to follow a lognormal distribution. Finally, reliability-based design charts are presented for both plain and hilly regions, considering the effect of lateral thrust.
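The deterministic core of the stopping-sight-distance calculation, and the percentile comparison the paper makes, can be sketched as below; the input distributions and parameter values are assumptions for illustration only, not the paper's data:

```python
import math
import random

def stopping_sight_distance(v, t_r, f, g=9.81):
    """SSD (m) = reaction distance + braking distance.
    v: speed (m/s), t_r: reaction time (s), f: longitudinal friction coefficient."""
    return v * t_r + v ** 2 / (2 * g * f)

def percentile(xs, p):
    """Simple nearest-rank percentile of a list of samples."""
    xs = sorted(xs)
    return xs[int(round(p / 100 * (len(xs) - 1)))]

random.seed(0)
# Assumed input distributions (illustrative): lognormal speed around ~80 km/h,
# Gaussian reaction time and friction coefficient.
speeds = [random.lognormvariate(math.log(22.0), 0.10) for _ in range(10000)]
ssds = [stopping_sight_distance(v,
                                random.gauss(2.5, 0.3),    # reaction time
                                random.gauss(0.35, 0.03))  # friction
        for v in speeds]
# 98th percentile of the SSD distribution vs. SSD at the 98th percentile speed:
ssd_p98 = percentile(ssds, 98)
ssd_at_v98 = stopping_sight_distance(percentile(speeds, 98), 2.5, 0.35)
```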
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, whose minimum distance increases linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging
NASA Astrophysics Data System (ADS)
Chen, Lei; Kamel, Mohamed S.
2016-01-01
In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
Schwenk
1998-11-15
We present a new classification architecture based on autoassociative neural networks that are used to learn discriminant models of each class. The proposed architecture has several interesting properties with respect to other model-based classifiers like nearest-neighbors or radial basis functions: it has a low computational complexity and uses a compact distributed representation of the models. The classifier is also well suited for the incorporation of a priori knowledge by means of a problem-specific distance measure. In particular, we will show that tangent distance (Simard, Le Cun, & Denker, 1993) can be used to achieve transformation invariance during learning and recognition. We demonstrate the application of this classifier to optical character recognition, where it has achieved state-of-the-art results on several reference databases. Relations to other models, in particular those based on principal component analysis, are also discussed.
A minimum spanning forest based classification method for dedicated breast CT images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei, E-mail: bfei@emory.edu
Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors’ classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT images with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging.
Zhong, Hua; Redo-Sanchez, Albert; Zhang, X-C
2006-10-02
We present terahertz (THz) reflective spectroscopic focal-plane imaging of four explosive and bio-chemical materials (2,4-DNT, theophylline, RDX and glutamic acid) at a standoff imaging distance of 0.4 m. The two-dimensional (2-D) nature of this technique enables a fast acquisition time and nearly camera-like operation, compared with the most commonly used point emission-detection and raster-scanning configuration. The samples are identified by their absorption peaks extracted from the negative derivative of the reflection coefficient with respect to frequency (-dr/dv) of each pixel. Classification of the samples is achieved by using minimum distance classifier and neural network methods, with a rate of accuracy above 80% and a false alarm rate below 8%. This result supports the future application of THz time-domain spectroscopy (TDS) in standoff distance sensing, imaging, and identification.
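A minimum distance classifier of the kind used here assigns each pixel's spectral feature vector to the class with the nearest mean; a minimal sketch (the material names and feature values below are invented for illustration):

```python
import math

def minimum_distance_classify(feature_vec, class_means):
    """Assign a feature vector to the class whose mean vector is nearest
    in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(class_means, key=lambda c: dist(feature_vec, class_means[c]))

# Hypothetical absorption-feature means for two materials:
means = {"RDX": (0.82, 1.50, 1.92), "2,4-DNT": (1.08, 1.30, 2.45)}
label = minimum_distance_classify((0.80, 1.48, 1.95), means)
```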
Wang, Guizhou; Liu, Jianbo; He, Guojin
2013-01-01
This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy. PMID:24453808
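The area-dominant mapping step described above can be sketched as follows; the 60% threshold and the class names are illustrative assumptions, not values from the paper:

```python
from collections import Counter

def label_region(pixel_labels, threshold=0.6):
    """Map a segmented region to the majority pixel class if its area
    proportion exceeds `threshold`; otherwise leave it unclassified (None)
    for later minimum-distance-to-mean reclassification."""
    majority, n = Counter(pixel_labels).most_common(1)[0]
    return majority if n / len(pixel_labels) > threshold else None

region_a = label_region(["vegetation"] * 7 + ["water"] * 3)  # 70% dominant class
region_b = label_region(["vegetation"] * 5 + ["water"] * 5)  # no dominant class
```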
Discrimination of different sub-basins on Tajo River based on water influence factor
NASA Astrophysics Data System (ADS)
Bermudez, R.; Gascó, J. M.; Tarquis, A. M.; Saa-Requejo, A.
2009-04-01
Numeric taxonomy has been applied to classify the water of the Tajo basin (Spain) down to the Portuguese border. A total of 52 stations, each measuring 15 water variables, have been used in this study. The different groups have been obtained by applying a Euclidean distance among stations (distance classification) and a Euclidean distance between each station and the centroid estimated among them (centroid classification), varying the number of parameters and with or without variable typification. In order to compare the classifications, a log-log relation has been established between the number of groups created and the distances, so as to select the best one. It has been observed that the centroid classification is more appropriate, following the natural constraints more logically than the minimum distance among stations. Variable typification does not improve the classification except when the centroid method is applied. Taking into consideration the ions and their sum as variables, the classification improved. Stations are grouped based on electric conductivity (CE), total anions (TA), total cations (TC) and ion ratios (Na/Ca and Mg/Ca). For a given classification, comparing the different groups created, a certain variation in ion concentrations and ion ratios is observed. However, the variation in each ion among groups differs depending on the case. For the last group, regardless of the classification, the increase in all ions is general. Comparing the dendrograms, and the groups they originated, the Tajo river basin can be subdivided into five sub-basins differentiated by the main influence on the water: 1. With a higher ombrogenic influence (rain fed). 2. With ombrogenic and pedogenic influence (rain and groundwater fed). 3. With pedogenic influence. 4. With lithogenic influence (geological bedrock). 5. With a higher ombrogenic and lithogenic influence added.
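The centroid classification compares each station with the centroid of the group rather than with other stations pairwise; a sketch of that distance computation (the station vectors below are invented):

```python
import math

def centroid(points):
    """Component-wise mean of a list of equal-length station vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def distances_to_centroid(points):
    """Euclidean distance from each station vector to the group centroid."""
    c = centroid(points)
    return [math.sqrt(sum((x - y) ** 2 for x, y in zip(p, c))) for p in points]

# Four hypothetical stations described by two variables (e.g. CE and TA):
stations = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
dists = distances_to_centroid(stations)  # all equidistant from centroid (1, 1)
```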
NASA Astrophysics Data System (ADS)
Davoudi, Alireza; Shiry Ghidary, Saeed; Sadatnejad, Khadijeh
2017-06-01
Objective. In this paper, we propose a nonlinear dimensionality reduction algorithm for the manifold of symmetric positive definite (SPD) matrices that considers the geometry of SPD matrices and provides a low-dimensional representation of the manifold with high class discrimination in a supervised or unsupervised manner. Approach. The proposed algorithm tries to preserve the local structure of the data by preserving distances to local means (DPLM) and also provides an implicit projection matrix. DPLM is linear in terms of the number of training samples. Main results. We performed several experiments on the multi-class dataset IIa from BCI competition IV and two other datasets from BCI competition III including datasets IIIa and IVa. The results show that our approach as dimensionality reduction technique—leads to superior results in comparison with other competitors in the related literature because of its robustness against outliers and the way it preserves the local geometry of the data. Significance. The experiments confirm that the combination of DPLM with filter geodesic minimum distance to mean as the classifier leads to superior performance compared with the state of the art on brain-computer interface competition IV dataset IIa. Also the statistical analysis shows that our dimensionality reduction method performs significantly better than its competitors.
Real Time Intelligent Target Detection and Analysis with Machine Vision
NASA Technical Reports Server (NTRS)
Howard, Ayanna; Padgett, Curtis; Brown, Kenneth
2000-01-01
We present an algorithm for detecting a specified set of targets for an Automatic Target Recognition (ATR) application. ATR involves processing images for detecting, classifying, and tracking targets embedded in a background scene. We address the problem of discriminating between targets and nontarget objects in a scene by evaluating 40x40 image blocks belonging to an image. Each image block is first projected onto a set of templates specifically designed to separate images of targets embedded in a typical background scene from those background images without targets. These filters are found using directed principal component analysis which maximally separates the two groups. The projected images are then clustered into one of n classes based on a minimum distance to a set of n cluster prototypes. These cluster prototypes have previously been identified using a modified clustering algorithm based on prior sensed data. Each projected image pattern is then fed into the associated cluster's trained neural network for classification. A detailed description of our algorithm will be given in this paper. We outline our methodology for designing the templates, describe our modified clustering algorithm, and provide details on the neural network classifiers. Evaluation of the overall algorithm demonstrates that our detection rates approach 96% with a false positive rate of less than 0.03%.
NASA Astrophysics Data System (ADS)
Niazi, M. Khalid Khan; Hemminger, Jessica; Kurt, Habibe; Lozanski, Gerard; Gurcan, Metin
2014-03-01
Vascularity represents an important element of the tissue/tumor microenvironment and is implicated in tumor growth, metastatic potential and resistance to therapy. Small blood vessels can be visualized using immunohistochemical stains specific to vascular cells. However, currently used manual methods to assess vascular density are poorly reproducible and are at best semi-quantitative. Computer-based quantitative and objective methods to measure microvessel density are urgently needed to better understand and clinically utilize microvascular density information. We propose a new method to quantify vascularity from images of bone marrow biopsies stained for the CD34 vascular lining cell protein as a model. The method starts by automatically segmenting the blood vessels by methods of maxlink thresholding and minimum graph cuts. The segmentation is followed by morphological post-processing to reduce blasts and small spurious objects in the bone marrow images. To classify the images into one of four grades, we extracted 20 features from the segmented blood vessel images. These features include the first four moments of the distribution of the area of blood vessels, and the first four moments of the distributions of 1) the edge weights in the minimum spanning tree of the blood vessels, 2) the shortest distance between blood vessels, 3) the homogeneity of the shortest distance (absolute difference in distance between consecutive blood vessels along the shortest path) between blood vessels and 4) blood vessel orientation. The method was tested on 26 bone marrow biopsy images stained with the CD34 IHC stain, which were evaluated by three pathologists. The pathologists took part in this study by quantifying blood vessel density using gestalt assessment in the hematopoietic portions of bone marrow core biopsy images. To determine the intra-reader variability, each image was graded twice by each pathologist with a two-week interval between readings.
For each image, the ground truth (grade) was acquired through consensus among the three pathologists at the end of the study. A ranking of the features reveals that the fourth moment of the distribution of the area of blood vessels along with the first moment of the distribution of the shortest distance between blood vessels can correctly grade 68.2% of the bone marrow biopsies, while the intra- and inter-reader variability among the pathologists are 66.9% and 40.0%, respectively.
Trong Bui, Duong; Nguyen, Nhan Duc; Jeong, Gu-Min
2018-06-25
Human activity recognition and pedestrian dead reckoning are interesting fields because of their important utility in daily-life healthcare. Currently, these fields face many challenges, one of which is the lack of a robust algorithm with high performance. This paper proposes a new method to implement a robust step detection and adaptive distance estimation algorithm based on the classification of five daily wrist activities during walking at various speeds using a smart band. The key idea is that the non-parametric adaptive distance estimator is performed after two activity classifiers and a robust step detector. In this study, two classifiers perform two phases of recognizing the five wrist activities during walking. Then, a robust step detection algorithm, integrated with an adaptive threshold and a peak and valley correction algorithm, is applied to the classified activities to detect the walking steps. In addition, misclassified activities are fed back to the previous layer. Finally, three adaptive distance estimators, based on a non-parametric model of the average walking speed, calculate the length of each stride. The experimental results show that the average classification accuracy is about 99%, and the accuracy of the step detection is 98.7%. The error of the estimated distance is 2.2-4.2% depending on the type of wrist activity.
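The step-detection stage can be reduced to peak picking above a threshold; this simplified sketch uses a fixed threshold where the paper's detector adapts it, and the signal values are invented:

```python
def detect_steps(accel, threshold):
    """Count steps as local maxima of the acceleration magnitude that exceed
    `threshold`. The paper's detector additionally adapts the threshold and
    corrects peak/valley pairs; this is only the fixed-threshold core."""
    steps = 0
    for i in range(1, len(accel) - 1):
        if accel[i] > threshold and accel[i] > accel[i - 1] and accel[i] >= accel[i + 1]:
            steps += 1
    return steps

# Two clear peaks (3 and 4) rise above the threshold of 2:
n_steps = detect_steps([0, 1, 3, 1, 0, 2, 4, 2, 0], threshold=2)
```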
Protograph LDPC Codes with Node Degrees at Least 3
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher
2006-01-01
In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds for the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance and thereby achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes for fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, having node degree at least 3 at rate 1/2 guarantees that the linear-minimum-distance property is preserved at higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
Hussain, Lal
2018-06-01
Epilepsy is a neurological disorder caused by abnormal excitability of neurons in the brain. Research shows that brain activity can be monitored through the electroencephalogram (EEG) of patients suffering from seizures in order to detect epileptic seizures. The performance of EEG-based epilepsy detection depends on the feature extraction strategy. In this research, we extracted features using a variety of strategies based on time- and frequency-domain characteristics, nonlinear and wavelet-based entropy measures, and a few statistical features. A deeper study was undertaken using novel machine learning classifiers, considering multiple factors. The support vector machine kernels were evaluated based on the multiclass kernel and box constraint level. Likewise, for K-nearest neighbors (KNN), we evaluated different distance metrics, neighbor weights and numbers of neighbors. Similarly, for decision trees we tuned the parameters based on maximum splits and split criteria, and ensemble classifiers were evaluated based on different ensemble methods and learning rates. For training/testing, ten-fold cross-validation was employed, and performance was evaluated in terms of TPR, NPR, PPV, accuracy and AUC. In this research, a deeper analysis was performed using diverse feature extraction strategies and robust machine learning classifiers with more advanced optimal options. The support vector machine with a linear kernel and KNN with the city block distance metric gave the overall highest accuracy of 99.5%, which was higher than that obtained using the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM. Additionally, K-nearest neighbors with inverse squared distance weighting gave higher performance at different numbers of neighbors. Moreover, in distinguishing postictal heart rate oscillations from epileptic ictal subjects, the highest performance of 100% was obtained using different machine learning classifiers.
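The KNN-with-city-block-distance configuration that gave the best accuracy can be sketched as below; the feature vectors and labels are invented for illustration:

```python
from collections import Counter

def cityblock(a, b):
    """City block (Manhattan, L1) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_classify(x, training_set, k=3):
    """Classify x by majority label among its k nearest neighbours under
    the city block metric."""
    neighbours = sorted(training_set, key=lambda item: cityblock(x, item[0]))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Hypothetical 2-D EEG feature vectors with class labels:
train = [((0.1, 0.2), "seizure"), ((0.2, 0.1), "seizure"),
         ((0.9, 0.8), "normal"), ((0.8, 0.9), "normal"),
         ((0.15, 0.15), "seizure")]
pred = knn_classify((0.12, 0.18), train, k=3)
```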
New presentation method for magnetic resonance angiography images based on skeletonization
NASA Astrophysics Data System (ADS)
Nystroem, Ingela; Smedby, Orjan
2000-04-01
Magnetic resonance angiography (MRA) images are usually presented as maximum intensity projections (MIP), and the choice of viewing direction is then critical for the detection of stenoses. We propose a presentation method that uses skeletonization and distance transformations, which visualizes variations in vessel width independent of viewing direction. In the skeletonization, the object is reduced to a surface skeleton and further to a curve skeleton. The skeletal voxels are labeled with their distance to the original background. For the curve skeleton, the distance values correspond to the minimum radius of the object at that point, i.e., half the minimum diameter of the blood vessel at that level. The following image processing steps are performed: resampling to cubic voxels, segmentation of the blood vessels, skeletonization, and reverse distance transformation on the curve skeleton. The reconstructed vessels may be visualized with any projection method. Preliminary results are shown. They indicate that locations of possible stenoses may be identified by presenting the vessels as a structure with the minimum radius at each point.
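The labeling of skeletal voxels with their distance to the background can be sketched with a brute-force distance computation; a real implementation would use an efficient two-pass distance transform, and the binary image below is invented:

```python
import math

def distance_to_background(image):
    """For every object pixel (value 1) in a binary image, compute the
    Euclidean distance to the nearest background pixel (value 0).
    Brute force, for illustration only."""
    rows, cols = len(image), len(image[0])
    background = [(r, c) for r in range(rows) for c in range(cols)
                  if image[r][c] == 0]
    dist = {}
    for r in range(rows):
        for c in range(cols):
            if image[r][c] == 1:
                dist[(r, c)] = min(math.hypot(r - br, c - bc)
                                   for br, bc in background)
    return dist

# A 3-pixel-wide "vessel": the centreline pixel is deepest inside the object,
# so its distance value reflects the local vessel radius.
vessel = [[0, 0, 0, 0, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 0, 0, 0, 0]]
d = distance_to_background(vessel)
```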
Vision-Based Detection and Distance Estimation of Micro Unmanned Aerial Vehicles
Gökçe, Fatih; Üçoluk, Göktürk; Şahin, Erol; Kalkan, Sinan
2015-01-01
Detection and distance estimation of micro unmanned aerial vehicles (mUAVs) is crucial for (i) the detection of intruder mUAVs in protected environments; (ii) sense and avoid purposes on mUAVs or on other aerial vehicles and (iii) multi-mUAV control scenarios, such as environmental monitoring, surveillance and exploration. In this article, we evaluate vision algorithms as alternatives for detection and distance estimation of mUAVs, since other sensing modalities entail certain limitations on the environment or on the distance. For this purpose, we test Haar-like features, histogram of gradients (HOG) and local binary patterns (LBP) using cascades of boosted classifiers. Cascaded boosted classifiers allow fast processing by performing detection tests at multiple stages, where only candidates passing earlier simple stages are processed at the subsequent, more complex stages. We also integrate a distance estimation method with our system utilizing geometric cues with support vector regressors. We evaluated each method on indoor and outdoor videos that are collected in a systematic way and also on videos having motion blur. Our experiments show that, using boosted cascaded classifiers with LBP, near real-time detection and distance estimation of mUAVs are possible in about 60 ms indoors (1032×778 resolution) and 150 ms outdoors (1280×720 resolution) per frame, with a detection rate of 0.96 F-score. However, the cascaded classifiers using Haar-like features lead to better distance estimation since they can position the bounding boxes on mUAVs more accurately. On the other hand, our time analysis yields that the cascaded classifiers using HOG train and run faster than the other algorithms. PMID:26393599
Understanding auditory distance estimation by humpback whales: a computational approach.
Mercado, E; Green, S R; Schneider, J N
2008-02-01
Ranging, the ability to judge the distance to a sound source, depends on the presence of predictable patterns of attenuation. We measured long-range sound propagation in coastal waters to assess whether humpback whales might use frequency degradation cues to range singing whales. Two types of neural networks, a multi-layer and a single-layer perceptron, were trained to classify recorded sounds by distance traveled based on their frequency content. The multi-layer network successfully classified received sounds, demonstrating that the distorting effects of underwater propagation on frequency content provide sufficient cues to estimate source distance. Normalizing received sounds with respect to ambient noise levels increased the accuracy of distance estimates by single-layer perceptrons, indicating that familiarity with background noise can potentially improve a listening whale's ability to range. To assess whether frequency patterns predictive of source distance were likely to be perceived by whales, recordings were pre-processed using a computational model of the humpback whale's peripheral auditory system. Although signals processed with this model contained less information than the original recordings, neural networks trained with these physiologically based representations estimated source distance more accurately, suggesting that listening whales should be able to range singers using distance-dependent changes in frequency content.
Human action classification using procrustes shape theory
NASA Astrophysics Data System (ADS)
Cho, Wanhyun; Kim, Sangkyoon; Park, Soonyoung; Lee, Myungeun
2015-02-01
In this paper, we propose a new method to classify human actions using Procrustes shape theory. First, we extract a pre-shape configuration vector of landmarks from each frame of an image sequence representing an arbitrary human action, and then derive the Procrustes fit vector for the pre-shape configuration vector. Second, we extract a set of pre-shape vectors from training samples stored in a database, and compute a Procrustes mean shape vector for these pre-shape vectors. Third, we extract a sequence of pre-shape vectors from the input video, and project this sequence onto the tangent space with respect to the pole, taken as the sequence of mean shape vectors corresponding to a target video. We then calculate the Procrustes distance between the sequence of projected pre-shape vectors on the tangent space and the mean shape vectors. Finally, we classify the input video into the human action class with the minimum Procrustes distance. We assess the performance of the proposed method using a public dataset, the Weizmann human action dataset. Experimental results reveal that the proposed method performs very well on this dataset.
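The full Procrustes distance between two 2-D landmark configurations, which underlies minimum-distance classification of shapes, can be computed compactly with complex coordinates; a sketch using the standard formulation (the landmark values are invented):

```python
import math

def preshape(z):
    """Centre and unit-scale a 2-D configuration given as complex landmarks,
    producing its pre-shape."""
    m = sum(z) / len(z)
    z = [w - m for w in z]
    norm = math.sqrt(sum(abs(w) ** 2 for w in z))
    return [w / norm for w in z]

def full_procrustes_distance(x, y):
    """Full Procrustes distance sqrt(1 - |<u, v>|^2) between pre-shapes u, v,
    which optimises out rotation as well as translation and scale."""
    u, v = preshape(x), preshape(y)
    corr = min(1.0, abs(sum(a.conjugate() * b for a, b in zip(u, v))))
    return math.sqrt(1.0 - corr ** 2)

tri = [0 + 0j, 1 + 0j, 0 + 1j]
tri_moved = [(2 + 1j) * w + 3 for w in tri]        # rotated, scaled, translated copy
d_same = full_procrustes_distance(tri, tri_moved)  # ~0: same shape
```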
NASA Astrophysics Data System (ADS)
Hashemi, H.; Tax, D. M. J.; Duin, R. P. W.; Javaherian, A.; de Groot, P.
2008-11-01
Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a statistical feature-ranking technique and by combining different classifiers. The method, which has general applicability, is demonstrated here on a gas chimney detection problem. First, we evaluate a set of input seismic attributes, extracted at locations labeled by a human expert, using regularized discriminant analysis (RDA). In order to find the RDA score for each seismic attribute, forward and backward search strategies are used. Subsequently, two non-linear classifiers, a multilayer perceptron (MLP) and a support vector classifier (SVC), are run on the ranked seismic attributes. Finally, to capitalize on the intrinsic differences between the two classifiers, the MLP and SVC results are combined using the logical rules of maximum, minimum and mean. The proposed method optimizes the size of the ranked feature space and yields the lowest classification error in the final combined result. We show that the logical minimum reveals gas chimneys that exhibit both the softness of the MLP and the resolution of the SVC classifier.
Beta Atomic Contacts: Identifying Critical Specific Contacts in Protein Binding Interfaces
Liu, Qian; Kwoh, Chee Keong; Hoi, Steven C. H.
2013-01-01
Specific binding between proteins plays a crucial role in molecular functions and biological processes. In previous studies, protein binding interfaces and their atomic contacts have typically been defined by simple criteria, such as distance-based definitions that use only a threshold on spatial distance. These definitions neglect the atomic organization around contact atoms and thus report predominant contacts even when they are interrupted by other atoms. It is questionable whether such interrupted contacts are as important as other contacts in protein binding. To tackle this challenge, we propose a new definition called beta (β) atomic contacts. Our definition, founded on the β-skeletons of computational geometry, requires that there be no other atom in the contact sphere defined by two contact atoms; this sphere is analogous to the van der Waals spheres of atoms. Statistical analysis on a large dataset shows that β contacts are only a small fraction of conventional distance-based contacts. To empirically quantify the importance of β contacts, we designed βACV, an SVM classifier that takes β contacts as input, to distinguish homodimers from crystal packing. We found that βACV achieves state-of-the-art classification performance, superior to that of SVM classifiers with distance-based contacts as input. βACV also outperforms several existing methods when evaluated on datasets from previous works. This promising empirical performance suggests that β contacts can truly identify critical specific contacts in protein binding interfaces, and thus provide a more precise model of atomic organization in protein quaternary structures than distance-based contacts. PMID:23630569
Clustering of financial time series
NASA Astrophysics Data System (ADS)
D'Urso, Pierpaolo; Cappelli, Carmela; Di Lallo, Dario; Massari, Riccardo
2013-05-01
This paper addresses the topic of classifying financial time series in a fuzzy framework, proposing two fuzzy clustering models, both based on GARCH models. In general, clustering of financial time series, owing to their peculiar features, requires the definition of suitable distance measures. To this end, the first fuzzy clustering model exploits the autoregressive representation of GARCH models and employs, within a partitioning-around-medoids algorithm, the classical autoregressive metric. The second fuzzy clustering model, also based on the partitioning-around-medoids algorithm, uses the Caiado distance, a Mahalanobis-like distance based on estimated GARCH parameters and covariances that takes into account information about the volatility structure of the time series. In order to illustrate the merits of the proposed fuzzy approaches, an application to the problem of classifying 29 time series of Euro exchange rates against international currencies is presented and discussed, and the fuzzy models are also compared with their crisp versions.
Ibáñez, Javier; Vélez, M Dolores; de Andrés, M Teresa; Borrego, Joaquín
2009-11-01
Distinctness, uniformity and stability (DUS) testing of varieties is usually required when applying for Plant Breeders' Rights. This examination is currently carried out using morphological traits, where the establishment of distinctness through a minimum distance is the key issue. In this study, the possibility of using microsatellite markers for establishing the minimum distance in a vegetatively propagated crop (grapevine) was evaluated. A collection of 991 accessions was studied with nine microsatellite markers and pair-wise compared, and the highest intra-variety distance and the lowest inter-variety distance were determined. The collection included 489 different genotypes, as well as synonyms and sports. Average values for the number of alleles per locus (19), Polymorphic Information Content (0.764) and observed (0.773) and expected (0.785) heterozygosities indicated the high level of polymorphism existing in grapevine. The maximum intra-variety variability found was one allele between two accessions of the same variety, out of a total of 3,171 pair-wise comparisons. The minimum inter-variety variability found was two alleles between two pairs of varieties, out of a total of 119,316 pair-wise comparisons. Based on these results, the minimum distance required to establish distinctness in grapevine with the nine microsatellite markers used could be set at two alleles. General rules for the use of the system as a support for establishing distinctness in vegetatively propagated crops are discussed.
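The two-allele threshold described above rests on counting allele differences between genotypes across loci. A minimal sketch of such a pair-wise comparison (the data layout and function name are our assumptions, not from the study):

```python
from collections import Counter

def allele_distance(g1, g2):
    """Number of allele differences between two diploid genotypes.
    Each genotype is a list of (allele_a, allele_b) pairs, one per locus;
    alleles at a locus are compared as unordered multisets."""
    d = 0
    for locus1, locus2 in zip(g1, g2):
        c1, c2 = Counter(locus1), Counter(locus2)
        # count alleles present in one genotype but unmatched in the other
        d += sum((c1 - c2).values())
    return d
```

Under a two-allele minimum-distance rule, two accessions with `allele_distance` of 0 or 1 would be treated as the same variety, and 2 or more as distinct.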
Intelligent query by humming system based on score level fusion of multiple classifiers
NASA Astrophysics Data System (ADS)
Pyo Nam, Gi; Thu Trang Luong, Thi; Ha Nam, Hyun; Ryoung Park, Kang; Park, Sung-Joo
2011-12-01
Recently, the need for content-based music retrieval that can return results even if a user does not know information such as the title or singer has increased. Query-by-humming (QBH) systems have been introduced to address this need, as they allow the user to simply hum snatches of the tune to find the right song. Even though there have been many studies on QBH, few have combined multiple classifiers based on various fusion methods. Here we propose a new QBH system based on the score-level fusion of multiple classifiers. This research is novel in the following three respects: three local classifiers [quantized binary (QB) code-based linear scaling (LS), pitch-based dynamic time warping (DTW), and LS] are employed; local maximum and minimum point-based LS and pitch distribution feature-based LS are used as global classifiers; and local and global classifiers are combined through score-level fusion by the PRODUCT rule to achieve enhanced matching accuracy. Experimental results with the 2006 MIREX QBSH and 2009 MIR-QBSH corpus databases show that the performance of the proposed method is better than that of a single classifier and of other fusion methods.
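The PRODUCT-rule score fusion mentioned above multiplies the per-candidate scores from each classifier; a hypothetical few-line sketch (the real system normalizes classifier scores before fusing, which is omitted here):

```python
def product_fusion(score_lists):
    """Combine per-candidate matching scores from multiple classifiers
    with the PRODUCT rule: multiply the scores for each candidate song."""
    fused = []
    for scores in zip(*score_lists):
        p = 1.0
        for s in scores:
            p *= s
        fused.append(p)
    return fused

def rank_candidates(score_lists):
    """Return candidate indices sorted by fused score, best first
    (assuming higher score means a better match)."""
    fused = product_fusion(score_lists)
    return sorted(range(len(fused)), key=lambda i: -fused[i])
```

A candidate only ranks highly if every classifier gives it a reasonable score, which is why the PRODUCT rule can suppress the spurious matches of any single classifier.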
NASA Technical Reports Server (NTRS)
Zhuang, Xin
1990-01-01
LANDSAT Thematic Mapper (TM) data for March 23, 1987, with accompanying ground truth data for the study area in Miami County, IN, were used to determine crop residue type and class. Principal components and spectral ratioing transformations were applied to the LANDSAT TM data. A geographic information system (GIS) layer of land ownership was added to each original image as an eighth band of data in an attempt to improve classification. Maximum likelihood, minimum distance, and neural network classifiers were used to classify the original, transformed, and GIS-enhanced remotely sensed data. Crop residues could be separated from one another and from bare soil and other biomass. Two types of crop residue and four classes were identified from each LANDSAT TM image. The maximum likelihood classifier performed the best classification for each original image without the need for any transformation. The neural network classifier was able to improve the classification by incorporating the GIS layer of land ownership as an eighth band of data. The maximum likelihood classifier was unable to make use of this eighth band, and thus its results could not be improved.
Neuro-classification of multi-type Landsat Thematic Mapper data
NASA Technical Reports Server (NTRS)
Zhuang, Xin; Engel, Bernard A.; Fernandez, R. N.; Johannsen, Chris J.
1991-01-01
Neural networks have been successful in image classification and have shown potential for classifying remotely sensed data. This paper presents classifications of multi-type Landsat Thematic Mapper (TM) data using neural networks. The Landsat TM image for March 23, 1987, with accompanying ground observation data for a study area in Miami County, Indiana, U.S.A., was utilized to assess recognition of crop residues. Principal components and spectral ratio transformations were performed on the TM data. In addition, a geographic information system (GIS) layer for the study site was incorporated to generate GIS-enhanced TM data. This paper discusses (1) the performance of neuro-classification on each type of data, (2) how neural networks recognized each type of data as a new image, and (3) comparisons of the results for each type of data obtained using neural networks, maximum likelihood, and minimum distance classifiers.
An ensemble of dissimilarity based classifiers for Mackerel gender determination
NASA Astrophysics Data System (ADS)
Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.
2014-03-01
Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify specimens by sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and defrozen) to find differences between the sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k-Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect the sample proximities accurately. Classifiers based on non-Euclidean dissimilarities misclassify different sets of patterns. We combine different kinds of dissimilarity-based classifiers. Diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm improves on classifiers based on a single dissimilarity.
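An ensemble of the kind described can be sketched with 1-NN base classifiers voting under complementary dissimilarities. This is an illustrative simplification, not the authors' method; the dissimilarities and names are ours:

```python
from collections import Counter

def one_nn(train_X, train_y, x, dist):
    """1-nearest-neighbour label for x under a given dissimilarity function."""
    i = min(range(len(train_X)), key=lambda j: dist(train_X[j], x))
    return train_y[i]

def ensemble_predict(train_X, train_y, x, dists):
    """Combine 1-NN classifiers built on different dissimilarities
    by majority vote - each dissimilarity induces one base classifier."""
    votes = [one_nn(train_X, train_y, x, d) for d in dists]
    return Counter(votes).most_common(1)[0][0]
```

The diversity comes from the dissimilarity list: classifiers built on different measures tend to misclassify different patterns, so the vote can correct individual mistakes.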
Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor
2015-01-01
We introduce Supervised Variational Relevance Learning (Suvrel), a variational method for determining metric tensors that define distance-based similarity in pattern classification, inspired by relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors small interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns with linear transformations derived from the metric tensor yields a dataset that can be classified more efficiently. We test our method on publicly available datasets with some standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without further preprocessing, our results improve on their performance.
Muko, Soyoka; Shimatani, Ichiro K; Nozawa, Yoko
2014-07-01
Spatial distributions of individuals are conventionally analysed by representing objects as dimensionless points, in which spatial statistics are based on centre-to-centre distances. However, if organisms expand without overlapping and show size variations, such as is the case for encrusting corals, interobject spacing is crucial for spatial associations where interactions occur. We introduced new pairwise statistics using minimum distances between objects and demonstrated their utility when examining encrusting coral community data. We also calculated the conventional point process statistics and the grid-based statistics to clarify the advantages and limitations of each spatial statistical method. For simplicity, coral colonies were approximated by disks in these demonstrations. Focusing on short-distance effects, the use of minimum distances revealed that almost all coral genera were aggregated at a scale of 1-25 cm. However, when fragmented colonies (ramets) were treated as a genet, a genet-level analysis indicated weak or no aggregation, suggesting that most corals were randomly distributed and that fragmentation was the primary cause of colony aggregations. In contrast, point process statistics showed larger aggregation scales, presumably because centre-to-centre distances included both intercolony spacing and colony sizes (radius). The grid-based statistics were able to quantify the patch (aggregation) scale of colonies, but the scale was strongly affected by the colony size. Our approach quantitatively showed repulsive effects between an aggressive genus and a competitively weak genus, while the grid-based statistics (covariance function) also showed repulsion although the spatial scale indicated from the statistics was not directly interpretable in terms of ecological meaning. 
The use of minimum distances together with previously proposed spatial statistics helped us to extend our understanding of the spatial patterns of nonoverlapping objects that vary in size and the associated specific scales. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
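Under the disk approximation used above, the minimum (edge-to-edge) distance between two colonies, as opposed to the centre-to-centre distance of point process statistics, is a one-line computation; a minimal sketch:

```python
import math

def min_disk_distance(c1, r1, c2, r2):
    """Minimum edge-to-edge distance between two non-overlapping disks:
    the centre-to-centre distance minus both radii, clipped at 0 when
    the disks touch or overlap."""
    centre_dist = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
    return max(0.0, centre_dist - r1 - r2)
```

This makes explicit why centre-to-centre statistics inflate aggregation scales for large colonies: the centre distance includes the radii, while the interobject spacing where interactions occur does not.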
Advances in Distance-Based Hole Cuts on Overset Grids
NASA Technical Reports Server (NTRS)
Chan, William M.; Pandya, Shishir A.
2015-01-01
An automatic and efficient method to determine appropriate hole cuts based on distances to the wall and donor stencil maps for overset grids is presented. A new robust procedure is developed to create a closed surface triangulation representation of each geometric component for accurate determination of the minimum hole. Hole boundaries are then displaced away from the tight grid-spacing regions near solid walls to allow grid overlap to occur away from the walls where cell sizes from neighboring grids are more comparable. The placement of hole boundaries is efficiently determined using a mid-distance rule and Cartesian maps of potential valid donor stencils with minimal user input. Application of this procedure typically results in a spatially-variable offset of the hole boundaries from the minimum hole with only a small number of orphan points remaining. Test cases on complex configurations are presented to demonstrate the new scheme.
Modeling the long-term evolution of space debris
Nikolaev, Sergei; De Vries, Willem H.; Henderson, John R.; Horsley, Matthew A.; Jiang, Ming; Levatin, Joanne L.; Olivier, Scot S.; Pertica, Alexander J.; Phillion, Donald W.; Springer, Harry K.
2017-03-07
A space object modeling system that models the evolution of space debris is provided. The modeling system simulates the interaction of space objects at simulation times throughout a simulation period. It includes a propagator that calculates the position of each object at each simulation time based on orbital parameters, and a collision detector that performs a collision analysis for each pair of objects at each simulation time. When the distance between two objects satisfies a conjunction criterion, the modeling system calculates the local minimum distance between the pair by fitting a curve to identify the time of closest approach between simulation times and calculating the positions of the objects at that time. When the local minimum distance satisfies a collision criterion, the modeling system models the debris created by the collision of the pair of objects.
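The curve-fitting refinement of the time of closest approach can be illustrated with a parabola fitted to sampled separation distances around the minimum. This is our reconstruction of the general idea; the source does not specify the fitting details:

```python
import numpy as np

def refine_closest_approach(times, distances):
    """Given three sampled (time, distance) points bracketing a local
    minimum of the separation distance, fit a parabola and return the
    estimated time and distance of closest approach."""
    a, b, c = np.polyfit(times, distances, 2)
    t_min = -b / (2.0 * a)               # vertex of the fitted parabola
    d_min = np.polyval([a, b, c], t_min)
    return t_min, d_min
```

In a full system the refined positions would then be recomputed with the propagator at `t_min`; the parabola only locates the candidate time between discrete simulation steps.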
Choi, Hyunseok; Cho, Byunghyun; Masamune, Ken; Hashizume, Makoto; Hong, Jaesung
2016-03-01
Depth perception is a major issue in augmented reality (AR)-based surgical navigation. We propose an AR and virtual reality (VR) switchable visualization system with distance information, and evaluate its performance in a surgical navigation set-up. To improve depth perception, seamless switching from AR to VR was implemented. In addition, the minimum distance between the tip of the surgical tool and the nearest organ was provided in real time. To evaluate the proposed techniques, five physicians and 20 non-medical volunteers participated in experiments. Targeting error, time taken, and numbers of collisions were measured in simulation experiments. There was a statistically significant difference between a simple AR technique and the proposed technique. We confirmed that depth perception in AR could be improved by the proposed seamless switching between AR and VR, and providing an indication of the minimum distance also facilitated the surgical tasks. Copyright © 2015 John Wiley & Sons, Ltd.
Minimum triplet covers of binary phylogenetic X-trees.
Huber, K T; Moulton, V; Steel, M
2017-12-01
Trees with labelled leaves and with all other vertices of degree three play an important role in systematic biology and other areas of classification. A classical combinatorial result ensures that such trees can be uniquely reconstructed from the distances between the leaves (when the edges are given any strictly positive lengths). Moreover, a linear number of these pairwise distance values suffices to determine both the tree and its edge lengths. A natural set of pairs of leaves is provided by any 'triplet cover' of the tree (based on the fact that each non-leaf vertex is the median vertex of three leaves). In this paper we describe a number of new results concerning triplet covers of minimum size. In particular, we characterize such covers in terms of an associated graph being a 2-tree. Also, we show that minimum triplet covers are 'shellable' and thereby provide a set of pairs for which the inter-leaf distance values will uniquely determine the underlying tree and its associated branch lengths.
NASA Technical Reports Server (NTRS)
Lin, Qian; Allebach, Jan P.
1990-01-01
An adaptive vector linear minimum mean-squared error (LMMSE) filter for multichannel images with multiplicative noise is presented. It is shown theoretically that the mean-squared error in the filter output is reduced by making use of the correlation between image bands. The vector and conventional scalar LMMSE filters are applied to a three-band SIR-B SAR image, and their performance is compared. Based on a multiplicative noise model, the per-pel maximum likelihood classifier was derived. The authors extend this to the design of sequential and robust classifiers. These classifiers are also applied to the three-band SIR-B SAR image.
[Minimum Standards for the Spatial Accessibility of Primary Care: A Systematic Review].
Voigtländer, S; Deiters, T
2015-12-01
Regional disparities in access to primary care are substantial in Germany, especially in terms of spatial accessibility. However, there is no legally or generally binding minimum standard for the spatial accessibility effort that is still acceptable. Our objective is to analyse existing minimum standards, the methods used, and their empirical basis. A systematic literature review was undertaken of publications on minimum standards for the spatial accessibility of primary care, based on a title-word and keyword search using PubMed, SSCI/Web of Science, EMBASE and Cochrane Library. Eight minimum standards from the USA, Germany and Austria could be identified. All of them specify the acceptable spatial accessibility effort in terms of travel time; almost half also include distance(s). The maximum acceptable travel time is 30 min, and it tends to be lower in urban areas. Primary care is, according to the identified minimum standards, part of the local area (Nahbereich) of so-called central places (Zentrale Orte) providing basic goods and services. The consideration of means of transport, e.g. public transport, is heterogeneous. The standards are based on empirical studies, consultation with service providers, practical experience, regional planning/central place theory, and legal or political regulations. The identified minimum standards provide important insights into the effort that is still acceptable regarding spatial accessibility, i.e. travel time, distance and means of transport. It seems reasonable to complement the current planning system for outpatient care, which is based on provider-to-population ratios, with a gravity-model method to identify places as well as populations with insufficient spatial accessibility. Due to the lack of a common minimum standard we propose - subject to further discussion - to begin with a threshold based on the spatial accessibility limit of the local area, i.e. 
30 min to the next primary care provider for at least 90% of the regional population. The exceeding of the threshold would necessitate a discussion of a health care deficit and in line with this a potential need for intervention, e. g. in terms of alternative forms of health care provision. © Georg Thieme Verlag KG Stuttgart · New York.
Barik, Amita; Das, Santasabuj
2018-01-02
Small RNAs (sRNAs) in bacteria have emerged as key players in transcriptional and post-transcriptional regulation of gene expression. Here, we present a statistical analysis of different sequence- and structure-related features of bacterial sRNAs to identify the descriptors that could discriminate sRNAs from other bacterial RNAs. We investigated a comprehensive and heterogeneous collection of 816 sRNAs, identified by northern blotting across 33 bacterial species and compared their various features with other classes of bacterial RNAs, such as tRNAs, rRNAs and mRNAs. We observed that sRNAs differed significantly from the rest with respect to G+C composition, normalized minimum free energy of folding, motif frequency and several RNA-folding parameters like base-pairing propensity, Shannon entropy and base-pair distance. Based on the selected features, we developed a predictive model using Random Forests (RF) method to classify the above four classes of RNAs. Our model displayed an overall predictive accuracy of 89.5%. These findings would help to differentiate bacterial sRNAs from other RNAs and further promote prediction of novel sRNAs in different bacterial species.
Nishiguchi, Shu; Yorozu, Ayanori; Adachi, Daiki; Takahashi, Masaki; Aoyama, Tomoki
2017-08-08
The Timed Up and Go (TUG) test may be a useful tool to detect not only mobility impairment but also possible cognitive impairment. In this cross-sectional study, we used the TUG test to investigate the associations between trajectory-based spatial parameters measured by laser range sensor (LRS) and cognitive impairment in community-dwelling older adults. The participants were 63 community-dwelling older adults (mean age, 73.0 ± 6.3 years). The trajectory-based spatial parameters during the TUG test were measured using an LRS. In each forward and backward phase, we calculated the minimum distance from the marker, the maximum distance from the x-axis (center line), the length of the trajectories, and the area of region surrounded by the trajectory of the center of gravity and the x-axis (center line). We measured mild cognitive impairment using the Mini-Mental State Examination score (26/27 was the cut-off score for defining mild cognitive impairment). Compared with participants with normal cognitive function, those with mild cognitive impairment exhibited the following trajectory-based spatial parameters: short minimum distance from the marker (p = 0.044), narrow area of center of gravity in the forward phase (p = 0.012), and a large forward/whole phase ratio of the area of the center of gravity (p = 0.026) during the TUG test. In multivariate logistic regression analyses, a short minimum distance from the marker (odds ratio [OR]: 0.82, 95% confidence interval [CI]: 0.69-0.98), narrow area of the center of gravity in the forward phase (OR: 0.01, 95% CI: 0.00-0.36), and large forward/whole phase ratio of the area of the center of gravity (OR: 0.94, 95% CI: 0.88-0.99) were independently associated with mild cognitive impairment. In conclusion, our results indicate that some of the trajectory-based spatial parameters measured by LRS during the TUG test were independently associated with cognitive impairment in older adults. 
In particular, older adults with cognitive impairment exhibit shorter minimum distances from the marker and asymmetrical trajectories during the TUG test.
NASA Astrophysics Data System (ADS)
Di, Nur Faraidah Muhammad; Satari, Siti Zanariah
2017-05-01
Outlier detection in linear data sets has been studied extensively, but only a small amount of work has been done on outlier detection in circular data. In this study, we propose a method for detecting multiple outliers in circular regression models based on a clustering algorithm. Clustering techniques basically utilize a distance measure to define the distance between data points. Here, we introduce a similarity distance based on Euclidean distance for the circular model and obtain a cluster tree using the single-linkage clustering algorithm. Then, a stopping rule for the cluster tree, based on the mean direction and circular standard deviation of the tree height, is proposed. We classify cluster groups that exceed the stopping rule as potential outliers. Our aim is to demonstrate the effectiveness of the proposed algorithms with the similarity distances in detecting outliers. The proposed methods are found to perform well and to be applicable to circular regression models.
Povz, Meta; Sumer, Suzana
2003-01-01
Cobitis elongata Heckel et Kner inhabits the rivers Sava, Kolpa, Krka, Gracnica and Hudinja (the Danube river basin). The species is common in its distribution area. In the Red List of endangered Pisces and Cyclostomata in Slovenia, it is classified as endangered. Status and distribution data of the species from previous reports and recent research were summarized. A total of 31 specimens from the river Kolpa were morphologically studied. Sixteen morphometric and four meristic characteristics were analysed using standard numerical taxonomic techniques. 99.8% of the total variation of standard length was explained by preanal distance, dorsal and ventral fin lengths as well as minimum body height.
A Minimum Spanning Forest Based Method for Noninvasive Cancer Detection with Hyperspectral Imaging
Pike, Robert; Lu, Guolan; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei
2016-01-01
Goal: The purpose of this paper is to develop a classification method that combines both spectral and spatial information for distinguishing cancer from healthy tissue on hyperspectral images in an animal model. Methods: An automated algorithm based on a minimum spanning forest (MSF) and optimal band selection has been proposed to classify healthy and cancerous tissue on hyperspectral images. A support vector machine (SVM) classifier is trained to create a pixel-wise classification probability map of cancerous and healthy tissue. This map is then used to identify markers that are used to compute mutual information for a range of bands in the hyperspectral image and thus select the optimal bands. An MSF is finally grown to segment the image using spatial and spectral information. Conclusion: The MSF-based method with automatically selected bands proved to be accurate in determining the tumor boundary on hyperspectral images. Significance: Hyperspectral imaging combined with the proposed classification technique has the potential to provide a noninvasive tool for cancer detection. PMID:26285052
Hyperspectral feature mapping classification based on mathematical morphology
NASA Astrophysics Data System (ADS)
Liu, Chang; Li, Junwei; Wang, Guangping; Wu, Jingli
2016-03-01
This paper proposes a hyperspectral feature-mapping classification algorithm based on mathematical morphology. Without prior information such as a spectral library, the spectral and spatial information can be used to realize hyperspectral feature-mapping classification. Mathematical morphological erosion and dilation operations are performed to extract endmembers. The spectral feature-mapping algorithm is then used to carry out hyperspectral image classification. A hyperspectral image collected by AVIRIS is used to evaluate the proposed algorithm, which is compared with the minimum Euclidean distance mapping algorithm, the minimum Mahalanobis distance mapping algorithm, the SAM algorithm and the binary encoding mapping algorithm. The experimental results show that the proposed algorithm performs better than the other algorithms under the same conditions and has higher classification accuracy.
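The minimum-angle assignment rule behind spectral mapping classifiers such as the SAM baseline can be sketched as follows. This is a generic illustration of spectral angle mapping, not the authors' implementation:

```python
import numpy as np

def spectral_angle(x, y):
    """Spectral angle (radians) between two spectra - the SAM distance:
    arccos of the normalized dot product, invariant to illumination scaling."""
    cos_t = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def sam_classify(pixel, endmembers):
    """Assign a pixel spectrum to the endmember with the smallest angle."""
    angles = [spectral_angle(pixel, e) for e in endmembers]
    return int(np.argmin(angles))
```

Because the angle ignores overall magnitude, pixels of the same material under different illumination map to nearly the same endmember, which is the usual motivation for angle-based over Euclidean mapping.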
Benefits of Using Pairwise Trajectory Management in the Central East Pacific
NASA Technical Reports Server (NTRS)
Chartrand, Ryan; Ballard, Kathryn
2016-01-01
Pairwise Trajectory Management (PTM) is a concept that utilizes airborne and ground-based capabilities to enable airborne spacing operations in oceanic regions. The goal of PTM is to use enhanced surveillance, along with airborne tools, to manage the spacing between aircraft. Due to the enhanced airborne surveillance of Automatic Dependent Surveillance-Broadcast (ADS-B) information and reduced communication, the PTM minimum spacing distance will be less than distances currently required of an air traffic controller. Reduced minimum distance will increase the capacity of aircraft operations at a given altitude or volume of airspace, thereby increasing time on desired trajectory and overall flight efficiency. PTM is designed to allow a flight crew to resolve a specific traffic conflict (or conflicts), identified by the air traffic controller, while maintaining the flight crew's desired altitude. The air traffic controller issues a PTM clearance to a flight crew authorized to conduct PTM operations in order to resolve a conflict for the pair (or pairs) of aircraft (i.e., the PTM aircraft and a designated target aircraft). This clearance requires the flight crew of the PTM aircraft to use their ADS-B-enabled onboard equipment to manage their spacing relative to the designated target aircraft to ensure spacing distances that are no closer than the PTM minimum distance. When the air traffic controller determines that PTM is no longer required, the controller issues a clearance to cancel the PTM operation.
Brancolini, Florencia; del Pazo, Felipe; Posner, Victoria Maria; Grimberg, Alexis; Arranz, Silvia Eda
2016-01-01
Valid fish species identification is essential for biodiversity conservation and fisheries management. Here, we provide a sequence reference library based on the mitochondrial cytochrome c oxidase subunit I gene for the valid identification of 79 freshwater fish species from the Lower Paraná River. Neighbour-joining analysis based on K2P genetic distances formed non-overlapping clusters for almost all species, each with ≥99% bootstrap support. Identification was successful for 97.8% of species, as the minimum genetic distance to the nearest neighbour exceeded the maximum intraspecific distance in all these cases. A barcoding gap of 2.5% was apparent for the whole data set, with the exception of four cases. Within-species distances ranged from 0.00% to 7.59%, while interspecific distances varied between 4.06% and 19.98%, without considering Odontesthes species, which showed a minimum genetic distance of 0%. Sequence library validation was performed by applying BOLD's BIN analysis tool, the Poisson Tree Processes model and Automatic Barcode Gap Discovery, along with reliable taxonomic assignment by experts. An exhaustive revision of vouchers was performed when a conflicting assignment was detected after sequence analysis and BIN discordance evaluation. Thus, the sequence library presented here can be confidently used as a benchmark for identification of half of the fish species recorded for the Lower Paraná River. PMID:27442116
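The K2P (Kimura two-parameter) genetic distance used above is a standard formula that corrects the raw mismatch proportion for multiple substitutions, weighting transitions and transversions differently. A minimal sketch (sequence handling is simplified; gaps and ambiguity codes are ignored):

```python
import math

PURINES = {"A", "G"}

def k2p_distance(seq1, seq2):
    """Kimura two-parameter (K2P) distance between two aligned sequences:
    d = -0.5*ln(1 - 2P - Q) - 0.25*ln(1 - 2Q), where P and Q are the
    proportions of transition and transversion differences."""
    n = len(seq1)
    transitions = transversions = 0
    for a, b in zip(seq1, seq2):
        if a == b:
            continue
        # purine<->purine (A/G) or pyrimidine<->pyrimidine (C/T): transition
        if (a in PURINES) == (b in PURINES):
            transitions += 1
        else:
            transversions += 1
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)
```

For small divergences the K2P distance is close to the raw p-distance but slightly larger, reflecting the correction for unobserved multiple hits.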
Effect of Weight Transfer on a Vehicle's Stopping Distance.
ERIC Educational Resources Information Center
Whitmire, Daniel P.; Alleman, Timothy J.
1979-01-01
An analysis of the minimum stopping distance problem is presented taking into account the effect of weight transfer on nonskidding vehicles and front- or rear-wheels-skidding vehicles. Expressions for the minimum stopping distances are given in terms of vehicle geometry and the coefficients of friction. (Author/BB)
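The paper's expressions account for weight transfer and single-axle skidding; as a baseline, the ideal all-wheels-braking case they extend is the familiar point-mass result, sketched here (not the authors' derivation):

```python
# Baseline minimum stopping distance for a point-mass vehicle with all
# wheels at the limit of friction: d = v^2 / (2 * mu * g).
# Weight transfer (the paper's focus) lengthens this distance for vehicles
# braking on only the front or rear axle; this sketch covers the ideal case.
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_mps, mu):
    """Minimum nonskidding stopping distance in metres."""
    return speed_mps ** 2 / (2 * mu * G)
```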
2018-01-01
Hyperspectral image classification with a limited number of training samples without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome this problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publicly available datasets, we show that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with a small amount of training data, as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of the proposed method, which compares favorably with state-of-the-art methods. PMID:29304512
Graph distance for complex networks
NASA Astrophysics Data System (ADS)
Shimada, Yutaka; Hirata, Yoshito; Ikeguchi, Tohru; Aihara, Kazuyuki
2016-10-01
Networks are widely used as a tool for describing diverse real complex systems and have been successfully applied to many fields. The distance between networks is one of the most fundamental concepts for properly classifying real networks, detecting temporal changes in network structures, and effectively predicting their temporal evolution. However, this distance has rarely been discussed in the theory of complex networks. Here, we propose a graph distance between networks based on a Laplacian matrix that reflects the structural and dynamical properties of networked dynamical systems. Our results indicate that the Laplacian-based graph distance effectively quantifies the structural difference between complex networks. We further show that our approach successfully elucidates the temporal properties underlying temporal networks observed in the context of face-to-face human interactions.
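One common Laplacian-based construction compares the spectra of the graph Laplacians; this is a generic sketch of the idea, and the paper's exact distance definition may differ:

```python
import numpy as np

def laplacian_spectrum(adj):
    """Sorted eigenvalues of the graph Laplacian L = D - A."""
    deg = np.diag(adj.sum(axis=1))
    return np.sort(np.linalg.eigvalsh(deg - adj))

def spectral_distance(adj1, adj2):
    """Euclidean distance between Laplacian spectra of equal-sized graphs."""
    return float(np.linalg.norm(laplacian_spectrum(adj1) - laplacian_spectrum(adj2)))
```

Because the Laplacian spectrum reflects connectivity and diffusion dynamics, such a distance captures structural differences that simple edge counts miss.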
Bizios, Dimitrios; Heijl, Anders; Hougaard, Jesper Leth; Bengtsson, Boel
2010-02-01
To compare the performance of two machine learning classifiers (MLCs), artificial neural networks (ANNs) and support vector machines (SVMs), with input based on retinal nerve fibre layer thickness (RNFLT) measurements by optical coherence tomography (OCT), on the diagnosis of glaucoma, and to assess the effects of different input parameters. We analysed Stratus OCT data from 90 healthy persons and 62 glaucoma patients. Performance of MLCs was compared using conventional OCT RNFLT parameters plus novel parameters such as minimum RNFLT values, 10th and 90th percentiles of measured RNFLT, and transformations of A-scan measurements. For each input parameter and MLC, the area under the receiver operating characteristic curve (AROC) was calculated. There were no statistically significant differences between ANNs and SVMs. The best AROCs for both ANN (0.982, 95% CI: 0.966-0.999) and SVM (0.989, 95% CI: 0.979-1.0) were based on input of transformed A-scan measurements. Our SVM trained on this input performed better than ANNs or SVMs trained on any of the single RNFLT parameters (p ≤ 0.038). The performance of ANNs and SVMs trained on minimum thickness values and the 10th and 90th percentiles was at least as good as that of ANNs and SVMs with input based on the conventional RNFLT parameters. No differences between ANN and SVM were observed in this study. Both MLCs performed very well, with similar diagnostic performance. Input parameters have a larger impact on diagnostic performance than the type of machine classifier. Our results suggest that parameters based on transformed A-scan thickness measurements of the RNFL processed by machine classifiers can improve OCT-based glaucoma diagnosis.
Classification of Salmonella serotypes with hyperspectral microscope imagery
USDA-ARS?s Scientific Manuscript database
Previous research has demonstrated an optical method with acousto-optic tunable filter (AOTF) based hyperspectral microscope imaging (HMI) had potential for classifying gram-negative from gram-positive foodborne pathogenic bacteria rapidly and nondestructively with a minimum sample preparation. In t...
Handwritten document age classification based on handwriting styles
NASA Astrophysics Data System (ADS)
Ramaiah, Chetan; Kumar, Gaurav; Govindaraju, Venu
2012-01-01
Handwriting styles are constantly changing over time. We approach the novel problem of estimating the approximate age of historical handwritten documents using handwriting styles. This system will have many applications in handwritten document processing engines, where specialized processing techniques can be applied based on the estimated age of the document. We propose to learn a distribution over styles across centuries using topic models and to apply a classifier over the learned weights in order to estimate the approximate age of the documents. We present a comparison of different distance metrics, such as the Euclidean distance and the Hellinger distance, within this application.
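For topic-model weight vectors treated as discrete probability distributions, the two metrics compared in the abstract can be illustrated as follows (a generic sketch, not the paper's implementation):

```python
import numpy as np

def euclidean(p, q):
    """Euclidean distance between two weight vectors."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions,
    bounded in [0, 1] and well suited to comparing topic proportions."""
    p, q = np.asarray(p), np.asarray(q)
    return float(np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2))
```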
Non-Intrusive Impedance-Based Cable Tester
NASA Technical Reports Server (NTRS)
Medelius, Pedro J. (Inventor); Simpson, Howard J. (Inventor)
1999-01-01
A non-intrusive electrical cable tester determines the nature and location of a discontinuity in a cable through application of an oscillating signal to one end of the cable. The frequency of the oscillating signal is varied in increments until a minimum, close to zero voltage is measured at a signal injection point which is indicative of a minimum impedance at that point. The frequency of the test signal at which the minimum impedance occurs is then employed to determine the distance to the discontinuity by employing a formula which relates this distance to the signal frequency and the velocity factor of the cable. A numerically controlled oscillator is provided to generate the oscillating signal, and a microcontroller automatically controls operation of the cable tester to make the desired measurements and display the results. The device is contained in a portable housing which may be hand held to facilitate convenient use of the device in difficult to access locations.
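The distance formula itself is not given in the abstract; a common textbook model, offered here only as an assumption, treats an open-circuit fault as a quarter-wave resonator (and a short as a half-wave resonator) at the lowest impedance-minimum frequency:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_to_fault(f_min_hz, velocity_factor, open_circuit=True):
    """Estimate distance to a cable discontinuity from the lowest frequency
    at which the input impedance reaches a minimum.

    Assumes quarter-wave resonance for an open-circuit fault and half-wave
    resonance for a short-circuit fault; this is a standard transmission-line
    approximation, not necessarily the patented device's exact formula.
    """
    wavelength = velocity_factor * C / f_min_hz
    return wavelength / 4 if open_circuit else wavelength / 2
```

For example, a 1 MHz minimum on cable with a 0.66 velocity factor would place an open fault roughly 49.5 m away under this model.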
HMM for hyperspectral spectrum representation and classification with endmember entropy vectors
NASA Astrophysics Data System (ADS)
Arabi, Samir Y. W.; Fernandes, David; Pizarro, Marco A.
2015-10-01
Hyperspectral images, due to their good spectral resolution, are extensively used for classification, but their high number of bands requires higher bandwidth for data transmission, higher data storage capability and higher computational capability in processing systems. This work presents a new methodology for hyperspectral data classification that can work with a reduced number of spectral bands and achieve good results, comparable with processing methods that require all hyperspectral bands. The proposed method for hyperspectral spectra classification is based on the Hidden Markov Model (HMM) associated with each Endmember (EM) of a scene and the conditional probabilities that each EM belongs to each other EM. The EM conditional probabilities are transformed into an EM entropy vector, and those vectors are used as reference vectors for the classes in the scene. The conditional probabilities of a spectrum to be classified are likewise transformed into an entropy vector, which is assigned to a given class by the minimum Euclidean distance (ED) between it and the EM entropy vectors. The methodology was tested with good results using AVIRIS spectra of a scene with 13 EMs, considering the full 209 bands and reduced sets of 128, 64 and 32 spectral bands. For the test area, it is shown that only 32 spectral bands can be used instead of the original 209 without significant loss in the classification process.
Fuzzy scalar and vector median filters based on fuzzy distances.
Chatzis, V; Pitas, I
1999-01-01
In this paper, the fuzzy scalar median (FSM) is proposed, defined by using an ordering of fuzzy numbers based on fuzzy minimum and maximum operations, themselves defined via the extension principle. Alternatively, the FSM is defined from the minimization of a fuzzy distance measure, and the equivalence of the two definitions is proven. Then, the fuzzy vector median (FVM) is proposed as an extension of the vector median, based on a novel distance definition for fuzzy vectors which satisfies the property of angle decomposition. By properly defining the fuzziness of a value, the combination of the basic properties of the classical scalar median and vector median (VM) filters with other desirable characteristics can be achieved.
Ritchie, J Brendan; Carlson, Thomas A
2016-01-01
A fundamental challenge for cognitive neuroscience is characterizing how the primitives of psychological theory are neurally implemented. Attempts to meet this challenge are a manifestation of what Fechner called "inner" psychophysics: the theory of the precise mapping between mental quantities and the brain. In his own time, inner psychophysics remained an unrealized ambition for Fechner. We suggest that, today, multivariate pattern analysis (MVPA), or neural "decoding," methods provide a promising starting point for developing an inner psychophysics. A cornerstone of these methods is simple linear classifiers applied to neural activity in high-dimensional activation spaces. We describe an approach to inner psychophysics based on the shared architecture of linear classifiers and observers under decision boundary models such as signal detection theory. Under this approach, distance from a decision boundary through activation space, as estimated by linear classifiers, can be used to predict reaction time in accordance with signal detection theory and distance-to-bound models of reaction time. Our "neural distance-to-bound" approach is potentially quite general, and simple to implement. Furthermore, our recent work on visual object recognition suggests it is empirically viable. We believe the approach constitutes an important step along the path to an inner psychophysics that links mind, brain, and behavior.
The Minimum Binding Energy and Size of Doubly Muonic D3 Molecule
NASA Astrophysics Data System (ADS)
Eskandari, M. R.; Faghihi, F.; Mahdavi, M.
The minimum energy and size of the doubly muonic D3 molecule, in which two of the electrons are replaced by the much heavier muons, are calculated by the well-known variational method. The calculations show that the system possesses two minimum positions, one at a typically muonic distance and the second at the atomic distance. It is shown that at the muonic distance, the effective charge zeff is 2.9. We assumed a symmetric planar vibrational model between the two minima, and an oscillation potential energy is approximated in this region.
Prediction of fatigue-related driver performance from EEG data by deep Riemannian model.
Hajinoroozi, Mehdi; Jianqiu Zhang; Yufei Huang
2017-07-01
Prediction of drivers' drowsy and alert states is important for safety purposes. The prediction of drivers' drowsy and alert states from electroencephalography (EEG) using shallow and deep Riemannian methods is presented. For the shallow Riemannian methods, the minimum distance to Riemannian mean (MDM) and the Log-Euclidean metric are investigated, and it is shown that the Log-Euclidean metric outperforms the MDM algorithm. In addition, SPDNet, a deep Riemannian model that takes the EEG covariance matrix as input, is investigated. It is shown that SPDNet outperforms all tested shallow and deep classification methods. The performance of SPDNet is 6.02% and 2.86% higher than the best performance of the conventional Euclidean classifiers and the shallow Riemannian models, respectively.
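Minimum-distance classification of EEG covariance matrices can be sketched with the Log-Euclidean metric named in the abstract; the nearest-class-mean assignment below is a generic illustration, not the authors' code:

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(A, B):
    """Log-Euclidean distance between two SPD covariance matrices:
    Frobenius norm of the difference of their matrix logarithms."""
    return float(np.linalg.norm(logm(A) - logm(B), "fro"))

def mdm_predict(cov, class_means):
    """Assign a trial covariance to the class with the nearest mean
    (class_means maps label -> representative SPD matrix)."""
    dists = {label: log_euclidean_distance(cov, m)
             for label, m in class_means.items()}
    return min(dists, key=dists.get)
```

In practice the class means themselves would be Log-Euclidean (or Riemannian) means of training covariances rather than arbitrary matrices.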
Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition
Islam, Md. Rabiul
2014-01-01
The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676
Site classification of Indian strong motion network using response spectra ratios
NASA Astrophysics Data System (ADS)
Chopra, Sumer; Kumar, Vikas; Choudhury, Pallabee; Yadav, R. B. S.
2018-03-01
In the present study, we attempted to classify the Indian strong motion sites spread all over the Himalaya and adjoining region, located on varied geological formations, based on response spectral ratios. A total of 90 sites were classified based on 395 strong motion records from 94 earthquakes recorded at these sites. The magnitudes of these earthquakes are between 2.3 and 7.7, and the hypocentral distance in most cases is less than 50 km. The predominant period obtained from response spectral ratios is used to classify these sites. It was found that the shapes and predominant peaks of the spectra at these sites match those in Japan, Italy, Iran, and at some of the sites in Europe, and the same classification scheme can be applied to the Indian strong motion network. We found that earlier schemes based on descriptions of near-surface geology, geomorphology, and topography were not able to capture the effect of sediment thickness. The sites are classified into seven classes (CL-I to CL-VII) with varying predominant periods and ranges, as proposed by Alessandro et al. (Bull Seismol Soc Am 102:680-695, 2012). The effects of magnitude and hypocentral distance on the shapes and predominant peaks were also studied and found to be very small. The classification scheme is robust and cost-effective and can be used in region-specific attenuation relationships to account for local site effects.
A semi-supervised classification algorithm using the TAD-derived background as training data
NASA Astrophysics Data System (ADS)
Fan, Lei; Ambeau, Brittany; Messinger, David W.
2013-05-01
In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters such that they can then classify all other pixels into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-Nearest Neighbor graph model of the data, along with a spectral connected components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we are able to achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT area and the University of Pavia scene.
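A Minimum Distance to the Mean classifier of the kind named above is simple to sketch; this is a generic illustration (class names and data are hypothetical, not from the study):

```python
import numpy as np

class MinimumDistanceToMean:
    """Minimum-distance-to-the-mean classifier: each pixel (spectral vector)
    is assigned to the class whose training-data mean is nearest."""

    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.means_ = np.stack([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        # Pairwise Euclidean distances from every pixel to every class mean.
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.labels_[d.argmin(axis=1)]
```

Unlike Gaussian Maximum Likelihood, MDM ignores class covariance, which makes it cheaper but less flexible when classes have very different spreads.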
A new approach to keratoconus detection based on corneal morphogeometric analysis.
Cavas-Martínez, Francisco; Bataille, Laurent; Fernández-Pacheco, Daniel G; Cañavate, Francisco J F; Alió, Jorge L
2017-01-01
To characterize corneal structural changes in keratoconus using a new morphogeometric approach and to evaluate its potential diagnostic ability. Comparative study including 464 eyes of 464 patients (age range, 16 to 72 years) divided into two groups: control group (143 healthy eyes) and keratoconus group (321 keratoconus eyes). Topographic information (Sirius, CSO, Italy) was processed with SolidWorks v2012 and a solid model representing the geometry of each cornea was generated. The following parameters were defined: anterior (Aant) and posterior (Apost) corneal surface areas, area of the cornea within the sagittal plane passing through the Z axis and the apex (Aapexant, Aapexpost) and minimum thickness points (Amctant, Amctpost) of the anterior and posterior corneal surfaces, and average distance from the Z axis to the apex (Dapexant, Dapexpost) and minimum thickness points (Dmctant, Dmctpost) of both corneal surfaces. Significant differences between the control and keratoconus groups were found in Aapexant, Aapexpost, Amctant, Amctpost, Dapexant, Dapexpost (all p<0.001), Apost (p = 0.014), and Dmctpost (p = 0.035). Significant correlations in the keratoconus group were found between Aant and Apost (r = 0.836), Amctant and Amctpost (r = 0.983), and Dmctant and Dmctpost (r = 0.954, all p<0.001). A logistic regression analysis revealed that the detection of keratoconus grade I (Amsler-Krumeich) was related to Apost, Atot, Aapexant, Amctant, Amctpost, Dapexpost, Dmctant and Dmctpost (Hosmer-Lemeshow: p>0.05, R2 Nagelkerke: 0.926). The overall percentage of cases correctly classified by the model was 97.30%. Our morphogeometric approach based on the analysis of the cornea as a solid is useful for the characterization and detection of keratoconus.
Detection of Epileptic Seizure Event and Onset Using EEG
Ahammad, Nabeel; Fathima, Thasneem; Joseph, Paul
2014-01-01
This study proposes a method of automatic detection of epileptic seizure event and onset using wavelet based features and certain statistical features without wavelet decomposition. Normal and epileptic EEG signals were classified using linear classifier. For seizure event detection, Bonn University EEG database has been used. Three types of EEG signals (EEG signal recorded from healthy volunteer with eye open, epilepsy patients in the epileptogenic zone during a seizure-free interval, and epilepsy patients during epileptic seizures) were classified. Important features such as energy, entropy, standard deviation, maximum, minimum, and mean at different subbands were computed and classification was done using linear classifier. The performance of classifier was determined in terms of specificity, sensitivity, and accuracy. The overall accuracy was 84.2%. In the case of seizure onset detection, the database used is CHB-MIT scalp EEG database. Along with wavelet based features, interquartile range (IQR) and mean absolute deviation (MAD) without wavelet decomposition were extracted. Latency was used to study the performance of seizure onset detection. Classifier gave a sensitivity of 98.5% with an average latency of 1.76 seconds. PMID:24616892
DETERMINING MINIMUM IGNITION ENERGIES AND QUENCHING DISTANCES OF DIFFICULT-TO-IGNITE COMPOUNDS
Minimum spark energies and corresponding flat-plate electrode quenching distances required to initiate propagation of a combustion wave have been experimentally measured for four flammable hydrofluorocarbon (HFC) refrigerants and propane using ASTM (American Society for Testing a...
Three-dimensional modeling and animation of two carpal bones: a technique.
Green, Jason K; Werner, Frederick W; Wang, Haoyu; Weiner, Marsha M; Sacks, Jonathan M; Short, Walter H
2004-05-01
The objectives of this study were to (a) create 3D reconstructions of two carpal bones from single CT data sets and animate these bones with experimental in vitro motion data collected during dynamic loading of the wrist joint, (b) develop a technique to calculate the minimum interbone distance between the two carpal bones, and (c) validate the interbone distance calculation process. This method utilized commercial software to create the animations and an in-house program to interface with three-dimensional CAD software to calculate the minimum distance between the irregular geometries of the bones. This interbone minimum distance provides quantitative information regarding the motion of the bones studied and may help to understand and quantify the effects of ligamentous injury.
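Under the simplifying assumption that each bone surface is available as a vertex cloud, a first approximation of the minimum interbone distance is the smallest vertex-to-vertex distance (a true surface-to-surface computation, as in the study, also needs point-to-triangle checks, omitted here):

```python
import numpy as np
from scipy.spatial.distance import cdist

def minimum_interbone_distance(verts_a, verts_b):
    """Vertex-to-vertex approximation of the minimum distance between two
    bone surface meshes, given their vertex coordinate arrays (N x 3)."""
    return float(cdist(np.asarray(verts_a), np.asarray(verts_b)).min())
```

Evaluating this at every animation frame yields the per-frame minimum distance curve used to characterize joint motion.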
A new algorithm for reducing the workload of experts in performing systematic reviews.
Matwin, Stan; Kouznetsov, Alexandre; Inkpen, Diana; Frunza, Oana; O'Blenis, Peter
2010-01-01
To determine whether a factorized version of the complement naïve Bayes (FCNB) classifier can reduce the time spent by experts reviewing journal articles for inclusion in systematic reviews of drug class efficacy for disease treatment. The proposed classifier was evaluated on a test collection built from 15 systematic drug class reviews used in previous work. The FCNB classifier was constructed to classify each article as containing high-quality, drug class-specific evidence or not. Weight engineering (WE) techniques were added to reduce underestimation for Medical Subject Headings (MeSH)-based and Publication Type (PubType)-based features. Cross-validation experiments were performed to evaluate the classifier's parameters and performance. Work saved over sampling (WSS) at no less than a 95% recall was used as the main measure of performance. The minimum workload reduction for a systematic review for one topic, achieved with a FCNB/WE classifier, was 8.5%; the maximum was 62.2% and the average over the 15 topics was 33.5%. This is 15.0% higher than the average workload reduction obtained using a voting perceptron-based automated citation classification system. The FCNB/WE classifier is simple, easy to implement, and produces significantly better results in reducing the workload than previously achieved. The results support it being a useful algorithm for machine-learning-based automation of systematic reviews of drug class efficacy for disease treatment.
Recognition of In-Vehicle Group Activities (iVGA): Phase-I, Feasibility Study
2014-08-27
the driver is either adjusting his/her eyeglasses, adjusting his/her makeup, or possibly attempting to hide his/her face from being recognized. In...closest of two patterns, measured based on Hamming distance, determines the best class representing a test pattern. Figure 61 presents the Hamming neural...symbols are different. Put another way, it measures the minimum number of substitutions required to change one string into the other, or the minimum
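The Hamming distance described in the fragment above is straightforward to state precisely:

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length strings differ,
    i.e. the minimum number of substitutions turning one into the other."""
    if len(a) != len(b):
        raise ValueError("Hamming distance requires equal-length inputs")
    return sum(x != y for x, y in zip(a, b))
```

In a Hamming-network classifier, the class whose stored pattern has the smallest Hamming distance to the test pattern is selected.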
Fukunishi, Yoshifumi; Mikami, Yoshiaki; Nakamura, Haruki
2005-09-01
We developed a new method to evaluate the distances and similarities between receptor pockets or chemical compounds based on a multi-receptor versus multi-ligand docking affinity matrix. The receptors were classified by a cluster analysis based on calculations of the distance between receptor pockets. A set of low-homology receptors that bind a similar compound could be classified into one cluster. Based on this line of reasoning, we proposed a new in silico screening method. According to this method, compounds in a database were docked to multiple targets. The new docking score was a slightly modified version of the multiple active site correction (MASC) score. Receptors that were at a set distance from the target receptor were not included in the analysis, and the modified MASC scores were calculated for the selected receptors. The choice of the receptors is important to achieve a good screening result, and our clustering of receptors is useful for this purpose. This method was applied to the analysis of a set of 132 receptors and 132 compounds, and the results demonstrated that it achieves a high hit ratio, as compared to that of uniform sampling, using Sievgene, a newly developed receptor-ligand docking program with good docking performance, yielding 50.8% of the reconstructed complexes at a distance of less than 2 Å RMSD.
Classification of resistance to passive motion using minimum probability of error criterion.
Chan, H C; Manry, M T; Kondraske, G V
1987-01-01
Neurologists diagnose many muscular and nerve disorders by classifying the resistance to passive motion of patients' limbs. Over the past several years, a computer-based instrument has been developed for automated measurement and parameterization of this resistance. In the device, a voluntarily relaxed lower extremity is moved at constant velocity by a motorized driver. The torque exerted on the extremity by the machine is sampled, along with the angle of the extremity. In this paper a computerized technique is described for classifying a patient's condition as 'Normal' or 'Parkinson disease' (rigidity) from the torque versus angle curve for the knee joint. A Legendre polynomial, fit to the curve, is used to calculate a set of eight normally distributed features of the curve. The minimum probability of error approach is used to classify the curve as being from a normal or Parkinson disease patient. Data collected from 44 different subjects was processed, and the results were compared with an independent physician's subjective assessment of rigidity. There is agreement in better than 95% of the cases when all of the features are used.
41 CFR 302-4.704 - Must we require a minimum driving distance per day?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Federal Travel Regulation System RELOCATION ALLOWANCES PERMANENT CHANGE OF STATION (PCS) ALLOWANCES FOR... driving distance not less than an average of 300 miles per day. However, an exception to the daily minimum... reasons acceptable to you. ...
Ground-based Observations for the Asteroid Itokawa
NASA Astrophysics Data System (ADS)
Ishiguro, M.; Tholen, D. J.; Hasegawa, S.; Abe, M.; Sekiguchi, T.; Ostro, S. J.; Kaasalainen, M.
Apollo-type near-Earth asteroid (25143) Itokawa is the target of the asteroid explorer "HAYABUSA", launched in May 2003. On March 29, 2001, Itokawa passed close to the Earth at a minimum distance of 0.038 AU. During the apparition, vigorous ground-based observations were performed. Multi-band photometry (e.g. the ECAS and Johnson-Cousins photometric systems) and spectroscopy in the visible and near-infrared revealed that Itokawa is classified as an S(IV)-type asteroid, and that its surface composition is like an anhydrous ordinary chondrite. The extensive photometric campaign data indicate that the rotation is retrograde (i.e., the pole orientation of the asteroid is south of the ecliptic plane) and that its rotational period is 12 hr. From the mid-infrared observations, Itokawa is found to be of sub-km size. A detailed three-dimensional model was constructed based on both the radar observations and the optical lightcurves. Moreover, the bulk density determined by radar observations is 2.5 g/cc. Generally, the results obtained by optical, infrared and radar observations are consistent with each other. These observational results provide constraints on the thermal and optical design of the Hayabusa spacecraft and its scientific devices. In this paper, we review the results mentioned above. In addition, we plan to introduce the latest results obtained during the apparition in 2004.
A Modified Hopfield Neural Network Algorithm (MHNNA) Using ALOS Image for Water Quality Mapping
Kzar, Ahmed Asal; Mat Jafri, Mohd Zubir; Mutter, Kussay N.; Syahreza, Saumi
2015-01-01
Decreasing water pollution is a big problem in coastal waters. The health of coastal ecosystems can be affected by high concentrations of suspended sediment. In this work, a Modified Hopfield Neural Network Algorithm (MHNNA) was used with remote sensing imagery to classify the total suspended solids (TSS) concentrations in the waters of coastal Langkawi Island, Malaysia. The adopted remote sensing image is the Advanced Land Observation Satellite (ALOS) image acquired on 18 January 2010. Our modification allows the Hopfield neural network to convert and classify color satellite images. The samples were collected from the study area simultaneously with the acquisition of the satellite imagery. The sample locations were determined using a handheld global positioning system (GPS). The TSS concentration measurements were conducted in a lab and used for validation (real data), classification, and accuracy assessments. Mapping was achieved by using the MHNNA to classify the concentrations according to their reflectance values in band 1, band 2, and band 3. The TSS map was color-coded for visual interpretation. The efficiency of the proposed algorithm was investigated by dividing the validation data into two groups. The first group was used as source samples for supervised classification via the MHNNA. The second group was used to test the MHNNA efficiency. After mapping, the locations of the second group in the produced classes were detected. Next, the correlation coefficient (R) and root mean square error (RMSE) were calculated between the two groups, according to their corresponding locations in the classes. The MHNNA exhibited a higher R (0.977) and lower RMSE (2.887). In addition, we tested the MHNNA with noise, demonstrating its accuracy with noisy images over a range of noise levels. All results have been compared with a minimum distance classifier (Min-Dis).
Therefore, TSS mapping of polluted water in the coastal Langkawi Island, Malaysia can be performed using the adopted MHNNA with remote sensing techniques (as based on ALOS images). PMID:26729148
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1991-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an in-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. In the beginning the deterministic problem, where the information about the objects is assumed to be certain is examined. If instead of the Euclidean norm, L(sub 1) or L(sub infinity) norms are used to represent distance, the problem becomes a linear programming problem. The stochastic problem is formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object, at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
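The reduction noted above, that the minimum distance problem becomes a linear program under the L(sub 1) or L(sub infinity) norm, can be sketched for two convex sets. The axis-aligned boxes, variable layout, and use of SciPy's linprog are our own illustrative assumptions, not the paper's formulation:

```python
from scipy.optimize import linprog

# Minimum L-infinity distance between two convex polyhedra, here two
# axis-aligned boxes for simplicity: X = [0,1]x[0,1] and Y = [2,3]x[0,1].
# Variables: x1, x2, y1, y2, t. Minimize t subject to
#   x_i - y_i <= t  and  y_i - x_i <= t  for each coordinate,
# with box membership enforced through variable bounds.
c = [0, 0, 0, 0, 1]
A_ub = [
    [ 1, 0, -1,  0, -1],   # x1 - y1 - t <= 0
    [-1, 0,  1,  0, -1],   # y1 - x1 - t <= 0
    [ 0,  1, 0, -1, -1],   # x2 - y2 - t <= 0
    [ 0, -1, 0,  1, -1],   # y2 - x2 - t <= 0
]
b_ub = [0, 0, 0, 0]
bounds = [(0, 1), (0, 1), (2, 3), (0, 1), (0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.fun)  # minimum L-infinity distance between the boxes
```

The optimum is attained at the facing faces of the two boxes; general convex polyhedra would contribute their own linear inequality constraints instead of simple bounds.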
Model-based multiple patterning layout decomposition
NASA Astrophysics Data System (ADS)
Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.
2015-10-01
As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in keeping pace with the 10 nm technology node and beyond. As feature sizes keep shrinking, it has become impossible to print dense layouts within a single exposure. As a result, MPL techniques such as double patterning lithography (DPL) and triple patterning lithography (TPL) have been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint and simply decomposes polygons within that distance into different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics. For example, it lacks information such as the optical source characteristics and the effects between polygons outside the minimum distance. To remedy this deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013.1 However, that algorithm1 rests on simplified assumptions about the optical simulation model, and therefore its use on real layouts is limited. Recently, ASML2 also proposed a model-based approach to layout decomposition that iteratively simulates the layout, which requires excessive computational resources and may lead to sub-optimal solutions. That approach2 also potentially generates too many stitches.
In this paper, we propose a model-based MPL layout decomposition method using a pre-simulated library of frequent layout patterns. Instead of using the graph G in the standard graph-coloring formulation, we build an expanded graph H where each vertex represents a group of adjacent features together with a coloring solution. By utilizing the library and running sophisticated graph algorithms on H, our approach can obtain optimal decomposition results efficiently. Our model-based solution can achieve a practical mask design which significantly improves the lithography quality on the wafer compared to rule-based decomposition.
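The rule-based graph-coloring baseline that this paper improves upon can be sketched as follows. The function names, toy feature centers, and backtracking search are our own illustration (k = 2 corresponds to DPL, k = 3 to TPL):

```python
from itertools import combinations

def conflict_graph(centers, dmin):
    """Build the conflict graph G: an edge joins two features whose
    (Euclidean) spacing is below the minimum colorable distance dmin.
    Point centers stand in for real polygon-to-polygon spacing."""
    adj = {i: set() for i in range(len(centers))}
    for i, j in combinations(range(len(centers)), 2):
        dx = centers[i][0] - centers[j][0]
        dy = centers[i][1] - centers[j][1]
        if (dx * dx + dy * dy) ** 0.5 < dmin:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def k_color(adj, k, assign=None, order=None):
    """Backtracking k-coloring of the conflict graph. Returns a
    feature-to-mask assignment, or None if no k-mask decomposition exists."""
    if assign is None:
        assign, order = {}, sorted(adj, key=lambda v: -len(adj[v]))
    if len(assign) == len(adj):
        return assign
    v = order[len(assign)]
    for color in range(k):
        if all(assign.get(u) != color for u in adj[v]):
            assign[v] = color
            result = k_color(adj, k, assign, order)
            if result:
                return result
            del assign[v]
    return None
```

Three mutually close features form a triangle in G: no 2-mask (DPL) solution exists, but a 3-mask (TPL) assignment does, which is exactly the kind of purely geometric decision the model-based approach replaces with simulation-driven criteria.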
Rate-compatible protograph LDPC code families with linear minimum distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.
Yu, Jing; Zhu, Yi Feng; Dai, Mei Xia; Lin, Xia; Mao, Shuo Qian
2017-05-18
Zooplankton samples were collected seasonally at 10 stations using plankton nets 1 (505 μm), 2 (160 μm) and 3 (77 μm), and the corresponding abundance data were obtained. Based on individual zooplankton biovolume, size groups were classified to test changes in the spatiotemporal characteristics of both Sheldon and normalized biovolume size spectra in the thermal discharge seawaters near the Guohua Power Plant, so as to explore the effects of temperature increase on zooplankton size spectra in these waters. The results showed that the individual biovolume of zooplankton ranged from 0.00012 to 127.0 mm³·ind⁻¹, which could be divided into 21 size groups, with corresponding logarithmic ranges from -13.06 to 6.99. According to the Sheldon size spectra, the predominant species forming the main peaks of the size spectrum in different months were Copepodite larvae, Centropages mcmurrichi, Calanus sinicus, fish larvae, Sagitta bedoti, Sagitta nagae and Pleurobrachia globosa, while the minor peaks mostly consisted of smaller individuals such as larvae, Cyclops and Paracalanus aculeatus. In the different warming sections, Copepodite larvae, fish eggs and Cyclops were mostly unaffected by the temperature increase, while macrozooplankton such as S. bedoti, S. nagae, P. globosa, C. sinicus and Beroe cucumis showed an obvious tendency to avoid the outfall of the power plant. Based on the results of the normalized size spectra, the intercepts from low to high occurred in November, February, May and August, respectively. At the same time, the minimum slope was found in February, and similarly larger slopes were observed in May and August. These results indicated that the proportion of small zooplankton was highest in February, while the proportions of meso- and macro-zooplankton were relatively high in May and August. Among the different sections, the slope in the 0.2 km section was the minimum, and it increased with increasing distance of the section from the outfall.
The result obviously demonstrated that the closer the distance was from outfall of the power plant, the smaller the zooplankton became. On the whole, the average intercept of normalized size spectrum in Xiangshan Bay was 4.68, and the slope was -0.655.
Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C
2016-08-01
Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the symbolic data analysis field. One such data type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method which is capable of performing both automatic selection of features and pruning of unused prototypes and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates in both synthetic and real interval datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
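The proposed distance can be illustrated with a short sketch. The exact generalization used in the paper is not given in the abstract; the version below follows the common lower/upper-bound formulation for interval data, with per-feature weights, and the function name is ours:

```python
def weighted_interval_sqdist(x, y, w):
    """One plausible weighted squared Euclidean distance between interval
    vectors. x and y are lists of (lower, upper) interval pairs, and w holds
    a non-negative weight per feature; weights let the classifier adapt to
    classes with different shapes, sizes and structures."""
    return sum(wj * ((ax - ay) ** 2 + (bx - by) ** 2)
               for (ax, bx), (ay, by), wj in zip(x, y, w))
```

Driving a weight toward zero effectively discards a feature, which is how a swarm optimizer could perform the automatic feature selection described above.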
Multiple Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.
2010-01-01
A new multiple-classifier approach for spectral-spatial classification of hyperspectral images is proposed. Several classifiers are used independently to classify an image. For every pixel, if all the classifiers have assigned this pixel to the same class, the pixel is kept as a marker, i.e., a seed of the spatial region, with the corresponding class label. We propose to use spectral-spatial classifiers at the preliminary step of the marker selection procedure, each of them combining the results of a pixel-wise classification and a segmentation map. Different segmentation methods based on dissimilar principles lead to different classification results. Furthermore, a minimum spanning forest is built, where each tree is rooted on a classification-driven marker and forms a region in the spectral-spatial classification map. Experimental results are presented for two hyperspectral airborne images. The proposed method significantly improves classification accuracies when compared to previously proposed classification techniques.
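The unanimity rule for marker selection can be sketched in a few lines (function name and toy label maps are ours; real inputs would be per-pixel label maps from the independent spectral-spatial classifiers):

```python
def select_markers(classification_maps):
    """Marker selection by unanimous vote: a pixel becomes a marker (a seed
    for a minimum-spanning-forest region) only if every classifier assigned
    it the same class; disputed pixels stay unlabeled (None)."""
    markers = []
    for labels in zip(*classification_maps):  # one tuple of labels per pixel
        markers.append(labels[0] if len(set(labels)) == 1 else None)
    return markers
```

Pixels left as None are then labeled by growing the spanning forest from the agreed-upon seeds.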
Analytic processing of distance.
Dopkins, Stephen; Galyer, Darin
2018-01-01
How does a human observer extract from the distance between two frontal points the component corresponding to an axis of a rectangular reference frame? To find out we had participants classify pairs of small circles, varying on the horizontal and vertical axes of a computer screen, in terms of the horizontal distance between them. A response signal controlled response time. The error rate depended on the irrelevant vertical as well as the relevant horizontal distance between the test circles, with the relevant distance effect being larger than the irrelevant distance effect. The results implied that the horizontal distance between the test circles was imperfectly extracted from the overall distance between them. The results supported an account, derived from the Exemplar Based Random Walk model (Nosofsky & Palmeri, 1997), under which distance classification is based on the overall distance between the test circles, with relevant distance being extracted from overall distance to the extent that the relevant and irrelevant axes are differentially weighted so as to reduce the contribution of irrelevant distance to overall distance. The results did not support an account, derived from the General Recognition Theory (Ashby & Maddox, 1994), under which distance classification is based on the relevant distance between the test circles, with the irrelevant distance effect arising because a test circle's perceived location on the relevant axis depends on its location on the irrelevant axis, and with relevant distance being extracted from overall distance to the extent that this dependency is absent. Copyright © 2017 Elsevier B.V. All rights reserved.
Yousef, Malik; Khalifa, Waleed; AbedAllah, Loai
2016-12-22
The performance of many learning and data mining algorithms depends critically on suitable metrics to assess efficiency over the input space. Learning a suitable metric from examples may, therefore, be the key to successful application of these algorithms. We have demonstrated that the k-nearest neighbor (kNN) classification can be significantly improved by learning a distance metric from labeled examples. The clustering ensemble is used to define the distance between points with respect to how they co-cluster. This distance is then used within the framework of the kNN algorithm to define a classifier named ensemble clustering kNN classifier (EC-kNN). In many instances in our experiments we achieved the highest accuracy while SVM failed to perform as well. In this study, we compare the performance of a two-class classifier using EC-kNN with different one-class and two-class classifiers. The comparison was applied to seven different plant microRNA species considering eight feature selection methods. In this study, the averaged results show that EC-kNN outperforms all other methods employed here and previously published results for the same data. In conclusion, this study shows that the chosen classifier shows high performance when the distance metric is carefully chosen.
Yousef, Malik; Khalifa, Waleed; AbdAllah, Loai
2016-12-01
The performance of many learning and data mining algorithms depends critically on suitable metrics to assess efficiency over the input space. Learning a suitable metric from examples may, therefore, be the key to successful application of these algorithms. We have demonstrated that the k-nearest neighbor (kNN) classification can be significantly improved by learning a distance metric from labeled examples. The clustering ensemble is used to define the distance between points with respect to how they co-cluster. This distance is then used within the framework of the kNN algorithm to define a classifier named ensemble clustering kNN classifier (EC-kNN). In many instances in our experiments we achieved the highest accuracy while SVM failed to perform as well. In this study, we compare the performance of a two-class classifier using EC-kNN with different one-class and two-class classifiers. The comparison was applied to seven different plant microRNA species considering eight feature selection methods. In this study, the averaged results show that EC-kNN outperforms all other methods employed here and previously published results for the same data. In conclusion, this study shows that the chosen classifier shows high performance when the distance metric is carefully chosen.
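The co-cluster distance at the heart of EC-kNN can be sketched as follows. This is our reading of the idea described above (the published classifier may weight or combine ensemble runs differently), and all names and toy data are ours:

```python
def cocluster_distance(i, j, clusterings):
    """Ensemble-clustering distance: the fraction of ensemble runs in which
    points i and j land in different clusters (0 means always together)."""
    apart = sum(1 for labels in clusterings if labels[i] != labels[j])
    return apart / len(clusterings)

def ec_knn(query, train_idx, train_labels, clusterings, k=3):
    """kNN vote over the co-cluster distance: a sketch of the EC-kNN idea."""
    neighbors = sorted(train_idx,
                       key=lambda i: cocluster_distance(query, i, clusterings))[:k]
    votes = [train_labels[i] for i in neighbors]
    return max(set(votes), key=votes.count)
```

Each `clusterings` entry is the label vector produced by one run of the ensemble over all points (training points plus the query).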
Nearest Neighbor Algorithms for Pattern Classification
NASA Technical Reports Server (NTRS)
Barrios, J. O.
1972-01-01
A solution of the discrimination problem is considered by means of the minimum distance classifier, commonly referred to as the nearest neighbor (NN) rule. The NN rule is nonparametric, or distribution free, in the sense that it does not depend on any assumptions about the underlying statistics for its application. The k-NN rule is a procedure that assigns an observation vector z to a category F if most of the k nearby observations x sub i are elements of F. The condensed nearest neighbor (CNN) rule may be used to reduce the size of the training set required for categorization. The Bayes risk serves merely as a reference, the limit of excellence beyond which it is not possible to go. The NN risk is bounded below by the Bayes risk and above by twice the Bayes risk.
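The k-NN rule described above is short enough to state directly in code (function name and toy training set are ours):

```python
import math
from collections import Counter

def knn_classify(z, training, k=3):
    """k-NN rule: assign observation vector z to the category held by the
    majority of its k nearest training observations (Euclidean distance).
    `training` is a list of (vector, label) pairs."""
    nearest = sorted(training, key=lambda item: math.dist(z, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

No distributional assumptions enter anywhere, which is the distribution-free property the abstract emphasizes; the CNN rule would simply prune `training` down to a consistent subset before classification.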
[Optimization of cluster analysis based on drug resistance profiles of MRSA isolates].
Tani, Hiroya; Kishi, Takahiko; Gotoh, Minehiro; Yamagishi, Yuka; Mikamo, Hiroshige
2015-12-01
We examined 402 methicillin-resistant Staphylococcus aureus (MRSA) strains isolated from clinical specimens in our hospital between November 19, 2010 and December 27, 2011 to evaluate the similarity between cluster analysis of drug susceptibility tests and pulsed-field gel electrophoresis (PFGE). The 402 strains tested were classified into 27 PFGE patterns (151 subtypes of patterns). Cluster analyses of drug susceptibility tests, with the cut-off distance chosen to yield a similar classification capability, showed favorable results. When the MIC method was used, in which minimum inhibitory concentration (MIC) values are used directly, the level of agreement with PFGE was 74.2% when 15 drugs were tested; the Unweighted Pair Group Method with Arithmetic mean (UPGMA) was effective with a cut-off distance of 16. Using the SIR method, in which susceptible (S), intermediate (I), and resistant (R) were coded as 0, 2, and 3, respectively, according to the Clinical and Laboratory Standards Institute (CLSI) criteria, the level of agreement with PFGE was 75.9% when 17 drugs were tested, the clustering method was UPGMA, and the cut-off distance was 3.6. In addition, to assess reproducibility, 10 strains were randomly sampled from the overall test set and subjected to cluster analysis; this was repeated 100 times under the same conditions. The results indicated good reproducibility, with the level of agreement with PFGE showing a mean of 82.0%, standard deviation of 12.1%, and mode of 90.0% for the MIC method, and a mean of 80.0%, standard deviation of 13.4%, and mode of 90.0% for the SIR method. In summary, cluster analysis of drug susceptibility tests is useful for the epidemiological analysis of MRSA.
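The SIR approach can be sketched with standard hierarchical clustering tools. The coding (S=0, I=2, R=3), UPGMA linkage, and cut-off distance follow the abstract; the toy resistance profiles are our own invention:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Susceptibility calls coded numerically per the SIR method.
SIR = {"S": 0, "I": 2, "R": 3}
profiles = [
    "SSSRR",  # isolate 1
    "SSSRR",  # isolate 2: identical profile to isolate 1
    "RRRSS",  # isolate 3: very different profile
]
X = np.array([[SIR[c] for c in p] for p in profiles], dtype=float)

# UPGMA = average linkage; cut the dendrogram at the cut-off distance 3.6.
Z = linkage(X, method="average", metric="euclidean")
clusters = fcluster(Z, t=3.6, criterion="distance")
```

Isolates with identical profiles merge at distance zero and share a cluster, while the divergent isolate sits beyond the cut-off and forms its own cluster, mirroring how the study groups strains for comparison with PFGE patterns.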
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
47 CFR 73.807 - Minimum distance separation between stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... and the right-hand column lists (for informational purposes only) the minimum distance necessary for...) Within 320 km of the Mexican border, LP100 stations must meet the following separations with respect to any Mexican stations: Mexican station class Co-channel (km) First-adjacent channel (km) Second-third...
Venara, A; Gaudin, A; Lebigot, J; Airagnes, G; Hamel, J F; Jousset, N; Ridereau-Zins, C; Mauillon, D; Rouge-Maillart, C
2013-06-10
Forensic doctors are frequently asked by magistrates, principally when dealing with knife wounds, about the depth to which the blade may have penetrated the victim's body. Without the use of imaging, it is often difficult to answer this question, even approximately. Knowledge of the various distances between organs and the skin wall would allow an assessment of the minimum blade length required to produce the injuries observed. The objective of this study is thus to determine average distances between the vital organs of the thorax and abdomen and the skin wall, taking into account the person's body mass index (BMI). This is a prospective single-center study, carried out over a 2-month period at the University Hospital of Angers. A sample of 200 people was studied. The inclusion criteria were as follows: all patients coming to the radiology department and the emergency department for an abdominal, thoracic or thoraco-abdominal scan with injection. The exclusion criteria were: patients presenting a large lymphoma, a large abdominal or retroperitoneal tumor, or a tumor in one of the organs targeted by our study, and patients presenting ascites. The organs examined were the pericardium, pleura, aorta, liver, spleen, kidneys, abdominal aorta and femoral arteries. The shortest distance between each organ and the skin wall was noted. Median distances were calculated according to gender, abdominal diameter and BMI. We combined these values to propose an indicative chart which may be used by doctors in connection with their forensic activities. The question of wound depth is frequently put to the expert; without a reliable tool it is difficult to assess, and a personal interpretation is often made. Even though computed tomography is now frequently performed in vivo or after death, measurement can be difficult because of local conditions.
We classified values according to the different factors of fat repartition (BMI, abdominal diameter, gender). These tables, collectively used, permit evaluation of the distance between wall and thoracic or abdominal vital organs. We suggest an indicative chart designed for forensic doctors in their professional life to help determine the minimum penetration length for a knife, which may wound a vital organ. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Performance Analysis of Classification Methods for Indoor Localization in Vlc Networks
NASA Astrophysics Data System (ADS)
Sánchez-Rodríguez, D.; Alonso-González, I.; Sánchez-Medina, J.; Ley-Bosch, C.; Díaz-Vilariño, L.
2017-09-01
Indoor localization has gained considerable attention over the past decade because of the emergence of numerous location-aware services. Research works have been proposed on solving this problem by using wireless networks. Nevertheless, there is still much room for improvement in the quality of the proposed classification models. In recent years, the emergence of Visible Light Communication (VLC) has brought a brand new approach to high-quality indoor positioning. Among its advantages, this new technology is immune to electromagnetic interference and has a smaller variance of received signal power compared to RF-based technologies. In this paper, a performance analysis of seventeen machine learning classifiers for indoor localization in VLC networks is carried out. The analysis is accomplished in terms of accuracy, average distance error, computational cost, training size, precision and recall measurements. Results show that most of the classifiers achieve an accuracy above 90%. The best tested classifier yielded 99.0% accuracy, with an average distance error of 0.3 centimetres.
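The two headline metrics used to rank the classifiers are easy to state precisely. A minimal sketch (function names and toy positions are ours; units depend on the fingerprint grid):

```python
import math

def accuracy(y_true, y_pred):
    """Fraction of test fingerprints assigned to the correct location class."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def avg_distance_error(true_xy, pred_xy):
    """Mean Euclidean distance between the true and the estimated positions."""
    return sum(math.dist(a, b) for a, b in zip(true_xy, pred_xy)) / len(true_xy)
```

Reporting both matters: a classifier can have high accuracy yet place its few errors far from the true position, which only the distance error reveals.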
Bissacco, Alessandro; Chiuso, Alessandro; Soatto, Stefano
2007-11-01
We address the problem of performing decision tasks, and in particular classification and recognition, in the space of dynamical models in order to compare time series of data. Motivated by the application of recognition of human motion in image sequences, we consider a class of models that include linear dynamics, both stable and marginally stable (periodic), both minimum and non-minimum phase, driven by non-Gaussian processes. This requires extending existing learning and system identification algorithms to handle periodic modes and nonminimum phase behavior, while taking into account higher-order statistics of the data. Once a model is identified, we define a kernel-based cord distance between models that includes their dynamics, their initial conditions as well as input distribution. This is made possible by a novel kernel defined between two arbitrary (non-Gaussian) distributions, which is computed by efficiently solving an optimal transport problem. We validate our choice of models, inference algorithm, and distance on the tasks of human motion synthesis (sample paths of the learned models), and recognition (nearest-neighbor classification in the computed distance). However, our work can be applied more broadly where one needs to compare historical data while taking into account periodic trends, non-minimum phase behavior, and non-Gaussian input distributions.
Exploiting Sparsity in Hyperspectral Image Classification via Graphical Models
2013-05-01
...distribution p by minimizing the Kullback-Leibler (KL) distance D(p‖p̂) = E_p[log(p/p̂)] using first- and second-order statistics, via a maximum-weight... Obtain sparse representations α_l, l = 1, ..., T, in R^N from the test image. Inference: classify based on the output of the resulting classifier using...
Optimum nonparametric estimation of population density based on ordered distances
Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.
1982-01-01
The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.
Ensemble Weight Enumerators for Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The derived results on ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
A new approach to keratoconus detection based on corneal morphogeometric analysis
Bataille, Laurent; Fernández-Pacheco, Daniel G.; Cañavate, Francisco J. F.; Alió, Jorge L.
2017-01-01
Purpose To characterize corneal structural changes in keratoconus using a new morphogeometric approach and to evaluate its potential diagnostic ability. Methods Comparative study including 464 eyes of 464 patients (age range, 16 to 72 years) divided into two groups: control group (143 healthy eyes) and keratoconus group (321 keratoconus eyes). Topographic information (Sirius, CSO, Italy) was processed with SolidWorks v2012 and a solid model representing the geometry of each cornea was generated. The following parameters were defined: anterior (Aant) and posterior (Apost) corneal surface areas, area of the cornea within the sagittal plane passing through the Z axis and the apex (Aapexant, Aapexpost) and minimum thickness points (Amctant, Amctpost) of the anterior and posterior corneal surfaces, and average distance from the Z axis to the apex (Dapexant, Dapexpost) and minimum thickness points (Dmctant, Dmctpost) of both corneal surfaces. Results Significant differences between the control and keratoconus groups were found in Aapexant, Aapexpost, Amctant, Amctpost, Dapexant, Dapexpost (all p<0.001), Apost (p = 0.014), and Dmctpost (p = 0.035). Significant correlations in the keratoconus group were found between Aant and Apost (r = 0.836), Amctant and Amctpost (r = 0.983), and Dmctant and Dmctpost (r = 0.954, all p<0.001). A logistic regression analysis revealed that the detection of keratoconus grade I (Amsler Krumeich) was related to Apost, Atot, Aapexant, Amctant, Amctpost, Dapexpost, Dmctant and Dmctpost (Hosmer-Lemeshow: p>0.05, R2 Nagelkerke: 0.926). The overall percentage of cases correctly classified by the model was 97.30%. Conclusions Our morphogeometric approach based on the analysis of the cornea as a solid is useful for the characterization and detection of keratoconus. PMID:28886157
Discrimination of malignant lymphomas and leukemia using Radon transform based-higher order spectra
NASA Astrophysics Data System (ADS)
Luo, Yi; Celenk, Mehmet; Bejai, Prashanth
2006-03-01
A new algorithm that can be used to automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes morphological watersheds to obtain boundaries of cells from cell images and isolate them from the surrounding background. The areas of cells are extracted from cell images after background subtraction. The Radon transform and higher-order spectra (HOS) analysis are utilized as image processing tools to generate class feature vectors for cells of different types and to extract the testing cells' feature vectors. The testing cells' feature vectors are then compared with the known class feature vectors for a possible match by computing the Euclidean distances. The cell in question is classified as belonging to one of the existing cell classes in the least Euclidean distance sense.
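The final matching step, assigning a cell to the class with the nearest stored feature vector, is a plain minimum distance classifier. A minimal sketch (names and toy feature vectors are ours; the real vectors come from the Radon/HOS pipeline):

```python
import math

def classify_min_distance(feature, class_features):
    """Assign the test cell to the class whose stored feature vector is
    closest in the Euclidean sense. `class_features` maps class name to
    its representative feature vector."""
    return min(class_features, key=lambda c: math.dist(feature, class_features[c]))
```

This is the "least Euclidean distance sense" decision rule in one line: no training beyond computing one representative vector per class.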
Impact of the reduced vertical separation minimum on the domestic United States
DOT National Transportation Integrated Search
2009-01-31
Aviation regulatory bodies have enacted the reduced vertical separation minimum standard over most of the globe. The reduced vertical separation minimum is a technique that reduces the minimum vertical separation distance between aircraft from 2000 t...
ERIC Educational Resources Information Center
Kubota, Keiichi; Fujikawa, Kiyoshi
2007-01-01
We implemented a synchronous distance course entitled: Introductory Finance designed for undergraduate students. This course was held between two Japanese universities. Stable Internet connections allowing minimum delay and minimum interruptions of the audio-video streaming signals were used. Students were equipped with their own PCs with…
Towards the use of similarity distances to music genre classification: A comparative study.
Goienetxea, Izaro; Martínez-Otzeta, José María; Sierra, Basilio; Mendialdua, Iñigo
2018-01-01
Music genre classification is a challenging research concept, for which open questions remain regarding the classification approach, music piece representation, distances between/within genres, and so on. In this paper an investigation on the classification of generated music pieces is performed, based on the idea that, after grouping closely related known pieces into different sets (or clusters) and then automatically generating a new song which is somehow "inspired" by each set, the new song would be more likely to be classified as belonging to the set which inspired it, based on the same distance used to separate the clusters. Different music piece representations and distances among pieces are used; the obtained results are promising, and indicate the appropriateness of the approach even in such a subjective area as music genre classification.
Towards the use of similarity distances to music genre classification: A comparative study
Martínez-Otzeta, José María; Sierra, Basilio; Mendialdua, Iñigo
2018-01-01
Music genre classification is a challenging research concept, for which open questions remain regarding the classification approach, music piece representation, distances between/within genres, and so on. In this paper an investigation on the classification of generated music pieces is performed, based on the idea that, after grouping closely related known pieces into different sets (or clusters) and then automatically generating a new song which is somehow "inspired" by each set, the new song would be more likely to be classified as belonging to the set which inspired it, based on the same distance used to separate the clusters. Different music piece representations and distances among pieces are used; the obtained results are promising, and indicate the appropriateness of the approach even in such a subjective area as music genre classification. PMID:29444160
41 CFR 105-62.102 - Authority to originally classify.
Code of Federal Regulations, 2013 CFR
2013-07-01
... originally classify. (a) Top secret, secret, and confidential. The authority to originally classify information as Top Secret, Secret, or Confidential may be exercised only by the Administrator and is delegable... classification authority. Delegations of original classification authority are limited to the minimum number...
41 CFR 105-62.102 - Authority to originally classify.
Code of Federal Regulations, 2011 CFR
2011-01-01
... originally classify. (a) Top secret, secret, and confidential. The authority to originally classify information as Top Secret, Secret, or Confidential may be exercised only by the Administrator and is delegable... classification authority. Delegations of original classification authority are limited to the minimum number...
41 CFR 105-62.102 - Authority to originally classify.
Code of Federal Regulations, 2014 CFR
2014-01-01
... originally classify. (a) Top secret, secret, and confidential. The authority to originally classify information as Top Secret, Secret, or Confidential may be exercised only by the Administrator and is delegable... classification authority. Delegations of original classification authority are limited to the minimum number...
Complex networks in the Euclidean space of communicability distances
NASA Astrophysics Data System (ADS)
Estrada, Ernesto
2012-06-01
We study the properties of complex networks embedded in a Euclidean space of communicability distances. The communicability distance between two nodes is defined as the difference between the weighted sum of walks self-returning to the nodes and the weighted sum of walks going from one node to the other. We give some indications that the communicability distance identifies the least crowded routes in networks where simultaneous submission of packages is taking place. We define an index Q, based on communicability and shortest-path distances, which allows the "small-world" phenomenon to be reinterpreted as the region of minimum Q in the Watts-Strogatz model. It also allows the classification and analysis of networks with different efficiencies of spatial use. Consequently, the communicability distance displays unique features for the analysis of complex networks in different scenarios.
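As a minimal sketch of the definition in this abstract (assuming an unweighted, undirected graph): with the communicability matrix G = e^A, where G_pq weights walks from p to q by inverse factorials of their length, the squared communicability distance is xi_pq^2 = G_pp + G_qq - 2 G_pq, i.e. self-returning walks minus connecting walks.

```python
import numpy as np
from scipy.linalg import expm

def communicability_distance(A):
    """Communicability distance matrix for adjacency matrix A.

    G = expm(A) sums walks of every length k weighted by 1/k!; the squared
    distance between nodes p and q is G_pp + G_qq - 2*G_pq.
    """
    G = expm(np.asarray(A, dtype=float))
    d = np.diag(G)
    xi2 = d[:, None] + d[None, :] - 2.0 * G
    return np.sqrt(np.maximum(xi2, 0.0))  # clip tiny negatives from round-off

# Path graph 0-1-2-3: nodes farther apart in the graph should also be
# farther apart in communicability distance.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
D = communicability_distance(A)
```

The clipping before the square root only guards against floating-point round-off; the quantity xi_pq^2 itself is non-negative.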
Modified fuzzy c-means applied to a Bragg grating-based spectral imager for material clustering
NASA Astrophysics Data System (ADS)
Rodríguez, Aida; Nieves, Juan Luis; Valero, Eva; Garrote, Estíbaliz; Hernández-Andrés, Javier; Romero, Javier
2012-01-01
We have modified the fuzzy c-means algorithm for an application related to the segmentation of hyperspectral images. The classical fuzzy c-means algorithm uses the Euclidean distance to compute sample membership in each cluster. We have introduced a different distance metric, the Spectral Similarity Value (SSV), in order to have a more suitable similarity measure for reflectance information. The SSV metric considers both magnitude differences (through the Euclidean distance) and spectral shape (through the Pearson correlation). Experiments confirmed that the introduction of this metric improves the quality of hyperspectral image segmentation, creating spectrally denser clusters and increasing the number of correctly classified pixels.
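A sketch of such a combined metric in Python. The abstract only states that SSV combines the Euclidean distance (magnitude) with the Pearson correlation (shape); the specific combination below, sqrt(de^2 + (1 - r^2)^2), is one published formulation and is an assumption here, not necessarily the exact expression used in the paper.

```python
import numpy as np

def ssv(x, y):
    """Spectral Similarity Value between two reflectance spectra.

    Magnitude term: Euclidean distance de. Shape term: (1 - r^2), with r the
    Pearson correlation. The combination sqrt(de^2 + (1 - r^2)^2) is assumed.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    de = np.linalg.norm(x - y)
    r = np.corrcoef(x, y)[0, 1]
    return float(np.sqrt(de**2 + (1.0 - r**2) ** 2))

spectrum = np.array([0.1, 0.3, 0.5, 0.4, 0.2])
brighter = 1.5 * spectrum                      # same shape, larger magnitude
noise = np.array([0.3, 0.1, 0.4, 0.2, 0.5])    # different shape
```

Note that for `brighter`, the correlation is 1, so the SSV reduces to the pure Euclidean term: the shape penalty only activates for spectra of genuinely different shape.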
The Simplified Aircraft-Based Paired Approach With the ALAS Alerting Algorithm
NASA Technical Reports Server (NTRS)
Perry, Raleigh B.; Madden, Michael M.; Torres-Pomales, Wilfredo; Butler, Ricky W.
2013-01-01
This paper presents the results of an investigation of a proposed concept for closely spaced parallel runways called the Simplified Aircraft-based Paired Approach (SAPA). This procedure depends upon a new alerting algorithm called the Adjacent Landing Alerting System (ALAS). This study used both low fidelity and high fidelity simulations to validate the SAPA procedure and test the performance of the new alerting algorithm. The low fidelity simulation enabled a determination of minimum approach distance for the worst case over millions of scenarios. The high fidelity simulation enabled an accurate determination of timings and minimum approach distance in the presence of realistic trajectories, communication latencies, and total system error for 108 test cases. The SAPA procedure and the ALAS alerting algorithm were applied to the 750-ft parallel spacing (e.g., SFO 28L/28R) approach problem. With the SAPA procedure as defined in this paper, this study concludes that a 750-ft application does not appear to be feasible, but preliminary results for 1000-ft parallel runways look promising.
A comparison of minimum distance and maximum likelihood techniques for proportion estimation
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.
1982-01-01
The estimation of the mixing proportions p_1, p_2, ..., p_m in the mixture density f(x) = sum from i = 1 to m of p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i usually represent crop proportions. In these remote sensing applications, the component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when the component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When the component distributions are not symmetric, however, neither of these normal-based techniques provides satisfactory results.
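As a minimal sketch of the ML side of this setup: when the component densities f_i are known and only the proportions p_i are estimated, the EM algorithm reduces to averaging posterior responsibilities. The two-component normal example below is illustrative, not data from the paper.

```python
import numpy as np
from scipy.stats import norm

def em_proportions(x, components, n_iter=200):
    """EM updates for the mixing proportions p_i of f(x) = sum_i p_i f_i(x),
    with the component densities f_i held fixed (only the p_i are estimated)."""
    k = len(components)
    p = np.full(k, 1.0 / k)
    dens = np.stack([c.pdf(x) for c in components])  # shape (k, n)
    for _ in range(n_iter):
        w = p[:, None] * dens      # unnormalised posterior responsibilities
        w /= w.sum(axis=0)         # each column now sums to 1
        p = w.mean(axis=1)         # M-step: average responsibility per component
    return p

# Two known normal components with true proportions 0.7 / 0.3.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 700), rng.normal(4.0, 1.0, 300)])
p_hat = em_proportions(x, [norm(0.0, 1.0), norm(4.0, 1.0)])
```

With well-separated components, `p_hat` recovers the sample proportions closely; the MD alternative studied in the paper would instead minimise a distance between the empirical and model distribution functions.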
Multi-image acquisition-based distance sensor using agile laser spot beam.
Riza, Nabeel A; Amin, M Junaid
2014-09-01
We present a novel laser-based distance measurement technique that uses multiple-image-based spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam waist location. The proposed high-resolution distance sensor design uses an electronically controlled variable focus lens (ECVFL) in combination with an optical imaging device, such as a charge-coupled device (CCD), to produce and capture laser spot images on a target at spot sizes different from the minimal spot size possible at that target distance. By exploiting the unique relationship of the target-located spot sizes with the varying ECVFL focal length for each target distance, the proposed distance sensor can compute the target distance with a distance measurement resolution better than the axial resolution given by the Rayleigh resolution criterion. Using a 30 mW 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL, along with a 20 cm focal length bias lens, and using five spot images captured per target position by a CCD-based Nikon camera, a proof-of-concept distance sensor is successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, which is better than the axial Rayleigh resolution limit at these target distances. Applications for the proposed, potentially cost-effective distance sensor are diverse and include industrial inspection and measurement as well as 3D object shape mapping and imaging.
Textual and visual content-based anti-phishing: a Bayesian approach.
Zhang, Haijun; Liu, Gang; Chow, Tommy W S; Liu, Wenyin
2011-10-01
A novel framework using a Bayesian approach for content-based phishing web page detection is presented. Our model takes into account textual and visual contents to measure the similarity between the protected web page and suspicious web pages. A text classifier, an image classifier, and an algorithm fusing the results from classifiers are introduced. An outstanding feature of this paper is the exploration of a Bayesian model to estimate the matching threshold. This is required in the classifier for determining the class of the web page and identifying whether the web page is phishing or not. In the text classifier, the naive Bayes rule is used to calculate the probability that a web page is phishing. In the image classifier, the earth mover's distance is employed to measure the visual similarity, and our Bayesian model is designed to determine the threshold. In the data fusion algorithm, the Bayes theory is used to synthesize the classification results from textual and visual content. The effectiveness of our proposed approach was examined in a large-scale dataset collected from real phishing cases. Experimental results demonstrated that the text classifier and the image classifier we designed deliver promising results, the fusion algorithm outperforms either of the individual classifiers, and our model can be adapted to different phishing cases. © 2011 IEEE
Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm
NASA Astrophysics Data System (ADS)
Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.
2014-11-01
A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all of its nodes, such that the sum of the weights of its edges is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the most commonly known rudimentary algorithm, Prim's algorithm, to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problems easily, efficiently and effectively. The selection of an appropriate algorithm is essential; otherwise it is hard to obtain an optimal result. For a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper solves the minimum spanning tree problem of a road network by finding its minimum span while considering all of the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems, and location-allocation problems. Therefore, in this study we have developed a customized GIS tool, using a Python script in ArcGIS software, for the solution of the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. The tool has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and provides information adapted to the needs of the users. This GIS tool for MST can be applied to a nationwide plan in India, the Pradhan Mantri Gram Sadak Yojana, to provide optimal all-weather road connectivity to unconnected villages (points). This tool is also useful for constructing highways or railways spanning several cities optimally, or for connecting all cities with minimum total road length.
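A minimal, GIS-free sketch of the underlying Prim step on a weight (adjacency) matrix; the Dehradun road data and the ArcGIS toolbox integration described above are omitted, and the toy matrix below is invented.

```python
import heapq

def prim_mst(W):
    """Prim's algorithm on a weight (adjacency) matrix.

    W[i][j] is the edge weight between nodes i and j, or None if there is no
    edge. Returns (edges, total_weight) for the minimum spanning tree.
    """
    n = len(W)
    visited = [False] * n
    edges, total = [], 0.0
    heap = [(0.0, 0, -1)]  # (weight, node, parent); start from node 0
    while heap:
        w, u, parent = heapq.heappop(heap)
        if visited[u]:
            continue
        visited[u] = True
        if parent >= 0:
            edges.append((parent, u, w))
            total += w
        for v in range(n):
            if W[u][v] is not None and not visited[v]:
                heapq.heappush(heap, (W[u][v], v, u))
    return edges, total

# Toy 4-junction network: weights could be distance or travel time.
W = [[None, 1,    4,    None],
     [1,    None, 2,    None],
     [4,    2,    None, 3],
     [None, None, 3,    None]]
edges, total = prim_mst(W)
```

On this network the MST keeps edges 0-1, 1-2 and 2-3 (total weight 6), discarding the heavier 0-2 link.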
Slit identification for a uranium slab using a binary classifier based on cosmic-ray muon scattering
NASA Astrophysics Data System (ADS)
Xiao, S.; He, W.; Chen, Y.; Dang, X.; Wu, L.; Shuai, M.
2017-12-01
The traditional muon tomographic method is fraught with difficulty when applied to identifying defective high-Z objects or other complicated structures, since it usually runs into trouble when attempting to produce a precise three-dimensional image of such objects. In this paper, we present a binary classifier based on cosmic-ray muon scattering to identify a slit potentially located in a uranium slab. The advantage of this classifier is that it steers clear of the troublesome imaging procedure required by conventional methods. Simulation results demonstrate its capability to spot a horizontal or vertical slit within a reasonable exposure time. The minimum width of a detectable slit is on the level of millimeters or even sub-millimeters. This technique is therefore promising for monitoring the long-term status of nuclear storage and facilities in real life.
[Home health resource utilization measures using a case-mix adjustor model].
You, Sun-Ju; Chang, Hyun-Sook
2005-08-01
The purpose of this study was to measure home health resource utilization using a Case-Mix Adjustor Model developed in the U.S. The subjects of this study were 484 patients who had received home health care for more than 4 visits during a 60-day episode at 31 home health care institutions. Data on the 484 patients had to be merged onto a 60-day payment segment. Based on the results, the researcher classified home health resource groups (HHRG). The subjects were classified into 34 HHRGs in Korea. Home health resource utilization by clinical severity was, in increasing order, Minimum (C0) < Low (C1) < Moderate (C2) < High (C3); by dependency in daily activities it was Minimum (F0) < High (F3) < Medium (F2) < Low (F1) < Maximum (F4). Resource utilization by HHRG was highest, at 564,735 won, in group C0F0S2 (clinical severity minimum, dependency in daily activity minimum, service utilization moderate), and lowest, at 97,000 won, in group C2F3S1, so the former was 5.82 times higher than the latter. Resource utilization in home health care has become an issue of concern due to rising costs of home health care. The results suggest the need for more analytical attention to utilization and expenditures for home care using a Case-Mix Adjustor Model.
Inostroza-Michael, Oscar; Hernández, Cristián E; Rodríguez-Serrano, Enrique; Avaria-Llautureo, Jorge; Rivadeneira, Marcelo M
2018-05-01
Among the earliest macroecological patterns documented is the range and body size relationship, characterized by a minimum geographic range size imposed by the species' body size. This boundary for the geographic range size increases linearly with body size and has been proposed to have implications for lineage evolution and conservation. Nevertheless, the macroevolutionary processes involved in the origin of this boundary and its consequences for lineage diversification have been poorly explored. We evaluate the macroevolutionary consequences of the difference (hereafter the distance) between the observed range size and the minimum range size required by the species' body size, to untangle its role in the diversification of a species-rich Neotropical bird clade using trait-dependent diversification models. We show that speciation rate is a positive hump-shaped function of the distance to the lower boundary. Species with the highest and lowest distances to the minimum range size had lower speciation rates, while species at intermediate distances had the highest speciation rates. Further, our results suggest that the distance to the minimum range size is a macroevolutionary constraint that affects the diversification process responsible for the origin of this macroecological pattern in a more complex way than previously envisioned. © 2018 The Author(s). Evolution © 2018 The Society for the Study of Evolution.
An information-based network approach for protein classification
Wan, Xiaogeng; Zhao, Xin; Yau, Stephen S. T.
2017-01-01
Protein classification is one of the critical problems in bioinformatics. Early studies used geometric distances and phylogenetic trees to classify proteins; these methods use binary trees to present protein classifications. In this paper, we propose a new protein classification method in which theories of information and networks are used to classify the multivariate relationships of proteins. In this study, the protein universe is modeled as an undirected network, where proteins are classified according to their connections. Our method is unsupervised, multivariate, and alignment-free. It can be applied to the classification of both protein sequences and structures. Nine examples are used to demonstrate the efficiency of our new method. PMID:28350835
A Neural-Network Clustering-Based Algorithm for Privacy Preserving Data Mining
NASA Astrophysics Data System (ADS)
Tsiafoulis, S.; Zorkadis, V. C.; Karras, D. A.
The increasing use of fast and efficient data mining algorithms on huge collections of personal data, facilitated by the exponential growth of technology, in particular in the fields of electronic data storage media and processing power, has raised serious ethical, philosophical and legal issues related to privacy protection. To cope with these concerns, several privacy-preserving methodologies have been proposed, classified into two categories: methodologies that aim at protecting the sensitive data and those that aim at protecting the mining results. In our work, we focus on sensitive data protection and compare existing techniques according to the degree of anonymity achieved, the information loss suffered and their performance characteristics. The ℓ-diversity principle is combined with k-anonymity concepts, so that background information cannot be exploited to successfully attack the privacy of the data subjects to whom the data refer. Based on Kohonen Self Organizing Feature Maps (SOMs), we first organize data sets in subspaces according to their information-theoretical distance to each other, then create the most relevant classes, paying special attention to rare sensitive attribute values, and finally generalize attribute values to the minimum extent required so that both the data disclosure probability and the information loss are kept negligible. Furthermore, we propose information-theoretical measures for assessing the degree of anonymity achieved and empirical tests to demonstrate it.
A method for locating potential tree-planting sites in urban areas: a case study of Los Angeles, USA
Wu, Chunxia; Xiao, Qingfu; McPherson, Gregory E.
2008-01-01
A GIS-based method for locating potential tree-planting sites based on land cover data is introduced. Criteria were developed to identify locations that are spatially available for potential tree planting based on land cover, sufficient distance from impervious surfaces, a minimum amount of pervious surface, and no crown overlap with other trees. In an ArcGIS...
Event Recognition Based on Deep Learning in Chinese Texts
Zhang, Yajun; Liu, Zongtian; Zhou, Wen
2016-01-01
Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%. PMID:27501231
A Comparative Study of Different EEG Reference Choices for Diagnosing Unipolar Depression.
Mumtaz, Wajid; Malik, Aamir Saeed
2018-06-02
The choice of an electroencephalogram (EEG) reference has fundamental importance and could be critical during clinical decision-making because an impure EEG reference could falsify the clinical measurements and subsequent inferences. In this research, the suitability of three EEG references was compared while classifying depressed and healthy brains using a machine-learning (ML)-based validation method. In this research, the EEG data of 30 unipolar depressed subjects and 30 age-matched healthy controls were recorded. The EEG data were analyzed in three different EEG references, the link-ear reference (LE), average reference (AR), and reference electrode standardization technique (REST). The EEG-based functional connectivity (FC) was computed. Also, the graph-based measures, such as the distances between nodes, minimum spanning tree, and maximum flow between the nodes for each channel pair, were calculated. An ML scheme provided a mechanism to compare the performances of the extracted features that involved a general framework such as the feature extraction (graph-based theoretic measures), feature selection, classification, and validation. For comparison purposes, the performance metrics such as the classification accuracies, sensitivities, specificities, and F scores were computed. When comparing the three references, the diagnostic accuracy showed better performances during the REST, while the LE and AR showed less discrimination between the two groups. Based on the results, it can be concluded that the choice of appropriate reference is critical during the clinical scenario. The REST reference is recommended for future applications of EEG-based diagnosis of mental illnesses.
On the minimum orbital intersection distance computation: a new effective method
NASA Astrophysics Data System (ADS)
Hedo, José M.; Ruíz, Manuel; Peláez, Jesús
2018-06-01
The computation of the Minimum Orbital Intersection Distance (MOID) is an old, but increasingly relevant problem. Fast and precise methods for MOID computation are needed to select potentially hazardous asteroids from a large catalogue. The same applies to debris with respect to spacecraft. An iterative method that strictly meets these two premises is presented.
ERIC Educational Resources Information Center
Goldman, Susan R.
The comprehension of the Minimum Distance Principle was examined in three experiments, using the "tell/promise" sentence construction. Experiment one compared the listening and reading comprehension of singly presented sentences, e.g. "John tells Bill to bake the cake" and "John promises Bill to bake the cake." The…
30 CFR 77.807-3 - Movement of equipment; minimum distance from high-voltage lines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... high-voltage lines. 77.807-3 Section 77.807-3 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION... WORK AREAS OF UNDERGROUND COAL MINES Surface High-Voltage Distribution § 77.807-3 Movement of equipment; minimum distance from high-voltage lines. When any part of any equipment operated on the surface of any...
30 CFR 77.807-2 - Booms and masts; minimum distance from high-voltage lines.
Code of Federal Regulations, 2010 CFR
2010-07-01
...-voltage lines. 77.807-2 Section 77.807-2 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION... WORK AREAS OF UNDERGROUND COAL MINES Surface High-Voltage Distribution § 77.807-2 Booms and masts; minimum distance from high-voltage lines. The booms and masts of equipment operated on the surface of any...
The Application of Speaker Recognition Techniques in the Detection of Tsunamigenic Earthquakes
NASA Astrophysics Data System (ADS)
Gorbatov, A.; O'Connell, J.; Paliwal, K.
2015-12-01
Tsunami warning procedures adopted by national tsunami warning centres largely rely on the classical approach of earthquake location, magnitude determination, and the consequent modelling of tsunami waves. Although this approach is based on known physical theories of earthquake and tsunami generation processes, its main shortcoming may be the need to satisfy minimum seismic data requirements to estimate those physical parameters. At least four seismic stations are necessary to locate the earthquake, and a minimum of approximately 10 minutes of seismic waveform observation is needed to reliably estimate the magnitude of a large earthquake similar to the 2004 Indian Ocean Tsunami Earthquake of M9.2. Consequently, the total time to tsunami warning can be more than half an hour. In an attempt to reduce the time to tsunami alert, a new approach is proposed based on the classification of tsunamigenic and non-tsunamigenic earthquakes using speaker recognition techniques. A Tsunamigenic Dataset (TGDS) was compiled to promote the development of machine learning techniques for application to seismic trace analysis and, in particular, tsunamigenic event detection, and to compare them to existing seismological methods. The TGDS contains 227 offshore events (87 tsunamigenic and 140 non-tsunamigenic earthquakes with M≥6) from Jan 2000 to Dec 2011, inclusive. A Support Vector Machine classifier using a radial-basis-function kernel was applied to spectral features derived from 400 s frames of 3-component, 1-Hz broadband seismometer data. Ten-fold cross-validation was used during training to choose classifier parameters. Voting was applied to the classifier predictions from each station to form an overall prediction for an event.
The F1 score (harmonic mean of precision and recall) was chosen to rate each classifier as it provides a compromise between type-I and type-II errors, and due to the imbalance between the representative number of events in the tsunamigenic and non-tsunamigenic classes. The described classifier achieved an F1 score of 0.923, with tsunamigenic classification precision and recall/sensitivity of 0.928 and 0.919 respectively. The system requires a minimum of 3 stations with ~400 seconds of data each to make a prediction. The accuracy improves as further stations and data become available.
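The reported F1 score can be checked directly from the quoted precision and recall for the tsunamigenic class, since F1 is their harmonic mean:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2.0 * precision * recall / (precision + recall)

# Values reported in the abstract for the tsunamigenic class.
score = f1_score(0.928, 0.919)  # ~0.923, matching the reported F1
```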
New method for distance-based close following safety indicator.
Sharizli, A A; Rahizar, R; Karim, M R; Saifizul, A A
2015-01-01
The increase in the number of fatalities caused by road accidents involving heavy vehicles every year has raised the level of concern and awareness on road safety in developing countries like Malaysia. Changes in the vehicle dynamic characteristics such as gross vehicle weight, travel speed, and vehicle classification will affect a heavy vehicle's braking performance and its ability to stop safely in emergency situations. As such, the aim of this study is to establish a more realistic new distance-based safety indicator called the minimum safe distance gap (MSDG), which incorporates vehicle classification (VC), speed, and gross vehicle weight (GVW). Commercial multibody dynamics simulation software was used to generate braking distance data for various heavy vehicle classes under various loads and speeds. By applying nonlinear regression analysis to the simulation results, a mathematical expression of MSDG has been established. The results show that MSDG is dynamically changed according to GVW, VC, and speed. It is envisaged that this new distance-based safety indicator would provide a more realistic depiction of the real traffic situation for safety analysis.
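The MSDG expression itself is not reproduced in this abstract, so the sketch below only illustrates the stated workflow: fit a nonlinear regression of safe gap against speed and gross vehicle weight. The quadratic-in-speed functional form and the synthetic data are assumptions, not the paper's model; `scipy.optimize.curve_fit` stands in for their regression.

```python
import numpy as np
from scipy.optimize import curve_fit

def msdg_model(X, a, b, c):
    """Hypothetical functional form: the safe gap grows with v^2 (braking
    energy) and with gross vehicle weight. The paper's actual expression
    is not reproduced here."""
    v, gvw = X
    return a * v**2 * (1.0 + b * gvw) + c

# Stand-in for braking-distance data from the multibody simulation.
v = np.linspace(10.0, 30.0, 40)            # travel speed, m/s
gvw = np.tile([10.0, 20.0], 20)            # gross vehicle weight, tonnes
y = msdg_model((v, gvw), 0.05, 0.02, 5.0)  # noiseless, for illustration

popt, _ = curve_fit(msdg_model, (v, gvw), y, p0=[0.1, 0.01, 1.0])
```

On noiseless data the fit recovers the generating coefficients, confirming the form is identifiable; real simulation output would of course carry noise and model error.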
Method of Menu Selection by Gaze Movement Using AC EOG Signals
NASA Astrophysics Data System (ADS)
Kanoh, Shin'ichiro; Futami, Ryoko; Yoshinobu, Tatsuo; Hoshimiya, Nozomu
A method to detect the direction and the distance of voluntary eye gaze movement from EOG (electrooculogram) signals was proposed and tested. In this method, AC-amplified vertical and horizontal transient EOG signals were classified into 8 classes of direction and 2 classes of distance of voluntary eye gaze movements. The horizontal and vertical EOGs during an eye gaze movement at each sampling time were treated as a two-dimensional vector, and the center of gravity of the sample vectors whose norms were more than 80% of the maximum norm was used as the feature vector to be classified. Using the k-nearest neighbor algorithm for classification, the averaged correct detection rates for the three subjects were 98.9%, 98.7%, and 94.4%, respectively. This method avoids strict EOG-based eye tracking, which requires DC amplification of a very small signal. It would be useful for developing robust menu-selection-based human interfacing systems for severely paralyzed patients.
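A sketch of the feature extraction and k-NN step described above. The 80%-of-maximum-norm centroid follows the abstract; the synthetic EOG traces, amplitudes and training examples are invented for illustration.

```python
import numpy as np

def gaze_feature(h, v):
    """2-D feature from horizontal/vertical EOG traces: the centre of gravity
    of the sample vectors whose norm is at least 80% of the maximum norm."""
    pts = np.stack([np.asarray(h, float), np.asarray(v, float)], axis=1)
    norms = np.linalg.norm(pts, axis=1)
    return pts[norms >= 0.8 * norms.max()].mean(axis=0)

def knn_predict(x, train_X, train_y, k=3):
    """Plain k-nearest-neighbour vote on Euclidean distance."""
    order = np.argsort(np.linalg.norm(train_X - x, axis=1))
    votes, counts = np.unique(train_y[order[:k]], return_counts=True)
    return votes[np.argmax(counts)]

# Synthetic transient for a rightward gaze: strong horizontal deflection,
# weak vertical one (amplitudes are invented).
t = np.linspace(0.0, 1.0, 50)
h = 100.0 * np.sin(np.pi * t)
v = 5.0 * np.sin(np.pi * t)
f = gaze_feature(h, v)

# Invented training features for three of the gaze classes.
train_X = np.array([[90.0, 5.0], [85.0, -4.0],
                    [-88.0, 3.0], [-92.0, -2.0],
                    [2.0, 90.0], [-3.0, 88.0]])
train_y = np.array(["right", "right", "left", "left", "up", "up"])
```

Restricting the centroid to near-peak samples makes the feature insensitive to the slow rise and fall of the transient, which is presumably why the paper thresholds at 80% of the maximum norm.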
Interstellar reddening information system
NASA Astrophysics Data System (ADS)
Burnashev, V. I.; Grigorieva, E. A.; Malkov, O. Yu.
2013-10-01
We describe an electronic bibliographic information system, based on a card catalog, containing some 2500 references (publications of 1930-2009) on interstellar extinction. We have classified the articles according to their content. We present here a list of articles devoted to two categories: maps of total extinction and variation of interstellar extinction with the distance to the object. The catalog is tested using published data on open clusters, and conclusions on the applicability of different maps of interstellar extinctions for various distances are made.
Generalising Ward's Method for Use with Manhattan Distances.
Strauss, Trudie; von Maltitz, Michael Johan
2017-01-01
The claim that Ward's linkage algorithm in hierarchical clustering is limited to use with Euclidean distances is investigated. In this paper, Ward's clustering algorithm is generalised for use with the l1 norm, or Manhattan distance. We argue that the generalisation of Ward's linkage method to incorporate Manhattan distances is theoretically sound, and we provide an example where this method outperforms the method using Euclidean distances. As an application, we perform statistical analyses on languages using methods normally applied to biology and genetic classification. We aim to quantify differences in character traits between languages and use a statistical language signature based on relative bi-gram (sequence of two letters) frequencies to calculate a distance matrix between 32 Indo-European languages. We then use Ward's method of hierarchical clustering to classify the languages, using both the Euclidean distance and the Manhattan distance. Results obtained using the different distance metrics are compared, showing that Ward's algorithm characteristic of minimising intra-cluster variation and maximising inter-cluster variation is not violated when using the Manhattan metric.
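A sketch of the bi-gram signature and Manhattan distance described above, on toy strings rather than the 32-language corpus; the simple letters-only normalisation (word boundaries ignored) is an assumption.

```python
from collections import Counter

def bigram_signature(text):
    """Relative bi-gram frequencies over the letters of a text."""
    letters = "".join(ch for ch in text.lower() if ch.isalpha())
    counts = Counter(letters[i:i + 2] for i in range(len(letters) - 1))
    total = sum(counts.values())
    return {bg: c / total for bg, c in counts.items()}

def manhattan(sig_a, sig_b):
    """l1 (Manhattan) distance between two sparse frequency signatures."""
    keys = set(sig_a) | set(sig_b)
    return sum(abs(sig_a.get(k, 0.0) - sig_b.get(k, 0.0)) for k in keys)

eng1 = bigram_signature("the quick brown fox jumps over the lazy dog")
eng2 = bigram_signature("the dog jumps over the lazy brown fox quickly")
xxx = bigram_signature("zzzz qqqq xxxx zzzz qqqq")
```

Pairwise distances computed this way form the distance matrix that would then be fed to the (generalised) Ward linkage; two English-like texts land much closer to each other than to a text with disjoint bi-grams.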
Mei, Jiangyuan; Liu, Meizhu; Wang, Yuan-Fang; Gao, Huijun
2016-06-01
Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or with different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning with triplet constraint model which can learn Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied on nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.
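A sketch of DTW with a Mahalanobis local distance, as described above. The LogDet-divergence metric learning step is omitted; a fixed positive-definite matrix M (here the identity, which recovers Euclidean DTW) stands in for the learned Mahalanobis matrix, and the toy series are invented.

```python
import numpy as np

def dtw_mahalanobis(A, B, M):
    """DTW cost between multivariate series A (n, d) and B (m, d), using the
    local distance d(x, y) = sqrt((x - y)^T M (x - y)). M must be positive
    semidefinite; it stands in for the learned Mahalanobis matrix."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = A[i - 1] - B[j - 1]
            cost = np.sqrt(diff @ M @ diff)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

M = np.eye(2)  # identity M reduces to ordinary Euclidean DTW
A = np.array([[0, 0], [1, 1], [2, 2], [3, 3]], float)
B = np.array([[0, 0], [1, 1], [1, 1], [2, 2], [3, 3]], float)  # time-stretched copy
```

DTW aligns the stretched copy to the original at zero cost despite the differing lengths, which is exactly the out-of-synchronization case the paper targets.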
Automatic tissue characterization from ultrasound imagery
NASA Astrophysics Data System (ADS)
Kadah, Yasser M.; Farag, Aly A.; Youssef, Abou-Bakr M.; Badawi, Ahmed M.
1993-08-01
In this work, feature extraction algorithms are proposed to extract the tissue characterization parameters from liver images. Then the resulting parameter set is further processed to obtain the minimum number of parameters representing the most discriminating pattern space for classification. This preprocessing step was applied to over 120 pathology-investigated cases to obtain the learning data for designing the classifier. The extracted features are divided into independent training and test sets and are used to construct both statistical and neural classifiers. The optimal criteria for these classifiers are set to have minimum error, ease of implementation and learning, and the flexibility for future modifications. Various algorithms for implementing various classification techniques are presented and tested on the data. The best performance was obtained using a single layer tensor model functional link network. Also, the voting k-nearest neighbor classifier provided comparably good diagnostic rates.
Qiu, Zhijun; Zhou, Bo; Yuan, Jiangfeng
2017-11-21
Protein-protein interaction site (PPIS) prediction must deal with the diversity of interaction sites, which limits prediction accuracy. Use of proteins with unknown or unidentified interactions can also lead to missing interfaces, and such data errors are often brought into the training dataset. In response to these two problems, we used the minimum covariance determinant (MCD) method to refine the training data to build a predictor with better performance, exploiting its ability to remove outliers. In order to predict test data in practice, a method based on Mahalanobis distance was devised to select proper test data as input for the predictor. With leave-one-out validation and an independent test, after the Mahalanobis distance screening, our method achieved higher performance according to the Matthews correlation coefficient (MCC), although only part of the test data could be predicted. These results indicate that data refinement is an efficient approach to improving protein-protein interaction site prediction. By further optimizing our method, we hope to develop predictors with better performance and a wider range of application. Copyright © 2017 Elsevier Ltd. All rights reserved.
Some Evidence of Continuing Linguistic Acquisitions in Learning Adolescents.
ERIC Educational Resources Information Center
Thomas, Elizabeth K.; Walmsley, Sean A.
The linguistic development of 42 learning disabled students 10-16 years old was examined. Responses were elicited to five linguistic structures, including the distinction between "ask" and "tell", pronominal restriction, and the minimum distance principle. Data were analyzed in terms of three groups based on Verbal and Performance differentials on…
About neighborhood counting measure metric and minimum risk metric.
Argentini, Andrea; Blanzieri, Enrico
2010-04-01
In a 2006 TPAMI paper, Wang proposed the Neighborhood Counting Measure, a similarity measure for the k-NN algorithm. In his paper, Wang mentioned the Minimum Risk Metric (MRM), an early distance measure based on the minimization of the risk of misclassification. Wang did not compare NCM to MRM because of its allegedly excessive computational load. In this comment paper, we complete the comparison that was missing in Wang's paper and, from our empirical evaluation, we show that MRM outperforms NCM and that its running time is not prohibitive as Wang suggested.
On the Implementation of a Land Cover Classification System for SAR Images Using Khoros
NASA Technical Reports Server (NTRS)
Medina Revera, Edwin J.; Espinosa, Ramon Vasquez
1997-01-01
The Synthetic Aperture Radar (SAR) sensor is widely used to record data about the ground under all atmospheric conditions. SAR-acquired images have very good resolution, which necessitates the development of a classification system that processes the SAR images to extract useful information for different applications. In this work, a complete system for land cover classification was designed and programmed using Khoros, a data flow visual language environment, taking full advantage of the polymorphic data services that it provides. Image analysis was applied to SAR images to improve and automate the processes of recognition and classification of different regions, such as mountains and lakes. Both unsupervised and supervised classification utilities were used. The unsupervised classification routines included several classification/clustering algorithms such as K-means, ISO2, Weighted Minimum Distance, and the Localized Receptive Field (LRF) training/classifier. Different texture analysis approaches, such as invariant moments, fractal dimension, and second-order statistics, were implemented for supervised classification of the images. The results and conclusions for SAR image classification using the various unsupervised and supervised procedures are presented based on their accuracy and performance.
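A "weighted minimum distance" classifier of the kind listed above can be read as minimum distance to class means with per-band weights (for example, inverse band variances). A small sketch under that assumption; the helper names are illustrative and are not Khoros routines:

```python
def class_means(samples, labels):
    """Per-class mean vector from labeled training pixels."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums.get(y, [0.0] * len(x)), x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def classify(x, means, weights):
    """Assign pixel x to the class with the smallest weighted squared
    distance to its mean; weights let some bands count more than others."""
    def wdist(m):
        return sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, m))
    return min(means, key=lambda y: wdist(means[y]))
```

Uniform weights recover the plain minimum-distance-to-mean classifier mentioned throughout this collection.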
NASA Astrophysics Data System (ADS)
Kozoderov, V. V.; Kondranin, T. V.; Dmitriev, E. V.
2017-12-01
The basic model for the recognition of natural and anthropogenic objects from their spectral and textural features is described for the problem of processing hyperspectral airborne and spaceborne imagery. The model is based on improvements of the Bayesian classifier, a computational procedure for statistical decision making in machine-learning methods of pattern recognition. The principal component method is implemented to decompose the hyperspectral measurements on the basis of empirical orthogonal functions. Application examples are shown of various modifications of the Bayesian classifier and the Support Vector Machine method. Examples are provided of comparing these classifiers with a metrical classifier that operates by finding the minimal Euclidean distance between different points and sets in the multidimensional feature space. A comparison is also carried out with the "K-weighted neighbors" method, which is close to the nonparametric Bayesian classifier.
Content-based image retrieval for interstitial lung diseases using classification confidence
NASA Astrophysics Data System (ADS)
Dash, Jatindra Kumar; Mukhopadhyay, Sudipta; Prabhakar, Nidhi; Garg, Mandeep; Khandelwal, Niranjan
2013-02-01
Content Based Image Retrieval (CBIR) systems can exploit the wealth of High-Resolution Computed Tomography (HRCT) data stored in archives by finding similar images to assist radiologists in self-learning and in the differential diagnosis of Interstitial Lung Diseases (ILDs). HRCT findings of ILDs are classified into several categories (e.g., consolidation, emphysema, ground glass, nodular) based on their texture-like appearances. Therefore, analysis of ILDs is considered a texture analysis problem. Many approaches have been proposed for CBIR of lung images using texture as the primitive visual content. This paper presents a new approach to CBIR for ILDs. The proposed approach makes use of a trained neural network (NN) to find the output class label of a query image. The degree of confidence of the NN classifier is analyzed using a Naive Bayes classifier that dynamically decides the size of the search space to be used for retrieval. The proposed approach is compared with three simple distance-based and one classifier-based texture retrieval approaches. Experimental results show that the proposed technique achieved the highest average precision of 92.60% with the lowest standard deviation of 20.82%.
A novel three-stage distance-based consensus ranking method
NASA Astrophysics Data System (ADS)
Aghayi, Nazila; Tavana, Madjid
2018-05-01
In this study, we propose a three-stage weighted sum method for identifying the group ranks of alternatives. In the first stage, a rank matrix, similar to the cross-efficiency matrix, is obtained by computing the individual rank position of each alternative based on importance weights. In the second stage, a secondary goal is defined to limit the vector of weights since the vector of weights obtained in the first stage is not unique. Finally, in the third stage, the group rank position of alternatives is obtained based on a distance of individual rank positions. The third stage determines a consensus solution for the group so that the ranks obtained have a minimum distance from the ranks acquired by each alternative in the previous stage. A numerical example is presented to demonstrate the applicability and exhibit the efficacy of the proposed method and algorithms.
Comparison of Classifier Architectures for Online Neural Spike Sorting.
Saeed, Maryam; Khan, Amir Ali; Kamboh, Awais Mehmood
2017-04-01
High-density, intracranial recordings from micro-electrode arrays need to undergo spike sorting in order to associate the recorded neuronal spikes with particular neurons. This involves spike detection, feature extraction, and classification. To reduce the data transmission and power requirements, on-chip real-time processing is becoming very popular. However, high computational resources are required for classifiers in on-chip spike sorters, making scalability a great challenge. In this review paper, we analyze several popular classifiers to propose five new hardware architectures using the off-chip training with on-chip classification approach. These include support vector classification, fuzzy C-means classification, self-organizing maps classification, moving-centroid K-means classification, and cosine distance classification. The performance of these architectures is analyzed in terms of accuracy and resource requirements. We establish that the neural-network-based Self-Organizing Maps classifier offers the most viable solution. A spike sorter based on the Self-Organizing Maps classifier requires only 7.83% of the computational resources of the best-reported spike sorter, hierarchical adaptive means, while offering 3% better accuracy at 7 dB SNR.
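The cosine distance classification listed above can be sketched as nearest-centroid assignment, with centroids learned off-chip and only the distance comparison done on-chip. This is an illustrative software model of that scheme, not the hardware architecture itself:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def classify_spike(spike, centroids):
    """Assign a spike feature vector to the neuron whose off-chip-trained
    centroid is nearest in cosine distance."""
    return min(centroids, key=lambda k: cosine_distance(spike, centroids[k]))
```

Because cosine distance ignores magnitude, it is attractive in hardware: normalization can be folded into the stored centroids.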
LDPC Codes with Minimum Distance Proportional to Block Size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. 
Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size of LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides the minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) needed to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
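For intuition, the minimum Hamming distance of a small linear code can be found by brute-force enumeration of codewords; real LDPC analysis relies on ensemble-average weight enumerators, as the article notes, because enumeration is infeasible at practical block sizes. The Hamming (7,4) generator matrix below is a standard textbook example, not one of the codes described here:

```python
from itertools import product

def min_distance(G):
    """Minimum Hamming distance of the binary linear code with generator
    matrix G (list of rows), by enumerating all 2^k - 1 nonzero codewords.
    Only feasible for small k."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue                      # skip the all-zero codeword
        # codeword = msg * G over GF(2)
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        best = min(best, sum(cw))         # Hamming weight of the codeword
    return best
```

For linear codes the minimum distance equals the minimum nonzero codeword weight, which is what the loop computes.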
A unified classifier for robust face recognition based on combining multiple subspace algorithms
NASA Astrophysics Data System (ADS)
Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad
2012-10-01
Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution, because an algorithm may work very well on one set of images, say with illumination changes, but may not work properly on another set of image variations, such as expression variations. This study is motivated by the fact that no single classifier can claim generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, which also raises the question of the suitability of any given classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or another. These classifiers are then combined into an ensemble classifier by two different strategies, weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging, and facial expressions.
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Giampaolo, Salvatore M.; Illuminati, Fabrizio
2007-10-01
We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed adapting to continuous variables a formalism based on single subsystem unitary transformations that has been recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1×M bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.
An important issue surrounding assessment of riverine fish assemblages is the minimum amount of sampling distance needed to adequately determine biotic condition. Determining adequate sampling distance is important because sampling distance affects estimates of fish assemblage c...
Using self-organizing maps to classify humpback whale song units and quantify their similarity.
Allen, Jenny A; Murray, Anita; Noad, Michael J; Dunlop, Rebecca A; Garland, Ellen C
2017-10-01
Classification of vocal signals can be undertaken using a wide variety of qualitative and quantitative techniques. Using east Australian humpback whale song from 2002 to 2014, a subset of vocal signals was acoustically measured and then classified using a Self-Organizing Map (SOM). The SOM created (1) an acoustic dictionary of units representing the song's repertoire, and (2) Cartesian distance measurements among all unit types (SOM nodes). Using the SOM dictionary as a guide, additional song recordings from east Australia were rapidly (manually) transcribed. To assess the similarity of song sequences, the Cartesian distance output from the SOM was applied in Levenshtein distance similarity analyses as a weighting factor, to better incorporate unit similarity in the calculation (previously a qualitative process). SOMs provide a more robust and repeatable means of categorizing acoustic signals, along with a clear quantitative measurement of sound-type similarity based on acoustic features. This method can be utilized for a wide variety of acoustic databases, especially those containing very large datasets, and can be applied across the vocalization research community to help address concerns surrounding inconsistency in manual classification.
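The weighting scheme described above can be sketched as a Levenshtein distance whose substitution cost is looked up from unit-type similarities (e.g., normalized SOM Cartesian distances). The cost table below is invented for illustration; the actual weights would come from the SOM node distances:

```python
def weighted_levenshtein(a, b, sub_cost):
    """Levenshtein distance between unit sequences a and b, where the
    substitution cost between two unit types is given by sub_cost
    (e.g., a normalized SOM Cartesian distance in [0, 1])."""
    n, m = len(a), len(b)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = float(i)                 # deletions
    for j in range(1, m + 1):
        D[0][j] = float(j)                 # insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else sub_cost(a[i - 1], b[j - 1])
            D[i][j] = min(D[i - 1][j] + 1,         # delete a[i-1]
                          D[i][j - 1] + 1,         # insert b[j-1]
                          D[i - 1][j - 1] + sub)   # substitute
    return D[n][m]
```

Substituting two acoustically similar unit types now costs less than swapping dissimilar ones, so sequence similarity reflects unit similarity rather than treating all mismatches equally.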
Jaccard distance based weighted sparse representation for coarse-to-fine plant species recognition.
Zhang, Shanwen; Wu, Xiaowei; You, Zhuhong
2017-01-01
Leaf-based plant species recognition plays an important role in ecological protection; however, its application to large and modern leaf databases has been a long-standing obstacle due to computational cost and feasibility. Recognizing such limitations, we propose a Jaccard distance based sparse representation (JDSR) method which adopts a two-stage, coarse-to-fine strategy for plant species recognition. In the first stage, we use the Jaccard distance between the test sample and each training sample to coarsely determine the candidate classes of the test sample. The second stage includes a Jaccard distance based weighted sparse representation classification (WSRC), which aims to approximately represent the test sample in the training space and classify it by the approximation residuals. Since the training model of our JDSR method involves far fewer but more informative representatives, the method is expected to overcome the limitation of high computational and memory costs in traditional sparse representation based classification. Comparative experimental results on a public leaf image database demonstrate that the proposed method outperforms other existing feature extraction and SRC based plant recognition methods in terms of both accuracy and computational speed.
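The coarse stage described above can be sketched as ranking classes by the Jaccard distance from the test sample to each class's nearest training sample and keeping the top candidates. This is an illustrative reading with hypothetical helper names, not the authors' exact procedure:

```python
def jaccard_distance(a, b):
    """Jaccard distance between two binary feature vectors:
    1 - |intersection| / |union|."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return 1.0 - inter / union if union else 0.0

def candidate_classes(test, train, labels, n_classes):
    """Coarse stage: keep the n_classes classes whose nearest training
    sample (by Jaccard distance) is closest to the test sample."""
    best = {}
    for x, y in zip(train, labels):
        d = jaccard_distance(test, x)
        if y not in best or d < best[y]:
            best[y] = d
    return sorted(best, key=best.get)[:n_classes]
```

The fine stage (weighted sparse representation over the surviving candidates) then works with a much smaller dictionary, which is where the computational savings come from.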
National forest trail users: planning for recreation opportunities
John J. Daigle; Alan E. Watson; Glenn E. Haas
1994-01-01
National forest trail users in four geographical regions of the United States are described based on participation in clusters of recreation activities. Visitors are classified into day hiking, undeveloped recreation, and two developed camping and hiking activity clusters for the Appalachian, Pacific, Rocky Mountain, and Southwestern regions. Distance and time traveled...
Hybrid Stochastic Models for Remaining Lifetime Prognosis
2004-08-01
literature for techniques and comparisons. Osogami and Harchol-Balter [70], Perros [73], Johnson [36], and Altiok [5] provide excellent summaries of...and type of PH-distribution approximation for c2 > 0.5 is not as obvious. In order to use the minimum distance estimation, Perros [73] indicated that...moment-matching techniques. Perros [73] indicated that the maximum likelihood and minimum distance techniques require nonlinear optimization. Johnson
Mathematical values in the processing of Chinese numeral classifiers and measure words.
Her, One-Soon; Chen, Ying-Chun; Yen, Nai-Shing
2017-01-01
A numeral classifier is required between a numeral and a noun in Chinese; it comes in two varieties, the sortal classifier (C) and the measural classifier (M), also known as 'classifier' and 'measure word', respectively. Cs categorize objects based on semantic attributes, while Cs and Ms both denote quantity in terms of mathematical values. The aim of this study was to conduct a psycholinguistic experiment to examine whether participants process C/Ms based on their mathematical values, using a semantic distance comparison task in which participants judged which of two C/M phrases was semantically closer to the target C/M. Results showed that participants performed more accurately and faster for C/Ms with fixed values than for those with variable values. These results demonstrate that mathematical values play an important role in the processing of C/Ms. This study may thus shed light on the influence of the linguistic system of C/Ms on magnitude cognition.
Kaplan, Mehmet; Ozcan, Onder; Bilgic, Ethem; Kaplan, Elif Tugce; Kaplan, Tugba; Kaplan, Fatma Cigdem
2017-11-01
The Limberg flap (LF) procedure is widely performed for the treatment of sacrococcygeal pilonidal sinus (SPS); however, recurrences continue to be observed. The aim of this study was to assess the relationship between LF designs and the risk of SPS recurrence. Sixty-one cases with recurrent disease (study group) and 194 controls with a minimum of 5 recurrence-free years following surgery (control group) were included in the study. LF reconstructions performed in each group were classified as off-midline closure (OMC) and non-OMC types, and the 2 groups were then analyzed. After adjustment for all variables, non-OMC types showed the most prominent correlation with recurrence, followed by interrupted suturing, family history of SPS, smoking, prolonged healing time, and younger age. The best cut-off value for the critical distance from the midline was found to be 11 mm (with 72% sensitivity and 95% specificity for recurrence). We recommend OMC modifications, with the flap tailored to create a safe margin of at least 2 cm between the flap borders and the midline. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
de Rooij, Mark; Heiser, Willem J.
2005-01-01
Although RC(M)-association models have become a generally useful tool for the analysis of cross-classified data, the graphical representation resulting from such an analysis can at times be misleading. The relationships present between row category points and column category points cannot be interpreted by inter point distances but only through…
VSOP: the variable star one-shot project. I. Project presentation and first data release
NASA Astrophysics Data System (ADS)
Dall, T. H.; Foellmi, C.; Pritchard, J.; Lo Curto, G.; Allende Prieto, C.; Bruntt, H.; Amado, P. J.; Arentoft, T.; Baes, M.; Depagne, E.; Fernandez, M.; Ivanov, V.; Koesterke, L.; Monaco, L.; O'Brien, K.; Sarro, L. M.; Saviane, I.; Scharwächter, J.; Schmidtobreick, L.; Schütz, O.; Seifahrt, A.; Selman, F.; Stefanon, M.; Sterzik, M.
2007-08-01
Context: About 500 new variable stars enter the General Catalogue of Variable Stars (GCVS) every year. Most of them, however, lack spectroscopic observations, which remain critical for a correct assignment of the variability type and for the understanding of the object. Aims: The Variable Star One-shot Project (VSOP) is aimed at (1) providing the variability type and spectral type of all unstudied variable stars, (2) processing, publishing, and making the data available as automatically as possible, and (3) generating serendipitous discoveries. This first paper describes the project itself, the acquisition of the data, the dataflow, the spectroscopic analysis, and the on-line availability of the fully calibrated and reduced data. We also present the results on the 221 stars observed during the first semester of the project. Methods: We used the high-resolution echelle spectrographs HARPS and FEROS at the ESO La Silla Observatory (Chile) to survey known variable stars. Once the data are reduced by the dedicated pipelines, the radial velocities are determined from cross-correlation with synthetic template spectra, and the spectral types are determined by an automatic minimum distance matching to synthetic spectra, with traditional manual spectral typing cross-checks. The variability types are determined by manually evaluating the available light curves and the spectroscopy. In the future, a new automatic classifier, currently being developed by members of the VSOP team, based on these spectroscopic data and on the photometric classifier developed for the COROT and Gaia space missions, will be used. Results: We confirm or revise spectral types of 221 variable stars from the GCVS. We identify 26 previously unknown multiple systems, among them several visual binaries with spectroscopic binary individual components.
We present new individual results for the multiple systems V349 Vel and BC Gru, for the composite spectrum star V4385 Sgr, for the T Tauri star V1045 Sco, and for DM Boo which we re-classify as a BY Draconis variable. The complete data release can be accessed via the VSOP web site. Based on data obtained at the La Silla Observatory, European Southern Observatory, under program ID 077.D-0085.
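The automatic spectral typing step described above, minimum distance matching of an observed spectrum against synthetic templates, can be sketched as follows. The template grid and the use of a summed squared flux difference are assumptions for illustration; the pipeline's actual distance measure and template library are not specified in the abstract:

```python
def match_spectral_type(observed, templates):
    """Assign the spectral type whose synthetic template spectrum is
    closest to the observed spectrum, here in summed squared flux
    difference over a common wavelength grid."""
    def dist(name):
        return sum((o - s) ** 2 for o, s in zip(observed, templates[name]))
    return min(templates, key=dist)
```

With fluxes sampled on a shared wavelength grid, the classifier simply returns the argmin-distance template label.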
NASA Astrophysics Data System (ADS)
Bal, A.; Alam, M. S.; Aslan, M. S.
2006-05-01
Often sensor ego-motion or fast target movement causes the target to temporarily go out of the field of view, leading to the reappearing-target detection problem in target tracking applications. Since the target goes out of the current frame and reenters at a later frame, the reentering location and the variations in rotation, scale, and other 3D orientations of the target are not known, which complicates detection. A detection algorithm has been developed using the Fukunaga-Koontz Transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, the DCCF, called the clutter rejection module; once the target coordinates are detected, the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.
Predicting Flavonoid UGT Regioselectivity
Jackson, Rhydon; Knisley, Debra; McIntosh, Cecilia; Pfeiffer, Phillip
2011-01-01
Machine learning was applied to a challenging and biologically significant protein classification problem: the prediction of flavonoid UGT acceptor regioselectivity from primary sequence. Novel indices characterizing graphical models of residues were proposed and found to be widely distributed among existing amino acid indices and to cluster residues appropriately. UGT subsequences biochemically linked to regioselectivity were modeled as sets of index sequences. Several learning techniques incorporating these UGT models were compared with classifications based on standard sequence alignment scores. These techniques included an application of time series distance functions to protein classification. Time series distances defined on the index sequences were used in nearest neighbor and support vector machine classifiers. Additionally, Bayesian neural network classifiers were applied to the index sequences. The experiments identified improvements over the nearest neighbor and support vector machine classifications relying on standard alignment similarity scores, as well as strong correlations between specific subsequences and regioselectivities. PMID:21747849
Egg embryo development detection with hyperspectral imaging
NASA Astrophysics Data System (ADS)
Lawrence, Kurt C.; Smith, Douglas P.; Windham, William R.; Heitschmidt, Gerald W.; Park, Bosoon
2006-10-01
In the U. S. egg industry, anywhere from 130 million to over one billion infertile eggs are incubated each year. Some of these infertile eggs explode in the hatching cabinet and can potentially spread molds or bacteria to all the eggs in the cabinet. A method to detect the embryo development of incubated eggs was developed. Twelve brown-shell hatching eggs from two replicates (n=24) were incubated and imaged to identify embryo development. A hyperspectral imaging system was used to collect transmission images from 420 to 840 nm of brown-shell eggs positioned with the air cell vertical and normal to the camera lens. Raw transmission images from about 400 to 900 nm were collected for every egg on days 0, 1, 2, and 3 of incubation. A total of 96 images were collected and eggs were broken out on day 6 to determine fertility. After breakout, all eggs were found to be fertile. Therefore, this paper presents results for egg embryo development, not fertility. The original hyperspectral data and spectral means for each egg were both used to create embryo development models. With the hyperspectral data range reduced to about 500 to 700 nm, a minimum noise fraction transformation was used, along with a Mahalanobis Distance classification model, to predict development. Days 2 and 3 were all correctly classified (100%), while day 0 and day 1 were classified at 95.8% and 91.7%, respectively. Alternatively, the mean spectra from each egg were used to develop a partial least squares regression (PLSR) model. First, a PLSR model was developed with all eggs and all days. The data were multiplicative scatter corrected, spectrally smoothed, and the wavelength range was reduced to 539 - 770 nm. With a one-out cross validation, all eggs for all days were correctly classified (100%). Second, a PLSR model was developed with data from day 0 and day 3, and the model was validated with data from day 1 and 2. 
For day 1, 22 of 24 eggs were correctly classified (91.7%) and for day 2, all eggs were correctly classified (100%). Although the results are based on relatively small sample sizes, they are encouraging. However, larger sample sizes, from multiple flocks, will be needed to fully validate and verify these models. Additionally, future experiments must also include non-fertile eggs so the fertile / non-fertile effect can be determined.
Mao, Xue Gang; Du, Zi Han; Liu, Jia Qian; Chen, Shu Xin; Hou, Ji Yu
2018-01-01
Traditional field investigation and artificial interpretation cannot satisfy the need for forest gap extraction at the regional scale. High spatial resolution remote sensing imagery provides the possibility for regional forest gap extraction. In this study, we used an object-oriented classification method to segment and classify forest gaps based on QuickBird high resolution optical remote sensing imagery of the Jiangle National Forestry Farm of Fujian Province. In the process of object-oriented classification, 10 scales (10-100, with a step length of 10) were adopted to segment the QuickBird remote sensing image, and the intersection area of the reference object (RA_or) and the intersection area of the segmented object (RA_os) were adopted to evaluate the segmentation result at each scale. For the segmentation result at each scale, 16 spectral characteristics and a support vector machine (SVM) classifier were further used to classify forest gaps, non-forest gaps, and others. The results showed that the optimal segmentation scale was 40, where RA_or was equal to RA_os. The difference between the maximum and minimum accuracy at different segmentation scales was 22%. At the optimal scale, the overall classification accuracy was 88% (Kappa=0.82) based on the SVM classifier. Combining high resolution remote sensing image data with an object-oriented classification method could replace traditional field investigation and artificial interpretation for identifying and classifying forest gaps at the regional scale.
NASA Technical Reports Server (NTRS)
Barthlome, D. E.
1975-01-01
Test results of a unique automatic brake control system are outlined and a comparison is made of its mode of operation to that of an existing skid control system. The purpose of the test system is to provide automatic control of braking action such that hydraulic brake pressure is maintained at a near constant, optimum value during minimum distance stops.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steer, Ian; Madore, Barry F.; Mazzarella, Joseph M.
Estimates of galaxy distances based on indicators that are independent of cosmological redshift are fundamental to astrophysics. Researchers use them to establish the extragalactic distance scale, to underpin estimates of the Hubble constant, and to study peculiar velocities induced by gravitational attractions that perturb the motions of galaxies with respect to the “Hubble flow” of universal expansion. In 2006 the NASA/IPAC Extragalactic Database (NED) began making available a comprehensive compilation of redshift-independent extragalactic distance estimates. A decade later, this compendium of distances (NED-D) now contains more than 100,000 individual estimates based on primary and secondary indicators, available for more than 28,000 galaxies, and compiled from over 2,000 references in the refereed astronomical literature. This paper describes the methodology, content, and use of NED-D, and addresses challenges to be overcome in compiling such distances. Currently, 75 different distance indicators are in use. We include a figure that facilitates comparison of the indicators with significant numbers of estimates in terms of the minimum, 25th percentile, median, 75th percentile, and maximum distances spanned. Brief descriptions of the indicators, including examples of their use in the database, are given in an appendix.
An Improvement To The k-Nearest Neighbor Classifier For ECG Database
NASA Astrophysics Data System (ADS)
Jaafar, Haryati; Hidayah Ramli, Nur; Nasir, Aimi Salihah Abdul
2018-03-01
The k-nearest neighbor (kNN) is a non-parametric classifier that has been widely used for pattern classification. In practice, however, kNN often fails due to the lack of information on how the samples are distributed. Moreover, kNN is no longer optimal when the training samples are limited. Another problem observed in kNN concerns the weighting issues in assigning the class label before classification. To address these limitations, a new classifier called Mahalanobis fuzzy k-nearest centroid neighbor (MFkNCN) is proposed in this study. Here, a Mahalanobis distance is applied to avoid imbalance in the sample distribution. Then, a surrounding rule is employed to obtain the nearest centroid neighbors based on the distribution of the training samples and their distance to the query point. Finally, a fuzzy membership function is employed to assign the query point to the class label most frequently represented by its nearest centroid neighbors. Experimental studies are conducted on electrocardiogram (ECG) signals. The classification performance is evaluated in two experimental steps, i.e., different values of k and different sizes of feature dimensions. Subsequently, a comparative study of the kNN, kNCN, FkNN and MFkNCN classifiers is conducted. The results show that the performance of MFkNCN consistently exceeds that of kNN, kNCN and FkNN, with a best classification rate of 96.5%.
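A minimal sketch of two of the ingredients described above, the Mahalanobis metric and a greedy nearest-centroid-neighbor selection rule; the sample points and the exact selection procedure are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def mahalanobis(x, y, vi):
    """Mahalanobis distance between x and y given inverse covariance vi."""
    d = x - y
    return float(np.sqrt(d @ vi @ d))

def nearest_centroid_neighbors(query, X, k, vi):
    """Greedy k-nearest *centroid* neighbors: at each step, pick the sample
    whose inclusion keeps the running centroid closest to the query."""
    chosen, remaining = [], list(range(len(X)))
    for _ in range(k):
        best = min(remaining,
                   key=lambda i: mahalanobis(query,
                                             X[chosen + [i]].mean(axis=0), vi))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Illustrative 2-D training samples; vi is the (regularized) inverse covariance.
X = np.array([[0., 0.], [1., 0.], [0., 1.], [5., 5.]])
vi = np.linalg.inv(np.cov(X.T) + 1e-6 * np.eye(2))
print(nearest_centroid_neighbors(np.array([0.2, 0.2]), X, 2, vi))
```

The centroid-based selection favors neighbors that surround the query rather than merely lie close to it, which is the intuition behind the kNCN family.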
Local Subspace Classifier with Transform-Invariance for Image Classification
NASA Astrophysics Data System (ADS)
Hotta, Seiji
A family of linear subspace classifiers called the local subspace classifier (LSC) outperforms the k-nearest neighbor rule (kNN) and conventional subspace classifiers in handwritten digit classification. However, LSC suffers from very high sensitivity to image transformations because it uses projection and Euclidean distances for classification. In this paper, I present a combination of a local subspace classifier (LSC) and a tangent distance (TD) for improving the accuracy of handwritten digit recognition. In this classification rule, transform-invariance can be handled easily because tangent vectors can be used to approximate transformations. However, tangent vectors cannot be used for other types of images, such as color images. Hence, a kernel LSC (KLSC) is proposed for incorporating transform-invariance into LSC via kernel mapping. The performance of the proposed methods is verified with experiments on handwritten digit and color image classification.
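The projection step of an LSC-style rule can be sketched as follows: for each class, the query is projected onto the affine hull of its class-local neighbors and classified by the smallest residual. This is a generic sketch of the local-subspace idea, without the tangent-distance or kernel extensions:

```python
import numpy as np

def lsc_distance(q, neighbors):
    """Distance from query q to the affine hull of its class-local neighbors."""
    base = neighbors[0]
    A = (neighbors[1:] - base).T  # directions spanning the local subspace
    coef, *_ = np.linalg.lstsq(A, q - base, rcond=None)
    return np.linalg.norm(q - (base + A @ coef))  # residual after projection

def lsc_classify(q, X, y, k=2):
    """Assign q to the class whose local subspace lies closest to it."""
    dists = {}
    for c in np.unique(y):
        Xc = X[y == c]
        idx = np.argsort(np.linalg.norm(Xc - q, axis=1))[:k]
        dists[c] = lsc_distance(q, Xc[idx])
    return min(dists, key=dists.get)

# Illustrative data: class 0 along the line y=0, class 1 along y=2.
X = np.array([[0., 0.], [1., 0.], [2., 0.], [0., 2.], [1., 2.], [2., 2.]])
y = np.array([0, 0, 0, 1, 1, 1])
print(lsc_classify(np.array([1.0, 0.5]), X, y, k=2))  # → 0
```

The query lies 0.5 from class 0's local line and 1.5 from class 1's, so it is assigned to class 0.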
A label distance maximum-based classifier for multi-label learning.
Liu, Xiaoli; Bao, Hang; Zhao, Dazhe; Cao, Peng
2015-01-01
Multi-label classification is useful in many bioinformatics tasks such as gene function prediction and protein site localization. This paper presents an improved neural network algorithm, Max Label Distance Back Propagation Algorithm for Multi-Label Classification. The method was formulated by modifying the total error function of the standard BP by adding a penalty term, which was realized by maximizing the distance between the positive and negative labels. Extensive experiments were conducted to compare this method against state-of-the-art multi-label methods on three popular bioinformatic benchmark datasets. The results illustrated that this proposed method is more effective for bioinformatic multi-label classification compared to commonly used techniques.
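One plausible form of such a penalty term, encouraging a margin between the scores of positive and negative labels via a hinge, is sketched below; this is an assumption for illustration, and the paper's exact formulation may differ:

```python
import numpy as np

def label_distance_penalty(scores, labels, margin=1.0):
    """Penalty that is zero once every positive-label score exceeds every
    negative-label score by at least `margin` (a hinge on the label gap)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    gap = pos.min() - neg.max()  # smallest positive minus largest negative
    return max(0.0, margin - gap)

scores = np.array([0.9, 0.75, 0.25, 0.1])  # network outputs for 4 labels
labels = np.array([1, 1, 0, 0])
print(label_distance_penalty(scores, labels))  # gap = 0.5 → penalty 0.5
```

Added to the standard BP error, a term like this pushes positive and negative label outputs apart, which is the stated goal of maximizing the label distance.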
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1992-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem is incorporated into the framework of an on-line motion-planning algorithm to achieve collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, where the information about the objects is assumed to be certain, is examined. L(1) or L(infinity) norms are used to represent distance, and the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: first, filtering of the distance between the robot and the moving object at the present time; second, prediction of the minimum distance in the future in order to predict the collision time.
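With the L(infinity) norm, the minimum-distance problem described above becomes a linear program: minimize t subject to |x - y| <= t componentwise, with x and y constrained to the two polytopes. A sketch with illustrative polytopes (two unit squares whose L-infinity distance is 2), not the paper's robot geometry:

```python
import numpy as np
from scipy.optimize import linprog

# Polytopes {x : A1 x <= b1} and {y : A2 y <= b2}: the unit square and
# the square [3, 4] x [0, 1].
A1 = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]]); b1 = np.array([1, 0, 1, 0])
A2 = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]]); b2 = np.array([4, -3, 1, 0])

n = 2
I, Z = np.eye(n), np.zeros((4, n))
A_ub = np.block([
    [I, -I, -np.ones((n, 1))],   # x - y <= t (componentwise)
    [-I, I, -np.ones((n, 1))],   # y - x <= t
    [A1, Z, np.zeros((4, 1))],   # x inside polytope 1
    [Z, A2, np.zeros((4, 1))],   # y inside polytope 2
])
b_ub = np.concatenate([np.zeros(2 * n), b1, b2])
c = np.zeros(2 * n + 1); c[-1] = 1.0  # minimize t

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (2 * n) + [(0, None)])
print(round(res.fun, 6))  # → 2.0
```

The optimal t is the minimum L-infinity distance between the two sets, here 2 (the horizontal gap between the squares).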
The Minimum Wage and the Employment of Teenagers. Recent Research.
ERIC Educational Resources Information Center
Fallick, Bruce; Currie, Janet
A study used individual-level data from the National Longitudinal Study of Youth to examine the effects of changes in the federal minimum wage on teenage employment. Individuals in the sample were classified as either likely or unlikely to be affected by these increases in the federal minimum wage on the basis of their wage rates and industry of…
On the Nature of Distance Education.
ERIC Educational Resources Information Center
Baath, John A.
1981-01-01
Discusses several points made by Keegan in an article of the same title which appeared in "Distance Education" in March 1980 and also classifies the different types of distance education and their respective philosophies and teaching requirements. (EAO)
Kandhasamy, Chandrasekaran; Ghosh, Kaushik
2017-02-01
Indian states are currently classified into HIV-risk categories based on the observed prevalence counts, percentage of infected attendees in antenatal clinics, and percentage of infected high-risk individuals. This method, however, does not account for the spatial dependence among the states, nor does it provide any measure of statistical uncertainty. We provide an alternative model-based approach to address these issues. Our method uses Poisson log-normal models having various conditional autoregressive structures with neighborhood-based and distance-based weight matrices and incorporates all available covariate information. We use R and WinBUGS software to fit these models to the 2011 HIV data. Based on the Deviance Information Criterion, the convolution model using a distance-based weight matrix and covariate information on female sex workers, literacy rate and intravenous drug users is found to have the best fit. The relative risk of HIV for the various states is estimated using the best model, and the states are then classified into the risk categories based on these estimated values. An HIV risk map of India is constructed based on these results. The choice of the final model suggests that an HIV control strategy which focuses on the female sex workers, intravenous drug users and literacy rate would be most effective. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimization of self-study room open problem based on green and low-carbon campus construction
NASA Astrophysics Data System (ADS)
Liu, Baoyou
2017-04-01
Optimizing the opening arrangement of self-study rooms in colleges and universities helps accelerate fine-grained campus management and promotes green, low-carbon campus construction. First, based on actual survey data, the self-study and living areas were divided into blocks, and the electricity consumption of each self-study room and the distances between living and studying areas were normalized. Second, the minimum of the total satisfaction index and the minimum of the total electricity consumption were selected as the optimization targets. Linear programming models were established and solved with LINGO software. The results showed that the minimum total satisfaction index was 4055.533 and the minimum total electricity consumption was 137216 W. Finally, advice is offered on how to achieve efficient administration of the study rooms.
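A minimal sketch of the second optimization target, minimizing total electricity consumption subject to a seat-demand constraint, as a linear programming relaxation. The room data and constraint form are assumptions for illustration (the paper solved its models with LINGO; scipy is used here):

```python
from scipy.optimize import linprog

# Hypothetical data: electricity use (W) and seat capacity per self-study room.
power = [3200, 2800, 4100, 1500]
seats = [120, 100, 150, 60]
demand = 250  # seats that must be available

# LP relaxation: open fraction x_i of each room (0..1), minimize total power
# subject to providing at least `demand` seats.
res = linprog(c=power,
              A_ub=[[-s for s in seats]], b_ub=[-demand],
              bounds=[(0, 1)] * len(power))
print(round(res.fun, 1))  # minimum total electricity consumption (W)
```

The LP opens the rooms with the best watts-per-seat ratio fully and a marginal room fractionally; an integer-programming variant would force each room fully open or closed.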
Tracking of white-tailed deer migration by Global Positioning System
Nelson, M.E.; Mech, L.D.; Frame, P.F.
2004-01-01
Based on global positioning system (GPS) radiocollars in northeastern Minnesota, deer migrated 23-45 km in spring during 31-356 h, deviating a maximum 1.6-4.0 km perpendicular from a straight line of travel between their seasonal ranges. They migrated a minimum of 2.1-18.6 km/day over 11-56 h during 2-14 periods of travel. Minimum travel during 1-h intervals averaged 1.5 km/h. Deer paused 1-12 times, averaging 24 h/pause. Deer migrated similar distances in autumn with comparable rates and patterns of travel.
Towards a Risk-Based Typology for Transnational Education
ERIC Educational Resources Information Center
Healey, Nigel Martin
2015-01-01
Transnational education (TNE) has been a growth area for UK universities over the last decade. The standard typology classifies TNE by the nature of the activity (i.e., distance learning, international branch campus, franchise, and validation). By analysing a large number of TNE partnerships around the world, this study reveals that the current…
The effect of terrain slope on firefighter safety zone effectiveness
Bret Butler; J. Forthofer; K. Shannon; D. Jimenez; D. Frankman
2010-01-01
The current safety zone guidelines used in the US were developed based on the assumption that the fire and safety zone were located on flat terrain. The minimum safe distance for a firefighter to be from a flame was calculated as that corresponding to a radiant incident energy flux level of 7.0 kW·m⁻². Current firefighter safety guidelines are based on the assumption...
Distance-Based and Low Energy Adaptive Clustering Protocol for Wireless Sensor Networks
Gani, Abdullah; Anisi, Mohammad Hossein; Ab Hamid, Siti Hafizah; Akhunzada, Adnan; Khan, Muhammad Khurram
2016-01-01
A wireless sensor network (WSN) comprises small sensor nodes with limited energy capabilities. The power constraints of WSNs necessitate efficient energy utilization to extend the overall network lifetime of these networks. We propose a distance-based and low-energy adaptive clustering (DISCPLN) protocol to streamline the green issue of efficient energy utilization in WSNs. We also enhance our proposed protocol into the multi-hop-DISCPLN protocol to increase the lifetime of the network in terms of high throughput with minimum delay time and packet loss. We also propose the mobile-DISCPLN protocol to maintain the stability of the network. The modelling and comparison of these protocols with their corresponding benchmarks exhibit promising results. PMID:27658194
Freitas, Daniel Roberto Coradi; Duarte, Elisabeth Carmen
2014-01-01
Objective To evaluate blood banks in the Brazilian Amazon region with regard to structure and procedures directed toward the prevention of transfusion-transmitted malaria (TTM). Methods This was a normative evaluation based on the Brazilian National Health Surveillance Agency (ANVISA) Resolution RDC No. 153/2004. Ten blood banks were included in the study and classified as ‘adequate’ (≥80 points), ‘partially adequate’ (from 50 to 80 points), or ‘inadequate’ (<50 points). The following components were evaluated: ‘donor education’ (5 points), ‘clinical screening’ (40 points), ‘laboratory screening’ (40 points) and ‘hemovigilance’ (15 points). Results The overall median score was 49.8 (minimum = 16; maximum = 78). Five blood banks were classified as ‘inadequate’ and five as ‘partially adequate’. The median clinical screening score was 26 (minimum = 16; maximum = 32). The median laboratory screening score was 20 (minimum = 0; maximum = 32). Eight blood banks performed laboratory tests for malaria; six tested all donations. Seven used thick smears, but only one performed this procedure in accordance with Ministry of Health requirements. One service had a Program of External Quality Evaluation for malaria testing. With regard to hemovigilance, two institutions reported having procedures to detect cases of transfusion-transmitted malaria. Conclusion Malaria is neglected as a blood–borne disease in the blood banks of the Brazilian Amazon region. None of the institutions were classified as ‘adequate’ in the overall classification or with regard to clinical screening and laboratory screening. Blood bank professionals, the Ministry of Health and Health Surveillance service managers need to pay more attention to this matter so that the safety procedures required by law are complied with. PMID:25453648
Stand up time in tunnel base on rock mass rating Bieniawski 1989
NASA Astrophysics Data System (ADS)
Nata, Refky Adi; M. S., Murad
2017-11-01
RMR (Rock Mass Rating), also known as the geomechanics classification, has been modified and adopted as an international standard for rock mass weighting. The Rock Mass Rating classification was developed by Bieniawski (1973, 1976, 1989). The goals of this research are to determine the rock class based on the Bieniawski 1989 rock mass rating classification, to estimate the stand-up time of the rock mass, and to determine the unsupported span of the tunnel opening, particularly in underground mines. The research measured: strength of intact rock material, RQD (Rock Quality Designation), spacing of discontinuities, condition of discontinuities, groundwater, and the adjustment for discontinuity orientations. Laboratory testing gave a UCS of 30.583 MPa for coal, which carries a weight of 4 in Bieniawski's classification; siltstone gave a UCS of 35.749 MPa, also weighted 4. Field measurements gave an average RQD of 97.38% for coal (weight 20) and 90.10% for siltstone (weight 20). The average discontinuity spacing measured in the field was 22.6 cm for coal (weight 10) and 148 cm for siltstone (weight 15). Persistence in the field varied: 57.28 cm for coal (weight 6) and 47 cm for siltstone (weight 6). Based on the Bieniawski 1989 Rock Mass Rating table, the aperture in coal is 0.41 mm, which lies in the 0.1-1 mm range (weight 4), while the aperture in siltstone is 21.43 mm, which lies in the > 5 mm range (weight 0). Roughness in both coal and siltstone is classified as rough (weight 5), and infilling in both is classified as none (weight 6).
Weathering in both coal and siltstone is classified as highly weathered (weight 1). The groundwater condition in coal is classified as dripping (weight 4) and in siltstone as completely dry (weight 15). The discontinuity orientation in coal is parallel to the tunnel axis, in the range 251°-290°, which is unfavorable (weight -10); in siltstone the discontinuities are also parallel to the tunnel axis, in the range 241°-300°. Based on the Bieniawski 1989 weighting parameters, the rock mass falls in class II with a value of 62, allowing a stand-up time of up to 6 months for an unsupported tunnel opening of 8 meters.
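The final step maps the total RMR value to a rock class. A minimal sketch of the standard Bieniawski (1989) class boundaries:

```python
def rmr_class(total):
    """Bieniawski (1989) rock mass classes from the total RMR value."""
    if total > 80:
        return "I (very good rock)"
    if total > 60:
        return "II (good rock)"
    if total > 40:
        return "III (fair rock)"
    if total > 20:
        return "IV (poor rock)"
    return "V (very poor rock)"

print(rmr_class(62))  # → II (good rock)
```

An RMR of 62 falls in the 61-80 band, i.e., class II, matching the classification reported above.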
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 2 2014-10-01 2014-10-01 false Separation distances for undeveloped film from... Classification of Material § 175.706 Separation distances for undeveloped film from packages containing Class 7... film. Transport index Minimum separation distance to nearest undeveloped film for various times in...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 2 2011-10-01 2011-10-01 false Separation distances for undeveloped film from... Classification of Material § 175.706 Separation distances for undeveloped film from packages containing Class 7... film. Transport index Minimum separation distance to nearest undeveloped film for various times in...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 2 2013-10-01 2013-10-01 false Separation distances for undeveloped film from... Classification of Material § 175.706 Separation distances for undeveloped film from packages containing Class 7... film. Transport index Minimum separation distance to nearest undeveloped film for various times in...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 2 2012-10-01 2012-10-01 false Separation distances for undeveloped film from... Classification of Material § 175.706 Separation distances for undeveloped film from packages containing Class 7... film. Transport index Minimum separation distance to nearest undeveloped film for various times in...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Separation distances for undeveloped film from... Classification of Material § 175.706 Separation distances for undeveloped film from packages containing Class 7... film. Transport index Minimum separation distance to nearest undeveloped film for various times in...
A Novel Locally Linear KNN Method With Applications to Visual Recognition.
Liu, Qingfeng; Liu, Chengjun
2017-09-01
A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional new theoretical analysis is presented, such as the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods such as a shifted power transformation for improving reliability, a coefficients' truncating method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction are proposed to further improve visual recognition performance. Extensive experiments are implemented to evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are applied for assessing the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.
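The locally linear idea above can be sketched as follows: reconstruct the query from its K nearest neighbors under a nonnegativity constraint, then vote by reconstruction weight. This is a simplified stand-in for the paper's optimized representation (it omits the sparsity and group-regularization terms):

```python
import numpy as np
from scipy.optimize import nnls

def llk_classify(q, X, y, K=5):
    """Sketch of a locally linear KNN rule: reconstruct the query from its
    K nearest neighbors with nonnegative weights, then assign the class
    whose neighbors carry the largest total weight."""
    idx = np.argsort(np.linalg.norm(X - q, axis=1))[:K]
    w, _ = nnls(X[idx].T, q)  # nonnegative reconstruction weights
    classes = np.unique(y[idx])
    return max(classes, key=lambda c: w[y[idx] == c].sum())

# Illustrative data: class 0 near the origin, class 1 near (5, 5).
X = np.array([[0., 0.], [1., 0.], [0., 1.], [5., 5.], [6., 5.], [5., 6.]])
y = np.array([0, 0, 0, 1, 1, 1])
print(llk_classify(np.array([0.5, 0.4]), X, y, K=3))  # → 0
```

The nonnegative constraint mentioned in the abstract keeps the reconstruction interpretable as a blend of neighbors, which the class vote then exploits.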
Pairwise Trajectory Management (PTM): Concept Overview
NASA Technical Reports Server (NTRS)
Jones, Kenneth M.; Graff, Thomas J.; Chartrand, Ryan C.; Carreno, Victor; Kibler, Jennifer L.
2017-01-01
Pairwise Trajectory Management (PTM) is an Interval Management (IM) concept that utilizes airborne and ground-based capabilities to enable the implementation of airborne pairwise spacing capabilities in oceanic regions. The goal of PTM is to use airborne surveillance and tools to manage an "at or greater than" inter-aircraft spacing. Due to the precision of Automatic Dependent Surveillance-Broadcast (ADS-B) information and the use of airborne spacing guidance, the PTM minimum spacing distance will be less than distances a controller can support with current automation systems that support oceanic operations. Ground tools assist the controller in evaluating the traffic picture and determining appropriate PTM clearances to be issued. Avionics systems provide guidance information that allows the flight crew to conform to the PTM clearance issued by the controller. The combination of a reduced minimum distance and airborne spacing management will increase the capacity and efficiency of aircraft operations at a given altitude or volume of airspace. This paper provides an overview of the proposed application, description of a few key scenarios, high level discussion of expected air and ground equipment and procedure changes, overview of a potential flight crew human-machine interface that would support PTM operations and some initial PTM benefits results.
Minimum separation distances for natural gas pipeline and boilers in the 300 area, Hanford Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daling, P.M.; Graham, T.M.
1997-08-01
The U.S. Department of Energy (DOE) is proposing actions to reduce energy expenditures and improve energy system reliability at the 300 Area of the Hanford Site. These actions include replacing the centralized heating system with heating units for individual buildings or groups of buildings, constructing a new natural gas distribution system to provide a fuel source for many of these units, and constructing a central control building to operate and maintain the system. The individual heating units will include steam boilers that are to be housed in individual annex buildings located at some distance away from nearby 300 Area nuclear facilities. This analysis develops the basis for siting the package boilers and natural gas distribution systems to be used to supply steam to 300 Area nuclear facilities. The effects of four potential fire and explosion scenarios involving the boiler and natural gas pipeline were quantified to determine minimum separation distances that would reduce the risks to nearby nuclear facilities. The resulting minimum separation distances are shown in Table ES.1.
Finger vein identification using fuzzy-based k-nearest centroid neighbor classifier
NASA Astrophysics Data System (ADS)
Rosdi, Bakhtiar Affendi; Jaafar, Haryati; Ramli, Dzati Athiar
2015-02-01
In this paper, a new approach for personal identification using finger vein image is presented. Finger vein is an emerging type of biometrics that attracts attention of researchers in biometrics area. As compared to other biometric traits such as face, fingerprint and iris, finger vein is more secured and hard to counterfeit since the features are inside the human body. So far, most of the researchers focus on how to extract robust features from the captured vein images. Not much research was conducted on the classification of the extracted features. In this paper, a new classifier called fuzzy-based k-nearest centroid neighbor (FkNCN) is applied to classify the finger vein image. The proposed FkNCN employs a surrounding rule to obtain the k-nearest centroid neighbors based on the spatial distributions of the training images and their distance to the test image. Then, the fuzzy membership function is utilized to assign the test image to the class which is frequently represented by the k-nearest centroid neighbors. Experimental evaluation using our own database which was collected from 492 fingers shows that the proposed FkNCN has better performance than the k-nearest neighbor, k-nearest-centroid neighbor and fuzzy-based-k-nearest neighbor classifiers. This shows that the proposed classifier is able to identify the finger vein image effectively.
NASA Astrophysics Data System (ADS)
Chen, Xiaodian; Deng, Licai; de Grijs, Richard; Wang, Shu; Feng, Yuting
2018-06-01
W Ursae Majoris (W UMa)-type contact binary systems (CBs) are useful statistical distance indicators because of their large numbers. Here, we establish (orbital) period–luminosity relations (PLRs) in 12 optical to mid-infrared bands (G, B, V, R, I, J, H, K_s, W1, W2, W3, and W4) based on 183 nearby W UMa-type CBs with accurate Tycho–Gaia parallaxes. The 1σ dispersion of the PLRs decreases from optical to near- and mid-infrared wavelengths. The minimum scatter, 0.16 mag, implies that W UMa-type CBs can be used to recover distances to 7% precision. Applying our newly determined PLRs to 19 open clusters containing W UMa-type CBs demonstrates that the PLR and open cluster CB distance scales are mutually consistent to within 1%. Adopting our PLRs as secondary distance indicators, we compiled a catalog of 55,603 CB candidates, of which 80% have distance estimates based on a combination of optical, near-infrared, and mid-infrared photometry. Using Fourier decomposition, 27,318 high-probability W UMa-type CBs were selected. The resulting 8% distance accuracy implies that our sample encompasses the largest number of objects with accurate distances within a local volume with a radius of 3 kpc available to date. The distribution of W UMa-type CBs in the Galaxy suggests that in different environments, the CB luminosity function may be different: larger numbers of brighter (longer-period) W UMa-type CBs are found in younger environments.
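As a sketch of how a PLR yields distances: the relation converts the orbital period into an absolute magnitude, and the distance modulus then gives the distance in parsecs. The coefficients below are placeholders, not the fitted values from this paper:

```python
import math

def plr_distance_pc(period_days, apparent_mag, a=-6.0, b=-0.2):
    """Distance from a hypothetical PLR of the form M = a*log10(P) + b."""
    M = a * math.log10(period_days) + b  # absolute magnitude from the PLR
    mu = apparent_mag - M                # distance modulus m - M
    return 10 ** (mu / 5.0 + 1.0)        # distance in parsecs

# A star whose modulus works out to exactly 10 mag lies at 1000 pc:
print(round(plr_distance_pc(1.0, 9.8), 1))  # M = -0.2, mu = 10.0 → 1000.0
```

A 0.16 mag scatter in the PLR propagates to roughly 0.16/5 dex in log-distance, i.e., about 7%, which is the precision quoted above.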
30 CFR 75.1107-9 - Dry chemical devices; capacity; minimum requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Dry chemical devices; capacity; minimum... Dry chemical devices; capacity; minimum requirements. (a) Dry chemical fire extinguishing systems used...; (3) Hose and pipe shall be as short as possible; the distance between the chemical container and...
30 CFR 75.1107-9 - Dry chemical devices; capacity; minimum requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Dry chemical devices; capacity; minimum... Dry chemical devices; capacity; minimum requirements. (a) Dry chemical fire extinguishing systems used...; (3) Hose and pipe shall be as short as possible; the distance between the chemical container and...
30 CFR 75.1107-9 - Dry chemical devices; capacity; minimum requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Dry chemical devices; capacity; minimum... Dry chemical devices; capacity; minimum requirements. (a) Dry chemical fire extinguishing systems used...; (3) Hose and pipe shall be as short as possible; the distance between the chemical container and...
30 CFR 75.1107-9 - Dry chemical devices; capacity; minimum requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Dry chemical devices; capacity; minimum... Dry chemical devices; capacity; minimum requirements. (a) Dry chemical fire extinguishing systems used...; (3) Hose and pipe shall be as short as possible; the distance between the chemical container and...
Madurese cultural communication approach
NASA Astrophysics Data System (ADS)
Dharmawan, A.; Aji, G. G.; Mutiah
2018-01-01
Madura is home to a tribe whose cultural entity is shaped by ecological aspects and by the Madurese people themselves. Madurese culture cannot be assessed apart from the relation between society and the ecological aspects that form its characteristics. Stereotypes of the Madurese include a stubborn attitude and carok, or killing, as a means of problem solving. On the other hand, the Madurese are known to be inclusive, religious, and hardworking. The basic assumption is that ecological conditions in Madura also shape the social and cultural life of the Madurese. Therefore, judgment of the Madurese cannot rest on a single event, particularly an assessment that focuses only on Madurese violence and disregards structural and social aspects; assessing Madura culture as a whole can explain the characteristics of the Madurese community. According to Hofstede, culture is the characteristic mindset and perspective of individuals or groups in addressing life, and these differences distinguish individuals from one another, or one country from another. Hofstede assesses culture along four dimensions: individualism-collectivism, uncertainty avoidance, masculinity-femininity, and power distance. The method used in this research is a case study. The results show that the Madurese are classified as collectivist, as seen in the settlement pattern called kampong meji; that the Madurese exhibit both low and high uncertainty avoidance; that power distance among the Madurese is unequal, with distance based on social groups; and that the element of masculinity appears in the earnestness of their work.
Land Cover Analysis by Using Pixel-Based and Object-Based Image Classification Method in Bogor
NASA Astrophysics Data System (ADS)
Amalisana, Birohmatin; Rokhmatullah; Hernina, Revi
2017-12-01
The advantage of image classification is that it provides information about the earth's surface, such as landcover and its time-series changes. Nowadays, pixel-based image classification is commonly performed with a variety of algorithms, such as minimum distance, parallelepiped, maximum likelihood, and Mahalanobis distance. On the other hand, landcover classification can also be obtained with object-based image classification, which uses image segmentation driven by parameters such as scale, form, colour, smoothness and compactness. This research aims to compare landcover classification results and change detection between the parallelepiped pixel-based and the object-based classification method. The study area is Bogor, observed over the 20 years from 1996 to 2016. The region is known as an urban area that changes continuously due to rapid development, so time-series landcover information for this region is of particular interest.
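The minimum-distance rule mentioned above assigns each pixel to the class whose training-sample mean (spectral signature) is nearest; a minimal sketch with hypothetical two-band training pixels:

```python
import numpy as np

def min_distance_to_mean(q, X, y):
    """Minimum-distance-to-mean rule: assign the pixel q to the class whose
    training mean is closest in Euclidean distance."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes[np.argmin(np.linalg.norm(means - q, axis=1))]

# Hypothetical two-band reflectances for 'water' (0) and 'built-up' (1).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]])
y = np.array([0, 0, 1, 1])
print(min_distance_to_mean(np.array([0.15, 0.1]), X, y))  # → 0
```

The parallelepiped classifier compared in this study instead tests whether each band value falls within a per-class min-max box, so pixels outside every box remain unclassified.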
Spatial variability in airborne pollen concentrations.
Raynor, G S; Ogden, E C; Hayes, J V
1975-03-01
Tests were conducted to determine the relationship between airborne pollen concentrations and distance. Simultaneous samples were taken in 171 tests with sets of eight rotoslide samplers spaced from 1 to 486 m apart in straight lines. Use of all possible pairs gave 28 separation distances. Tests were conducted over a 2-year period in urban and rural locations distant from major pollen sources during both tree and ragweed pollen seasons. Samples were taken at a height of 1.5 m during 5- to 20-minute periods. Tests were grouped by pollen type, location, year, and direction of the wind relative to the line. Data were analyzed to evaluate variability without regard to sampler spacing and variability as a function of separation distance. The mean, standard deviation, coefficient of variation, ratio of maximum to the mean, and ratio of minimum to the mean were calculated for each test, each group of tests, and all cases. The average coefficient of variation is 0.21; the average maximum over the mean, 1.39; and the average minimum over the mean, 0.69. No relationship was found with experimental conditions. Samples taken at the minimum separation distance had a mean difference of 18 percent. Differences between pairs of samples increased with distance in 10 of 13 groups. These results suggest that airborne pollens are not always well mixed in the lower atmosphere and that a sample becomes less representative with increasing distance from the sampling location.
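The per-test statistics described above (coefficient of variation, maximum/mean and minimum/mean ratios) can be computed directly; the counts below are illustrative, not the study's data:

```python
import numpy as np

# Hypothetical pollen counts from one test's eight samplers.
counts = np.array([52.0, 48.0, 60.0, 40.0, 55.0, 45.0, 58.0, 42.0])

mean = counts.mean()
cv = counts.std(ddof=1) / mean        # coefficient of variation
max_ratio = counts.max() / mean       # maximum over the mean
min_ratio = counts.min() / mean       # minimum over the mean
print(round(mean, 1), round(cv, 2), round(max_ratio, 2), round(min_ratio, 2))
```

Computing these three quantities per test and averaging over tests gives the 0.21, 1.39 and 0.69 figures reported in the abstract.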
A novel underwater dam crack detection and classification approach based on sonar images
Shi, Pengfei; Fan, Xinnan; Ni, Jianjun; Khan, Zubair; Li, Min
2017-01-01
Underwater dam crack detection and classification based on sonar images is a challenging task because underwater environments are complex and because cracks are quite random and diverse in nature. Furthermore, obtainable sonar images are of low resolution. To address these problems, a novel underwater dam crack detection and classification approach based on sonar imagery is proposed. First, the sonar images are divided into image blocks. Second, a clustering analysis of a 3-D feature space is used to obtain the crack fragments. Third, the crack fragments are connected using an improved tensor voting method. Fourth, a minimum spanning tree is used to obtain the crack curve. Finally, an improved evidence theory combined with fuzzy rule reasoning is proposed to classify the cracks. Experimental results show that the proposed approach is able to detect underwater dam cracks and classify them accurately and effectively under complex underwater environments. PMID:28640925
Communication scheme based on evolutionary spatial 2×2 games
NASA Astrophysics Data System (ADS)
Ziaukas, Pranas; Ragulskis, Tautvydas; Ragulskis, Minvydas
2014-06-01
A visual communication scheme based on evolutionary spatial 2×2 games is proposed in this paper. Self-organizing patterns induced by complex interactions between competing individuals are exploited for hiding and transmitting secret visual information. Properties of the proposed communication scheme are discussed in detail. It is shown that the hiding capacity of the system (the minimum size of the detectable primitives and the minimum distance between two primitives) is sufficient for the effective transmission of digital dichotomous images. It is also demonstrated that the proposed communication scheme is resilient to time-backwards and plain-image attacks, and is highly sensitive to perturbations of the private and public keys. Several computational experiments are used to demonstrate the effectiveness of the proposed communication scheme.
Rotela, Camilo H; Spinsanti, Lorena I; Lamfri, Mario A; Contigiani, Marta S; Almirón, Walter R; Scavuzzo, Carlos M
2011-11-01
In response to the first human outbreak (January-May 2005) of Saint Louis encephalitis (SLE) virus in Córdoba province, Argentina, we developed an environmental SLE virus risk map for the capital, i.e. Córdoba city. The aim was to provide a map capable of detecting macro-environmental factors associated with the spatial distribution of SLE cases, based on remotely sensed data and a geographical information system. Vegetation, soil brightness, humidity status, distance to water bodies and areas covered by vegetation were assessed based on pre-outbreak images provided by the Landsat 5TM satellite. A strong inverse relationship between the number of humans infected by SLEV and distance to high-vigor vegetation was noted. A statistical non-hierarchic decision tree model was constructed, based on environmental variables representing the areas surrounding patient residences. According to this model, 18% of the city could be classified as being at high risk for SLEV infection, while 34% carried a low risk, or none at all. Taking the whole 2005 epidemic into account, 80% of the cases came from areas classified by the model as medium-high or high risk. Almost 46% of the cases were registered in high-risk areas, while there were no cases (0%) in areas classified as risk-free.
Tensor manifold-based extreme learning machine for 2.5-D face recognition
NASA Astrophysics Data System (ADS)
Chong, Lee Ying; Ong, Thian Song; Teoh, Andrew Beng Jin
2018-01-01
We explore the use of the Gabor regional covariance matrix (GRCM), a flexible matrix-based descriptor that embeds the Gabor features in the covariance matrix, as a 2.5-D facial descriptor and an effective means of feature fusion for 2.5-D face recognition problems. Despite its promise, matching is not a trivial problem for GRCM, since it is a special instance of a symmetric positive definite (SPD) matrix that resides in a non-Euclidean space as a tensor manifold. This implies that GRCM is incompatible with existing vector-based classifiers and distance matchers. Therefore, we bridge the gap between the GRCM and the extreme learning machine (ELM), a vector-based classifier, for the 2.5-D face recognition problem. We put forward a tensor-manifold-compliant ELM and its two variants by randomly embedding the SPD matrix into a reproducing kernel Hilbert space (RKHS) via tensor kernel functions. To preserve the pairwise distances of the embedded data, we orthogonalize the randomly embedded SPD matrix. Hence, classification can be done using a simple ridge regressor, an integrated component of ELM, on the random orthogonal RKHS. Experimental results show that our proposed method is able to improve the recognition performance and further enhance the computational efficiency.
Comparison of Different EHG Feature Selection Methods for the Detection of Preterm Labor
Alamedine, D.; Khalil, M.; Marque, C.
2013-01-01
Numerous types of linear and nonlinear features have been extracted from the electrohysterogram (EHG) in order to classify labor and pregnancy contractions. As a result, the number of available features is now very large. The goal of this study is to reduce the number of features by selecting only the relevant ones, i.e. those useful for solving the classification problem. This paper presents three methods for feature subset selection that can be applied to choose the best subsets for classifying labor and pregnancy contractions: an algorithm using the Jeffrey divergence (JD) distance, a sequential forward selection (SFS) algorithm, and a binary particle swarm optimization (BPSO) algorithm. The last two methods are classifier-based and were tested with three types of classifiers. These methods have allowed us to identify common features which are relevant for contraction classification. PMID:24454536
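The greedy SFS wrapper mentioned above can be sketched as follows; the scoring function here is a made-up stand-in for the classifier-based criterion, and the feature names and merit values are hypothetical:

```python
import itertools

def sequential_forward_selection(features, score, k):
    """Greedy SFS: start with an empty subset and repeatedly add the
    single feature that most improves the wrapper score, until k
    features have been chosen."""
    selected = []
    remaining = list(features)
    while len(selected) < k and remaining:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: additive per-feature merits with a redundancy penalty,
# standing in for cross-validated classifier accuracy.
merit = {"rms": 0.6, "entropy": 0.5, "peak_freq": 0.55}
redundant = {frozenset(["rms", "peak_freq"])}

def score(subset):
    s = sum(merit[f] for f in subset)
    for a, b in itertools.combinations(subset, 2):
        if frozenset([a, b]) in redundant:
            s -= 0.3  # penalize picking two redundant features
    return s

print(sequential_forward_selection(merit, score, 2))
```

Note how the redundancy penalty makes SFS skip `peak_freq` despite its higher individual merit than `entropy`, the behaviour that distinguishes subset selection from simple feature ranking.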
Stackable differential mobility analyzer for aerosol measurement
Cheng, Meng-Dawn [Oak Ridge, TN; Chen, Da-Ren [Creve Coeur, MO
2007-05-08
A multi-stage differential mobility analyzer (MDMA) for aerosol measurements includes a first electrode or grid including at least one inlet or injection slit for receiving an aerosol including charged particles for analysis. A second electrode or grid is spaced apart from the first electrode. The second electrode has at least one sampling outlet disposed at a plurality different distances along its length. A volume between the first and the second electrode or grid between the inlet or injection slit and a distal one of the plurality of sampling outlets forms a classifying region, the first and second electrodes for charging to suitable potentials to create an electric field within the classifying region. At least one inlet or injection slit in the second electrode receives a sheath gas flow into an upstream end of the classifying region, wherein each sampling outlet functions as an independent DMA stage and classifies different size ranges of charged particles based on electric mobility simultaneously.
Node Deployment with k-Connectivity in Sensor Networks for Crop Information Full Coverage Monitoring
Liu, Naisen; Cao, Weixing; Zhu, Yan; Zhang, Jingchao; Pang, Fangrong; Ni, Jun
2016-01-01
Wireless sensor networks (WSNs) are suitable for the continuous monitoring of crop information in large-scale farmland. The information obtained is valuable for regulating crop growth and achieving high yields in precision agriculture (PA). In order to realize full-coverage, k-connectivity WSN deployment for monitoring crop growth information in large-scale farmland, and to ensure the accuracy of the monitored data, a new WSN deployment method using a genetic algorithm (GA) is proposed. The fitness function of the GA was constructed from the following WSN deployment criteria: (1) nodes must be located in their corresponding plots; (2) the WSN must have k-connectivity; (3) the WSN must have no communication silos; (4) the minimum distance between a node and the plot boundary must be greater than a specific value, to prevent each node from being affected by the farmland edge effect. Deployment experiments were performed on natural farmland and on irregular farmland divided according to spatial differences in soil nutrients. Results showed that both WSNs gave full coverage, had no communication silos, and had minimum node connectivity equal to k. The deployment was tested for different values of k and node transmission distance (d). When d was set to 200 m, the minimum connectivity of nodes remained equal to k as k increased from 2 to 4. When k was set to 2, the average connectivity of all nodes increased linearly as d increased from 140 m to 250 m, while the minimum connectivity did not change. PMID:27941704
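The k-connectivity criterion above can be illustrated by checking the minimum node degree of a candidate deployment, which is a necessary (though not sufficient) condition for k-connectivity; the node layout and radio range here are invented:

```python
import math

def min_degree(nodes, radio_range):
    """Minimum number of in-range neighbours over all nodes.
    A k-connected WSN requires min_degree >= k (necessary condition)."""
    degrees = []
    for i, a in enumerate(nodes):
        d = sum(1 for j, b in enumerate(nodes)
                if i != j and math.dist(a, b) <= radio_range)
        degrees.append(d)
    return min(degrees)

# Hypothetical node positions (metres) on a square; with a 200 m
# transmission distance each node reaches only its two side neighbours.
nodes = [(0, 0), (150, 0), (150, 150), (0, 150)]
print(min_degree(nodes, 200))
```

A full GA fitness function would combine this check with the plot-membership, silo, and edge-distance criteria as weighted penalty terms.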
Solving a Higgs optimization problem with quantum annealing for machine learning.
Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria
2017-10-18
The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.
49 CFR 176.708 - Segregation distances.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 2 2011-10-01 2011-10-01 false Segregation distances. 176.708 Section 176.708... Requirements for Radioactive Materials § 176.708 Segregation distances. (a) Table IV lists minimum separation... into account any relocation of cargo during the voyage. (e) Any departure from the segregation...
Effect of quantum well position on the distortion characteristics of transistor laser
NASA Astrophysics Data System (ADS)
Piramasubramanian, S.; Ganesh Madhan, M.; Radha, V.; Shajithaparveen, S. M. S.; Nivetha, G.
2018-05-01
The effect of quantum well position on the modulation and distortion characteristics of a 1300 nm transistor laser is analyzed in this paper. Standard three-level rate equations are numerically solved to study these characteristics. The modulation depth, second-order harmonic distortion and third-order intermodulation distortion of the transistor laser are evaluated for different quantum well positions under 900 MHz RF signal modulation. From the DC analysis, it is observed that the optical power is maximum when the quantum well is positioned near the base-emitter interface. The threshold current of the device is found to increase with increasing distance between the quantum well and the base-emitter junction. A maximum modulation depth of 0.81 is predicted when the quantum well is placed 10 nm from the base-emitter junction, under RF modulation. The magnitudes of harmonic and intermodulation distortion are found to decrease with increasing bias current and with increasing quantum well distance from the emitter-base junction. A minimum second-harmonic distortion magnitude of -25.96 dBc is predicted for a quantum well position (230 nm) near the base-collector interface at a 900 MHz modulation frequency and a bias current of 20 Ibth. Similarly, a minimum third-order intermodulation distortion of -38.2 dBc is obtained for the same position and similar biasing conditions.
Neuro-fuzzy model for estimating race and gender from geometric distances of human face across pose
NASA Astrophysics Data System (ADS)
Nanaa, K.; Rahman, M. N. A.; Rizon, M.; Mohamad, F. S.; Mamat, M.
2018-03-01
Classifying human faces by race and gender is a vital process in face recognition. It contributes to building an index database and eases 3-D synthesis of the human face. Identifying race and gender from intrinsic factors is problematic, which makes a nonlinear model better suited to the estimation process. In this paper, we aim to estimate race and gender across varied head poses. For this purpose, we collect a dataset from the PICS and CAS-PEAL databases, detect the facial landmarks and rotate them to the frontal pose. After the geometric distances are calculated, all distance values are normalized. Implementation is carried out using a neural network model and a fuzzy logic model, combined into an adaptive neuro-fuzzy model. The experimental results showed that optimizing the fuzzy membership functions gives a better assessment rate, and that estimating race contributes to a more accurate gender assessment.
Rate-Compatible LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel
2009-01-01
A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adesso, Gerardo; CNR-INFM Coherentia, Naples; CNISM, Unita di Salerno, Salerno
2007-10-15
We present a geometric approach to the characterization of separability and entanglement in pure Gaussian states of an arbitrary number of modes. The analysis is performed by adapting to continuous variables a formalism based on single-subsystem unitary transformations that was recently introduced to characterize separability and entanglement in pure states of qubits and qutrits [S. M. Giampaolo and F. Illuminati, Phys. Rev. A 76, 042301 (2007)]. In analogy with the finite-dimensional case, we demonstrate that the 1xM bipartite entanglement of a multimode pure Gaussian state can be quantified by the minimum squared Euclidean distance between the state itself and the set of states obtained by transforming it via suitable local symplectic (unitary) operations. This minimum distance, corresponding to a uniquely determined extremal local operation, defines an entanglement monotone equivalent to the entropy of entanglement, and amenable to direct experimental measurement with linear optical schemes.
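In symbols, the geometric measure described above can be sketched as follows (the notation is assumed for illustration, not taken verbatim from the paper):

```latex
E\bigl(|\psi\rangle\bigr) \;=\; \min_{U_{\mathrm{loc}}}\;
  \bigl\|\, |\psi\rangle - U_{\mathrm{loc}}\,|\psi\rangle \,\bigr\|^{2}
```

where the minimization runs over local symplectic (unitary) operations acting on the chosen single subsystem, and the minimizing operation is the uniquely determined extremal local operation mentioned in the abstract.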
ELECTROFISHING DISTANCE NEEDED TO ESTIMATE FISH SPECIES RICHNESS IN RAFTABLE WESTERN USA RIVERS
A critical issue in river monitoring is the minimum amount of sampling distance required to adequately represent the fish assemblage of a reach. Determining adequate sampling distance is important because it affects estimates of fish assemblage integrity and diversity at local a...
Knebelsberger, Thomas; Landi, Monica; Neumann, Hermann; Kloppmann, Matthias; Sell, Anne F; Campbell, Patrick D; Laakmann, Silke; Raupach, Michael J; Carvalho, Gary R; Costa, Filipe O
2014-09-01
Valid fish species identification is an essential step both for fundamental science and for fisheries management. Traditional identification is mainly based on external morphological diagnostic characters, leading to inconsistent results in many cases. Here, we provide a sequence reference library based on mitochondrial cytochrome c oxidase subunit I (COI) for the valid identification of 93 North Atlantic fish species originating from the North Sea and adjacent waters, including many commercially exploited species. Neighbour-joining analysis based on K2P genetic distances formed nonoverlapping clusters for all species, each with ≥99% bootstrap support. Identification was successful for 100% of the species, as the minimum genetic distance to the nearest neighbour always exceeded the maximum intraspecific distance. A barcoding gap was apparent for the whole data set. Within-species distances ranged from 0 to 2.35%, while interspecific distances varied between 3.15 and 28.09%. Distances between congeners were on average 51-fold higher than those within species. Validation of the sequence library using BOLD's barcode index number (BIN) analysis tool and a ranking system demonstrated high taxonomic reliability of the DNA barcodes for 85% of the investigated fish species. Thus, the sequence library presented here can be confidently used as a benchmark for identification of at least two-thirds of the typical fish species recorded for the North Sea. © 2014 John Wiley & Sons Ltd.
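The K2P (Kimura 2-parameter) distance underlying the neighbour-joining analysis can be sketched as follows; the short aligned fragments are invented, not taken from the study, and gap handling is omitted for simplicity:

```python
import math

def k2p_distance(seq1, seq2):
    """Kimura 2-parameter distance between two aligned sequences:
    d = -0.5*ln(1 - 2P - Q) - 0.25*ln(1 - 2Q),
    where P and Q are the proportions of transition and transversion
    differences, respectively."""
    purines = {"A", "G"}
    n = ts = tv = 0
    for a, b in zip(seq1, seq2):
        n += 1
        if a == b:
            continue
        if (a in purines) == (b in purines):
            ts += 1  # A<->G or C<->T: transition
        else:
            tv += 1  # purine<->pyrimidine: transversion
    P, Q = ts / n, tv / n
    return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)

# Two hypothetical COI fragments differing by one transition (G->A).
d = k2p_distance("ACGTACGTAC", "ACGTACATAC")
print(round(d, 4))
```

For small divergences the K2P distance is close to the raw proportion of differing sites; the logarithmic correction matters as distances grow.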
NASA Astrophysics Data System (ADS)
Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke
2017-07-01
A training image (TI) can be regarded as a database of spatial structures and their low- to higher-order statistics used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of TIs vary considerably, meaning that different CTIs are compatible with the conditioning data to different degrees. Therefore, evaluating and optimally selecting CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on minimum data event distance (MDevD). In the proposed method, a set of MDevD properties is established by calculating the MDevD of the conditioning data events in each CTI. CTIs are then evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located, to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. To illustrate the method, a C++ implementation of the method is attached to the paper.
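The MDevD ranking idea can be sketched on toy categorical data events, with Hamming distance standing in for the data-event distance; the TI names and events below are hypothetical:

```python
def rank_training_images(conditioning_events, ctis, dev_distance):
    """Rank candidate TIs by the mean, then variance, of the minimum
    data-event distance (MDevD): for each conditioning data event,
    find its closest matching event in the TI. Smaller mean and
    variance indicate better compatibility."""
    scores = {}
    for name, ti_events in ctis.items():
        mdevs = [min(dev_distance(e, t) for t in ti_events)
                 for e in conditioning_events]
        mean = sum(mdevs) / len(mdevs)
        var = sum((d - mean) ** 2 for d in mdevs) / len(mdevs)
        scores[name] = (mean, var)
    return sorted(scores, key=scores.get)

# Toy categorical data events as tuples of facies codes;
# Hamming distance stands in for the data-event distance.
hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
cond = [(0, 1, 1), (1, 1, 0)]
ctis = {
    "TI_channels": [(0, 1, 1), (1, 1, 0), (1, 0, 0)],
    "TI_lobes":    [(0, 0, 0), (1, 1, 1)],
}
print(rank_training_images(cond, ctis, hamming))
```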
Identification of terrain cover using the optimum polarimetric classifier
NASA Technical Reports Server (NTRS)
Kong, J. A.; Swartz, A. A.; Yueh, H. A.; Novak, L. M.; Shin, R. T.
1988-01-01
A systematic approach for the identification of terrain media such as vegetation canopy, forest, and snow-covered fields is developed using the optimum polarimetric classifier. The covariance matrices for various terrain covers are computed from theoretical random-medium models by evaluating the scattering matrix elements. The optimal classification scheme makes use of a quadratic distance measure and is applied to classify a vegetation canopy consisting of both trees and grass. Experimentally measured data are used to validate the classification scheme. Analytical and Monte Carlo simulated classification errors using the fully polarimetric feature vector are compared with classification based on single features, including the phase difference between the VV and HH polarization returns. It is shown that the fully polarimetric results are optimal and provide better classification performance than single-feature measurements.
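A quadratic distance measure of the kind used in the optimal classification scheme can be sketched as follows; the class means and covariances are invented two-feature stand-ins (the paper works with full complex polarimetric covariance matrices):

```python
import numpy as np

def quadratic_distance(x, mean, cov):
    """Gaussian log-likelihood based quadratic distance:
    (x - m)^H C^{-1} (x - m) + ln|C|. The ln|C| term penalizes
    classes with large covariance volume."""
    d = x - mean
    return float(d.conj() @ np.linalg.inv(cov) @ d
                 + np.log(np.linalg.det(cov)).real)

def classify(x, classes):
    """Assign x to the class with the smallest quadratic distance."""
    return min(classes, key=lambda name: quadratic_distance(x, *classes[name]))

# Hypothetical 2-feature class models (mean vector, covariance matrix).
classes = {
    "trees": (np.array([1.0, 0.0]), np.eye(2)),
    "grass": (np.array([0.0, 1.0]), np.eye(2) * 0.5),
}
print(classify(np.array([0.1, 0.9]), classes))
```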
Fifteen new earthworm mitogenomes shed new light on phylogeny within the Pheretima complex
Zhang, Liangliang; Sechi, Pierfrancesco; Yuan, Minglong; Jiang, Jibao; Dong, Yan; Qiu, Jiangping
2016-01-01
The Pheretima complex within the Megascolecidae family is a major earthworm group. Recently, the systematic status of the Pheretima complex based on morphology was challenged by molecular studies. In this study, we carry out the first comparative mitogenomic study in oligochaetes. The mitogenomes of 15 earthworm species were sequenced and compared with nine other available earthworm mitogenomes, with the main aims of exploring their phylogenetic relationships and testing different analytical approaches to phylogeny reconstruction. The general earthworm mitogenomic features proved conservative: all genes are encoded on the same strand, all protein-coding loci share the same initiation codon (ATG), and tRNA genes show conserved structures. The Drawida japonica mitogenome displayed the highest A + T content, reversed AT/GC skews and the highest genetic diversity. Genetic distances among protein-coding genes reached their maximum and minimum interspecific values in the ATP8 and CO1 genes, respectively. The 22 tRNAs showed variable substitution patterns among the considered earthworm mitogenomes. The inclusion of rRNAs positively increased phylogenetic support. Furthermore, we tested different trimming tools for alignment improvement. Our analyses rejected reciprocal monophyly of Amynthas and Metaphire and indicated that the two genera should be systematically classified into one. PMID:26833286
Biswas, Dwaipayan; Cranny, Andy; Gupta, Nayaab; Maharatna, Koushik; Achner, Josy; Klemke, Jasmin; Jöbges, Michael; Ortmann, Steffen
2015-04-01
In this paper we present a methodology for recognizing three fundamental movements of the human forearm (extension, flexion and rotation) using pattern recognition applied to the data from a single wrist-worn inertial sensor. We propose that this technique could be used as a clinical tool to assess rehabilitation progress in neurodegenerative pathologies such as stroke or cerebral palsy, by tracking the number of times a patient performs specific arm movements (e.g. prescribed exercises) with their paretic arm throughout the day. We demonstrate this with healthy subjects and stroke patients in a simple proof-of-concept study in which these arm movements are detected during an archetypal activity of daily living (ADL): making a cup of tea. Data were collected from a tri-axial accelerometer and a tri-axial gyroscope located proximal to the wrist. In a training phase, movements are initially performed in a controlled environment and are represented by a ranked set of 30 time-domain features. Using a sequential forward selection technique, for each set of feature combinations three clusters are formed using k-means clustering, followed by 10 runs of 10-fold cross-validation on the training data to determine the best feature combinations. In the testing phase, movements performed during the ADL are associated with each cluster label using a minimum distance classifier in a multi-dimensional feature space comprised of the best-ranked features, using Euclidean or Mahalanobis distance as the metric. Experiments were performed with four healthy subjects and four stroke survivors, and our results show that the proposed methodology can detect the three movements performed during the ADL with an overall average accuracy of 88% using the accelerometer data and 83% using the gyroscope data across all healthy subjects and arm movement types. The average accuracy across all stroke survivors was 70% using accelerometer data and 66% using gyroscope data. We also use a Linear Discriminant Analysis (LDA) classifier and a Support Vector Machine (SVM) classifier in association with the same set of features to detect the three arm movements, and compare the results to demonstrate the effectiveness of our proposed methodology. Copyright © 2014 Elsevier B.V. All rights reserved.
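The minimum distance classifier with a switchable Euclidean/Mahalanobis metric described above can be sketched as follows; the cluster centroids and feature values are made up for illustration:

```python
import numpy as np

def nearest_cluster(x, centroids, inv_cov=None):
    """Minimum-distance classification of a feature vector against
    cluster centres: squared Euclidean distance by default, or squared
    Mahalanobis distance when an inverse covariance is supplied."""
    def dist(c):
        d = x - c
        if inv_cov is None:
            return float(d @ d)            # squared Euclidean
        return float(d @ inv_cov @ d)      # squared Mahalanobis
    return int(min(range(len(centroids)), key=lambda i: dist(centroids[i])))

# Hypothetical 2-D centroids for extension / flexion / rotation clusters.
centroids = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
x = np.array([4.2, 0.7])
print(nearest_cluster(x, centroids))                        # Euclidean
print(nearest_cluster(x, centroids, np.diag([1.0, 10.0])))  # Mahalanobis
```

In practice the inverse covariance would be estimated per cluster from the training data, so the Mahalanobis metric weights each feature by its within-cluster variability.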
Structure for identifying, locating and quantifying physical phenomena
Richardson, John G.
2006-10-24
A method and system for detecting, locating and quantifying a physical phenomena such as strain or a deformation in a structure. A minimum resolvable distance along the structure is selected and a quantity of laterally adjacent conductors is determined. Each conductor includes a plurality of segments coupled in series which define the minimum resolvable distance along the structure. When a deformation occurs, changes in the defined energy transmission characteristics along each conductor are compared to determine which segment contains the deformation.
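The segment-comparison idea can be sketched as follows; the per-segment resistance values and the 0.5 m segment length are hypothetical:

```python
def locate_deformation(baseline, measured, segment_length):
    """Compare per-segment transmission characteristics along a
    conductor against their baseline; the segment with the largest
    deviation localizes the deformation to within one minimum
    resolvable distance (the segment length)."""
    deltas = [abs(m - b) for b, m in zip(baseline, measured)]
    worst = max(range(len(deltas)), key=deltas.__getitem__)
    return worst, worst * segment_length

# Hypothetical per-segment resistances (ohms) before and after a
# strain event, with 0.5 m segments.
baseline = [1.00, 1.01, 0.99, 1.00, 1.02]
measured = [1.00, 1.02, 1.35, 1.01, 1.02]
segment, position = locate_deformation(baseline, measured, 0.5)
print(segment, position)
```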
Dissimilarity representations in lung parenchyma classification
NASA Astrophysics Data System (ADS)
Sørensen, Lauge; de Bruijne, Marleen
2009-02-01
A good problem representation is important for a pattern recognition system to be successful. The traditional approach to statistical pattern recognition is feature representation. More specifically, objects are represented by a number of features in a feature vector space, and classifiers are built in this representation. This is also the general trend in lung parenchyma classification in computed tomography (CT) images, where the features often are measures on feature histograms. Instead, we propose to build normal density based classifiers in dissimilarity representations for lung parenchyma classification. This allows for the classifiers to work on dissimilarities between objects, which might be a more natural way of representing lung parenchyma. In this context, dissimilarity is defined between CT regions of interest (ROI)s. ROIs are represented by their CT attenuation histogram and ROI dissimilarity is defined as a histogram dissimilarity measure between the attenuation histograms. In this setting, the full histograms are utilized according to the chosen histogram dissimilarity measure. We apply this idea to classification of different emphysema patterns as well as normal, healthy tissue. Two dissimilarity representation approaches as well as different histogram dissimilarity measures are considered. The approaches are evaluated on a set of 168 CT ROIs using normal density based classifiers all showing good performance. Compared to using histogram dissimilarity directly as the distance in a k-nearest-neighbor classifier, which achieves a classification accuracy of 92.9%, the best dissimilarity representation based classifier is significantly better, with a classification accuracy of 97.0% (p = 0.046).
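A histogram dissimilarity used directly as a nearest-neighbour distance, the baseline the authors compare against, might look like the sketch below. L1 distance is chosen here purely for simplicity (the paper considers several measures), and the attenuation histograms are invented:

```python
def l1_dissimilarity(h1, h2):
    """Simple histogram dissimilarity: L1 distance between two
    normalized attenuation histograms."""
    s1, s2 = sum(h1), sum(h2)
    return sum(abs(a / s1 - b / s2) for a, b in zip(h1, h2))

def nearest_prototype(h, prototypes):
    """1-NN over ROI histograms, using the dissimilarity directly
    as the distance."""
    return min(prototypes,
               key=lambda label: l1_dissimilarity(h, prototypes[label]))

# Hypothetical CT attenuation histograms (counts per HU bin).
prototypes = {
    "normal":    [5, 20, 50, 20, 5],
    "emphysema": [40, 30, 20, 8, 2],
}
roi = [35, 32, 22, 9, 2]
print(nearest_prototype(roi, prototypes))
```

A dissimilarity representation, by contrast, would turn each ROI into a vector of its dissimilarities to a set of prototype ROIs and train a density-based classifier in that vector space.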
Classifying features in CT imagery: accuracy for some single- and multiple-species classifiers
Daniel L. Schmoldt; Jing He; A. Lynn Abbott
1998-01-01
Our current approach to automatically label features in CT images of hardwood logs classifies each pixel of an image individually. These feature classifiers use a back-propagation artificial neural network (ANN) and feature vectors that include a small, local neighborhood of pixels and the distance of the target pixel to the center of the log. Initially, this type of...
Simulation of Collision of Arbitrary Shape Particles with Wall in a Viscous Fluid
NASA Astrophysics Data System (ADS)
Mohaghegh, Fazlolah; Udaykumar, H. S.
2016-11-01
Collision of finite-size particles of arbitrary shape with a wall in a viscous flow is modeled using the immersed boundary method. A potential function indicating the distance from the interface is introduced for the particles and the wall. The potential can be defined either by an analytical expression or by the level set method. The collision starts when the indicator potentials of the particle and wall overlap beyond a minimum cutoff. A simplified mass-spring model is used to apply the collision forces. Instead of using a dashpot to damp the energy, the spring stiffness is adjusted during the bounce. The results for the collision of a falling sphere with the bottom wall agree well with experiments. Moreover, it is shown that the results are independent of the minimum collision cutoff distance. Finally, when the particle's shape is ellipsoidal, the rotation of the particle after the collision becomes important and noticeable: at low Stokes numbers, the particle nearly adheres to the wall on one side and rotates until it reaches the minimum gravitational potential. At high Stokes numbers, the particle bounces and loses energy until it reaches a state with a low Stokes number.
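A minimal one-dimensional sketch of the simplified mass-spring collision idea: a linear spring repels the particle once its wall gap falls below a cutoff. All parameters and the explicit time stepping are illustrative assumptions; the paper's model is embedded in an immersed-boundary flow solver and adjusts the stiffness during the bounce to dissipate energy, which is omitted here.

```python
def collision_force(gap, cutoff, k):
    """Linear spring repulsion, active only while the particle-wall gap
    is below the cutoff (the overlap of the indicator potentials)."""
    overlap = cutoff - gap
    return k * overlap if overlap > 0.0 else 0.0

# 1D sphere falling onto a wall under gravity; symplectic Euler stepping.
x, v = 1.0, 0.0           # height (m) and velocity (m/s)
dt, m, g = 1e-3, 1.0, 9.81
cutoff, k = 0.05, 1e4     # activation distance (m), spring stiffness (N/m)
for _ in range(5000):
    f = -m * g + collision_force(x, cutoff, k)
    v += f / m * dt
    x += v * dt
# The spring keeps the particle above the wall through repeated bounces.
```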
IMHRP: Improved Multi-Hop Routing Protocol for Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Huang, Jianhua; Ruan, Danwei; Hong, Yadong; Zhao, Ziming; Zheng, Hong
2017-10-01
Wireless sensor network (WSN) is a self-organizing system formed by a large number of low-cost sensor nodes through wireless communication. Sensor nodes collect environmental information and transmit it to the base station (BS). Sensor nodes usually have very limited battery energy, and the batteries cannot be charged or replaced. Therefore, it is necessary to design an energy-efficient routing protocol to maximize the network lifetime. This paper presents an improved multi-hop routing protocol (IMHRP) for homogeneous networks. In the IMHRP protocol, cluster-head (CH) nodes are divided into internal CH nodes and external CH nodes based on their distances to the BS. The set-up phase of the protocol is based on the LEACH protocol, and the minimum distance between CH nodes is limited to a specified constant, so a more uniform distribution of CH nodes is achieved. In the steady-state phase, the routes of different CH nodes are created on the basis of the distances between the CH nodes, so the energy efficiency of communication can be maximized. The simulation results show that the proposed algorithm can more effectively reduce the energy consumption of each round and prolong the network lifetime compared with the LEACH and MHT protocols.
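The set-up-phase idea, accepting candidate cluster heads only if they keep a minimum separation from heads already elected, can be sketched as follows. The election probability, field size and node positions are assumed values, not the protocol's.

```python
import random

def dist(a, b):
    """Euclidean distance between two 2D node positions."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def select_cluster_heads(nodes, p, d_min, seed=0):
    """LEACH-style random cluster-head election with the IMHRP-style
    constraint that no two heads lie closer than d_min, which spreads
    the heads more uniformly across the field. (Sketch only.)"""
    rng = random.Random(seed)
    heads = []
    for node in nodes:
        if rng.random() < p and all(dist(node, h) >= d_min for h in heads):
            heads.append(node)
    return heads

rng = random.Random(1)
nodes = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(100)]
heads = select_cluster_heads(nodes, p=0.1, d_min=20.0)
# Every pair of elected heads respects the minimum separation.
assert all(dist(a, b) >= 20.0 for i, a in enumerate(heads)
           for b in heads[i + 1:])
```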
Kivelä, Mikko; Arnaud-Haond, Sophie; Saramäki, Jari
2015-01-01
The recent application of graph-based network theory analysis to biogeography, community ecology and population genetics has created a need for user-friendly software, which would allow a wider accessibility to and adaptation of these methods. EDENetworks aims to fill this void by providing an easy-to-use interface for the whole analysis pipeline of ecological and evolutionary networks starting from matrices of species distributions, genotypes, bacterial OTUs or populations characterized genetically. The user can choose between several different ecological distance metrics, such as Bray-Curtis or Sorensen distance, or population genetic metrics such as FST or Goldstein distances, to turn the raw data into a distance/dissimilarity matrix. This matrix is then transformed into a network by manual or automatic thresholding based on percolation theory or by building the minimum spanning tree. The networks can be visualized along with auxiliary data and analysed with various metrics such as degree, clustering coefficient, assortativity and betweenness centrality. The statistical significance of the results can be estimated either by resampling the original biological data or by null models based on permutations of the data. © 2014 John Wiley & Sons Ltd.
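The minimum-spanning-tree option in the pipeline, turning a distance/dissimilarity matrix into a network, can be sketched with Prim's algorithm. This is a minimal stand-in for what the software does, and the toy matrix is invented.

```python
def minimum_spanning_tree(d):
    """Prim's algorithm on a full distance matrix; returns tree edges as
    (inside-node, newly-attached-node) pairs."""
    n = len(d)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        # Cheapest edge from the current tree to any outside node.
        i, j = min(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree), key=lambda e: d[e[0]][e[1]])
        edges.append((i, j))
        in_tree.add(j)
    return edges

# Toy pairwise dissimilarity matrix for four populations.
d = [[0, 2, 9, 4],
     [2, 0, 6, 3],
     [9, 6, 0, 5],
     [4, 3, 5, 0]]
print(minimum_spanning_tree(d))  # [(0, 1), (1, 3), (3, 2)]
```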
Automated time activity classification based on global positioning system (GPS) tracking data
Wu, Jun; Jiang, Chengsheng; Houston, Douglas; Baker, Dean; Delfino, Ralph
2011-01-01
Background Air pollution epidemiological studies are increasingly using global positioning system (GPS) to collect time-location data because they offer continuous tracking, high temporal resolution, and minimum reporting burden for participants. However, substantial uncertainties in the processing and classifying of raw GPS data create challenges for reliably characterizing time activity patterns. We developed and evaluated models to classify people's major time activity patterns from continuous GPS tracking data. Methods We developed and evaluated two automated models to classify major time activity patterns (i.e., indoor, outdoor static, outdoor walking, and in-vehicle travel) based on GPS time activity data collected under free living conditions for 47 participants (N = 131 person-days) from the Harbor Communities Time Location Study (HCTLS) in 2008 and supplemental GPS data collected from three UC-Irvine research staff (N = 21 person-days) in 2010. Time activity patterns used for model development were manually classified by research staff using information from participant GPS recordings, activity logs, and follow-up interviews. We evaluated two models: (a) a rule-based model that developed user-defined rules based on time, speed, and spatial location, and (b) a random forest decision tree model. Results Indoor, outdoor static, outdoor walking and in-vehicle travel activities accounted for 82.7%, 6.1%, 3.2% and 7.2% of manually-classified time activities in the HCTLS dataset, respectively. The rule-based model classified indoor and in-vehicle travel periods reasonably well (Indoor: sensitivity > 91%, specificity > 80%, and precision > 96%; in-vehicle travel: sensitivity > 71%, specificity > 99%, and precision > 88%), but the performance was moderate for outdoor static and outdoor walking predictions. No striking differences in performance were observed between the rule-based and the random forest models. 
The random forest model was fast and easy to execute, but was likely less robust than the rule-based model under the condition of biased or poor quality training data. Conclusions Our models can successfully identify indoor and in-vehicle travel points from the raw GPS data, but challenges remain in developing models to distinguish outdoor static points and walking. Accurate training data are essential in developing reliable models in classifying time-activity patterns. PMID:22082316
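A toy sketch of model (a), a rule-based classifier over speed and location cues. The thresholds and rule order here are illustrative assumptions, not the study's actual user-defined rules.

```python
def classify_gps_point(speed_kmh, indoors_signal, at_known_address):
    """Assign one GPS point to a major time-activity class using simple
    ordered rules on speed and spatial cues (illustrative thresholds)."""
    if at_known_address and indoors_signal:
        return "indoor"
    if speed_kmh < 1:
        return "outdoor static"
    if speed_kmh < 6:
        return "outdoor walking"
    return "in-vehicle travel"

print(classify_gps_point(45, False, False))   # in-vehicle travel
print(classify_gps_point(3.5, False, False))  # outdoor walking
```

In practice such rules would be applied over smoothed point sequences rather than single fixes, which is part of what makes the outdoor static / walking boundary hard.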
Tableau Calculus for the Logic of Comparative Similarity over Arbitrary Distance Spaces
NASA Astrophysics Data System (ADS)
Alenda, Régis; Olivetti, Nicola
The logic CSL (first introduced by Sheremet, Tishkovsky, Wolter and Zakharyaschev in 2005) allows one to reason about distance comparison and similarity comparison within a modal language. The logic can express assertions of the kind "A is closer/more similar to B than to C" and has a natural application to spatial reasoning, as well as to reasoning about concept similarity in ontologies. The semantics of CSL is defined in terms of models based on different classes of distance spaces and it generalizes the logic S4u of topological spaces. In this paper we consider CSL defined over arbitrary distance spaces. The logic comprises a binary modality to represent comparative similarity and a unary modality to express the existence of the minimum of a set of distances. We first show that the semantics of CSL can be equivalently defined in terms of preferential models. As a consequence we obtain the finite model property of the logic with respect to its preferential semantics, a property that does not hold with respect to the original distance-space semantics. Next we present an analytic tableau calculus based on its preferential semantics. The calculus provides a decision procedure for the logic; its termination is obtained by imposing suitable blocking restrictions.
Potential energy function for CH3+CH3 ⇆ C2H6: Attributes of the minimum energy path
NASA Astrophysics Data System (ADS)
Robertson, S. H.; Wardlaw, D. M.; Hirst, D. M.
1993-11-01
The region of the potential energy surface for the title reaction in the vicinity of its minimum energy path has been predicted from the analysis of ab initio electronic energy calculations. The ab initio procedure employs a 6-31G** basis set and a configuration interaction calculation which uses the orbitals obtained in a generalized valence bond calculation. Calculated equilibrium properties of ethane and of isolated methyl radical are compared to existing theoretical and experimental results. The reaction coordinate is represented by the carbon-carbon interatomic distance. The following attributes are reported as a function of this distance and fit to functional forms which smoothly interpolate between reactant and product values of each attribute: the minimum energy path potential, the minimum energy path geometry, normal mode frequencies for vibrational motion orthogonal to the reaction coordinate, a torsional potential, and a fundamental anharmonic frequency for local mode, out-of-plane CH3 bending (umbrella motion). The best representation is provided by a three-parameter modified Morse function for the minimum energy path potential and a two-parameter hyperbolic tangent switching function for all other attributes. A poorer but simpler representation, which may be satisfactory for selected applications, is provided by a standard Morse function and a one-parameter exponential switching function. Previous applications of the exponential switching function to estimate the reaction coordinate dependence of the frequencies and geometry of this system have assumed the same value of the range parameter α for each property and have taken α to be less than or equal to the "standard" value of 1.0 Å^-1. Based on the present analysis this is incorrect: the α values depend on the property and range from ~1.2 to ~1.8 Å^-1.
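The two fitted functional forms can be written down directly. Below are a standard Morse curve and a two-parameter hyperbolic-tangent switch; all numerical parameters are placeholders, not the fitted values from the paper.

```python
import math

def morse(r, d_e, beta, r_e):
    """Standard Morse potential: zero at the dissociation limit and -d_e
    at the equilibrium distance r_e."""
    return d_e * (1.0 - math.exp(-beta * (r - r_e))) ** 2 - d_e

def tanh_switch(r, f_react, f_prod, r_0, w):
    """Two-parameter hyperbolic-tangent switch interpolating a property
    between its product value (small r) and reactant value (large r)."""
    s = 0.5 * (1.0 + math.tanh((r - r_0) / w))
    return (1.0 - s) * f_prod + s * f_react

# Placeholder parameters, chosen only to show the limiting behaviour.
print(morse(1.54, 90.0, 2.0, 1.54))  # -90.0: well depth at equilibrium

def freq(r):
    return tanh_switch(r, 600.0, 1400.0, r_0=2.5, w=0.3)

print(round(freq(1.5)), round(freq(5.0)))  # 1399 600
```

The switch smoothly approaches 1400 near equilibrium and 600 at large separation, which is exactly the interpolation role it plays for each attribute.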
Heesch, Kristiann C; Schramm, Amy; Debnath, Ashim Kumar; Haworth, Narelle
2017-12-01
Issues addressed Cyclists' perceptions of harassment from motorists discourages cycling. This study examined changes in cyclists' reporting of harassment pre- to post-introduction of the Queensland trial of the minimum passing distance road rule amendment (MPD-RRA). Methods Cross-sectional online surveys of cyclists in Queensland, Australia were conducted in 2009 (pre-trial; n=1758) and 2015 (post-trial commencement; n=1997). Cyclists were asked about their experiences of harassment from motorists while cycling. Logistic regression modelling was used to examine differences in the reporting of harassment between these time periods, after adjustments for demographic characteristics and cycling behaviour. Results At both time periods, the most reported types of harassment were deliberately driving too close (causing fear or anxiety), shouting abuse and making obscene gestures or engaging in sexual harassment. The percentage of cyclists who reported tailgating by motorists increased between 2009 and 2015 (15.1% to 19.5%; P<0.001). The percentage of cyclists reporting other types of harassment did not change significantly. Conclusions Cyclists in Queensland continue to perceive harassment while cycling on the road. The amendment to the minimum passing distance rule in Queensland appears to be having a negative effect on one type of harassment but no significant effects on others. So what? Minimum passing distance rules may not be improving cyclists' perceptions of motorists' behaviours. Additional strategies are required to create a supportive environment for cycling.
Human Movement Detection and Identification Using Pyroelectric Infrared Sensors
Yun, Jaeseok; Lee, Sang-Shin
2014-01-01
Pyroelectric infrared (PIR) sensors are widely used as a presence trigger, but the analog output of PIR sensors depends on several other aspects, including the distance of the body from the PIR sensor, the direction and speed of movement, the body shape and gait. In this paper, we present an empirical study of human movement detection and identification using a set of PIR sensors. We have developed a data collection module having two pairs of orthogonally aligned PIR sensors and modified Fresnel lenses. We placed three PIR-based modules in a hallway for monitoring people: one module on the ceiling and two modules on opposite walls facing each other. We collected a data set from eight subjects walking under three varying conditions: two directions (back and forth), three distance intervals (close to one wall sensor, in the middle, close to the other wall sensor) and three speed levels (slow, moderate, fast). We used two types of feature sets: a raw data set, and a reduced feature set composed of the amplitude of and time to peaks, and the passage duration, extracted from each PIR sensor. We performed classification analysis with well-known machine learning algorithms, including instance-based learning and support vector machines. Our findings show that with the raw data set captured from a single PIR sensor of each of the three modules, we could achieve more than 92% accuracy in classifying the direction and speed of movement and the distance interval, and in identifying subjects. We could also achieve more than 94% accuracy in classifying the direction, speed and distance and identifying subjects using the reduced feature set extracted from the two pairs of PIR sensors of each of the three modules. PMID:24803195
An improved global wind resource estimate for integrated assessment models
Eurek, Kelly; Sullivan, Patrick; Gleason, Michael; ...
2017-11-25
This study summarizes initial steps to improving the robustness and accuracy of global renewable resource and techno-economic assessments for use in integrated assessment models. We outline a method to construct country-level wind resource supply curves, delineated by resource quality and other parameters. Using mesoscale reanalysis data, we generate estimates for wind quality, both terrestrial and offshore, across the globe. Because not all land or water area is suitable for development, appropriate database layers provide exclusions to reduce the total resource to its technical potential. We expand upon estimates from related studies by: using a globally consistent data source of uniquely detailed wind speed characterizations; assuming a non-constant coefficient of performance for adjusting power curves for altitude; categorizing the distance from resource sites to the electric power grid; and characterizing offshore exclusions on the basis of sea ice concentrations. The product, then, is technical potential by country, classified by resource quality as determined by net capacity factor. Additional classification dimensions are available, including distance to transmission networks for terrestrial wind, and distance to shore and water depth for offshore wind. We estimate a total global wind generation potential of 560 PWh for terrestrial wind, with 90% of the resource classified as low-to-mid quality, and 315 PWh for offshore wind, with 67% classified as mid-to-high quality. These estimates are based on 3.5 MW composite wind turbines with 90 m hub heights, 0.95 availability, 90% array efficiency, and 5 MW/km^2 deployment density in non-excluded areas. We compare the underlying technical assumptions and results with other global assessments.
Feature selection for the classification of traced neurons.
López-Cabrera, José D; Lorenzo-Ginori, Juan V
2018-06-01
The great availability of computational tools to calculate the properties of traced neurons has led to the existence of many descriptors that allow the automated classification of neurons from these reconstructions. This situation creates the need to eliminate irrelevant features and to select the most appropriate among them, in order to improve the quality of the classification obtained. The dataset used contains a total of 318 traced neurons, classified by human experts into 192 GABAergic interneurons and 126 pyramidal cells. The features were extracted by means of the L-measure software, which is one of the most widely used computational tools in neuroinformatics to quantify traced neurons. We review some current feature selection techniques such as filter, wrapper, embedded and ensemble methods. The stability of the feature selection methods was measured. For the ensemble methods, several aggregation methods based on different metrics were applied to combine the subsets obtained during the feature selection process. The subsets obtained by applying feature selection methods were evaluated using supervised classifiers, among which Random Forest, C4.5, SVM, Naïve Bayes, Knn, Decision Table and the Logistic classifier were used as classification algorithms. Feature selection methods of the filter, embedded, wrapper and ensemble types were compared, and the subsets returned were tested in classification tasks with different classification algorithms. The L-measure features EucDistanceSD, PathDistanceSD, Branch_pathlengthAve, Branch_pathlengthSD and EucDistanceAve were present in more than 60% of the selected subsets, which provides evidence of their importance in the classification of these neurons. Copyright © 2018 Elsevier B.V. All rights reserved.
Scale-dependent correlation of seabirds with schooling fish in a coastal ecosystem
Schneider, David C.; Piatt, John F.
1986-01-01
The distribution of piscivorous seabirds relative to schooling fish was investigated by repeated censusing of 2 intersecting transects in the Avalon Channel, which carries the Labrador Current southward along the east coast of Newfoundland. Murres (primarily common murres Uria aalge), Atlantic puffins Fratercula arctica, and schooling fish (primarily capelin Mallotus villosus) were highly aggregated at spatial scales ranging from 0.25 to 15 km. Patchiness of murres, puffins and schooling fish was scale-dependent, as indicated by significantly higher variance-to-mean ratios at large measurement distances than at the minimum distance, 0.25 km. Patch scale of puffins ranged from 2.5 to 15 km, of murres from 3 to 8.75 km, and of schooling fish from 1.25 to 15 km. Patch scale of birds and schooling fish was similar in 6 out of 9 comparisons. Correlation between seabirds and schooling fish was significant at the minimum measurement distance in 6 out of 12 comparisons. Correlation was scale-dependent, as indicated by significantly higher coefficients at large measurement distances than at the minimum distance. Tracking scale, as indicated by the maximum significant correlation between birds and schooling fish, ranged from 2 to 6 km. Our analysis showed that extended aggregations of seabirds are associated with extended aggregations of schooling fish and that correlation of these marine carnivores with their prey is scale-dependent.
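The scale-dependence diagnostic used here, a variance-to-mean ratio that grows as adjacent census bins are pooled into larger measurement distances, can be sketched on an invented patchy transect:

```python
def variance_to_mean(counts):
    """Index of dispersion for a list of census counts."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

def aggregate(counts, k):
    """Pool k adjacent census bins, mimicking a larger measurement distance."""
    return [sum(counts[i:i + k]) for i in range(0, len(counts) - k + 1, k)]

# Invented transect: two extended aggregations of birds along the track.
transect = [0, 0, 1, 0, 8, 9, 7, 8, 0, 1, 0, 0, 6, 7, 8, 7]
for k in (1, 2, 4):
    print(k, round(variance_to_mean(aggregate(transect, k)), 2))
# 1 3.51 / 2 6.93 / 4 13.69 -- the ratio rises with measurement distance
```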
G0-WISHART Distribution Based Classification from Polarimetric SAR Images
NASA Astrophysics Data System (ADS)
Hu, G. C.; Zhao, Q. H.
2017-09-01
Enormous scientific and technical developments have been carried out over recent decades to further improve remote sensing, particularly the Polarimetric Synthetic Aperture Radar (PolSAR) technique, so classification methods based on PolSAR images have received much attention from scholars and related departments around the world. The multilook polarimetric G0-Wishart model is a flexible model that describes homogeneous, heterogeneous and extremely heterogeneous regions in the image. Moreover, the polarimetric G0-Wishart distribution does not include the modified Bessel function of the second kind; it is a simple statistical distribution model with few parameters. To prove its feasibility, the method was tested on a fully polarized Synthetic Aperture Radar (SAR) image. First, multilook polarimetric SAR data processing and a speckle filter are applied to reduce the influence of speckle on the classification result. The image is then initially classified into sixteen classes by H/A/α decomposition, and the ICM algorithm is used to classify features based on the G0-Wishart distance. Qualitative and quantitative results show that the proposed method can classify polarimetric SAR data effectively and efficiently.
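The per-pixel decision rule underlying Wishart-type classification, assigning a sample covariance to the class center with the smallest Wishart distance, can be sketched in the real 2x2 case. The class centers below are invented, and the paper's G0-Wishart distance and the spatial ICM terms are more involved than this plain Wishart form.

```python
import math

def wishart_distance(T, S):
    """Wishart distance d = ln|S| + tr(S^-1 T) between a sample
    covariance T and a class-center covariance S (real 2x2 case)."""
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    inv = [[S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det, S[0][0] / det]]
    trace = sum(inv[i][0] * T[0][i] + inv[i][1] * T[1][i] for i in range(2))
    return math.log(det) + trace

def classify(T, centers):
    """Assign the sample to the nearest class center -- the decision an
    ICM pass would apply pixel-wise, here without spatial terms."""
    return min(centers, key=lambda name: wishart_distance(T, centers[name]))

centers = {"water": [[1.0, 0.2], [0.2, 0.5]],
           "urban": [[4.0, 1.0], [1.0, 3.0]]}
print(classify([[1.1, 0.3], [0.3, 0.6]], centers))  # water
```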
Code of Federal Regulations, 2010 CFR
2010-04-01
... distances of ammonium nitrate and blasting agents from explosives or blasting agents. 555.220 Section 555... ammonium nitrate and blasting agents from explosives or blasting agents. Table: Department of Defense... Not over Minimum separation distance of acceptor from donor when barricaded (ft.) Ammonium nitrate...
Code of Federal Regulations, 2011 CFR
2011-04-01
... distances of ammonium nitrate and blasting agents from explosives or blasting agents. 555.220 Section 555... ammonium nitrate and blasting agents from explosives or blasting agents. Table: Department of Defense... Not over Minimum separation distance of acceptor from donor when barricaded (ft.) Ammonium nitrate...
Code of Federal Regulations, 2012 CFR
2012-04-01
... distances of ammonium nitrate and blasting agents from explosives or blasting agents. 555.220 Section 555... ammonium nitrate and blasting agents from explosives or blasting agents. Table: Department of Defense... Not over Minimum separation distance of acceptor from donor when barricaded (ft.) Ammonium nitrate...
Code of Federal Regulations, 2013 CFR
2013-04-01
... distances of ammonium nitrate and blasting agents from explosives or blasting agents. 555.220 Section 555... ammonium nitrate and blasting agents from explosives or blasting agents. Table: Department of Defense... Not over Minimum separation distance of acceptor from donor when barricaded (ft.) Ammonium nitrate...
Code of Federal Regulations, 2014 CFR
2014-04-01
... distances of ammonium nitrate and blasting agents from explosives or blasting agents. 555.220 Section 555... ammonium nitrate and blasting agents from explosives or blasting agents. Table: Department of Defense... Not over Minimum separation distance of acceptor from donor when barricaded (ft.) Ammonium nitrate...
Zheng, Wenjing; Balzer, Laura; van der Laan, Mark; Petersen, Maya
2018-01-30
Binary classification problems are ubiquitous in health and social sciences. In many cases, one wishes to balance two competing optimality considerations for a binary classifier. For instance, in resource-limited settings, a human immunodeficiency virus prevention program based on offering pre-exposure prophylaxis (PrEP) to select high-risk individuals must balance the sensitivity of the binary classifier in detecting future seroconverters (and hence offering them PrEP regimens) with the total number of PrEP regimens that is financially and logistically feasible for the program. In this article, we consider a general class of constrained binary classification problems wherein the objective function and the constraint are both monotonic with respect to a threshold. These include the minimization of the rate of positive predictions subject to a minimum sensitivity, the maximization of sensitivity subject to a maximum rate of positive predictions, and the Neyman-Pearson paradigm, which minimizes the type II error subject to an upper bound on the type I error. We propose an ensemble approach to these binary classification problems based on the Super Learner methodology. This approach linearly combines a user-supplied library of scoring algorithms, with combination weights and a discriminating threshold chosen to minimize the constrained optimality criterion. We then illustrate the application of the proposed classifier to develop an individualized PrEP targeting strategy in a resource-limited setting, with the goal of minimizing the number of PrEP offerings while achieving a minimum required sensitivity. This proof-of-concept data analysis uses baseline data from the ongoing Sustainable East Africa Research in Community Health study. Copyright © 2017 John Wiley & Sons, Ltd.
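For the monotone constrained problems described, the threshold choice itself is simple once scores are fixed: scan thresholds from high to low and stop at the first one meeting the sensitivity floor, which minimizes the rate of positive predictions. A sketch with invented scores follows; the paper's contribution, learning the Super Learner combination weights jointly with the threshold, is not shown.

```python
def choose_threshold(scores, labels, min_sensitivity):
    """Highest threshold whose sensitivity still meets the floor. Both
    sensitivity and the positive-prediction rate grow as the threshold
    drops, so the first feasible threshold minimizes positives."""
    n_pos = sum(labels)
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        if tp / n_pos >= min_sensitivity:
            return t
    return min(scores)

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 1, 0]
print(choose_threshold(scores, labels, 0.75))  # 0.6 (3 of 4 positives caught)
```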
Desert plains classification based on Geomorphometrical parameters (Case study: Aghda, Yazd)
NASA Astrophysics Data System (ADS)
Tazeh, mahdi; Kalantari, Saeideh
2013-04-01
This research focuses on plains. Several methods and classifications have been presented for plain classification. One natural-resource-based classification, widely used in Iran, divides plains into three types: Erosional Pediment, Denudational Pediment and Aggradational Piedmont; qualitative and quantitative factors are used to differentiate them from each other. In this study, geomorphometrical parameters effective in differentiating landforms were applied to plains. Geomorphometrical parameters are calculable and can be extracted using mathematical equations and the corresponding relations on a digital elevation model. The geomorphometrical parameters used in this study included Percent of Slope, Plan Curvature, Profile Curvature, Minimum Curvature, Maximum Curvature, Cross-sectional Curvature, Longitudinal Curvature and Gaussian Curvature. The results indicated that the most important geomorphometrical parameters for plain and desert classification include Percent of Slope, Minimum Curvature, Profile Curvature and Longitudinal Curvature. Key Words: Plain, Geomorphometry, Classification, Biophysical, Yazd Khezarabad.
A minimum distance estimation approach to the two-sample location-scale problem.
Zhang, Zhiyi; Yu, Qiqing
2002-09-01
As reported by Kalbfleisch and Prentice (1980), the generalized Wilcoxon test fails to detect a difference between the lifetime distributions of male and female mice that died of thymic leukemia. This failure results from the test's inability to detect a distributional difference when a location shift and a scale change exist simultaneously. In this article, we propose an estimator based on the minimization of an average distance between two independent quantile processes under a location-scale model. Large-sample inference on the proposed estimator, with possible right-censorship, is discussed. The mouse leukemia data are used as an example for illustration purposes.
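The idea of matching two quantile processes under a location-scale model can be sketched as a least-squares fit of one empirical quantile function on the other. The grid, quantile definition and data below are illustrative; the paper's estimator, and its extension to right-censored data, is more general.

```python
def quantile(xs, p):
    """Simple empirical quantile of an unsorted sample."""
    s = sorted(xs)
    return s[min(int(p * len(s)), len(s) - 1)]

def location_scale_fit(x, y, grid=None):
    """Fit Q_Y(p) ~ mu + sigma * Q_X(p) by least squares over a grid of
    probabilities -- a crude finite-sample version of minimizing an
    average distance between the two quantile processes."""
    grid = grid or [i / 20 for i in range(1, 20)]
    qx = [quantile(x, p) for p in grid]
    qy = [quantile(y, p) for p in grid]
    n = len(grid)
    mx, my = sum(qx) / n, sum(qy) / n
    sigma = (sum((a - mx) * (b - my) for a, b in zip(qx, qy))
             / sum((a - mx) ** 2 for a in qx))
    mu = my - sigma * mx
    return mu, sigma

# y is x shifted by 5 and scaled by 2; the fit recovers (mu, sigma) = (5, 2).
x = [0.1, 0.5, 1.0, 1.8, 2.2, 3.0, 3.9, 4.4, 5.0, 6.1]
y = [5 + 2 * v for v in x]
mu, sigma = location_scale_fit(x, y)
print(round(mu, 6), round(sigma, 6))  # 5.0 2.0
```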
A fuzzy automated object classification by infrared laser camera
NASA Astrophysics Data System (ADS)
Kanazawa, Seigo; Taniguchi, Kazuhiko; Asari, Kazunari; Kuramoto, Kei; Kobashi, Syoji; Hata, Yutaka
2011-06-01
Home security at night is very important, and a system that watches a person's movements is useful for security. This paper describes a system that classifies adults, children and other objects from the distance distribution measured by an infrared laser camera. The camera radiates near-infrared waves, receives the reflected ones, and converts the time of flight into a distance distribution. Our method consists of four steps. First, we perform background subtraction and noise rejection on the distance distribution. Second, we apply fuzzy clustering to the distance distribution, forming several clusters. Third, we extract features such as the height, thickness, aspect ratio and area ratio of each cluster. Then, we build fuzzy if-then rules from knowledge of adults, children and other objects so as to classify each cluster as adult, child or other object; here, we constructed a fuzzy membership function for each feature. Finally, we assign each cluster to the class with the highest fuzzy degree among adult, child and other object. In our experiment, we set up the camera in a room and tested three cases. The method successfully classified them in real-time processing.
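The final step, classifying each cluster by its highest fuzzy degree, can be sketched with a single height feature. The triangular membership shapes and breakpoints are invented; the paper combines several features (height, thickness, aspect ratio, area ratio) via if-then rules.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function: 0 outside (a, c), 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_cluster(height_m):
    """Assign a cluster to the class with the highest fuzzy degree
    (illustrative breakpoints, single feature only)."""
    degrees = {
        "child": tri(height_m, 0.5, 1.0, 1.5),
        "adult": tri(height_m, 1.3, 1.7, 2.1),
        "other": tri(height_m, -0.5, 0.0, 0.7),
    }
    return max(degrees, key=degrees.get)

print(classify_cluster(1.75))  # adult
print(classify_cluster(0.95))  # child
```

With several features, the per-feature degrees would be combined (e.g. by a minimum or product) before taking the maximum over classes.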
Motion data classification on the basis of dynamic time warping with a cloud point distance measure
NASA Astrophysics Data System (ADS)
Switonski, Adam; Josinski, Henryk; Zghidi, Hafedh; Wojciechowski, Konrad
2016-06-01
The paper deals with the problem of classification of model-free motion data. A nearest neighbors classifier based on comparison by the Dynamic Time Warping transform with a cloud point distance measure is proposed. The classification utilizes both specific gait features, reflected by the movements of successive skeleton joints, and anthropometric data. To validate the proposed approach, the human gait identification problem is considered. A motion capture database containing data of 30 different humans, collected in the Human Motion Laboratory of the Polish-Japanese Academy of Information Technology, is used. The achieved results are satisfactory: the obtained accuracy of human recognition exceeds 90%. Moreover, the applied cloud point distance measure does not depend on the calibration process of the motion capture system, which results in reliable validation.
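The core of such a classifier can be sketched as classic dynamic time warping over sequences of poses. As a stand-in for the paper's cloud point measure, the per-frame distance below is simply the mean Euclidean distance over corresponding joints; this is an illustrative assumption, not the authors' exact metric.

```python
import math

def pose_distance(a, b):
    """Distance between two poses, each a list of 3-D joint positions.
    Illustrative stand-in for the cloud point measure: mean Euclidean
    distance over corresponding joints."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def dtw(seq_a, seq_b, dist=pose_distance):
    """Dynamic time warping cost between two motion sequences."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            # extend the cheapest of the three admissible warping moves
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]
```

A nearest neighbors classifier then assigns a query gait to the training subject whose sequence has the smallest DTW cost.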
Evaluation of Semi-supervised Learning for Classification of Protein Crystallization Imagery.
Sigdel, Madhav; Dinç, İmren; Dinç, Semih; Sigdel, Madhu S; Pusey, Marc L; Aygün, Ramazan S
2014-03-01
In this paper, we investigate the performance of two wrapper methods for semi-supervised learning algorithms for classification of protein crystallization images with limited labeled images. First, we evaluate the performance of a semi-supervised approach using self-training with naïve Bayes (NB) and sequential minimal optimization (SMO) as the base classifiers. The confidence values returned by these classifiers are used to select high-confidence predictions for self-training. Second, we analyze the performance of Yet Another Two Stage Idea (YATSI) semi-supervised learning using NB, SMO, multilayer perceptron (MLP), J48 and random forest (RF) classifiers. These results are compared with basic supervised learning using the same training sets. We perform our experiments on a dataset consisting of 2250 protein crystallization images for different proportions of training and test data. Our results indicate that NB and SMO using both the self-training and YATSI semi-supervised approaches improve accuracy with respect to supervised learning. On the other hand, MLP, J48 and RF perform better with basic supervised learning. Overall, the random forest classifier yields the best accuracy with supervised learning for our dataset.
47 CFR 73.207 - Minimum distance separation between stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 200 kHz below the channel under consideration), the second (400 kHz above and below), the third (600 k... Separation Requirements in Kilometers (miles) Relation Co-channel 200 kHz 400/600 kHz 10.6/10.8 MHz A to A... kW ERP and 100 meters antenna HAAT (or equivalent lower ERP and higher antenna HAAT based on a class...
47 CFR 73.207 - Minimum distance separation between stations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... kW ERP and 100 meters antenna HAAT (or equivalent lower ERP and higher antenna HAAT based on a class... which have been notified internationally as Class A are limited to a maximum of 3.0 kW ERP at 100 meters... internationally as Class AA are limited to a maximum of 6.0 kW ERP at 100 meters HAAT, or the equivalent; (iii) U...
46 CFR 42.20-70 - Minimum bow height.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Freeboards § 42.20-70 Minimum bow height. (a) The bow height defined as the vertical distance at the forward... 46 Shipping 2 2012-10-01 2012-10-01 false Minimum bow height. 42.20-70 Section 42.20-70 Shipping... less than 0.68. (b) Where the bow height required in paragraph (a) of this section is obtained by sheer...
46 CFR 42.20-70 - Minimum bow height.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Freeboards § 42.20-70 Minimum bow height. (a) The bow height defined as the vertical distance at the forward... 46 Shipping 2 2011-10-01 2011-10-01 false Minimum bow height. 42.20-70 Section 42.20-70 Shipping... less than 0.68. (b) Where the bow height required in paragraph (a) of this section is obtained by sheer...
46 CFR 42.20-70 - Minimum bow height.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Freeboards § 42.20-70 Minimum bow height. (a) The bow height defined as the vertical distance at the forward... 46 Shipping 2 2014-10-01 2014-10-01 false Minimum bow height. 42.20-70 Section 42.20-70 Shipping... less than 0.68. (b) Where the bow height required in paragraph (a) of this section is obtained by sheer...
46 CFR 42.20-70 - Minimum bow height.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Freeboards § 42.20-70 Minimum bow height. (a) The bow height defined as the vertical distance at the forward... 46 Shipping 2 2013-10-01 2013-10-01 false Minimum bow height. 42.20-70 Section 42.20-70 Shipping... less than 0.68. (b) Where the bow height required in paragraph (a) of this section is obtained by sheer...
46 CFR 42.20-70 - Minimum bow height.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Freeboards § 42.20-70 Minimum bow height. (a) The bow height defined as the vertical distance at the forward... 46 Shipping 2 2010-10-01 2010-10-01 false Minimum bow height. 42.20-70 Section 42.20-70 Shipping... less than 0.68. (b) Where the bow height required in paragraph (a) of this section is obtained by sheer...
NASA Technical Reports Server (NTRS)
Stewart, Elwood C.; Druding, Frank; Nishiura, Togo
1959-01-01
A study has been made to determine the relative importance of those factors which place an inherent limitation on the minimum obtainable miss distance for a beam-rider navigation system operating in the presence of glint noise and target evasive maneuver. Target and missile motions are assumed to be coplanar. The factors considered are the missile natural frequencies and damping ratios, missile steady-state acceleration capabilities, target evasive maneuver characteristics, and angular scintillation noise characteristics.
NASA Astrophysics Data System (ADS)
Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi
This paper proposes a classification method that uses Bayesian analysis to classify time series data from an agent-based simulation of the international emissions trading market, and compares it with a discrete Fourier transform analysis. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods revealed the following results: (1) the classification methods express the mapped time series data as distances, which are easier to understand and reason about than the raw time series; (2) the methods can analyze uncertain time series data, including stationary and non-stationary processes, using distances obtained via agent-based simulation; and (3) the Bayesian method can distinguish a 1% difference in the agents' emission reduction targets.
Minimum requirements for adequate nighttime conspicuity of highway signs
DOT National Transportation Integrated Search
1988-02-01
A laboratory and field study were conducted to assess the minimum luminance levels of signs to ensure that they will be detected and identified at adequate distances under nighttime driving conditions. A total of 30 subjects participated in the field...
NASA Astrophysics Data System (ADS)
Li, Q. S.; Wong, F. K. K.; Fung, T.
2017-08-01
A lightweight unmanned aerial vehicle (UAV) loaded with novel sensors offers a low-cost, minimum-risk solution for data acquisition in complex environments. This study assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for classifying 13 species in a wetland area of Hong Kong. Multiple feature reduction methods and different classifiers were compared. The best result was obtained when transformed components from minimum noise fraction (MNF) and the DSM were combined in a support vector machine (SVM) classifier. Wavelength regions at the chlorophyll absorption green peak, red, red edge and the oxygen absorption feature in the near infrared were identified as useful for species discrimination. In addition, the DSM input reduces overestimation of low plant species and misclassification due to shadow effects and inter-species morphological variation. This study establishes a framework for quick survey and update of wetland environments using a UAV system. The findings indicate that UAV-borne hyperspectral data and derived tree height information provide a solid foundation for further research such as biological invasion monitoring and bio-parameter modelling in wetlands.
ERIC Educational Resources Information Center
García-Floriano, Andrés; Ferreira-Santiago, Angel; Yáñez-Márquez, Cornelio; Camacho-Nieto, Oscar; Aldape-Pérez, Mario; Villuendas-Rey, Yenny
2017-01-01
Social networking potentially offers improved distance learning environments by enabling the exchange of resources between learners. The existence of properly classified content results in an enhanced distance learning experience in which appropriate materials can be retrieved efficiently; however, for this to happen, metadata needs to be present.…
Stackable differential mobility analyzer for aerosol measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Meng-Dawn; Chen, Da-Ren
2007-05-08
A multi-stage differential mobility analyzer (MDMA) for aerosol measurements includes a first electrode or grid including at least one inlet or injection slit for receiving an aerosol including charged particles for analysis. A second electrode or grid is spaced apart from the first electrode. The second electrode has at least one sampling outlet disposed at a plurality of different distances along its length. A volume between the first and the second electrode or grid, between the inlet or injection slit and a distal one of the plurality of sampling outlets, forms a classifying region; the first and second electrodes are charged to suitable potentials to create an electric field within the classifying region. At least one inlet or injection slit in the second electrode receives a sheath gas flow into an upstream end of the classifying region, wherein each sampling outlet functions as an independent DMA stage and classifies different size ranges of charged particles based on electric mobility simultaneously.
[Fast discrimination of edible vegetable oil based on Raman spectroscopy].
Zhou, Xiu-Jun; Dai, Lian-Kui; Li, Sheng
2012-07-01
A novel method to rapidly discriminate edible vegetable oils by Raman spectroscopy is presented. The training set is composed of different edible vegetable oils of known classes. Based on their original Raman spectra, baseline correction and normalization were applied to obtain standard spectra. Two characteristic peaks describing the degree of unsaturation of vegetable oil were selected as feature vectors; then the centers of all classes were calculated. For an edible vegetable oil of unknown class, the same pretreatment and feature extraction methods were used. The Euclidean distances between the feature vector of the unknown sample and the center of each class were calculated, and the class of the unknown sample was determined by the minimum distance. For 43 edible vegetable oil samples from seven different classes, experimental results show that the clustering effect of each class was more obvious and the between-class distance was much larger with the new feature extraction method than with PCA. The above classification model can be applied to discriminate unknown edible vegetable oils rapidly and accurately.
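The training and decision steps described above amount to a nearest-class-center (minimum distance) rule. A minimal sketch follows; the two-element feature vectors and class names are made up for illustration, not actual Raman peak intensities.

```python
import math

def class_centers(samples, labels):
    """Mean feature vector per class from labeled training samples."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def classify(x, centers):
    """Assign x to the class whose center is nearest in Euclidean distance."""
    return min(centers, key=lambda y: math.dist(x, centers[y]))
```

For the paper's task, `samples` would hold the two characteristic peak features extracted from baseline-corrected, normalized spectra.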
Unsupervised image matching based on manifold alignment.
Pei, Yuru; Huang, Fengchun; Shi, Fuhao; Zha, Hongbin
2012-08-01
This paper addresses the problem of automatic matching between two image sets with similar intrinsic structures and different appearances, especially when there is no prior correspondence. An unsupervised manifold alignment framework is proposed to establish correspondence between data sets by a mapping function in the mutual embedding space. We introduce a local similarity metric based on parameterized distance curves to represent the connection of one point with the rest of the manifold. A small set of valid feature pairs can be found without manual interaction by matching the distance curve of one manifold with the curve cluster of the other manifold. To avoid potential confusion in image matching, we propose an extended affine transformation to solve the nonrigid alignment in the embedding space. Comparatively tight alignments and structure preservation can be obtained simultaneously. The point pairs with the minimum distance after alignment are taken as the matches. We apply manifold alignment to image set matching problems. The correspondence between image sets of different poses, illuminations, and identities can be established effectively by our approach.
Özdemir, Merve Erkınay; Telatar, Ziya; Eroğul, Osman; Tunca, Yusuf
2018-05-01
Dysmorphic syndromes present different facial malformations. These malformations are significant for early diagnosis of dysmorphic syndromes and contain distinctive information for face recognition. In this study we define the characteristic features of each syndrome by considering facial malformations and automatically classify Fragile X, Hurler, Prader-Willi, Down and Wolf-Hirschhorn syndromes and healthy controls. Reference points are marked on the face images, and ratios between the distances of these points are taken as features. We propose a neural-network-based hierarchical decision tree structure to classify the syndrome types. We also implement k-nearest neighbor (k-NN) and artificial neural network (ANN) classifiers to compare their classification accuracy with our hierarchical decision tree. The classification accuracy is 50%, 73% and 86.7% with the k-NN, ANN and hierarchical decision tree methods, respectively. The same images were then shown to a clinical expert, who achieved a recognition rate of 46.7%. We develop an efficient system to recognize different syndrome types automatically from simple, non-invasive image data, independent of the patient's age, sex and race, at high accuracy. The promising results indicate that our method can be used by clinical experts for pre-diagnosis of dysmorphic syndromes.
Progressive addition lenses--measurements and ratings.
Sheedy, Jim; Hardy, Raymond F; Hayes, John R
2006-01-01
This study is a follow-up to a previous study in which the optics of several progressive addition lens (PAL) designs were measured and analyzed. The objective was to provide information about various PAL designs to enable eye care practitioners to select designs based on the particular viewing requirements of the patient. The optical properties of 12 lenses of the same power for each of 23 different PAL designs were measured with a Rotlex Class Plus lens analyzer. Lenses were ordered through optical laboratories and specified to be plano with a +2.00 diopter add. Measurements were normalized to plano at the manufacturer-assigned location for the distance power to eliminate laboratory tolerance errors. The magnitude of unwanted astigmatism and the widths and areas of the distance, intermediate, and near viewing zones were calculated from the measured data according to the same criteria used in the previous study. The optical characteristics of the different PAL designs were significantly different from one another. The differences were significant in terms of the sizes and widths of the viewing zones, the amount of unwanted astigmatism, and the minimum fitting height. Ratings of the distance, intermediate, and near viewing areas were calculated for each PAL design based on the widths and sizes of those zones. Ratings for unwanted astigmatism and recommended minimum fitting heights were also determined. Ratings based on combinations of viewing zone ratings are also reported. The ratings are intended to be used to select a PAL design that matches the particular visual needs of the patient and to evaluate the success and performance of currently worn PALs. Reasoning and task analyses suggest that these differences can be used to select a PAL design to meet the individual visual needs of the patient; clinical trial studies are required to test this hypothesis.
Path planning for mobile robot using the novel repulsive force algorithm
NASA Astrophysics Data System (ADS)
Sun, Siyue; Yin, Guoqiang; Li, Xueping
2018-01-01
A new type of repulsive force algorithm is proposed in this paper to solve the local minimum and unreachable-target problems of the classic Artificial Potential Field (APF) method. A Gaussian function of the distance between the robot and the target is added to the traditional repulsive force, solving the goal-unreachable problem that arises when an obstacle lies near the goal; a variable coefficient is added to the repulsive force component to rescale it, which solves the local minimum problem when the robot, the obstacle and the target point are collinear. The effectiveness of the algorithm is verified by MATLAB simulation and on an actual mobile robot platform.
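The abstract does not give the exact weighting, but the idea can be sketched as the classic APF repulsive term scaled by a Gaussian of the robot-goal distance, so the repulsion vanishes as the robot reaches the goal. The constants and the precise form below are illustrative assumptions, not the authors' formula.

```python
import math

def repulsive_force(robot, obstacle, goal, k_rep=1.0, d0=2.0, sigma=1.0):
    """Modified APF repulsive force on a 2-D robot (illustrative sketch).
    The classic term k_rep * (1/d - 1/d0) / d^2 is active inside the
    influence radius d0 and is scaled by a Gaussian-derived factor of the
    robot-goal distance so it fades to zero at the goal."""
    d_obs = math.dist(robot, obstacle)
    if d_obs >= d0:
        return (0.0, 0.0)  # obstacle outside its influence radius
    d_goal = math.dist(robot, goal)
    mag = k_rep * (1.0 / d_obs - 1.0 / d0) / d_obs**2
    mag *= 1.0 - math.exp(-d_goal**2 / (2 * sigma**2))  # vanishes at the goal
    # unit vector pointing from the obstacle toward the robot
    ux, uy = (robot[0] - obstacle[0]) / d_obs, (robot[1] - obstacle[1]) / d_obs
    return (mag * ux, mag * uy)
```

With this scaling, a goal sitting next to an obstacle still has zero net repulsion at the goal itself, which is the goal-unreachable fix the abstract describes.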
Handwritten character recognition using background analysis
NASA Astrophysics Data System (ADS)
Tascini, Guido; Puliti, Paolo; Zingaretti, Primo
1993-04-01
The paper describes a low-cost handwritten character recognizer. It consists of three modules: the 'acquisition' module, the 'binarization' module, and the 'core' module. The core module can be logically partitioned into six steps: character dilation, character circumscription, region and 'profile' analysis, 'cut' analysis, decision tree descent, and result validation. First, it reduces the resolution of the binarized regions and detects the minimum rectangle (MR) enclosing the character; the MR partitions the background into regions that surround the character or are enclosed by it, and allows features such as 'profiles' and 'cuts' to be defined: a 'profile' is the set of vertical or horizontal minimum distances between a side of the MR and the character itself; a 'cut' is a vertical or horizontal image segment delimited by the MR. Then, the core module classifies the character by descending the decision tree on the basis of the analysis of the regions around the character, in particular of the 'profiles' and 'cuts', without using context information. Finally, it recognizes the character or reactivates the core module after analyzing the validation test results. The recognizer is largely insensitive to character discontinuity and is able to recognize Arabic numerals and uppercase English letters. The recognition rate for a 32 x 32 pixel character is about 97% after the first iteration, and over 98% after the second iteration.
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographic scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distance is used to construct a minimum variance unbiased estimator of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
7 CFR 1703.133 - Maximum and minimum amounts.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 11 2010-01-01 2010-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...
7 CFR 1703.133 - Maximum and minimum amounts.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 11 2011-01-01 2011-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...
7 CFR 1703.133 - Maximum and minimum amounts.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 11 2013-01-01 2013-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...
7 CFR 1703.133 - Maximum and minimum amounts.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 11 2012-01-01 2012-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...
7 CFR 1703.133 - Maximum and minimum amounts.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 11 2014-01-01 2014-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...
Abrams, Thad E; Lund, Brian C; Alexander, Bruce; Bernardy, Nancy C; Friedman, Matthew J
2015-01-01
Posttraumatic stress disorder (PTSD) is a high-priority treatment area for the Veterans Health Administration (VHA), and dissemination patterns of innovative, efficacious therapies can inform areas for potential improvement of diffusion efforts and quality prescribing. In this study, we replicated a prior examination of the period prevalence of prazosin use as a function of distance from Puget Sound, Washington, where prazosin was first tested as an effective treatment for PTSD and where prazosin use was previously shown to be much greater than in other parts of the United States. We tested the following three hypotheses related to prazosin geographic diffusion: (1) a positive geographical correlation exists between the distance from Puget Sound and the proportion of users treated according to a guideline-recommended minimum therapeutic target dose (≥6 mg/d), (2) an inverse geographic correlation exists between prazosin and benzodiazepine use, and (3) no geographical correlation exists between prazosin use and serotonin reuptake inhibitor/serotonin norepinephrine reuptake inhibitor (SSRI/SNRI) use. Among a national sample of veterans with PTSD, overall prazosin utilization increased from 5.5 to 14.8% from 2006 to 2012. During this time period, rates at the Puget Sound VHA location declined from 34.4 to 29.9%, whereas utilization rates at locations a minimum of 2,500 miles away increased from 3.0 to 12.8%. Rates of minimum target dosing fell from 42.6 to 34.6% at the Puget Sound location. In contrast, at distances of at least 2,500 miles from Puget Sound, minimum threshold dosing rates remained stable (range, 18.6 to 17.7%). No discernible association was demonstrated between SSRI/SNRI or benzodiazepine utilization and the geographic distance from Puget Sound. Minimal threshold dosing of prazosin correlated positively with increased diffusion of prazosin use, but there was still a distance diffusion gradient.
Although prazosin adoption has improved, geographic differences persist in both prescribing rates and minimum target dosing. Importantly, these regional disparities appear to be limited to prazosin prescribing and are not meaningfully correlated with SSRI/SNRI and benzodiazepine use as indicators of PTSD prescribing quality.
Automatic detection and classification of obstacles with applications in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Ponomaryov, Volodymyr I.; Rosas-Miranda, Dario I.
2016-04-01
A hardware implementation of automatic detection and classification of objects that can represent obstacles for an autonomous mobile robot, using stereo vision algorithms, is presented. We propose and evaluate a new method to detect and classify objects for a mobile robot in outdoor conditions. The method is divided into two parts: the first is an object detection step based on the distance from the objects to the camera and a BLOB analysis; the second is a classification step based on visual primitives and an SVM classifier. The proposed method runs on a GPU in order to reduce processing time. This is performed with hardware based on multi-core processors and a GPU platform, using an NVIDIA GeForce GT640 graphics card and MATLAB on a PC running Windows 10.
Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang
2016-01-01
For ensemble learning, how to select and combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. Random vector functional link networks (RVFL) without direct input-to-output links are suitable base classifiers for ensemble systems because of their fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. When using ARPSO to select the optimal base RVFL, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFL are pruned, and thus a more compact ensemble of RVFL is obtained. Moreover, theoretical analysis and justification of how to prune the base classifiers on classification problems is presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy while reducing the complexity of the ensemble system. PMID:27835638
Ho, Jeff C; Russel, Kory C; Davis, Jennifer
2014-03-01
Support is growing for the incorporation of fetching time and/or distance considerations in the definition of access to improved water supply used for global monitoring. Current efforts typically rely on self-reported distance and/or travel time data that have been shown to be unreliable. To date, however, there has been no head-to-head comparison of such indicators with other possible distance/time metrics. This study provides such a comparison. We examine the association between both straight-line distance and self-reported one-way travel time with measured route distances to water sources for 1,103 households in Nampula province, Mozambique. We find straight-line, or Euclidean, distance to be a good proxy for route distance (R² = 0.98), while self-reported travel time is a poor proxy (R² = 0.12). We also apply a variety of time- and distance-based indicators proposed in the literature to our sample data, finding that the share of households classified as having versus lacking access would differ by more than 70 percentage points depending on the particular indicator employed. This work highlights the importance of the ongoing debate regarding valid, reliable, and feasible strategies for monitoring progress in the provision of improved water supply services.
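The R² comparison above can be reproduced as the squared Pearson correlation between a candidate proxy and the measured route distances. A minimal sketch, with made-up distances rather than the study's survey data:

```python
def r_squared(x, y):
    """R^2 of a simple linear regression of y on x, i.e. the squared
    Pearson correlation between proxy x and measured distances y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)
```

Scoring straight-line distances and self-reported times against the same measured routes with this function is the kind of head-to-head comparison the study reports.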
High Performance Automatic Character Skinning Based on Projection Distance
NASA Astrophysics Data System (ADS)
Li, Jun; Lin, Feng; Liu, Xiuling; Wang, Hongrui
2018-03-01
Skeleton-driven deformation methods have been commonly used in character deformation. The process of painting skin weights for character deformation is a laborious task requiring manual tweaking. We present a novel method to calculate skinning weights automatically from a 3D human geometric model and its corresponding skeleton. The method first groups each mesh vertex of the 3D human model with a skeleton bone by the minimum distance from the mesh vertex to each bone. Second, it calculates each vertex's weights for the adjacent bones from the distance of the vertex's projection point to the bone joints. Our method's output can be applied not only to any kind of skeleton-driven deformation, but also to motion capture driven (mocap-driven) deformation. Experimental results show that our method not only has strong generality and robustness, but also high performance.
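The two steps can be sketched as a point-to-segment distance query followed by a split of the weight between a bone's end joints. The split by projection parameter below is an illustrative reading of the projection-distance rule; the paper's exact weighting over multiple adjacent bones is not given in the abstract.

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to bone segment a-b, plus the projection
    parameter t in [0, 1] along the bone."""
    ab = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    ap = (p[0] - a[0], p[1] - a[1], p[2] - a[2])
    denom = sum(c * c for c in ab) or 1e-12
    t = max(0.0, min(1.0, sum(c * d for c, d in zip(ap, ab)) / denom))
    closest = (a[0] + t * ab[0], a[1] + t * ab[1], a[2] + t * ab[2])
    return math.dist(p, closest), t

def skin_weights(vertex, bones):
    """Group the vertex with the nearest bone, then split the weight
    between that bone's two end joints by the projection parameter t."""
    best = min(bones, key=lambda seg: point_segment_distance(vertex, *seg)[0])
    _, t = point_segment_distance(vertex, *best)
    return best, (1.0 - t, t)  # weights for the bone's start/end joints
```

Run per vertex, this yields a full weight map without any manual painting.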
Comparing minimum spanning trees of the Italian stock market using returns and volumes
NASA Astrophysics Data System (ADS)
Coletti, Paolo
2016-12-01
We have built the network of the top 100 Italian quoted companies in the decade 2001-2011 using four different methods, comparing the resulting minimum spanning trees across methods and industry sectors. Our starting method is based on Pearson's correlation of log-returns, used by several other authors in the last decade. The second is based on the correlation of symbolized log-returns, the third on that of log-returns and traded money, and the fourth uses a combination of log-returns with traded money. We show that some sectors correspond to the network's clusters while others are scattered, in particular the trading and apparel sectors. We analyze different graph measures for the four methods, showing that the introduction of volumes induces larger distances and more homogeneous trees without big clusters.
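The correlation-based MST construction common to this literature can be sketched with Prim's algorithm over the standard mapping from correlation to distance, d_ij = sqrt(2(1 - rho_ij)); the tiny correlation matrix below is made up for illustration.

```python
import math

def corr_to_distance(rho):
    """Standard mapping from a correlation to a metric distance."""
    return math.sqrt(2.0 * (1.0 - rho))

def minimum_spanning_tree(corr):
    """Prim's algorithm on d_ij = sqrt(2(1 - rho_ij)).
    `corr` is a symmetric n x n correlation matrix; returns MST edges
    as (i, j, distance) tuples in the order they are added."""
    n = len(corr)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None  # cheapest edge leaving the current tree
        for i in in_tree:
            for j in range(n):
                if j not in in_tree:
                    d = corr_to_distance(corr[i][j])
                    if best is None or d < best[0]:
                        best = (d, i, j)
        edges.append((best[1], best[2], best[0]))
        in_tree.add(best[2])
    return edges
```

Clusters of strongly correlated companies (e.g. within one industry sector) then show up as tightly connected branches of the tree.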
Liu, Zhenqiu; Hsiao, William; Cantarel, Brandi L; Drábek, Elliott Franco; Fraser-Liggett, Claire
2011-12-01
Direct sequencing of microbes in human ecosystems (the human microbiome) has complemented single genome cultivation and sequencing to understand and explore the impact of commensal microbes on human health. As sequencing technologies improve and costs decline, the sophistication of data has outgrown available computational methods. While several existing machine learning methods have been adapted for analyzing microbiome data recently, there is not yet an efficient and dedicated algorithm available for multiclass classification of human microbiota. By combining instance-based and model-based learning, we propose a novel sparse distance-based learning method for simultaneous class prediction and feature (variable or taxa, which is used interchangeably) selection from multiple treatment populations on the basis of 16S rRNA sequence count data. Our proposed method simultaneously minimizes the intraclass distance and maximizes the interclass distance with many fewer estimated parameters than other methods. It is very efficient for problems with small sample sizes and unbalanced classes, which are common in metagenomic studies. We implemented this method in a MATLAB toolbox called MetaDistance. We also propose several approaches for data normalization and variance stabilization transformation in MetaDistance. We validate this method on several real and simulated 16S rRNA datasets to show that it outperforms existing methods for classifying metagenomic data. This article is the first to address simultaneous multifeature selection and class prediction with metagenomic count data. The MATLAB toolbox is freely available online at http://metadistance.igs.umaryland.edu/. zliu@umm.edu Supplementary data are available at Bioinformatics online.
Metabolic power demands of rugby league match play.
Kempton, Tom; Sirotic, Anita Claire; Rampinini, Ermanno; Coutts, Aaron James
2015-01-01
To describe the metabolic demands of rugby league match play for positional groups and compare match distances obtained from high-speed-running classifications with those derived from high metabolic power. Global positioning system (GPS) data were collected from 25 players from a team competing in the National Rugby League competition over 39 matches. Players were classified into positional groups (adjustables, outside backs, hit-up forwards, and wide-running forwards). The GPS devices provided instantaneous raw velocity data at 5 Hz, which were exported to a customized spreadsheet. The spreadsheet provided calculations for speed-based distances (eg, total distance; high-speed running, >14.4 km/h; and very-high-speed running, >18.1 km/h) and metabolic-power variables (eg, energy expenditure; average metabolic power; and high-power distance, >20 W/kg). The data show that speed-based distances and metabolic power varied between positional groups, although this was largely related to differences in time spent on field. The distance covered at high running speed was lower than that obtained from high-power thresholds for all positional groups; however, the difference between the 2 methods was greatest for hit-up forwards and adjustables. Positional differences existed for all metabolic parameters, although these are at least partially related to time spent on the field. Higher-speed running may underestimate the demands of match play when compared with high-power distance-although the degree of difference between the measures varied by position. The analysis of metabolic power may complement traditional speed-based classifications and improve our understanding of the demands of rugby league match play.
Identity Recognition Algorithm Using Improved Gabor Feature Selection of Gait Energy Image
NASA Astrophysics Data System (ADS)
Chao, LIANG; Ling-yao, JIA; Dong-cheng, SHI
2017-01-01
This paper describes an effective gait recognition approach based on Gabor features of the gait energy image. In this paper, kernel Fisher analysis combined with the kernel matrix is proposed to select dominant features. A nearest neighbor classifier based on whitened cosine distance is used to discriminate different gait patterns. The proposed approach is tested on the CASIA and USF gait databases. The results show that our approach outperforms other state-of-the-art gait recognition approaches in terms of recognition accuracy and robustness.
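A whitened cosine nearest-neighbor rule can be read as PCA whitening of the feature space followed by cosine distance. The following is an illustrative reconstruction under that reading, not the authors' code; the toy two-class gallery is an assumption:

```python
import numpy as np

def whitening_matrix(X, eps=1e-8):
    """PCA whitening matrix estimated from training data X (samples x features)."""
    cov = np.cov(X - X.mean(axis=0), rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    return vecs / np.sqrt(vals + eps)   # scale each eigen-direction

def whitened_cosine_distance(x, y, W):
    """Cosine distance between feature vectors after whitening."""
    u, v = x @ W, y @ W
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def nn_classify(train_X, train_y, test_x, W):
    """Label of the nearest gallery sample under whitened cosine distance."""
    d = [whitened_cosine_distance(test_x, x, W) for x in train_X]
    return int(train_y[int(np.argmin(d))])

# Toy gallery: two synthetic "gait feature" classes (illustrative only).
rng = np.random.default_rng(1)
train_X = np.vstack([rng.normal([2.0, 0.0], 0.3, size=(20, 2)),
                     rng.normal([0.0, 2.0], 0.3, size=(20, 2))])
train_y = np.array([0] * 20 + [1] * 20)
W = whitening_matrix(train_X)
print(nn_classify(train_X, train_y, train_X[5], W))  # 0 (its own label)
```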
NASA Technical Reports Server (NTRS)
Knepper, Bryan; Hwang, Soon Muk; DeWitt, Kenneth J.
2004-01-01
Minimum ignition energies of various methanol/air mixtures were measured in a temperature controlled constant volume combustion vessel using a spark ignition method with a spark gap distance of 2 mm. The minimum ignition energies decrease rapidly as the mixture composition (equivalence ratio, Phi) changes from lean to stoichiometric, reach a minimum value, and then increase rather slowly with Phi. The minimum of the minimum ignition energy (MIE) and the corresponding mixture composition were determined to be 0.137 mJ and Phi = 1.16, a slightly rich mixture. The variation of minimum ignition energy with respect to the mixture composition is explained in terms of changes in reaction chemistry.
Rail vs truck transport of biomass.
Mahmudi, Hamed; Flynn, Peter C
2006-01-01
This study analyzes the economics of transshipping biomass from truck to train in a North American setting. Transshipment will only be economic when the cost per unit distance of a second transportation mode is less than the original mode. There is an optimum number of transshipment terminals which is related to biomass yield. Transshipment incurs incremental fixed costs, and hence there is a minimum shipping distance for rail transport above which lower costs/km offset the incremental fixed costs. For transport by dedicated unit train with an optimum number of terminals, the minimum economic rail shipping distance for straw is 170 km, and for boreal forest harvest residue wood chips is 145 km. The minimum economic shipping distance for straw exceeds the biomass draw distance for economically sized centrally located power plants, and hence the prospects for rail transport are limited to cases in which traffic congestion from truck transport would otherwise preclude project development. Ideally, wood chip transport costs would be lowered by rail transshipment for an economically sized centrally located power plant, but in a specific case in Alberta, Canada, the layout of existing rail lines precludes a centrally located plant supplied by rail, whereas a more versatile road system enables it by truck. Hence for wood chips as well as straw the economic incentive for rail transport to centrally located processing plants is limited. Rail transshipment may still be preferred in cases in which road congestion precludes truck delivery, for example as result of community objections.
Solar wind velocity and temperature in the outer heliosphere
NASA Technical Reports Server (NTRS)
Gazis, P. R.; Barnes, A.; Mihalov, J. D.; Lazarus, A. J.
1994-01-01
At the end of 1992, the Pioneer 10, Pioneer 11, and Voyager 2 spacecraft were at heliocentric distances of 56.0, 37.3, and 39.0 AU and heliographic latitudes of 3.3 deg N, 17.4 deg N, and 8.6 deg S, respectively. Pioneer 11 and Voyager 2 are at similar celestial longitudes, while Pioneer 10 is on the opposite side of the Sun. All three spacecraft have working plasma analyzers, so intercomparison of data from these spacecraft provides important information about the global character of the solar wind in the outer heliosphere. The averaged solar wind speed continued to exhibit its well-known variation with solar cycle: Even at heliocentric distances greater than 50 AU, the average speed is highest during the declining phase of the solar cycle and lowest near solar minimum. There was a strong latitudinal gradient in solar wind speed between 3 deg and 17 deg N during the last solar minimum, but this gradient has since disappeared. The solar wind temperature declined with increasing heliocentric distance out to a heliocentric distance of at least 20 AU; this decline appeared to continue at larger heliocentric distances, but temperatures in the outer heliosphere were surprisingly high. While Pioneer 10 and Voyager 2 observed comparable solar wind temperatures, the temperature at Pioneer 11 was significantly higher, which suggests the existence of a large-scale variation of temperature with heliographic longitude. There was also some suggestion that solar wind temperatures were higher near solar minimum.
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
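The MPP is the point on the limit-state surface g(u) = 0 closest to the origin of standard-normal space; its distance β (the reliability index) gives the FORM safety-probability estimate Φ(−β). A minimal sketch of the optimization search, using a hypothetical linear limit state rather than any example from this paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def find_mpp(g, u0):
    """Most probable point: minimize ||u||^2 subject to g(u) = 0 in
    standard-normal space; the norm of the solution is the index beta."""
    res = minimize(lambda u: u @ u, u0,
                   constraints={"type": "eq", "fun": g})
    return res.x, float(np.linalg.norm(res.x))

# Hypothetical linear limit state (illustration only): failure occurs when
# u1 + u2 >= 3, so the limit-state surface is g(u) = u1 + u2 - 3 = 0.
mpp, beta = find_mpp(lambda u: u[0] + u[1] - 3.0, np.zeros(2))
pf = norm.cdf(-beta)  # FORM estimate of the failure probability
print(mpp, beta)      # approximately [1.5 1.5] and 3/sqrt(2) = 2.1213
```

For a linear limit state the FORM estimate is exact, which makes this case a convenient sanity check on the optimizer.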
Langley Air Force Base Marina Repair Environmental Assessment
2004-08-16
of human perception for extended periods of time; cosmetic or structural damage could occur to buildings. Table 3-8 presents the minimum distance at...Hazardous Waste Storage Areas (HWSA) where they are stored until disposal is economically practicable or before 90 days has expired, whichever comes...Shop, where paints, paint thinners, paint mixing, and cleansing of paint equipment took place between 1950 and 1991. The other is the gasoline storage
33 CFR 67.05-20 - Minimum lighting requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... for Lights § 67.05-20 Minimum lighting requirements. The obstruction lighting requirements prescribed... application for authorization to establish more lights, or lights of greater intensity than required to be visible at the distances prescribed: Provided, That the prescribed characteristics of color and flash...
33 CFR 67.05-20 - Minimum lighting requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... for Lights § 67.05-20 Minimum lighting requirements. The obstruction lighting requirements prescribed... application for authorization to establish more lights, or lights of greater intensity than required to be visible at the distances prescribed: Provided, That the prescribed characteristics of color and flash...
33 CFR 67.05-20 - Minimum lighting requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... for Lights § 67.05-20 Minimum lighting requirements. The obstruction lighting requirements prescribed... application for authorization to establish more lights, or lights of greater intensity than required to be visible at the distances prescribed: Provided, That the prescribed characteristics of color and flash...
33 CFR 67.05-20 - Minimum lighting requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... for Lights § 67.05-20 Minimum lighting requirements. The obstruction lighting requirements prescribed... application for authorization to establish more lights, or lights of greater intensity than required to be visible at the distances prescribed: Provided, That the prescribed characteristics of color and flash...
33 CFR 67.05-20 - Minimum lighting requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... for Lights § 67.05-20 Minimum lighting requirements. The obstruction lighting requirements prescribed... application for authorization to establish more lights, or lights of greater intensity than required to be visible at the distances prescribed: Provided, That the prescribed characteristics of color and flash...
7 CFR 1703.143 - Maximum and minimum amounts.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 11 2010-01-01 2010-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...
7 CFR 1703.143 - Maximum and minimum amounts.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 11 2012-01-01 2012-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...
7 CFR 1703.143 - Maximum and minimum amounts.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 11 2013-01-01 2013-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...
7 CFR 1703.143 - Maximum and minimum amounts.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 11 2014-01-01 2014-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...
7 CFR 1703.143 - Maximum and minimum amounts.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 11 2011-01-01 2011-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...
40 CFR 257.25 - Assessment monitoring program.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) Minimum distance between upgradient edge of the unit and downgradient monitoring well screen (minimum... that is likely to be without appreciable risk of deleterious effects during a lifetime. For purposes of this subpart, systemic toxicants include toxic chemicals that cause effects other than cancer or...
40 CFR 257.25 - Assessment monitoring program.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) Minimum distance between upgradient edge of the unit and downgradient monitoring well screen (minimum... that is likely to be without appreciable risk of deleterious effects during a lifetime. For purposes of this subpart, systemic toxicants include toxic chemicals that cause effects other than cancer or...
40 CFR 257.25 - Assessment monitoring program.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) Minimum distance between upgradient edge of the unit and downgradient monitoring well screen (minimum... that is likely to be without appreciable risk of deleterious effects during a lifetime. For purposes of this subpart, systemic toxicants include toxic chemicals that cause effects other than cancer or...
Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets
NASA Astrophysics Data System (ADS)
Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua
2017-09-01
In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and that the minimum distance is not large enough, which leads to degradation of the error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1, and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs in the parity check matrices makes it possible to achieve a larger minimum distance, which can improve the error-correction performance of the codes. The Tanner graphs of these codes have no girth-4 cycles, so they have excellent decoding convergence characteristics. In addition, because the parity check matrices have a quasi-dual-diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve a more excellent error-correction performance and exhibit no error floor phenomenon over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
NASA Astrophysics Data System (ADS)
Sutikno, Madnasri; Susilo; Arya Wijayanti, Riza
2016-08-01
A study of the impact of X-ray radiation on white mice, through radiation dose mapping in the Medical Physics Laboratory, has been carried out. The purpose of this research is to determine the minimum distance of a radiologist from the X-ray instrument through treatment of the white mice. The radiation exposure doses were measured at several points at distances from the radiation source between 30 cm and 80 cm, with an interval of 30 cm. The impact of radiation exposure on the white mice and the effects of radiation measured in different directions were investigated. It was found that the minimum distance of a radiation worker from the radiation source is 180 cm, and that X-ray exposure decreased the leukocyte number and haemoglobin and increased the thrombocyte number in the blood of the white mice.
Estimates of the absolute error and a scheme for an approximate solution to scheduling problems
NASA Astrophysics Data System (ADS)
Lazarev, A. A.
2009-02-01
An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness or the makespan for one or many machines. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial one in the metric introduced, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, the instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
Christian, Josef; Kröll, Josef; Schwameder, Hermann
2017-06-01
Common summary measures of gait quality such as the Gait Profile Score (GPS) are based on the principle of measuring a distance from the mean pattern of a healthy reference group in a gait pattern vector space. The recently introduced Classifier Oriented Gait Score (COGS) is a pathology specific score that measures this distance in a unique direction, which is indicated by a linear classifier. This approach has potentially improved the discriminatory power to detect subtle changes in gait patterns but does not incorporate a profile of interpretable sub-scores like the GPS. The main aims of this study were to extend the COGS by decomposing it into interpretable sub-scores as realized in the GPS and to compare the discriminative power of the GPS and COGS. Two types of gait impairments were imitated to enable a high level of control of the gait patterns. Imitated impairments were realized by restricting knee extension and inducing leg length discrepancy. The results showed increased discriminatory power of the COGS for differentiating diverse levels of impairment. Comparison of the GPS and COGS sub-scores and their ability to indicate changes in specific variables supports the validity of both scores. The COGS is an overall measure of gait quality with increased power to detect subtle changes in gait patterns and might be well suited for tracing the effect of a therapeutic treatment over time. The newly introduced sub-scores improved the interpretability of the COGS, which is helpful for practical applications.
Online clustering algorithms for radar emitter classification.
Liu, Jun; Lee, Jim P Y; Senior; Li, Lingjie; Luo, Zhi-Quan; Wong, K Max
2005-08-01
Radar emitter classification is a special application of data clustering for classifying unknown radar emitters from received radar pulse samples. The main challenges of this task are the high dimensionality of radar pulse samples, small sample group size, and closely located radar pulse clusters. In this paper, two new online clustering algorithms are developed for radar emitter classification: One is model-based using the Minimum Description Length (MDL) criterion and the other is based on competitive learning. Computational complexity is analyzed for each algorithm and then compared. Simulation results show the superior performance of the model-based algorithm over competitive learning in terms of better classification accuracy, flexibility, and stability.
Numerical approach of collision avoidance and optimal control on robotic manipulators
NASA Technical Reports Server (NTRS)
Wang, Jyhshing Jack
1990-01-01
Collision-free optimal motion and trajectory planning for robotic manipulators is solved by a method of sequential gradient restoration algorithm. Numerical examples of a two-degree-of-freedom (DOF) robotic manipulator demonstrate the excellence of the optimization technique and the obstacle avoidance scheme. The obstacle is placed midway, or even further inward on purpose, along the previous obstacle-free optimal trajectory. For the minimum-time objective, the trajectory grazes the obstacle and the minimum-time motion successfully avoids it. The minimum time is longer for the obstacle-avoidance cases than for the case without an obstacle. The obstacle avoidance scheme can deal with multiple obstacles in any ellipsoidal form by using artificial potential fields as penalty functions via distance functions. The method is promising for solving collision-free optimal control problems for robotics and can be applied to robotic manipulators with any number of DOFs and any performance indices, as well as to mobile robots. Since this method generates the optimum solution based on the Pontryagin Extremum Principle, rather than on assumptions, the results provide a benchmark against which other optimization techniques can be measured.
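The idea of an ellipsoidal obstacle entering the cost through a distance-function penalty can be sketched as follows. The normalized level function and the quadratic penalty shape, together with the gain k, are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def ellipsoid_level(p, center, semi_axes):
    """Normalized ellipsoid 'distance' F(p): F < 1 inside the obstacle,
    F = 1 on its surface, F > 1 outside."""
    return float(np.sum(((p - center) / semi_axes) ** 2))

def obstacle_penalty(p, center, semi_axes, k=100.0):
    """Artificial-potential penalty: zero outside the ellipsoid and growing
    quadratically with the depth of intrusion (gain k is an assumption)."""
    return k * max(0.0, 1.0 - ellipsoid_level(p, center, semi_axes)) ** 2

center, axes = np.array([0.0, 0.0]), np.array([1.0, 2.0])
print(obstacle_penalty(np.array([0.0, 0.0]), center, axes))  # 100.0 at the center
print(obstacle_penalty(np.array([5.0, 5.0]), center, axes))  # 0.0 far away
```

Summing such terms over several obstacles extends the scheme to the multiple-obstacle case mentioned in the abstract.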
Entropy-Based Registration of Point Clouds Using Terrestrial Laser Scanning and Smartphone GPS.
Chen, Maolin; Wang, Siying; Wang, Mingwei; Wan, Youchuan; He, Peipei
2017-01-20
Automatic registration of terrestrial laser scanning point clouds is a crucial but unresolved topic of great interest in many domains. This study combines a terrestrial laser scanner with a smartphone for the coarse registration of leveled point clouds with small roll and pitch angles and height differences, which is a novel sensor combination mode for terrestrial laser scanning. The approximate distance between two neighboring scan positions is first calculated from smartphone GPS coordinates. Then, 2D distribution entropy is used to measure the distribution coherence between the two scans and to search for the optimal initial transformation parameters. To this end, we propose a method called Iterative Minimum Entropy (IME) to correct the initial transformation parameters based on two criteria: the difference between the average and minimum entropy, and the deviation from the minimum entropy to the expected entropy. Finally, the presented method is evaluated using two data sets that contain tens of millions of points from panoramic and non-panoramic, vegetation-dominated and building-dominated cases, and it achieves high accuracy and efficiency.
An Improved Global Wind Resource Estimate for Integrated Assessment Models: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eurek, Kelly; Sullivan, Patrick; Gleason, Michael
This paper summarizes initial steps toward improving the robustness and accuracy of global renewable resource and techno-economic assessments for use in integrated assessment models. We outline a method to construct country-level wind resource supply curves, delineated by resource quality and other parameters. Using mesoscale reanalysis data, we generate estimates for wind quality, both terrestrial and offshore, across the globe. Because not all land or water area is suitable for development, appropriate database layers provide exclusions to reduce the total resource to its technical potential. We expand upon estimates from related studies by: using a globally consistent data source of uniquely detailed wind speed characterizations; assuming a non-constant coefficient of performance for adjusting power curves for altitude; categorizing the distance from resource sites to the electric power grid; and characterizing offshore exclusions on the basis of sea ice concentrations. The product, then, is technical potential by country, classified by resource quality as determined by net capacity factor. Additional classification dimensions are available, including distance to transmission networks for terrestrial wind and distance to shore and water depth for offshore. We estimate a total global wind generation potential of 560 PWh for terrestrial wind, with 90% of the resource classified as low-to-mid quality, and 315 PWh for offshore wind, with 67% classified as mid-to-high quality. These estimates are based on 3.5 MW composite wind turbines with 90 m hub heights, 0.95 availability, 90% array efficiency, and 5 MW/km2 deployment density in non-excluded areas. We compare the underlying technical assumptions and results with other global assessments.
NASA Astrophysics Data System (ADS)
Du, Peng; Ouahsine, Abdellatif; Sergent, Philippe
2018-05-01
Ship maneuvering in a confined inland waterway is investigated using the system-based method, where a nonlinear transient hydrodynamic model is adopted and confinement models are implemented to account for the influence of the channel bank and bottom. The maneuvering model is validated using the turning circle test, and the confinement model is validated using experimental data. The separation distance, ship speed, and channel width are then varied to investigate their influence on ship maneuverability. With smaller separation distances and higher speeds near the bank, the ship's trajectory deviates more from the original course and the bow is repelled with a larger yaw angle, which increases the difficulty of maneuvering. Smaller channel widths induce higher advancing resistance on the ship. The minimum distance to the bank is extracted and studied. It is suggested to navigate the ship in the middle of the channel, and at a reasonable speed, in the restricted waterway.
Classification of Palmprint Using Principal Line
NASA Astrophysics Data System (ADS)
Prasad, Munaga V. N. K.; Kumar, M. K. Pramod; Sharma, Kuldeep
In this paper, a new classification scheme for palmprint is proposed. Palmprint is one of the reliable physiological characteristics that can be used to authenticate an individual. Palmprint classification provides an important indexing mechanism in a very large palmprint database. Here, the palmprint database is initially categorized into two groups, a right-hand group and a left-hand group. Then, each group is further classified based on the distance traveled by the principal line, i.e., the heart line. During preprocessing, a rectangular Region of Interest (ROI), in which only the heart line is present, is extracted. Further, the ROI is divided into 6 regions, and depending upon the regions the heart line traverses, the palmprint is classified accordingly. Consequently, our scheme allows 64 categories for each group, forming a total of 128 possible categories. The technique proposed in this paper uses only 15 such categories, and it classifies not more than 20.96% of the images into any single category.
Electrostatic Interactions Between Glycosaminoglycan Molecules
NASA Astrophysics Data System (ADS)
Song, Fan; Moyne, Christian; Bai, Yi-Long
2005-02-01
The electrostatic interactions between nearest-neighbouring chondroitin sulfate glycosaminoglycan (CS-GAG) molecular chains are obtained for the bottle-brush conformation of proteoglycan aggrecan, based on an asymptotic solution of the Poisson-Boltzmann equation that the CS-GAGs satisfy under the physiological conditions of articular cartilage. The present results show that the interactions are intimately associated with the minimum separation distance and the mutual angle between the molecular chains. Further analysis indicates that the electrostatic interactions are not only purely exponential in the separation distance and decreasing with increasing mutual angle, but also depend sensitively on the saline concentration of the electrolyte solution within the tissue, in agreement with existing relevant conclusions.
A new approach for categorizing pig lying behaviour based on a Delaunay triangulation method.
Nasirahmadi, A; Hensel, O; Edwards, S A; Sturm, B
2017-01-01
Machine vision-based monitoring of pig lying behaviour is a fast and non-intrusive approach that could be used to improve animal health and welfare. Four pens with 22 pigs in each were selected at a commercial pig farm and monitored for 15 days using top view cameras. Three thermal categories were selected relative to room setpoint temperature. An image processing technique based on Delaunay triangulation (DT) was utilized. Different lying patterns (close, normal and far) were defined regarding the perimeter of each DT triangle and the percentages of each lying pattern were obtained in each thermal category. A method using a multilayer perceptron (MLP) neural network, to automatically classify group lying behaviour of pigs into three thermal categories, was developed and tested for its feasibility. The DT features (mean value of perimeters, maximum and minimum length of sides of triangles) were calculated as inputs for the MLP classifier. The network was trained, validated and tested and the results revealed that MLP could classify lying features into the three thermal categories with high overall accuracy (95.6%). The technique indicates that a combination of image processing, MLP classification and mathematical modelling can be used as a precise method for quantifying pig lying behaviour in welfare investigations.
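The perimeter features on which the close/normal/far patterns are based can be sketched with SciPy's Delaunay triangulation over the detected pig positions. The thresholds in lying_pattern below are hypothetical placeholders, not the paper's calibrated values:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_perimeters(points):
    """Delaunay-triangulate 2D pig-body centroids and return the
    perimeter of every triangle in the triangulation."""
    tri = Delaunay(points)
    perims = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        perims.append(np.linalg.norm(a - b) +
                      np.linalg.norm(b - c) +
                      np.linalg.norm(c - a))
    return np.array(perims)

def lying_pattern(perimeters, close_thr, far_thr):
    """Fractions of 'close', 'normal' and 'far' triangles.
    The thresholds are hypothetical, not the paper's calibrated values."""
    close = float(np.mean(perimeters < close_thr))
    far = float(np.mean(perimeters > far_thr))
    return close, 1.0 - close - far, far

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.2, 1.1]])
perims = triangle_perimeters(points)
print(lying_pattern(perims, 1.0, 10.0))  # (0.0, 1.0, 0.0)
```

Statistics such as the mean, maximum, and minimum side lengths of these triangles could then serve as inputs to the MLP classifier described above.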
1983-06-01
...separate exhaust nozzles for discharge of fan and turbine exhaust flows (e.g., JT15D, TFE731, ALF-502, CF34, JT3D, CFM56, RB.211, CF6, JT9D, and PW2037)...minimum radial distance from the effective source of sound at 40 Hz should then be approximately 69 m. At 60 Hz, the minimum radial distance should be
Automated extraction and classification of RNA tertiary structure cyclic motifs
Lemieux, Sébastien; Major, François
2006-01-01
A minimum cycle basis of the tertiary structure of a large ribosomal subunit (LSU) X-ray crystal structure was analyzed. Most cycles are small, as they are composed of 3- to 5 nt, and repeated across the LSU tertiary structure. We used hierarchical clustering to quantify and classify the 4 nt cycles. One class is defined by the GNRA tetraloop motif. The inspection of the GNRA class revealed peculiar instances in sequence. First is the presence of UA, CA, UC and CC base pairs that substitute the usual sheared GA base pair. Second is the revelation of GNR(Xn)A tetraloops, where Xn is bulged out of the classical GNRA structure, and of GN/RA formed by the two strands of interior-loops. We were able to unambiguously characterize the cycle classes using base stacking and base pairing annotations. The cycles identified correspond to small and cyclic motifs that compose most of the LSU RNA tertiary structure and contribute to its thermodynamic stability. Consequently, the RNA minimum cycles could well be used as the basic elements of RNA tertiary structure prediction methods. PMID:16679452
Knick, Steven T.; Rotenberry, J.T.
1998-01-01
We tested the potential of a GIS mapping technique, using a resource selection model developed for black-tailed jackrabbits (Lepus californicus) and based on the Mahalanobis distance statistic, to track changes in shrubsteppe habitats in southwestern Idaho. If successful, the technique could be used to predict animal use areas, or those undergoing change, in different regions from the same selection function and variables without additional sampling. We determined the multivariate mean vector of 7 GIS variables that described habitats used by jackrabbits. We then ranked the similarity of all cells in the GIS coverage by their Mahalanobis distance to the mean habitat vector. The resulting map accurately depicted areas where we sighted jackrabbits on verification surveys. We then simulated an increase in shrublands (which are important habitats). Contrary to expectation, the new configurations were classified as lower similarity relative to the original mean habitat vector. Because the selection function is based on a unimodal mean, any deviation, even if biologically positive, creates larger Mahalanobis distances and lower similarity values. We recommend the Mahalanobis distance technique for mapping animal use areas when animals are distributed optimally, the landscape is well sampled to determine the mean habitat vector, and the distributions of the habitat variables do not change.
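The mapping step lends itself to a short sketch: rank every cell by its Mahalanobis distance to the mean habitat vector of used sites. The seven "habitat variables" below are synthetic stand-ins for the GIS layers:

```python
import numpy as np

def mahalanobis_map(cells, used_sites):
    """Hedged sketch of the habitat-mapping step described above: rank
    each GIS cell by its squared Mahalanobis distance to the mean vector
    of habitat variables at sites where the animal was observed."""
    mu = used_sites.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(used_sites, rowvar=False))
    diff = cells - mu
    # squared Mahalanobis distance per cell (smaller = more similar habitat)
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return d2

rng = np.random.default_rng(1)
used = rng.normal(0, 1, size=(200, 7))        # 7 habitat variables at used sites
# first "cell" matches the mean habitat exactly; second is far from it
cells = np.vstack([used.mean(axis=0), rng.normal(5, 1, size=(1, 7))])
d2 = mahalanobis_map(cells, used)
```

The unimodal-mean caveat in the abstract is visible here: any displacement from `mu`, in whatever direction, increases `d2`.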
7 CFR 1703.124 - Maximum and minimum grant amounts.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 11 2010-01-01 2010-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...
7 CFR 1703.124 - Maximum and minimum grant amounts.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 11 2013-01-01 2013-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...
7 CFR 1703.124 - Maximum and minimum grant amounts.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 11 2012-01-01 2012-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...
7 CFR 1703.124 - Maximum and minimum grant amounts.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 11 2011-01-01 2011-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...
7 CFR 1703.124 - Maximum and minimum grant amounts.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 11 2014-01-01 2014-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...
Floating and Tether-Coupled Adhesion of Bacteria to Hydrophobic and Hydrophilic Surfaces
2018-01-01
Models for bacterial adhesion to substratum surfaces all include uncertainty with respect to the (ir)reversibility of adhesion. In a model, based on vibrations exhibited by adhering bacteria parallel to a surface, adhesion was described as a result of reversible binding of multiple bacterial tethers that detach from and successively reattach to a surface, eventually making bacterial adhesion irreversible. Here, we use total internal reflection microscopy to determine whether adhering bacteria also exhibit variations over time in their perpendicular distance above surfaces. Streptococci with fibrillar surface tethers showed perpendicular vibrations with amplitudes of around 5 nm, regardless of surface hydrophobicity. Adhering, nonfibrillated streptococci vibrated with amplitudes around 20 nm above a hydrophobic surface. Amplitudes did not depend on ionic strength for either strain. Calculations of bacterial energies from their distances above the surfaces using the Boltzmann equation showed that bacteria with fibrillar tethers vibrated as a harmonic oscillator. The energy of bacteria without fibrillar tethers varied with distance in a fashion comparable to the DLVO (Derjaguin, Landau, Verwey, and Overbeek) interaction energy. Distance variations above the surface over time of bacteria with fibrillar tethers are suggested to be governed by the harmonic oscillations, allowed by elasticity of the tethers, piercing through the potential energy barrier. Bacteria without fibrillar tethers “float” above a surface in the secondary energy minimum, with their perpendicular displacement restricted by their thermal energy and the width of the secondary minimum. The distinction between “tether-coupled” and “floating” adhesion is new, and may have implications for bacterial detachment strategies. PMID:29649869
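The energy calculation can be illustrated by Boltzmann inversion of a distance histogram. This is a hedged sketch with synthetic vibration data; the actual TIRM processing in the paper is more involved:

```python
import numpy as np

kT = 4.11e-21  # thermal energy at ~298 K, in joules

def energy_from_distances(distances, bins=30):
    """Boltzmann inversion (a sketch of the approach described above): the
    probability of finding a cell at height d maps to an energy
    E(d) = -kT * ln p(d), up to an additive constant."""
    p, edges = np.histogram(distances, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = p > 0
    E = -kT * np.log(p[mask])
    return centers[mask], E - E.min()   # shift so the minimum is zero

# harmonic-oscillator-like vibrations, ~5 nm amplitude around a 20 nm height
rng = np.random.default_rng(2)
d = rng.normal(20e-9, 5e-9, size=20000)
centers, E = energy_from_distances(d)
```

For a Gaussian distance distribution the recovered energy well is parabolic, which is what "vibrated as a harmonic oscillator" refers to.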
Bivariate empirical mode decomposition for ECG-based biometric identification with emotional data.
Ferdinando, Hany; Seppanen, Tapio; Alasaarela, Esko
2017-07-01
Emotions modulate ECG signals such that they might affect ECG-based biometric identification in real-life applications. This motivates the search for feature extraction methods on which the emotional state of the subjects has minimal impact. This paper evaluates feature extraction based on bivariate empirical mode decomposition (BEMD) for biometric identification when emotion is considered. Using the ECG signals from the Mahnob-HCI database for affect recognition, the features were statistical distributions of dominant frequency after applying BEMD analysis to the ECG signals. The achieved accuracy was 99.5% with high consistency, using a kNN classifier in 10-fold cross-validation to identify 26 subjects when the emotional states of the subjects were ignored. When the emotional states of the subjects were considered, the proposed method also delivered high accuracy, around 99.4%. We conclude that the proposed method offers emotion-independent features for ECG-based biometric identification. The proposed method needs further evaluation, including testing with other classifiers and with variations in ECG signals, e.g., normal ECG vs. ECG with arrhythmias, ECG from various ages, and ECG from other affective databases.
NASA Astrophysics Data System (ADS)
Cheng, Shaoyong; Xiu, Shixin; Wang, Jimei; Shen, Zhengchao
2006-11-01
The greenhouse effect of SF6 is a great concern today, so the development of high voltage vacuum circuit breakers becomes more important. The vacuum circuit breaker causes minimal pollution to the environment, and the vacuum interrupter is its key part. The interrupting characteristics in vacuum and the arc-controlling technique at longer gap distances are the main problems to be solved in developing high voltage vacuum interrupters. To understand the vacuum arc characteristics and provide an effective technique to control the vacuum arc over a long gap distance, the arc mode transition of a cup-type axial magnetic field electrode was observed by a high-speed charge coupled device (CCD) video camera under different gap distances while the arc voltage and arc current were recorded. The controlling ability of the axial magnetic field on the vacuum arc decreases markedly when the gap distance is longer than 40 mm, and the noise components and mean value of the arc voltage increase significantly. Based on the test results, an effective method for controlling the vacuum arc characteristics at long gap distances is provided. The test results can be used as a reference in developing high voltage, large capacity vacuum interrupters.
Wen, Tingxi; Zhang, Zhongnan
2017-01-01
In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance to intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for two-class and three-class problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789
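The interclass/intraclass distance ratio used above to rate feature quality can be sketched as follows. This is one common form of the criterion; the paper's exact definition may differ:

```python
import numpy as np

def separability_ratio(X, y):
    """Sketch of the feature-quality criterion mentioned above: the ratio
    of mean interclass distance to mean intraclass distance. Higher values
    indicate better-separated (more discriminative) features."""
    inter, intra = [], []
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(X[i] - X[j])
            (intra if y[i] == y[j] else inter).append(d)
    return np.mean(inter) / np.mean(intra)

rng = np.random.default_rng(3)
# two well-separated synthetic feature clouds -> ratio well above 1
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(6, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
ratio = separability_ratio(X, y)
```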
Deng, Changjian; Lv, Kun; Shi, Debo; Yang, Bo; Yu, Song; He, Zhiyi; Yan, Jia
2018-06-12
In this paper, a novel feature selection and fusion framework is proposed to enhance the discrimination ability of gas sensor arrays for odor identification. Firstly, we put forward an efficient feature selection method, based on separability and dissimilarity, to determine the feature selection order for each type of feature as the dimension of the selected feature subsets increases. Secondly, the K-nearest neighbor (KNN) classifier is applied to determine the dimensions of the optimal feature subsets for the different types of features. Finally, in the feature fusion stage, we propose a classification-dominance feature fusion strategy that constructs an effective basic feature. Experimental results on two datasets show that the recognition rates on Database I and Database II reach 97.5% and 80.11%, respectively, with k = 1 for the KNN classifier and correlation distance (COR) as the distance metric, which demonstrates the superiority of the proposed feature selection and fusion framework in representing signal features. The novel feature selection method can effectively select feature subsets that are conducive to classification, while the feature fusion framework can fuse various features describing different characteristics of the sensor signals, enhancing the discrimination ability of the gas sensors and, to a certain extent, suppressing the drift effect.
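The classification step above (1-NN with correlation distance) can be sketched with scikit-learn. The "sensor responses" below are synthetic stand-ins for real gas-sensor feature subsets:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Sketch of a k-NN classifier with k = 1 and correlation distance (COR),
# as in the setup described above. Three synthetic odor "signatures"
# stand in for real gas-sensor array features.
rng = np.random.default_rng(4)
base = rng.normal(0, 1, size=(3, 16))
X = np.vstack([b + rng.normal(0, 0.1, (40, 16)) for b in base])
y = np.repeat([0, 1, 2], 40)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
knn = KNeighborsClassifier(n_neighbors=1, metric='correlation')
knn.fit(Xtr, ytr)
acc = knn.score(Xte, yte)
```

Correlation distance compares response *shapes* rather than magnitudes, which is one reason it can be attractive for drifting sensors.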
NASA Technical Reports Server (NTRS)
Tetervin, Neal
1957-01-01
By use of the linear theory of boundary-layer stability and Schlichting's formula for the maximum amplification of a disturbance, an approximate relation is derived between the Reynolds number on a cone and the Reynolds number on a flat plate for equal closeness to transition. The indication is that the ratio of the cone Reynolds number for transition, based on the distance to the cone apex, to the plate Reynolds number for transition, based on the distance to the leading edge, is not in general equal to 3, as has been suggested by other investigators, but varies from 3 when transition occurs at the minimum critical Reynolds number to unity when transition occurs at a large multiple of the critical Reynolds number.
Cannistraci, Carlo Vittorio; Ravasi, Timothy; Montevecchi, Franco Maria; Ideker, Trey; Alessio, Massimo
2010-09-15
Nonlinear small datasets, which are characterized by low numbers of samples and very high numbers of measures, occur frequently in computational biology, and pose problems in their investigation. Unsupervised hybrid-two-phase (H2P) procedures-specifically dimension reduction (DR), coupled with clustering-provide valuable assistance, not only for unsupervised data classification, but also for visualization of the patterns hidden in high-dimensional feature space. 'Minimum Curvilinearity' (MC) is a principle that-for small datasets-suggests the approximation of curvilinear sample distances in the feature space by pair-wise distances over their minimum spanning tree (MST), and thus avoids the introduction of any tuning parameter. MC is used to design two novel forms of nonlinear machine learning (NML): Minimum Curvilinear embedding (MCE) for DR, and Minimum Curvilinear affinity propagation (MCAP) for clustering. Compared with several other unsupervised and supervised algorithms, MCE and MCAP, whether individually or combined in H2P, overcome the limits of classical approaches. High performance was attained in the visualization and classification of: (i) pain patients (proteomic measurements) in peripheral neuropathy; (ii) human organ tissues (genomic transcription factor measurements) on the basis of their embryological origin. MC provides a valuable framework to estimate nonlinear distances in small datasets. Its extension to large datasets is prefigured for novel NMLs. Classification of neuropathic pain by proteomic profiles offers new insights for future molecular and systems biology characterization of pain. Improvements in tissue embryological classification refine results obtained in an earlier study, and suggest a possible reinterpretation of skin attribution as mesodermal. https://sites.google.com/site/carlovittoriocannistraci/home.
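The MC principle can be sketched directly with SciPy: Euclidean distances, a minimum spanning tree, then pairwise path lengths over the tree. The semicircle data below is an illustration, not one of the paper's datasets:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path
from scipy.spatial.distance import pdist, squareform

def minimum_curvilinear_distances(X):
    """Sketch of the Minimum Curvilinearity principle described above:
    approximate curvilinear sample distances by pairwise path lengths
    over the minimum spanning tree of the Euclidean distance graph,
    with no tuning parameter."""
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D)          # sparse tree over the samples
    return shortest_path(mst, directed=False)

# points along a curve: the MC distance follows the curve, not the chord
t = np.linspace(0, np.pi, 50)
X = np.column_stack([np.cos(t), np.sin(t)])
mc = minimum_curvilinear_distances(X)
```

For the two endpoints of the semicircle, the Euclidean distance is 2 but the MC distance approaches the arc length of about 3.14, showing how the MST captures the curvilinear geometry.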
Reconstruction of phylogenetic trees of prokaryotes using maximal common intervals.
Heydari, Mahdi; Marashi, Sayed-Amir; Tusserkani, Ruzbeh; Sadeghi, Mehdi
2014-10-01
One of the fundamental problems in bioinformatics is phylogenetic tree reconstruction, which can be used for classifying living organisms into different taxonomic clades. The classical approach to this problem is based on a marker such as 16S ribosomal RNA. Since evolutionary events like genomic rearrangements are not included in reconstructions of phylogenetic trees based on single genes, much effort has been made to find other characteristics for phylogenetic reconstruction in recent years. With the increasing availability of completely sequenced genomes, gene order can be considered as a new solution for this problem. In the present work, we applied maximal common intervals (MCIs) in two or more genomes to infer their distance and to reconstruct their evolutionary relationship. Additionally, measures based on uncommon segments (UCS's), i.e., those genomic segments which are not detected as part of any of the MCIs, are also used for phylogenetic tree reconstruction. We applied these two types of measures for reconstructing the phylogenetic tree of 63 prokaryotes with known COG (clusters of orthologous groups) families. Similarity between the MCI-based (resp. UCS-based) reconstructed phylogenetic trees and the phylogenetic tree obtained from NCBI taxonomy browser is as high as 93.1% (resp. 94.9%). We show that in the case of this diverse dataset of prokaryotes, tree reconstruction based on MCI and UCS outperforms most of the currently available methods based on gene orders, including breakpoint distance and DCJ. We additionally tested our new measures on a dataset of 13 closely-related bacteria from the genus Prochlorococcus. In this case, distances like rearrangement distance, breakpoint distance and DCJ proved to be useful, while our new measures are still appropriate for phylogenetic reconstruction. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Hiding Information Using different lighting Color images
NASA Astrophysics Data System (ADS)
Majead, Ahlam; Awad, Rash; Salman, Salema S.
2018-05-01
The choice of host medium for the secret message is one of the key design principles in steganography. In this study, the most suitable color image for carrying a secret image was investigated. The steganography approach is based on the Lifting Wavelet Transform (LWT) and Least Significant Bit (LSB) substitution. The proposed method introduces lossless and unnoticeable changes in the contrast of the carrier color image that are imperceptible to the human visual system (HVS), especially for host images captured in dark lighting conditions. The aim of the study was to examine the process of hiding data in color images of different light intensities. The effect of the embedding process was examined on images classified by minimum distance, and on the amount of noise and distortion in the image, together with the histogram and statistical characteristics of the cover image. The results showed that images taken at different light intensities can be used efficiently for hiding data with least-significant-bit substitution, and that the method succeeded in concealing textual data without visibly distorting the original (low-light) image. A digital image segmentation technique was used to distinguish small regions affected by embedding; smooth homogeneous areas were less affected by hiding than brightly lit areas. Dark color images can therefore be used to exchange secret messages between two parties for the purpose of secret communication with good security.
Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.
Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen
2017-06-01
The article proposes a set of metrics for evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, according to whether the evaluation employs the raw measurements of patient-performed motions or is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar measures. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
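The simplest of the model-less metrics, root-mean-square distance, can be sketched as follows. The motion data here is synthetic, standing in for Kinect joint trajectories:

```python
import numpy as np

def rms_distance(seq_a, seq_b):
    """One of the model-less metrics listed above: root-mean-square
    distance between a patient motion sequence and a reference sequence
    (both as frames x joint-coordinates arrays of equal shape)."""
    return np.sqrt(np.mean((seq_a - seq_b) ** 2))

# reference exercise vs. a noisy patient repetition (synthetic trajectories)
t = np.linspace(0, 1, 100)[:, None]
reference = np.hstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
rng = np.random.default_rng(5)
patient = reference + rng.normal(0, 0.05, reference.shape)
score = rms_distance(patient, reference)
```

A score of zero means a perfect reproduction of the reference; larger scores indicate larger deviations from the prescribed motion.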
Multicategory nets of single-layer perceptrons: complexity and sample-size issues.
Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras
2010-05-01
The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of inexact criteria can become harmful. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable to, and often outperform, linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for facilitating finite sample problems in moderate-dimensional PR tasks.
Jaafar, Haryati; Ibrahim, Salwani; Ramli, Dzati Athiar
2015-01-01
Mobile implementation is a current trend in biometric design. This paper proposes a new approach to palm print recognition, in which smart phones are used to capture palm print images at a distance. A touchless system was developed because of public demand for privacy and sanitation. Robust hand tracking, image enhancement, and fast computation processing algorithms are required for effective touchless and mobile-based recognition. In this project, hand tracking and the region of interest (ROI) extraction method were discussed. A sliding neighborhood operation with local histogram equalization, followed by a local adaptive thresholding or LHEAT approach, was proposed in the image enhancement stage to manage low-quality palm print images. To accelerate the recognition process, a new classifier, improved fuzzy-based k nearest centroid neighbor (IFkNCN), was implemented. By removing outliers and reducing the amount of training data, this classifier exhibited faster computation. Our experimental results demonstrate that a touchless palm print system using LHEAT and IFkNCN achieves a promising recognition rate of 98.64%. PMID:26113861
A Virtual Blind Cane Using a Line Laser-Based Vision System and an Inertial Measurement Unit
Dang, Quoc Khanh; Chee, Youngjoon; Pham, Duy Duong; Suh, Young Soo
2016-01-01
A virtual blind cane system for indoor application, including a camera, a line laser and an inertial measurement unit (IMU), is proposed in this paper. Working as a blind cane, the proposed system helps a blind person find the type of obstacle and the distance to it. The distance from the user to the obstacle is estimated by extracting the laser coordinate points on the obstacle, as well as tracking the system pointing angle. The paper provides a simple method to classify the obstacle’s type by analyzing the laser intersection histogram. Real experimental results are presented to show the validity and accuracy of the proposed system. PMID:26771618
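The range estimation in such a system rests on triangulating the laser line against the tracked pointing angle. Below is a hedged one-line sketch under the simplest possible geometry (a known camera-laser baseline and an IMU-tracked pointing angle); the paper's actual model also extracts laser coordinate points and analyzes the laser intersection histogram:

```python
import math

def laser_distance(baseline_m, pointing_angle_deg):
    """Hypothetical triangulation sketch (not the paper's exact model):
    with a known camera-laser baseline and a tracked pointing angle, the
    range to the laser spot follows from simple trigonometry."""
    return baseline_m / math.tan(math.radians(pointing_angle_deg))

# a 10 cm baseline and a laser spot seen 2 degrees off the optical axis
d = laser_distance(0.10, 2.0)
```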
Noise tolerant dendritic lattice associative memories
NASA Astrophysics Data System (ADS)
Ritter, Gerhard X.; Schmalz, Mark S.; Hayden, Eric; Tucker, Marc
2011-09-01
Linear classifiers based on computation over the real numbers R (e.g., with operations of addition and multiplication), denoted by (R, +, ×), have been represented extensively in the literature of pattern recognition. However, a different approach to pattern classification involves the use of addition, maximum, and minimum operations over the reals in the algebra (R, +, maximum, minimum). These pattern classifiers, based on lattice algebra, have been shown to exhibit superior information storage capacity, fast training and short convergence times, high pattern classification accuracy, and low computational cost. Such attributes are not always found, for example, in classical neural nets based on the linear inner product. In a special type of lattice associative memory (LAM), called a dendritic LAM or DLAM, it is possible to achieve noise-tolerant pattern classification by varying the design of noise or error acceptance bounds. This paper presents theory and algorithmic approaches for the computation of noise-tolerant lattice associative memories (LAMs) under a variety of input constraints. Of particular interest is the classification of nonergodic data in noise regimes with time-varying statistics. DLAMs, which are a specialization of LAMs derived from concepts of biological neural networks, have successfully been applied to pattern classification from hyperspectral remote sensing data, as well as spatial object recognition from digital imagery. The authors' recent research in the development of DLAMs is overviewed, with experimental results that show utility for a wide variety of pattern classification applications. Performance results are presented in terms of measured computational cost, noise tolerance, classification accuracy, and throughput for a variety of input data and noise levels.
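The (R, +, maximum, minimum) computation can be illustrated with a minimal lattice associative memory. This is a simplified sketch of the Ritter-style min-memory with max-plus recall, not the paper's full DLAM with designed error-acceptance bounds:

```python
import numpy as np

def lattice_memory(patterns):
    """Minimal lattice associative memory sketch: W[i, j] is the minimum
    over stored patterns of (x[i] - x[j]). Recall via the max-plus product
    reproduces every stored pattern exactly and tolerates erosive
    (downward) noise."""
    X = np.asarray(patterns, dtype=float)        # shape: (n_patterns, dim)
    return (X[:, :, None] - X[:, None, :]).min(axis=0)

def recall(W, x):
    # max-plus matrix-vector product: y[i] = max_j (W[i, j] + x[j])
    return (W + np.asarray(x, dtype=float)[None, :]).max(axis=1)

patterns = [[1.0, 5.0, 3.0], [4.0, 2.0, 6.0]]
W = lattice_memory(patterns)
noisy = [0.5, 5.0, 3.0]                          # erosive noise on pattern 0
restored = recall(W, noisy)
```

Note that training is a single pass of subtractions and minima, with none of the iterative optimization of inner-product networks, which is the source of the fast-training claim above.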
ERIC Educational Resources Information Center
Saw, Kim Guan
2017-01-01
This article revisits the cognitive load theory to explore the use of worked examples to teach a selected topic in a higher level undergraduate physics course for distance learners at the School of Distance Education, Universiti Sains Malaysia. With a break of several years from receiving formal education and having only minimum science…
Takei, Yutaka; Kamikura, Takahisa; Nishi, Taiki; Maeda, Tetsuo; Sakagami, Satoru; Kubo, Minoru; Inaba, Hideo
2016-08-01
To compare the factors associated with survival after out-of-hospital cardiac arrests (OHCAs) among three time-distance areas (defined as interquartile range of time for emergency medical services response to patient's side). From a nationwide, prospectively collected data on 716,608 OHCAs between 2007 and 2012, this study analyzed 193,914 bystander-witnessed OHCAs without pre-hospital physician involvement. Overall neurologically favourable 1-month survival rates were 7.4%, 4.1% and 1.7% for close, intermediate and remote areas, respectively. We classified BCPR by type (compression-only vs. conventional) and by dispatcher-assisted CPR (DA-CPR) (with vs. without); the effects on time-distance area survival were analyzed by BCPR classification. Association of each BCPR classification with survival was affected by time-distance area and arrest aetiology (p<0.05). The survival rates in the remote area were much higher with conventional BCPR than with compression-only BCPR (odds ratio; 95% confidence interval, 1.26; 1.05-1.51) and with BCPR without DA-CPR than with BCPR with DA-CPR (1.54; 1.29-1.82). Accordingly, we classified BCPR into five groups (no BCPR, compression-only with DA-CPR, conventional with DA-CPR, compression-only without DA-CPR, and conventional without DA-CPR) and analyzed for associations with survival, both cardiac and non-cardiac related, in each time-distance area by multivariate logistic regression analysis. In the remote area, conventional BCPR without DA-CPR significantly improved survival after OHCAs of cardiac aetiology, compared with all the other BCPR groups. Other correctable factors associated with survival were short collapse-to-call and call-to-first CPR intervals. Every effort to recruit trained citizens initiating conventional BCPR should be made in remote time-distance areas. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Hydrodynamic chromatography of polystyrene microparticles in micropillar array columns.
Op de Beeck, Jeff; De Malsche, Wim; Vangelooven, Joris; Gardeniers, Han; Desmet, Gert
2010-09-24
We report on the possibility to perform HDC in micropillar array columns and the potential advantages of such a system. The HDC performance of a pillar array column with pillar diameter = 5 microm and an interpillar distance of 2.5 microm has been characterized using both a low MW tracer (FITC) and differently sized polystyrene bead samples (100, 200 and 500 nm). The reduced plate height curves that were obtained for the different investigated markers all overlapped very well, and attained a minimum value of about h(min)=0.3 (reduction based on the pillar diameter), corresponding to 1.6 microm in absolute value and giving good prospects for high efficiency separations. The obtained reduced retention time values were in fair agreement with that predicted by the Di Marzio and Guttman model for a flow between flat plates, using the minimal interpillar distance as characteristic interplate distance. Copyright 2010 Elsevier B.V. All rights reserved.
Evaluation of Semi-supervised Learning for Classification of Protein Crystallization Imagery
Sigdel, Madhav; Dinç, İmren; Dinç, Semih; Sigdel, Madhu S.; Pusey, Marc L.; Aygün, Ramazan S.
2015-01-01
In this paper, we investigate the performance of two wrapper methods for semi-supervised learning algorithms for classification of protein crystallization images with limited labeled images. Firstly, we evaluate the performance of semi-supervised approach using self-training with naïve Bayesian (NB) and sequential minimal optimization (SMO) as the base classifiers. The confidence values returned by these classifiers are used to select high confident predictions to be used for self-training. Secondly, we analyze the performance of Yet Another Two Stage Idea (YATSI) semi-supervised learning using NB, SMO, multilayer perceptron (MLP), J48 and random forest (RF) classifiers. These results are compared with the basic supervised learning using the same training sets. We perform our experiments on a dataset consisting of 2250 protein crystallization images for different proportions of training and test data. Our results indicate that NB and SMO using both self-training and YATSI semi-supervised approaches improve accuracies with respect to supervised learning. On the other hand, MLP, J48 and RF perform better using basic supervised learning. Overall, random forest classifier yields the best accuracy with supervised learning for our dataset. PMID:25914518
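The self-training wrapper can be sketched with scikit-learn's `SelfTrainingClassifier`, which plays the role of the wrapper described above (synthetic features stand in for the crystallization images, and only the NB-style base classifier is shown):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.semi_supervised import SelfTrainingClassifier

# Sketch of self-training with a naive Bayes base classifier: high-
# confidence predictions on unlabeled data are added to the training set.
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(4, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

y_semi = np.full_like(y, -1)           # -1 marks unlabeled samples
labeled = rng.choice(200, size=20, replace=False)
y_semi[labeled] = y[labeled]           # keep only 10% of the labels

model = SelfTrainingClassifier(GaussianNB(), threshold=0.9)
model.fit(X, y_semi)
acc = model.score(X, y)
```

The `threshold` parameter corresponds to the confidence cut used to decide which predictions are trusted enough to join the training set.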
Analysis of signals under compositional noise with applications to SONAR data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tucker, J. Derek; Wu, Wei; Srivastava, Anuj
2013-07-09
In this paper, we consider the problem of denoising and classification of SONAR signals observed under compositional noise, i.e., they have been warped randomly along the x-axis. The traditional techniques do not account for such noise and, consequently, cannot provide a robust classification of signals. We apply a recent framework that: 1) uses a distance-based objective function for data alignment and noise reduction; and 2) leads to warping-invariant distances between signals for robust clustering and classification. We use this framework to introduce two distances that can be used for signal classification: a) a y-distance, which is the distance between the aligned signals; and b) an x-distance that measures the amount of warping needed to align the signals. We focus on the task of clustering and classifying objects, using acoustic spectrum (acoustic color), which is complicated by the uncertainties in aspect angles at data collections. Small changes in the aspect angles corrupt signals in a way that amounts to compositional noise. As a result, we demonstrate the use of the developed metrics in classification of acoustic color data and highlight improvements in signal classification over current methods.
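The two distances can be illustrated with plain dynamic time warping as a simplified stand-in for the elastic framework (the actual method uses a more principled alignment, not DTW): after aligning, the residual amplitude difference gives a y-distance and the deviation of the warp from the identity gives an x-distance.

```python
import numpy as np

def dtw_align(f, g):
    """Simplified stand-in for the alignment framework above: align g to
    f with DTW, then report a y-distance (amplitude difference after
    alignment) and an x-distance (how far the warp strays from identity)."""
    n, m = len(f), len(g)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = (f[i - 1] - g[j - 1]) ** 2
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack the optimal warping path
    path, (i, j) = [], (n, m)
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        steps = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min(steps, key=lambda s: D[s])
    path.reverse()
    y_dist = np.sqrt(np.mean([(f[a] - g[b]) ** 2 for a, b in path]))
    x_dist = np.mean([abs(a - b) for a, b in path]) / n
    return y_dist, x_dist

t = np.linspace(0, 1, 60)
f = np.sin(2 * np.pi * t)
g = np.sin(2 * np.pi * t ** 1.3)       # same signal, warped along x
y_dist, x_dist = dtw_align(f, g)
```

For this pair the y-distance is small (same shape) while the x-distance is nonzero (warping was needed), which is exactly the decomposition the abstract describes.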
Waste-to-Energy Thermal Destruction Identification for Forward Operating Bases
2016-07-01
waste disposal strategy is to simplify the technology development goals. Specifically, we recommend a goal of reducing total net energy consumption ...to net zero. The minimum objective should be the lowest possible fuel consumption per unit of waste disposed. By shifting the focus from W2E to waste...over long distances increases the risks to military personnel and contractors. Because fuel is a limited resource at FOBs, diesel fuel consumption
The magnetic sense and its use in long-distance navigation by animals.
Walker, Michael M; Dennis, Todd E; Kirschvink, Joseph L
2002-12-01
True navigation by animals is likely to depend on events occurring in the individual cells that detect magnetic fields. Minimum thresholds of detection, perception and 'interpretation' of magnetic field stimuli must be met if animals are to use a magnetic sense to navigate. Recent technological advances in animal tracking devices now make it possible to test predictions from models of navigation based on the use of variations in magnetic intensity.
A machine learned classifier for RR Lyrae in the VVV survey
NASA Astrophysics Data System (ADS)
Elorrieta, Felipe; Eyheramendy, Susana; Jordán, Andrés; Dékány, István; Catelan, Márcio; Angeloni, Rodolfo; Alonso-García, Javier; Contreras-Ramos, Rodrigo; Gran, Felipe; Hajdu, Gergely; Espinoza, Néstor; Saito, Roberto K.; Minniti, Dante
2016-11-01
Variable stars of RR Lyrae type are a prime tool with which to obtain distances to old stellar populations in the Milky Way. One of the main aims of the Vista Variables in the Via Lactea (VVV) near-infrared survey is to use them to map the structure of the Galactic Bulge. Owing to the large number of expected sources, this requires an automated mechanism for selecting RR Lyrae, and particularly those of the more easily recognized type ab (i.e., fundamental-mode pulsators), from the 10^6-10^7 variables expected in the VVV survey area. In this work we describe a supervised machine-learned classifier constructed for assigning a score to a Ks-band VVV light curve that indicates its likelihood of being an ab-type RR Lyrae. We describe the key steps in the construction of the classifier, which were the choice of features, training set, selection of aperture, and family of classifiers. We find that the AdaBoost family of classifiers consistently gives the best performance for our problem, and obtain a classifier based on the AdaBoost algorithm that achieves a harmonic mean between false positives and false negatives of ≈7% for typical VVV light-curve sets. This performance is estimated using cross-validation and through comparison to two independent datasets that were classified by human experts.
Comparing exposure metrics for classifying ‘dangerous heat’ in heat wave and health warning systems
Zhang, Kai; Rood, Richard B.; Michailidis, George; Oswald, Evan M.; Schwartz, Joel D.; Zanobetti, Antonella; Ebi, Kristie L.; O’Neill, Marie S.
2012-01-01
Heat waves have been linked to excess mortality and morbidity, and are projected to increase in frequency and intensity with a warming climate. This study compares exposure metrics used to trigger heat wave and health warning systems (HHWS), and introduces a novel multi-level hybrid clustering method to identify potentially dangerous hot days. Two-level and three-level hybrid clustering analyses, as well as common indices used to trigger HHWS, including spatial synoptic classification (SSC) and the 90th, 95th, and 99th percentiles of minimum and relative minimum temperature (using a 10-day reference period), were calculated using a summertime weather dataset for Detroit from 1976 to 2006. The days classified as 'hot' by the hybrid clustering analysis, SSC, and the minimum and relative minimum temperature methods differed by method type. SSC tended to include days with, on average, 2.6 °C lower daily minimum temperature and 5.3 °C lower dew point than days identified by the other methods. These metrics were evaluated by comparing their performance in predicting excess daily mortality. The 99th percentile of minimum temperature was generally the most predictive, followed by the three-level hybrid clustering method, the 95th percentile of minimum temperature, SSC and others. Our proposed clustering framework has more flexibility and requires substantially less prior meteorological information than the synoptic classification methods. Comparison of these metrics in predicting excess daily mortality suggests that metrics thought to better characterize physiological heat stress by considering several weather conditions simultaneously may not be the same metrics that are better at predicting heat-related mortality, which has significant implications for HHWSs. PMID:22673187
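The simplest of the compared triggers, the percentile-of-minimum-temperature rule, can be sketched as follows. This is an illustrative simplification, not the study's exact procedure; the linear-interpolation percentile and the ≥ cutoff are assumptions.

```python
# Flag days whose daily minimum temperature is at or above a high
# percentile of the summertime record (e.g., the 99th percentile).
import math

def percentile(values, p):
    """Linear-interpolation percentile, 0 <= p <= 100."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    f, c = math.floor(k), math.ceil(k)
    if f == c:
        return s[int(k)]
    return s[f] + (s[c] - s[f]) * (k - f)

def hot_days(daily_min_temps, p=99):
    """Indices of days whose minimum temperature reaches the p-th percentile."""
    cut = percentile(daily_min_temps, p)
    return [i for i, t in enumerate(daily_min_temps) if t >= cut]
```

The relative-minimum-temperature variant would apply the same rule to deviations from a trailing 10-day mean instead of raw minima.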
Minimum Expected Risk Estimation for Near-neighbor Classification
2006-04-01
We consider the problems of class probability estimation and classification when using near-neighbor classifiers, such as k-nearest neighbors (kNN) ... estimate for weighted kNN classifiers with different prior information, for a broad class of risk functions. Theory and simulations show how significant ... the difference is compared to the standard maximum likelihood weighted kNN estimates. Comparisons are made with uniform weights, symmetric weights
de Albuquerque, Victor Hugo C.; Barbosa, Cleisson V.; Silva, Cleiton C.; Moura, Elineudo P.; Rebouças Filho, Pedro P.; Papa, João P.; Tavares, João Manuel R. S.
2015-01-01
Secondary phases, such as Laves phases and carbides, are formed during the final solidification stages of nickel-based superalloy coatings deposited by the gas tungsten arc welding cold wire process. However, when aged at high temperatures, other phases can precipitate in the microstructure, like the γ” and δ phases. This work presents an evaluation of the powerful optimum path forest (OPF) classifier configured with six distance functions to classify background echo and backscattered ultrasonic signals from samples of the Inconel 625 superalloy thermally aged at 650 and 950 °C for 10, 100 and 200 h. The background echo and backscattered ultrasonic signals were acquired using transducers with frequencies of 4 and 5 MHz. The potential of ultrasonic sensor signals combined with the OPF classifier to characterize the microstructures of Inconel 625, thermally aged and in the as-welded condition, was confirmed by the results. The experimental results revealed that the OPF classifier is sufficiently fast (total classification time of 0.316 ms) and accurate (accuracy of 88.75% and harmonic mean of 89.52) for the proposed application. PMID:26024416
AVNM: A Voting based Novel Mathematical Rule for Image Classification.
Vidyarthi, Ankit; Mittal, Namita
2016-12-01
In machine learning, system accuracy depends on the classification result, and classification accuracy plays an imperative role in various domains. A non-parametric classifier like K-Nearest Neighbor (KNN) is among the most widely used classifiers for pattern analysis. Despite its simplicity and effectiveness, the main problem associated with the KNN classifier is the selection of the number of nearest neighbors, i.e. "k", used for computation. At present it is hard to find an optimal value of "k" with any statistical algorithm that gives perfect accuracy in terms of a low misclassification error rate. Motivated by this problem, a new sample-space-reduction weighted voting mathematical rule (AVNM) is proposed for classification in machine learning. The proposed AVNM rule is non-parametric in nature, like KNN. AVNM uses a weighted voting mechanism with sample space reduction to learn and predict the class label of an unidentified sample. AVNM is free from any initial selection of a predefined variable or neighbor selection as found in the KNN algorithm, and it also reduces the effect of outliers. To verify the performance of the proposed AVNM classifier, experiments were performed on 10 standard datasets taken from the UCI database and one manually created dataset. The experimental results show that the proposed AVNM rule outperforms the KNN classifier and its variants: based on the confusion-matrix accuracy parameter, AVNM achieves higher accuracy and a lower error rate than the state-of-the-art KNN algorithm and its variants. The proposed rule automates the selection of the number of nearest neighbors and improves the classification rate for the UCI datasets and the manually created dataset.
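For contrast with rules like AVNM, the classical distance-weighted kNN baseline that the abstract compares against can be sketched as follows. The 1/(d + ε) weighting is one common choice, not the paper's rule.

```python
# Distance-weighted kNN vote: each of the k nearest training points votes
# for its label with weight inversely proportional to its distance.
import math
from collections import defaultdict

def weighted_knn(train, query, k=3):
    """train: list of (features, label) pairs. Returns the winning label."""
    eps = 1e-9  # avoids division by zero for exact matches
    nearest = sorted((math.dist(x, query), y) for x, y in train)[:k]
    votes = defaultdict(float)
    for d, y in nearest:
        votes[y] += 1.0 / (d + eps)
    return max(votes, key=votes.get)
```

The sensitivity of this rule to the choice of k is exactly the problem the abstract's proposed rule aims to remove.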
No clustering for linkage map based on low-copy and undermethylated microsatellites.
Zhou, Yi; Gwaze, David P; Reyes-Valdés, M Humberto; Bui, Thomas; Williams, Claire G
2003-10-01
Clustering has been reported for conifer genetic maps based on hypomethylated or low-copy molecular markers, resulting in uneven marker distribution. To test this, a framework genetic map was constructed from three types of microsatellites: low-copy, undermethylated, and genomic. These Pinus taeda L. microsatellites were mapped using a three-generation pedigree with 118 progeny. The microsatellites were highly informative; of the 32 markers in intercross configuration, 29 were segregating for three or four alleles in the progeny. The sex-averaged map placed 51 of the 95 markers in 15 linkage groups at LOD > 4.0. No clustering or uneven distribution across the genome was observed. The three types of P. taeda microsatellites were randomly dispersed within each linkage group. The 51 microsatellites covered a map distance of 795 cM, an average distance of 21.8 cM between markers, roughly half of the estimated total map length. The minimum and maximum distances between any two bins were 4.4 and 45.3 cM, respectively. These microsatellites provided anchor points for framework mapping for polymorphism in P. taeda and other closely related hard pines.
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan
2018-01-01
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
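The top-level fusion principle named here, linear minimum variance, reduces in the scalar, independent-estimate case to inverse-variance weighting. The sketch below is a textbook simplification, not the paper's unscented multi-sensor formulation.

```python
# Linear minimum-variance fusion of independent scalar estimates:
# weight each local estimate by the inverse of its error variance.
def fuse(estimates, variances):
    """Return (fused_estimate, fused_variance)."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    fused = sum((wi / total) * xi for wi, xi in zip(inv, estimates))
    fused_var = 1.0 / total  # never larger than the best local variance
    return fused, fused_var
```

The fused variance is at most the smallest local variance, which is why fusing N local filters improves on any single one.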
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan
2018-02-06
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
Classification of Animal Movement Behavior through Residence in Space and Time.
Torres, Leigh G; Orben, Rachael A; Tolkova, Irina; Thompson, David R
2017-01-01
Identification and classification of behavior states in animal movement data can be complex, temporally biased, time-intensive, scale-dependent, and unstandardized across studies and taxa. Large movement datasets are increasingly common and there is a need for efficient methods of data exploration that adjust to the individual variability of each track. We present the Residence in Space and Time (RST) method to classify behavior patterns in movement data based on the concept that behavior states can be partitioned by the amount of space and time occupied in an area of constant scale. Using normalized values of Residence Time and Residence Distance within a constant search radius, RST is able to differentiate behavior patterns that are time-intensive (e.g., rest), time & distance-intensive (e.g., area restricted search), and transit (short time and distance). We use grey-headed albatross (Thalassarche chrysostoma) GPS tracks to demonstrate RST's ability to classify behavior patterns and adjust to the inherent scale and individuality of each track. Next, we evaluate RST's ability to discriminate between behavior states relative to other classical movement metrics. We then temporally sub-sample albatross track data to illustrate RST's response to less resolved data. Finally, we evaluate RST's performance using datasets from four taxa with diverse ecology, functional scales, ecosystems, and data-types. We conclude that RST is a robust, rapid, and flexible method for detailed exploratory analysis and meta-analyses of behavioral states in animal movement data based on its ability to integrate distance and time measurements into one descriptive metric of behavior groupings. Given the increasing amount of animal movement data collected, it is timely and useful to implement a consistent metric of behavior classification to enable efficient and comparative analyses. 
Overall, the application of RST to objectively explore and compare behavior patterns in movement data can enhance our fine- and broad- scale understanding of animal movement ecology.
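The core RST quantities can be sketched per track fix as follows. This is a simplified illustration of the idea (count time and path distance accumulated within a constant search radius), with discrete fix counts standing in for residence time; it is not the authors' implementation.

```python
# For each fix, accumulate time (fix count) and path distance spent
# within a constant search radius of that fix; contrasting the two
# separates rest (time-intensive), area-restricted search
# (time & distance-intensive), and transit (low on both).
import math

def rst_point(track, i, radius):
    """Residence time (fix count) and path distance within `radius` of track[i]."""
    time_in, dist_in = 0, 0.0
    for j, p in enumerate(track):
        if math.dist(p, track[i]) <= radius:
            time_in += 1
            # add the step length only when the previous fix is also in-radius
            if j > 0 and math.dist(track[j - 1], track[i]) <= radius:
                dist_in += math.dist(track[j - 1], p)
    return time_in, dist_in
```

Normalizing these two values across a track, as the abstract describes, makes the classification adjust to each individual's scale.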
NASA Astrophysics Data System (ADS)
Kuijf, Hugo J.; Moeskops, Pim; de Vos, Bob D.; Bouvy, Willem H.; de Bresser, Jeroen; Biessels, Geert Jan; Viergever, Max A.; Vincken, Koen L.
2016-03-01
Novelty detection is concerned with identifying test data that differ from the training data of a classifier. In the case of brain MR images, pathology or imaging artefacts are examples of untrained data. In this proof-of-principle study, we measure the behaviour of a classifier during the classification of trained labels (i.e. normal brain tissue). Next, we devise a measure that distinguishes normal classifier behaviour from the abnormal behaviour that occurs in the case of a novelty. This is evaluated by training a kNN classifier on normal brain tissue, applying it to images with an untrained pathology (white matter hyperintensities (WMH)), and determining whether our measure is able to identify abnormal classifier behaviour at WMH locations. For our kNN classifier, behaviour is modelled as the mean, median, or q1 distance to the k nearest points. Healthy tissue was trained on 15 images; classifier behaviour was trained/tested on 5 images with leave-one-out cross-validation. For each trained class, we measure the distribution of mean/median/q1 distances to the k nearest points. Next, for each test voxel, we compute its Z-score with respect to the measured distribution of its predicted label. We consider a Z-score ≥ 4 abnormal behaviour of the classifier, which has a probability due to chance of 0.000032. Our measure identified >90% of WMH volume and also highlighted other non-trained findings, predominantly vessels, the cerebral falx, brain mask errors, and choroid plexus. This measure is generalizable to other classifiers and might help in detecting unexpected findings or novelties by measuring classifier behaviour.
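The novelty measure described above (a Z-score of the mean k-nearest-neighbour distance against the training distribution) can be sketched in a few lines. The leave-one-out baseline and Euclidean feature distance are illustrative assumptions.

```python
# Model 'normal' behaviour as the distribution of mean kNN distances seen
# on the training data, then flag test points whose mean kNN distance has
# a large Z-score under that distribution.
import math
import statistics

def mean_knn_dist(point, train, k=3):
    d = sorted(math.dist(point, t) for t in train)
    return sum(d[:k]) / k

def novelty_z(point, train, k=3):
    """Z-score of the point's mean kNN distance vs. the training baseline."""
    baseline = [mean_knn_dist(t, [u for u in train if u is not t], k)
                for t in train]
    mu, sd = statistics.mean(baseline), statistics.stdev(baseline)
    return (mean_knn_dist(point, train, k) - mu) / sd
```

A cutoff such as Z ≥ 4 then separates trained from untrained regions, as in the study.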
Farley, Carlton; Kassu, Aschalew; Bose, Nayana; Jackson-Davis, Armitra; Boateng, Judith; Ruffin, Paul; Sharma, Anup
2017-06-01
A short-distance standoff Raman technique is demonstrated for detecting economically motivated adulteration (EMA) in extra virgin olive oil (EVOO). Using a portable Raman spectrometer operating with a 785 nm laser and a 2-in. refracting telescope, adulteration of olive oil with grapeseed oil and canola oil is detected between 1% and 100%, at a minimum concentration of 2.5% from a distance of 15 cm and at a minimum concentration of 5% from a distance of 1 m. The technique involves correlating the intensity ratios of prominent Raman bands of edible oils at 1254, 1657, and 1441 cm⁻¹ to the degree of adulteration. As a novel variation in the data analysis technique, integrated intensities over a spectral range of 100 cm⁻¹ around each Raman line were used, making it possible to increase the sensitivity of the technique. The technique is demonstrated by detecting adulteration of EVOO with grapeseed and canola oils at 0-100%. Due to the potential of this technique for making measurements from a convenient distance, the short-distance standoff Raman technique promises to be useful for routine applications in the food industry, such as identifying food items and monitoring EMA at various checkpoints in the food supply chain and storage facilities.
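The band-ratio step can be sketched as follows: integrate spectral intensity over a window around each Raman band and form the ratio. The ±50 cm⁻¹ half-width (a 100 cm⁻¹ range) follows the abstract; the simple summation in place of proper numerical integration and baseline correction is an illustrative assumption.

```python
# Integrate intensity over a window around each band and form the
# band ratio that is correlated with the degree of adulteration.
def band_area(wavenumbers, intensities, center, half_width=50.0):
    """Sum of intensities within center ± half_width (cm⁻¹)."""
    return sum(I for w, I in zip(wavenumbers, intensities)
               if abs(w - center) <= half_width)

def band_ratio(wavenumbers, intensities, band_a, band_b):
    return (band_area(wavenumbers, intensities, band_a)
            / band_area(wavenumbers, intensities, band_b))
```

With real spectra, a calibration curve of this ratio against known adulterant concentrations would give the quantitative readout.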
Lazy orbits: An optimization problem on the sphere
NASA Astrophysics Data System (ADS)
Vincze, Csaba
2018-01-01
Non-transitive subgroups of the orthogonal group play an important role in non-Euclidean geometry. If G is a closed subgroup of the orthogonal group such that the orbit of a single Euclidean unit vector does not cover the (Euclidean) unit sphere centered at the origin, then there always exists a non-Euclidean Minkowski functional such that the elements of G preserve the Minkowskian length of vectors. In other words, the Minkowski geometry is an alternative to the Euclidean geometry for the subgroup G. It is rich in isometries if G is "close enough" to the orthogonal group or at least to one of its transitive subgroups. The measure of non-transitivity is related to the Hausdorff distances of the orbits under the elements of G from the Euclidean sphere. Its maximum/minimum belongs to the so-called lazy/busy orbits, i.e. they are the solutions of an optimization problem on the Euclidean sphere. The extremal distances allow us to characterize the reducible/irreducible subgroups. We also formulate upper and lower bounds for the ratio of the extremal distances. As another application of the analytic tools, we introduce the rank of a closed non-transitive group G. We shall see that if G is of maximal rank then it is finite or reducible. Since the reducible and the finite subgroups form two natural prototypes of non-transitive subgroups, the rank seems to be a fundamental notion in their characterization. Closed, non-transitive groups of rank n - 1 will also be characterized. Using the general results, we classify all their possible types in the lower-dimensional cases n = 2, 3 and 4. Finally, we present some applications of the results to the holonomy group of a metric linear connection on a connected Riemannian manifold.
47 CFR 73.610 - Minimum distance separations between stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... they fail to comply with the requirements specified in paragraphs (b), (c) and (d) of this section... separation. (c) Minimum allotment and station adjacent channel separations applicable to all zones: (1... pairs of channels (see § 73.603(a)). (d) In addition to the requirements of paragraphs (a), (b) and (c...
Utilizing patient geographic information system data to plan telemedicine service locations.
Soares, Neelkamal; Dewalle, Joseph; Marsh, Ben
2017-09-01
To understand potential utilization of clinical services at a rural integrated health care system by generating optimal groups of telemedicine locations from electronic health record (EHR) data using geographic information systems (GISs). This retrospective study extracted nonidentifiable grouped data of patients over a 2-year period from the EHR, including geomasked locations. Spatially optimal groupings were created using available telemedicine sites by calculating patients' average travel distance (ATD) to the closest clinic site. A total of 4027 visits by 2049 unique patients were analyzed. The best travel distances for site groupings of 3, 4, 5, or 6 site locations were ranked based on increasing ATD. Each one-site increase in the number of available telemedicine sites decreased minimum ATD by about 8%. For a given group size, the best groupings were very similar in minimum travel distance. There were significant differences in predicted patient load imbalance between otherwise similar groupings. A majority of the best site groupings used the same small number of sites, and urban sites were heavily used. With EHR geospatial data at an individual patient level, we can model potential telemedicine sites for specialty access in a rural geographic area. Relatively few sites could serve most of the population. Direct access to patient GIS data from an EHR provides direct knowledge of the client base compared to methods that allocate aggregated data. Geospatial data and methods can assist health care location planning, generating data about load, load balance, and spatial accessibility.
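The site-selection step can be sketched as an exhaustive search over candidate groups ranked by ATD. Straight-line distance stands in for road travel distance, and the brute-force enumeration is an illustrative assumption that only scales to small candidate sets.

```python
# Rank candidate groups of telemedicine sites by patients' average
# travel distance (ATD) to the closest site in the group.
import math
from itertools import combinations

def atd(patients, sites):
    """Average distance from each patient to their nearest site."""
    return sum(min(math.dist(p, s) for s in sites)
               for p in patients) / len(patients)

def best_grouping(patients, candidate_sites, group_size):
    """Group of `group_size` candidate sites minimizing ATD."""
    return min(combinations(candidate_sites, group_size),
               key=lambda g: atd(patients, g))
```

Load balance, the study's second criterion, could be added by also counting how many patients each chosen site would serve.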
DISCOVERY OF NUCLEAR WATER MASER EMISSION IN CENTAURUS A
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ott, Juergen; Meier, David S.; Walter, Fabian
2013-07-10
We report the detection of a 22 GHz water maser line in the nearest (D ≈ 3.8 Mpc) radio galaxy Centaurus A (Cen A) using the Australia Telescope Compact Array (ATCA). The line is centered at a velocity of ≈960 km s⁻¹, which is redshifted by about 415 km s⁻¹ from the systemic velocity. Such an offset, as well as the width of ≈120 km s⁻¹, could be consistent with either a nuclear maser arising from an accretion disk of the central supermassive black hole (SMBH), or with a jet maser that is emitted from the material that is shocked near the base of the jet in Cen A. The best spatial resolution of our ATCA data constrains the origin of the maser feature to within < 3 pc of the SMBH. The maser exhibits an isotropic luminosity of ≈1 L⊙, which classifies it as a kilomaser, and appears to be variable on timescales of months. A kilomaser can also be emitted by shocked gas in star-forming regions. Given the small projected distance from the core, the large offset from systemic velocity, and the smoothness of the line feature, we conclude that a jet maser line emitted by shocked gas around the base of the active galactic nucleus is the most likely explanation. For this scenario we can infer a minimum density of the radio jet of ≳10 cm⁻³, which indicates substantial mass entrainment of surrounding gas into the propagating jet material.
[Urban heat island intensity and its grading in Liaoning Province of Northeast China].
Li, Li-Guang; Wang, Hong-Bo; Jia, Qing-Yu; Lü, Guo-Hong; Wang, Xiao-Ying; Zhang, Yu-Shu; Ai, Jing-Feng
2012-05-01
According to the recorded air temperature data and their continuity at each weather station, the location of each weather station, the numbers of and distances among the weather stations, and the records of weather station migration, several weather stations in Liaoning Province were selected as the urban and rural representative stations to study the characteristics of urban heat island (UHI) intensity in the province. Based on the annual and monthly air temperature data of the representative stations, the ranges and amplitudes of the UHI intensity were analyzed, and the grades of the UHI intensity were classified. The Tieling, Dalian, Anshan, Chaoyang, Dandong, and Jinzhou stations and the 18 stations including Tai'an were selected as the representative urban and rural weather stations, respectively. In 1980-2009, the changes of the annual UHI intensity in the 6 representative cities differed. The annual UHI intensity in Tieling was in a decreasing trend, while that in the other five cities was in an increasing trend. The UHI intensity was strong in Tieling but weak in Dalian. The changes of the monthly UHI intensity in the 6 representative cities also differed. The distribution of the monthly UHI intensity in Dandong, Jinzhou and Tieling took a "U" shape, with the maximum and minimum appearing in January and in May-August, respectively, indicating that the monthly UHI intensity was strong in winter and weak in summer. The ranges of the annual and monthly UHI intensity in the 6 cities were 0.57-2.15 °C and -0.70 to 4.60 °C, with the range of 0.5-2.0 °C accounting for 97.8% and 72.3%, respectively. The UHI intensity in the province could be classified into 4 grades, i.e., weak, strong, stronger and strongest.
Interest communities and flow roles in directed networks: the Twitter network of the UK riots
Beguerisse-Díaz, Mariano; Garduño-Hernández, Guillermo; Vangelov, Borislav; Yaliraki, Sophia N.; Barahona, Mauricio
2014-01-01
Directionality is a crucial ingredient in many complex networks in which information, energy or influence are transmitted. In such directed networks, analysing flows (and not only the strength of connections) is crucial to reveal important features of the network that might go undetected if the orientation of connections is ignored. We showcase here a flow-based approach for community detection through the study of the network of the most influential Twitter users during the 2011 riots in England. Firstly, we use directed Markov Stability to extract descriptions of the network at different levels of coarseness in terms of interest communities, i.e. groups of nodes within which flows of information are contained and reinforced. Such interest communities reveal user groupings according to location, profession, employer and topic. The study of flows also allows us to generate an interest distance, which affords a personalized view of the attention in the network as viewed from the vantage point of any given user. Secondly, we analyse the profiles of incoming and outgoing long-range flows with a combined approach of role-based similarity and the novel relaxed minimum spanning tree algorithm to reveal that the users in the network can be classified into five roles. These flow roles go beyond the standard leader/follower dichotomy and differ from classifications based on regular/structural equivalence. We then show that the interest communities fall into distinct informational organigrams characterized by a different mix of user roles reflecting the quality of dialogue within them. Our generic framework can be used to provide insight into how flows are generated, distributed, preserved and consumed in directed networks. PMID:25297320
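The relaxed minimum spanning tree used in the role analysis builds on a standard MST over node similarity profiles. A plain Prim's algorithm over a distance matrix is sketched below; the relaxation step that re-adds near-redundant edges is omitted, so this is the building block rather than the full RMST method.

```python
# Prim's algorithm: grow a minimum spanning tree from node 0 by repeatedly
# adding the cheapest edge that connects the tree to a new node.
def prim_mst(dist):
    """dist: symmetric n x n distance matrix. Returns list of MST edges (i, j)."""
    n = len(dist)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.append((i, j))
        in_tree.add(j)
    return edges
```

On flow-profile distances between users, the MST backbone is what the relaxation then augments to reveal role structure.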
Distance learning in discriminative vector quantization.
Schneider, Petra; Biehl, Michael; Hammer, Barbara
2009-10-01
Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
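The prototype-update idea underlying all the LVQ variants discussed here can be sketched with the original LVQ1 rule. This uses the plain Euclidean distance; the letter's contribution, adapting a full metric matrix alongside the prototypes, is not shown.

```python
# LVQ1 update: find the closest prototype to a training sample and move it
# toward the sample if the labels match, away from it otherwise.
import math

def lvq1_step(prototypes, x, y, lr=0.1):
    """prototypes: list of (weight_vector, label). Mutates and returns it."""
    k = min(range(len(prototypes)),
            key=lambda i: math.dist(prototypes[i][0], x))
    w, label = prototypes[k]
    sign = 1.0 if label == y else -1.0
    prototypes[k] = ([wi + sign * lr * (xi - wi) for wi, xi in zip(w, x)], label)
    return prototypes
```

Relevance and matrix learning replace `math.dist` here with a learned, data-driven metric, which is the extension the abstract evaluates.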
NASA Astrophysics Data System (ADS)
De, Sandip; Schaefer, Bastian; Sadeghi, Ali; Sicher, Michael; Kanhere, D. G.; Goedecker, Stefan
2014-02-01
Based on a recently introduced metric for measuring distances between configurations, we introduce distance-energy (DE) plots to characterize the potential energy surface of clusters. Producing such plots is computationally feasible on the density functional level since it requires only a few hundred stable low energy configurations including the global minimum. By using standard criteria based on disconnectivity graphs and the dynamics of Lennard-Jones clusters, we show that the DE plots convey the necessary information about the character of the potential energy surface and allow us to distinguish between glassy and nonglassy systems. We then apply this analysis to real clusters at the density functional theory level and show that both glassy and nonglassy clusters can be found in simulations. It turns out that among our investigated clusters only those can be synthesized experimentally which exhibit a nonglassy landscape.
"Hot-wire" microfluidic flowmeter based on a microfiber coupler.
Yan, Shao-Cheng; Liu, Zeng-Yong; Li, Cheng; Ge, Shi-Jun; Xu, Fei; Lu, Yan-Qing
2016-12-15
Using an optical microfiber coupler (MC), we present a microfluidic platform for strong direct or indirect light-liquid interaction by wrapping an MC around a functionalized capillary. The light propagating in the MC and the liquid flowing in the capillary can be combined and separated smoothly, maintaining a long-distance interaction without conflict between input and output coupling. Using this approach, we experimentally demonstrate a "hot-wire" microfluidic flowmeter based on a gold-integrated helical MC device. The microfluid inside the glass channel carries away heat, cooling the MC and shifting the resonant wavelength. Due to the long-distance interaction and high temperature sensitivity, the proposed microfluidic flowmeter shows an ultrahigh flow-rate sensitivity of 2.183 nm/(μl/s) at a flow rate of 1 μl/s. The minimum detectable change of the flow rate is around 9 nl/s at 1 μl/s.
Near-Optimal Re-Entry Trajectories for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Chou, H.-C.; Ardema, M. D.; Bowles, J. V.
1997-01-01
A near-optimal guidance law for the descent trajectory for Earth-orbit re-entry of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. A methodology is developed to investigate using both bank angle and altitude as control variables and selecting parameters that maximize various performance functions. The method is based on the energy-state model of the aircraft equations of motion. The major task of this paper is to obtain optimal re-entry trajectories under a variety of performance goals: minimum time, minimum surface temperature, minimum heating, and maximum heading change. Four classes of trajectories were investigated: no banking, optimal left-turn banking, optimal right-turn banking, and optimal bank chattering. The cost function is in general a weighted sum of all performance goals. In particular, the trade-off between minimizing heat load into the vehicle and maximizing cross-range distance is investigated. The results show that the optimization methodology can be used to derive a wide variety of near-optimal trajectories.
Isothermal elastohydrodynamic lubrication of point contacts. 4: Starvation results
NASA Technical Reports Server (NTRS)
Hamrock, B. J.; Dowson, D.
1976-01-01
The influence of lubricant starvation on minimum film thickness was investigated by moving the inlet boundary closer to the contact center. The following expression was derived for the dimensionless inlet distance at the boundary between the fully flooded and starved conditions: m* = 1 + 3.06[(R/b)^2 H]^0.58, where R is the effective radius of curvature, b is the semiminor axis of the contact ellipse, and H is the central film thickness for fully flooded conditions. A corresponding expression was also given based on the minimum film thickness for fully flooded conditions. Therefore, for m < m*, starvation occurs and, for m >= m*, a fully flooded condition exists. Two other expressions were also derived for the central and minimum film thicknesses for a starved condition. Contour plots of the pressure and the film thickness in and around the contact are shown for the fully flooded and starved lubricating conditions, from which the film thickness was observed to decrease substantially as starvation increases.
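The boundary expression can be evaluated directly; the input values below are illustrative, not taken from the paper:

```python
def inlet_distance_boundary(R, b, H):
    """Dimensionless inlet distance m* separating starved (m < m*) from
    fully flooded (m >= m*) conditions, from the central-film-thickness
    form m* = 1 + 3.06 [ (R/b)^2 H ]^0.58."""
    return 1.0 + 3.06 * ((R / b) ** 2 * H) ** 0.58

# Example: effective radius 10 mm, semiminor axis 1 mm, dimensionless
# central film thickness 1e-4 (illustrative numbers only).
m_star = inlet_distance_boundary(10.0, 1.0, 1e-4)
```

For an actual inlet distance m below `m_star` the contact is starved; at or above it, fully flooded conditions apply.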
An absolute method for determination of misalignment of an immersion ultrasonic transducer.
Narayanan, M M; Singh, Narender; Kumar, Anish; Babu Rao, C; Jayakumar, T
2014-12-01
An absolute methodology has been developed for quantification of misalignment of an ultrasonic transducer using a corner-cube retroreflector. The amplitude-based and the time-of-flight (TOF) based C-scans of the reflector are obtained for various misalignments of the transducer. At zero-degree orientation of the transducer, the vertical positions of the maximum amplitude and the minimum TOF in the C-scan coincide. At any other orientation of the transducer with respect to the horizontal plane, there is a vertical shift in the position of the maximum amplitude with respect to the minimum TOF. The position of the minimum TOF remains the same irrespective of the orientation of the transducer and hence is used as a reference for any misalignment of the transducer. With the measurement of the vertical shift and the horizontal distance between the transducer and the vertex of the reflector, the misalignment of the transducer is quantified. Based on the methodology developed in the present study, retroreflectors are placed in the Indian 500 MWe Prototype Fast Breeder Reactor for assessment of the orientation of the ultrasonic transducer prior to the under-sodium ultrasonic scanning for detection of any protrusion of the subassemblies.
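The final quantification step reduces to simple triangulation. The sketch below assumes a plain arctangent relation between the measured vertical shift and the transducer-to-vertex distance; the exact geometric factor depends on the retroreflector optics and is not specified here:

```python
import math

def misalignment_angle_deg(vertical_shift, horizontal_distance):
    """Illustrative estimate of transducer misalignment from the vertical
    shift between the max-amplitude and min-TOF positions and the
    horizontal transducer-to-vertex distance (same length units).
    A simple arctangent geometry is assumed."""
    return math.degrees(math.atan2(vertical_shift, horizontal_distance))
```

For example, a 10 mm shift measured at a 100 mm standoff corresponds to roughly a 5.7 degree tilt under this assumption.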
Functional characteristics of the calcium modulated proteins seen from an evolutionary perspective
NASA Technical Reports Server (NTRS)
Kretsinger, R. H.; Nakayama, S.; Moncrief, N. D.
1991-01-01
We have constructed dendrograms relating 173 EF-hand proteins of known amino acid sequence. We aligned all of these proteins by their EF-hand domains, omitting interdomain regions. Initial dendrograms were computed by minimum mutation distance methods. Using these as starting points, we determined the best dendrogram by the method of maximum parsimony, scored by minimum mutation distance. We identified 14 distinct subfamilies as well as 6 unique proteins that are perhaps the sole representatives of other subfamilies. This information is given in tabular form. Within subfamilies one can easily align interdomain regions. The resulting dendrograms are very similar to those computed using domains only. Dendrograms constructed using pairs of domains show general congruence. However, there are enough exceptions to caution against an overly simple scheme in which one pair of gene duplications leads from a one-domain precursor to a four-domain prototype from which all other forms evolved. The ability to bind calcium was lost and acquired several times during evolution. The distribution of introns does not conform to the dendrogram based on amino acid sequences. The rates of evolution appear to be much slower within subfamilies, especially within calmodulin, than those prior to the definition of subfamily.
Multi-Band Received Signal Strength Fingerprinting Based Indoor Location System
NASA Astrophysics Data System (ADS)
Sertthin, Chinnapat; Fujii, Takeo; Ohtsuki, Tomoaki; Nakagawa, Masao
This paper proposes a new multi-band received signal strength (MRSS) fingerprinting based indoor location system, which adds frequency diversity to the conventional single-band received signal strength (RSS) fingerprinting based indoor location system. The impact of frequency diversity on positioning accuracy is analyzed. The effectiveness of the proposed system is demonstrated experimentally in a non-line-of-sight (NLOS) environment over an area of 103 m2 at Yagami Campus, Keio University. WLAN access points, which simultaneously transmit dual-band signals at 2.4 and 5.2 GHz, are utilized as transmitters; likewise, a dual-band WLAN receiver is utilized as the receiver. Signal distances calculated with both the Manhattan and Euclidean metrics were classified by a K-Nearest Neighbor (KNN) classifier to illustrate the performance of the proposed system. The results confirm that the frequency-diversity attributes of the multi-band signal provide an accuracy improvement of over 50% compared with the conventional single-band system.
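The fingerprinting step can be sketched as a KNN classifier over stored RSS vectors supporting both signal-distance metrics; the access-point readings below are invented for illustration:

```python
import math
from collections import Counter

def knn_classify(query, fingerprints, k=3, metric="euclidean"):
    """KNN over RSS fingerprints; each entry is (rss_vector, location label).
    Supports the Manhattan and Euclidean signal distances."""
    if metric == "manhattan":
        dist = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
    else:
        dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbors = sorted(fingerprints, key=lambda f: dist(query, f[0]))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# Invented dual-band RSS fingerprints in dBm: ((rss_2.4GHz, rss_5.2GHz), location)
fingerprints = [((-40, -70), "P1"), ((-42, -68), "P1"),
                ((-70, -40), "P2"), ((-68, -42), "P2")]
```

In the multi-band setting the fingerprint vector simply concatenates RSS values from both bands, which is where the frequency-diversity gain enters.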
Robust stereo matching with trinary cross color census and triple image-based refinements
NASA Astrophysics Data System (ADS)
Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr
2017-12-01
For future 3D TV broadcasting systems and navigation applications, it is necessary to have accurate stereo matching that can precisely estimate the depth map from two spatially separated cameras. In this paper, we first suggest a trinary cross color (TCC) census transform, which helps to achieve an accurate disparity raw matching cost with low computational cost. Two-pass cost aggregation (TPCA) is formed to compute the aggregation cost, and the disparity map can then be obtained by a range winner-take-all (RWTA) process and a white-hole-filling procedure. To further enhance the accuracy, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Image-based refinements for the mismatched and occluded pixels are then proposed to refine the classified errors. Finally, image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves considerably accurate disparity maps with reasonable computation cost.
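The census idea can be sketched for a single gray-scale window. The TCC transform itself operates on color channels over a cross-shaped support; this simplified square-window, single-channel version only illustrates the trinary quantization and the resulting raw matching cost:

```python
def trinary_census(window, threshold):
    """Trinary census of a square intensity window: each neighbour maps to
    +1, 0, or -1 depending on whether it is brighter than, within
    +/-threshold of, or darker than the centre pixel."""
    n = len(window)
    c = window[n // 2][n // 2]
    trits = []
    for i, row in enumerate(window):
        for j, v in enumerate(row):
            if i == n // 2 and j == n // 2:
                continue  # the centre pixel is not compared with itself
            trits.append(1 if v > c + threshold else (-1 if v < c - threshold else 0))
    return trits

def hamming_cost(a, b):
    """Raw matching cost: number of differing trits between two census strings."""
    return sum(x != y for x, y in zip(a, b))
```

The tolerance band around the centre pixel is what makes the trinary variant more robust to noise than the classic binary census.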
Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis.
Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng
2015-01-01
Brain morphometry study plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By the Poincare uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane or the hyperbolic plane. The uniformization map will distort the surface area elements. The area-distortion factor gives a probability measure on the canonical uniformization space. All the probability measures on a Riemannian manifold form the Wasserstein space. Given any two probability measures, there is a unique optimal mass transport map between them, and the transportation cost defines the Wasserstein distance between them. The Wasserstein distance gives a Riemannian metric for the Wasserstein space. It intrinsically measures the dissimilarities between shapes and thus has the potential for shape classification. To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on the geodesic power Voronoi diagram. Compared to conventional methods, our approach depends solely on Riemannian metrics and is invariant under rigid motions and scalings, thus it intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method. PMID:26221691
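In one dimension the optimal transport map has a closed form, which makes the Wasserstein distance easy to illustrate; the paper's setting of probability measures on curved surfaces is far more involved:

```python
def wasserstein_1d(a, b):
    """1-Wasserstein distance between two equally sized empirical samples
    on the real line: with sorted samples the optimal transport plan pairs
    them in order, so the cost is the mean absolute difference of sorted
    values. (A 1-D special case for illustration only.)"""
    xs, ys = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)
```

Shifting every sample by a constant shifts the distance by exactly that constant, reflecting that Wasserstein distance measures how far mass must move, not merely whether distributions overlap.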
Temporal variation of VOC emission from solvent and water based wood stains
NASA Astrophysics Data System (ADS)
de Gennaro, Gianluigi; Loiotile, Annamaria Demarinis; Fracchiolla, Roberta; Palmisani, Jolanda; Saracino, Maria Rosaria; Tutino, Maria
2015-08-01
Solvent- and water-based wood stains were monitored using a small test emission chamber in order to characterize their emission profiles in terms of total and individual VOCs. The study of the concentration-time profiles of individual VOCs made it possible to identify the compounds emitted at the highest concentrations for each type of stain, to examine their decay curves, and finally to estimate the concentration in a reference room. The solvent-based wood stain was characterized by the highest total VOC emission level (5.7 mg/m3), which decreased over time more slowly than those of the water-based products. The same finding was observed for the main detected compounds: benzene, toluene, ethylbenzene, xylenes, styrene, alpha-pinene and camphene. On the other hand, the highest level of limonene was emitted by a water-based wood stain. However, the concentration-time profile showed that the water-based product was characterized by a remarkable reduction of the time to maximum and minimum emission: the limonene concentration reached its minimum in about half the time compared to the solvent-based product. According to the AgBB evaluation scheme, only one of the investigated water-based wood stains can be classified as a low-emitting product whose use may not determine any potential adverse effect on human health.
NASA Astrophysics Data System (ADS)
Chang, Juntao; Hu, Qinghua; Yu, Daren; Bao, Wen
2011-11-01
Start/unstart detection is one of the most important issues for hypersonic inlets and is also the foundation of protection control for scramjets. Inlet start/unstart detection can be treated as a standard pattern classification problem, and the training-sample costs have to be considered in classifier modeling because CFD numerical simulations and wind tunnel experiments of hypersonic inlets both cost time and money. To address this, the CFD simulation of the inlet is studied as a first step, and the simulation results provide the training data for pattern classification of hypersonic inlet start/unstart. The classifier modeling technology and maximum classifier utility theories are then introduced to analyze the effect of training-data cost on classifier utility. In conclusion, it is useful to introduce support vector machine algorithms to acquire the classifier model of hypersonic inlet start/unstart, and the minimum total cost of the hypersonic inlet start/unstart classifier can be obtained by the maximum classifier utility theories.
Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan
2013-05-01
In this paper, we present an autocontext model (ACM) based automatic liver segmentation algorithm, which combines ACM, multiatlases, and mean-shift techniques to segment the liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image is segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result is obtained by fusing the segmentation results from all atlas spaces via a multiclassifier fusion technique. Specifically, in order to speed up segmentation, given a test image, we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original, inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.
Human-tracking strategies for a six-legged rescue robot based on distance and view
NASA Astrophysics Data System (ADS)
Pan, Yang; Gao, Feng; Qi, Chenkun; Chai, Xun
2016-03-01
Human tracking is an important issue for intelligent robotic control and can be used in many scenarios, such as robotic services and human-robot cooperation. Most current human-tracking methods are targeted at wheeled or tracked robots, and few of them can be used for legged robots. Two novel human-tracking strategies, the view priority strategy and the distance priority strategy, are proposed specifically for legged robots, enabling them to track humans in various complex terrains. The view priority strategy focuses on keeping the human within the robot's view-angle range with priority, while its counterpart, the distance priority strategy, focuses on keeping the human at a reasonable distance with priority. To evaluate these strategies, two indexes (average and minimum tracking capability) are defined. With the help of these indexes, the view priority strategy shows advantages over the distance priority strategy. Optimization is performed in terms of these indexes, giving the robot maximum tracking capability. The simulation results show that the robot can track humans along different paths such as square, circular, sine and screw curves. The two proposed control strategies specifically account for legged-robot characteristics and solve human-tracking problems more efficiently in rescue circumstances.
True Zero-Training Brain-Computer Interfacing – An Online Study
Kindermans, Pieter-Jan; Schreuder, Martijn; Schrauwen, Benjamin; Müller, Klaus-Robert; Tangermann, Michael
2014-01-01
Despite several approaches to realize subject-to-subject transfer of pre-trained classifiers, the full performance of a Brain-Computer Interface (BCI) for a novel user can only be reached by presenting the BCI system with data from the novel user. In typical state-of-the-art BCI systems with a supervised classifier, the labeled data is collected during a calibration recording, in which the user is asked to perform a specific task. Based on the known labels of this recording, the BCI's classifier can learn to decode the individual's brain signals. Unfortunately, this calibration recording consumes valuable time. Furthermore, it is unproductive with respect to the final BCI application, e.g. text entry. Therefore, the calibration period must be reduced to a minimum, which is especially important for patients with a limited concentration ability. The main contribution of this manuscript is an online study on unsupervised learning in an auditory event-related potential (ERP) paradigm. Our results demonstrate that the calibration recording can be bypassed by utilizing an unsupervised classifier that is initialized randomly and updated during usage. Initially, the unsupervised classifier tends to make decoding mistakes, as the classifier might not have seen enough data to build a reliable model. Using a constant re-analysis of the previously spelled symbols, these initially misspelled symbols can be rectified post hoc once the classifier has learned to decode the signals. We compare the spelling performance of our unsupervised approach and of the unsupervised post-hoc approach to the standard supervised calibration-based dogma for n = 10 healthy users. To assess the learning behavior of our approach, it is trained unsupervised from scratch three times per user. Even with the relatively low SNR of an auditory ERP paradigm, the results show that after a limited number of trials (30 trials), the unsupervised approach performs comparably to a classic supervised model.
PMID:25068464
Multi-level bandwidth efficient block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1989-01-01
The multilevel technique is investigated for combining block coding and modulation. There are four parts. In the first part, a formulation is presented for signal sets on which modulation codes are to be constructed. Distance measures on a signal set are defined and their properties are developed. In the second part, a general formulation is presented for multilevel modulation codes in terms of component codes with appropriate Euclidean distances. The distance properties, Euclidean weight distribution and linear structure of multilevel modulation codes are investigated. In the third part, several specific methods for constructing multilevel block modulation codes with interdependency among component codes are proposed. Given a multilevel block modulation code C with no interdependency among the binary component codes, the proposed methods give a multilevel block modulation code C' which has the same rate as C, a minimum squared Euclidean distance not less than that of C, a trellis diagram with the same number of states as that of C, and a smaller number of nearest-neighbor codewords than C. In the last part, the error performance of block modulation codes is analyzed for an AWGN channel based on soft-decision maximum likelihood decoding. Error probabilities of some specific codes are evaluated based on their Euclidean weight distributions and simulation results.
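The minimum squared Euclidean distance of a signal set, the key distance parameter above, can be computed by brute force over all point pairs; M-PSK constellations serve here purely as an example signal set:

```python
import math
from itertools import combinations

def min_squared_euclidean_distance(points):
    """Minimum squared Euclidean distance over all pairs of a signal set."""
    return min(sum((p - q) ** 2 for p, q in zip(a, b))
               for a, b in combinations(points, 2))

def psk_constellation(m):
    """Unit-energy M-PSK signal points on the complex unit circle,
    represented as (x, y) pairs."""
    return [(math.cos(2 * math.pi * k / m), math.sin(2 * math.pi * k / m))
            for k in range(m)]
```

For BPSK the value is 4, for QPSK it is 2, and for 8-PSK it is 2 - sqrt(2), illustrating how denser constellations shrink the distance that coding must then recover.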
Classification of Kiwifruit Grades Based on Fruit Shape Using a Single Camera
Fu, Longsheng; Sun, Shipeng; Li, Rui; Wang, Shaojin
2016-01-01
This study aims to demonstrate the feasibility for classifying kiwifruit into shape grades by adding a single camera to current Chinese sorting lines equipped with weight sensors. Image processing methods are employed to calculate fruit length, maximum diameter of the equatorial section, and projected area. A stepwise multiple linear regression method is applied to select significant variables for predicting minimum diameter of the equatorial section and volume and to establish corresponding estimation models. Results show that length, maximum diameter of the equatorial section and weight are selected to predict the minimum diameter of the equatorial section, with the coefficient of determination of only 0.82 when compared to manual measurements. Weight and length are then selected to estimate the volume, which is in good agreement with the measured one with the coefficient of determination of 0.98. Fruit classification based on the estimated minimum diameter of the equatorial section achieves a low success rate of 84.6%, which is significantly improved using a linear combination of the length/maximum diameter of the equatorial section and projected area/length ratios, reaching 98.3%. Thus, it is possible for Chinese kiwifruit sorting lines to reach international standards of grading kiwifruit on fruit shape classification by adding a single camera. PMID:27376292
Bayes classification of terrain cover using normalized polarimetric data
NASA Technical Reports Server (NTRS)
Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.
1988-01-01
The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.
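The normalization idea can be sketched as follows: classification uses only the direction of the feature vector, so rescaling the data leaves decisions unchanged. This is a simplified minimum-distance stand-in for the NPC's Bayes distance measure, and the class-mean vectors are invented:

```python
import math

def normalize(x):
    """Euclidean-norm normalization: keeps only relative magnitudes."""
    n = math.sqrt(sum(v * v for v in x))
    return tuple(v / n for v in x)

def npc_classify(x, class_means):
    """Minimum-distance decision on normalized data: assign x to the class
    whose normalized mean vector is closest in Euclidean distance."""
    xn = normalize(x)
    def d(mean):
        mn = normalize(mean)
        return sum((a - b) ** 2 for a, b in zip(xn, mn))
    return min(class_means, key=lambda cm: d(cm[0]))[1]

# Invented two-class example (e.g. grass vs. tree scattering signatures).
class_means = [((1.0, 0.0), "grass"), ((0.0, 1.0), "tree")]
```

Note that scaling an input by any positive constant cannot change the decision, which mirrors the paper's point that the NPC is insensitive to absolute magnitudes.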
Random forest classification of stars in the Galactic Centre
NASA Astrophysics Data System (ADS)
Plewa, P. M.
2018-05-01
Near-infrared high-angular resolution imaging observations of the Milky Way's nuclear star cluster have revealed all luminous members of the existing stellar population within the central parsec. Generally, these stars are either evolved late-type giants or massive young, early-type stars. We revisit the problem of stellar classification based on intermediate-band photometry in the K band, with the primary aim of identifying faint early-type candidate stars in the extended vicinity of the central massive black hole. A random forest classifier, trained on a subsample of spectroscopically identified stars, performs similarly well as competitive methods (F1 = 0.85), without involving any model of stellar spectral energy distributions. Advantages of using such a machine-trained classifier are a minimum of required calibration effort, a predictive accuracy expected to improve as more training data become available, and the ease of application to future, larger data sets. By applying this classifier to archive data, we are also able to reproduce the results of previous studies of the spatial distribution and the K-band luminosity function of both the early- and late-type stars.
Brühl, Albert; Planer, Katarina; Hagel, Anja
2018-01-01
A validity test was conducted to determine how care level–based nurse-to-resident ratios compare with actual daily care times per resident in Germany. Stability across different long-term care facilities was tested. Care level–based nurse-to-resident ratios were compared with the standard minimum nurse-to-resident ratios. Levels of care are determined by classification authorities in long-term care insurance programs and are used to distribute resources. Care levels are a powerful tool for classifying authorities in long-term care insurance. We used observer-based measurement of assignable direct and indirect care time in 68 nursing units for 2028 residents across 2 working days. Organizational data were collected at the end of the quarter in which the observation was made. Data were collected from January to March, 2012. We used a null multilevel model with random intercepts and multilevel models with fixed and random slopes to analyze data at both the organization and resident levels. A total of 14% of the variance in total care time per day was explained by membership in nursing units. The impact of care levels on care time differed significantly between nursing units. Forty percent of residents at the lowest care level received less than the standard minimum registered nursing time per day. For facilities that have been significantly disadvantaged in the current staffing system, a higher minimum standard will function more effectively than a complex classification system without scientific controls. PMID:29442533
On Utilizing Optimal and Information Theoretic Syntactic Modeling for Peptide Classification
NASA Astrophysics Data System (ADS)
Aygün, Eser; Oommen, B. John; Cataltepe, Zehra
Syntactic methods in pattern recognition have been used extensively in bioinformatics, and in particular, in the analysis of gene and protein expressions, and in the recognition and classification of bio-sequences. These methods are almost universally distance-based. This paper concerns the use of an Optimal and Information Theoretic (OIT) probabilistic model [11] to achieve peptide classification using the information residing in their syntactic representations. The latter has traditionally been achieved using the edit distances required in the respective peptide comparisons. We advocate that one can model the differences between compared strings as a mutation model consisting of random Substitutions, Insertions and Deletions (SID) obeying the OIT model. Thus, in this paper, we show that the probability measure obtained from the OIT model can be perceived as a sequence similarity metric, using which a Support Vector Machine (SVM)-based peptide classifier, referred to as OIT_SVM, can be devised.
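The edit distance traditionally used for such peptide comparisons, which the OIT model generalizes probabilistically with random substitutions, insertions and deletions, can be computed by standard dynamic programming:

```python
def edit_distance(s, t):
    """Levenshtein distance: the minimum number of substitutions,
    insertions and deletions turning s into t, via a rolling-row DP."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                # deletion from s
                           cur[j - 1] + 1,             # insertion into s
                           prev[j - 1] + (cs != ct)))  # match/substitution
        prev = cur
    return prev[-1]
```

A distance-based peptide classifier would compare a query sequence against labeled exemplars using this value; the OIT approach instead scores the same SID operations with a learned probability model.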
Influence of the geomembrane on time-lapse ERT measurements for leachate injection monitoring.
Audebert, M; Clément, R; Grossin-Debattista, J; Günther, T; Touze-Foltz, N; Moreau, S
2014-04-01
Leachate recirculation is a key process in the operation of municipal waste landfills as bioreactors. To quantify the water content and to evaluate the leachate injection system, in situ methods are required to obtain spatially distributed information, usually electrical resistivity tomography (ERT). However, this method can exhibit false variations in the observations due to several parameters. This study investigates the impact of the geomembrane on ERT measurements; indeed, the geomembrane tends to be ignored in the inversion process in most previously conducted studies. The presence of the geomembrane can change the boundary conditions of the inversion models, which classically assume infinite boundary conditions. Using a numerical modelling approach, the authors demonstrate that a minimum distance is required between the electrode line and the geomembrane to satisfy the conditions of use of the classical inversion tools. This distance is a function of the electrode line length (i.e. of the unit electrode spacing), the array type and the orientation of the electrode line. Moreover, this study shows that if this criterion on the minimum distance is not satisfied, the inversion process can be significantly improved by introducing the complex geometry and the geomembrane location into the inversion tools. These results are finally validated on a field data set gathered on a small municipal solid waste landfill cell where this minimum distance criterion cannot be satisfied.
27 CFR 555.218 - Table of distances for storage of explosive materials.
Code of Federal Regulations, 2010 CFR
2010-04-01
... with traffic volume of more than 3,000 vehicles/day Barricaded Unbarricaded Separation of magazines... explosive materials are defined in § 555.11. (2) When two or more storage magazines are located on the same property, each magazine must comply with the minimum distances specified from inhabited buildings, railways...
An ensemble predictive modeling framework for breast cancer classification.
Nagarajan, Radhakrishnan; Upreti, Meenakshi
2017-12-01
Molecular changes often precede clinical presentation of diseases and can be useful surrogates with potential to assist in informed clinical decision making. Recent studies have demonstrated the usefulness of modeling approaches such as classification that can predict clinical outcomes from molecular expression profiles. While useful, a majority of these approaches implicitly use all molecular markers as features in the classification process, often resulting in a sparse, high-dimensional representation of the samples whose dimensionality is comparable to the sample size. In this study, a variant of the recently proposed ensemble classification approach is used for predicting good- and poor-prognosis breast cancer samples from their molecular expression profiles. In contrast to traditional single and ensemble classifiers, the proposed approach uses multiple base classifiers with varying feature sets obtained from two-dimensional projection of the samples, in conjunction with a majority voting strategy for predicting the class labels. In contrast to our earlier implementation, base classifiers in the ensembles are chosen for maximal sensitivity and minimal redundancy by selecting only those with low average cosine distance. The resulting ensemble sets are subsequently modeled as undirected graphs. The performance of four different classification algorithms is shown to be better within the proposed ensemble framework than when they are used as traditional single classifier systems. The significance of a subset of genes with high-degree centrality in the network abstractions across the poor-prognosis samples is also discussed. Copyright © 2017 Elsevier Inc. All rights reserved.
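The redundancy criterion described above can be illustrated with a small sketch; taking prediction vectors as inputs and averaging over all pairs are assumptions made here for illustration:

```python
import math

def cosine_distance(u, v):
    # 1 - cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return 1.0 - dot / (nu * nv)

def average_cosine_distance(vectors):
    # Mean pairwise cosine distance across a set of classifier
    # prediction vectors; low values indicate redundant classifiers.
    n = len(vectors)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cosine_distance(vectors[i], vectors[j])
               for i, j in pairs) / len(pairs)
```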
14 CFR 91.177 - Minimum altitudes for IFR operations.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., an altitude of 2,000 feet above the highest obstacle within a horizontal distance of 4 nautical miles from the course to be flown; or (ii) In any other case, an altitude of 1,000 feet above the highest... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Minimum altitudes for IFR operations. 91...
Featureless classification of light curves
NASA Astrophysics Data System (ADS)
Kügler, S. D.; Gianniotis, N.; Polsterer, K. L.
2015-08-01
In the era of rapidly increasing amounts of time series data, classification of variable objects has become the main objective of time-domain astronomy. Classification of irregularly sampled time series is particularly difficult because the data cannot be represented naturally as a vector which can be directly fed into a classifier. In the literature, various statistical features serve as vector representations. In this work, we represent time series by a density model. The density model captures all the information available, including measurement errors. Hence, we view this model as a generalization of the static features, which can be derived directly from the density, e.g. as moments. Similarity between each pair of time series is quantified by the distance between their respective models. Classification is performed on the obtained distance matrix. In the numerical experiments, we use data from the OGLE (Optical Gravitational Lensing Experiment) and ASAS (All Sky Automated Survey) surveys and demonstrate that the proposed representation performs on par with the best currently used feature-based approaches. The density representation preserves all static information present in the observational data, in contrast to a less complete description by features. The density representation is thus an upper bound on the information made available to the classifier. Consequently, the predictive power of the proposed classification depends only on the choice of similarity measure and classifier. Due to its principled nature, we advocate that this new approach of representing time series has potential in tasks beyond classification, e.g. unsupervised learning.
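As an illustration of distance-based classification between density models, here is a hedged sketch that fits each series with a single Gaussian and compares the fits with a symmetrised Kullback-Leibler divergence; the paper's actual density models are richer (they also encode measurement errors):

```python
import math
import statistics

def gaussian_fit(values):
    # Fit a one-Gaussian "density model" to a series: (mean, std).
    return statistics.fmean(values), statistics.stdev(values)

def kl_gauss(p, q):
    # Closed-form KL divergence KL(N(m0, s0^2) || N(m1, s1^2)).
    (m0, s0), (m1, s1) = p, q
    return math.log(s1 / s0) + (s0 ** 2 + (m0 - m1) ** 2) / (2 * s1 ** 2) - 0.5

def model_distance(p, q):
    # Symmetrised KL divergence between the two fitted densities,
    # usable as an entry of the pairwise distance matrix.
    return kl_gauss(p, q) + kl_gauss(q, p)
```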
[Genetic differentiation of Isaria farinosa populations in Anhui Province of East China].
Sun, Zhao-Hong; Luan, Feng-Gang; Zhang, Da-Min; Chen, Ming-Jun; Wang, Bin; Li, Zeng-Zhi
2011-11-01
Isaria farinosa is an important entomopathogenic fungus. Using ISSR markers, this paper studied the genetic heterogeneity of six I. farinosa populations at different localities of Anhui Province, East China. A total of 98.5% polymorphic loci were amplified with ten polymorphic primers, but the polymorphism at the population level varied greatly, within the range of 59.6%-93.2%. The genetic differentiation index (G(st)) between the populations based on Nei's genetic diversity analysis was 0.3365, and the gene flow (N(m)) was 0.4931. The genetic differentiation between the populations was lower than that within the populations, suggesting that the genetic variation of I. farinosa mainly comes from within the populations. The UPGMA clustering based on the genetic similarities between the isolates revealed that the Xishan population was monophyletic, while the other five populations were polyphyletic, with the Yaoluoping population being the most heterogeneous and the Langyashan population the least. No correlations were observed between the geographic distance and the genetic distance of the populations. According to the UPGMA clustering based on the genetic distance between the populations, the six populations were classified into three groups, and this classification accorded with the clustering based on geographic environment, suggesting an effect of environmental heterogeneity on population heterogeneity.
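The reported G(st) and N(m) values are consistent with Wright's island-model approximation, which can be checked directly (the formula is a standard population-genetics approximation and is not stated in the abstract):

```python
def gene_flow(gst):
    # Wright's island-model approximation: Nm = (1 - Gst) / (4 * Gst).
    return (1.0 - gst) / (4.0 * gst)

# With Gst = 0.3365 as reported, Nm comes out near the quoted 0.4931.
```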
Detection of cracks in shafts with the Approximated Entropy algorithm
NASA Astrophysics Data System (ADS)
Sampaio, Diego Luchesi; Nicoletti, Rodrigo
2016-05-01
Approximate Entropy is a statistical measure used primarily in the fields of medicine, biology, and telecommunications for classifying and identifying complex signal data. In this work, an Approximate Entropy algorithm is used to detect cracks in a rotating shaft. The signals of the cracked shaft are obtained from numerical simulations of a de Laval rotor with breathing cracks modelled using fracture mechanics. Here, the vertical displacements of the rotor during run-up transients were analysed. The results show the feasibility of detecting cracks from 5% depth onwards, irrespective of the unbalance of the rotating system and the crack orientation in the shaft. The results also show that the algorithm can differentiate between the occurrence of a crack only, misalignment only, and crack + misalignment in the system. However, the algorithm is sensitive to the intrinsic parameters p (number of data points in a sample vector) and f (fraction of the standard deviation that defines the minimum distance between two sample vectors), and good results are only obtained by appropriately choosing their values according to the sampling rate of the signal.
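A minimal sketch of the Approximate Entropy statistic with the two intrinsic parameters the abstract names (written here as m for the sample-vector length and r for the distance tolerance, with r typically set to f times the signal's standard deviation):

```python
import math

def approx_entropy(signal, m, r):
    # ApEn(m, r): low values indicate a regular, predictable signal,
    # higher values an irregular one.
    def phi(mm):
        n = len(signal) - mm + 1
        templates = [signal[i:i + mm] for i in range(n)]
        counts = []
        for t in templates:
            # Chebyshev-distance matches within tolerance r
            # (self-match included, so every count is positive).
            c = sum(1 for u in templates
                    if max(abs(a - b) for a, b in zip(t, u)) <= r)
            counts.append(c / n)
        return sum(math.log(c) for c in counts) / n
    return phi(m) - phi(m + 1)
```

A perfectly periodic signal yields an ApEn near zero, while an irregular one yields a clearly larger value.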
An Isometric Mapping Based Co-Location Decision Tree Algorithm
NASA Astrophysics Data System (ADS)
Zhou, G.; Wei, J.; Zhou, X.; Zhang, R.; Huang, W.; Sha, H.; Chen, J.
2018-05-01
Decision tree (DT) induction has been widely used in pattern classification. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location mining and decision trees to solve the above-mentioned problems of traditional decision trees. Cl-DT overcomes the shortcoming of existing DT algorithms that create a node for each value of a given attribute, and it achieves a higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. To overcome this shortcoming, this paper proposes an isometric mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines isometric mapping and Cl-DT. Because isometric mapping uses geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the proposed method extracts exposed carbonate rocks with high accuracy; and (2) the proposed method has clear advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared with Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.
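The core idea, replacing Euclidean with geodesic (graph shortest-path) distances, can be sketched as follows; the k-nearest-neighbour graph and Floyd-Warshall step mirror only the first stage of Isomap, while the subsequent embedding and Cl-DT induction are omitted:

```python
import math

def euclid(p, q):
    return math.dist(p, q)

def geodesic_distances(points, k=2):
    # Isomap-style approximation: connect each point to its k nearest
    # neighbours, then take graph shortest paths (Floyd-Warshall).
    n = len(points)
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
        nbrs = sorted(range(n), key=lambda j: euclid(points[i], points[j]))[1:k + 1]
        for j in nbrs:
            w = euclid(points[i], points[j])
            d[i][j] = min(d[i][j], w)
            d[j][i] = min(d[j][i], w)
    for m in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][m] + d[m][j] < d[i][j]:
                    d[i][j] = d[i][m] + d[m][j]
    return d

# Points on a semicircle: the geodesic (along-curve) distance between the
# endpoints is larger than their straight-line (chordal) distance.
pts = [(math.cos(t), math.sin(t)) for t in
       [i * math.pi / 8 for i in range(9)]]
```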
Heterogeneous Multi-Metric Learning for Multi-Sensor Fusion
2011-07-01
distance”. One of the most widely used methods is the k-nearest neighbor (KNN) method [4], which labels an input data sample with the class holding the majority... despite its simplicity, it can be an effective candidate and can easily be extended to handle multiple sensors. Distance-based methods such as KNN rely... the Large Margin Nearest Neighbor (LMNN) method [21], which will be briefly reviewed in the sequel. The LMNN method tries to learn an optimal metric specifically for the KNN classifier. The
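The KNN baseline discussed in the excerpt can be sketched in a few lines (Euclidean distance and majority vote; learning an LMNN metric is a separate optimisation not shown here):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    # train: list of (feature_vector, label) pairs. Majority vote among
    # the k nearest training samples under Euclidean distance.
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```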
Classification of Pelteobagrus fish in Poyang Lake based on mitochondrial COI gene sequence.
Zhong, Bin; Chen, Ting-Ting; Gong, Rui-Yue; Zhao, Zhe-Xia; Wang, Binhua; Fang, Chunlin; Mao, Hui-Ling
2016-11-01
We use DNA molecular marker technology to address the deficiencies of traditional morphological taxonomy. A total of 770 Pelteobagrus fish from Poyang Lake were collected. After preliminary morphological classification, eight samples of each species were randomly selected for DNA extraction. The mitochondrial COI gene sequence was cloned with universal primers and sequenced. The results showed that there are four species of Pelteobagrus living in Poyang Lake. The average intraspecific genetic distance was 0.003, while the average interspecific genetic distance was 0.128; the interspecific genetic distance is thus far greater than the intraspecific one. Besides, phylogenetic tree analysis revealed that the molecular systematics was in accord with the morphological classification, indicating that the COI gene is an effective DNA molecular marker for Pelteobagrus classification. Surprisingly, the intraspecific difference of some individuals (P. e6, P. n6, P. e5, and P. v4) from their originally named species exceeded the species threshold (2%), so these should be reclassified into Pelteobagrus fulvidraco. However, another individual, P. v3, was very different: its genetic distance differed by over 8.4% from its originally named species, Pelteobagrus vachelli. Its taxonomic status remains to be studied further.
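The distance computation described above amounts to a pairwise sequence distance plus a fixed species threshold; a minimal sketch using the uncorrected p-distance (barcoding studies often use model-corrected distances such as Kimura-2-parameter instead):

```python
def p_distance(seq1, seq2):
    # Proportion of differing sites between two aligned sequences.
    assert len(seq1) == len(seq2), "sequences must be aligned"
    diffs = sum(a != b for a, b in zip(seq1, seq2))
    return diffs / len(seq1)

def same_species(seq1, seq2, threshold=0.02):
    # Barcoding rule of thumb used in the study: distances above ~2%
    # suggest distinct species.
    return p_distance(seq1, seq2) <= threshold
```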
NASA Astrophysics Data System (ADS)
Elbakary, M. I.; Alam, M. S.; Aslan, M. S.
2008-03-01
In a FLIR image sequence, a target may disappear permanently or may reappear after some frames, and crucial information such as direction, position and size related to the target is lost. If the target reappears at a later frame, it may not be tracked again because the 3D orientation, size and location of the target might have changed. To obtain information about the target before it disappears and to detect the target after it reappears, the distance classifier correlation filter (DCCF) is trained manually by selecting a number of chips randomly. This paper introduces a novel idea to eliminate the manual intervention in the training phase of DCCF. Instead of selecting the training chips manually and choosing their number randomly, we adopted the K-means algorithm to cluster the training frames and, based on the number of clusters, select one training chip per cluster. To detect and track the target after it reappears in the field of view, TBF and DCCF are employed. The conducted experiments using real FLIR sequences show results similar to those of the traditional algorithm, but eliminating the manual intervention is the advantage of the proposed algorithm.
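A hedged sketch of the clustering step, plain Lloyd's K-means on feature vectors, from which one training chip per cluster can then be drawn (the chip-extraction and DCCF-training stages are not shown):

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    # Lloyd's algorithm: alternate nearest-centre assignment and
    # centre recomputation; empty clusters keep their old centre.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl
                   else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters
```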
What Is the Optimal Minimum Penetration Depth for "All-Inside" Meniscal Repairs?
McCulloch, Patrick C; Jones, Hugh L; Lue, Jeffrey; Parekh, Jesal N; Noble, Philip C
2016-08-01
To identify the desired minimum depth setting for safe, effective placement of all-inside meniscal suture anchors. Using 16 cadaveric knees and standard arthroscopic techniques, 3-dimensional surfaces of the meniscocapsular junction and posterior capsule were digitized. Using standard anteromedial and anterolateral portals, the distance from the meniscocapsular junction to the posterior capsule outer wall was measured at 3 locations along the posterior half of the medial and lateral menisci. Multiple all-inside meniscal repairs were performed on 7 knees to determine an alternate measure of capsular thickness (X2), which was compared with the digitized results. In the digitized group, the distance (X1) from the capsular junction to the posterior capsular wall was averaged in both menisci for 3 regions using anteromedial and anterolateral portals. Mean distances of 6.4 to 8.8 mm were found for the lateral meniscus and 6.5 to 9.1 mm for the medial meniscus. The actual penetration depth was determined in the repair group and labeled X2. It showed a pattern similar to the regional variation seen in X1, although it exceeded the predicted distances by an average of 1.7 mm in the medial and 1.5 mm in the lateral meniscus, owing to visible deformation of the capsule as it was pierced. Capsular thickness during arthroscopic repair measures approximately 6 to 9 mm (X1), with 1.5 to 2 mm of additional depth needed to ensure penetration rather than bulging of the posterior capsule (X2), resulting in a minimum penetration depth range of 8 to 10 mm. Surgeons can add the desired distance from the meniscocapsular junction (L) at device implantation to find the optimal minimum setting for penetration depth (X2 + L), which for most repairable tears may be as short as 8 mm and is not likely to be greater than 16 mm.
Minimum depth setting for optimal placement of all-inside meniscal suture anchors when performing all-inside repair of the medial or lateral meniscus reduces risk of harming adjacent structures secondary to overpenetration and underpenetration of the posterior capsule. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Estimating cetacean carrying capacity based on spacing behaviour.
Braithwaite, Janelle E; Meeuwig, Jessica J; Jenner, K Curt S
2012-01-01
Conservation of large ocean wildlife requires an understanding of how they use space. In Western Australia, the humpback whale (Megaptera novaeangliae) population is growing at a minimum rate of 10% per year. An important consideration for conservation-based management in space-limited environments, such as coastal resting areas, is the potential expansion in area use by humpback whales if the carrying capacity of existing areas is exceeded. Here we determined the theoretical carrying capacity of a known humpback resting area based on the spacing behaviour of pods, where a resting area is defined as a sheltered embayment along the coast. Two separate approaches were taken to estimate this spacing distance. The first used the median nearest neighbour distance between pods in relatively dense areas, giving a spacing distance of 2.16 km (± 0.94). The second estimated the spacing distance as the radius at which 50% of the population included no other pods, and was calculated as 1.93 km (range: 1.62-2.50 km). Using these values, the maximum number of pods able to fit into the resting area was 698 and 872 pods, respectively. Given an average observed pod size of 1.7 whales, this equates to a carrying capacity estimate of between 1187 and 1482 whales at any given point in time. This study demonstrates that whale pods do maintain a distance from each other, which may determine the number of animals that can occupy aggregation areas where space is limited. This requirement for space has implications when considering boundaries for protected areas or competition for space with the fishing and resources sectors.
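The first spacing estimate, the median nearest-neighbour distance, is straightforward to compute from pod positions; the packing formula below is an illustrative assumption (one pod per circle of the spacing radius), not the paper's exact model:

```python
import math
import statistics

def median_nn_distance(positions):
    # Median distance from each pod to its nearest neighbour.
    nn = []
    for i, p in enumerate(positions):
        nn.append(min(math.dist(p, q)
                      for j, q in enumerate(positions) if j != i))
    return statistics.median(nn)

def carrying_capacity(area_km2, spacing_km, mean_pod_size=1.7):
    # Crude packing estimate: one pod per circle of radius spacing_km,
    # scaled by the mean observed pod size.
    pods = area_km2 / (math.pi * spacing_km ** 2)
    return pods * mean_pod_size
```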
Francis, Alexander L.; Kaganovich, Natalya; Driscoll-Huber, Courtney
2008-01-01
In English, voiced and voiceless syllable-initial stop consonants differ in both fundamental frequency at the onset of voicing (onset F0) and voice onset time (VOT). Although both correlates, alone, can cue the voicing contrast, listeners weight VOT more heavily when both are available. Such differential weighting may arise from differences in the perceptual distance between voicing categories along the VOT versus onset F0 dimensions, or it may arise from a bias to pay more attention to VOT than to onset F0. The present experiment examines listeners’ use of these two cues when classifying stimuli in which perceptual distance was artificially equated along the two dimensions. Listeners were also trained to categorize stimuli based on one cue at the expense of another. Equating perceptual distance eliminated the expected bias toward VOT before training, but successfully learning to base decisions more on VOT and less on onset F0 was easier than vice versa. Perceptual distance along both dimensions increased for both groups after training, but only VOT-trained listeners showed a decrease in Garner interference. Results lend qualified support to an attentional model of phonetic learning in which learning involves strategic redeployment of selective attention across integral acoustic cues. PMID:18681610
Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo
2011-04-01
The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed system consists of a novel illumination-insensitive preprocessing method, hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction with complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine scores from the multiple complementary classifiers, a log likelihood ratio-based score fusion scheme is applied. The proposed system is evaluated using the face recognition grand challenge (FRGC) experimental protocols; FRGC is a large available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average 81.49% verification rate on 2-D face images under various environmental variations such as illumination changes, expression changes, and time lapses.
Computing group cardinality constraint solutions for logistic regression problems.
Zhang, Yong; Kwon, Dongjin; Pohl, Kilian M
2017-01-01
We derive an algorithm to directly solve logistic regression under a group cardinality (group sparsity) constraint and use it to classify intra-subject MRI sequences (e.g. cine MRIs) of healthy versus diseased subjects. Group cardinality constraint models are often applied to medical images in order to avoid overfitting of the classifier to the training data. Solutions within these models are generally determined by relaxing the cardinality constraint to a weighted feature selection scheme. However, these solutions relate to the original sparse problem only under specific assumptions, which generally do not hold for medical image applications. In addition, inferring clinical meaning from features weighted by a classifier is an ongoing topic of discussion. To avoid weighting features, we propose to directly solve the group cardinality constrained logistic regression problem by generalizing the Penalty Decomposition method. To do so, we assume that an intra-subject series of images represents repeated samples of the same disease patterns. We model this assumption by combining the series of measurements created by a feature across time into a single group. Our algorithm then derives a solution within that model by decoupling the minimization of the logistic regression function from enforcing the group sparsity constraint. The minimum of the smooth and convex logistic regression problem is determined via gradient descent, while we derive a closed form solution for finding a sparse approximation of that minimum. We apply our method to cine MRI of 38 healthy controls and 44 adult patients that received reconstructive surgery of Tetralogy of Fallot (TOF) during infancy. Our method correctly identifies regions impacted by TOF and generally obtains statistically significantly higher classification accuracy than alternative solutions to this model, i.e., ones relaxing group cardinality constraints. Copyright © 2016 Elsevier B.V. All rights reserved.
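The group-sparsity side of the model can be illustrated by the hard projection step that a penalty-decomposition-style scheme alternates with gradient descent; this sketch shows only that projection, under the assumption that groups are disjoint index sets:

```python
import math

def project_group_sparse(weights, groups, k):
    # Hard projection onto the group-cardinality constraint: keep the k
    # groups with the largest L2 norm, zero all features in the others.
    norms = {g: math.sqrt(sum(weights[i] ** 2 for i in idx))
             for g, idx in groups.items()}
    keep = set(sorted(norms, key=norms.get, reverse=True)[:k])
    out = list(weights)
    for g, idx in groups.items():
        if g not in keep:
            for i in idx:
                out[i] = 0.0
    return out
```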
A Note on the Kirchhoff and Additive Degree-Kirchhoff Indices of Graphs
NASA Astrophysics Data System (ADS)
Yang, Yujun; Klein, Douglas J.
2015-06-01
Two resistance-distance-based graph invariants, namely, the Kirchhoff index and the additive degree-Kirchhoff index, are studied. A relation between them is established, with inequalities for the additive degree-Kirchhoff index arising via the Kirchhoff index along with minimum, maximum, and average degrees. Bounds for the Kirchhoff and additive degree-Kirchhoff indices are also determined, and extremal graphs are characterised. In addition, an upper bound for the additive degree-Kirchhoff index is established to improve a previously known result.
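For trees, the resistance distance coincides with the ordinary shortest-path distance, so the Kirchhoff index reduces to the Wiener index; a small sketch of that special case (general graphs require the Laplacian pseudoinverse, not shown here):

```python
from collections import deque
from itertools import combinations

def kirchhoff_index_tree(adj):
    # Kirchhoff index of a tree given as an adjacency dict: the sum of
    # resistance distances over all vertex pairs, which for trees equals
    # the sum of shortest-path distances (the Wiener index).
    def bfs(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist
    nodes = list(adj)
    return sum(bfs(u)[v] for u, v in combinations(nodes, 2))
```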
Qualitative Research in Distance Education: An Analysis of Journal Literature 2005-2012
ERIC Educational Resources Information Center
Hauser, Laura
2013-01-01
This review study examines the current research literature in distance education for the years 2005 to 2012. The author found 382 research articles published during that time in four prominent peer-reviewed research journals. The articles were classified and coded as quantitative, qualitative, or mixed methods. Further analysis found another…
Trends in Correlation-Based Pattern Recognition and Tracking in Forward-Looking Infrared Imagery
Alam, Mohammad S.; Bhuiyan, Sharif M. A.
2014-01-01
In this paper, we review the recent trends and advancements on correlation-based pattern recognition and tracking in forward-looking infrared (FLIR) imagery. In particular, we discuss matched filter-based correlation techniques for target detection and tracking which are widely used for various real time applications. We analyze and present test results involving recently reported matched filters such as the maximum average correlation height (MACH) filter and its variants, and distance classifier correlation filter (DCCF) and its variants. Test results are presented for both single/multiple target detection and tracking using various real-life FLIR image sequences. PMID:25061840
NASA Astrophysics Data System (ADS)
Métivier, L.; Brossier, R.; Mérigot, Q.; Oudet, E.; Virieux, J.
2016-04-01
Full waveform inversion using the conventional L2 distance to measure the misfit between seismograms is known to suffer from cycle skipping. An alternative strategy is proposed in this study, based on a misfit computed with an optimal transport distance. This measure makes it possible to account for the lateral coherency of events within the seismograms, instead of considering each seismic trace independently, as is generally done in full waveform inversion. The computation of this optimal transport distance relies on a particular mathematical formulation allowing for the non-conservation of the total energy between seismograms. The numerical solution of the optimal transport problem is performed using proximal splitting techniques. Three synthetic case studies are investigated using this strategy: the Marmousi 2 model, the BP 2004 salt model, and the Chevron 2014 benchmark data. The results emphasize interesting properties of the optimal transport distance. The associated misfit function is less prone to cycle skipping. A workflow is designed to reconstruct accurately the salt structures in the BP 2004 model, starting from an initial model containing no information about these structures. A high-resolution P-wave velocity estimation is built from the Chevron 2014 benchmark data, following a frequency continuation strategy. This estimation explains the data accurately. Using the same workflow, full waveform inversion based on the L2 distance converges towards a local minimum. These results yield encouraging perspectives regarding the use of the optimal transport distance for full waveform inversion: the sensitivity to the accuracy of the initial model is reduced, the reconstruction of complex salt structures is made possible, the method is robust to noise, and the interpretation of seismic data dominated by reflections is enhanced.
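The contrast with the L2 misfit can be seen in one dimension: the 1-D optimal transport (Wasserstein-1) distance between two pulses grows with the time shift between them, which is what makes the misfit less prone to cycle skipping. This balanced sketch normalises each trace to unit mass, unlike the paper's formulation, which explicitly allows non-conservation of total energy:

```python
def w1_distance(f, g):
    # 1-D Wasserstein-1 distance between two non-negative traces,
    # computed as the cumulative sum of |CDF_f - CDF_g|.
    sf, sg = sum(f), sum(g)
    cf = cg = 0.0
    total = 0.0
    for a, b in zip(f, g):
        cf += a / sf
        cg += b / sg
        total += abs(cf - cg)
    return total

def l2_distance(f, g):
    return sum((a - b) ** 2 for a, b in zip(f, g)) ** 0.5
```

For two unit pulses three samples apart, the W1 misfit equals the shift (3), while the L2 misfit is the same for any non-zero shift.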
Relation between inflammables and ignition sources in aircraft environments
NASA Technical Reports Server (NTRS)
Scull, Wilfred E
1951-01-01
A literature survey was conducted to determine the relation between aircraft ignition sources and inflammables. Available literature applicable to the problem of aircraft fire hazards is analyzed and discussed. Data pertaining to the effect of many variables on ignition temperatures, minimum ignition pressures, minimum spark-ignition energies of inflammables, quenching distances of electrode configurations, and size of openings through which flame will not propagate are presented and discussed. Ignition temperatures and limits of inflammability of gasoline in air in different test environments, and the minimum ignition pressures and minimum size of opening for flame propagation in gasoline-air mixtures are included; inerting of gasoline-air mixtures is discussed.
NASA Astrophysics Data System (ADS)
Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.
2017-12-01
We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas, the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres on a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r in an effective area, using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane, generating the peak ground motion. The appropriate ground motion prediction equations (GMPE) for PSHA can then be applied. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with the centroid fixed at the geometrical mean is discussed; in this model, hazard is reduced at the edges because the effective size is reduced. There is currently a trend of using extended-source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models, separating geometrical and propagation effects.
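For the line case, the distance distribution can be checked by Monte Carlo sampling of sites around a fault segment; the rectangular region and uniform site distribution below are illustrative assumptions, not the paper's analytical solution:

```python
import math
import random

def dist_to_segment(p, a, b):
    # Shortest distance from point p to the line segment ab.
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp to the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def distance_cdf(a, b, region, r, n=20000, seed=1):
    # Monte Carlo estimate of P(distance to fault <= r) for a site drawn
    # uniformly in the rectangle ((xmin, xmax), (ymin, ymax)).
    (x0, x1), (y0, y1) = region
    rng = random.Random(seed)
    hits = sum(dist_to_segment((rng.uniform(x0, x1), rng.uniform(y0, y1)),
                               a, b) <= r
               for _ in range(n))
    return hits / n
```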
Averós, Xavier; Lorea, Areta; Beltrán de Heredia, Ignacia; Arranz, Josune; Ruiz, Roberto; Estevez, Inma
2014-01-01
Space availability is essential to safeguard the welfare of animals. To determine the effect of space availability on movement and space use in pregnant ewes (Ovis aries), 54 individuals were studied during the last 11 weeks of gestation. Three treatments were tested (1, 2, and 3 m2/ewe; 6 ewes/group). Ewes' positions were collected for 15 minutes using continuous scan sampling on two days/week. Total and net distance, net/total distance ratio, maximum and minimum step length, movement activity, angular dispersion, nearest, furthest and mean neighbour distance, peripheral location ratio, and corrected peripheral location ratio were calculated. Restriction in space availability resulted in smaller total travelled distance, net to total distance ratio, maximum step length, and angular dispersion but higher movement activity at 1 m2/ewe as compared to 2 and 3 m2/ewe (P<0.01). On the other hand, nearest and furthest neighbour distances increased from 1 to 3 m2/ewe (P<0.001). The largest total distance, maximum and minimum step length, and movement activity, as well as the lowest net/total distance ratio and angular dispersion, were observed during the first weeks (P<0.05), while inter-individual distances increased through gestation. The results indicate that movement patterns and space use in ewes were clearly restricted by limiting space availability to 1 m2/ewe. This was reflected in shorter, more sinuous trajectories composed of shorter steps, lower inter-individual distances and higher movement activity, potentially linked with higher restlessness levels. On the contrary, the differences between 2 and 3 m2/ewe for most variables indicate that increasing space availability from 2 to 3 m2/ewe would appear to have limited benefits, reflected mostly in a further increment in the inter-individual distances among group members.
No major variations in spatial requirements were detected through gestation, except for slight increments in inter-individual distances and an initial adaptation period, with ewes being restless and highly motivated to explore their new environment.
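Several of the trajectory metrics listed above (total and net distance, the net/total straightness ratio, and step lengths) can be computed directly from the scan-sampled positions; a minimal sketch:

```python
import math

def trajectory_metrics(path):
    # path: ordered (x, y) positions from scan sampling. Returns total
    # travelled distance, net displacement, their ratio (1.0 = perfectly
    # straight, lower = more sinuous), and min/max step length.
    steps = [math.dist(p, q) for p, q in zip(path, path[1:])]
    total = sum(steps)
    net = math.dist(path[0], path[-1])
    return {"total": total, "net": net,
            "straightness": net / total if total else 0.0,
            "min_step": min(steps), "max_step": max(steps)}
```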
A Classroom Note on: The Average Distance in an Ellipse
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2011-01-01
This article presents an applied calculus exercise that can be easily shared with students. One of Kepler's greatest discoveries was the fact that the planets move in elliptic orbits with the sun at one focus. Astronomers characterize the orbits of particular planets by their minimum and maximum distances to the sun, known respectively as the…
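The minimum and maximum focal distances follow directly from the semi-major axis a and eccentricity e: r_min = a(1 - e) and r_max = a(1 + e), whose arithmetic mean is exactly a (note that other averages over the orbit differ). A worked check:

```python
def orbit_extremes(a, e):
    # Perihelion and aphelion distances of an ellipse with semi-major
    # axis a and eccentricity e, measured from the occupied focus.
    r_min = a * (1 - e)
    r_max = a * (1 + e)
    return r_min, r_max
```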
Threshold Assessment of Gear Diagnostic Tools on Flight and Test Rig Data
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Mosher, Marianne; Huff, Edward M.
2003-01-01
A method for defining thresholds for vibration-based algorithms that provides the minimum number of false alarms while maintaining sensitivity to gear damage was developed. This analysis focused on two vibration-based gear damage detection algorithms, FM4 and MSA. The method was developed using vibration data collected during surface fatigue tests performed in a spur gearbox rig. The thresholds were defined based on damage progression during tests with damage. The thresholds' false alarm rates were then evaluated on spur gear tests without damage. Next, the same thresholds were applied to flight data from an OH-58 helicopter transmission. Results showed that thresholds defined in test rigs can be used to define thresholds in flight to correctly classify the transmission operation as normal.
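A hedged sketch of one simple thresholding rule, a mean-plus-k-sigma limit fitted on no-damage baseline runs; the paper's actual procedure is based on damage progression and is not reproduced here:

```python
import statistics

def set_threshold(baseline_values, n_sigma=3.0):
    # Flag damage when a condition indicator (e.g. FM4) exceeds
    # mean + n_sigma * std of the no-damage baseline data.
    mu = statistics.fmean(baseline_values)
    sd = statistics.stdev(baseline_values)
    return mu + n_sigma * sd

def classify(value, threshold):
    return "damage" if value > threshold else "normal"
```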
Physical layer security in fiber-optic MIMO-SDM systems: An overview
NASA Astrophysics Data System (ADS)
Guan, Kyle; Cho, Junho; Winzer, Peter J.
2018-02-01
Fiber-optic transmission systems provide large capacities over enormous distances but are vulnerable to simple eavesdropping attacks at the physical layer. We classify key-based and keyless encryption and physical layer security techniques and discuss them in the context of optical multiple-input-multiple-output space-division multiplexed (MIMO-SDM) fiber-optic communication systems. We show that MIMO-SDM not only increases system capacity, but also ensures the confidentiality of information transmission. Based on recent numerical and experimental results, we review how the unique channel characteristics of MIMO-SDM can be exploited to provide various levels of physical layer security.
Cloud layer thicknesses from a combination of surface and upper-air observations
NASA Technical Reports Server (NTRS)
Poore, Kirk D.; Wang, Junhong; Rossow, William B.
1995-01-01
Cloud layer thicknesses are derived from base and top altitudes by combining 14 years (1975-1988) of surface and upper-air observations at 63 sites in the Northern Hemisphere. Rawinsonde observations are employed to determine the locations of cloud-layer top and base by testing for dewpoint temperature depressions below some threshold value. Surface observations serve as quality checks on the rawinsonde-determined cloud properties and provide cloud amount and cloud-type information. The dataset provides layer-cloud amount, cloud type, high, middle, or low height classes, cloud-top heights, base heights and layer thicknesses, covering a range of latitudes from 0 deg to 80 deg N. All data comes from land sites: 34 are located in continental interiors, 14 are near coasts, and 15 are on islands. The uncertainties in the derived cloud properties are discussed. For clouds classified by low-, mid-, and high-top altitudes, there are strong latitudinal and seasonal variations in the layer thickness only for high clouds. High-cloud layer thickness increases with latitude and exhibits different seasonal variations in different latitude zones: in summer, high-cloud layer thickness is a maximum in the Tropics but a minimum at high latitudes. For clouds classified into three types by base altitude or into six standard morphological types, latitudinal and seasonal variations in layer thickness are very small. The thickness of the clear surface layer decreases with latitude and reaches a summer minimum in the Tropics and summer maximum at higher latitudes over land, but does not vary much over the ocean. Tropical clouds occur in three base-altitude groups and the layer thickness of each group increases linearly with top altitude. Extratropical clouds exhibit two groups, one with layer thickness proportional to their cloud-top altitude and one with small (less than or equal to 1000 m) layer thickness independent of cloud-top altitude.
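The rawinsonde step described above, finding cloud base and top by testing dewpoint depression against a threshold, can be sketched as follows. This is an illustrative simplification: the study's actual thresholds vary with conditions and its quality checks are more elaborate.

```python
def cloud_layers(levels, threshold=2.0):
    """Locate cloud layers in a sounding profile.

    `levels` is a list of (altitude_m, dewpoint_depression_K) tuples
    ordered from bottom to top. A level is treated as cloudy when its
    dewpoint depression (T - Td) falls below `threshold` (the 2.0 K
    default is a hypothetical value, not the paper's).
    Returns a list of (base_altitude, top_altitude) pairs.
    """
    layers = []
    base = None
    prev_alt = None
    for alt, depression in levels:
        cloudy = depression < threshold
        if cloudy and base is None:
            base = alt                      # layer base found
        elif not cloudy and base is not None:
            layers.append((base, prev_alt))  # layer top was previous level
            base = None
        prev_alt = alt
    if base is not None:                     # layer reaches top of sounding
        layers.append((base, prev_alt))
    return layers
```

Layer thickness is then simply top minus base for each detected pair.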
Simulator Evaluation of Airborne Information for Lateral Spacing (AILS) Concept
NASA Technical Reports Server (NTRS)
Abbott, Terence S.; Elliott, Dawn M.
2001-01-01
The Airborne Information for Lateral Spacing (AILS) concept is designed to support independent parallel approach operations to runways spaced as close as 2500 ft. This report describes the AILS operational concept and the results of a ground-based flight simulation experiment of one implementation of this concept. The focus of this simulation experiment was to evaluate pilot performance, pilot acceptability, and minimum miss-distances for the rare situation in which an aircraft on one approach intrudes into the path of an aircraft on the other approach. Results from this study showed that the design-goal mean miss-distance of 1200 ft for potential collision situations was surpassed, with an actual mean miss-distance of 2236 ft. Pilot reaction times to the alerting system, which were an operational concern, averaged 1.11 sec, well below the design-goal reaction time of 2.0 sec. These quantitative results and pilot subjective data showed that the AILS concept is reasonable from an operational standpoint.
The mean coronal magnetic field determined from Helios Faraday rotation measurements
NASA Technical Reports Server (NTRS)
Patzold, M.; Bird, M. K.; Volland, H.; Levy, G. S.; Seidel, B. L.; Stelzried, C. T.
1987-01-01
Coronal Faraday rotation of the linearly polarized carrier signals of the Helios spacecraft was recorded during the regularly occurring solar occultations over almost a complete solar cycle from 1975 to 1984. These measurements are used to determine the average strength and radial variation of the coronal magnetic field at solar minimum at solar distances from 3-10 solar radii, i.e., the range over which the complex fields at the coronal base are transformed into the interplanetary spiral. The mean coronal magnetic field in 1975-1976 was found to decrease with radial distance as r^(-alpha), where alpha = 2.7 ± 0.2. The mean field magnitude was (1.0 ± 0.5) x 10^-5 tesla at a nominal solar distance of 5 solar radii. Possibly higher magnetic field strengths were indicated at solar maximum, but a lack of data prevented a statistical determination of the mean coronal field during this epoch.
Deeth, Robert J
2008-08-04
A general molecular mechanics method is presented for modeling the symmetric bidentate, asymmetric bidentate, and bridging modes of metal-carboxylates with a single parameter set by using a double-minimum M-O-C angle-bending potential. The method is implemented within the Molecular Operating Environment (MOE) with parameters based on the Merck molecular force field, although, with suitable modifications, other MM packages and force fields could easily be used. Parameters for high-spin d^5 manganese(II) bound to carboxylate and water plus amine, pyridyl, imidazolyl, and pyrazolyl donors are developed based on 26 mononuclear and 29 dinuclear crystallographically characterized complexes. The average rmsd for Mn-L distances is 0.08 Å, which is comparable to the experimental uncertainty required to cover multiple binding modes, and the average rmsd in heavy atom positions is around 0.5 Å. In all cases, whatever binding mode is reported is also computed to be a stable local minimum. In addition, the structure-based parametrization implicitly captures the energetics and gives the same relative energies of symmetric and asymmetric coordination modes as density functional theory calculations in model and "real" complexes. Molecular dynamics simulations show that carboxylate rotation is favored over "flipping" while a stochastic search algorithm is described for randomly searching conformational space. The model reproduces Mn-Mn distances in dinuclear systems especially accurately, and this feature is employed to illustrate how MM calculations on models for the dimanganese active site of methionine aminopeptidase can help determine some of the details which may be missing from the experimental structure.
Guo, Hao; Qin, Mengna; Chen, Junjie; Xu, Yong; Xiang, Jie
2017-01-01
High-order functional connectivity networks are rich in time information that can reflect dynamic changes in functional connectivity between brain regions. Accordingly, such networks are widely used to classify brain diseases. However, traditional methods for processing high-order functional connectivity networks generally include the clustering method, which reduces data dimensionality. As a result, such networks cannot be effectively interpreted in the context of neurology. Additionally, due to the large scale of high-order functional connectivity networks, it can be computationally very expensive to use complex network or graph theory to calculate certain topological properties. Here, we propose a novel method of generating a high-order minimum spanning tree functional connectivity network. This method increases the neurological significance of the high-order functional connectivity network, reduces network computing consumption, and produces a network scale that is conducive to subsequent network analysis. To ensure the quality of the topological information in the network structure, we used frequent subgraph mining technology to capture the discriminative subnetworks as features and combined this with quantifiable local network features. Then we applied a multikernel learning technique to the corresponding selected features to obtain the final classification results. We evaluated our proposed method using a data set containing 38 patients with major depressive disorder and 28 healthy controls. The experimental results showed a classification accuracy of up to 97.54%.
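At the core of the proposed network construction is a minimum spanning tree over the connectivity graph. A self-contained Prim's-algorithm sketch on a dense symmetric distance matrix; mapping connectivity to distance as 1 - |correlation| is our assumption, not a detail the abstract specifies:

```python
def minimum_spanning_tree(dist):
    """Prim's algorithm on a dense symmetric distance matrix.

    Returns the MST edges as (i, j, weight) tuples. In a
    functional-connectivity setting one would typically set
    dist[i][j] = 1 - |corr(i, j)| so that strongly connected brain
    regions end up adjacent in the tree (an illustrative convention).
    """
    n = len(dist)
    in_tree = [False] * n
    best = [float("inf")] * n   # cheapest known edge weight into the tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        # Pick the cheapest node not yet in the tree.
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u, dist[parent[u]][u]))
        # Relax edges from u to all nodes still outside the tree.
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v] = dist[u][v]
                parent[v] = u
    return edges
```

The resulting tree has n - 1 edges, which is what keeps the network scale small enough for the subsequent subgraph mining the authors describe.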
Mallik, Saurav; Bhadra, Tapas; Mukherji, Ayan
2018-04-01
Association rule mining is an important technique for identifying interesting relationships between gene pairs in a biological data set. Earlier methods basically work for a single biological data set, and, in most cases, a single minimum support cutoff is applied globally, i.e., across all genesets/itemsets. To overcome this limitation, in this paper, we propose a dynamic threshold-based FP-growth rule mining algorithm that integrates gene expression, methylation and protein-protein interaction profiles based on weighted shortest distance to find novel associations among different pairs of genes in multi-view data sets. For this purpose, we introduce three new thresholds, namely, Distance-based Variable/Dynamic Supports (DVS), Distance-based Variable Confidences (DVC), and Distance-based Variable Lifts (DVL) for each rule by integrating co-expression, co-methylation, and protein-protein interactions existing in the multi-omics data set. We develop the proposed algorithm utilizing these three novel multiple threshold measures. In the proposed algorithm, the values of DVS, DVC, and DVL are computed for each rule separately, and subsequently it is verified whether the support, confidence, and lift of each evolved rule are greater than or equal to the corresponding individual DVS, DVC, and DVL values, respectively, or not. If all these three conditions for a rule are found to be true, the rule is treated as a resultant rule. One of the major advantages of the proposed method compared with other related state-of-the-art methods is that it considers both the quantitative and interactive significance among all pairwise genes belonging to each rule. Moreover, the proposed method generates fewer rules, takes less running time, and provides greater biological significance for the resultant top-ranking rules compared to previous methods.
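The per-rule acceptance test described above reduces to three simultaneous comparisons. A minimal sketch, taking the per-rule thresholds DVS, DVC, and DVL as given inputs (how they are derived from co-expression, co-methylation, and PPI distances is specific to the paper):

```python
def filter_rules(rules):
    """Keep a rule only if its support, confidence, and lift each meet
    that rule's own distance-based thresholds.

    Each rule is a dict with measured values and per-rule thresholds, e.g.
    {"support": 0.4, "confidence": 0.8, "lift": 1.5,
     "dvs": 0.3, "dvc": 0.7, "dvl": 1.2}
    (field names are illustrative, not the authors').
    """
    return [r for r in rules
            if r["support"] >= r["dvs"]
            and r["confidence"] >= r["dvc"]
            and r["lift"] >= r["dvl"]]
```

The contrast with classical FP-growth is that the thresholds on the right-hand side vary per rule rather than being one global minimum-support cutoff.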
Caprihan, A; Pearlson, G D; Calhoun, V D
2008-08-15
Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of the data into two groups, then this set of components need not have the most discriminatory power. We measured the distance between two such populations using the Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method which we call discriminant PCA (DPCA). DPCA was applied to diffusion tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the leave-one-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error and that the Mahalanobis distance was twice as large with DPCA as with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified the left superior longitudinal fasciculus as the tract which gave the least classification error. In addition, with six optimally chosen tracts the classification error was zero.
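The quantity being maximized is the Mahalanobis distance between the two group means under a shared covariance. A sketch for the 2-D case with the 2x2 inverse written in closed form (DPCA would then pick eigenvectors to maximize this distance in the projected space; only the distance itself is shown here):

```python
def mahalanobis_2d(mu1, mu2, cov):
    """Mahalanobis distance between two group means, given a 2x2
    pooled covariance matrix.

    Computes sqrt((mu1 - mu2)^T cov^{-1} (mu1 - mu2)) using the
    explicit 2x2 matrix inverse.
    """
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx = (mu1[0] - mu2[0], mu1[1] - mu2[1])
    # y = inv(cov) @ dx
    y0 = inv[0][0] * dx[0] + inv[0][1] * dx[1]
    y1 = inv[1][0] * dx[0] + inv[1][1] * dx[1]
    return (dx[0] * y0 + dx[1] * y1) ** 0.5
```

With the identity covariance this reduces to the ordinary Euclidean distance between the means, which is the sanity check below.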
Particle Filtering for Obstacle Tracking in UAS Sense and Avoid Applications
Moccia, Antonio
2014-01-01
Obstacle detection and tracking is a key function for UAS sense and avoid applications. In fact, obstacles in the flight path must be detected and tracked in an accurate and timely manner in order to execute a collision avoidance maneuver in case of collision threat. The most important parameter for the assessment of a collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between own aircraft and intruder for assigned current position and speed. Since assessed methodologies can cause some loss of accuracy due to nonlinearities, advanced filtering methodologies, such as particle filters, can provide more accurate estimates of the target state in case of nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single sensor framework. The analysis shows some accuracy improvements in the estimation of Distance at Closest Point of Approach, thus reducing the delay in collision detection. PMID:25105154
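The Distance at Closest Point of Approach has a simple closed form under the constant-velocity assumption the abstract describes. A geometric sketch of the quantity (not the paper's particle-filter estimator):

```python
def closest_point_of_approach(rel_pos, rel_vel):
    """DCPA for an intruder moving at constant relative velocity.

    `rel_pos` and `rel_vel` are the intruder's position and velocity
    relative to own aircraft, as (x, y) tuples. The distance
    |r + t*v| is minimized at t* = -(r . v) / (v . v), clamped to
    t* >= 0 so only the future encounter counts.
    Returns (dcpa, time_of_closest_approach).
    """
    rx, ry = rel_pos
    vx, vy = rel_vel
    vv = vx * vx + vy * vy
    t = 0.0 if vv == 0 else max(0.0, -(rx * vx + ry * vy) / vv)
    dx, dy = rx + t * vx, ry + t * vy
    return (dx * dx + dy * dy) ** 0.5, t
```

Estimation error in the relative state propagates directly into this DCPA value, which is why the abstract focuses on filter accuracy for collision risk assessment.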
Medical Parasitology Taxonomy Update: January 2012 to December 2015.
Simner, P J
2017-01-01
Parasites of medical importance have long been classified taxonomically by morphological characteristics. However, molecular-based techniques have been increasingly used and relied on to determine evolutionary distances for the basis of rational hierarchal classifications. This has resulted in several different classification schemes for parasites and changes in parasite taxonomy. The purpose of this Minireview is to provide a single reference for diagnostic laboratories that summarizes new and revised clinically relevant parasite taxonomy from January 2012 through December 2015. Copyright © 2016 American Society for Microbiology.
Electrostatic attraction of charged drops of water inside dropwise cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shavlov, A. V.; Tyumen State Oil and Gas University, 38, Volodarskogo Str., Tyumen 625000; Dzhumandzhi, V. A.
2013-08-15
Based on the analytical solution of the Poisson-Boltzmann equation, we demonstrate that inside an electrically neutral system of charges an electrostatic attraction can occur between like-charged particles, where charge Z ≫ 1 (in terms of elementary charge) and radius R > 0, whereas according to the literature, only repulsion is possible inside non-electrically neutral systems. We calculate the free energy of the charged particles of water inside a cluster and demonstrate that its minimum occurs when the interdroplet distance equals several Debye radii, defined based on the light plasma component. The deepest minimum is found in a cluster with close spatial packing of drops, of face-centered cubic lattice type, if almost all the electric charge of one sign is concentrated on the drops and that of the other sign on the light compensating charge carriers, where the charge carried by equilibrium carriers is rather small.
NASA Astrophysics Data System (ADS)
Tibi, R.; Young, C. J.; Koper, K. D.; Pankow, K. L.
2017-12-01
Seismic event discrimination methods exploit the differing characteristics—in terms of amplitude and/or frequency content—of the generated seismic phases among the event types to be classified. Most of the commonly used seismic discrimination methods are designed for regional data recorded at distances of about 200 to 2000 km. Relatively little attention has focused on discriminants for local distances (< 200 km), the range at which the smallest events are recorded. Short-period fundamental mode Rayleigh waves (Rg) are commonly observed on seismograms of man-made seismic events, and shallow, naturally occurring tectonic earthquakes recorded at local distances. We leverage the well-known notion that Rg amplitude decreases dramatically with increasing event depth to propose a new depth discriminant based on Rg-to-Sg spectral amplitude ratios. The approach is successfully used to discriminate shallow events from deeper tectonic earthquakes in the Utah region recorded at local distances (< 150 km) by the University of Utah Seismographic Stations (UUSS) regional seismic network. Using Mood's median test, we obtained probabilities of nearly zero that the median Rg-to-Sg spectral amplitude ratios are the same between shallow events on one side (including both shallow tectonic earthquakes and man-made events), and deeper earthquakes on the other side, suggesting that there is a statistically significant difference in the estimated Rg-to-Sg ratios between the two populations. We also observed consistent disparities between the different types of shallow events (e.g., explosions vs. mining-induced events), implying that it may be possible to separate the sub-populations that make up this group. This suggests that using local distance Rg-to-Sg spectral amplitude ratios one can not only discriminate shallow from deeper events, but may also be able to discriminate different populations of shallow events. 
We also experimented with Pg-to-Sg amplitude ratios in multi-frequency linear discriminant functions to classify man-made events and tectonic earthquakes in Utah. Initial results are very promising, showing probabilities of misclassification of only 2.4-14.3%.
The analysis of a generic air-to-air missile simulation model
NASA Technical Reports Server (NTRS)
Kaplan, Joseph A.; Chappell, Alan R.; Mcmanus, John W.
1994-01-01
A generic missile model was developed to evaluate the benefits of using a dynamic missile fly-out simulation system versus a static missile launch envelope system for air-to-air combat simulation. This paper examines the performance of a launch envelope model and a missile fly-out model. The launch envelope model bases its probability of killing the target aircraft on the target aircraft's position at the launch time of the weapon. The benefits gained from a launch envelope model are the simplicity of implementation and the minimal computational overhead required. A missile fly-out model takes into account the physical characteristics of the missile as it simulates the guidance, propulsion, and movement of the missile. The missile's probability of kill is based on the missile miss distance (or the minimum distance between the missile and the target aircraft). The problems associated with this method of modeling are a larger computational overhead, the additional complexity required to determine the missile miss distance, and the additional complexity of determining the reason(s) the missile missed the target. This paper evaluates the two methods and compares the results of running each method on a comprehensive set of test conditions.
Combination of minimum enclosing balls classifier with SVM in coal-rock recognition.
Song, QingJun; Jiang, HaiYan; Song, Qinghui; Zhao, XieGuang; Wu, Xiaoxuan
2017-01-01
Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed, and a caving dataset with 10 feature variables and three classes is obtained. The optimal combination of feature variables can be decided automatically using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mapping in this real-world optimization problem, an effective minimum enclosing ball (MEB) algorithm combined with a support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier for coal-rock recognition, where the data exhibit an inherently complex distribution. The proposed method is examined on UCI data sets and the caving dataset, and compared with several recent SVM classifiers. We conduct experiments with accuracy and the Friedman test to compare multiple classifiers over the UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The results of experiments on the caving dataset show better performance, which leads to promising feature selection and multi-class recognition in coal-rock recognition.
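The minimum enclosing ball at the heart of the MEB-SVM construction is the smallest sphere covering a point set. A short sketch using Ritter's two-pass approximation; the paper's MEB is presumably computed exactly or via core sets, so this only illustrates the geometric construct:

```python
import math

def enclosing_ball(points):
    """Ritter's approximation of the minimum enclosing ball.

    Returns (center, radius) covering all points; the radius is within
    roughly 5-20% of the true minimum, which is enough to show the idea.
    """
    def dist(a, b):
        return math.dist(a, b)
    # Pass 1: pick two roughly farthest points for an initial ball.
    p = points[0]
    q = max(points, key=lambda x: dist(p, x))
    r = max(points, key=lambda x: dist(q, x))
    center = [(a + b) / 2 for a, b in zip(q, r)]
    radius = dist(q, r) / 2
    # Pass 2: grow and shift the ball to cover any point still outside.
    for pt in points:
        d = dist(center, pt)
        if d > radius:
            radius = (radius + d) / 2
            shift = (d - radius) / d
            center = [c + (x - c) * shift for c, x in zip(center, pt)]
    return center, radius
```

In the classifier, class regions are summarized by such balls, and the SVM decision is combined with ball membership for rapid detection.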
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Dutra, L. V.; Mascarenhas, N. D. A.; Mitsuo, Fernando Augusta, II
1984-01-01
A study area near Ribeirao Preto in Sao Paulo state was selected, with a predominance of sugar cane. Eight features were extracted from the 4 original bands of the LANDSAT image, using low-pass and high-pass filtering to obtain spatial features. There were 5 training sites used to acquire the necessary parameters. Two groups of four channels were selected from the 12 channels using the JM-distance and entropy criteria. The number of selected channels was defined by physical restrictions of the image analyzer and computational costs. The evaluation was performed by extracting the confusion matrix for training and test areas, with a maximum likelihood classifier, and by defining performance indexes based on those matrices for each group of channels. Results show that with spatial features and supervised classification, the entropy criterion is better in the sense that it allows a more accurate and generalized definition of class signatures. On the other hand, the JM-distance criterion strongly reduces misclassification within training areas.
ERIC Educational Resources Information Center
Pietras, Jesse John
Connecticut has proposed legislation to augment the remote education infrastructure which includes public libraries, public schools, and institutions of higher learning. The purpose of one bill is to explore the possibilities of transmitting interactive distance education to all schools intrastate and to classify public libraries at a cheaper…
Theory and Practice in the Teaching of Composition: Processing, Distancing, and Modeling.
ERIC Educational Resources Information Center
Myers, Miles, Ed.; Gray, James, Ed.
Intended to show teachers how their approaches to the teaching of writing reflect a particular area of research and to show researchers how the intuitions of teachers reflect research findings, the articles in this book are classified according to three approaches to writing: processing, distancing, and modeling. After an introductory essay that…
Probst, R.; Lin, J.; Komaee, A.; Nacev, A.; Cummins, Z.
2010-01-01
Any single permanent or electro magnet will always attract a magnetic fluid. For this reason it is difficult to precisely position and manipulate ferrofluid at a distance from magnets. We develop and experimentally demonstrate optimal (minimum electrical power) 2-dimensional manipulation of a single droplet of ferrofluid by feedback control of 4 external electromagnets. The control algorithm we have developed takes into account, and is explicitly designed for, the nonlinear (fast decay in space, quadratic in magnet strength) nature of how the magnets actuate the ferrofluid, and it also corrects for electro-magnet charging time delays. With this control, we show that dynamic actuation of electro-magnets held outside a domain can be used to position a droplet of ferrofluid to any desired location and steer it along any desired path within that domain – an example of precision control of a ferrofluid by magnets acting at a distance. PMID:21218157
Prevalence of alveolar bone loss in healthy children treated at private pediatric dentistry clinics
GUIMARÃES, Maria do Carmo Machado; de ARAÚJO, Valéria Martins; AVENA, Márcia Raquel; DUARTE, Daniel Rocha da Silva; FREITAS, Francisco Valter
2010-01-01
Objectives The purpose of this study was to evaluate the prevalence of alveolar bone loss (BL) in healthy children treated at private pediatric dentistry clinics in Brasília, Brazil. Material and Methods The research included 7,436 sites present in 885 radiographs from 450 children. The BL prevalence was estimated by measuring the distance from the cementoenamel junction (CEJ) to alveolar bone crest (ABC). Data were divided in groups: (I) No BL: distance from CEJ to ABC is ≤2 mm; (II) questionable BL (QBL): distance from CEJ to ABC is >2 and <3 mm; (III) definite BL (DBL): distance from CEJ to ABC ≥3 mm. Data were treated by the chi-square nonparametric test and Fisher's exact test (p<0.05). Results Among males, 89.31% were classified in group I, 9.82% were classified in group II and 0.85% in group III. Among females, 93.05%, 6.48% and 0.46% patients were classified in Group I, II and III, respectively. The differences between genders were not statistically significant (Chi-square test, p = 0.375). Group composition according to patients’ age showed that 91.11% of individuals were classified as group I, 8.22% in group II and 0.67% in group III. The differences among the age ranges were not statistically significant (Chi-square test, p = 0.418). The mesial and distal sites showed a higher prevalence of BL in the jaw, QBL (89.80%) and DBL (79.40%), and no significant difference was observed in the distribution of QBL (Fisher’s exact test p = 0.311) and DBL (Fisher’s exact test p = 0.672) in the dental arches. The distal sites exhibited higher prevalence of both QBL (77.56%) and DBL (58.82%). Conclusions The periodontal status of children should never be underestimated because BL occurs even in healthy populations, although in a lower frequency. PMID:20857009
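The grouping rule used in the study maps directly onto a small helper; a sketch in Python with the thresholds taken from the abstract (Group I: ≤2 mm; Group II: between 2 and 3 mm; Group III: ≥3 mm):

```python
def classify_bone_loss(cej_abc_mm):
    """Assign a site to the study's groups from the measured
    CEJ-to-ABC distance in millimeters.

    Group I  (no BL):           distance <= 2 mm
    Group II (questionable BL): 2 mm < distance < 3 mm
    Group III (definite BL):    distance >= 3 mm
    """
    if cej_abc_mm <= 2:
        return "I"
    if cej_abc_mm < 3:
        return "II"
    return "III"
```

The prevalence figures in the abstract are simply the proportions of radiographic sites falling into each of these three bins.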
Sikder, Helena Akhter; Suthawaree, Jeeranut; Kato, Shungo; Kajii, Yoshizumi
2011-03-01
Simultaneous ground-based measurements of ozone and carbon monoxide were performed at Oki, Japan, from January 2001 to September 2002 in order to investigate the O3 and CO characteristics and their distributions. The observations revealed that O3 and CO concentrations were maximum in springtime and minimum in the summer. The monthly averaged concentrations of O3 and CO were 60 and 234 ppb in spring and 23 and 106 ppb in summer, respectively. Based on direction, a 5-day isentropic backward trajectory analysis was carried out to determine the transport paths of air masses preceding their arrival at Oki. A comparison between the classified results from the present work and results from the years 1994-1996 was carried out. The O3 and CO concentration results of the classified air masses in our analysis show concentration trends similar to previous findings: highest in the WNW/W, lowest in N/NE, and medium levels in NW. Moreover, O3 levels are higher and CO levels are lower in the present study in all categories. Copyright © 2010 Elsevier Ltd. All rights reserved.
32 CFR 154.18 - Certain positions not necessarily requiring access to classified information.
Code of Federal Regulations, 2013 CFR
2013-07-01
... to meet the high standards required. At a minimum, all such personnel shall have had a favorably... information systems may be assigned to one of three position sensitivity designations (in accordance with...
32 CFR 154.18 - Certain positions not necessarily requiring access to classified information.
Code of Federal Regulations, 2014 CFR
2014-07-01
... to meet the high standards required. At a minimum, all such personnel shall have had a favorably... information systems may be assigned to one of three position sensitivity designations (in accordance with...
A Large-scale Distributed Indexed Learning Framework for Data that Cannot Fit into Memory
2015-03-27
learn a classifier. Integrating three learning techniques (online, semi-supervised and active learning) together with selective sampling, with minimum communication between the server and the clients, solved this problem.
NASA Astrophysics Data System (ADS)
Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos
2014-05-01
One of the key points in the development of the MOTEDAS dataset (see Poster 1 MOTEDAS) in the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP), using the Correlation Decay Distance function (CDD), with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a correlation matrix at monthly scale for 1981-2010 among monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the AEMET archives (National Spanish Meteorological Agency). Monthly anomalies (differences between data and the 1981-2010 mean) were used to prevent the dominant effect of the annual cycle in the annual CDD estimation. For each station and time scale, the common variance r² (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r² and distance was modelled according to the following equation: log(r²ij) = b·dij (1), where log(r²ij) is the common variance between the target (i) and neighbouring (j) series, dij the distance between them, and b the slope of the ordinary least-squares linear regression model, fitted taking into account only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required.
Finally, monthly, seasonal and annual CDD values were interpolated using Ordinary Kriging with a spherical variogram over the conterminous land of Spain, and converted onto a regular 10 km² grid (a resolution similar to the mean distance between stations) to map the results. In the conterminous land of Spain, the distance at which couples of stations have a common variance in temperature (both maximum, Tmax, and minimum, Tmin) above the selected threshold (50%, Pearson r ~0.70) on average does not exceed 400 km, with relevant spatial and temporal differences. The spatial distribution of the CDD shows a clear coastland-to-inland gradient at annual, seasonal and monthly scales, with the highest spatial variability along the coastland areas and lower variability inland. The highest spatial variability coincides particularly with coastland areas surrounded by mountain chains, suggesting that orography is one of the main factors causing higher interstation variability. Moreover, there are some differences between the behaviour of Tmax and Tmin: Tmin is spatially more homogeneous than Tmax, but its lower CDD values indicate that night-time temperature is more variable than diurnal temperature. The results suggest that, in general, local factors affect the spatial variability of monthly Tmin more than that of Tmax, and thus a higher network density would be necessary to capture the higher spatial variability highlighted for Tmin with respect to Tmax. A conservative distance for reference series can be estimated at 200 km, which we propose for the continental land of Spain and use in the development of MOTEDAS.
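The log-linear decay model of equation (1) can be sketched as follows. This is a minimal illustration with hypothetical station-pair data, not the MOTEDAS code: the slope b is assumed to be fitted by least squares through the origin, and the threshold distance is read off where the modelled common variance falls to 50%.

```python
import math

def cdd_threshold(pairs, target_r2=0.5):
    """Estimate the correlation decay distance from (distance_km, r2) pairs.

    Fits log(r2) = b * d by least squares through the origin (equation 1),
    then returns the distance at which the modelled common variance
    falls to `target_r2`.
    """
    num = sum(d * math.log(r2) for d, r2 in pairs)
    den = sum(d * d for d, _ in pairs)
    b = num / den  # slope; expected negative, since correlation decays with distance
    return math.log(target_r2) / b

# hypothetical station pairs: common variance decaying with distance
pairs = [(50, 0.92), (100, 0.81), (200, 0.66), (300, 0.55), (400, 0.44)]
print(round(cdd_threshold(pairs), 1))
```

With this illustrative decay rate, the 50% threshold falls between 300 and 400 km, of the same order as the distances discussed in the poster.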
The Effects of Long Distance Running on Preadolescent Children.
ERIC Educational Resources Information Center
Covington, N. Kay
This study investigated the effects of selected physiological variables on preadolescent male and female long distance runners. The trained group was comprised of 20 children between the ages of 8 and 10 who had been running a minimum of 20 miles per week for two months or longer. The control group was made up of 20 children of the same ages who…
Optimizing the Launch of a Projectile to Hit a Target
ERIC Educational Resources Information Center
Mungan, Carl E.
2017-01-01
Some teenagers are exploring the outer perimeter of a castle. They notice a spy hole in its wall, across the moat a horizontal distance "x" and vertically up the wall a distance "y." They decide to throw pebbles at the hole. One girl wants to use physics to throw with the minimum speed necessary to hit the hole. What is the…
Blood vessel segmentation in color fundus images based on regional and Hessian features.
Shah, Syed Ayaz Ali; Tang, Tong Boon; Faye, Ibrahima; Laude, Augustinus
2017-08-01
To propose a new algorithm of blood vessel segmentation based on regional and Hessian features for image analysis in retinal abnormality diagnosis. Firstly, color fundus images from the publicly available database DRIVE were converted from RGB to grayscale. To enhance the contrast of the dark objects (blood vessels) against the background, the dot product of the grayscale image with itself was generated. To rectify the variation in contrast, we used a 5 × 5 window filter on each pixel. Based on 5 regional features, 1 intensity feature and 2 Hessian features per scale using 9 scales, we extracted a total of 24 features. A linear minimum squared error (LMSE) classifier was trained to classify each pixel into a vessel or non-vessel pixel. The DRIVE dataset provided 20 training and 20 test color fundus images. The proposed algorithm achieves a sensitivity of 72.05% with 94.79% accuracy. Our proposed algorithm achieved higher accuracy (0.9206) at the peripapillary region, where the ocular manifestations in the microvasculature due to glaucoma, central retinal vein occlusion, etc. are most obvious. This supports the proposed algorithm as a strong candidate for automated vessel segmentation.
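As a minimal sketch of the linear minimum squared error (LMSE) idea, and not the authors' 24-feature implementation, a one-feature LMSE classifier can be fitted in closed form; the feature values and labels below are hypothetical.

```python
def lmse_train(samples, labels):
    """Fit a 1-feature linear minimum squared error (LMSE) classifier.

    Least-squares line y ~ w0 + w1*x with labels +1 (vessel) / -1 (background);
    a pixel is then classified by the sign of the fitted response.
    """
    n = len(samples)
    mx = sum(samples) / n
    my = sum(labels) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(samples, labels))
    sxx = sum((x - mx) ** 2 for x in samples)
    w1 = sxy / sxx
    w0 = my - w1 * mx
    return w0, w1

def lmse_classify(w, x):
    w0, w1 = w
    return 1 if w0 + w1 * x >= 0 else -1

# hypothetical vesselness feature: vessel pixels score high, background low
feats  = [0.9, 0.8, 0.7, 0.2, 0.1, 0.15]
labels = [1, 1, 1, -1, -1, -1]
w = lmse_train(feats, labels)
print([lmse_classify(w, x) for x in [0.85, 0.1]])  # prints [1, -1]
```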
NASA Astrophysics Data System (ADS)
Ray, Sibdas; Das, Aniruddha
2015-06-01
Reaction of 2-ethoxymethyleneamino-2-cyanoacetamide with primary alkyl amines in acetonitrile solvent affords 1-substituted-5-aminoimidazole-4-carboxamides. Single-crystal X-ray diffraction studies of these imidazole compounds show that there are both anti-parallel and syn-parallel π-π stackings between two imidazole units in parallel-displaced (PD) conformations, and that the distance between two π-π stacked imidazole units depends mainly on the anti/syn-parallel nature and, to some extent, on the alkyl group attached to N-1 of imidazole; molecules with anti-parallel PD-stacking arrangements of the imidazole units have vertical π-π stacking distances short enough to impart stabilization, whereas imidazole units with syn-parallel stacking arrangements have much larger π-π stacking distances. DFT studies on a pair of anti-parallel imidazole units of such an AICA lead to curves of π-π stacking stabilization energy vs. π-π stacking distance that resemble the Morse potential energy diagram for a diatomic molecule, and this allows determination of a minimum π-π stacking distance corresponding to the maximum stacking stabilization energy between the pair of imidazole units. On the other hand, a DFT-calculated curve of π-π stacking stabilization energy vs. π-π stacking distance for a pair of syn-parallel imidazole units is shown to have an exponential nature.
Microsatellite-based phylogeny of Indian domestic goats
Rout, Pramod K; Joshi, Manjunath B; Mandal, Ajoy; Laloe, D; Singh, Lalji; Thangaraj, Kumarasamy
2008-01-01
Background The domestic goat is one of the important livestock species of India. In the present study we assess genetic diversity of Indian goats using 17 microsatellite markers. Breeds were sampled from their natural habitat, covering different agroclimatic zones. Results The mean number of alleles per locus (NA) ranged from 8.1 in Barbari to 9.7 in Jakhrana goats. The mean expected heterozygosity (He) ranged from 0.739 in Barbari to 0.783 in Jakhrana goats. Deviations from Hardy-Weinberg Equilibrium (HWE) were statistically significant (P < 0.05) for 5 loci breed combinations. The DA measure of genetic distance between pairs of breeds indicated that the lowest distance was between Marwari and Sirohi (0.135). The highest distance was between Pashmina and Black Bengal. An analysis of molecular variance indicated that 6.59% of variance exists among the Indian goat breeds. Both a phylogenetic tree and Principal Component Analysis showed the distribution of breeds in two major clusters with respect to their geographic distribution. Conclusion Our study concludes that Indian goat populations can be classified into distinct genetic groups or breeds based on the microsatellites as well as mtDNA information. PMID:18226239
Traveling salesman problems with PageRank Distance on complex networks reveal community structure
NASA Astrophysics Data System (ADS)
Jiang, Zhongzhou; Liu, Jing; Wang, Shuai
2016-12-01
In this paper, we propose a new algorithm for community detection problems (CDPs) based on traveling salesman problems (TSPs), labeled as TSP-CDA. Since TSPs need to find a tour with minimum cost, cities close to each other are usually clustered in the tour. This inspired us to model CDPs as TSPs by taking each vertex as a city. Then, in the final tour, the vertices in the same community tend to cluster together, and the community structure can be obtained by cutting the tour into a couple of paths. There are two challenges. The first is to define a suitable distance between each pair of vertices which can reflect the probability that they belong to the same community. The second is to design a suitable strategy to cut the final tour into paths which can form communities. In TSP-CDA, we deal with these two challenges by defining a PageRank Distance and an automatic threshold-based cutting strategy. The PageRank Distance is designed with the intrinsic properties of CDPs in mind, and can be calculated efficiently. In the experiments, benchmark networks with 1000-10,000 nodes and varying structures are used to test the performance of TSP-CDA. A comparison is also made between TSP-CDA and two well-established community detection algorithms. The results show that TSP-CDA can find accurate community structure efficiently and outperforms the two existing algorithms.
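The tour-cutting step can be illustrated with a toy sketch. The one-dimensional coordinates below are hypothetical, and a fixed threshold stands in for the paper's automatic threshold-based strategy and PageRank Distance.

```python
def cut_tour(tour, dist, threshold):
    """Split a closed TSP tour into communities by cutting long edges.

    `tour` is a cyclic vertex order; any edge whose distance exceeds
    `threshold` is cut, and each remaining path becomes one community.
    """
    n = len(tour)
    cuts = [i for i in range(n) if dist(tour[i], tour[(i + 1) % n]) > threshold]
    if not cuts:
        return [list(tour)]
    communities = []
    for k, start in enumerate(cuts):
        end = cuts[(k + 1) % len(cuts)]
        comm, i = [], (start + 1) % n
        while True:
            comm.append(tour[i])
            if i == end:
                break
            i = (i + 1) % n
        communities.append(comm)
    return communities

# toy example: two clusters {0,1,2} and {3,4,5}, far apart from each other
coords = {0: 0.0, 1: 0.1, 2: 0.2, 3: 5.0, 4: 5.1, 5: 5.2}
tour = [0, 1, 2, 3, 4, 5]
dist = lambda u, v: abs(coords[u] - coords[v])
print(cut_tour(tour, dist, threshold=1.0))  # prints [[3, 4, 5], [0, 1, 2]]
```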
Enhancing the Performance of LibSVM Classifier by Kernel F-Score Feature Selection
NASA Astrophysics Data System (ADS)
Sarojini, Balakrishnan; Ramaraj, Narayanasamy; Nickolas, Savarimuthu
Medical data mining is the search for relationships and patterns within medical datasets that could provide useful knowledge for effective clinical decisions. The inclusion of irrelevant, redundant and noisy features in the process model results in poor predictive accuracy. Much research in data mining has gone into improving the predictive accuracy of classifiers by applying techniques of feature selection. Feature selection is particularly valuable in medical data mining because disease diagnosis can then be carried out with a minimum number of significant features. The objective of this work is to show that selecting the more significant features improves the performance of the classifier. We empirically evaluate the classification effectiveness of the LibSVM classifier on the reduced feature subset of a diabetes dataset. The evaluations suggest that the selected feature subset improves the predictive accuracy of the classifier and reduces false negatives and false positives.
Tan, Robin; Perkowski, Marek
2017-01-01
Electrocardiogram (ECG) signals sensed from mobile devices hold potential for biometric identity recognition in remote access control systems where enhanced data security is in demand. In this study, we propose a new algorithm that consists of a two-stage classifier combining random forest and wavelet distance measure through a probabilistic threshold schema, to improve the effectiveness and robustness of a biometric recognition system using ECG data acquired from a biosensor integrated into mobile devices. The proposed algorithm is evaluated using a mixed dataset from 184 subjects under different health conditions. The proposed two-stage classifier achieves a total of 99.52% subject verification accuracy, better than the 98.33% accuracy from random forest alone and the 96.31% accuracy from the wavelet distance measure algorithm alone. These results demonstrate the superiority of the proposed algorithm for biometric identification, hence supporting its practicality in areas such as cloud data security, cyber-security or remote healthcare systems. PMID:28230745
Tan, Robin; Perkowski, Marek
2017-02-20
Electrocardiogram (ECG) signals sensed from mobile devices hold potential for biometric identity recognition in remote access control systems where enhanced data security is in demand. In this study, we propose a new algorithm that consists of a two-stage classifier combining random forest and wavelet distance measure through a probabilistic threshold schema, to improve the effectiveness and robustness of a biometric recognition system using ECG data acquired from a biosensor integrated into mobile devices. The proposed algorithm is evaluated using a mixed dataset from 184 subjects under different health conditions. The proposed two-stage classifier achieves a total of 99.52% subject verification accuracy, better than the 98.33% accuracy from random forest alone and the 96.31% accuracy from the wavelet distance measure algorithm alone. These results demonstrate the superiority of the proposed algorithm for biometric identification, hence supporting its practicality in areas such as cloud data security, cyber-security or remote healthcare systems.
Warner, Echo L; Fowler, Brynn; Pannier, Samantha T; Salmon, Sara K; Fair, Douglas; Spraker-Perlman, Holly; Yancey, Jeffrey; Randall, R Lor; Kirchhoff, Anne C
2018-05-03
To describe how distance to treatment location influences patient navigation preferences for adolescent and young adult (AYA) cancer patients and survivors. This study is part of a statewide needs assessment to inform the development of an AYA cancer patient and survivor navigation program. Participants were recruited from outpatient oncology clinics in Utah. Eligible participants had been diagnosed with cancer between the ages of 15 and 39 and had completed at least 1 month of treatment. Participants completed a semi-structured interview on preferences for patient navigation. Summary statistics of demographic and cancer characteristics were generated. Thematic content analysis was used to describe navigation preferences among participants classified as distance (≥20 miles) and local (<20 miles), to explain differences in their needs based on distance from their treatment center. The top three patient navigation needs were general information, financial, and emotional support. More local patients were interested in patient navigation services (95.2%) than distance participants (77.8%). Fewer local (38.1%) than distance participants (61.1%) reported challenges getting to appointments, and distance patients needed specific financial support for their travel (e.g., fuel, lodging). Both local and distance patients desired to connect with a navigator in person before using another form of communication, and wanted to connect with a patient navigator at the time of initial diagnosis. Distance from the treatment center is an important patient navigation consideration for AYA cancer patients and survivors. After initially connecting with AYAs in person, patient navigators can provide resources remotely to help reduce travel burden.
Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.
Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R
2006-02-28
The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.
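The code-selection criterion, minimizing the ratio of maximum to minimum Hamming distance over distinct codewords, can be sketched for a small, hypothetical constant-weight code; this is an illustration, not the 64 × 64 design from the paper.

```python
from itertools import combinations

def hamming(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def constant_weight_code(n, w):
    """All binary words of length n with exactly w ones (a weight-w code)."""
    words = []
    for ones in combinations(range(n), w):
        word = [0] * n
        for i in ones:
            word[i] = 1
        words.append(tuple(word))
    return words

def distance_ratio(code):
    """Ratio of maximum to minimum Hamming distance over distinct codewords;
    the selection criterion is to prefer codes minimizing this ratio."""
    dists = [hamming(a, b) for a, b in combinations(code, 2)]
    return max(dists) / min(dists)

code = constant_weight_code(5, 2)   # toy code with 10 codewords
print(distance_ratio(code))  # prints 2.0
```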
Tensho, Keiji; Shimodaira, Hiroki; Akaoka, Yusuke; Koyama, Suguru; Hatanaka, Daisuke; Ikegami, Shota; Kato, Hiroyuki; Saito, Naoto
2018-05-02
The tibial tubercle deviation associated with recurrent patellar dislocation (RPD) has not been studied sufficiently. New methods of evaluation were used to verify the extent of tubercle deviation in a group with patellar dislocation compared with that in a control group, the frequency of patients who demonstrated a cutoff value indicating that tubercle transfer was warranted on the basis of the control group distribution, and the validity of these methods of evaluation for diagnosing RPD. Sixty-six patients with a history of patellar dislocation (single in 19 [SPD group] and recurrent in 47 [RPD group]) and 66 age and sex-matched controls were analyzed with the use of computed tomography (CT). The tibial tubercle-posterior cruciate ligament (TT-PCL) distance, TT-PCL ratio, and tibial tubercle lateralization (TTL) in the SPD and RPD groups were compared with those in the control group. Cutoff values to warrant 10 mm of transfer were based on either the minimum or -2SD (2 standard deviations below the mean) value in the control group, and the prevalences of patients in the RPD group with measurements above these cutoff values were calculated. The area under the curve (AUC) in receiver operating characteristic (ROC) curve analysis was used to assess the effectiveness of the measurements as predictors of RPD. The mean TT-PCL distance, TT-PCL ratio, and TTL were all significantly greater in the RPD group than in the control group. The numbers of patients in the RPD group who satisfied the cutoff criteria when they were based on the minimum TT-PCL distance, TT-PCL ratio, and TTL in the control group were 11 (23%), 7 (15%), and 6 (13%), respectively. When the cutoff values were based on the -2SD values in the control group, the numbers of patients were 8 (17%), 6 (13%), and 0, respectively. The AUC of the ROC curve for TT-PCL distance, TT-PCL ratio, and TTL was 0.66, 0.72, and 0.72, respectively. 
The extent of TTL in the RPD group was not substantial, and the percentages of patients for whom 10 mm of medial transfer was indicated were small. Prognostic Level III. See Instructions for Authors for a complete description of levels of evidence.
NASA Astrophysics Data System (ADS)
Schmitt, Oliver; Steinmann, Paul
2018-06-01
We introduce a manufacturing constraint for controlling the minimum member size in structural shape optimization problems, which is of interest, for example, for components fabricated in a molding process. In a parameter-free approach, in which the coordinates of the FE boundary nodes are used as design variables, the challenging task is to find a generally valid definition of the thickness of non-parametric geometries in terms of their boundary nodes. We therefore use the medial axis, which is the union of all points with at least two closest points on the boundary of the domain. Since the effort for the exact computation of the medial axis of geometries given by their FE discretization increases greatly with the number of surface elements, we instead use the distance function to approximate the medial axis by a cloud of points. The approximation is demonstrated on three 2D examples. Moreover, the formulation of a minimum thickness constraint is applied to a sensitivity-based shape optimization problem of one 2D and one 3D model.
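A minimal sketch of the medial-axis idea, with a hypothetical point sampling rather than the authors' FE-based implementation: interior points whose two nearest boundary sample points are distinct but equidistant approximate the medial axis, and twice the smallest such distance estimates the minimum member thickness.

```python
import math

def approximate_medial_axis(interior, boundary, tol=1e-9):
    """Interior points whose two nearest boundary points are distinct but
    (nearly) equidistant; each is returned with its boundary distance."""
    axis = []
    for p in interior:
        ds = sorted((math.dist(p, b), b) for b in boundary)
        (d1, b1), (d2, b2) = ds[0], ds[1]
        if d2 - d1 < tol and b1 != b2:
            axis.append((p, d1))
    return axis

# toy domain: a strip of width 1, sampled along its two boundary edges
boundary = [(x, 0.0) for x in range(5)] + [(x, 1.0) for x in range(5)]
interior = [(float(x), 0.5) for x in range(5)]
axis = approximate_medial_axis(interior, boundary)
min_thickness = 2 * min(d for _, d in axis)
print(min_thickness)  # prints 1.0
```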
NASA Astrophysics Data System (ADS)
Schmitt, Oliver; Steinmann, Paul
2017-09-01
We introduce a manufacturing constraint for controlling the minimum member size in structural shape optimization problems, which is of interest, for example, for components fabricated in a molding process. In a parameter-free approach, in which the coordinates of the FE boundary nodes are used as design variables, the challenging task is to find a generally valid definition of the thickness of non-parametric geometries in terms of their boundary nodes. We therefore use the medial axis, which is the union of all points with at least two closest points on the boundary of the domain. Since the effort for the exact computation of the medial axis of geometries given by their FE discretization increases greatly with the number of surface elements, we instead use the distance function to approximate the medial axis by a cloud of points. The approximation is demonstrated on three 2D examples. Moreover, the formulation of a minimum thickness constraint is applied to a sensitivity-based shape optimization problem of one 2D and one 3D model.
NASA Astrophysics Data System (ADS)
Hren, Rok
1998-06-01
Using computer simulations, we systematically investigated the limitations of an inverse solution that employs the potential distribution on the epicardial surface as an equivalent source model in localizing pre-excitation sites in Wolff-Parkinson-White syndrome. A model of the human ventricular myocardium that features an anatomically accurate geometry, an intramural rotating anisotropy and a computational implementation of the excitation process based on electrotonic interactions among cells was used to simulate body surface potential maps (BSPMs) for 35 pre-excitation sites positioned along the atrioventricular ring. Two individualized torso models were used to account for variations in torso boundaries. Epicardial potential maps (EPMs) were computed using the L-curve inverse solution. The measure of localization accuracy was the distance between the position of the minimum in the inverse EPMs and the actual site of pre-excitation in the ventricular model. When the volume conductor properties and lead positions of the torso were precisely known and measurement noise was added to the simulated BSPMs, the minimum in the inverse EPMs at 12 ms after the onset was on average within cm of the pre-excitation site. When the standard torso model was used to localize the sites of onset of the pre-excitation sequence initiated in individualized male and female torso models, the mean distance between the minimum and the pre-excitation site was cm for the male torso and cm for the female torso. The findings of our study indicate that the location of the minimum in EPMs computed using the inverse solution can offer a non-invasive means for pre-interventional planning of ablative treatment.
Relation Between Inflammables and Ignition Sources in Aircraft Environments
NASA Technical Reports Server (NTRS)
Scull, Wilfred E
1950-01-01
A literature survey was conducted to determine the relation between aircraft ignition sources and inflammables. Available literature applicable to the problem of aircraft fire hazards is analyzed and discussed herein. Data pertaining to the effect of many variables on ignition temperatures, minimum ignition pressures, and minimum spark-ignition energies of inflammables, quenching distances of electrode configurations, and size of openings incapable of flame propagation are presented and discussed. The ignition temperatures and limits of inflammability of gasoline in air in different test environments, and the minimum ignition pressure and minimum size of openings for flame propagation of gasoline-air mixtures, are included. Inerting of gasoline-air mixtures is discussed.
On the star partition dimension of comb product of cycle and path
NASA Astrophysics Data System (ADS)
Alfarisi, Ridho; Darmaji
2017-08-01
Let G = (V, E) be a connected graph with vertex set V(G), edge set E(G) and S ⊆ V(G). Given an ordered partition Π = {S1, S2, S3, …, Sk} of the vertex set V of G, the representation of a vertex v ∈ V with respect to Π is the vector r(v|Π) = (d(v, S1), d(v, S2), …, d(v, Sk)), where d(v, Sk) denotes the distance between the vertex v and the set Sk, with d(v, Sk) = min{d(v, x) | x ∈ Sk}. A partition Π of V(G) is a resolving partition if distinct vertices of G have distinct representations, i.e., for every pair of vertices u, v ∈ V(G), r(u|Π) ≠ r(v|Π). The minimum k for which there is a resolving k-partition of V(G) is the partition dimension of G, denoted by pd(G). The resolving partition Π = {S1, S2, S3, …, Sk} is called a star resolving partition for G if it is a resolving partition and each subgraph induced by Si, 1 ≤ i ≤ k, is a star. The minimum k for which there exists a star resolving partition of V(G) is the star partition dimension of G, denoted by spd(G). Finding the star partition dimension of G is an NP-hard problem. In this paper, we determine the star partition dimension of the comb products of a cycle and a path, namely Cm⊳Pn and Pn⊳Cm, for n ≥ 2 and m ≥ 3.
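The definitions above can be made concrete with a short sketch on a hypothetical toy graph (not from the paper): compute r(v|Π) by breadth-first search and check whether a partition is resolving.

```python
def bfs_dist(adj, src):
    """Shortest-path distances from src by breadth-first search."""
    dist = {src: 0}
    queue = [src]
    while queue:
        u = queue.pop(0)
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def representation(adj, v, partition):
    """r(v|Pi): distance from v to each class, d(v, S) = min over x in S."""
    d = bfs_dist(adj, v)
    return tuple(min(d[x] for x in part) for part in partition)

def is_resolving(adj, partition):
    """A partition is resolving iff all vertex representations are distinct."""
    reps = [representation(adj, v, partition) for v in adj]
    return len(set(reps)) == len(reps)

# cycle C4 on vertices 0..3
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(is_resolving(adj, [{0}, {1, 2, 3}]))    # prints False (vertices 1, 3 collide)
print(is_resolving(adj, [{0}, {1}, {2, 3}]))  # prints True
```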
Velmurugan, J.; Mirkin, M. V.; Svirsky, M. A.; Lalwani, A. K.; Llinas, R. R.
2014-01-01
A growing number of minimally invasive surgical and diagnostic procedures require the insertion of an optical, mechanical, or electronic device in narrow spaces inside a human body. In such procedures, precise motion control is essential to avoid damage to the patient's tissues and/or the device itself. A typical example is the insertion of a cochlear implant, which should ideally be done with minimum physical contact between the moving device and the cochlear canal walls or the basilar membrane. Because optical monitoring is not possible, alternative techniques for submillimeter-scale distance control can be very useful for such procedures. The first requirement for distance control is distance sensing. We developed a novel approach to distance sensing based on the principles of scanning electrochemical microscopy (SECM). The SECM signal, i.e., the diffusion current to a microelectrode, is very sensitive to the distance between the probe surface and any electrically insulating object present in its proximity. With several amperometric microprobes fabricated on the surface of an insertable device, one can monitor the distances between different parts of the moving implant and the surrounding tissues. Unlike typical SECM experiments, in which a disk-shaped tip approaches a relatively smooth sample, complex geometries of the mobile device and its surroundings make distance sensing challenging. Additional issues include the possibility of electrode surface contamination in biological fluids and the requirement for a biologically compatible redox mediator. PMID:24845292
Transport of Escherichia coli in 25 m quartz sand columns
NASA Astrophysics Data System (ADS)
Lutterodt, G.; Foppen, J. W. A.; Maksoud, A.; Uhlenbrook, S.
2011-01-01
To help improve the prediction of bacterial travel distances in aquifers, laboratory experiments were conducted to measure the distance-dependent sticking efficiencies of two low-attaching Escherichia coli strains (UCFL-94 and UCFL-131). The experimental setup consisted of a 25 m long helical column with a diameter of 3.2 cm, packed with 99.1% pure quartz sand saturated with a solution of magnesium sulfate and calcium chloride. Bacterial mass breakthrough at sampling distances ranging from 6 to 25.65 m was observed to quantify bacterial attachment over total transport distances (αL) and sticking efficiencies over large intra-column segments (αi) (> 5 m). Fractions of cells retained (Fi) in a column segment as a function of αi were fitted with a power-law distribution, from which the minimum sticking efficiency, defined as the sticking efficiency of the 0.001% fraction of the total input mass retained that results in a 5 log removal, was extrapolated. Low values of αL in the order of 10⁻⁴ and 10⁻³ were obtained for UCFL-94 and UCFL-131 respectively, while αi values ranged between 10⁻⁶ and 10⁻³ for UCFL-94 and between 10⁻⁵ and 10⁻⁴ for UCFL-131. In addition, both αL and αi decreased with increasing transport distance, and high coefficients of determination (0.99) were obtained for the power-law distributions of αi for the two strains. Extrapolated minimum sticking efficiencies were 10⁻⁷ and 10⁻⁸ for UCFL-94 and UCFL-131, respectively. Fractions of cells exiting the column were 0.19 and 0.87 for UCFL-94 and UCFL-131, respectively. We concluded that environmentally realistic sticking efficiency values in the order of 10⁻⁴ and 10⁻³, and much lower sticking efficiencies in the order of 10⁻⁵, are measurable in the laboratory. Also, power-law distributions of sticking efficiencies commonly observed for limited intra-column distances (< 2 m) are applicable at large transport distances (> 6 m) in columns packed with quartz grains.
High fractions of bacterial populations may possess the so-called minimum sticking efficiency, expressing their ability to be transported over distances longer than those predicted using sticking efficiencies measured in experiments with both short (< 1 m) and long (> 25 m) columns. Furthermore, the variable values of sticking efficiencies within and among the strains indicate heterogeneities, possibly due to variations in cell surface characteristics of the strains. The low sticking efficiency values measured underline the importance of the long columns used in the experiments, and the lower values of the extrapolated minimum sticking efficiencies make the method a valuable tool for delineating protection areas in real-world scenarios.
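The power-law extrapolation step can be sketched as follows. The data are hypothetical, and the model F = c·α^k fitted by least squares in log-log space is an assumption standing in for the authors' exact fitting procedure; the minimum sticking efficiency is read off at a retained fraction of 10⁻⁵ (0.001%).

```python
import math

def fit_power_law(alphas, fractions):
    """Fit F = c * alpha**k by ordinary least squares in log-log space."""
    xs = [math.log(a) for a in alphas]
    ys = [math.log(f) for f in fractions]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    c = math.exp(my - k * mx)
    return c, k

def minimum_alpha(c, k, fraction=1e-5):
    """Sticking efficiency at which the retained fraction drops to `fraction`
    (the 0.001% of input mass used to define the minimum sticking efficiency)."""
    return (fraction / c) ** (1.0 / k)

# hypothetical (sticking efficiency, retained fraction) data
alphas    = [1e-3, 1e-4, 1e-5, 1e-6]
fractions = [0.5, 0.1, 0.02, 0.004]
c, k = fit_power_law(alphas, fractions)
print(minimum_alpha(c, k))
```

For this illustrative dataset the extrapolated minimum sticking efficiency lands around 10⁻¹⁰, several orders below the smallest measured value, mirroring the extrapolation described in the abstract.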
Cloud field classification based on textural features
NASA Technical Reports Server (NTRS)
Sengupta, Sailes Kumar
1989-01-01
An essential component of global climate research is accurate cloud cover and type determination. Of the two approaches to texture-based classification (statistical and structural), only the former is effective in the classification of natural scenes such as land, ocean, and atmosphere. In the statistical approach that was adopted, parameters characterizing the stochastic properties of the spatial distribution of grey levels in an image are estimated and then used as features for cloud classification. Two types of textural measures were used. One is based on the distribution of the grey level difference vector (GLDV), and the other on a set of textural features derived from the MaxMin cooccurrence matrix (MMCM). The GLDV method looks at the difference D of grey levels at pixels separated by a horizontal distance d and computes several statistics based on this distribution, which are then used as features in subsequent classification. The MaxMin textural features, on the other hand, are based on the MMCM, a matrix whose (I,J)th entry gives the relative frequency of occurrences of the grey level pair (I,J) that are consecutive, thresholded local extremes separated by a given pixel distance d. Textural measures are then computed from this matrix in much the same manner as in texture computation using the grey level cooccurrence matrix. The database consists of 37 cloud field scenes from LANDSAT imagery using a near-IR visible channel. The classification algorithm used is the well-known Stepwise Discriminant Analysis. The overall accuracy was estimated by the percentage of correct classifications in each case. It turns out that both types of classifiers, at their best combination of features and at any given spatial resolution, give approximately the same classification accuracy.
A neural-network-based classifier with a feed-forward architecture and a back-propagation training algorithm is used to increase the classification accuracy obtained with these two classes of features. Preliminary results based on the GLDV textural features alone look promising.
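As a rough illustration of the GLDV idea described in this abstract, the sketch below histograms absolute grey-level differences at a horizontal offset d and derives a few commonly used texture statistics. The particular statistics and their names are illustrative; the study's exact feature set is not specified here.

```python
import numpy as np

def gldv_features(img, d=1, levels=256):
    """Grey level difference vector (GLDV) statistics (illustrative).

    Builds the histogram of absolute grey-level differences D between
    pixels separated by a horizontal distance d, then computes a few
    statistics of that distribution commonly used as texture features.
    """
    img = np.asarray(img, dtype=int)
    diff = np.abs(img[:, d:] - img[:, :-d]).ravel()
    hist = np.bincount(diff, minlength=levels).astype(float)
    p = hist / hist.sum()                       # P(D = k)
    k = np.arange(len(p))
    mean = (k * p).sum()                        # mean difference
    contrast = (k ** 2 * p).sum()               # second moment
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    asm = (p ** 2).sum()                        # angular second moment
    return {"mean": mean, "contrast": contrast,
            "entropy": entropy, "asm": asm}
```

For a perfectly uniform image the difference distribution collapses to zero, giving zero mean and contrast and an angular second moment of one.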
Fang, Hongqing; He, Lei; Si, Hao; Liu, Peng; Xie, Xiaolei
2014-09-01
In this paper, the back-propagation (BP) algorithm is used to train a feed-forward neural network for human activity recognition in smart home environments, and an inter-class distance method for feature selection from observed motion sensor events is discussed and tested. The activity recognition performance of the BP-trained neural network is then evaluated and compared with that of two probabilistic algorithms: the Naïve Bayes (NB) classifier and the Hidden Markov Model (HMM). The results show that different feature datasets yield different activity recognition accuracy; selecting unsuitable feature datasets increases the computational complexity and degrades the recognition accuracy. Furthermore, the BP-trained neural network performs relatively better at human activity recognition than the NB classifier and HMM. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
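The inter-class distance feature selection step could be sketched as follows, assuming a Fisher-like score (distance between per-class feature means, normalised by spread); the abstract does not give the paper's exact formulation, so this is only an illustration.

```python
import numpy as np

def interclass_distance_ranking(X, y):
    """Rank features by a simple inter-class distance score (sketch).

    For each feature, sum the absolute distances between per-class
    means, normalised by that feature's overall standard deviation.
    Returns feature indices, most discriminative first.
    """
    X, y = np.asarray(X, float), np.asarray(y)
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    score = np.zeros(X.shape[1])
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            score += np.abs(means[i] - means[j])
    score /= X.std(axis=0) + 1e-12   # avoid division by zero
    return np.argsort(score)[::-1]
```

A feature whose class means are far apart relative to its variance ranks first; a constant or class-independent feature ranks last.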
Li, Qingbo; Hao, Can; Kang, Xue; Zhang, Jialin; Sun, Xuejun; Wang, Wenbo; Zeng, Haishan
2017-11-27
Combining Fourier transform infrared (FTIR) spectroscopy with endoscopy, it is expected that noninvasive, rapid detection of colorectal cancer can be performed in vivo in the future. In this study, FTIR spectra were collected from 88 endoscopic biopsy colorectal tissue samples (41 colitis and 47 cancers). A new method, entropy-weight local-hyperplane k-nearest-neighbor (EWHK), an improved version of K-local hyperplane distance nearest-neighbor (HKNN), is proposed for tissue classification. To avoid the limitations of high dimensionality and small nearest-neighbor values, the new EWHK method calculates feature weights based on information entropy. Averaged over random classification runs, the EWHK classifier differentiated cancer from colitis samples with a sensitivity of 81.38% and a specificity of 92.69%.
Liu, Chang; Wang, Guofeng; Xie, Qinglu; Zhang, Yanchao
2014-01-01
Effective fault classification of rolling element bearings provides an important basis for ensuring safe operation of rotating machinery. In this paper, a novel vibration sensor-based fault diagnosis method using an Ellipsoid-ARTMAP network (EAM) and a differential evolution (DE) algorithm is proposed. The original features are first extracted from vibration signals based on wavelet packet decomposition. Then, a minimum-redundancy maximum-relevancy algorithm is introduced to select the most prominent features and so reduce the feature dimension. Finally, a DE-based EAM (DE-EAM) classifier is constructed to realize the fault diagnosis. The major characteristic of EAM is that the sample distribution of each category is modelled by a hyper-ellipsoid node and a smoothing operation algorithm. It can therefore depict the decision boundary of dispersed samples accurately and effectively avoid over-fitting. To optimize the EAM network parameters, the DE algorithm is presented and two objectives, classification accuracy and node number, are simultaneously introduced as the fitness functions. Meanwhile, an exponential criterion is proposed to realize the final selection of the optimal parameters. To prove the effectiveness of the proposed method, the vibration signals of four types of rolling element bearings under different loads were collected. Moreover, to improve the robustness of the classifier evaluation, a two-fold cross validation scheme was adopted and the order of feature samples was randomly rearranged ten times within each fold. The results show that the DE-EAM classifier can recognize the fault categories of the rolling element bearings reliably and accurately. PMID:24936949
Djennad, Abdelmajid; Lo Iacono, Giovanni; Sarran, Christophe; Fleming, Lora E; Kessel, Anthony; Haines, Andy; Nichols, Gordon L
2018-04-27
To understand the impact of weather on infectious diseases, information on weather parameters at patient locations is needed, but this is not always accessible due to confidentiality or data availability. Weather parameters at nearby locations are often used as a proxy, but the accuracy of this practice is not known. Daily Campylobacter and Cryptosporidium cases across England and Wales were linked to local temperature and rainfall at the residence postcodes of the patients and at the corresponding postcodes of the laboratory where the patient's specimen was tested. The paired values of daily rainfall and temperature for the laboratory versus residence postcodes were interpolated from weather station data, and the results were analysed for agreement using linear regression. We also assessed potential dependency of the findings on the relative geographic distance between the patient's residence and the laboratory. There was significant and strong agreement between the daily values of rainfall and temperature at diagnostic laboratories with the values at the patient residence postcodes for samples containing the pathogens Campylobacter or Cryptosporidium. For rainfall, the R-squared was 0.96 for the former and 0.97 for the latter, and for maximum daily temperature, the R-squared was 0.99 for both. The overall mean distance between the patient residence and the laboratory was 11.9 km; however, the distribution of these distances exhibited a heavy tail, with some rare situations where the distance between the patient residence and the laboratory was larger than 500 km. These large distances impact the distributions of the weather variable discrepancies (i.e. the differences between weather parameters estimated at patient residence postcodes and those at laboratory postcodes), with discrepancies up to ±10 °C for the minimum and maximum temperature and 20 mm for rainfall. 
Nevertheless, the distributions of discrepancies (estimated separately for minimum and maximum temperature and rainfall), based on the cases where the distance between the patient residence and the laboratory was within 20 km, still exhibited tails somewhat longer than the corresponding exponential fits, suggesting modest small-scale variations in temperature and rainfall. The findings confirm that, for the purposes of studying the relationships between meteorological variables and infectious diseases using data based on laboratory postcodes, the weather results are sufficiently similar to justify the use of laboratory postcode as a surrogate for domestic postcode. Exclusion of the small percentage of cases where there is a large distance between the residence and the laboratory could increase the precision of estimates, but there are generally strong associations between daily weather parameters at residence and laboratory.
Chen, Hsing-Yu; Kaneda, Noriaki; Lee, Jeffrey; Chen, Jyehong; Chen, Young-Kai
2017-03-20
The feasibility of a single sideband (SSB) PAM4 intensity-modulation and direct-detection (IM/DD) transmission based on a CMOS ADC and DAC is experimentally demonstrated in this work. To cost-effectively build a >50 Gb/s system and to extend the transmission distance, a low-cost EML and a passive optical filter are utilized to generate the SSB signal. However, the EML-induced chirp and dispersion-induced power fading constrain the requirements on the SSB filter. To separate the effect of signal-signal beating interference, filters with different roll-off factors are employed to demonstrate the performance tolerance at different transmission distances. Moreover, a high-resolution spectrum analysis is proposed to depict the system limitation. Experimental results show that a minimum roll-off factor of 7 dB/10 GHz is required to achieve a 51.84 Gb/s, 40-km transmission with only linear feed-forward equalization.
EEG character identification using stimulus sequences designed to maximize minimal Hamming distance.
Fukami, Tadanori; Shimada, Takamasa; Forney, Elliott; Anderson, Charles W
2012-01-01
In this study, we have improved upon the P300 speller brain-computer interface paradigm by introducing a new character encoding method. Our approach detects the intended character not by classifying target and nontarget responses, but by identifying the character that maximizes the difference between P300 amplitudes for target and nontarget stimuli. Each bit in a character's code corresponds to a flash ('1') or no flash ('0'). The codes were constructed to maximize the minimum Hamming distance between the characters. Electroencephalography was used to identify the characters from a waveform calculated by adding and subtracting the responses to target and non-target stimuli according to the codes, respectively. This stimulus presentation method was applied to a 3×3 character matrix, and the results were compared with those of a conventional P300 speller of the same size. Our method reduced the time until the correct character was obtained by 24%.
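A simple greedy construction illustrates the idea of selecting codewords whose pairwise Hamming distance stays above a floor; the paper's actual codes maximize the minimum distance, which this sketch does not guarantee.

```python
from itertools import product

def hamming(a, b):
    """Hamming distance: number of positions where two words differ."""
    return sum(x != y for x, y in zip(a, b))

def greedy_code(n_chars, length, d_min):
    """Greedily pick binary codewords of the given length whose pairwise
    Hamming distance is at least d_min (a sketch, not an optimal code).

    Returns a list of n_chars codewords, or None if the greedy scan of
    all 2**length words cannot find enough at this distance."""
    code = []
    for word in product((0, 1), repeat=length):
        if all(hamming(word, c) >= d_min for c in code):
            code.append(word)
            if len(code) == n_chars:
                return code
    return None
```

For a 3×3 speller one needs 9 codewords; increasing d_min trades code length against separability of the evoked responses.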
Geographic market definition: the case of Medicare-reimbursed skilled nursing facility care.
Bowblis, John R; North, Phillip
2011-01-01
Correct geographic market definition is important to study the impact of competition. In the nursing home industry, most studies use geopolitical boundaries to define markets. This paper uses the Minimum Data Set to generate an alternative market definition based on patient flows for Medicare skilled nursing facilities. These distances are regressed against a range of nursing home and area characteristics to determine what influences market size. We compared Herfindahl-Hirschman Indices based on county and resident-flow measures of geographic market definition. Evidence from this comparison suggests that using the county for the market definition is not appropriate across all states.
New nearby white dwarfs from Gaia DR1 TGAS and UCAC5/URAT
NASA Astrophysics Data System (ADS)
Scholz, R.-D.; Meusinger, H.; Jahreiß, H.
2018-05-01
Aims: Using an accurate Tycho-Gaia Astrometric Solution (TGAS) 25 pc sample that is nearly complete for GK stars and selecting common proper motion (CPM) candidates from the 5th United States Naval Observatory CCD Astrograph Catalog (UCAC5), we search for new white dwarf (WD) companions around nearby stars with relatively small proper motions. Methods: To investigate known CPM systems in TGAS and to select CPM candidates in TGAS+UCAC5, we took into account the expected effect of orbital motion on the proper motion and proper motion catalogue errors. Colour-magnitude diagrams (CMDs) of M_J vs. J - Ks and M_G vs. G - J were used to verify CPM candidates from UCAC5. Assuming their common distance with a given TGAS star, we searched for candidates that occupied similar regions in the CMDs as the few known nearby WDs (four in TGAS) and WD companions (three in TGAS+UCAC5). The CPM candidates with colours and absolute magnitudes corresponding neither to the main sequence nor to the WD sequence were considered as doubtful or subdwarf candidates. Results: With a minimum proper motion of 60 mas yr^-1, we selected three WD companion candidates, two of which are also confirmed by their significant parallaxes measured in URAT data, whereas the third may also be a chance alignment of a distant halo star with a nearby TGAS star at an angular separation of about 465 arcsec. One additional nearby WD candidate was found from its URAT parallax and G, J, Ks photometry. With HD 166435 B orbiting a well-known G1 star at ≈24.6 pc with a projected physical separation of ≈700 AU, we discovered one of the hottest WDs, classified by us as DA2.0 ± 0.2, in the solar neighbourhood. We also found TYC 3980-1081-1 B, a strong cool WD companion candidate around a recently identified new solar neighbour with a TGAS parallax corresponding to a distance of ≈8.3 pc and our photometric classification as an ≈M2 dwarf.
This raises the question of whether previous assumptions on the completeness of the WD sample to a distance of 13 pc were correct. Partly based on observations with the 2.2 m telescope of the German-Spanish Astronomical Centre at Calar Alto, Spain
Kim, Taehyun; Lee, Kiyoung; Yang, Wonho; Yu, Seung Do
2012-08-01
Although the global positioning system (GPS) has been suggested as an alternative way to determine time-location patterns, its use has been limited. The purpose of this study was to evaluate a new analytical method of classifying time-location data obtained by GPS. A field technician carried a GPS device while simulating various scripted activities and recorded all movements by the second in an activity diary. The GPS device recorded location data once every 15 s. The daily monitoring was repeated 18 times. The time-location data obtained by the GPS were compared with the activity diary to determine selection criteria for the classification of the GPS data. The GPS data were classified into four microenvironments (residential indoors, other indoors, in transit, and walking outdoors); the selection criteria were the number of satellites used (used-NSAT), speed, and distance from the residence. The GPS data were classified as indoors when the used-NSAT was below 9. Data classified as indoors were further classified as residential indoors when the distance from the residence was less than 40 m; otherwise, they were classified as other indoors. Data classified as outdoors were further classified as being in transit when the speed exceeded 2.5 m s^-1; otherwise, they were classified as walking outdoors. The average simple percentage agreement between the time-location classifications and the activity diary was 84.3 ± 12.4%, and the kappa coefficient was 0.71. The average differences between the time diary and the GPS results were 1.6 ± 2.3 h for time spent residential indoors, 0.9 ± 1.7 h for time spent in other indoors, 0.4 ± 0.4 h for time spent in transit, and 0.8 ± 0.5 h for time spent walking outdoors. This method can be used to determine time-activity patterns in exposure-science studies.
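The thresholds reported in the abstract translate directly into a small rule-based classifier. The sketch below uses exactly those cut-offs (used-NSAT < 9 for indoors, 40 m for residential, 2.5 m/s for transit); the function and argument names are illustrative.

```python
def classify_gps(used_nsat, speed_ms, dist_from_home_m):
    """Rule-based classification of a GPS fix into one of four
    microenvironments, using the thresholds from the abstract."""
    if used_nsat < 9:
        # few visible satellites suggests the receiver is indoors
        if dist_from_home_m < 40:
            return "residential indoors"
        return "other indoors"
    # outdoors: separate vehicle travel from walking by speed
    if speed_ms > 2.5:
        return "transit"
    return "walking outdoors"
```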
Ryan, S E; Blasi, D A; Anglin, C O; Bryant, A M; Rickard, B A; Anderson, M P; Fike, K E
2010-07-01
Use of electronic animal identification technologies by livestock managers is increasing, but performance of these technologies can be variable when used in livestock production environments. This study was conducted to determine whether 1) read distance of low-frequency radio frequency identification (RFID) transceivers is affected by the type of transponder being interrogated; 2) read distance variation of low-frequency RFID transceivers is affected by transceiver manufacturer; and 3) read distance of various transponder-transceiver manufacturer combinations meets the 2004 United States Animal Identification Plan (USAIP) bovine standards subcommittee minimum read distance recommendation of 60 cm. Twenty-four transceivers (n = 5 transceivers per manufacturer for Allflex, Boontech, Farnam, and Osborne; n = 4 transceivers for Destron Fearing) were tested with 60 transponders [n = 10 transponders per type for Allflex full duplex B (FDX-B), Allflex half duplex (HDX), Destron Fearing FDX-B, Farnam FDX-B, and Y-Tex FDX-B; n = 6 for Temple FDX-B (EM Microelectronic chip); and n = 4 for Temple FDX-B (HiTag chip)] presented in the parallel orientation. All transceivers and transponders met International Organization for Standardization 11784 and 11785 standards. Transponders represented both half duplex and full duplex low-frequency air interface technologies. Use of a mechanical trolley device enabled the transponders to be presented to the center of each transceiver at a constant rate, thereby reducing human error. Transponder and transceiver manufacturer interacted (P < 0.0001) to affect read distance, indicating that transceiver performance was greatly dependent upon the transponder type being interrogated. Twenty-eight of 30 combinations of transceivers and transponders evaluated met the minimum recommended USAIP read distance. Mean read distances across all 30 combinations ranged from 45.1 to 129.4 cm.
Transceiver manufacturer and transponder type interacted to affect read distance variance (P < 0.05). Maximum read distance performance of low-frequency RFID technologies with low variance can be achieved by selecting specific transponder-transceiver combinations.
Zhou, L; Goodman, G; Martikainen, A
2013-01-01
Continuous airflow monitoring can improve the safety of the underground work force by ensuring the uninterrupted and controlled distribution of mine ventilation to all working areas. Air velocity measurements vary significantly and can change rapidly depending on the exact measurement location and, in particular, due to the presence of obstructions in the air stream. Air velocity must be measured at locations away from obstructions to avoid the vortices and eddies that can produce inaccurate readings. Further, an uninterrupted measurement path cannot always be guaranteed when using continuous airflow monitors due to the presence of nearby equipment, personnel, roof falls and rib rolls. Effective use of these devices requires selection of a minimum distance from an obstacle, such that an air velocity measurement can be made but not affected by the presence of that obstacle. This paper investigates the impacts of an obstruction on the behavior of downstream airflow using a numerical CFD model calibrated with experimental test results from underground testing. Factors including entry size, obstruction size and the inlet or incident velocity are examined for their effects on the distributions of airflow around an obstruction. A relationship is developed between the minimum measurement distance and the hydraulic diameters of the entry and the obstruction. A final analysis considers the impacts of continuous monitor location on the accuracy of velocity measurements and on the application of minimum measurement distance guidelines.
Determination of the Sun's offset from the Galactic plane using pulsars
NASA Astrophysics Data System (ADS)
Yao, J. M.; Manchester, R. N.; Wang, N.
2017-07-01
We derive the Sun's offset from the local mean Galactic plane (z⊙) using the observed z-distribution of young pulsars. Pulsar distances are obtained from measurements of annual parallax, H I absorption spectra or associations where available, and otherwise from the observed pulsar dispersion and a model for the distribution of free electrons in the Galaxy. We fit the cumulative distribution function for a sech^2(z)-distribution function, representing an isothermal self-gravitating disc, with uncertainties estimated using the bootstrap method. We take pulsars having characteristic age τc ≲ 10^6.5 yr and located within 4.5 kpc of the Sun, omitting those within the local spiral arm and those significantly affected by the Galactic warp, and solve for z⊙ and the scaleheight, H, for different cut-offs in τc. We compute these quantities using just the independently determined distances, and these together with dispersion measure (DM)-based distances separately using the YMW16 and NE2001 Galactic electron density models. We find that an age cut-off at 10^5.75 yr with YMW16 DM distances gives the best results, with a minimum uncertainty in z⊙ and an asymptotically stable value for H, showing that, at this age and below, the observed pulsar z-distribution is dominated by the dispersion in their birth locations. From this sample of 115 pulsars, we obtain z⊙ = 13.4 ± 4.4 pc and H = 56.9 ± 6.5 pc, similar to estimated scaleheights for OB stars and open clusters. Consistent results are obtained using the independent-only distances and using the NE2001 model for the DM-based distances.
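A simplified sketch of the fitting step: the CDF of a sech^2((z - z0)/H) density is F(z) = [1 + tanh((z - z0)/H)]/2, so z0 and H can be recovered by matching this form to the empirical cumulative distribution of pulsar z-values. This omits the paper's bootstrap uncertainties and age/distance cuts, and the function names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def sech2_cdf(z, z0, H):
    """CDF of a sech^2((z - z0)/H) density (isothermal disc profile)."""
    return 0.5 * (1.0 + np.tanh((z - z0) / H))

def fit_offset_scaleheight(z):
    """Fit the offset z0 and scaleheight H by least squares between the
    empirical CDF of the z-values and the sech^2 profile (a sketch)."""
    z = np.sort(np.asarray(z, float))
    ecdf = (np.arange(len(z)) + 0.5) / len(z)
    (z0, H), _ = curve_fit(sech2_cdf, z, ecdf, p0=[0.0, 50.0])
    return z0, H
```

Sampling from the sech^2 density is easy by inverse transform, z = z0 + H·arctanh(2u - 1) for uniform u, which makes the fit straightforward to check.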
NASA Astrophysics Data System (ADS)
Sneath, P. H. A.
A BASIC program is presented for significance tests to determine whether a dendrogram is derived from clustering of points that belong to a single multivariate normal distribution. The significance tests are based on statistics of the Kolmogorov-Smirnov type, obtained by comparing the observed cumulative graph of branch levels with a graph for the hypothesis of multivariate normality. The program also permits testing whether the dendrogram could be from a cluster of lower dimensionality due to character correlations. The program makes provision for three similarity coefficients: (1) Euclidean distances, (2) squared Euclidean distances, and (3) Simple Matching Coefficients; and for five cluster methods: (1) WPGMA, (2) UPGMA, (3) Single Linkage (or Minimum Spanning Trees), (4) Complete Linkage, and (5) Ward's Increase in Sums of Squares. The program is entitled DENBRAN.
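The core of such a test is a Kolmogorov-Smirnov-type statistic: the maximum absolute gap between the empirical cumulative graph of branch levels and a reference CDF. A minimal sketch (in Python rather than the program's BASIC, and without the program's null-hypothesis graph construction):

```python
import numpy as np

def ks_statistic(levels, ref_cdf):
    """Kolmogorov-Smirnov-type statistic: maximum absolute difference
    between the empirical CDF of the branch levels and a reference CDF
    given as a callable. Both one-sided gaps are checked, as the ECDF
    is a step function."""
    x = np.sort(np.asarray(levels, float))
    n = len(x)
    ecdf_hi = np.arange(1, n + 1) / n   # ECDF just after each step
    ecdf_lo = np.arange(0, n) / n       # ECDF just before each step
    f = ref_cdf(x)
    return float(max(np.max(ecdf_hi - f), np.max(f - ecdf_lo)))
```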
Open and Distance Learning for Health: Supporting Health Workers through Education and Training
ERIC Educational Resources Information Center
Dodds, Tony
2011-01-01
This case study surveys the growing use of open and distance learning approaches to the provision of support, education and training to health workers over the past few decades. It classifies such uses under four headings, providing brief descriptions from the literature of a few examples of each group. In conclusion, it identifies key lessons…
YBYRÁ facilitates comparison of large phylogenetic trees.
Machado, Denis Jacob
2015-07-01
The number and size of tree topologies being compared by phylogenetic systematists are increasing due to technological advancements in high-throughput DNA sequencing. However, we still lack tools to facilitate comparison among phylogenetic trees with large numbers of terminals. The "YBYRÁ" project integrates software solutions for data analysis in phylogenetics. It comprises tools for (1) topological distance calculation based on the number of shared splits or clades, (2) sensitivity analysis and automatic generation of sensitivity plots, and (3) clade diagnoses based on different categories of synapomorphies. YBYRÁ also provides (4) an original framework to facilitate the search for potential rogue taxa based on how much they affect average matching split distances (using MSdist). YBYRÁ facilitates comparison of large phylogenetic trees and outperforms competing software in terms of usability and time efficiency, especially for large data sets. The programs that comprise this toolkit are written in Python, so they require no installation and have minimal dependencies. The entire project is available under an open-source licence at http://www.ib.usp.br/grant/anfibios/researchSoftware.html .
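A topological distance "based on the number of shared splits or clades" can be sketched in a Robinson-Foulds style: count the clades present in one tree but not the other. This is only an illustration of the general idea, not YBYRÁ's exact measure; clades are represented here as frozensets of terminal names.

```python
def clade_distance(clades_a, clades_b):
    """Topological distance between two trees given as their clade sets:
    the size of the symmetric difference, i.e. the number of clades
    found in one tree but not the other (Robinson-Foulds-style)."""
    a, b = set(clades_a), set(clades_b)
    return len(a ^ b)
```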
Spatial forecast of landslides in three gorges based on spatial data mining.
Wang, Xianmin; Niu, Ruiqing
2009-01-01
The Three Gorges is a region with a very high landslide distribution density and a concentrated population. In Three Gorges there are often landslide disasters, and the potential risk of landslides is tremendous. In this paper, focusing on Three Gorges, which has a complicated landform, spatial forecasting of landslides is studied by establishing 20 forecast factors (spectra, texture, vegetation coverage, water level of reservoir, slope structure, engineering rock group, elevation, slope, aspect, etc.). China-Brazil Earth Resources Satellite (CBERS) images were used with a C4.5 decision tree to mine spatial landslide forecast criteria for Guojiaba Town (Zigui County) in Three Gorges, and this knowledge was then applied to perform intelligent spatial landslide forecasts for Guojiaba Town. All landslides lie in the regions forecast as dangerous or unstable, so the forecast result is good. The method proposed in the paper is compared with seven other methods: IsoData, K-Means, Mahalanobis Distance, Maximum Likelihood, Minimum Distance, Parallelepiped and Information Content Model. The experimental results show that the method proposed in this paper has a high forecast precision, noticeably higher than that of the other seven methods.
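Among the comparison methods named above, the minimum distance classifier is the simplest: assign each sample to the class whose mean vector is nearest in Euclidean distance. A minimal sketch (illustrative names; not the paper's implementation):

```python
import numpy as np

def fit_class_means(X, y):
    """Compute the mean feature vector of each class."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, means

def minimum_distance_classify(X, classes, means):
    """Minimum-distance-to-mean classifier: each sample is assigned to
    the class whose mean is nearest in Euclidean distance."""
    # distances: (n_samples, n_classes)
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

Unlike maximum likelihood or Mahalanobis distance classification, this ignores class covariances, which is why it is fast but can be less accurate on elongated class distributions.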
Multivariate pattern analysis for MEG: A comparison of dissimilarity measures.
Guggenmos, Matthias; Sterzer, Philipp; Cichy, Radoslaw Martin
2018-06-01
Multivariate pattern analysis (MVPA) methods such as decoding and representational similarity analysis (RSA) are growing rapidly in popularity for the analysis of magnetoencephalography (MEG) data. However, little is known about the relative performance and characteristics of the specific dissimilarity measures used to describe differences between evoked activation patterns. Here we used a multisession MEG data set to qualitatively characterize a range of dissimilarity measures and to quantitatively compare them with respect to decoding accuracy (for decoding) and between-session reliability of representational dissimilarity matrices (for RSA). We tested dissimilarity measures from a range of classifiers (Linear Discriminant Analysis - LDA, Support Vector Machine - SVM, Weighted Robust Distance - WeiRD, Gaussian Naïve Bayes - GNB) and distances (Euclidean distance, Pearson correlation). In addition, we evaluated three key processing choices: 1) preprocessing (noise normalisation, removal of the pattern mean), 2) weighting decoding accuracies by decision values, and 3) computing distances in three different partitioning schemes (non-cross-validated, cross-validated, within-class-corrected). Four main conclusions emerged from our results. First, appropriate multivariate noise normalization substantially improved decoding accuracies and the reliability of dissimilarity measures. Second, LDA, SVM and WeiRD yielded high peak decoding accuracies and nearly identical time courses. Third, while using decoding accuracies for RSA was markedly less reliable than continuous distances, this disadvantage was ameliorated by decision-value-weighting of decoding accuracies. Fourth, the cross-validated Euclidean distance provided unbiased distance estimates and highly replicable representational dissimilarity matrices. 
Overall, we strongly advise the use of multivariate noise normalisation as a general preprocessing step, recommend LDA, SVM and WeiRD as classifiers for decoding and highlight the cross-validated Euclidean distance as a reliable and unbiased default choice for RSA. Copyright © 2018 Elsevier Inc. All rights reserved.
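The cross-validated squared Euclidean distance recommended above has a compact form: the inner product of the condition-pattern differences estimated from two independent data partitions. Because noise is independent across partitions, its expectation is zero when the true patterns are identical, which is the unbiasedness property the abstract highlights. A minimal sketch:

```python
import numpy as np

def cv_euclidean_sq(a1, b1, a2, b2):
    """Cross-validated squared Euclidean distance between conditions
    a and b, using pattern estimates from two independent partitions
    (1 and 2): (a1 - b1) . (a2 - b2). With independent noise across
    partitions this is an unbiased estimate of the true squared
    distance, unlike the ordinary squared Euclidean distance."""
    a1, b1, a2, b2 = (np.asarray(v, float) for v in (a1, b1, a2, b2))
    return float((a1 - b1) @ (a2 - b2))
```

When the two partitions give identical estimates the formula reduces to the ordinary squared Euclidean distance; with noisy estimates of identical patterns it fluctuates around zero rather than being inflated.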
Sivapalan, Sean T.; Vella, Jarrett H.; Yang, Timothy K.; Dalton, Matthew J.; Haley, Joy E.; Cooper, Thomas M.; Urbas, Augustine M.; Tan, Loon-Seng; Murphy, Catherine J.
2013-01-01
Surface-plasmon-initiated interference effects of polyelectrolyte-coated gold nanorods on the two-photon absorption of an organic chromophore were investigated. Using gold nanorods bearing 2, 4, 6 and 8 polyelectrolyte layers, the role of the plasmonic fields as a function of distance was examined. An unusual distance dependence was found: enhancements in the two-photon cross-section were at a minimum at an intermediate distance, then rose again at a further distance. The observed enhancement values were compared to theoretical predictions using finite element analysis and showed good agreement, attributable to constructive and destructive interference effects. PMID:23687561
NASA Astrophysics Data System (ADS)
Choi, Eun-Jin; Jeong, Moon-Taeg; Jang, Seong-Joo; Choi, Nam-Gil; Han, Jae-Bok; Yang, Nam-Hee; Dong, Kyung-Rae; Chung, Woon-Kwan; Lee, Yun-Jong; Ryu, Young-Hwan; Choi, Sung-Hyun; Seong, Kyeong-Jeong
2014-01-01
This study examined whether scanning could be performed with minimum dose and minimum exposure to the patient after an attenuation correction. A Hoffman 3D Brain Phantom was used in BIO_40 and D_690 PET/CT scanners, and the CT dose for the equipment was classified as a low dose (minimum dose), medium dose (general dose for scanning) and high dose (dose with use of contrast medium) before obtaining the image at a fixed kilo-voltage-peak (kVp) and milliampere (mA) that were adjusted gradually in 17-20 stages. A PET image was then obtained to perform an attenuation correction based on an attenuation map before analyzing the dose difference. Depending on tube current in the range of 33-190 milliampere-seconds (mAs) when BIO_40 was used, a significant difference in the effective dose was observed between the minimum and the maximum mAs (p < 0.05). According to a Scheffe post-hoc test, the ratio of the minimum to the maximum of the effective dose was increased by approximately 5.26-fold. Depending on the change in the tube current in the range of 10-200 mA when D_690 was used, a significant difference in the effective dose was observed between the minimum and the maximum mA (p < 0.05). The Scheffe post-hoc test revealed a 20.5-fold difference. In conclusion, because the effective exposure dose increases with increasing tube current, the exposure in a brain scan can be reduced if the CT dose for the transmission scan is minimized.
NASA Astrophysics Data System (ADS)
Farshadfar, M.; Farshadfar, E.
The present research was conducted to determine the genetic variability of 18 lucerne cultivars based on morphological and biochemical markers. The traits studied were plant height, tiller number, biomass, dry yield, dry yield/biomass, dry leaf/dry yield, macro- and microelements, crude protein, dry matter, crude fiber and ash percentage, and SDS-PAGE in seed and leaf samples. Field experiments comprised 18 plots of two-meter rows. Data from the morphological, chemical and SDS-PAGE markers were analyzed using SPSSWIN software and the multivariate statistical procedures cluster analysis (UPGMA) and principal component analysis. Analysis of variance and mean comparison for morphological traits showed significant differences among genotypes. Genotypes 13 and 15 had the greatest values for most traits. Across characters, the phenotypic coefficient of variation (PCV) ranged from 12.49 to 26.58%, while the genotypic coefficient of variation (GCV) ranged from 6.84 to 18.84%. The greatest heritability (Hb) was 0.94, for stem number. Based on morphological traits, the lucerne genotypes could be classified into four clusters, and 94% of the variance among the genotypes was explained by two principal components. Based on chemical traits, they were classified into five groups, with 73.492% of the variance explained by four principal components; dry matter, protein, fiber, P, K, Na, Mg and Zn had the highest variance. Based on the SDS-PAGE patterns, all genotypes were classified into three clusters. The greatest genetic distance was between cultivar 10 and the others; it would therefore be a suitable parent in a breeding program.
Sequence data - Magnitude and implications of some ambiguities.
NASA Technical Reports Server (NTRS)
Holmquist, R.; Jukes, T. H.
1972-01-01
A stochastic model is applied to the divergence of the horse-pig lineage from a common ancestor in terms of the alpha and beta chains of hemoglobin and fibrinopeptides. The results are compared with those based on the minimum mutation distance model of Fitch (1972). Buckwheat and cauliflower cytochrome c sequences are analyzed to demonstrate their ambiguities. A comparative analysis of evolutionary rates for various proteins of horses and pigs shows that errors of considerable magnitude are introduced by Glx and Asx ambiguities into evolutionary conclusions drawn from sequences of incompletely analyzed proteins.
Semi-supervised morphosyntactic classification of Old Icelandic.
Urban, Kryztof; Tangherlini, Timothy R; Vijūnas, Aurelijus; Broadwell, Peter M
2014-01-01
We present IceMorph, a semi-supervised morphosyntactic analyzer of Old Icelandic. In addition to machine-read corpora and dictionaries, it applies a small set of declension prototypes to map corpus words to dictionary entries. A web-based GUI allows expert users to modify and augment data through an online process. A machine learning module incorporates prototype data, edit-distance metrics, and expert feedback to continuously update part-of-speech and morphosyntactic classification. An advantage of the analyzer is its ability to achieve competitive classification accuracy with minimum training data.
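The edit-distance metrics mentioned in the abstract are typically Levenshtein distances: the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another, which is a natural way to score candidate mappings from inflected corpus forms to dictionary headwords. A minimal dynamic-programming sketch (the Old Icelandic word pair is an illustrative example, not taken from the IceMorph data):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic two-row dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# e.g. scoring an inflected corpus form against a dictionary headword
levenshtein("hestinum", "hestr")  # 'horse': inflected form vs. headword
```

Low distances flag likely headword matches, which the declension prototypes and expert feedback can then confirm or reject.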
Pattern recognition for passive polarimetric data using nonparametric classifiers
NASA Astrophysics Data System (ADS)
Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.
2005-08-01
Passive polarization based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of any given number of classes, and classification involves making decisions that minimize the average probability of making an incorrect decision. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of the class-conditional probability density functions (likelihoods) and the prior probabilities. A probabilistic neural network (PNN), a nonparametric method that can compute Bayes-optimal boundaries, and a K-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
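The KNN classifier used above approximates the Bayes decision rule nonparametrically: assigning the query to the majority class among its k nearest training samples approximates choosing the class with the highest local a posteriori probability. A minimal sketch with hypothetical 2-D polarimetric features (the feature values and class names below are invented for illustration):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs.
    Majority vote among the k nearest neighbours under Euclidean distance."""
    neighbours = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical features (e.g. degree and angle of polarization, normalized)
train = [((0.10, 0.20), "background"), ((0.15, 0.25), "background"),
         ((0.80, 0.90), "target"), ((0.85, 0.95), "target"),
         ((0.90, 0.80), "target")]
knn_classify(train, (0.82, 0.88), k=3)
```

A PNN replaces the hard k-neighbour vote with Parzen-window density estimates per class, but both converge to the Bayes-optimal boundary as the training set grows.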
Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S
2003-01-01
In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight-updating algorithm that provides minimum degradation of the previous network knowledge while taking into account the current content conditions. A retraining set is constructed and used for this purpose based on the initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video-conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion of color and depth information is obtained in the second. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).
NASA Astrophysics Data System (ADS)
Huang, Caroline D.; Riess, Adam G.; Hoffmann, Samantha L.; Klein, Christopher; Bloom, Joshua; Yuan, Wenlong; Macri, Lucas M.; Jones, David O.; Whitelock, Patricia A.; Casertano, Stefano; Anderson, Richard I.
2018-04-01
We present year-long, near-infrared (NIR) Hubble Space Telescope (HST) WFC3 observations of Mira variables in the water megamaser host galaxy NGC 4258. Miras are asymptotic giant branch variables that can be divided into oxygen- (O-) and carbon- (C-) rich subclasses. Oxygen-rich Miras follow a tight (scatter ∼0.14 mag) period–luminosity relation (PLR) in the NIR and can be used to measure extragalactic distances. The water megamaser in NGC 4258 gives a geometric distance to the galaxy accurate to 2.6% that can serve to calibrate the Mira PLR. We develop criteria for detecting and classifying O-rich Miras with optical and NIR data as well as NIR data alone. In total, we discover 438 Mira candidates that we classify with high confidence as O-rich. Our most stringent criteria produce a sample of 139 Mira candidates that we use to measure a PLR. We use the OGLE-III sample of O-rich Miras in the Large Magellanic Cloud to obtain a relative distance modulus, μ 4258 ‑ μ LMC = 10.95 ± 0.01 (statistical) ±0.06 (systematic) mag, that is statistically consistent with the relative distance determined using Cepheids. These results demonstrate the feasibility of discovering and characterizing Miras using the NIR with the HST and the upcoming James Webb Space Telescope and using those Miras to measure extragalactic distances and determine the Hubble constant.
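The quoted relative distance modulus converts to a physical distance through the standard relation d = 10^(μ/5 + 1) pc, once an absolute modulus for the LMC is adopted. A minimal sketch; the LMC modulus of 18.477 mag used below is an assumption (a commonly adopted eclipsing-binary value), not a number from the abstract:

```python
def distance_mpc(mu):
    """Convert a distance modulus mu = m - M (mag) to distance in megaparsecs."""
    return 10 ** (mu / 5 + 1) / 1e6   # 10^(mu/5 + 1) pc, converted to Mpc

mu_lmc = 18.477               # assumed LMC distance modulus (not from the abstract)
mu_4258 = 10.95 + mu_lmc      # relative modulus quoted in the abstract
d = distance_mpc(mu_4258)     # distance to NGC 4258 in Mpc
```

The result lands near 7.7 Mpc, consistent with the ~2.6%-accurate geometric maser distance that anchors the Mira period-luminosity calibration.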
Liu, Haibo; Li, Qian-shu; Xie, Yaoming; King, R Bruce; Schaefer, Henry F
2010-08-12
The triple-decker sandwich compound trans-Cp(2)V(2)(eta(6):eta(6)-mu-C(6)H(6)) has been synthesized, as well as "slipped" sandwich compounds of the type trans-Cp(2)Co(2)(eta(4):eta(4)-mu-arene) and the cis-Cp(2)Fe(2)(eta(4):eta(4)-mu-C(6)R(6)) derivatives with an Fe-Fe bond (Cp = eta(5)-cyclopentadienyl). Theoretical studies show that the symmetrical triple-decker sandwich structures trans-Cp(2)M(2)(eta(6):eta(6)-mu-C(6)H(6)) are the global minima for M = Ti, V, and Mn but lie approximately 10 kcal/mol above the global minimum for M = Cr. The nonbonding M···M distances and spin states in these triple-decker sandwich compounds can be related to the occupancies of the frontier bonding molecular orbitals. The global minimum for the chromium derivative is a singlet spin state cis-Cp(2)Cr(2)(eta(4):eta(4)-mu-C(6)H(6)) structure with a very short Cr-Cr distance of 2.06 Å, suggesting a formal quadruple bond. A triplet state cis-Cp(2)Cr(2)(eta(4):eta(4)-mu-C(6)H(6)) structure with a predicted Cr≡Cr distance of 2.26 Å lies only approximately 3 kcal/mol above this global minimum. For the later transition metals the global minima are predicted to be cis-Cp(2)M(2)(eta(6):eta(6)-mu-C(6)H(6)) structures with a metal-metal bond, rather than triple-decker sandwiches. These include singlet cis-Cp(2)Fe(2)(eta(4):eta(4)-mu-C(6)H(6)) with a predicted Fe=Fe double bond distance of 2.43 Å, singlet cis-Cp(2)Co(2)(eta(3):eta(3)-mu-C(6)H(6)) with a predicted Co-Co single bond distance of 2.59 Å, and triplet cis-Cp(2)Ni(2)(eta(3):eta(3)-mu-C(6)H(6)) with a predicted Ni-Ni distance of 2.71 Å.
A tri-fold hybrid classification approach for diagnostics with unexampled faulty states
NASA Astrophysics Data System (ADS)
Tamilselvan, Prasanna; Wang, Pingfeng
2015-01-01
System health diagnostics provides diversified benefits such as improved safety, improved reliability and reduced costs for the operation and maintenance of engineered systems. Successful health diagnostics requires knowledge of system failures. However, with increasing system complexity, it is extraordinarily difficult to test a system so thoroughly that all potential faulty states are realized and studied at the product testing stage. Thus, real-time health diagnostics requires automatic detection of unexampled system faulty states from sensory data, to avoid sudden catastrophic system failures. This paper presents a trifold hybrid classification (THC) approach for structural health diagnosis with unexampled health states (UHS), which comprises preliminary UHS identification using a new thresholded Mahalanobis distance (TMD) classifier, UHS diagnostics using a two-class support vector machine (SVM) classifier, and exampled health state diagnostics using a multi-class SVM classifier. The proposed THC approach, which takes advantage of both TMD- and SVM-based classification techniques, is able to identify and isolate unexampled faulty states by interactively detecting the deviation of sensory data from the exampled health states and forming new ones autonomously. The proposed THC approach is further extended to a generic framework for health diagnostics problems with unexampled faulty states and demonstrated with health diagnostics case studies for power transformers and rolling bearings.
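The core of the TMD step is that a sample far from every known class, in Mahalanobis distance, is flagged as a potential unexampled state. A minimal sketch of that idea for one exampled class; the threshold, features, and covariance below are hypothetical, and the paper's exact thresholding rule may differ:

```python
import numpy as np

def mahalanobis_unexampled(x, mean, cov, threshold):
    """Flag a sample as an unexampled health state when its Mahalanobis
    distance from a known class's distribution exceeds the threshold."""
    diff = x - mean
    d = np.sqrt(diff @ np.linalg.inv(cov) @ diff)
    return d > threshold, d

# Hypothetical 2-D sensory features for one exampled health state
mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.3],
                [0.3, 1.0]])
is_new, d = mahalanobis_unexampled(np.array([4.0, 4.0]), mean, cov, threshold=3.0)
```

In the full approach this check runs against every exampled class; only a sample distant from all of them is passed to the two-class SVM stage as a UHS candidate.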
A novel edge-preserving nonnegative matrix factorization method for spectral unmixing
NASA Astrophysics Data System (ADS)
Bao, Wenxing; Ma, Ruishi
2015-12-01
Spectral unmixing is one of the key techniques for identifying and classifying materials in hyperspectral image processing. A novel robust spectral unmixing method based on nonnegative matrix factorization (NMF) is presented in this paper, which uses an edge-preserving function as the hypersurface cost function to be minimized in the factorization. To minimize the hypersurface cost function, we construct updating functions for the end-member signature matrix and the abundance fractions, which are updated alternately. For evaluation purposes, both synthetic and real data are used: the synthetic data are built from end-members in the USGS digital spectral library, and the AVIRIS Cuprite dataset serves as real data. The spectral angle distance (SAD) and abundance angle distance (AAD) are used to assess the performance of the proposed method. The experimental results show that this method obtains better accuracy for spectral unmixing than existing methods.
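The SAD metric used for evaluation is the angle between a reference end-member spectrum and its estimate; because it depends only on direction, it is insensitive to overall scaling of the spectrum. A minimal sketch (the example spectrum is illustrative, not from the USGS library):

```python
import numpy as np

def spectral_angle_distance(s, s_hat):
    """Angle in radians between a reference spectrum and its estimate;
    0 indicates a perfect (up to scale) match."""
    cos = np.dot(s, s_hat) / (np.linalg.norm(s) * np.linalg.norm(s_hat))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards rounding error

# A scaled copy of a spectrum has SAD ~ 0: the metric is scale-invariant
s = np.array([0.2, 0.5, 0.9, 0.4])
sad = spectral_angle_distance(s, 2.0 * s)
```

AAD is defined analogously on the abundance vectors, so together the two metrics score both the recovered signatures and the recovered mixing fractions.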
Probabilistic Multi-Person Tracking Using Dynamic Bayes Networks
NASA Astrophysics Data System (ADS)
Klinger, T.; Rottensteiner, F.; Heipke, C.
2015-08-01
Tracking-by-detection is a widely used practice in recent tracking systems. These usually rely on independent single-frame detections that are handled as observations in a recursive estimation framework. If these observations are imprecise, the generated trajectory is prone to being updated towards a wrong position. In contrast to existing methods, our approach uses a Dynamic Bayes Network in which the state vector of a recursive Bayes filter and the location of the tracked object in the image are modelled as unknowns. These unknowns are estimated in a probabilistic framework that takes into account a dynamic model and a state-of-the-art pedestrian detector and classifier. The classifier is based on the Random Forest algorithm and can be trained incrementally, so that new training samples can be incorporated at runtime. This allows the classifier to adapt to the changing appearance of a target and to unlearn outdated features. The approach is evaluated on a publicly available benchmark. The results confirm that our approach is well suited for tracking pedestrians over long distances while achieving comparatively good geometric accuracy.
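The recursive Bayes filter underlying such trackers fuses a predicted state with a noisy detection, weighting each by its uncertainty, so an imprecise detection (large measurement variance) moves the track only slightly. A minimal scalar Kalman-style sketch of that measurement update, not the paper's full Dynamic Bayes Network:

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update: predicted state x with
    variance P is fused with a detection z of measurement variance R."""
    K = P / (P + R)            # Kalman gain: trust in the detection
    x_new = x + K * (z - x)    # corrected state
    P_new = (1 - K) * P        # reduced uncertainty after the update
    return x_new, P_new

# An imprecise detection (R >> P) shifts the predicted position only slightly
kalman_update(x=100.0, P=4.0, z=110.0, R=36.0)  # -> (101.0, 3.6)
```

Modelling the image location itself as an additional unknown, as the paper does, lets the filter correct the observation rather than blindly following it.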
Rate-compatible protograph LDPC code families with linear minimum distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.
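A protograph defines an LDPC code family by lifting: each edge of the small base graph is expanded into an N x N circulant permutation block, and each absent edge into a zero block, yielding the full parity-check matrix. A minimal sketch of that construction; the base matrix is hypothetical and the circulant shifts are random, not the optimized choices of the patented methods:

```python
import numpy as np

def lift_protograph(base, N):
    """Expand a 0/1 protograph base matrix into a full LDPC parity-check
    matrix: each 1 becomes a random N x N circulant permutation block,
    each 0 an N x N zero block."""
    rng = np.random.default_rng(0)
    rows = []
    for r in base:
        blocks = [np.roll(np.eye(N, dtype=int), rng.integers(N), axis=1)
                  if b else np.zeros((N, N), dtype=int)
                  for b in r]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

base = np.array([[1, 1, 0, 1],    # hypothetical 2 x 4 protograph
                 [0, 1, 1, 1]])
H = lift_protograph(base, N=4)    # 8 x 16 parity-check matrix
```

Rate compatibility as described above then comes from designating some columns (variable nodes) of the same base matrix as punctured (non-transmitted) or set to zero, rather than changing the graph structure.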